\section{Introduction}
Standard learning algorithms assume that each training example is
\emph{fully observed} and does not suffer any corruption. However, in
many real-world scenarios, training and test data often undergo some
form of corruption. We consider settings where not all features are
observed in every example, allowing for both adversarial and
stochastic feature deletion models. Such situations arise, for
example, in medical diagnosis---predictions are often desired using
only a partial array of medical measurements due to time or cost
constraints. Survey data are often incomplete due to partial
non-response of participants. Vision tasks routinely need to deal with
partially corrupted or occluded images. Data collected through
multiple sensors, such as multiple cameras, are often subject to the
sudden failure of a subset of \mbox{the sensors.}
In this work, we design and analyze learning algorithms that address
these examples of learning with missing features. We consider a
supervised batch setting, where examples and missing features are
drawn according to a fixed and unknown distribution. We design a
learning algorithm which is guaranteed to globally optimize an
intuitive objective function and which also exhibits a generalization
error on the order of $O(\sqrt{d/m})$, where $d$ is the data
dimension and $m$ is the number of examples.
The algorithms are also tested empirically across several publicly
available datasets subject to various artificial and natural types of
feature corruption. We find encouraging results, indicating the
efficacy of the suggested algorithm and its superior performance over
baseline methods.
Learning with missing or corrupted features has a long history in
statistics \cite{little_rubin,dempster_em}, and has received recent
attention in machine
learning~\cite{dekel_corrupted,marlin,cesa_efficient,chechik_struct}.
Imputation methods (see~\cite{little_rubin,marlin,dempster_em}) fill
in missing values, generally independent of any learning algorithm,
after which standard algorithms can be applied to the data. Better
performance might be expected, though, by learning the imputation and
prediction functions simultaneously.
Our work is different from settings where features are missing
only at test time~\cite{dekel_corrupted,Globerson2006nightmare},
settings that give access to noisy versions of all the
features~\cite{cesa-noise} or settings where observed features are
picked by the algorithm~\cite{cesa_efficient}.
In Section \ref{sec:setup} we formally introduce the general setting
we consider. Then, in Sections \ref{sec:alg} and \ref{sec:theory} we
detail the proposed algorithm and related theoretical results
respectively. Finally, empirical results are presented in
Section~\ref{sec:empirical}.
\section{The Setting}
\label{sec:setup}
\subsection{Corruption notation}
In our setting it will be useful to denote a training instance $\mat{x}
\in \mathbb{R}^d$ and label $y$, as well as a corruption vector
$\mat{z} \in \set{0,1}^d$, where
\begin{equation*}
[\mat{z}]_i = \left\{\begin{array}{cl} 0&\mbox{if feature $i$ is not
observed,}\\ 1&\mbox{if feature $i$ is observed.}\end{array}\right.
\end{equation*}
The algorithm we propose is a modification of ridge regression and
thus we will consider the regression setting with $y \in \mathbb{R}$.
However, we note that similar extensions and analyses also easily
apply to classification algorithms such as SVMs. The learning
algorithm is given the corruption vector $\mat{z}$ as well as the
corrupted instance,
\begin{equation*}
\widetilde \mat{x} = \mat{x} \circ \mat{z} \,,
\end{equation*}
where $\circ$ denotes the component-wise product between two vectors.
Note that the training algorithm is never given access to $\mat{x}$,
however it is given $\mat{z}$, and so has knowledge of exactly which
coordinates have been corrupted.
\subsection{Supervised batch setting}
We examine the setting where i.i.d.\ examples $(\mat{x}_i, \mat{z}_i, y_i)$ are drawn
according to a fixed but unknown distribution. Note that although
each example is drawn i.i.d.\, i.e. $(\mat{x}_i, \mat{z}_i, y_i)$ and $(\mat{x}_j,
\mat{z}_j, y_j)$ are independent for $i \neq j$, the vectors $\mat{x}_i$ and
$\mat{z}_i$ are not necessarily independent. The goal is to choose a
hypothesis $h$ from some bounded set $\mathcal{H}$ that minimizes the expected
error, with respect to an appropriate loss function $\ell$:
\begin{equation*}
\min_{h \in \mathcal{H}} \mat{E}_{\mat{x}, \mat{z}, y}[\ell(h(\mat{x}, \mat{z}), y)] \,.
\end{equation*}
The hypotheses $h$ we consider in this scenario will be inspired by
imputation-based methods prevalent in statistics literature used to
address the problem of missing features~\cite{little_rubin}. An
imputation mapping is a function used to fill in unobserved features
using the observed features, after which the \emph{completed} examples
can be used for prediction. In particular, if we consider an
imputation function ${\boldsymbol \phi}: \mathbb{R}^d \times \set{0,1}^d \to \mathbb{R}^d$,
which is meant to fill missing feature values, and a linear predictor
$\mat{w} \in \mathbb{R}^d$, we can parameterize a hypothesis with these two
functions: $h_{{\boldsymbol \phi}, \mat{w}}(\widetilde \mat{x}, \mat{z}) = \dprod{\mat{w}}{{\boldsymbol \phi}(\widetilde \mat{x},
\mat{z})}$.
It is clear that the multiplicative interaction between $\mat{w}$ and
${\boldsymbol \phi}$ will make most natural formulations non-convex, and we
elaborate more on this in Section~\ref{sec:alg}. In the
i.i.d.\ setting, the natural quantity of interest is the generalization
error of our learned hypothesis. We provide a Rademacher complexity
bound on the class of $\mat{w},{\boldsymbol \phi}$ pairs we use, thereby showing
that any hypothesis with a small empirical error will also have a
small expected loss. The specific class of hypotheses and details of
the bound are presented in Section \ref{sec:alg}.
\section{Imputation Based Algorithm}
\label{sec:alg}
Here we introduce the particular class of imputation mappings we will
be using, which are of the following form
\begin{equation}
{\boldsymbol \phi}_{\mat{M}}(\widetilde \mat{x}, \mat{z}) = \widetilde \mat{x} + \diag(1-\mat{z})\mat{M}^\top \widetilde \mat{x}\,.
\label{eqn:linearimp}
\end{equation}
Thus we retain all the observed entries in the vector $\widetilde \mat{x}$, while
the missing features are predicted using a linear combination of the
observed features, where the $i_{th}$ column of $\mat{M}$ encodes the
averaging weights for the $i_{th}$ feature. Such a linear prediction
framework for features is natural. For instance, when the data vectors
$\mat{x}$ are Gaussian, the conditional expectation of any feature given
the other features is a linear function. The predictions are now
made using the dot product
\begin{equation*}
\dprod{\mat{w}}{{\boldsymbol \phi}(\widetilde \mat{x},\mat{z})} = \dprod{\mat{w}}{\widetilde \mat{x}} +
\dprod{\mat{w}}{\diag(1-\mat{z})\mat{M}^\top\widetilde \mat{x}},
\end{equation*}
where we would like to estimate $\mat{w}, \mat{M}$ based on the data
samples. From a quick inspection of the resulting learning problem,
it becomes clear that optimizing over such a hypothesis class
leads to a non-convex problem. The convexity of the loss plays a
critical role in the regret framework of online learning, which is why
we restrict ourselves to a batch i.i.d.\ setting here.
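For concreteness, a minimal NumPy sketch of the
mapping~\eqref{eqn:linearimp} and the resulting prediction (the helper
names and all data below are placeholders):
\begin{verbatim}
import numpy as np

def impute(x_tilde, z, M):
    # phi_M(x~, z) = x~ + diag(1 - z) M^T x~ : observed entries are kept,
    # missing entries are filled with a linear combination of the
    # observed ones (column i of M holds the weights for feature i).
    return x_tilde + (1 - z) * (M.T @ x_tilde)

def predict(w, x_tilde, z, M):
    # <w, phi_M(x~, z)>
    return w @ impute(x_tilde, z, M)

d = 4
rng = np.random.default_rng(0)
x, w, M = rng.normal(size=d), rng.normal(size=d), rng.normal(size=(d, d))
z = np.array([1.0, 0.0, 1.0, 1.0])   # the second feature is missing
x_tilde = x * z                      # corrupted instance
print(predict(w, x_tilde, z, M))
\end{verbatim}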
In the sequel we will provide a convex relaxation to the learning
problem resulting from the
parametrization~\eqref{eqn:linearimp}. While we can make this
relaxation for natural loss functions in both classification and
regression scenarios, we restrict ourselves to a linear regression
setting here as the presentation for that example is simpler due to the
existence of a closed form solution for the ridge regression
problem.
In what follows, we consider only the corrupted data and thus simply
denote corrupted examples as $\mat{x}_i$. Let $\mat{X}$ denote the matrix with
$i_{th}$ row equal to $\mat{x}_i$ and similarly define $\mat{Z}$ as the matrix
with $i_{th}$ row equal to $\mat{z}_i$. It will also be useful to define
$\overline{\mat{Z}} = \mat{1}\mat{1}^\top - \mat{Z}$ and $\overline{\mat{z}}_i = \mat{1} - \mat{z}_i$ and finally let $\overline{\mat{Z}}_i
= \diag(\overline{\mat{z}}_i)$.
\subsection{Imputed Ridge Regression (IRR)}
\label{sec:imputation-alg}
In this section we will consider a modified version of the ridge
regression (RR) algorithm \cite{saunders}, robust to missing features.
The overall optimization problem we are interested in is as follows,
\begin{align}
\hspace{-0.28cm} \min_{\{\mat{w},\mat{M}:\|\mat{M}\|_F \leq \gamma\}} \!
\frac{\lambda}{2} \| \mat{w} \|^2 \!+\! \frac{1}{m} \sum_{i=1}^m \! \big(y_i \!-\!
\mat{w}^\top \!(\mat{x}_i \!+\! \overline{\mat{Z}}_i \mat{M}^\top\mat{x}_i)\big)^2
\label{irr_primal}
\end{align}
where the hypothesis $\mat{w}$ and imputation matrix $\mat{M}$ are
simultaneously optimized. In order to bound the size of the
hypothesis set, we have introduced the constraint $\|\mat{M}\|_F^2 \leq
\gamma^2$ that bounds the Frobenius norm of the imputation matrix.
The global optimum of the problem as presented in (\ref{irr_primal})
cannot be easily found as it is not jointly convex in $\mat{w}$ and
$\mat{M}$. Thus, we next present a convex relaxation of the
formulation~\eqref{irr_primal} which can be solved efficiently.
Before we can describe the convex relaxation, we need one more piece
of notation. Given a matrix $\mat{M} \in \mathbb{R}^{d\times d}$ and a tensor $\mat{N} \in \mathbb{R}^{d\times
d\times d}$ (with the $k_{th}$ ``slice'' denoted $\mat{N}_k \in
\mathbb{R}^{d\times d}$), we define the matrix $\mat{K}_{\mat{M}\mat{N}} \in \mathbb{R}^{m\times
m}$
\begin{equation}
[\mat{K}_{\mat{M}\mat{N}}]_{i,j} = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k \mat{x}_i^\top \mat{N}_k
\mat{x}_j \,.
\label{eqn:relaxedkernel}
\end{equation}
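Since $\mat{K}_{\mat{M}\mat{N}}$ is linear in $\mat{M}$ and in each slice $\mat{N}_k$, it is
straightforward to form for fixed numerical values; a vectorized NumPy
sketch of~\eqref{eqn:relaxedkernel} (the shapes and helper name are our
own conventions):
\begin{verbatim}
import numpy as np

def K_MN(X, Zbar, M, N):
    # X: (m, d) rows x_i;  Zbar: (m, d) rows zbar_i = 1 - z_i
    # M: (d, d);  N: (d, d, d) with N[k] playing the role of N_k
    T = np.multiply(X @ M, Zbar) @ X.T     # T[i, j] = x_i^T M Zbar_i x_j
    F = sum(np.outer(Zbar[:, k], Zbar[:, k]) * (X @ N[k] @ X.T)
            for k in range(X.shape[1]))    # fourth term of the definition
    return X @ X.T + T + T.T + F           # T.T[i, j] = x_i^T Zbar_j M^T x_j
\end{verbatim}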
The following theorem provides a convex relaxation of the
problem~\eqref{irr_primal} that we refer to as Imputed Ridge Regression
(IRR) and which includes a strictly larger hypothesis class than the $(\mat{w},
\mat{M})$ pairs with which we began.
\begin{theorem}
\label{prop:irr_relaxed}
The following semi-definite programming optimization problem provides
a convex relaxation to the non-convex problem (\ref{irr_primal}):
\begin{align}
\label{irr_relaxed}
& \min_{\substack{t \\ \mat{M}:\|\mat{M}\|^2_F\leq\gamma^2 \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
t \,,
\quad \mathrm{s.t.}
~~ \left[
\begin{array}{cc}
(\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I}) & \mat{y} \\
\mat{y}^\top & t
\end{array}
\right] \succeq 0, ~~
\mat{K}_{\mat{M}\mat{N}} \succeq 0 \,.
\end{align}
\label{prop:irr_relaxation}
\end{theorem}
\begin{proof}
First, we rewrite the imputed ridge regression problem in its dual
formulation.
\begin{align}
\label{eqn:partial_dual}
\min_{\mat{M}} \max_{\boldsymbol \alpha} ~ &
2 \sum_{i=1}^m \alpha_i y_i -
\sum_{i,j=1}^m \! \alpha_i \alpha_j \big( (\mat{x}_i + \overline{\mat{Z}}_i
\mat{M}^\top\mat{x}_i)^\top (\mat{x}_j + \overline{\mat{Z}}_j \mat{M}^\top\mat{x}_j) + m \lambda [\mat{I}]_{i,j} \big) \\
\mathrm{s.t.} ~& \|\mat{M}\|_F^2 \leq \gamma^2 \nonumber
\end{align}
The inner maximization problem is concave in ${\boldsymbol \alpha}$ and the
optimal solution for any fixed $\mat{M}$ is found via the standard closed
form solution for ridge regression:
\begin{equation*}
{\boldsymbol \alpha}^* = (\underbrace{(\mat{X} + \overline{\mat{Z}} \circ \mat{X}\mat{M}) (\mat{X} + \overline{\mat{Z}} \circ
\mat{X}\mat{M})^\top}_{\mat{K}_\mat{M}} + m \lambda \mat{I})^{-1} \mat{y} \,,
\end{equation*}
where $\circ$ denotes the component-wise (Hadamard) product between
matrices and $\mat{K}_\mat{M}$ will be used to denote the Gram matrix containing
dot-products between imputed training instances. Plugging this
solution into the minimax problem results in the following matrix
fractional minimization problem,
\begin{equation*}
\min_\mat{M} ~ \mat{y}^\top (\mat{K}_\mat{M} + m \lambda \mat{I})^{-1} \mat{y},
~~ \mathrm{s.t.} ~ \|\mat{M}\|_F^2 \leq \gamma^2 \,.
\end{equation*}
This problem is still not convex in $\mat{M}$ due to the quadratic terms
that appear in $\mat{K}_\mat{M}$. The main idea for the convex relaxation will be
to introduce new variables $[\mat{N}_k]_{i,j}$ which substitute the
quadratic terms $[\mat{M}]_{i,k} [\mat{M}]_{j,k}$, resulting in a matrix
$\mat{K}_{\mat{M}\mat{N}}$ that is linear in terms of the optimization variables $\mat{M}$
and $\mat{N}_k$:
\begin{align*}
[\mat{K}_{\mat{M}}]_{i,j} & = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
+ \underbrace{\mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j}_{
\sum_{r,s,k=1}^d [\mat{x}_i]_r [\mat{x}_j]_s [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k [\mat{M}]_{r,k}
[\mat{M}]_{s,k}} \\
[\mat{K}_{\mat{M}\mat{N}}]_{i,j} & = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
+ \underbrace{\sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k \mat{x}_i^\top \mat{N}_k
\mat{x}_j}_{\sum_{r,s,k=1}^d [\mat{x}_i]_r [\mat{x}_j]_s [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k
[\mat{N}_k]_{r,s}}
\end{align*}
Note that the matrix $\mat{K}_{\mat{M}\mat{N}}$ no longer necessarily corresponds to
a Gram matrix and that $(\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I})$ may no longer be
positive semi-definite (which is required for the convexity of a
matrix fractional objective function). Thus, we add an additional
explicit positive semi-definiteness constraint resulting in the
following optimization problem,
\begin{align*}
\min_{\substack{t \\ \mat{M}:\|\mat{M}\|^2_F\leq\gamma^2 \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
t,
\qquad \mathrm{s.t.} ~~
t - \mat{y}^\top (\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I})^{-1} \mat{y} \geq 0,
~ \mat{K}_{\mat{M}\mat{N}} \succeq 0 \,,
\end{align*}
where we have introduced the dummy variable $t$ and also
constrained the norm of the new $[\mat{N}_k]_{i,j}$ variables. The choice
of the upper bound is made with the knowledge that $[\mat{N}_k]_{i,j}$
replaces the variables $[\mat{M}]_{i,k} [\mat{M}]_{j,k}$ and that the bound
$\|\mat{M}\|_F \leq \gamma$ implies $\sum_{i,j,k=1}^d [\mat{M}]_{i,k}^2
[\mat{M}]_{j,k}^2 = \sum_{k=1}^d (\sum_{i=1}^d [\mat{M}]_{i,k}^2)^2
\leq (\sum_{i,k=1}^d [\mat{M}]_{i,k}^2)^2 \leq \gamma^4$.
The constraint involving the dummy variable $t$ is a Schur complement
and can be replaced with an equivalent positive semi-definiteness
constraint, which results in a standard form semidefinite program
and completes the proof.
\hfill\rule{7pt}{7pt}
\end{proof}
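For small $m$ and $d$, the resulting SDP can also be handed directly to
an off-the-shelf modeling tool. The following CVXPY sketch of
problem~\eqref{irr_relaxed} is illustrative only and assumes the
conventions of the previous listing; the experiments in
Section~\ref{sec:empirical} instead use a SILP approximation:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def irr_relaxed(X, Zbar, y, lam, gamma):
    m, d = X.shape
    M = cp.Variable((d, d))
    N = [cp.Variable((d, d)) for _ in range(d)]
    t = cp.Variable()
    # K_{MN} as defined above; linear in M and in each N_k.
    T = cp.multiply(X @ M, Zbar) @ X.T
    F = sum(cp.multiply(np.outer(Zbar[:, k], Zbar[:, k]), X @ N[k] @ X.T)
            for k in range(d))
    K = X @ X.T + T + T.T + F
    K = (K + K.T) / 2                      # symmetrize explicitly
    S = cp.bmat([[K + m * lam * np.eye(m), y.reshape(-1, 1)],
                 [y.reshape(1, -1), cp.reshape(t, (1, 1))]])
    cons = [cp.sum_squares(M) <= gamma ** 2,
            sum(cp.sum_squares(Nk) for Nk in N) <= gamma ** 4,
            S >> 0, K >> 0]
    cp.Problem(cp.Minimize(t), cons).solve(solver=cp.SCS)
    return M.value, np.stack([Nk.value for Nk in N]), t.value
\end{verbatim}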
One issue with relaxations is how to use the relaxed solution
to recover a good solution to the original
problem. In our case, this would correspond to finding a good $\mat{w}, \mat{M}$
pair for the primal problem~\eqref{irr_primal}. We bypass this step,
and instead directly define the prediction on any point $(\mat{x}_0,\mat{z}_0)$ as:
\begin{equation}
h(\mat{x}_0,\mat{z}_0) = \sum_{i=1}^m \alpha_i( \mat{x}_i^\top \mat{x}_0+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0).
\label{eqn:dualpredictor}
\end{equation}
Here, ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ are solutions to the saddle-point problem
\begin{align}
\label{eqn:irr_saddlepoint}
\min_{\substack{\mat{M}:\|\mat{M}\|_F\leq\gamma \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
\max_{\boldsymbol \alpha}
2 {\boldsymbol \alpha}^\top \mat{y} -
{\boldsymbol \alpha}^\top (\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I}) {\boldsymbol \alpha} \,,
\quad \mathrm{s.t.} ~~ \mat{K}_{\mat{M}\mat{N}} \succeq 0\,,
\end{align}
which is in fact equivalent to the one in
Theorem~\ref{prop:irr_relaxation} (simply introduce the relaxation
within equation~\eqref{eqn:partial_dual} instead). Note that the
original problem~\eqref{irr_primal} is in fact the same as
problem~\eqref{eqn:irr_saddlepoint} (or equivalently, the SDP given in
Theorem~\ref{prop:irr_relaxation}) but with the added constraint
$[\mat{N}_k]_{i,j} = [\mat{M}]_{i,k} [\mat{M}]_{j,k}$. Thus, the hypothesis class
used in the relaxation is a superset of the original class.
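Given solutions $\mat{M}, \mat{N}$, the optimal ${\boldsymbol \alpha}$ has the ridge closed
form ${\boldsymbol \alpha} = (\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I})^{-1} \mat{y}$ (see also the
next section), so the predictor~\eqref{eqn:dualpredictor} can be
evaluated directly; a NumPy sketch reusing the \texttt{K\_MN} helper
from the earlier listing:
\begin{verbatim}
import numpy as np

def irr_predict(x0, z0, X, Zbar, y, M, N, lam):
    m, d = X.shape
    alpha = np.linalg.solve(K_MN(X, Zbar, M, N) + m * lam * np.eye(m), y)
    zbar0 = 1.0 - z0
    k0 = (X @ x0                                  # x_i^T x_0
          + np.multiply(X @ M, Zbar) @ x0         # x_i^T M Zbar_i x_0
          + (X * zbar0) @ (M.T @ x0)              # x_i^T Zbar_0 M^T x_0
          + sum(zbar0[k] * Zbar[:, k] * (X @ N[k] @ x0)
                for k in range(d)))               # tensor term
    return alpha @ k0
\end{verbatim}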
In the next section, we tightly bound the Rademacher complexity of
this relaxed hypothesis class and precisely quantify the additional
complexity introduced due to the relaxation.
\section{Theoretical Analysis of IRR}
\label{sec:theory}
As mentioned in the previous section, we predict with a hypothesis of
the form~\eqref{eqn:dualpredictor} rather than going back to the primal
class indexed by $(\mat{w}, \mat{M})$ pairs. In this section, we would like to
show that the new hypothesis class parametrized by ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ is
not too rich for the purposes of learning. To do this, we give the
class of all possible hypotheses that can be the solutions to the dual
problem~\eqref{irr_relaxed} and then prove a Rademacher complexity
bound over that class.
The hypotheses induced by all ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ triples that are potential
solutions to~\eqref{irr_relaxed} lie in the following set
\begin{multline*}
\!\!\!\!\!\!\mathcal{H} = \Bigg\{ h(\mat{x}_0, \mat{z}_0) \mapsto \! \sum_{i=1}^m
\! \alpha_i(
\mat{x}_i^\top \mat{x}_0
+
\mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0)
\\ : \|\mat{M}\|_F \leq \gamma, \|\mat{N}\|_F \leq \gamma^2,
\|{\boldsymbol \alpha}\| \leq \frac{B}{\lambda \sqrt{m}}
\Bigg\}
\end{multline*}
The bound on $\|{\boldsymbol \alpha}\|$ holds implicitly in the optimization
problem (assuming the training labels are bounded, $\forall i, |y_i|
\leq B$). To see this, we note that the problem~\eqref{irr_relaxed}
is obtained from~\eqref{eqn:irr_saddlepoint} by using the closed-form
solution of the optimal ${\boldsymbol \alpha} = (\mat{K}_{\mat{M}\mat{N}} + m \lambda \mat{I})^{-1}
\mat{y}$. Then we can bound $\|{\boldsymbol \alpha}\| \leq \|\mat{y}\| / \lambda_{\min}(\mat{K}_{\mat{M}\mat{N}}
+ m \lambda \mat{I}) \leq \frac{B \sqrt{m}}{m \lambda} = \frac{B}{\lambda \sqrt{m}}$, where
$\lambda_{\min}(\mat{A})$ denotes the smallest eigenvalue of the matrix
$\mat{A}$. Note that in general there is no linear hypothesis $\mat{w}$ that
corresponds to the hypotheses in the relaxed class $\mathcal{H}$ and that we
are dealing with a strictly more general function class. However, the
following theorem demonstrates that the Rademacher complexity of this
function class is reasonably bounded in terms of the number of
training points $m$ and dimension $d$ and thereby still provides
provable generalization performance \cite{rademacher,bm_rademacher}.
Recall the Rademacher complexity of a class $\mathcal{H}$
\begin{equation}
\ensuremath{\mathfrak{R}}_m(\mathcal{H}) = \mat{E}_S\mat{E}_{{\boldsymbol \sigma}} \left[ \frac{1}{m} \sup_{h \in \mathcal{H}} \bigg|
\sum_{i=1}^m \sigma_i h(\mat{x}_i,\mat{z}_i) \bigg| \right] \,,
\end{equation} where the inner expectation is over independent Rademacher
random variables $(\sigma_1,\ldots,\sigma_m)$ and the outer one over
a sample $S = ((\mat{x}_1,\mat{z}_1),\ldots,(\mat{x}_m,\mat{z}_m))$.
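As an aside, this quantity is easy to estimate by Monte Carlo whenever
the supremum has a closed form; for instance, for the plain linear
class $\{\mat{x} \mapsto \dprod{\mat{w}}{\mat{x}} : \|\mat{w}\| \leq B\}$ one has
$\sup_{\mat{w}} |\sum_i \sigma_i \dprod{\mat{w}}{\mat{x}_i}| = B \|\sum_i \sigma_i
\mat{x}_i\|$, giving the short sketch below (illustrative only; it plays no
role in the proofs):
\begin{verbatim}
import numpy as np

def empirical_rademacher_linear(X, B, n_draws=2000, seed=0):
    # Monte Carlo estimate of the empirical Rademacher complexity of
    # {x -> <w, x> : ||w|| <= B} on the sample X (rows x_i).
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))
    return np.mean(B * np.linalg.norm(sigma @ X, axis=1)) / m
\end{verbatim}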
\begin{theorem}
\label{thm:rademacher}
If we assume a bounded regression problem $\forall y, ~|y| \leq B$
and $\forall \mat{x},~ \|\mat{x}\| \leq R$, then the Rademacher complexity of the
hypothesis set $\mathcal{H}$ is bounded as follows,
\begin{equation*}
\ensuremath{\mathfrak{R}}_m(\mathcal{H}) \leq \big(1 + \gamma + (\gamma + \gamma^2) \sqrt{d} \big) \frac{BR^2
}{\lambda \sqrt{m}} = O\Big(\sqrt{\frac{d}{m}}\Big) \,.
\end{equation*}
\end{theorem}
The proof is deferred to the appendix.
Theorem~\ref{thm:rademacher} allows us to control the gap between
empirical and expected risks using standard Rademacher
complexity results. Theorem 8 of~\cite{bm_rademacher} immediately
provides the following corollary.
\begin{corollary} \label{corr:unif_dev}
Under the conditions of Theorem~\ref{thm:rademacher}, for any $0 < \delta \leq 1$, with
probability at least $1-\delta$ over samples of size $m$, every $h
\in \mathcal{H}$ satisfies
\begin{multline*}
\mat{E}[(y - h(\widetilde \mat{x},\mat{z}))^2] - \frac{1}{m}\sum_{t=1}^m(y_t -
h(\widetilde \mat{x}_t,\mat{z}_t))^2 \\
\leq \frac{BR^2(1+\gamma)^2}{\lambda}
\left(\frac{BR^2(1+\gamma)^2 }{\lambda} \sqrt{\frac{d}{m}}
+ \sqrt{\frac{8\ln(2/\delta)}{m}}\right).
\end{multline*}
\end{corollary}
Thus, the difference between the empirical error and true error of the
hypothesis decreases like $O(1/\sqrt{m})$ assuming that the number of
features $d$ and the trade-off parameter $\lambda$ are kept constant.
We note that the constants $B$, $R$ and $\lambda$ all appear in the
standard setting, the only addition here is the dependence on
$\sqrt{d}$. Also, as remarked earlier, we note that an analogous
analysis and relaxation is possible for classification using the
hinge-loss.
\section{Empirical Results}
\label{sec:empirical}
This section presents an empirical evaluation of the Imputed Ridge
Regression algorithm of Section~\ref{sec:imputation-alg}. We use the
baseline methods \emph{zero-imputation} and \emph{mean-imputation},
where the missing entries are replaced with zeros or with the mean
estimated from the observed values of each feature, respectively. As a third
baseline, we also present a few preliminary results comparing against
a method we call \emph{independent-imputation}. This method selects an
imputation matrix $\mat{M}$, for use with \eqref{eqn:linearimp},
independently of the learning algorithm. Instead, it selects the
$i_{th}$ column of $\mat{M}$ as the best linear predictor of the $i_{th}$
feature given the other features:
\begin{equation*}
[\mat{M}]_{:,i} = \argmin_{\mat{v}} \sum_{k \in \mathcal{X}_i} ([\mat{x}_k]_i - \sum_{j \neq i}
[\mat{x}_k]_j [\mat{v}]_j)^2 \,,
\end{equation*}
where $\mathcal{X}_i$ is the set of training examples that have the $i_{th}$
feature present.
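A minimal sketch of this baseline (our own illustration; like the
formula above, it ignores missingness among the predictor features
when fitting):
\begin{verbatim}
import numpy as np

def independent_imputation_matrix(X, Z):
    # Column i of M is the least-squares predictor of feature i from
    # the remaining features, fit on examples with feature i observed.
    n, d = X.shape
    M = np.zeros((d, d))
    for i in range(d):
        rows = Z[:, i] == 1                 # the set X_i
        A = np.delete(X[rows], i, axis=1)   # all features except i
        b = X[rows, i]
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        M[np.arange(d) != i, i] = v         # [M]_{:,i}, with [M]_{i,i} = 0
    return M
\end{verbatim}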
Once the data is imputed, a standard ridge-regression algorithm is
used. As reference, we also show the performance of a standard
algorithm on uncorrupted data. The algorithms are evaluated on several
UCI repository datasets, summarized in Table~\ref{table:data}.
The {\tt thyroid} dataset includes naturally corrupted/missing data.
The {\tt optdigits} dataset is subjected to artificial corruption by
deleting a column of pixels, chosen uniformly at random from the 3
central columns of the image (each image contains 8 columns of pixels
total). The remainder of the datasets are subjected to two types of
artificial corruption: \emph{data-independent} or
\emph{data-dependent} corruption. In the case of data-independent
corruption a probability of corruption is chosen uniformly at random
from $[0,\beta]$ for each feature (where $\beta$ can be tuned to
induce more or less corruption), independent of the other features and
independent of data instance (i.e. $\mat{z}_t$ is independent of $\mat{x}_t$).
In the case of data-dependent corruption, a random threshold $\tau_k$
is chosen for each feature uniformly between $[0,1]$ as well as a sign
$\sigma_k$ chosen uniformly from $\{-1,1\}$. Then, if a feature
satisfies $\sigma_k([\mat{x}_i]_k - \tau_k) > 0$, it is deleted with
probability $\beta$, which again can be tuned to induce more or less
missing data. Table~\ref{table:data} shows the average fraction of
features remaining after being subject to each type of corruption;
that is, the total sum of features available over all instances
divided by the total number of features that would be available in the
corruption-free case.
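Both corruption processes are easy to reproduce, as in the sketch
below (illustrative; the data-dependent branch assumes features
rescaled to $[0,1]$ so that the thresholds $\tau_k$ are meaningful):
\begin{verbatim}
import numpy as np

def corruption_mask(X, beta, dependent=False, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if not dependent:
        # per-feature deletion probability drawn uniformly from [0, beta]
        p = rng.uniform(0, beta, size=d)
        return (rng.uniform(size=(n, d)) >= p).astype(float)
    # data-dependent: delete feature k w.p. beta when sigma_k(x_k - tau_k) > 0
    tau = rng.uniform(0, 1, size=d)
    sign = rng.choice([-1.0, 1.0], size=d)
    at_risk = sign * (X - tau) > 0
    return 1.0 - (at_risk & (rng.uniform(size=(n, d)) < beta))
\end{verbatim}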
\begin{table}
\begin{center}
\begin{tabular}{l|llcc}
dataset ~&~ $n$ ~&~ $d$ ~&~ $F_I$ ~&~ $F_D$ \\
\hline
\hline
{\tt abalone} ~&~ 4177 ~&~ 7 ~&~ $.62 \pm .08$ ~&~ $.61 \pm .12$ \\
{\tt housing} ~&~ 20640 ~&~ 8 ~&~ $.64 \pm .08$ ~&~ $.68 \pm .20$ \\
{\tt optdigits} ~&~ 5620 ~&~ 64 ~&~ $.88 \pm .00$ ~&~ $.88 \pm .00$ \\
{\tt park} ~&~ 3000 ~&~ 20 ~&~ $.58 \pm .06$ ~&~ $.61 \pm .08$ \\
{\tt thyroid} ~&~ 3163 ~&~ 5 ~&~ $.77 \pm .00$ ~&~ $.77 \pm .00$ \\
{\tt wine} ~&~ 6497 ~&~ 11 ~&~ $.63 \pm .10$ ~&~ $.69 \pm .13$ \\
\end{tabular}
\end{center}
\caption{
Size of each dataset ($n$), number of features ($d$), and the overall
fraction of remaining features in the training set after
data-independent ($F_I$) or data-dependent ($F_D$) corruption.}
\label{table:data}
\end{table}
The average error-rate along with one standard deviation is reported
over 5 trials, each with a random fold of $1000$ training points. The
remainder of the dataset in each trial is used as the test set. When
applicable, each trial is also subjected to a different random
corruption pattern. All scores are reported with respect to the best
performing parameters, $\lambda$ and $\gamma$, tuned across the values
$\{2^{-12}, 2^{-11}, \ldots, 2^{10}\}$.
In order to solve problem~(\ref{irr_relaxed}) we use a semi-infinite
linear program (SILP) to find an approximately optimal solution (see
e.g.~\cite{largeMKL,SDP-SILP} for examples).
In Table \ref{table:imputed-results} we compare the performance of the
IRR algorithm to zero- and mean-imputation, as well as to standard ridge
regression on the uncorrupted data. Here we see that IRR
provides improvement over zero-imputation in all cases and does at
least as well as mean-imputation when dealing with data-independent
corruption. In the case of data-dependent corruption, we also compare
against the independent imputation method. Here we see that IRR can
provide significant improvement while the other two-stage imputation
methods suffer. Figure \ref{fig:plots} shows more detailed results
for the {\tt abalone} dataset across different levels of corruption
and displays the consistent improvement which the IRR algorithm
provides. At the bottom of Table \ref{table:imputed-results} we
measure performance on {\tt thyroid}, which has naturally missing
values. Here again IRR performs significantly better than the
competitor methods.
\begin{table}
\begin{center}
\begin{tabular}{l|ccccc}
dataset & zero-imp & mean-imp & ind-imp & IRR & no corr \\
\hline
\hline
{\tt abalone} ~& $.199 \pm .004$ ~& $.187 \pm .003$ ~& -- ~& $.183 \pm .002$ ~& $.158 \pm .002$ \\
{\tt housing} ~& $.414 \pm .025$ ~& $.370 \pm .019$ ~& -- ~& $.373 \pm .019$ ~& $.288 \pm .001$ \\
{\tt park} ~& $.457 \pm .006$ ~& $.445 \pm .004$ ~& -- ~& $.451 \pm .004$ ~& $.422 \pm .004$ \\
{\tt wine} ~& $.280 \pm .006$ ~& $.268 \pm .009$ ~& -- ~& $.269 \pm .008$ ~& $.246 \pm .001$ \\
\hline
\hline
{\tt abalone} & $.181 \pm .007$ & $.180 \pm .006$ & $.183 \pm .012$ & $.167
\pm .011$ & $.159 \pm .004$ \\
{\tt housing} & $.392 \pm .068$ & $.400 \pm .064$ & $.363 \pm .041$ &
$.326 \pm .035$ & $.289 \pm .001$ \\
{\tt park} & $.425 \pm .013$ & $.444 \pm .008$ & $.423 \pm .015$ &
$.377 \pm .035$ & $.422 \pm .001$ \\
{\tt wine} & $.264 \pm .009$ & $.264 \pm .009$ & $.260 \pm .011$ & $.256 \pm
.011$ & $.247 \pm .001$\\
\hline \hline
{\tt thyroid} & $.562 \pm .004$ & $.531 \pm .005$ & $.528 \pm .003$ & $.521
\pm .004$ & --
\end{tabular}
\end{center}
\caption{
RMSE for various imputation methods across several datasets when subject
to data-independent (top) and data-dependent corruption (middle). The
bottom row shows performance on the {\tt thyroid} dataset with
naturally occurring missing features.
}
\label{table:imputed-results}
\end{table}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.46\columnwidth]{abalone_series_indepcorr.eps}
&
\includegraphics[width=0.46\columnwidth]{abalone_series.eps}
\end{tabular}
\end{center}
\caption{
RMSE on {\tt abalone} across varying amounts of independent (left)
and dependent corruption (right); fraction of features remaining
indicated on x-axis.}
\label{fig:plots}
\vspace{-0.2cm}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{l|cccc}
dataset ~&~ zero-imp ~&~ mean-imp ~&~ IRR ~&~ no corr \\
\hline
\hline
2-vs-all ~&~ $.352 \pm .003$ ~&~ $.351 \pm .004$ ~&~ $.346 \pm .002$ ~&~ $.321 \pm .003$ \\
3-vs-all ~&~ $.450 \pm .005$ ~&~ $.435 \pm .004$ ~&~ $.426 \pm .005$ ~&~ $.398 \pm .004$ \\
4-vs-all ~&~ $.372 \pm .003$ ~&~ $.363 \pm .002$ ~&~ $.364 \pm .003$ ~&~ $.345 \pm .002$ \\
6-vs-all ~&~ $.369 \pm .003$ ~&~ $.360 \pm .002$ ~&~ $.353 \pm .003$ ~&~ $.333 \pm .003$ \\
\end{tabular}
\end{center}
\caption{
RMSE (using $\{-1,+1\}$ regression labels)
for one-vs-all classification on \texttt{optdigits} subject to
column-based corruption. }
\label{table:impute-digits}
\end{table}
In Table \ref{table:impute-digits} we see that, with respect to the
column-corrupted {\tt optdigits} dataset, the IRR algorithm performs
significantly better than zero-imputation in all cases and is able to
outperform mean-imputation in three out of four tasks.
\section{Conclusion}
We have introduced a new algorithm to address the problem of learning
with missing features in an i.i.d.\ batch setting. The algorithm is
motivated by an intuitive construction, which initially results in a
non-convex optimization problem. We show an effective convex relaxation
and are able to bound the complexity of the resulting hypothesis
class, guaranteeing the generalization ability of the algorithm.
Empirically, we show significant results for the suggested IRR
algorithm, which indicate superior performance when compared to zero-,
mean- and independent-imputation methods.
One future direction for investigation includes analyzing more
complicated imputation functions. For example, assuming the data is
generated from a Gaussian distribution, the optimal imputation
function can be parameterized by the conditional covariance matrix
given the observed features. Understanding how to efficiently
parametrize such a family and provide a convex relaxation in such a
setting may allow for significant improvements in performance.
\section{Introduction}
Standard learning algorithms assume that each training example is
\emph{fully observed} and does not suffer any corruption. However, in
many real-life scenarios, training and test data often undergo some
form of corruption. We consider settings where not all features are
observed in every example, allowing for both adversarial and
stochastic feature deletion models. Such situations arise, for
example, in medical diagnosis---predictions are often desired using
only a partial array of medical measurements due to time or cost
constraints. Survey data are often incomplete due to partial
non-response of participants. Vision tasks routinely need to deal with
partially corrupted or occluded images. Data collected through
multiple sensors, such as multiple cameras, are often subject to the
sudden failure of a subset of \mbox{the sensors.}
In this work, we design and analyze learning algorithms that address
these examples of learning with missing features. The first setting
we consider is online learning where both examples and
missing features are chosen in an arbitrary, possibly adversarial,
fashion. We define a novel notion of regret suitable to the setting
and provide an algorithm which has a provably bounded regret on the
order of $O(\sqrt{T})$, where $T$ is the number of examples.
The second scenario is batch learning, where examples and
missing features are drawn according to a fixed and unknown
distribution. We design a learning algorithm which is guaranteed to
globally optimize an intuitive objective function and which also
exhibits a generalization error on the order of $O(\sqrt{d/T})$,
where $d$ is the data dimension.
Both algorithms are also explored empirically across several publicly
available datasets subject to various artificial and natural types of
feature corruption. We find very encouraging results, indicating the
efficacy of the suggested algorithms and their superior performance
over baseline methods.
Learning with missing or corrupted features has a long history in statistics
\cite{little_rubin,dempster_em}, and has received recent attention in
machine learning~\cite{dekel_corrupted, marlin, cesa_efficient,
chechik_struct}. Imputation methods (see~\cite{little_rubin, marlin,
dempster_em}) fill in missing values, generally independent
of any learning algorithm, after which standard algorithms can be
applied to the data. Better performance might be expected, though, by
learning the imputation and prediction functions
simultaneously. Previous works~\cite{marlin} address this issue
using EM, but can get stuck in local optima and do not have strong
theoretical guarantees. Our work is also different from settings where
features are missing only at test time~\cite{dekel_corrupted,
Globerson2006nightmare}, settings that give access to noisy versions
of all the features~\cite{cesa-noise} or settings where observed
features are picked by the algorithm~\cite{cesa_efficient}.
Section \ref{sec:setup} introduces both the general
online and batch settings. Sections \ref{sec:online}
and \ref{sec:batch} detail the algorithms and theoretical results
within the online and batch settings, respectively. Empirical results
are presented in Section~\ref{sec:empirical}.
\section{The Setting}
\label{sec:setup}
In our setting it will be useful to denote a training instance $\mat{x}_t
\in \mathbb{R}^d$ and label $y_t$, as well as a corruption vector
$\mat{z}_t \in \set{0,1}^d$, where
\begin{equation*}
[\mat{z}_t]_i = \left\{\begin{array}{cl} 0&\mbox{if feature $i$ is not
observed,}\\ 1&\mbox{if feature $i$ is observed.}\end{array}\right.
\end{equation*}
We will discuss as specific examples both
classification problems where $y_t \in \{-1,1\}$ and regression
problems where $y_t \in \mathbb{R}$. The learning algorithm is given
the corruption vector $\mat{z}_t$ as well as the corrupted instance,
\begin{equation*}
\mat{x}_t' = \mat{x}_t \circ \mat{z}_t \,,
\end{equation*}
where $\circ$ denotes the component-wise product between two vectors.
Note that the training algorithm is never given access to $\mat{x}_t$,
however it is given $\mat{z}_t$, and so has knowledge of exactly which
coordinates have been corrupted.
The following subsections explain the online and batch settings
respectively, as well as the type of hypotheses that are considered
in each.
\subsection{Online learning with missing features}
\label{sec:setup-online}
In this setting, at each time-step $t$ the learning algorithm is
presented with an arbitrarily (possibly adversarially) chosen
instance $(\mat{x}_t', \mat{z}_t)$ and is expected to predict $y_t$. After
prediction, the label is revealed to the learner,
which can then update its hypothesis.
A natural question to ask is what happens if we simply ignore
the distinction between $\mat{x}'_t$ and $\mat{x}_t$ and just run an
online learning algorithm on this corrupted data. Indeed, doing so
would give a small bound on regret:
\begin{equation}
R(T,\ell) = \sum_{t=1}^T \ell(\dprod{\mat{w}_t}{\mat{x}_t'}, y_t) - \inf_{\mat{w}
\in \mathcal{W}} \sum_{t=1}^T \ell(\dprod{\mat{w}}{\mat{x}_t'}, y_t) \,,
\label{eqn:stdregret}
\end{equation}
with respect to a convex loss function $\ell$ and for any convex
compact subset $\mathcal{W} \subseteq \mathbb{R}^d$. However, any fixed weight
vector $\mat{w}$ in the second term might have a very large loss, making
the regret guarantee useless---both the learner and the comparator
have a large loss making the difference small. For instance, assume
one feature perfectly predicts the label, while another one only
predicts the label with 80\% accuracy, and $\ell$ is the quadratic
loss. It is easy to see that there is no fixed $\mat{w}$ that will perform
well on both examples where the first feature is observed and examples
where the first feature is missing but the second one is observed.
To address the above concerns, we consider using a linear
\emph{corruption-dependent hypothesis} which is permitted to change as
a function of the observed corruption $\mat{z}_t$. Specifically, given the
corrupted instance and corruption vector, the predictor uses a
function $\mat{w}_t(\cdot) : \{0,1\}^d \to \mathbb{R}^d$ to choose a weight
vector, and makes the
prediction $\widehat y_t = \dprod{\mat{w}_t(\mat{z}_t)}{\mat{x}_t'}$. In order to provide
theoretical guarantees, we will bound the following notion of regret,
\begin{multline}
\label{eqn:onlineregret}
\!\!\!\!\!\!R^z(T,\ell) \! = \!\!\! \sum_{t=1}^T \! \ell(\dprod{\mat{w}_t}{\mat{x}_t'},
y_t) - \!\! \inf_{\mat{w} \in \mathcal{W}} \! \sum_{t=1}^T \!
\ell(\dprod{\mat{w}(\mat{z}_t)}{\mat{x}_t'}, y_t),
\end{multline}
where it is implicit that $\mat{w}_t$ also depends on $\mat{z}_t$ and $\mathcal{W}$ now
consists of corruption-dependent hypotheses. Similar definitions of
regret have been studied in the setting of learning with side
information~\cite{CoverOr96, HazanMe2007}, but our special case admits
stronger results in terms of both upper and lower bounds.
In the most general case, we may consider $\mathcal{W}$ as the class of all
functions which map $\set{0,1}^d \to \mathbb{R}^d$; however, we show this can
lead to an intractable learning problem. This motivates the study of
interesting subsets of this most general function class. This is the main
focus of Section \ref{sec:online}.
\subsection{Batch learning with missing features}
\label{sec:setup-batch}
In the setup of batch learning with i.i.d.\ data, examples $(\mat{x}_t,
\mat{z}_t, y_t)$ are drawn according to a fixed but unknown distribution
and the goal is to choose a hypothesis that minimizes the expected
error, with respect to an appropriate loss function $\ell$: $\mat{E}_{\mat{x}_t,
\mat{z}_t, y_t}[\ell(h(\mat{x}_t, \mat{z}_t), y_t)]$.
The hypotheses $h$ we consider in this scenario will be inspired by
imputation-based methods prevalent in statistics literature used to
address the problem of missing features~\cite{little_rubin}. An
imputation mapping is a function used to fill in unobserved features
using the observed features, after which the \emph{completed} examples
can be used for prediction. In particular, if we consider an
imputation function ${\boldsymbol \phi}: \mathbb{R}^d \times \set{0,1}^d \to \mathbb{R}^d$,
which is meant to fill missing feature values, and a linear predictor
$\mat{w} \in \mathbb{R}^d$, we can parameterize a hypothesis with these two
functions: $h_{{\boldsymbol \phi}, \mat{w}}(\mat{x}'_t, \mat{z}_t) = \dprod{\mat{w}}{{\boldsymbol \phi}(\mat{x}'_t,
\mat{z}_t)}$.
It is clear that the multiplicative interaction between $\mat{w}$ and
${\boldsymbol \phi}$ will make most natural formulations non-convex, and we
elaborate more on this in Section~\ref{sec:batch}. In the
i.i.d.\ setting, the natural quantity of interest is the generalization
error of our learned hypothesis. We provide a Rademacher complexity
bound on the class of $\mat{w},{\boldsymbol \phi}$ pairs we use, thereby showing
that any hypothesis with a small empirical error will also have a
small expected loss. The specific class of hypotheses and details of
the bound are presented in Section \ref{sec:batch}. Furthermore, the
reason why an imputation-based hypothesis class is not analyzed
in the more general adversarial setting will also be explained in
\mbox{that section.}
\section{Online Corruption-Based Algorithm}
\label{sec:online}
In this section, we consider the class of \emph{corruption-dependent}
hypotheses defined in Section~\ref{sec:setup-online}. Recall the
definition of regret~(\ref{eqn:onlineregret}), which we wish to
control in this framework, and of the comparator class of functions $\mathcal{W}
\subseteq \set{0,1}^d \to \mathbb{R}^d$. It is clear that the function
class $\mathcal{W}$ is much richer than the comparator class in the
corruption-free scenario, where the best linear predictor is fixed for
all rounds. It is natural to ask if it is even possible to prove a
non-trivial regret bound over this richer comparator class $\mathcal{W}$. In
fact, the first result of our paper provides a lower bound on the
minimax regret when the comparator is allowed to pick arbitrary
mappings, i.e.\ the set $\mathcal{W}$ contains all mappings. The result is
stated in terms of the minimax regret under the loss function $\ell$,
following the usual (corruption-free) definition~\eqref{eqn:stdregret}:
\begin{equation*}
R^*(T,\ell) = \inf_{\mat{w}_1 \in
\mathcal{W}}\sup_{(\mat{x}_1,\mat{z}_1,y_1)}\cdots\inf_{\mat{w}_T \in
\mathcal{W}}\sup_{(\mat{x}_T,\mat{z}_T,y_T)} R(T,\ell)
\end{equation*}
\begin{proposition} \label{prop:lower-bound}
If $\mathcal{W}$ contains all mappings $\set{0,1}^d \to \mathbb{R}^d$, the minimax value of the
corruption-dependent regret for any loss function $\ell$ is lower bounded as
\begin{multline*}
\inf_{\mat{w}_1 \in \mathcal{W}}\sup_{(\mat{x}_1,\mat{z}_1,y_1)}\cdots\inf_{\mat{w}_T \in
\mathcal{W}}\sup_{(\mat{x}_T,\mat{z}_T,y_T)}
\!\!\! R^z(T,\ell) \\\qquad\qquad\qquad\qquad= \Omega\left( 2^{d/2}
R^*\left(\frac{T}{2^{d/2}},\ell\right)\right).
\end{multline*}
\end{proposition}
This proposition (the proof of which appears in the
appendix \cite{long_version}) shows that the minimax regret is lower
bounded by a term that is exponential in the dimensionality of the
learning problem. For most non-degenerate convex and Lipschitz losses,
$R^*(T,\ell) = \Omega(\sqrt{T})$ without further assumptions (see
e.g.~\cite{AbernethyABR2009minimax}) which yields a
$\Omega(2^{d/4}\sqrt{T})$ lower bound. The bound can be further
strengthened to $\Omega(2^{d/2}\sqrt{T})$ for linear losses which is
unimprovable since it is achieved by solving the classification
problem corresponding to each pattern independently.
Thus, it will be difficult to
achieve a low regret against arbitrary maps from
$\{0,1\}^d$ to $\mathbb{R}^d$. In the following section we consider a
restricted function class and show that a mirror-descent
algorithm can achieve regret polynomial in $d$ and sub-linear in $T$,
implying that the average regret is vanishing.
\subsection{\mbox{Linear Corruption-Dependent Hypotheses}}
Here we analyze a corruption-dependent hypothesis class that is
parametrized by a matrix $\mat{A} \in \mathbb{R}^{d \times k}$, where $k$ may be
a function of $d$. In the simplest case of $k = d$, the
parametrization looks for weights $\mat{w}(\mat{z}_t)$ that depend linearly on
the corruption vector $\mat{z}_t$. Defining $\mat{w}_{\mat{A}}(\mat{z}_t) = \mat{A}\mat{z}_t$
achieves this, and intuitively this allows us to capture how the
presence or absence of one feature affects the weight of another
feature. This will be clarified further in the examples.
In general, the matrix $\mat{A}$ will be $d\times k$, where $k$ will be
determined by a function ${\boldsymbol \psi}(\mat{z}_t) \in \set{0,1}^k$ that maps
$\mat{z}_t$ to a possibly higher-dimensional space. Given a fixed ${\boldsymbol \psi}$,
the explicit parameterization in terms of $\mat{A}$ is
\begin{equation}
\mat{w}_{\mat{A}, {\boldsymbol \psi}}(\mat{z}_t) = \mat{A} {\boldsymbol \psi}(\mat{z}_t) \,.
\end{equation}
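Operationally, the hypothesis is just two matrix-vector products; a
minimal sketch with a placeholder ${\boldsymbol \psi}$:
\begin{verbatim}
import numpy as np

def predict(A, psi, z, x_corrupt):
    # w(z) = A psi(z); the prediction is <w(z), x'>
    return (A @ psi(z)) @ x_corrupt

# simplest case psi(z) = z, i.e. k = d:
d = 3
A = np.eye(d)
z = np.array([1.0, 0.0, 1.0])
x_corrupt = np.array([0.5, 2.0, -1.0]) * z
print(predict(A, lambda z: z, z, x_corrupt))
\end{verbatim}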
In what follows, we drop the subscript from $ \mat{w}_{\mat{A}, {\boldsymbol \psi}}$ in order
to simplify notation. Essentially this allows us to introduce
non-linearities as a function of the corruption vector, but the
non-linear transform is known and fixed throughout the learning
process. Before analyzing this setting, we give a few examples and
intuition as to why such a parametrization is useful. In each example,
we will show how there exists a choice of a matrix $\mat{A}$ that captures
the specific problem's assumptions. This implies that the fixed
comparator can use this choice in hindsight, and by having a low
regret, our algorithm would implicitly learn a hypothesis close to
this reasonable choice of $\mat{A}$.
\subsubsection{Corruption-free special case}
We start by noting that in the case of no corruption (i.e. $\forall
t, \mat{z}_t = \mat{1}$) a standard linear hypothesis model can be cast within
the matrix based framework by defining ${\boldsymbol \psi}(\mat{z}_t) = 1$ and learning
$\mat{A} \in \mathbb{R}^{d \times 1}$.
\subsubsection{Ranking-based parameterization}
One natural method for classification is to order the features by their
predictive power, and to weight features proportionally to their
ranking (in terms of absolute value; that is, the sign of weight
depends on whether the correlation with the label is positive or
negative). In the corrupted features setting, this naturally
corresponds to taking the available features at any round and putting
more weight on the most predictive observed features. This is
particularly important while using margin-based losses such as the
hinge loss, where we want the prediction to have the right sign and be
large enough in magnitude.
Our parametrization allows such a strategy when using a simple
function ${\boldsymbol \psi}(\mat{z}_t) = \mat{z}_t$. Without loss of generality, assume that
the features are arranged in decreasing order of discriminative power
(we can always rearrange rows and columns of $\mat{A}$ if they are not). We
also assume positive correlations of all features with the label; a more
elaborate construction works for $\mat{A}$ when they are not. In this case,
consider the parameter matrix and the induced classification weights
\begin{align*}
[\mat{A}]_{i,j} = \left\{ \hspace{-0.15cm}
\begin{array}{rl}
1, & \hspace{-0.15cm} j = i \\
-\frac{1}{d}, & \hspace{-0.15cm} j < i \\
0, & \hspace{-0.15cm} j > i
\end{array}
\right.\!\!, ~~[\mat{w}(\mat{z}_t)]_i \! = \! [\mat{z}_t]_i\biggl(1 - \!\!\!\!\!
\sum_{\substack{j < i :\\ [\mat{z}_t]_j = 1}} \frac{1}{d}\biggr).
\end{align*}
Thus, for all $i < j$ such that $[\mat{z}_t]_i = [\mat{z}_t]_j = 1$ we have
$[\mat{w}(\mat{z}_t)]_i \geq [\mat{w}(\mat{z}_t)]_j$. The choice of 1 for diagonals and
$1/d$ for off-diagonals is arbitrary and other values might also be
picked based on the data sequence $(\mat{x}_t,\mat{z}_t,y_t)$. In general,
features are weighted monotonically with respect to their
discriminative power with signs based on correlations with the label.
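A quick numerical check of this construction (our illustration):
$\mat{A}\mat{z}_t$ may be nonzero on missing coordinates, but these multiply
entries of $\mat{x}'_t$ that are zero anyway, so only the observed
coordinates shown in the display matter.
\begin{verbatim}
import numpy as np

d = 5
# 1 on the diagonal, -1/d below it, 0 above, as in the display above.
A = np.eye(d) - np.tril(np.ones((d, d)), k=-1) / d
z = np.array([1.0, 0.0, 1.0, 1.0, 1.0])    # the second feature is missing
w = A @ z
# on observed coordinates: w_i = 1 - (#observed j < i) / d, which
# decreases monotonically over the observed features
print(w * z)    # -> [1.0, 0.0, 0.8, 0.6, 0.4]
\end{verbatim}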
\subsubsection{Feature group based parameterization}
Another class of hypotheses that we can define within this framework
are those restricted to consider up to $p$-wise interactions between
features for some constant $0 < p \leq d$. In this case, we index the
$k = \sum_{i=1}^p \binom{d}{i} = O\big((\frac{d}{p})^p\big)$ unique
subsets of features of size up to $p$. Then define $[{\boldsymbol \psi}(\mat{z}_t)]_j =
1$ if the corresponding subset $j$ is uncorrupted by $\mat{z}_t$ and equal
to $0$ otherwise. An entry $[\mat{A}]_{i,j}$ now specifies the importance of
feature $i$ when at least the subset $j$ of features is present. Such a
model would, for example, have the ability to capture the scenario of
a feature that is only discriminative in the presence of some $p-1$
other features. For example, we can generalize the ranking example
from above to impose a soft ranking on groups of features.
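A sketch of this feature map (illustrative; explicit enumeration is
practical only for small $p$):
\begin{verbatim}
import numpy as np
from itertools import combinations

def make_psi(d, p):
    # index all subsets of size <= p; [psi(z)]_j = 1 iff every feature
    # in subset j is observed under z
    subsets = [s for r in range(1, p + 1)
               for s in combinations(range(d), r)]
    psi = lambda z: np.array([float(all(z[i] == 1 for i in s))
                              for s in subsets])
    return psi, subsets

psi, subsets = make_psi(d=4, p=2)
print(len(subsets))                    # k = 4 + 6 = 10
print(psi(np.array([1, 0, 1, 1])))    # subsets touching feature 1 get 0
\end{verbatim}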
\subsubsection{Corruption due to failed sensors}
\label{sec:failed_sensors}
A common scenario for missing features arises in applications
involving an array of measurements, for example, from a sensor network,
wireless motes, or an array of cameras or CCDs, where each sensor is bound to fail
occasionally. The typical strategy for dealing with such situations
involves the use of redundancy. For instance, if a sensor fails, then
some kind of an averaged measurement from the neighboring sensors
might provide a reasonable surrogate for the missing value.
It is possible to design a choice of $\mat{A}$ matrix for the comparator
that only uses the local measurement when it is present, but uses an
averaged approximation based on some fixed averaging distribution on
neighboring features when the local measurement is missing. For each
feature, we consider a probability distribution $p_i$ which specifies the
averaging weights to be used when approximating feature $i$ using
neighboring observations. Let $\mat{w}^*$ be the weight vector that the
comparator would like to use if all the features were present. Then,
with ${\boldsymbol \psi}(\mat{z}) = \mat{z}$, we define for $j \neq i$,
\begin{equation} \label{eqn:Amatrixlocal}
[\mat{A}]_{i,i} = \mat{w}^*_i + \sum_{j\ne i}\mat{w}^*_jp_{ji},\quad [\mat{A}]_{i,j} =
-\mat{w}^*_jp_{ji}.
\end{equation}
Thus, if only feature $k$ is missing, we still
have ${\mat{x}'}_t^\top \mat{A} \mat{z}_t = \sum_{i,j} [\mat{x}'_t]_i [\mat{z}_t]_j [\mat{A}]_{i,j} =
\sum_{i\neq k,j\neq k} [\mat{x}_t]_i [\mat{A}]_{i,j} = \sum_{i\neq k} [\mat{x}_t]_i
[\mat{w}^*]_i + [\mat{w}^*]_k \sum_{i \neq k} [\mat{x}_t]_i p_{ki}$, where by assumption
$\sum_{i \neq k} [\mat{x}_t]_i p_{ki} \approx [\mat{x}_t]_k$.
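The construction~\eqref{eqn:Amatrixlocal} is easy to verify
numerically; in the sketch below (our illustration) the redundancy is
made exact, so the prediction from corrupted data coincides with
$\dprod{\mat{w}^*}{\mat{x}_t}$:
\begin{verbatim}
import numpy as np

def sensor_A(w_star, P):
    # P[j, i] = p_{ji}: weight on sensor i when approximating sensor j
    # (rows sum to one, zero diagonal)
    B = P.T * w_star[None, :]              # B[i, j] = w*_j p_{ji}
    A = -B                                 # off-diagonal entries
    A[np.diag_indices(len(w_star))] = w_star + B.sum(axis=1)
    return A

rng = np.random.default_rng(1)
d, k = 4, 2
w_star, x = rng.normal(size=d), rng.normal(size=d)
P = rng.uniform(size=(d, d)); np.fill_diagonal(P, 0.0)
P /= P.sum(axis=1, keepdims=True)
x[k] = P[k] @ x                            # make the redundancy exact
z = np.ones(d); z[k] = 0.0                 # sensor k fails
A = sensor_A(w_star, P)
print((x * z) @ A @ z, w_star @ x)         # the two values coincide
\end{verbatim}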
Of course, the averaging in such applications is typically local, and
we expect each sensor to put large weights only on neighboring
sensors. This can be specified via a neighborhood graph, where nodes
$i$ and $j$ have an edge if $j$ is used to predict $i$ when feature
$i$ is not observed and vice versa. From the
construction~\eqref{eqn:Amatrixlocal} it is clear that the only
off-diagonal entries that are non-zero would correspond to the
edges in the neighborhood graph. Thus we can even add this
information to our algorithm and constrain several off-diagonal
elements to be zero, thereby restricting the complexity of the
problem.
\subsection{Matrix-Based Algorithm and Regret}
\label{sec:matrix-alg}
We use a standard mirror-descent style
algorithm~\cite{yudin83book, beck2003mirror} in the matrix based
parametrization described above. It is characterized by a
strongly convex regularizer $\mathcal{R}~:~\mathbb{R}^{d \times k}\to\mathbb{R}$, that is
\vspace{-0.2cm}
\begin{small}
\begin{equation*}
\mathcal{R}(\mat{A}) \geq \mathcal{R}(\mat{B}) + \dprod{\nabla\mathcal{R}(\mat{B})}{\mat{A} - \mat{B}}_F + \frac{1}{2}\|\mat{A}
- \mat{B}\|^2~~\forall\mat{A}, \mat{B} \! \in \! \mathcal{A},
\vspace{-0.2cm}
\end{equation*}
\end{small}
for some norm $\|\cdot\|$ and where $\dprod{\mat{A}}{\mat{B}}_F = \mathrm{Tr}(\mat{A}^\top\mat{B})$ is the trace
inner product. An example is the squared Frobenius norm
$\mathcal{R}(\mat{A}) = \frac{1}{2}\|\mat{A}\|_F^2$. For any such function, we can
define the associated Bregman divergence
\begin{align*}
D_{\mathcal{R}}(\mat{A},\mat{B}) = \mathcal{R}(\mat{A}) - \mathcal{R}(\mat{B}) - \dprod{\nabla\mathcal{R}(\mat{B})}{\mat{A} -
\mat{B}}_F .
\end{align*}
We assume $\mathcal{A}$ is a
convex subset of $\mathbb{R}^{d\times k}$, which could encode
constraints such as some off-diagonal entries being zero in the setup
of Section~\ref{sec:failed_sensors}. To simplify presentation in what
follows, we will use the shorthand $\ell_t(\mat{A}) = \ell(\dprod{\mat{A}
{\boldsymbol \psi}(\mat{z}_t)}{\mat{x}'_t}, y_t)$. The algorithm initializes with any $\mat{A}_0 \in
\mathcal{A}$ and updates
\begin{equation}
\mat{A}_{t+1} \!\!=\! \arg\min_{\mat{A} \in
\mathcal{A}}\left\{\eta_t\dprod{\nabla\ell_t(\mat{A}_t)}{\mat{A}}_F
\!+\! D_{\mathcal{R}}(\mat{A}, \mat{A}_t)\right\}
\label{mirror_descent}
\end{equation}
If $\mathcal{A} = \mathbb{R}^{d\times k}$ and $\mathcal{R}(\mat{A}) =
\frac{1}{2}\|\mat{A}\|_F^2$, the update simplifies to gradient descent
$\mat{A}_{t+1} = \mat{A}_t - \eta_t\nabla\ell_t(\mat{A}_t)$.
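As an illustration, this unconstrained Frobenius case reduces to the
following loop (a sketch under our shorthand conventions;
\texttt{loss\_grad} denotes the derivative of $\ell$ in its first
argument):
\begin{verbatim}
import numpy as np

def online_gd(stream, psi, k, eta, loss_grad):
    # mirror descent with R(A) = ||A||_F^2 / 2 and A unconstrained,
    # i.e. plain online gradient descent on the matrix A
    A = None
    for x_corrupt, z, y in stream:
        if A is None:
            A = np.zeros((len(x_corrupt), k))
        y_hat = (A @ psi(z)) @ x_corrupt
        # chain rule: grad_A loss = loss'(y_hat, y) * x' psi(z)^T
        A -= eta * loss_grad(y_hat, y) * np.outer(x_corrupt, psi(z))
    return A

# e.g. squared loss: loss_grad = lambda y_hat, y: 2.0 * (y_hat - y)
\end{verbatim}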
Our main result of this section is a guarantee on the regret incurred
by Algorithm~\eqref{mirror_descent}. The proof follows from standard
arguments (see e.g.~\cite{yudin83book,CesaBianchiLugosi06book}). Below, the dual norm is defined
as $\|\mat{V}\|_* = \sup_{\mat{U} : \|\mat{U}\| \leq 1} \dprod{\mat{U}}{\mat{V}}_F$.
\begin{theorem}
Let $\mathcal{R}$ be strongly convex with respect to a norm
$\|\cdot\|$ and $\|\nabla \ell_t(\mat{A})\|_* \leq G$, then
Algorithm \ref{mirror_descent} with learning rate
$\eta_t = \frac{R}{G \sqrt{T}}$ exhibits the following regret
upper bound compared to any $\mat{A}$ with $\|\mat{A}\|\leq R$,
\begin{equation*}
\sum_{t=1}^T \! \ell(\dprod{\mat{A}_t {\boldsymbol \psi}(\mat{z}_t)}{\mat{x}_t'}, y_t) -
\!\! \inf_{\mat{A} \in \mathcal{A}} \sum_{t=1}^T \! \ell(\dprod{\mat{A} {\boldsymbol \psi}(\mat{z}_t)}{\mat{x}_t'}, y_t)
\leq 3RG\sqrt{T}.
\end{equation*}
\label{thm:mmdbound}
\end{theorem}
\section{Batch Imputation Based Algorithm}
\label{sec:batch}
Recalling the setup of Section~\ref{sec:setup-batch}, in this section
we look at imputation mappings of the form
\begin{equation}
{\boldsymbol \phi}_{\mat{M}}(\mat{x}', \mat{z}) = \mat{x}' + \diag(1-\mat{z})\mat{M}^\top\mat{x}'\,.
\label{eqn:linearimp}
\end{equation}
Thus we retain all the observed entries in the vector $\mat{x}'$, while
the missing features are predicted using a linear combination of the
observed features, where the $i_{th}$ column of $\mat{M}$ encodes the
averaging weights for the $i_{th}$ feature. Such a linear prediction
framework for features is natural. For instance, when the data vectors
$\mat{x}$ are Gaussian, the conditional expectation of any feature given
the other features is a linear function. The predictions are now
made using the dot product
\begin{equation*}
\dprod{\mat{w}}{{\boldsymbol \phi}(\mat{x}',\mat{z})} = \dprod{\mat{w}}{\mat{x}'} +
\dprod{\mat{w}}{\diag(1-\mat{z})\mat{M}^\top\mat{x}'},
\end{equation*}
where we would like to estimate $\mat{w}, \mat{M}$ based on the data
samples. From a quick inspection of the resulting learning problem,
it becomes clear that optimizing over such a hypothesis class
leads to a non-convex problem. The convexity of the loss plays a
critical role in the regret framework of online learning, which is why
we restrict ourselves to a batch i.i.d.\ setting here.
In the sequel we will provide a convex relaxation to the learning
problem resulting from the
parametrization~\eqref{eqn:linearimp}. While we can make this
relaxation for natural loss functions in both classification and
regression scenarios, we restrict ourselves to a linear regression
setting here as the presentation for that example is simpler due to the
existence of a closed form solution for the ridge regression
problem.
In what follows, we consider only the corrupted data and thus simply
denote corrupted examples as $\mat{x}_i$. Let $\mat{X}$ denote the matrix with
$i_{th}$ row equal to $\mat{x}_i$ and similarly define $\mat{Z}$ as the matrix
with $i_{th}$ row equal to $\mat{z}_i$. It will also be useful to define
$\overline{\mat{Z}} = \mat{1}\mat{1}^\top - \mat{Z}$ and $\overline{\mat{z}}_i = \mat{1} - \mat{z}_i$ and finally let $\overline{\mat{Z}}_i
= \diag(\overline{\mat{z}}_i)$.
\subsection{Imputed Ridge Regression (IRR)}
\label{sec:imputation-alg}
In this section we will consider a modified version of the
ridge regression (RR) algorithm, robust to missing features. The
overall optimization problem we are interested in is as follows,
\begin{small}
\begin{align}
\hspace{-0.28cm} \min_{\{\mat{w},\mat{M}:\|\mat{M}\|_F \leq \gamma\}} \!
\frac{\lambda}{2} \| \mat{w} \|^2 \!+\! \frac{1}{T} \sum_{i=1}^T \! \big(y_i \!-\!
\mat{w}^\top \!(\mat{x}_i \!+\! \overline{\mat{Z}}_i \mat{M}^\top\mat{x}_i)\big)^2
\label{irr_primal}
\end{align}
\end{small}
where the hypothesis $\mat{w}$ and imputation matrix $\mat{M}$ are
simultaneously optimized. In order to bound the size of the
hypothesis set, we have introduced the constraint $\|\mat{M}\|_F^2 \leq
\gamma^2$ that bounds the Frobenius norm of the imputation matrix.
The global optimum of the problem as presented in (\ref{irr_primal})
cannot be easily found as it is not jointly convex in both $\mat{w}$ and
$\mat{M}$. We next present a convex relaxation of the
formulation~\eqref{irr_primal}. The key idea is to take a dual over
$\mat{w}$ but not $\mat{M}$, so that we have a saddle-point problem in the dual
vector ${\boldsymbol \alpha}$ and $\mat{M}$. The resulting saddle point problem, while
being concave in ${\boldsymbol \alpha}$, is still not convex in $\mat{M}$. At this step we
introduce a new tensor $\mat{N} \in \mathbb{R}^{d\times d\times d}$, where
$[\mat{N}]_{i,j,k} = [\mat{M}]_{i,k}[\mat{M}]_{j,k}$. Finally we drop the non-convex
constraint relating $\mat{M}$ and $\mat{N}$ replacing it with a matrix positive
semidefiniteness constraint.
Before we can describe the convex relaxation, we need one more piece
of notation. Given a matrix $\mat{M}$ and a tensor $\mat{N}$, we define the
matrix $\mat{K}_{\mat{M}\mat{N}} \in \mathbb{R}^{T\times T}$
\begin{small}
\begin{multline}
[\mat{K}_{\mat{M}\mat{N}}]_{i,j} = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j \\
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k \mat{x}_i^\top \mat{N}_k
\mat{x}_j \,.
\label{eqn:relaxedkernel}
\end{multline}
\end{small}
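To make the relaxed kernel concrete, the following is a minimal NumPy
sketch of~\eqref{eqn:relaxedkernel}; the function name and the array
layout (rows of \texttt{X} and \texttt{Z} holding $\mat{x}_i$ and $\mat{z}_i$,
and \texttt{N[k]} holding the slice $\mat{N}_k$) are our own illustrative
choices.
\begin{verbatim}
import numpy as np

def relaxed_kernel(X, Z, M, N):
    # X, Z: (T, d) with rows x_i, z_i; M: (d, d); N: (d, d, d).
    T, d = X.shape
    Zbar = 1.0 - Z                    # rows are zbar_i
    A = (X @ M) * Zbar                # row i is (M^T x_i) * zbar_i
    K = X @ X.T + A @ X.T + X @ A.T   # first three terms
    for k in range(d):                # fourth term, one slice N_k at a time
        K += np.outer(Zbar[:, k], Zbar[:, k]) * (X @ N[k] @ X.T)
    return K
\end{verbatim}
As a sanity check, setting \texttt{N = np.einsum('ik,jk->kij', M, M)},
i.e.\ $[\mat{N}]_{i,j,k} = [\mat{M}]_{i,k}[\mat{M}]_{j,k}$, recovers the unrelaxed
matrix $\mat{K}_{\mat{M}}$ that appears in the proof sketch below.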
The following proposition gives the convex relaxation of the
problem~\eqref{irr_primal} that we refer to as Imputed Ridge Regression
(IRR), and which involves a strictly larger hypothesis class than the
$(\mat{w}, \mat{M})$ pairs with which we began.
\begin{proposition}
\label{prop:irr_relaxed}
The following semi-definite programming optimization problem provides
a convex relaxation to the non-convex problem (\ref{irr_primal}):
\begin{align}
\label{irr_relaxed}
& \min_{\substack{t,~ \mat{M}:\|\mat{M}\|^2_F\leq\gamma^2 \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
t \\
& \mathrm{s.t.}
~~ \left[
\begin{array}{cc}
\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I} & \mat{y} \\
\mat{y}^\top & t
\end{array}
\right] \succeq 0, ~~
\mat{K}_{\mat{M}\mat{N}} \succeq 0 \nonumber \,.
\end{align}
\label{prop:irr_relaxation}
\end{proposition}
The proof is deferred to the appendix for lack of space. The main idea
is to take the quadratic form that arises in the dual
formulation of~\eqref{irr_primal} with the matrix $\mat{K}_\mat{M}$,
\begin{small}
\begin{multline*}
\!\!\!\!\!\! [\mat{K}_{\mat{M}}]_{i,j}\! = \!\mat{x}_i^\top \mat{x}_j
\!+\! \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
\!+\! \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
\!+\! \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j\!,
\end{multline*}
\end{small}
\noindent and relax it to the matrix $\mat{K}_{\mat{M}\mat{N}}$~\eqref{eqn:relaxedkernel}. The
constraint involving positive semidefiniteness of $\mat{K}_{\mat{M}\mat{N}}$ is
needed to ensure the convexity of the relaxed problem. The norm constraint
on $\mat{N}$ is a consequence of the norm constraint on $\mat{M}$.
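Purely as an illustration, for small $T$ and $d$ the
SDP~\eqref{irr_relaxed} can be transcribed almost directly into an
off-the-shelf modeling language. The CVXPY sketch below is schematic:
the names are ours, expressions are explicitly symmetrized before the
PSD constraints, and an SDP-capable solver (e.g.\ SCS) is assumed. As
discussed in Section~\ref{sec:empirical}, generic SDP solvers scale
poorly on this problem, which is why our experiments use an SILP
approximation instead.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def irr_relaxed_sdp(X, Z, y, lam, gamma):
    T, d = X.shape
    Zbar = 1.0 - Z
    M = cp.Variable((d, d))
    N = [cp.Variable((d, d), symmetric=True) for _ in range(d)]
    t = cp.Variable((1, 1))
    A = cp.multiply(X @ M, Zbar)       # rows (M^T x_i) * zbar_i
    K = X @ X.T + A @ X.T + X @ A.T
    for k in range(d):
        K = K + cp.multiply(np.outer(Zbar[:, k], Zbar[:, k]),
                            X @ N[k] @ X.T)
    K = 0.5 * (K + K.T)                # symmetrize for the PSD interface
    block = cp.bmat([[K + lam * T * np.eye(T), y.reshape(-1, 1)],
                     [y.reshape(1, -1), t]])
    block = 0.5 * (block + block.T)
    cons = [block >> 0, K >> 0,
            cp.sum_squares(M) <= gamma ** 2,
            sum(cp.sum_squares(Nk) for Nk in N) <= gamma ** 4]
    cp.Problem(cp.Minimize(t[0, 0]), cons).solve()
    return M.value, np.stack([Nk.value for Nk in N]), t.value[0, 0]
\end{verbatim}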
One tricky issue with relaxations is mapping the relaxed solution back
to a good solution of the original problem. In our case, this would
correspond to finding a good $(\mat{w}, \mat{M})$ pair for the primal
problem~\eqref{irr_primal}. We bypass this step and instead directly
define the prediction on any point $(\mat{x}_0,\mat{z}_0)$ as:
\begin{multline}
\sum_{i=1}^T \alpha_i( \mat{x}_i^\top \mat{x}_0+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0 \\
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0).
\label{eqn:dualpredictor}
\end{multline}
Here, ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ are solutions to the saddle-point problem
\begin{align}
\label{eqn:irr_saddlepoint}
\min_{\substack{\mat{M}:\|\mat{M}\|_F\leq\gamma \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
\!\!\!\!\! \max_{\boldsymbol \alpha}
2 {\boldsymbol \alpha}^\top \mat{y} \! -
\! {\boldsymbol \alpha}^\top (\mat{K}_{\mat{M}\mat{N}} \!+\! \lambda T \mat{I}) {\boldsymbol \alpha} \,.
\end{align}
We start by noting that the above optimization problem is equivalent
to the one in Proposition~\ref{prop:irr_relaxation}. The intuition
behind this definition~\eqref{eqn:dualpredictor} is that the solution to the
problem~\eqref{irr_primal} has this form, with $[\mat{N}]_{i,j,k}$ replaced
with $[\mat{M}]_{i,k}[\mat{M}]_{j,k}$. In the next section, we show a Rademacher
complexity bound over functions of the form above to justify our
convex relaxation.
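Once $\mat{M}$ and $\mat{N}$ are fixed, the optimal ${\boldsymbol \alpha}$
in~\eqref{eqn:irr_saddlepoint} is available in closed form as
$(\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I})^{-1}\mat{y}$ (a fact we also use in the
analysis below), so the predictor~\eqref{eqn:dualpredictor} can be
evaluated directly. A minimal sketch, reusing the
\texttt{relaxed\_kernel} function from the earlier snippet:
\begin{verbatim}
import numpy as np

def irr_predict(x0, z0, X, Z, y, M, N, lam):
    T, d = X.shape
    K = relaxed_kernel(X, Z, M, N)     # defined in the earlier sketch
    alpha = np.linalg.solve(K + lam * T * np.eye(T), y)
    Zbar, zbar0 = 1.0 - Z, 1.0 - z0
    s = X @ x0                         # terms x_i^T x_0
    s += ((X @ M) * Zbar) @ x0         # terms x_i^T M Zbar_i x_0
    s += X @ (zbar0 * (M.T @ x0))      # terms x_i^T Zbar_0 M^T x_0
    for k in range(d):                 # terms with the slices N_k
        s += Zbar[:, k] * zbar0[k] * (X @ (N[k] @ x0))
    return alpha @ s
\end{verbatim}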
\subsection{Theoretical analysis of IRR}
As mentioned in the previous section, we predict with a hypothesis of
the form~\eqref{eqn:dualpredictor} rather than going back to the primal
class indexed by $(\mat{w}, \mat{M})$ pairs. In this section, we would like to
show that the new hypothesis class parametrized by ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ is
not too rich for the purposes of learning. To do this, we give the
class of all possible hypotheses that can be the solutions to the dual
problem~\eqref{irr_relaxed} and then prove a Rademacher complexity
bound over that class.
All ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ triples that are potential
solutions to~\eqref{irr_relaxed} induce hypotheses lying in the following set
\begin{small}
\begin{multline*}
\!\!\! \mathcal{H} \! = \! \Bigg\{\! h(\mat{x}_0, \mat{z}_0) \! \mapsto \!\! \sum_{i=1}^T \!\! \alpha_i(
\mat{x}_i^\top \mat{x}_0
+
\mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0
+\\ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0)
\!:\! \|\mat{M}\|_F \!\leq\! \gamma, \|\mat{N}\|_F \!\leq\! \gamma^2,
\|{\boldsymbol \alpha}\| \!\leq\! \frac{B}{\lambda \sqrt{T}}
\! \Bigg\}
\end{multline*}
\end{small}
The bound on $\|{\boldsymbol \alpha}\|$ is implicit in the optimization
problem (assuming the training labels are bounded, $\forall i, |y_i|
\leq B$). To see this, we note that the problem~\eqref{irr_relaxed}
is obtained from~\eqref{eqn:irr_saddlepoint} by using the closed-form
solution of the optimal ${\boldsymbol \alpha} = (\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I})^{-1}
\mat{y}$. Since the constraint $\mat{K}_{\mat{M}\mat{N}} \succeq 0$ guarantees
$\lambda_{\min}(\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I}) \geq \lambda T$, where
$\lambda_{\min}(\mat{A})$ denotes the smallest eigenvalue of the matrix
$\mat{A}$, we can bound $\|{\boldsymbol \alpha}\| \leq \|\mat{y}\| /
\lambda_{\min}(\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I}) \leq \frac{B \sqrt{T}}{\lambda
T} = \frac{B}{\lambda \sqrt{T}}$. Note that in general there is no
linear hypothesis $\mat{w}$ that
corresponds to the hypotheses in the relaxed class $\mathcal{H}$ and that we
are dealing with a strictly more general function class. However, the
following theorem demonstrates that the Rademacher complexity of this
function class is reasonably bounded in terms of the number of
training points $T$ and dimension $d$ and thereby still provides
provable generalization performance \cite{bm_rademacher}.
Recall the Rademacher complexity of a class $\mathcal{H}$
\begin{equation}
\ensuremath{\mathfrak{R}}_T(\mathcal{H}) = \mat{E}_S\mat{E}_{{\boldsymbol \sigma}} \left[ \frac{1}{T} \sup_{h \in \mathcal{H}} \bigg|
\sum_{i=1}^T \sigma_i h(\mat{x}_i,\mat{z}_i) \bigg| \right] \,,
\end{equation} where the inner expectation is over independent Rademacher
random variables $(\sigma_1,\ldots,\sigma_T)$ and the outer one over
a sample $S = ((\mat{x}_1,\mat{z}_1),\ldots,(\mat{x}_T,\mat{z}_T))$.
\begin{theorem}
\label{thm:rademacher}
If we assume a bounded regression problem $\forall y, ~|y| \leq B$
and $\forall \mat{x},~ \|\mat{x}\| \leq R$, then the Rademacher complexity of the
hypothesis set $\mathcal{H}$ is bounded as follows,
\begin{equation*}
\ensuremath{\mathfrak{R}}_T(\mathcal{H}) \leq \big(1 + \gamma + (\gamma + \gamma^2) \sqrt{d} \big)
\frac{BR^2 }{\lambda \sqrt{T}} = O\bigg(\sqrt{\frac{d}{T}}\bigg) \,.
\end{equation*}
\end{theorem}
Due to space constraints, the proof is presented in the appendix.
Theorem~\ref{thm:rademacher} allows us to control the gap between
empirical and expected risks using standard Rademacher
complexity results. Theorem 8 of~\cite{bm_rademacher} immediately
provides the following corollary.
\begin{corollary} \label{corr:unif_dev}
Under the conditions of Theorem~\ref{thm:rademacher}, for any $0 < \delta \leq 1$, with
probability at least $1-\delta$ over samples of size $T$, every $h
\in \mathcal{H}$ satisfies
\begin{align*}
\mat{E}&[(y - h(\mat{x}',\mat{z}))^2] \leq \frac{1}{T}\sum_{t=1}^T(y_t -
h(\mat{x}'_t,\mat{z}_t))^2 \\&+ \frac{BR^2(1+\gamma)^2}{\lambda}
\left(\frac{BR^2(1+\gamma)^2 }{\lambda} \sqrt{\frac{d}{T}}
+ \sqrt{\frac{8\ln(2/\delta)}{T}}\right).
\end{align*}
\end{corollary}
\section{Empirical Results}
\label{sec:empirical}
This section presents empirical evaluation of the online matrix-based
algorithm~\ref{mirror_descent}, as well as the Imputed Ridge
Regression algorithm of Section~\ref{sec:imputation-alg}.
We use the baseline methods \emph{zero-imputation} and \emph{mean-imputation},
where missing entries are replaced with zeros and with the per-feature
mean estimated from the observed values, respectively. Once the data is
imputed, a standard online gradient descent algorithm or
ridge-regression algorithm is used. As a reference, we also show
the performance of a standard algorithm on uncorrupted data. The
algorithms are evaluated on several UCI repository datasets,
summarized in Table~\ref{table:data}.
The {\tt thyroid} dataset includes naturally
corrupted/missing data. The {\tt optdigits} dataset is subjected to
artificial corruption by deleting a column of pixels, chosen uniformly
at random from the 3 central columns of the image (each image contains
8 columns of pixels total). The remainder of the datasets are subjected to two
types of artificial corruption: \emph{data-independent} or
\emph{data-dependent} corruption. In the first case, each feature is
deleted independently at random, while in the latter case features are
deleted by thresholding their values.
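The exact corruption processes are specified in the appendix; purely
for illustration, one plausible minimal rendering of the two schemes
is the following, where the deletion probability \texttt{p} and the
per-feature thresholds \texttt{thresh} are hypothetical knobs rather
than the settings used in our experiments.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def corrupt_independent(X, p=0.4):
    # data-independent: delete each entry independently with probability p
    Z = (rng.random(X.shape) > p).astype(float)
    return X * Z, Z

def corrupt_dependent(X, thresh):
    # data-dependent: one possible convention is to delete feature j
    # wherever its value exceeds the threshold thresh[j]
    Z = (X <= thresh).astype(float)
    return X * Z, Z
\end{verbatim}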
\begin{table}
\begin{center}
{\small
\begin{tabular}{l|llcc}
dataset & $m$ & $d$ & $F_I$ & $F_D$ \\
\hline
\hline
{\tt abalone} & 4177 & 7 & $.62 \pm .08$ & $.61 \pm .12$ \\
{\tt housing} & 20640 & 8 & $.64 \pm .08$ & $.68 \pm .20$ \\
{\tt optdigits} & 5620 & 64 & $.88 \pm .00$ & $.88 \pm .00$ \\
{\tt park} & 3000 & 20 & $.58 \pm .06$ & $.61 \pm .08$ \\
{\tt thyroid} & 3163 & 5 & $.77 \pm .00$ & $.77 \pm .00$ \\
{\tt splice} & 1000 & 60 & $.63 \pm .01$ & $.66 \pm .03$ \\
{\tt wine} & 6497 & 11 & $.63 \pm .10$ & $.69 \pm .13$ \\
\end{tabular}
}
\end{center}
\vspace{-0.3cm}
\caption{
\small
Size of dataset ($m$), number of features ($d$), and the overall fraction of
remaining features in the training set after data-independent ($F_I$)
or data-dependent ($F_D$) corruption.}
\label{table:data}
\vspace{-0.5cm}
\end{table}
We report average error and standard deviations over 5 trials, using
1000 random training examples and corruption patterns. We tune
hyper-parameters using a grid search from $2^{-12}$ to $2^{10}$.
Further details and explicit corruption processes appear in the
appendix.
\subsection{\mbox{Online Corruption-Dependent Hypotheses}}
Here we analyze the online algorithm presented in
Section~\ref{sec:matrix-alg} using two different types of regularization. The
first method simply penalizes the Frobenius norm of the parameter matrix
$\mat{A}$ (frob-reg), $\mathcal{R}(\mat{A}) = \| \mat{A} \|_F^2$.
The second method (sparse-reg) forces a sparse solution by
constraining many entries of the parameter matrix equal to zero as
mentioned in Section~\ref{sec:failed_sensors}. We use the regularizer
$\mathcal{R}(\mat{A}) = \gamma \|\mat{A} \mat{1}\|^2 + \|\mat{A}\|_F^2$, where $\gamma$ is an
additional tunable parameter. This choice of regularization is based
on the example given in equation (\ref{eqn:Amatrixlocal}), where we
would have $\|\mat{A}\mat{1}\| = \|\mat{w}^*\|$.
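For concreteness, this regularizer and its gradient (as needed by the
mirror-descent update) admit a short NumPy implementation; the sketch
below assumes \texttt{A} is a \texttt{(d, k)} array.
\begin{verbatim}
def sparse_reg(A, gamma):
    v = A.sum(axis=1)                     # v = A 1
    return gamma * v @ v + (A * A).sum()  # gamma ||A 1||^2 + ||A||_F^2

def sparse_reg_grad(A, gamma):
    v = A.sum(axis=1)
    # 2 * gamma * (A 1) 1^T + 2 A, via broadcasting across columns
    return 2.0 * gamma * v[:, None] + 2.0 * A
\end{verbatim}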
We apply these methods to the {\tt splice} classification task and the
{\tt optdigits} dataset in several one vs.\ all classification tasks.
For {\tt splice}, the sparsity pattern used by the sparse-reg method
is chosen by constraining to zero those entries $[\mat{A}]_{i,j}$ for which
features $i$ and $j$ have a correlation coefficient less than 0.2, as
measured on the corrupted training sample. In the case of {\tt optdigits}, only
entries corresponding to neighboring pixels are allowed to be
non-zero.
Figure \ref{fig:plots} shows that, when subject to data-independent
corruption, the zero imputation, mean imputation and frob-reg methods
all perform relatively poorly while the sparse-reg method provides
significant improvement for the {\tt splice} dataset.
Furthermore, we find that data-dependent corruption is quite harmful to
mean imputation, as might be expected, while both frob-reg and
sparse-reg still provide significant improvement over zero-imputation.
More surprisingly, these methods also perform better than training on
uncorrupted data. We attribute this to the fact that we are using a
richer hypothesis function that is parametrized by the corruption
vector while the standard algorithm uses only a fixed hypothesis. In
Table \ref{table:online-digits} we see that the sparse-reg performs at
least as well as both zero and mean imputation in all tasks and
offers significant improvement in the 3-vs-all and 6-vs-all tasks. In
this case, the frob-reg method performs comparably to sparse-reg and
is omitted from the table due to space.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.46\columnwidth]{splice_online_indepcorr_small.eps} &
\hspace{-0.28cm}
\includegraphics[width=0.46\columnwidth]{splice_online_dependcorr_small.eps} \\
\includegraphics[width=0.46\columnwidth]{abalone_series_indepcorr.eps} &
\hspace{-0.28cm}
\includegraphics[width=0.46\columnwidth]{abalone_series.eps}
\vspace{-0.35cm}
\end{tabular}
\end{center}
\vspace{-0.25cm}
\caption{\small
0/1 loss as a function of $T$ for {\tt splice} dataset with
independent (top left) and dependent corruption (top right). RMSE on
{\tt abalone} across varying amounts of independent (bottom left) and
dependent corruption (bottom right); fraction of features remaining
indicated on x-axis.}
\label{fig:plots}
\vspace{-0.2cm}
\end{figure}
\begin{table}
\begin{center}
{\small
\begin{tabular}{l|ccc|c}
& \hspace{-0.15cm} zero-imp & \hspace{-0.28cm} mean-imp & \hspace{-0.28cm} sparse-reg & no corr \\
\hline
\hline
{2}
& \hspace{-0.15cm} $.035 \pm .002$ & \hspace{-0.28cm} $.039 \pm .004$ & \hspace{-0.28cm} $.033
\pm .003$ & $.024 \pm .002$ \\
\hline
{3}
& \hspace{-0.15cm} $.041 \pm .002$ & \hspace{-0.28cm} $.043 \pm .001$ & \hspace{-0.28cm} $\mathbf{.039
\pm .002}$ & $.027 \pm .003$ \\
\hline
{4}
& \hspace{-0.15cm} $.020 \pm .002$ & \hspace{-0.28cm} $.023 \pm .002$ & \hspace{-0.28cm} $.021
\pm .001$ & $.015 \pm .001$ \\
\hline
{6}
& \hspace{-0.15cm} $.026 \pm .002$ & \hspace{-0.28cm} $.024 \pm .002$ & \hspace{-0.28cm} $\mathbf{.023
\pm .002}$ & $.015 \pm .002$
\end{tabular}
}
\end{center}
\vspace{-0.4cm}
\caption{\small
One-vs-all classification results on \texttt{optdigits} dataset
(target digit in first column) with column-based
corruption for 0/1 loss.}
\label{table:online-digits}
\vspace{-0.5cm}
\end{table}
\subsection{Imputed Ridge Regression}
In this section we consider the performance of IRR across
many datasets. We found standard SDP solvers to be quite slow for
problem~(\ref{irr_relaxed}). We instead use a semi-infinite
linear program (SILP) to find an approximately optimal
solution (see e.g.~\cite{SDP-SILP} for details).
In Tables~\ref{table:imputed-results-indep}
and~\ref{table:imputed-results-dependent} we compare the performance
of the IRR algorithm to zero and mean imputation as well as to
standard ridge regression performance on the uncorrupted data. Here
we see IRR provides improvement over zero-imputation in all cases and
does at least as well as mean-imputation when dealing with
data-independent corruption. For data-dependent corruption, IRR
continues to perform well, while mean-imputation suffers. For this
setting, we have also compared to an \emph{independent-imputation}
method, which imputes data using an $\mat{M}$ matrix that is trained
independently of the learning algorithm. In particular, the $i$th
column of $\mat{M}$ is selected as the best linear predictor of the
$i$th feature given the rest, i.e.\ the solution to $\argmin_\v
\sum_{k \in \mathcal{X}_i} ([\mat{x}_k]_i - \sum_{j \neq i} [\mat{x}_k]_j [\v]_j)^2$,
where $\mathcal{X}_i$ is the set of training examples that have the $i$th
feature present (see the sketch below). Although this method can
perform better than mean-imputation, the joint optimization carried
out by IRR yields an even more significant improvement. At the bottom
of Table \ref{table:imputed-results-dependent} we also measure
performance on {\tt thyroid}, which has naturally missing
values. Here again IRR performs significantly better than the
competitor methods.
Zero-imputation is not shown due to space, but it performs
uniformly worse. Figure \ref{fig:plots} shows more detailed
results for the {\tt abalone} dataset across different levels of
corruption and displays the consistent improvement which the IRR
algorithm provides.
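A minimal sketch of this independent-imputation baseline (our own
illustrative implementation, which for simplicity leaves the remaining
missing entries of the predictor features at zero):
\begin{verbatim}
import numpy as np

def independent_imputer(X, Z):
    # Column i of M is the least-squares predictor of feature i from
    # the other features, fit on the rows where feature i is observed.
    T, d = X.shape
    M = np.zeros((d, d))
    for i in range(d):
        rows = Z[:, i] == 1
        others = [j for j in range(d) if j != i]
        A = X[np.ix_(rows, others)]
        b = X[rows, i]
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        M[others, i] = v
    return M
\end{verbatim}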
\begin{table}
\begin{center}
{\small
\begin{tabular}{l|ccc|c}
& \hspace{-0.15cm} zero-imp & \hspace{-0.28cm} mean-imp & \hspace{-0.28cm} IRR & no corr \\
\hline
\hline
A & \hspace{-0.15cm} $.199 \pm .004$ & \hspace{-0.28cm} $.187 \pm .003$ & \hspace{-0.28cm} $\mathbf{.183 \pm
.002}$ & $.158 \pm .002$ \\
H & \hspace{-0.15cm} $.414 \pm .025$ & \hspace{-0.28cm} $\mathbf{.370 \pm .019}$ & \hspace{-0.28cm}
$\mathbf{.373 \pm .019}$ & $.288 \pm .001$ \\
P & \hspace{-0.15cm} $.457 \pm .006$ & \hspace{-0.28cm} $\mathbf{.445 \pm .004}$ & \hspace{-0.28cm} $.451 \pm .004$ & $.422 \pm .004$ \\
W & \hspace{-0.15cm} $.280 \pm .006$ & \hspace{-0.28cm} $\mathbf{.268 \pm .009}$ & \hspace{-0.28cm}
$\mathbf{.269 \pm .008}$ & $.246 \pm .001$ \\
\end{tabular}
}
\caption{\small RMSE for various imputation methods across the datasets {\tt
abalone} (A), {\tt housing} (H), {\tt park} (P) and {\tt wine} (W)
when subject to data-independent corruption.}
\label{table:imputed-results-indep}
\end{center}
\end{table}
\begin{table}
\begin{center}
{\small
\begin{tabular}{l|ccc|c}
& \hspace{-0.15cm} mean-imp & \hspace{-0.28cm} ind-imp & \hspace{-0.28cm} IRR & no corr \\
\hline
\hline
A & \hspace{-0.15cm} $.180 \pm .006$ & \hspace{-0.28cm} $.183 \pm .012$ & \hspace{-0.28cm} $\mathbf{.167 \pm
.011}$ & $.159 \pm .004$ \\
H & \hspace{-0.15cm} $.400 \pm .064$ & \hspace{-0.28cm} $.363 \pm .041$ & \hspace{-0.28cm} $\mathbf{.326 \pm
.035}$ & $.289 \pm .001$ \\
P & \hspace{-0.15cm} $.444 \pm .008$ & \hspace{-0.28cm} $.423 \pm .015$ & \hspace{-0.28cm} $\mathbf{.377 \pm
.035}$ & $.422 \pm .001$ \\
W & \hspace{-0.15cm} $.264 \pm .009$ & \hspace{-0.28cm} $.260 \pm .011$ & \hspace{-0.28cm} $.256 \pm .011$ & $.247
\pm .001$\\
\hline \hline
T & \hspace{-0.15cm} $.531 \pm .005$ & \hspace{-0.28cm} $.528 \pm .003$ & \hspace{-0.28cm} $\mathbf{.521 \pm
.004}$ & \hspace{-0.28cm} --
\end{tabular}
}
\end{center}
\vspace{-0.3cm}
\caption{\small RMSE for various imputation methods across the
datasets {\tt abalone} (A), {\tt housing} (H), {\tt park} (P) and
{\tt wine} (W) when subject to data-dependent corruption. The {\tt
thyroid} (T) dataset has naturally occurring missing features. }
\label{table:imputed-results-dependent}
\vspace{-0.1cm}
\end{table}
In Table \ref{table:impute-digits} we see that, on the
column-corrupted {\tt optdigits} dataset, the IRR algorithm performs
significantly better than zero-imputation and mean-imputation in the
majority of tasks.
\begin{table}
\begin{center}
{\small
\begin{tabular}{l|ccc|c}
& zero-imp & mean-imp & IRR & no corr \\
\hline
\hline
2 & \hspace{-0.15cm} $.352 \pm .003$ & \hspace{-0.28cm} $.351 \pm .004$ & \hspace{-0.28cm} $\mathbf{.346 \pm
.002}$ & $.321 \pm .003$ \\
3 & \hspace{-0.15cm} $.450 \pm .005$ & \hspace{-0.28cm} $.435 \pm .004$ & \hspace{-0.28cm} $\mathbf{.426 \pm
.005}$ & $.398 \pm .004$ \\
4 & \hspace{-0.15cm} $.372 \pm .003$ & \hspace{-0.28cm} $\mathbf{.363 \pm .002}$ & \hspace{-0.28cm}
$\mathbf{.364 \pm .003}$ & $.345 \pm .002$ \\
6 & \hspace{-0.15cm} $.369 \pm .003$ & \hspace{-0.28cm} $.360 \pm .002$ & \hspace{-0.28cm} $\mathbf{.353 \pm
.003}$ & $.333 \pm .003$ \\
\end{tabular}
}
\end{center}
\vspace{-0.3cm}
\caption{\small RMSE (using binary labels) for one-vs-all
classification on \texttt{optdigits} subject to column-based
corruption. }
\label{table:impute-digits}
\vspace{-0.5cm}
\end{table}
\section{Conclusion}
We have introduced two new algorithms, addressing the problem of
learning with missing features in both the adversarial online and
i.i.d.\ batch settings. The algorithms are motivated by intuitive
constructions and we also provide theoretical performance
guarantees. Empirically we show encouraging initial results for
online matrix-based corruption-dependent hypotheses as well as many
significant results for the suggested IRR algorithm, which indicate
superior performance when compared to several baseline imputation
methods.
\subsubsection*{Acknowledgements}
We gratefully acknowledge the support of the NSF under award
DMS-0830410. AA was partially supported by an MSR PhD Fellowship. We
also thank anonymous reviewers for suggesting additional references
and improvements to proofs.
\bibliographystyle{plain}
{ \small
\section{Introduction}
Standard learning algorithms assume that each training example is
\emph{fully observed} and doesn't suffer any corruption. However, in
many real-life scenarios, training and test data often undergo some
form of corruption. We consider settings where all the features might
not be observed in every example, allowing for both adversarial and
stochastic feature deletion models. Such situations arise, for
example, in medical diagnosis---predictions are often desired using
only a partial array of medical measurements due to time or cost
constraints. Survey data are often incomplete due to partial
non-response of participants. Vision tasks routinely need to deal with
partially corrupted or occluded images. Data collected through
multiple sensors, such as multiple cameras, is often subject to the
sudden failure of a subset of \mbox{the sensors.}
In this work, we design and analyze learning algorithms that address
these examples of learning with missing features. The first setting
we consider is online learning where both examples and
missing features are chosen in an arbitrary, possibly adversarial,
fashion. We define a novel notion of regret suitable to the setting
and provide an algorithm which has a provably bounded regret on the
order of $O(\sqrt{T})$, where $T$ is the number of examples.
The second scenario is batch learning, where examples and
missing features are drawn according to a fixed and unknown
distribution. We design a learning algorithm which is guaranteed to
globally optimize an intuitive objective function and which also
exhibits a generalization error on the order of $O(d / \sqrt{T})$,
where $d$ is the data dimension.
Both algorithms are also explored empirically across several publicly
available datasets subject to various artificial and natural types of
feature corruption. We find very encouraging results, indicating the
efficacy of the suggested algorithms and their superior performance
over baseline methods.
Learning with missing or corrupted features has a long history in statistics
\cite{little_rubin,dempster_em}, and has recieved recent attention in
machine learning~\cite{dekel_corrupted, marlin, cesa_efficient,
chechik_struct}. Imputation methods (see~\cite{little_rubin, marlin,
dempster_em}) fill in missing values, generally independent
of any learning algorithm, after which standard algorithms can be
applied to the data. Better performance might be expected, though, by
learning the imputation and prediction functions
simultaneously. Previous works~\cite{marlin} address this issue
using EM, but can get stuck in local optima and are devoid of
theoretical guarantees. Our work also is different from settings where
features are missing only at test time~\cite{dekel_corrupted,
Globerson2006nightmare}, settings that give access to noisy versions
of all the features~\cite{cesa-noise} or settings where observed
features are picked by the algorithm~\cite{cesa_efficient}.
In Section \ref{sec:setup} we formally introduce both the general
online and batch settings we consider. Sections \ref{sec:online}
and \ref{sec:batch} detail the algorithms and theoretical results
within the online and batch settings respectively. Empirical results
are presented in Section~\ref{sec:empirical}.
\section{The Setting}
\label{sec:setup}
In our setting it will be useful to denote a training instance $\mat{x}_t
\in \mathbb{R}^d$ and prediction $y_t$, as well as a corruption vector
$\mat{z}_t \in \set{0,1}^d$, where
\begin{equation*}
[\mat{z}_t]_i = \left\{\begin{array}{cl} 0&\mbox{if feature $i$ is not
observed,}\\ 1&\mbox{if feature $i$ is observed.}\end{array}\right.
\end{equation*}
We will discuss as specific examples both
classification problems where $y_t \in \{-1,1\}$ and regression
problems where $y_t \in \mathbb{R}$. The learning algorithm is given
the corruption vector $\mat{z}_t$ as well as the corrupted instance,
\begin{equation*}
\mat{x}_t' = \mat{x}_t \circ \mat{z}_t \,,
\end{equation*}
where $\circ$ denotes the component-wise product between two vectors.
Note that the training algorithm is never given access to $\mat{x}_t$,
however it is given $\mat{z}_t$, and so has knowledge of exactly which
coordinates have been corrupted.
The following subsections explain the online and batch settings
respectively, as well as the type of hypotheses that are considered
in each.
\subsection{Online learning with missing features}
\label{sec:setup-online}
In this setting, at each time-step $t$ the learning algorithm is
presented with an arbitrarily (possibly adversarially) chosen
instance $(\mat{x}_t', \mat{z}_t)$ and is expected to predict $y_t$. After
prediction, the label is then revealed to the learner
which then can update its hypothesis.
A natural question to ask is what happens if we simply ignore
the distinction between $\mat{x}'_t$ and $\mat{x}_t$ and just run an
online learning algorithm on this corrupted data. Indeed, doing so
would give a small bound on regret:
\begin{equation*}
\sum_{t=1}^T \ell(\dprod{\mat{w}_t}{\mat{x}_t'}, y_t) -
\inf_{\mat{w} \in \mathcal{W}} \sum_{t=1}^T \ell(\dprod{\mat{w}}{\mat{x}_t'}, y_t) \,,
\end{equation*}
with respect to a convex loss function $\ell$ and for any convex
compact subset $\mathcal{W} \subseteq \mathbb{R}^d$. However, any fixed weight
vector $\mat{w}$ in the second term might have a very large loss, making
the regret guarantee useless---both the learner and the comparator
have a large loss making the difference small. For instance, assume one
feature perfectly predicts the label, while another one only predicts
the label with 80\% accuracy, and $\ell$ is the hinge loss. It is easy
to see that there is no fixed $\mat{w}$ that will perform well on both
examples where the first feature is observed and examples where the
first feature is missing but the second one is observed.
To address the above concerns, we consider using a linear
\emph{corruption-dependent hypothesis} which is permitted to change as
a function of the observed corruption $\mat{z}_t$. Specifically, given the
corrupted instance and corruption vector, the predictor uses a
function $\mat{w}_t(\cdot) : \{0,1\}^d \to \mathbb{R}^d$ to choose a weight
vector, and makes the
prediction $\widehat y_t = \dprod{\mat{w}_t(\mat{z}_t)}{\mat{x}_t'}$. In order to provide
theoretical guarantees, we will bound the following notion of regret,
\begin{equation}
\label{eqn:onlineregret}
\!\!R_T \! = \!\! \sum_{t=1}^T \! \ell(\dprod{\mat{w}_t}{\mat{x}_t'}, y_t) -
\!\! \inf_{\mat{w} \in \mathcal{W}} \sum_{t=1}^T \! \ell(\dprod{\mat{w}(\mat{z}_t)}{\mat{x}_t'}, y_t),
\end{equation}
where it is implicit that $\mat{w}_t$ also depends on $\mat{z}_t$ and $\mathcal{W}$ now
consists of corruption-dependent hypotheses.
In the most general case, we may consider $\mathcal{W}$ as the class of all
functions which map $\set{0,1}^d \to \mathbb{R}^d$, however we show this can
lead to an intractable learning problem. This motivates the
study of specific interesting subsets of this most general function
class. This is the main focus of Section \ref{sec:online}.
\subsection{Batch learning with missing features}
\label{sec:setup-batch}
In the setup of batch learning with i.i.d.\ data, examples $(\mat{x}_t,
\mat{z}_t, y_t)$ are drawn according to a fixed but unknown distribution
and the goal is to choose a hypothesis that minimizes the expected
error, with respect to an appropriate loss function $\ell$: $\mat{E}_{\mat{x}_t,
\mat{z}_t, y_t}[\ell(h(\mat{x}_t, \mat{z}_t), y_t)]$.
The hypotheses $h$ we consider in this scenario will be inspired by
imputation-based methods prevalent in statistics literature used to
address the problem of missing features~\cite{little_rubin}. An
imputation mapping is a function used to fill in unobserved features
using the observed features, after which the \emph{completed} examples
can be used for prediction. In particular, if we consider an
imputation function ${\boldsymbol \phi}: \mathbb{R}^d \times \set{0,1}^d \to \mathbb{R}^d$,
which is meant to fill missing feature values, and a linear predictor
$\mat{w} \in \mathbb{R}^d$, we can parameterize a hypothesis with these two
function $h_{{\boldsymbol \phi}, \mat{w}}(\mat{x}'_t, \mat{z}_t) = \dprod{\mat{w}}{{\boldsymbol \phi}(\mat{x}'_t,
\mat{z}_t)}$.
It is clear that the multiplicative interaction between $\mat{w}$ and
${\boldsymbol \phi}$ will make most natural formulations non-convex, and we
elaborate more on this in Section~\ref{sec:batch}. In the
i.i.d.\ setting, the natural quantity of interest is the generalization
error of our learned hypothesis. We provide a Rademacher complexity
bound on the class of $\mat{w},{\boldsymbol \phi}$ pairs we use, thereby showing
that any hypothesis with a small empirical error will also have a
small expected loss. The specific class of hypotheses and details of
the bound are presented in Section \ref{sec:batch}. Furthermore, the
reason as to why a imputation-based hypothesis class is not analyzed
in the more general adversarial setting will also be explained in
\mbox{that section.}
\section{Online Corruption-Based Algorithm}
\label{sec:online}
In this section, we consider the class of \emph{corruption-dependent}
hypotheses defined in Section~\ref{sec:setup-online}. Recall the
definition of regret~(\ref{eqn:onlineregret}), which we wish to
control in this framework, and of the comparator class of functions $\mathcal{W}
\subseteq \set{0,1}^d \to \mathbb{R}^d$. It is clear that the function
class $\mathcal{W}$ is much richer than the comparator class in the
corruption-free scenario, where the best linear predictor is fixed for
all rounds. It is natural to ask if it is even possible to prove a
non-trivial regret bound over this richer comparator class $\mathcal{W}$. In
fact, the first result of our paper provides a lower bound on the
minimax regret when the comparator is allowed to pick arbitrary
mappings, i.e.\ the set $\mathcal{W}$ contains all mappings.
\begin{proposition} \label{prop:lower-bound}
If $\mathcal{W} = \set{0,1}^d \to \mathbb{R}^d$ the minimax value of the corruption
dependent regret is lower bounded as follows,
\begin{equation*}
\inf_{\mat{w}_1 \in \mathcal{W}}\sup_{(\mat{x}_1,\mat{z}_1,y_1)}\cdots\inf_{\mat{w}_T \in \mathcal{W}}\sup_{(\mat{x}_T,\mat{z}_T,y_T)}
\!\!\! R_T = \Omega(2^{d/4}\sqrt{T}).
\end{equation*}
\end{proposition}
This proposition (the proof of which appears in the appendix) shows
that the minimax regret is lower bounded by a term that is exponential
in the dimensionality of the learning problem. The key idea of the
proof is that we can choose a number of distinct corruption patterns
$\{\mat{z}_1,\ldots,\mat{z}_K\}$ each associated with a completely independent
learning problem.
Then we show that the complexity of learning $K$ independent tasks
where we are presented with an example corresponding to one of the
tasks at every round is lower bounded by $\Omega(\sqrt{KT})$. The
result follows by showing that the number $K$ of independent patterns is
$\Omega(2^{d/2})$ in our framework. Thus, it will be difficult to
achieve a low regret against arbitrary maps from
$\{0,1\}^d$ to $\mathbb{R}^d$. In the following section we consider a
restricted function class and show that a mirror-descent
algorithm can achieve regret polynomial in $d$ and sub-linear in $T$,
implying that the average regret is vanishing.
\subsection{\mbox{Linear Corruption-Dependent Hypotheses}}
Here we analyze a corruption-dependent hypothesis class that is
parametrized by a matrix $\mat{A} \in \mathbb{R}^{d \times k}$, where $k$ may be
a function of $d$. In the simplest case of $k = d$, the
parametrization looks for weights $\mat{w}(\mat{z}_t)$ that depend linearly on
the corruption vector $\mat{z}_t$. Defining $\mat{w}_{\mat{A}}(\mat{z}_t) = \mat{A}\mat{z}_t$
achieves this, and intuitively this allows us to capture how the
presence or absence of one feature affects the weight of another
feature. This will be clarified further in the examples.
In general, the matrix $\mat{A}$ will be $d\times k$, where $k$ will be
determined by a function ${\boldsymbol \psi}(\mat{z}_t) \in \set{0,1}^k$ that maps
$\mat{z}_t$ to a possibly higher dimension space. Given, a fixed ${\boldsymbol \psi}$,
the explicit parameterization in terms of $\mat{A}$ is,
\begin{equation}
\mat{w}_{\mat{A}, {\boldsymbol \psi}}(\mat{z}_t) = \mat{A} {\boldsymbol \psi}(\mat{z}_t) \,.
\end{equation}
In what follows, we drop the subscript from $ \mat{w}_{\mat{A}, {\boldsymbol \psi}}$ in order
to simplify notation. Essentially this allows us to introduce
non-linearities as a function of the corruption vector, but the
non-linear transform is known and fixed throughout the learning
process. Before analyzing this setting, we give a few examples and
intuition as to why such a parametrization is useful. In each example,
we will show how there exists a choice of a matrix $\mat{A}$ that captures
the specific problem's assumptions. This implies that the fixed
comparator can use this choice in hindsight, and by having a low
regret, our algorithm would implicitly learn a hypothesis close to
this reasonable choice of $\mat{A}$.
\subsubsection{Corruption-free special case}
We start by noting that in the case of no corruption (i.e. $\forall
t, \mat{z}_t = \mat{1}$) a standard linear hypothesis model can be cast within
the matrix based framework by defining ${\boldsymbol \psi}(\mat{z}_t) = 1$ and learning
$\mat{A} \in \mathbb{R}^{d \times 1}$.
\subsubsection{Ranking-based parameterization}
One natural method for classification is to order the features by their
predictive power, and to weight features proportionally to their
ranking (in terms of absolute value; that is, the sign of weight
depends on whether the correlation with the label is positive or
negative). In the corrupted features setting, this naturally
corresponds to taking the available features at any round and putting
more weight on the most predictive observed features. This is
particularly important while using margin-based losses such as the
hinge loss, where we want the prediction to have the right sign and be
large enough in magnitude.
Our parametrization allows such a strategy when using a simple
function ${\boldsymbol \psi}(\mat{z}_t) = \mat{z}_t$. Without loss of generality, assume that
the features are arranged in decreasing order of discriminative power
(we can always rearrange rows and columns of $\mat{A}$ if they're not). We
also assume positive correlations of all features with the label; a more
elaborate construction works for $\mat{A}$ when they're not. In this case,
consider the parameter matrix and the induced classification weights
\begin{align*}
[\mat{A}]_{i,j} = \left\{ \hspace{-0.15cm}
\begin{array}{rl}
1, & \hspace{-0.15cm} j = i \\
-\frac{1}{d}, & \hspace{-0.15cm} j < i \\
0, & \hspace{-0.15cm} j > i
\end{array}
\right., \qquad [\mat{w}(\mat{z}_t)]_i = [\mat{z}_t]_i\biggr(1 -
\sum_{\substack{j < i :\\ [\mat{z}_t]_j = 1}} \frac{1}{d}\biggr).
\end{align*}
Thus, for all $i < j$ such that $[\mat{z}_t]_i = [\mat{z}_t]_j = 1$ we have
$[\mat{w}(\mat{z}_t)]_i \geq [\mat{w}(\mat{z}_t)]_j$. The choice of 1 for diagonals and
$1/d$ for off-diagonals is arbitrary and other values might also be
picked based on the data sequence $(\mat{x}_t,\mat{z}_t,y_t)$. In general,
features are weighted monotonically with respect to their
discriminative power with signs based on correlations with the label.
\subsubsection{Feature group based parameterization}
Another class of hypotheses that we can define within this framework
are those restricted to consider up to $p$-wise interactions between
features for some constant $0 < p \leq d$. In this case, we index the
$k = \sum_{i=1}^p \binom{d}{i} = O\big((\frac{d}{p})^p\big)$ unique
subsets of features of size up to $p$. Then define $[{\boldsymbol \psi}(\mat{z}_t)]_j =
1$ if the corresponding subset $j$ is uncorrupted by $\mat{z}_t$ and equal
to $0$ otherwise. An entry $[\mat{A}]_{i,j}$ now specifies the importance of
feature $j$, assuming that at least the subset $i$ is present. Such a
model would, for example, have the ability to capture the scenario of
a feature that is only discriminative in the presence of some $p-1$
other features. For example, we can generalize the ranking example
from above to impose a soft ranking on groups of features.
\subsubsection{Known corruption patterns}
In certain cases, only a constant number of corruption patterns may be
possible $\set{\mat{p}_1,\ldots,\mat{p}_k} \subset \set{0,1}^d$. Furthermore,
these patterns may be known ahead of time due to the nature of the
corruption process or may be observed using unlabelled data
\cite{little_rubin,marlin}. In such a case, define ${\boldsymbol \psi}(\mat{p}_i) =
\mat{e}_i$, where $\mat{e}_i$ is the $i$th standard basis vector, allowing one
weight vector per pattern.
\subsubsection{Corruption due to failed sensors}
\label{sec:failed_sensors}
A common scenario for missing features arises in applications
involving an array of measurements, for example, from a sensor network,
wireless motes, array of cameras or CCDs, where each sensor is bound to fail
occasionally. The typical strategy for dealing with such situations
involves the use of redundancy. For instance, if a sensor fails, then
some kind of an averaged measurement from the neighboring sensors
might provide a reasonable surrogate for the missing value.
It is possible to design a choice of $\mat{A}$ matrix for the comparator
that only uses the local measurement when it is present, but uses an
averaged approximation based on some fixed averaging distribution on
neighboring features when the local measurement is missing. For each
feature, we consider a probability distribution $p_i$ which specifies the
averaging weights to be used when approximating feature $i$ using
neighboring observations. Let $\mat{w}^*$ be the weight vector that the
comparator would like to use if all the features were present. Then,
with ${\boldsymbol \psi}(\mat{z}) =\mat{z}$ and for $j \neq i$ we define,
\begin{equation} \label{eqn:Amatrixlocal}
[\mat{A}]_{i,i} = \mat{w}^*_i + \sum_{j\ne i}\mat{w}^*_jp_{ji},\quad [\mat{A}]_{i,j} =
-\mat{w}^*_jp_{ji}.
\end{equation}
Thus, say only feature $k$ is missing, we still
have
\begin{equation*}
{\mat{x}'}_t^\top \mat{A} \mat{z}_t = \sum_{i,j} [\mat{x}'_t]_i [\mat{z}_t]_j [\mat{A}]_{i,j} =
\sum_{i\neq k,j\neq k} [\mat{x}_t]_i [\mat{A}]_{i,j} = \sum_{i\neq k} [\mat{x}_t]_i
[\mat{w}^*]_i + [\mat{w}^*]_k \sum_{i \neq k} [\mat{x}_t]_i p_{ki},
\end{equation*}
where by assumption
$\sum_{i \neq k} [\mat{x}_t]_i p_{ki} \approx [\mat{x}_t]_k$.
Of course, the averaging in such applications is typically local, and
we expect each sensor to put large weights only on neighboring
sensors. This can be specified via a neighborhood graph, where nodes
$i$ and $j$ have an edge if $j$ is used to predict $i$ when feature
$i$ is not observed and vice versa. From the
construction~\eqref{eqn:Amatrixlocal} it is clear that the only
off-diagonal entries that are non-zero would corresponding to the
edges in the neighborhood graph. Thus we can even add this
information to our algorithm and constrain several off-diagonal
elements to be zero, thereby restricting the complexity of the
problem.
\subsection{Matrix-Based Algorithm and Regret}
\label{sec:matrix-alg}
We use a standard mirror-descent style algorithm
algorithm~\cite{yudin83book, beck2003mirror} in the matrix based
parametrization described above. It is characterized by a
strongly convex regularizer $\mathcal{R}~:~\mathbb{R}^{d \times k}\to\mathbb{R}$, that is
\vspace{-0.2cm}
\begin{equation*}
\mathcal{R}(\mat{A}) \geq \mathcal{R}(\mat{B}) + \dprod{\nabla\mathcal{R}(\mat{B})}{\mat{A} - \mat{B}}_F + \frac{1}{2}\|\mat{A}
- \mat{B}\|^2~~\forall\mat{A}, \mat{B} \! \in \! \mathcal{A},
\vspace{-0.2cm}
\end{equation*}
\noindent for some norm $\|\cdot\|$ and where $\dprod{\mat{A}}{\mat{B}}_F = \mathrm{Tr}(\mat{A}^\top\mat{B})$ is the trace
inner product. An example is the squared Frobenius norm
$\mathcal{R}(\mat{A}) = \frac{1}{2}\|\mat{A}\|_F^2$. For any such function, we can
define the associated Bregman divergence
\begin{align*}
D_{\mathcal{R}}(\mat{A},\mat{B}) = \mathcal{R}(\mat{A}) - \mathcal{R}(\mat{B}) - \dprod{\nabla\mathcal{R}(\mat{B})}{\mat{A} -
\mat{B}}_F .
\end{align*}
We assume $\mathcal{A}$ is a
convex subset of $\mathbb{R}^{d\times k}$, which could encode
constraints such as some off-diagonal entries being zero in the setup
of Section~\ref{sec:failed_sensors}. To simplify presentation in what
follows, we will use the shorthand $\ell_t(\mat{A}) = \ell(\dprod{\mat{A}
{\boldsymbol \psi}(\mat{z}_t)}{\mat{x}'_t}, y_t)$. The algorithm initializes with any $\mat{A}_0 \in
\mathcal{A}$ and updates
\begin{equation}
\mat{A}_{t+1} \!\!=\! \arg\min_{\mat{A} \in
\mathcal{A}}\left\{\eta_t\dprod{\nabla\ell_t(\mat{A}_t)}{\mat{A}}_F
\!+\! D_{\mathcal{R}}(\mat{A}, \mat{A}_t)\right\}
\label{mirror_descent}
\end{equation}
If $\mathcal{A} = \mathbb{R}^{d\times k}$ and $\mathcal{R}(\mat{A}) =
\frac{1}{2}\|\mat{A}\|_F^2$, the update simplifies to gradient descent
$\mat{A}_{t+1} = \mat{A}_t - \eta_t\nabla\ell_t(\mat{A}_t)$.
Our main result of this section is a guarantee on the regret incurred
by Algorithm~\ref{mirror_descent}. The proof follows from standard
arguments (see e.g.~\cite{zinkevich,
yudin83book,CesaBianchiLugosi06book}). Below, the dual norm is defined
as $\|\mat{V}\|_* = \sup_{\mat{U} : \|\mat{U}\| \leq 1} \dprod{\mat{U}}{\mat{V}}_F$.
\begin{theorem}
Let $\mathcal{R}$ be strongly convex with respect to a norm
$\|\cdot\|$ and $\|\nabla \ell_t(\mat{A})\|_* \leq G$, then
Algorithm \ref{mirror_descent} with learning rate
$\eta_t = \frac{R}{G \sqrt{T}}$ exhibits the following regret
upper bound compared to any $\mat{A}$ with $\|\mat{A}\|\leq R$,
\begin{equation*}
\sum_{t=1}^T \! \ell(\dprod{\mat{A}_t \mat{z}_t}{\mat{x}_t'}, y_t) -
\!\! \inf_{\mat{A} \in \mathcal{A}} \sum_{t=1}^T \! \ell(\dprod{\mat{A} \mat{z}_t}{\mat{x}_t'}, y_t)
\leq 3RG\sqrt{T}.
\end{equation*}
\label{thm:mmdbound}
\end{theorem}
Furthermore, we can use standard techniques to show that the
averaged matrix $\widehat{\mat{A}}_T = \frac{1}{T}\sum_{t=1}^T\mat{A}_t$ has a
$\ensuremath{\mathcal{O}}\left(\frac{1}{\sqrt{T}}\right)$ generalization error in a
scenario where the data $(\mat{x}_t, \mat{z}_t, y_t)$ is
i.i.d.~\cite{CesaBianchiCoGe02}. In Section~\ref{sec:empirical}, we
further support these theoretical results empirically.
\section{Batch Imputation Based Algorithm}
\label{sec:batch}
Recalling the setup of Section~\ref{sec:setup-batch}, in this section
we look at imputation mappings of the form
\begin{equation}
{\boldsymbol \phi}_{\mat{M}}(\mat{x}', \mat{z}) = \mat{x}' + \diag(1-\mat{z})\mat{M}^\top\mat{x}'\,.
\label{eqn:linearimp}
\end{equation}
Thus we retain all the observed entries in the vector $\mat{x}'$, but for
the missing features that are predicted using a linear combination
of the observed features and where the $i_{th}$ column of $\mat{M}$ encodes the
averaging weights for the $i_{th}$ feature. Such a linear prediction
framework for features is natural. For instance, when the data vectors
$\mat{x}$ are Gaussian, the conditional expectation of any feature given
the other features is a linear function. The predictions are now
made using the dot product
\begin{equation*}
\dprod{\mat{w}}{{\boldsymbol \phi}(\mat{x}',\mat{z})} = \dprod{\mat{w}}{\mat{x}'} +
\dprod{\mat{w}}{\diag(1-\mat{z})\mat{M}^\top\mat{x}'},
\end{equation*}
where we would like to estimate $\mat{w}, \mat{M}$ based on the data
samples. From a quick inspection of the resulting learning problem,
it becomes clear that optimizing over such a hypothesis class
leads to a non-convex problem. The convexity of the loss plays a
critical role in the regret framework of online learning, which is why
we restrict ourselves to a batch i.i.d.\ setting here.
In the sequel we will provide a convex relaxation to the learning
problem resulting from the
parametrization~\eqref{eqn:linearimp}. While we can make this
relaxation for natural loss functions in both classification and
regression scenarios, we restrict ourselves to a linear regression
setting here as the presentation for that example is simpler due to the
existence of a closed form solution for the ridge regression
problem.
In what follows, we consider only the corrupted data and thus simply
denote corrupted examples as $\mat{x}_i$. Let $\mat{X}$ denote the matrix with
$i_{th}$ row equal to $\mat{x}_i$ and similarly define $\mat{Z}$ as the matrix
with $i_{th}$ row equal to $\mat{z}_i$. It will also be useful to define
$\overline{\mat{Z}} = \mat{1}\1^\top - \mat{Z}$ and $\overline{\mat{z}}_i = \mat{1} - \mat{z}_i$ and finally let $\overline{\mat{Z}}_i
= \diag(\overline{\mat{z}}_i)$.
\subsection{Imputed Ridge Regression (IRR)}
\label{sec:imputation-alg}
In this section we will consider a modified version of the
ridge regression (RR) algorithm, robust to missing features. The
overall optimization problem we are interested in is as follows,
\begin{align}
\hspace{-0.28cm} \min_{\{\mat{w},\mat{M}:\|\mat{M}\|_F \leq \gamma\}} \!
\frac{\lambda}{2} \| \mat{w} \|^2 \!+\! \frac{1}{T} \sum_{i=1}^T \! \big(y_i \!-\!
\mat{w}^\top \!(\mat{x}_i \!+\! \overline{\mat{Z}}_i \mat{M}^\top\mat{x}_i)\big)^2
\label{irr_primal}
\end{align}
where the hypothesis $\mat{w}$ and imputation matrix $\mat{M}$ are
simultaneously optimized. In order to bound the size of the
hypothesis set, we have introduced the constraint $\|\mat{M}\|_F^2 \leq
\gamma^2$ that bounds the Frobenius norm of the imputation matrix.
The global optimum of the problem as presented in (\ref{irr_primal})
cannot be easily found as it is not jointly convex in both $\mat{w}$ and
$\mat{M}$. We next present a convex relaxation of the
formulation~\eqref{irr_primal}. The key idea is to take a dual over
$\mat{w}$ but not $\mat{M}$, so that we have a saddle-point problem in the dual
vector ${\boldsymbol \alpha}$ and $\mat{M}$. The resulting saddle point problem, while
being concave in ${\boldsymbol \alpha}$ is still not convex in $\mat{M}$. At this step we
introduce a new tensor $\mat{N} \in \mathbb{R}^{d\times d\times d}$, where
$[\mat{N}]_{i,j,k} = [\mat{M}]_{i,k}[\mat{M}]_{j,k}$. Finally we drop the non-convex
constraint relating $\mat{M}$ and $\mat{N}$ replacing it with a matrix positive
semidefiniteness constraint.
Before we can describe the convex relaxation, we need one more piece
of notation. Given a matrix $\mat{M}$ and a tensor $\mat{N}$, we define the
matrix $\mat{K}_{\mat{M}\mat{N}} \in \mathbb{R}^{T\times T}$
\begin{equation}
[\mat{K}_{\mat{M}\mat{N}}]_{i,j} = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j \\
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_j]_k \mat{x}_i^\top \mat{N}_k
\mat{x}_j \,.
\label{eqn:relaxedkernel}
\end{equation}
The following proposition gives the convex relaxation of the
problem~\eqref{irr_primal} that we refer to as Imputed Ridge Regression
(IRR) and which includes a strictly larger hypothesis than the $(\mat{w},
\mat{M})$ pairs with which we began.
\begin{proposition}
\label{prop:irr_relaxed}
The following semi-definite programming optimization problem provides
a convex relaxation to the non-convex problem (\ref{irr_primal}):
\begin{align}
\label{irr_relaxed}
& \min_{\substack{\mat{M}:\|\mat{M}\|^2_F\leq\gamma^2 \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
t \\
& \mathrm{s.t.}
~~ \left[
\begin{array}{cc}
\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I} & \mat{y} \\
\mat{y}^\top & t
\end{array}
\right] \succeq 0, ~~
\mat{K}_{\mat{M}\mat{N}} \succeq 0 \nonumber \,.
\end{align}
\label{prop:irr_relaxation}
\end{proposition}
The proof is deferred to the appendix for lack of space. The main idea
is to take the quadratic form that arises in the dual
formulation of~\eqref{irr_primal} with the matrix $\mat{K}_\mat{M}$,
\begin{equation*}
[\mat{K}_{\mat{M}}]_{i,j} = \mat{x}_i^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_j
+ \mat{x}_i^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j,
\end{equation*}
\noindent and relax it to the matrix $\mat{K}_{\mat{M}\mat{N}}$~\eqref{eqn:relaxedkernel}. The
constraint involving positive semidefiniteness of $\mat{K}_{\mat{M}\mat{N}}$ is
needed to ensure the convexity of the relaxed problem. The norm constraint
on $\mat{N}$ is a consequence of the norm constraint on $\mat{M}$.
One tricky issue with relaxations is using the relaxed solution
in order to find a good solution to the original
problem. In our case, this would correspond to finding a good $\mat{w}, \mat{M}$
pair for the primal problem~\eqref{irr_primal}. We bypass this step,
and instead directly define the prediction on any point $(\mat{x}_0,\mat{z}_0)$ as:
\begin{equation}
\sum_{i=1}^T \alpha_i( \mat{x}_i^\top \mat{x}_0+ \mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0 \\
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0).
\label{eqn:dualpredictor}
\end{equation}
Here, ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ are solutions to the saddle-point problem
\begin{align}
\label{eqn:irr_saddlepoint}
\min_{\substack{\mat{M}:\|\mat{M}\|_F\leq\gamma \\ \mat{N}: \sum_k \|\mat{N}_k\|_F^2 \leq \gamma^4}}
\max_{\boldsymbol \alpha}
2 {\boldsymbol \alpha}^\top \mat{y} -
{\boldsymbol \alpha}^\top (\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I}) {\boldsymbol \alpha} \,.
\end{align}
We start by noting that the above optimization problem is equivalent
to the one in Proposition~\ref{prop:irr_relaxation}. The intuition
behind this definition~\eqref{eqn:dualpredictor} is that the solution to the
problem~\eqref{irr_primal} has this form, with $[\mat{N}]_{i,j,k}$ replaced
with $[\mat{M}]_{i,k}[\mat{M}]_{j,k}$. In the next section, we show a Rademacher
complexity bound over functions of the form above to justify our
convex relaxation.
\subsection{Theoretical analysis of IRR}
As mentioned in the previous section, we predict with a hypothesis of
the form~\eqref{eqn:dualpredictor} rather than going back to the primal
class indexed by $(\mat{w}, \mat{M})$ pairs. In this section, we would like to
show that the new hypothesis class parametrized by ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ is
not too rich for the purposes of learning. To do this, we give the
class of all possible hypotheses that can be the solutions to the dual
problem~\eqref{irr_relaxed} and then prove a Rademacher complexity
bound over that class.
The set of all possible ${\boldsymbol \alpha}, \mat{M}, \mat{N}$ triples that can be potential
solutions to~\eqref{irr_relaxed} lie in the following set
\begin{multline*}
\mathcal{H} = \Bigg\{ h(\mat{x}_0, \mat{z}_0) \mapsto \sum_{i=1}^T \alpha_i(
\mat{x}_i^\top \mat{x}_0
+
\mat{x}_i^\top \mat{M} \overline{\mat{Z}}_i \mat{x}_0
+ \mat{x}_i^\top \overline{\mat{Z}}_0 \mat{M}^\top \mat{x}_0
+ \sum_{k=1}^d [\overline{\mat{z}}_i]_k [\overline{\mat{z}}_0]_k \mat{x}_i^\top \mat{N}_k \mat{x}_0)
\\ : \|\mat{M}\|_F \leq \gamma, \|\mat{N}\|_F \leq \gamma^2,
\|{\boldsymbol \alpha}\| \leq \frac{B}{\lambda \sqrt{T}}
\Bigg\}
\end{multline*}
The bound on $\|{\boldsymbol \alpha}\|$ is made implicitly in the optimization
problem (assuming the training labels are bounded $\forall i, |y_i|
\leq B$). To see this, we note that the problem~\eqref{irr_relaxed}
is obtained from~\eqref{eqn:irr_saddlepoint} by using the closed-form
solution of the optimal ${\boldsymbol \alpha} = (\mat{K}_{\mat{M}\mat{N}} + \lambda T \mat{I})^{-1}
\mat{y}$. Then we can bound $\|{\boldsymbol \alpha}\| \leq \|\mat{y}\| / \lambda_{\min}(\mat{K}_\mat{M}
+ \lambda T \mat{I}) = \frac{B \sqrt{T}}{ \lambda T}$, where
$\lambda_{\min}(\mat{A})$ denotes the smallest eigenvalue of the matrix
$\mat{A}$. Note that in general there is no linear hypothesis $\mat{w}$ that
corresponds to the hypotheses in the relaxed class $\mathcal{H}$ and that we
are dealing with a strictly more general function class. However, the
following theorem demonstrates that the Rademacher complexity of this
function class is reasonably bounded in terms of the number of
training points $T$ and dimension $d$ and thereby still provides
provable generalization performance \cite{bm_rademacher}.
Recall the Rademacher complexity of a class $\mathcal{H}$
\begin{equation}
\ensuremath{\mathfrak{R}}_T(\mathcal{H}) = \mat{E}_S\mat{E}_{{\boldsymbol \sigma}} \left[ \frac{1}{T} \sup_{h \in \mathcal{H}} \bigg|
\sum_{i=1}^T \sigma_i h(\mat{x}_i,\mat{z}_i) \bigg| \right] \,,
\end{equation} where the inner expectation is over independent Rademacher
random variables $(\sigma_1,\ldots,\sigma_T)$ and the outer one over
a sample $S = ((\mat{x}_1,\mat{z}_1),\ldots,(\mat{x}_T,\mat{z}_T))$.
\begin{theorem}
\label{thm:rademacher}
If we assume a bounded regression problem $\forall y, ~|y| \leq B$
and $\forall \mat{x},~ \|\mat{x}\| \leq R$, then the Rademacher complexity of the
hypothesis set $\mathcal{H}$ is bounded as follows,
\begin{equation*}
\ensuremath{\mathfrak{R}}_T(\mathcal{H}) \leq \big(1 + \gamma + (\gamma + \gamma^2) d \big) \frac{BR^2
}{\lambda \sqrt{T}} = O\Big(\frac{d}{\sqrt{T}}\Big) \,.
\end{equation*}
\end{theorem}
Due to space constraints, we only provide a brief proof sketch here
with details deferred to the appendix. Our hypothesis class $\mathcal{H}$
consists of four terms that contribute to the predictions on any
point, coming from the four terms in the definition of
$\mat{K}_{\mat{M}\mat{N}}$~\eqref{eqn:relaxedkernel}. We bound the contributions from
each of these terms separately using the next lemma.
\begin{lemma}
\label{lem:term_bounds}
Under the same conditions as stated in Theorem~\ref{thm:rademacher},
the following bounds hold:
\begin{align*}
& \mat{E}_{\boldsymbol \sigma} \bigg[ \sup_{{\boldsymbol \alpha}}
\Big| \sum_{i,j=1}^T \sigma_i \alpha_j {\mat{x}'_i}^\top \mat{x}_j
\Big| \bigg] \leq \frac{BR^2T^{\frac{1}{2}}}{\lambda} \\
& \mat{E}_{\boldsymbol \sigma} \bigg[ \sup_{{\boldsymbol \alpha},\mat{M}} \Big| \sum_{i,j=1}^T
\sigma_i \alpha_j {\mat{x}'_i}^\top \overline{\mat{Z}}_j \mat{M}^\top \mat{x}_j
\Big| \bigg] \leq \frac{\gamma BR^2 T^{\frac{1}{2}}}{\lambda} \\
& \mat{E}_{\boldsymbol \sigma} \bigg[ \sup_{{\boldsymbol \alpha},\mat{M}} \Big| \sum_{i,j=1}^T
\sigma_i \alpha_j {\mat{x}'_i}^\top \mat{M} \overline{\mat{Z}}'_i \mat{x}_j
\Big| \bigg] \leq \frac{\gamma BR^2 \sqrt{d T}}{\lambda} \\
& \mat{E}_{\boldsymbol \sigma} \bigg[ \! \sup_{{\boldsymbol \alpha},\mat{N}} \Big|\!\! \sum_{i,j=1}^T
\!\! \sigma_i \alpha_j \!\! \sum_{k=1}^d [\overline{\mat{z}}'_i]_k [\overline{\mat{z}}_j]_k
{\mat{x}'}_i^\top \! \mat{N}_k
\mat{x}_j
\Big| \bigg] \!\! \leq \!\! \frac{\gamma^2 BR^2 \sqrt{d T}}{\lambda}.
\end{align*}
\end{lemma}
\noindent Using the lemma, the proof of the theorem is
straight-forward, and we can easily trace the four terms in the bound
to the terms coming from Lemma~\ref{lem:term_bounds}.
Theorem~\ref{thm:rademacher} allows us to control the gap between
empirical and expected risks using standard Rademacher
complexity results. Theorem 8 of~\cite{bm_rademacher}, immediately
provides the following corollary.
\begin{corollary} \label{corr:unif_dev}
Under the conditions of Theorem 2, for any $0 < \delta \leq 1$, with
probability at least $1-\delta$ over samples of size $T$, every $h
\in \mathcal{H}$ satisfies
\begin{align*}
\mat{E}[(y &- h(\mat{x}',\mat{z}))^2] \leq \frac{1}{T}\sum_{t=1}^T(y_t -
h(\mat{x}'_t,\mat{z}_t))^2 + \frac{BR^2(1+\gamma)^2}{\lambda}
\left(\frac{BR^2d(1+\gamma)^2 }{\lambda \sqrt{T}}
+ \sqrt{\frac{8\ln(2/\delta)}{T}}\right).
\end{align*}
\end{corollary}
\ignore{
\subsection{TODO}
\begin{itemize}
\item Can we parametrize $M$ matrix in terms of a covariance matrix
(and corruption vector)?
\item Can we show an online analysis?
\item Can we show a lower bound on the Rademacher complexity of the
non-relaxed hypothesis class $\mathcal{H}$ that is close to the upper bound of
the relaxed class $\mathcal{G}$?
\end{itemize}
}
\section{Empirical Results}
\label{sec:empirical}
This section presents empirical evaluation of the online matrix-based
algorithm~\ref{mirror_descent}, as well as the Imputed Ridge
Regression algorithm of Section~\ref{sec:imputation-alg}.
We use the baseline methods \emph{zero-imputation} and \emph{mean-imputation},
where missing entries are replaced with zeros or with the mean estimated
from the observed values of the corresponding feature, respectively. Once the
data is imputed, a standard online gradient descent algorithm or
ridge-regression algorithm is used. As reference, we also show
the performance of a standard algorithm on uncorrupted data. The
algorithms are evaluated on several UCI repository datasets,
summarized in Table~\ref{table:data}.
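For concreteness, the two imputation baselines can be sketched as follows; this is a minimal illustration rather than the exact experimental code, and all names are ours.
\begin{verbatim}
import numpy as np

def impute(X, Z, method="zero"):
    """Fill unobserved entries of X, where Z[i, j] = 1 iff feature j
    of example i is observed (Z[i, j] = 0 marks a missing value)."""
    X_imp = X * Z  # zero-imputation: missing entries become 0
    if method == "mean":
        # Mean of each feature over the examples where it was observed.
        counts = Z.sum(axis=0)
        sums = (X * Z).sum(axis=0)
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        X_imp = X_imp + (1 - Z) * means
    return X_imp
\end{verbatim}
After imputation, the standard learner (online gradient descent or ridge regression) is run unchanged on the filled-in data.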
The {\tt thyroid} dataset includes naturally
corrupted/missing data. The {\tt optdigits} dataset is subjected to
artificial corruption by deleting a column of pixels, chosen uniformly
at random from the 3 central columns of the image (each image contains
8 columns of pixels total). The remainder of the datasets are subjected to two
types of artificial corruption: \emph{data-independent} or
\emph{data-dependent} corruption. In the first case, each feature is
deleted independently at random, while in the latter case features are
deleted by thresholding their values.
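The two corruption processes can be sketched as follows; the deletion probability and thresholds below are placeholders, since the exact values used in our experiments are specified in the appendix.
\begin{verbatim}
import numpy as np

def corrupt_independent(X, p=0.4, rng=np.random):
    """Data-independent: delete each feature independently w.p. p."""
    Z = (rng.random_sample(X.shape) > p).astype(float)
    return X * Z, Z

def corrupt_dependent(X, thresholds):
    """Data-dependent: delete feature j wherever its value crosses
    thresholds[j] (the direction of the comparison is illustrative)."""
    Z = (X <= thresholds).astype(float)
    return X * Z, Z
\end{verbatim}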
\begin{table}
\begin{center}
\begin{tabular}{l|llcc}
dataset & $m$ & $d$ & $F_I$ & $F_D$ \\
\hline
\hline
{\tt abalone} & 4177 & 7 & $.62 \pm .08$ & $.57 \pm .07$ \\
{\tt housing} & 20640 & 8 & $.64 \pm .08$ & $.68 \pm .20$ \\
{\tt optdigits} & 5620 & 64 & $.88 \pm .00$ & $.88 \pm .00$ \\
{\tt park} & 3000 & 20 & $.58 \pm .06$ & $.60 \pm .07$ \\
{\tt thyroid} & 3163 & 5 & $.77 \pm .00$ & $.77 \pm .00$ \\
{\tt splice} & 1000 & 60 & $.63 \pm .01$ & $.66 \pm .03$ \\
{\tt wine} & 6497 & 11 & $.63 \pm .10$ & $.68 \pm .12$ \\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{
Size of dataset ($m$), number of features ($d$), and the overall fraction of
remaining features in the training set after data-independent ($F_I$)
or data-dependent ($F_D$) corruption.}
\label{table:data}
\vspace{-0.5cm}
\end{table}
We report average error and standard deviations over 5 trials, using
1000 random training examples and corruption patterns. We tune
hyper-parameters using a grid search from $2^{-12}$ to $2^{10}$.
Further details and explicit corruption processes appear in the
appendix.
\subsection{\mbox{Online Corruption Dependent Hypothesis}}
Here we analyze the online algorithm presented in
Section~\ref{sec:matrix-alg} using two different types of regularization. The
first method simply penalizes the Frobenius norm of the parameter matrix
$\mat{A}$ (frob-reg), $\mathcal{R}(\mat{A}) = \| \mat{A} \|_F^2$.
The second method (sparse-reg) forces a sparse solution by
constraining many entries of the parameter matrix to be zero, as
mentioned in Section~\ref{sec:failed_sensors}. We use the regularizer
$\mathcal{R}(\mat{A}) = \gamma \|\mat{A} \mat{1}\|^2 + \|\mat{A}\|_F^2$, where $\gamma$ is an
additional tunable parameter. This choice of regularization is based
on the example given in equation (\ref{eqn:Amatrixlocal}), where we
would have $\|\mat{A}\mat{1}\| = \|\mat{w}^*\|$.
We apply these methods to the {\tt splice} classification task and the
{\tt optdigits} dataset in several one vs.\ all classification tasks.
For {\tt splice}, the sparsity pattern used by the sparse-reg method
is chosen by constraining to zero those entries $[\mat{A}]_{i,j}$ for which
features $i$ and $j$ have a correlation coefficient less than 0.2, as measured on
the corrupted training sample. In the case of {\tt optdigits}, only
entries corresponding to neighboring pixels are allowed to be
non-zero.
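A minimal sketch of the sparse-reg construction (the correlation-based sparsity mask for {\tt splice}, together with the regularizer itself; all names are ours):
\begin{verbatim}
import numpy as np

def sparsity_mask(X_corrupted, cutoff=0.2):
    """Allow [A]_{i,j} to be non-zero only when features i and j have
    correlation coefficient at least `cutoff` on the corrupted sample."""
    C = np.corrcoef(X_corrupted, rowvar=False)  # columns = features
    return (C >= cutoff).astype(float)

def sparse_reg(A, gamma):
    """R(A) = gamma * ||A 1||^2 + ||A||_F^2."""
    v = A @ np.ones(A.shape[1])
    return gamma * v.dot(v) + np.linalg.norm(A, "fro") ** 2
\end{verbatim}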
Figure \ref{fig:plots} shows that, when subject to data-independent
corruption, the zero imputation, mean imputation and frob-reg methods
all perform relatively poorly while the sparse-reg method provides
significant improvement for the {\tt splice} dataset.
Furthermore, we find data-dependent corruption is quite harmful to
mean imputation as might be expected, while both frob-reg and
sparse-reg still provide significant improvement over zero-imputation.
More surprisingly, these methods also perform better than training on
uncorrupted data. We attribute this to the fact that we are using a
richer hypothesis function that is parametrized by the corruption
vector while the standard algorithm uses only a fixed hypothesis. In
Table \ref{table:online-digits} we see that the sparse-reg method performs at
least as well as both zero and mean imputation in all tasks and
offers significant improvement in the 3-vs-all and 6-vs-all tasks. In
this case, the frob-reg method performs comparably to sparse-reg and
is omitted from the table due to space.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.46\columnwidth]{splice_online_indepcorr_small.eps} &
\hspace{-0.28cm}
\includegraphics[width=0.46\columnwidth]{splice_online_dependcorr_small.eps} \\
\includegraphics[width=0.46\columnwidth]{abalone_series_indepcorr.eps} &
\hspace{-0.28cm}
\includegraphics[width=0.46\columnwidth]{abalone_series.eps}
\vspace{-0.35cm}
\end{tabular}
\end{center}
\vspace{-0.25cm}
\caption{
0/1 loss as a function of $T$ for {\tt splice} dataset with
independent (top left) and dependent corruption (top right). RMSE on
{\tt abalone} across varying amounts of independent (bottom left) and
dependent corruption (bottom right); fraction of features remaining
indicated on x-axis.}
\label{fig:plots}
\vspace{-0.2cm}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{l|cccc}
& \hspace{-0.15cm} zero-imp & \hspace{-0.28cm} mean-imp & \hspace{-0.28cm} sparse-reg & \hspace{-0.28cm} no corr \\
\hline
\hline
{2}
& \hspace{-0.15cm} $.035 \pm .002$ & \hspace{-0.28cm} $.039 \pm .004$ & \hspace{-0.28cm} $.033
\pm .003$ & \hspace{-0.28cm} $.024 \pm .002$ \\
\hline
{3}
& \hspace{-0.15cm} $.041 \pm .002$ & \hspace{-0.28cm} $.043 \pm .001$ & \hspace{-0.28cm} $.039
\pm .002$ & \hspace{-0.28cm} $.027 \pm .003$ \\
\hline
{4}
& \hspace{-0.15cm} $.020 \pm .002$ & \hspace{-0.28cm} $.023 \pm .002$ & \hspace{-0.28cm} $.021
\pm .001$ & \hspace{-0.28cm} $.015 \pm .001$ \\
\hline
{6}
& \hspace{-0.15cm} $.026 \pm .002$ & \hspace{-0.28cm} $.024 \pm .002$ & \hspace{-0.28cm} $.023
\pm .002$ & \hspace{-0.28cm} $.015 \pm .002$
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{
One-vs-all classification results on \texttt{optdigits} dataset
(target digit in first column) with column-based
corruption for 0/1 loss.}
\label{table:online-digits}
\vspace{-0.5cm}
\end{table}
\subsection{Imputed Ridge Regression}
In this section we consider the performance of IRR across
many datasets. We found standard SDP solvers to be quite slow for
problem~(\ref{irr_relaxed}). We instead use a semi-infinite
linear program (SILP) to find an approximately optimal
solution (see e.g.~\cite{largeMKL, SDP-SILP} for details).
In Table \ref{table:imputed-results} we compare the performance of the
IRR algorithm to zero and mean imputation as well as to standard ridge
regression performance on the uncorrupted data. Here we see that IRR
improves over zero-imputation in all cases. It performs at least as well
as mean-imputation under data-independent corruption, and it continues
to perform well under data-dependent corruption, where mean-imputation
suffers. Figure
\ref{fig:plots} shows more detailed results for the {\tt abalone}
dataset across different levels of corruption and displays the
consistent improvement which the IRR algorithm provides.
\begin{table}
\begin{center}
\begin{tabular}{l|cccc}
& zero-imp & mean-imp & IRR & no corr \\
\hline
\hline
A & \hspace{-0.15cm} $.199 \pm .004$ & \hspace{-0.28cm} $.187 \pm .003$ & \hspace{-0.28cm} $.183 \pm .002$ & \hspace{-0.28cm} $.158 \pm .002$ \\
H & \hspace{-0.15cm} $.414 \pm .025$ & \hspace{-0.28cm} $.370 \pm .019$ & \hspace{-0.28cm} $.373 \pm .019$ & \hspace{-0.28cm} $.288 \pm .001$ \\
P & \hspace{-0.15cm} $.457 \pm .006$ & \hspace{-0.28cm} $.445 \pm .004$ & \hspace{-0.28cm} $.451 \pm .004$ & \hspace{-0.28cm} $.422 \pm .004$ \\
W & \hspace{-0.15cm} $.280 \pm .006$ & \hspace{-0.28cm} $.268 \pm .009$ & \hspace{-0.28cm} $.269 \pm .008$ & \hspace{-0.28cm} $.246 \pm .001$ \\
\hline
\hline
A & \hspace{-0.15cm} $.183 \pm .007$ & \hspace{-0.28cm} $.182 \pm .004$ & \hspace{-0.28cm} $.170 \pm .008$ & \hspace{-0.28cm} $.160 \pm .002$ \\
H & \hspace{-0.15cm} $.401 \pm .069$ & \hspace{-0.28cm} $.400 \pm .059$ & \hspace{-0.28cm} $.337 \pm .044$ & \hspace{-0.28cm} $.289 \pm .001$ \\
P & \hspace{-0.15cm} $.431 \pm .014$ & \hspace{-0.28cm} $.444 \pm .010$ & \hspace{-0.28cm} $.394 \pm .022$ & \hspace{-0.28cm} $.422 \pm .003$ \\
W & \hspace{-0.15cm} $.264 \pm .010$ & \hspace{-0.28cm} $.265 \pm .010$ & \hspace{-0.28cm} $.256 \pm .012$ & \hspace{-0.28cm} $.246 \pm .001$ \\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{
RMSE for various imputation
methods across the datasets {\tt abalone} (A), {\tt housing} (H), {\tt
park} (P) and {\tt wine} (W) when subject to data-independent (top)
and data-dependent corruption (bottom).}
\label{table:imputed-results}
\vspace{-0.1cm}
\end{table}
In Table \ref{table:impute-digits} we see that, with respect to the
column-corrupted {\tt optdigits} dataset, the IRR algorithm performs
significantly better than zero-imputation in all cases and is able to
outperform mean-imputation in three out of four tasks. At the bottom
of Table \ref{table:impute-digits} we measure performance with {\tt
thyroid} which has naturally missing values. Here again IRR performs
significantly better than the competitor methods.
\begin{table}
\begin{center}
\begin{tabular}{l|cccc}
& zero-imp & mean-imp & IRR & no corr \\
\hline
\hline
2 & \hspace{-0.15cm} $.352 \pm .003$ & \hspace{-0.28cm} $.351 \pm .004$ & \hspace{-0.28cm} $.346 \pm .002$ & \hspace{-0.28cm} $.321 \pm .003$ \\
3 & \hspace{-0.15cm} $.450 \pm .005$ & \hspace{-0.28cm} $.435 \pm .004$ & \hspace{-0.28cm} $.426 \pm .005$ & \hspace{-0.28cm} $.398 \pm .004$ \\
4 & \hspace{-0.15cm} $.372 \pm .003$ & \hspace{-0.28cm} $.363 \pm .002$ & \hspace{-0.28cm} $.364 \pm .003$ & \hspace{-0.28cm} $.345 \pm .002$ \\
6 & \hspace{-0.15cm} $.369 \pm .003$ & \hspace{-0.28cm} $.360 \pm .002$ & \hspace{-0.28cm} $.353 \pm .003$ & \hspace{-0.28cm} $.333 \pm .003$ \\
\hline
\hline
T & \hspace{-0.15cm} $.562 \pm .004$ & \hspace{-0.28cm} $.531 \pm .005$ & \hspace{-0.28cm} $.521 \pm .004$
& \hspace{-0.28cm} -- \\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{
RMSE (using $\{-1,+1\}$ regression labels)
for one-vs-all classification on \texttt{optdigits} subject to
column-based corruption. The bottom row shows
performance on the {\tt thyroid} dataset with naturally occurring
missing features.
}
\label{table:impute-digits}
\vspace{-0.5cm}
\end{table}
\section{Conclusion}
We have introduced two new algorithms, addressing the problem of
learning with missing features in both the adversarial online and
i.i.d.\ batch settings. The algorithms are motivated by intuitive
constructions and we also provide theoretical performance
guarantees. Empirically we show encouraging initial results for
online matrix-based corruption-dependent hypotheses as well as many
significant results for the suggested IRR algorithm, which indicate
superior performance when compared to zero- and mean-imputation
methods.
\section{SUPPLEMENTARY INFORMATION}
\section{A: Detailed properties of Bayesian networks}
Let $\mathcal{A}=\{a_j|j=1,2,\dots,N_{\mathcal{A}}\}$ be the set of random variables on a BN, where $a_1,a_2,\dots$ is in the topological ordering. The conditional probability is given by $p(a_j|a_{j-1},\dots, a_1)=p(a_j|{\rm pa}(a_j))$, where ${\rm pa}(a_j) \subseteq \{a_{1}, a_2, \dots , a_{j-1}\}$ is the set of parents of $a_j$.
In this section, we prove two theorems~\cite{Bayesian} that have been used in the derivation of the main result in the main manuscript.
\textbf{Theorem 1 (The chain rule for Bayesian networks).}
\textit{For any index $j$, we have}
\begin{align}
p(a_j, a_{j-1}, \dots, a_1)= \prod_{j'=1}^{j} p(a_{j'}|{\rm pa}(a_{j'})).
\label{The chain rule for Bayesian networks}
\end{align}
\textit{Proof}.
\begin{align}
p(a_j, a_{j-1}, \dots, a_1) &= p(a_j|a_{j-1}, \dots, a_1)p(a_{j-1}, \dots, a_1) \nonumber\\
&= p(a_j|a_{j-1}, \dots, a_1)p(a_{j-1}| a_{j-2}, \dots, a_1)p(a_{j-2}, \dots, a_1) \nonumber\\
&= \cdots \nonumber\\
&= \prod_{j'=1}^{j} p(a_{j'}|a_{j'-1}, \dots, a_{1} ) \nonumber\\
&= \prod_{j'=1}^{j} p(a_{j'}|{\rm pa}(a_{j'})).\Box
\label{Proof of the chain rule for Bayesian networks}
\end{align}
In the derivation of the main result, we used this theorem as $p(X, \mathcal{C}) = \prod_{k=1}^N \prod_{l=1}^{N'} p(x_k|{\rm pa} (x_k))p(c_l|{\rm pa} (c_l))$, because $\mathcal{C} \cup X = \{c_1, c_2, \dots, c_{N'}, x_1, x_2, \dots, x_N\}=\{a_1, a_2, \dots, a_J \}$, where $a_J$ is chosen to satisfy $a_J = x_N$.
\textbf{Theorem 2 (Consistency of the specification of BN).}
\textit{If $\mathcal{A}'$ is a subset of $\{a_{j-1}, a_{j-2}, \dots, a_1\}$ and ${\rm pa}(a_j)$ is a subset of $\mathcal{A}'$ (${\rm pa}(a_j) \subseteq \mathcal{A}' \subseteq \{a_{j-1},a_{j-2}, \dots, a_1\}$), we have}
\begin{align}
p(a_j|\mathcal{A}')= p(a_j|{\rm pa}(a_j)).
\label{Consistency of the specification of BN}
\end{align}
\textit{Proof}.
\begin{align}
p(a_j|\mathcal{A}') &= \frac{p(a_j, \mathcal{A}')}{p(\mathcal{A}')} \nonumber\\
&= \frac{\sum_{\{a_{j},a_{j-1}, \dots, a_1\} \setminus \{a_j, \mathcal{A}' \}}p(a_j, a_{j-1}, \dots, a_1 )}{\sum_{\{a_{j},a_{j-1}, \dots, a_1\} \setminus \{\mathcal{A}' \}}p(a_j, a_{j-1}, \dots, a_1 )}\nonumber\\
&= \frac{\sum_{\{a_{j},a_{j-1}, \dots, a_1\} \setminus \{a_j,\mathcal{A}' \}}\prod_{j'=1}^{j} p(a_{j'}|{\rm pa}(a_{j'}))}{\sum_{\{a_{j-1}, \dots, a_1\} \setminus \{\mathcal{A}' \}}p(a_{j-1},a_{j-2}, \dots, a_1 )}\nonumber\\
&= p(a_{j}|{\rm pa}(a_{j}))\frac{\sum_{\{a_{j-1}, \dots, a_1\} \setminus \{\mathcal{A}' \}}\prod_{j'=1}^{j-1} p(a_{j'}|{\rm pa}(a_{j'}))}{\sum_{\{a_{j-1}, \dots, a_1\} \setminus \{\mathcal{A}' \}}p(a_{j-1},a_{j-2}, \dots, a_1 )}\nonumber\\
&= p(a_{j}|{\rm pa}(a_{j})).\Box
\label{Proof of consistency of the specification of BN}
\end{align}
In the derivation of the main result, we used this theorem as $p(c_l|{\rm pa}_X(c_l), \mathcal{C}_{l-1}) = p(c_l|{\rm pa}(c_l)) $, because ${\rm pa}(a_{J'}) \subseteq \{{\rm pa}_X(c_l), \mathcal{C}_{l-1}\} \subseteq \{a_{J'-1},a_{J'-2}, \dots, a_1\}$, where $a_{J'}$ is chosen to satisfy $a_{J'}=c_l$.
\section{B: Physical meaning of the transfer entropy}
In Ref.~\cite{Schreiber}, Schreiber introduced the transfer entropy for stochastic dynamics with two variables $I=\{ i_1, i_2, \dots,i_n, \dots\}$ and $J=\{ j_1, j_2, \dots,j_n,\dots \}$, where $i_n$ ($j_n$) denotes the state of the system $I$ ($J$) at time $n$. The transfer entropy from $J$ to $I$ is defined by
\begin{align}
T_{J\to I}\equiv \sum p(I,J) \ln \frac{p(i_{n+1}, j_{n}, j_{n-1}, \dots, j_{n-l} |i_{n}, i_{n-1} \dots, i_{n-k})}{p(i_{n+1}|i_{n}, i_{n-1} \dots, i_{n-k}) p(j_{n}, j_{n-1}, \dots, j_{n-l} |i_{n}, i_{n-1} \dots, i_{n-k})}.
\end{align}
Here, $T_{J\to I}$ characterizes the information flow from $J$ to $I$; in fact, $T_{J\to I}$ is given by the difference between the entropy rate in $I$ under the condition of $J$ and that in $I$ alone:
\begin{align}
T_{J\to I} = \Delta s_{I|J} - \Delta s_{I},
\end{align}
where the entropy rate in $I$ and that under the condition of $J$, $\Delta s_{I}$ and $\Delta s_{I|J}$, are defined as $\Delta s_{I} \equiv \sum p(I)[\ln p(i_{n+1}, i_{n}, i_{n-1} \dots, i_{n-k})- \ln p(i_{n}, i_{n-1} \dots, i_{n-k})]$ and $\Delta s_{I|J} \equiv \sum p(I,J)[ \ln p(i_{n+1}, i_{n}, i_{n-1} \dots, i_{n-k}| j_{n}, j_{n-1}, \dots, j_{n-l})- \ln p(i_{n}, i_{n-1} \dots, i_{n-k}| j_{n}, j_{n-1}, \dots, j_{n-l})]$, respectively.
The quantity $\left< I_{\rm tr}^l \right>$ in our main result is defined as
\begin{align}
\left< I_{\rm tr}^l\right> \equiv \sum_{\mathcal{C}, X} p(\mathcal{C}, X) \ln \frac{p(c_l, {\rm pa}_X(c_l)| c_{l-1}, \dots, c_1 )}{p(c_l| c_{l-1}, \dots, c_1 )p({\rm pa}_X(c_l)| c_{l-1}, \dots, c_1 )},
\end{align}
which equals the transfer entropy $T_{X\to \mathcal{C}}$.
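For illustration, with one-step histories ($k=l=0$) and a known joint distribution, $T_{J\to I}$ can be computed directly from its definition; a minimal sketch (the array layout is ours):
\begin{verbatim}
import numpy as np

def transfer_entropy(p_joint):
    """T_{J->I} from p_joint[a, b, c] = p(i_{n+1}=a, i_n=b, j_n=c),
    using T = E[ ln p(i_{n+1}|i_n, j_n) - ln p(i_{n+1}|i_n) ]."""
    p_ij = p_joint.sum(axis=0)       # p(i_n, j_n)
    p_i = p_ij.sum(axis=1)           # p(i_n)
    p_next_i = p_joint.sum(axis=2)   # p(i_{n+1}, i_n)
    T = 0.0
    for a in range(p_joint.shape[0]):
        for b in range(p_joint.shape[1]):
            for c in range(p_joint.shape[2]):
                p = p_joint[a, b, c]
                if p > 0:
                    T += p * (np.log(p / p_ij[b, c])
                              - np.log(p_next_i[a, b] / p_i[b]))
    return T
\end{verbatim}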
\section{C: Multidimensional Langevin systems}
We consider the following multidimensional over-damped Langevin equation:
\begin{align}
\gamma^{(\mu')} \dot{x}^{(\mu')}(t) \!=\! f^{(\mu')} (x^{(1)}(t), \dots , x^{(n')}(t)) +\xi^{(\mu')} (t),
\label{Langevin1}
\end{align}
\begin{align}
\left< \xi^{(\mu')}(t) \xi^{(\nu')}(t') \right> &= 2 \gamma^{(\mu')} k_B T^{(\mu')} \delta_{\mu' \nu'} \delta(t-t')
\label{Langevin2}\\
\left< \xi^{(\mu')}(t)\right> &= 0,
\label{Langevin3}
\end{align}
where $x^{(\mu')}$ ($\mu' =1, \dots, n'$) denotes a dynamical variable. With small time interval $\Delta t$, we discretize the dynamical variables as $x^{(\mu')}_k\equiv x^{(\mu')}(k\Delta t)$. We write $x^{(1)}_k\equiv x_k$. When ${\bm x}_{k}\equiv \{ x^{(2)}_k, \dots, x^{(n')}_k \}$ is fixed, we obtain the conditional probability $p(x_{k+1}|x_k, {\bm x}_k)$ in terms of the Stratonovich product:
\begin{align}
p(x_{k+1}|x_k, {\bm x}_k)=\mathcal{N} \exp \left[-\frac{\Delta t}{4\gamma^{(1)} k_{B} T^{(1)}} \left( \gamma^{(1)} \frac{\epsilon^{(1)}_k}{\Delta t}- f^{(1)} (\bar{x}^{(1)}_k , {\bm x}_k) \right)^2 - \frac{\Delta t}{2}\frac{\partial}{\partial x^{(1)}} f^{(1)} (\bar{x}^{(1)}_k, {\bm x}_k) \right],
\label{forwardpath}
\end{align}
where $\bar{x}^{(\mu')}_k \equiv (x^{(\mu')}_k + x^{(\mu')}_{k+1})/2$, $\epsilon^{(\mu')}_k \equiv x^{(\mu')}_{k+1} -x^{(\mu')}_{k}$, $f^{(1)} (\bar{x}_k, {\bm x}_k) \equiv f^{(1)} (\bar{x}_k, x^{(2)}_k, \dots, x^{(n')}_k)$, and $\mathcal{N}$ is the prefactor which does not depend on $f^{(\mu')}$~\cite{Chernyak}. We stress that we use the mid-point rule only for $x^{(1)}$. We define the conditional probability of the backward process $p_B(x_{k}|x_{k+1}, {\bm x}_{k})$ as
\begin{align}
p_B(x_{k}|x_{k+1}, {\bm x}_{k})=\mathcal{N} \exp \left[-\frac{\Delta t}{4\gamma^{(1)} k_{B} T^{(1)}} \left( -\gamma^{(1)} \frac{\epsilon^{(1)}_k}{\Delta t}- f^{(1)} (\bar{x}^{(1)}_k , {\bm x}_k) \right)^2 - \frac{\Delta t}{2}\frac{\partial}{\partial x^{(1)}} f^{(1)} (\bar{x}^{(1)}_k, {\bm x}_k) \right].
\label{backwardpath}
\end{align}
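For reference, trajectories of Eqs. (\ref{Langevin1})--(\ref{Langevin3}) can be generated with a plain Euler--Maruyama update; this is a minimal sketch, and since the noise is additive, the It\^{o} and Stratonovich conventions coincide for trajectory generation (the mid-point rule above is needed only for the path weights).
\begin{verbatim}
import numpy as np

def langevin_step(x, force, gamma, kBT, dt, rng=np.random):
    """One Euler-Maruyama update of the overdamped Langevin equation
    gamma_mu dx_mu/dt = f_mu(x) + xi_mu, <xi xi'> = 2 gamma kBT delta."""
    noise = rng.standard_normal(x.shape) * np.sqrt(2.0 * kBT * dt / gamma)
    return x + dt * force(x) / gamma + noise
\end{verbatim}
Here \texttt{gamma} and \texttt{kBT} may be arrays holding the per-component values $\gamma^{(\mu')}$ and $k_B T^{(\mu')}$.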
\begin{figure}
\centering
\includegraphics[width=80mm,clip]{supple_graph1.eps}
\caption{BN corresponding to the multidimensional Langevin equation.
}
\label{figsupple:langevin}
\end{figure}
Figure~\ref{figsupple:langevin} shows the Bayesian network (BN) corresponding to the multidimensional Langevin equation [Eqs. (\ref{Langevin1}), (\ref{Langevin2}) and (\ref{Langevin3})] for the time interval between $k\Delta t$ and $(k+1) \Delta t$. Thus we have $\mathcal{B}^{k+1} = {\bm x}_k$. From Eqs. (\ref{forwardpath}) and (\ref{backwardpath}), we obtain $\Delta s_{\rm bath}^{k+1}$:
\begin{align}
\Delta s_{\rm bath}^{k+1} &= \ln \frac{p(x_{k+1}|x_k, {\bm x}_k)}{p_B(x_{k}|x_{k+1}, {\bm x}_{k})}\\
&= -\frac{1}{k_B T^{(1)}} f^{(1)} (\bar{x}^{(1)}_k , {\bm x}_k) \epsilon^{(1)}_k.
\end{align}
The definition of the heat flux in system $x^{(1)}$ by Sekimoto~\cite{Sekimoto2, Sekimoto3} is given by $J^{(1)} \equiv f^{(1)} (\bar{x}^{(1)}_k, \bar{x}^{(2)}_k, \dots, \bar{x}^{(n')}_k) \epsilon^{(1)}_k$. We then compare $\Delta s'_{\rm bath} \equiv - J^{(1)}/ (k_{B} T^{(1)})$ with $\Delta s_{\rm bath}^{k+1}$ as
\begin{align}
\Delta s'_{\rm bath} -\Delta s_{\rm bath}^{k+1} &= - \frac{1 }{k_{B} T^{(1)}} \left[\sum_{\mu' =2}^{n'} \frac{\partial f^{(1)}} {\partial x^{(\mu')}} (\bar{x}^{(1)}_k, x^{(2)}_k, \dots , x^{(\mu')}_k, \bar{x}^{(\mu'+1)}_k, \dots, \bar{x}^{(n')}_k) \epsilon^{(1)}_k \epsilon^{(\mu')}_k \right] \\
&= o(\Delta t ),
\end{align}
where we used $\epsilon^{(1)}_k \epsilon^{(\mu')}_k = o(\Delta t)$ with $\mu' \neq 1$ because of the independence of the noises [Eq. (\ref{Langevin2})].
Therefore, our definition of the entropy change in the heat baths on the BN (\textit{i.e.}, $\Delta s_{\rm bath}^{k+1}$) is equivalent to Sekimoto's definition (\textit{i.e.}, $\Delta s'_{\rm bath}$) up to $o(\Delta t)$.
\section{D: Repeated feedback control}
We consider systems under repeated feedback control. Figure~\ref{figsupple:repeated} shows the BN corresponding to the repeated feedback control discussed by Horowitz and Vaikuntanathan~\cite{Horowitz}.
\begin{figure}
\centering
\includegraphics[width=50mm,clip]{supple_graph2.eps}
\caption{BN corresponding to the repeated feedback control.
}
\label{figsupple:repeated}
\end{figure}
There are system $X$ and memories $M^{(\mu')}$ with $\mu'=1,\dots ,N'$ ($N' \leq N$). Measurements are performed on system $X$ at times $T(\mu')$, where the $T(\mu')$ are natural numbers satisfying $1=T(1)<T(2)<\cdots<T(N')<N$. The state of $X$ at time $T(\mu')$ is given by $x_{T(\mu')}$, and the measurement outcome is stored in $m^{(\mu')}_1$. The states of $X$ under feedback control can then depend on $m^{(\mu')}_1$ after time $T(\mu')$.
We have $\mathcal{C} = \{m^{(1)}_1, m^{(2)}_1 ,\dots ,m^{(N')}_1 \}$ and ${\rm pa}(x_1) =\emptyset$, and therefore $I_{\rm fin}= I(x_{N}: \{m^{(1)}_1, \dots , m^{(N')}_1\})$, $I_{\rm ini} = 0$, $I^{l}_{\rm tr}= I(x_{T(l)}: m^{(l)}_1|m^{(l-1)}_1 , \dots , m^{(1)}_{1})$, and
\begin{equation}
\Theta= I(x_{N}: \{m^{(1)}_1, \dots , m^{(N')}_1\})-\sum_{l=1}^{N'} I(x_{T(l)}: m^{(l)}_1|m^{(l-1)}_1 , \dots , m^{(1)}_{1}).
\end{equation}
According to the main result, we obtain the following generalized Jarzynski equality:
\begin{equation}
\left<\exp[-\sigma+ \Theta ]\right>= 1.
\label{bayesian jarzynski}
\end{equation}
On the other hand, the equality derived by Horowitz and Vaikuntanathan~\cite{Horowitz} is given by
\begin{equation}
\left<\exp \left[-\beta W_d-\sum_{l=1}^{N'} I^{l}_{\rm tr} \right] \right> =1,
\label{Horowitz jarzynski}
\end{equation}
where $I_{\rm tr}^l$ is our definition of the transfer entropy that is given by $I^{l}_{\rm tr}= I(x_{T(l)}: m^{(l)}_1|m^{(l-1)}_1 , \dots , m^{(1)}_{1})$ and $\beta$ is the inverse temperature of the heat bath. $W_d$ is the dissipated work that is given by $\beta W_d \equiv \Delta s_{\rm bath} + \ln p_{\rm eq}(x_1)-\ln p_{\rm eq}(x_N|m_1^{(1)} , \dots, m_{1}^{(N')})$, where $p_{\rm eq}$ is the canonical equilibrium distribution for fixed control parameter. $\beta W_d$ is equivalent to $\sigma-I_{\rm fin}$ such that $\sigma - I_{\rm fin}= \Delta s_{\rm bath} + \ln p(x_1)- \ln p(x_N|m_1^{(1)} , \dots, m_{1}^{(N')})$.
Therefore, our result can reproduce the result obtained by Horowitz and Vaikuntanathan in Ref.~\cite{Horowitz}, when the initial and final states of the system are in thermal equilibrium.
\section{E: Detailed calculations in the adaptation model}
In the adaptation model in the main manuscript, we consider the following master equations:
\begin{align}
\frac{dp^{X}_{0}}{dt} (t)\! &=\! - \omega_{0, 1}^{X}(F^X_{0}(t) ) p^{X}_{0} (t)+ \!\omega_{1, 0}^{X} (F^X_{1} (t)) p^{X}_{1}(t),
\label{mastersup1}
\\
\frac{dp^{X}_{1}}{dt} (t)\! &=\!- \omega_{1, 0}^{X}(F^X_{1}(t)) p^{X}_{1} (t)\!+\! \omega_{0,1}^{X}(F^X_{0}(t)) p^{X}_{0}(t),
\label{mastersup2}
\end{align}
where the transition rate is given by
\begin{equation}
\omega^{X}_{\mu,\nu} (F^X_{\mu}(t))= \frac{1}{\tau^X} \exp\left[-\beta^{X}(\Delta^{X}_{\mu \nu} -F^{X}_{\mu}(t)) \right].
\label{transition rate}
\end{equation}
In the following, we show that $\Delta s_{\rm bath}^{k+1}$ is equal to $-\beta^X \Delta F^{X}$.
We note that $p^{X}_{0}(t)+p^{X}_{1}(t)=1$ holds because of the normalization of the probability distribution.
We rewrite Eq. (\ref{mastersup1}) as
\begin{align}
\frac{dp^{X}_{0}}{dt} (t)\! &=\! - [ \omega_{0, 1}^{X}(F^X_{0}(t) ) + \omega_{1, 0}^{X}(F^X_{1} (t) )] p^{X}_{0}(t) + \omega_{1, 0}^{X} (F^X_{1}(t)).
\label{Master}
\end{align}
When $F^X_{0}$ and $F^X_{1}$ are constants, we get the solution of Eq. (\ref{Master}) as
\begin{align}
p^{X}_{0}(t) =p^{X}_{0,{\rm eq}}+ (p^{X}_{0} (0) - p^{X}_{0,{\rm eq}} ) \exp \left[ - (\omega_{0, 1}^{X}(F^X_{0}) + \omega_{1, 0}^{X} (F^X_{1}))t \right],
\label{solution}
\end{align}
where $p^{X}_{0,{\rm eq}}$ is defined as
\begin{align}
p^{X}_{0,{\rm eq}}(F^{X}_0, F^{X}_1) \equiv \frac{\omega_{1, 0}^{X}(F^X_{1})}{\omega_{0, 1}^{X}(F^X_{0}) + \omega_{1, 0}^{X}(F^X_{1})} =\frac{\exp(-\beta^X F^X_{0} )}{\exp(-\beta^X F^X_{0})+\exp(-\beta^X F^X_{1})}.
\end{align}
The state of $O$ ($M$) at time $t=k\delta$ ($t=k\delta - \delta'$) is denoted by $o_k$ ($m_{k}$), with $\delta > \delta'$. We set the interaction between the memory $X=M$ and the output system $X=O$ as follows. Let $F^{M}_{\mu}(t)$ at time $k\delta - \delta' \leq t \leq (k+1)\delta - \delta'$ be
\begin{eqnarray}
F^{M}_{\mu}(t) = \begin{array}{ll} F_{\mu,j'} & (o_{k}=j'), \end{array}
\end{eqnarray}
and let $F^{O}_{\mu}(t)$ at time $k\delta \leq t \leq (k+1)\delta$ be
\begin{eqnarray}
F^{O}_{\mu}(t) = \begin{array}{ll}
F'_{\mu,j'k'} & (m_{k}=j',m_{k+1}=k'),
\end{array}
\end{eqnarray}
where $j', k' =0 ,1$.
Substituting $p_0^{M}(0)=0,1$ into the solution of Eq. (\ref{solution}), we have the conditional probabilities $p(m_{k+1}|m_{k},o_{k})$:
\begin{eqnarray}
p(m_{k+1}=0|m_{k}=0,o_k=j') &=& q_{j'}+ \left(1 - q_{j'}\right)\exp \left[ -\omega_{j'} \delta \right],
\label{so1}\\
p(m_{k+1}=0|m_{k}=1,o_k=j') &=& q_{j'} - q_{j'}\exp \left[ -\omega_{j'} \delta \right],
\label{so2}\\
p(m_{k+1}=1|m_{k}=i',o_k=j') &=& 1- p(m_{k+1}=0|m_{k}=i',o_k=j') ,
\label{so3}
\end{eqnarray}
where $i'=0,1$, $q_{j'} \equiv p_{0,\rm eq}^{M} (F_{0,j'}, F_{1,j'})$ and $\omega_{j'} \equiv \omega_{0, 1}^{M}(F_{0,j'}) + \omega_{1, 0}^{M} (F_{1,j'})$.
Substituting $p_0^{O}(0)=0,1$ into Eq. (\ref{solution}), we also have the conditional probabilities $p(o_{k+1}|o_{k},m_{k},m_{k+1})$:
\begin{eqnarray}
p(o_{k+1}=0|o_{k}=0,m_{k}=j' ,m_{k+1}=k') &=& q'_{j'k'}+ \left(1 - q'_{j'k'} \right)\exp \left[ -\omega'_{j'k'} \delta \right]
\label{so4}\\
p(o_{k+1}=0|o_{k}=1,m_{k}=j' ,m_{k+1}=k') &=& q'_{j'k'} - q'_{j'k'} \exp \left[ -\omega'_{j'k'} \delta \right]
\label{so5}\\
p(o_{k+1}=1|o_{k}=i',m_{k}=j' ,m_{k+1}=k')&=&1- p(o_{k+1}=0|o_{k}=i' ,m_{k}=j' ,m_{k+1}=k'),
\label{so6}
\end{eqnarray}
where $q'_{j'k'} \equiv p_{0,\rm eq}^{O} (F'_{0,j'k'}, F'_{1,j'k'})$ and $\omega'_{j'k'} \equiv \omega_{0, 1}^{O}(F'_{0,j'k'}) + \omega_{1, 0}^{O} (F'_{1,j'k'})$.
We assume that the conditional probabilities of backward process $p_B(m_{k}|m_{k+1},o_{k})$ and $p_B(o_{k}|o_{k+1},m_{k},m_{k+1})$ are defined as $p_B(m_{k}=l'|m_{k+1}=i',o_{k}=j') \equiv p(m_{k+1}=l'|m_{k}=i',o_{k}=j')$ and $p_B(o_{k}=l'|o_{k+1}=i',m_{k}=j',m_{k+1}=k') \equiv p(o_{k+1}=l'|o_{k}=i',m_{k}=j',m_{k+1}=k')$ with $l'=0,1$, respectively.
From Eqs. (\ref{transition rate}), (\ref{so1}), (\ref{so2}) and (\ref{so3}), we have $\Delta s_{\rm bath}^{k+1}$ with $X=M$:
\begin{align}
\Delta s_{\rm bath}^{k+1} &=\ln \left[ \frac{p(m_{k+1}|m_{k},o_{k}) }{p_B(m_{k}|m_{k+1},o_{k}) } \right] \\
&=\left\{ \begin{array}{ll}
0 & (m_{k+1}=0, m_{k}=0, o_{k}=j')\\
\ln q_{j'} -\ln(1-q_{j'}) & (m_{k+1}=0, m_{k}=1, o_{k}=j') \\
\ln (1- q_{j'}) -\ln q_{j'} & (m_{k+1}=1, m_{k}=0, o_{k}=j')\\
0 & (m_{k+1}=1, m_{k}=1, o_{k}=j')\\
\end{array} \right. \\
&=\begin{array}{ll} -\beta^M( F_{l',j'} -F_{i',j'}) & (m_{k+1}=l', m_{k}=i', o_{k}=j'), \end{array}
\end{align}
where $\mathcal{B}^{k+1}= \{o_{k} \}$. From Eqs. (\ref{transition rate}), (\ref{so4}), (\ref{so5}) and (\ref{so6}), we have $\Delta s_{\rm bath}^{k+1}$ with $X=O$:
\begin{align}
\Delta s_{\rm bath}^{k+1} &=\ln \left[ \frac{p(o_{k+1}|o_{k},m_{k}, m_{k+1}) }{p_B(o_{k}|o_{k+1},m_{k}, m_{k+1}) } \right] \\
&=\left\{ \begin{array}{ll}
0 & (o_{k+1}=0, o_{k}=0, m_{k}=j', m_{k+1}=k' )\\
\ln q'_{j'k'} -\ln(1-q'_{j'k'}) & (o_{k+1}=0, o_{k}=1, m_{k}=j', m_{k+1}=k' )\\
\ln (1- q'_{j'k'}) -\ln q'_{j'k'} & (o_{k+1}=1, o_{k}=0, m_{k}=j', m_{k+1}=k' )\\
0 & (o_{k+1}=1, o_{k}=1, m_{k}=j', m_{k+1}=k' )\\
\end{array} \right. \\
&=\begin{array}{ll} -\beta^O( F'_{l',j'k'} -F'_{i',j'k'}) & (o_{k+1}=l' ,o_{k}=i', m_{k}=j', m_{k+1}=k'), \end{array}
\end{align}
where $\mathcal{B}^{k+1}= \{m_{k}, m_{k+1} \}$. We thus reach the conclusion that $\Delta s_{\rm bath}^{k+1}$ is given by the effective free-energy difference.
\section{F: The parameter set of the numerical illustration in the adaptation model}
We set the parameters of the numerical illustration in Fig.~5 of the main manuscript as follows: $\delta=0.5$, $\beta^M = \beta^O =0.01$, $\tau^O =\tau^M =0.001$, $\Delta^{M}_{01}= \Delta^O_{01}=100$, $F_{0,0}=F_{0,1} = 100$, $F_{1,0} =10$, $F_{1,1}=30$, $F'_{0,00}=F'_{0,01}=F'_{0,10}=F'_{0,11}=100$, $F'_{1,00}=30$, $F'_{1,01}=20$, $F'_{1,10}=10$ and $F'_{1,11}=5$. In this case, we have $q'_{00}=0.332$, $q'_{01}=0.310$, $q'_{10}=0.289$ and $q'_{11}=0.278$. We note that the value of $\left< \sigma \right>- \left< \Theta \right>$ in Fig.~5 of the main manuscript is close to $0$ when the initial states are close to the stationary distribution of the output system, which is similar to the probabilities $q'_{00}$, $q'_{01}$, $q'_{10}$ and $q'_{11}$.
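The quoted values of $q'_{j'k'}$ follow directly from $p^{O}_{0,{\rm eq}}(F'_{0,j'k'}, F'_{1,j'k'})$; a short numerical check (sketch):
\begin{verbatim}
import numpy as np

beta = 0.01
F0 = 100.0                         # F'_{0,j'k'}, identical for all j', k'
F1 = {"00": 30.0, "01": 20.0, "10": 10.0, "11": 5.0}

for key, f1 in F1.items():
    q = np.exp(-beta * F0) / (np.exp(-beta * F0) + np.exp(-beta * f1))
    print(key, round(q, 3))        # matches 0.332, 0.310, 0.289, 0.278
\end{verbatim}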
\section*{}
\footnotesize{This work was supported by the Swiss National Science Foundation (00020\_182598), the Helmholtz Young Investigators Group VH-NG-1404 and the Canton of Neuchâtel.}
\bibliographystyle{unsrt}
\section{Introduction}
\label{intro}
Time-frequency analysis exploits translations and modulations to analyse functions and operators. Gabor analysis is the outcome of the confluence of time-frequency analysis and the theory of Hilbert space frames (\cite{DS}). Janssen's work (\cite{J}) initiated its mathematical investigations and \cite{DGM} marked its emergence as an important research area. A central object in the theory, both from the theoretical and applications points of view, is the frame operator. Frame operators of Weyl-Heisenberg frames in $L^2(\mathbb{R})$ have been completely characterised (\cite{EP}). We seek to get a better insight about these operators by viewing it as an integral operator from $L^2(\mathbb{R})$ into $L^2(\mathbb{R})$ itself, rather than as a map from modulation spaces into the space of tempered distributions.
A large quantum of work has been carried out by experts using abstract theories in very general settings (see, for instance, \cite{BS}, \cite{HGFK}, \cite{W}, Chapters 11 and 14 of \cite{G}). Most of the known results are about such (pseudo-differential) operators, mapping a restrictive class like the Schwartz space into a space more general (e.g. the space of tempered distributions \cite{HGFK}) than what is actually required, whereas Weyl-Heisenberg frame operators are maps from $L^2(\mathbb{R})$ into itself. As pointed out in \cite{G} (Chapter 14) and \cite{G1}, very little is known about the boundedness of these operators when their Gabor atoms lie outside the modulation space $M^1$ or the Wiener space $W$. The recent survey \cite{G1} on the intrigues of Gabor frames mentions the need for fresh approaches and new classes of window functions to tackle a number of fundamental open problems in the field.
Here our aim is modest:
\textit{identify some specific function spaces in $L^2(\mathbb{R})$ as suitable classes for Gabor atoms and obtain the Kohn-Nirenberg symbol of the associated frame operator directly from the Gabor atom, in an elementary fashion, without bringing in any abstract theory.}
Although the role of pseudo-differential operators in Gabor analysis (\cite{H}) and the representation of the Weyl-Heisenberg frame operators using Gabor multipliers (\cite{DT}) have been discussed before, an explicit expression for the Kohn-Nirenberg symbol (\cite{K}) of a Weyl-Heisenberg frame operator in terms of its Gabor atom is not seen in the literature. We provide this through a direct approach, based only on elementary Fourier analysis. New classes $\mathcal E_{a,b}$ and $\mathcal P_{a,b}$ of window functions in $L^2(\mathbb{R})$ are introduced for this purpose. Our symbol theorem holds for Weyl-Heisenberg frames having Gabor atoms in the larger class $\mathcal P_{a,b}$ and leads to Kohn-Nirenberg operators, which turn out to be Weyl-Heisenberg frame operators under suitable conditions.
Some needed definitions and facts about abstract frames, frame operators and Weyl-Heisenberg frames are given in Section 2. New function spaces and the symbol function are introduced in Section 3. The symbol theorem, Kohn-Nirenberg operators and some applications are presented in the last section.
\section{Preliminaries}
\label{sec:1}
A family $\lbrace u_{k} : k \in \mathbb{N}\rbrace$ in a Hilbert space $\mathcal {H}$ is called a \textit{frame}, if the inequality:
$$\alpha \|x \|^{2} \leq \Sigma_k |\langle x, u_{k}\rangle|^{2} \leq \beta \|x \|^{2}$$
holds for some positive constants $\alpha$ and $\beta$ and for all $x \in \mathcal {H}$.
The \textit{frame operator} of a frame is given by $Sx = \Sigma \langle x, u_{k}\rangle u_{k}, \ x \in \mathcal {H}$,
\noindent the series converging unconditionally; it is a bounded, positive, invertible linear operator on $\mathcal{H}$. If only the upper inequality is satisfied, $\{u_k\}$ is called a \textit{Bessel sequence} and the operator $S$ is still defined as a bounded linear operator. We call it the \textit{preframe operator} of $\{u_k\}$.
Here we only consider \textit{Weyl-Heisenberg frames} (also known as \textit{Gabor frames}), a special class of frames of the form $(g,a,b) := \lbrace E_{mb}T_{na}g: m,n \in \mathbb Z\rbrace$ in $L^2(\mathbb R)$, generated by translations $T_{na}$ and modulations $E_{mb}$, $a,b>0$, of a $g\in L^2(\mathbb R)$ (known as a \textit{Gabor atom} or a \textit{window function}).
The Fourier transform $\widehat f$ of an $f\in L^1(\mathbb R)$ is the function defined on $\mathbb R$ by
$$\widehat f(\xi) = \int_{\mathbb R} \ f(t) \ e^{-2\pi \imath \xi t}\ dt, \ \xi\in \mathbb R.$$ If $f\in L^1 \cap L^2(\mathbb R)$, then $\widehat f \in L^2(\mathbb R)$ with $\|f\|_2 = \|\widehat f\|_2$ and the Fourier transform extends to a unitary operator $\mathcal F$ on $L^2(\mathbb R)$. We call $\mathcal F$ the Fourier transform operator on $L^2(\mathbb R)$ and write, for notational convenience, $\widehat g$ for $\mathcal Fg$ and $\check g$ for $\mathcal F^{-1}g$ even when $g\in L^2(\mathbb R)$.
The background material can be found in \cite{C} and \cite{G}.
\section{Function spaces for generating symbols}
The subspaces of $L^2(\mathbb R)$ introduced here will form the setting for the construction of our explicit expression for the Kohn-Nirenberg symbol for a Weyl-Heisenberg frame operator.
\begin{proposition}
For $g \in L^{2}(\mathbb{R})$, the series $\Sigma_n\ g(x-na) \overline{g}(x+t-na)$ converges absolutely for almost every $x,t \in \mathbb{R}$ and any $a>0$.
\end{proposition}
\begin{proof}
Use Schwarz inequality and the fact
$\|g\|_2^{2} = \underset {0}{\overset{a}\int} (\Sigma_n |g(x-na)|^{2})\ dx$.
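Indeed, for a.e. $x,t$,
$$\Sigma_n\ |g(x-na)\, \overline{g}(x+t-na)| \leq \Big(\Sigma_n |g(x-na)|^{2}\Big)^{1/2} \Big(\Sigma_n |g(x+t-na)|^{2}\Big)^{1/2},$$
and both factors are finite for a.e. $x$ and $x+t$ by the displayed identity.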
\end{proof}
\begin{definition}
\rm{For $a>0$ and $g \in L^{2}(\mathbb{R})$, the associated function $\Phi$ is defined by $$\Phi(x,t):= \Sigma_n \ g(x-na) \overline{g}(x+t-na),\ a.e. \ x,t\in \mathbb {R}$$}.
\end{definition}
\begin{proposition}
Suppose $g\in L^{2}(\mathbb{R})$ satisfies $\Sigma_n |g(x-na)| < \infty, a.e. x\in \mathbb R$. Then
i) $\Phi_{x} \in L^{2}(\mathbb{R})$ for a.e. $x \in \mathbb{R}$ and
ii) $\check \Phi_{x} \in L^{2}\cap L^{1}(\mathbb{R})$ for a.e. $x \in \mathbb{R}$ when $ \widehat{g} \in L^{1}(\mathbb{R})$, where $\Phi_{x}(t) := \Phi(x,t)$.
\end{proposition}
\begin{proof}
Fix an $x\in \mathbb{R}$ such that $\Sigma_n |g(x-na)| < \infty $. Then the partial sums $S_{k}$ defined by
$S_{k} = \Sigma_{n=-k}^k \ g(x-na)\, \overline{T_{na-x}g}$,
form a Cauchy sequence in $L^{2}(\mathbb{R})$, and the limit is just $\Phi_{x}$. Hence $\Phi_{x} \in L^{2}(\mathbb{R})$ and $\check \Phi_{x} \in L^{2}(\mathbb{R})$ for a.e. $x \in \mathbb{R}$.
For ii), use similar arguments for $\check{S_{k}}$ to show that $\check \Phi_{x} \in L^{1}(\mathbb{R})$ for a.e. $x \in \mathbb{R}.$
\end{proof}
Motivated by this, we now introduce our function spaces $\mathcal P_{a,b}$ and $\mathcal E_{a,b}$.
\begin{definition}
\rm{For $a,b>0$ let $\mathcal P_{a,b}$ be the space
of those $g\in L^1(\mathbb R)$ satisfying
i) $\Sigma_m |\check \Phi_{x} (\xi - mb)| \leq B_x$ for some $B_x$, for a.e. $\xi, x \in \mathbb{R}$; \ \ ii) $\widehat g\in L^1(\mathbb R)$.
The space $\mathcal E_{a,b}$ is the class of functions $g\in L^1(\mathbb R)$ for which there are positive constants $A,B$
such that $ \Sigma_n |g(x - na)| \leq A$ and $
\Sigma_m | \widehat{g}(\xi - mb)| \leq B$ for a.e. $x,\xi \in \mathbb R$.
For $g \in \mathcal P_{a,b}$, define $\Psi$ by
$\Psi(x, \xi):= \Sigma_m \ \check \Phi_{x} (\xi - mb)$, a.e. $x, \xi \in \mathbb{R}.$}
\end{definition}
\begin{proposition}\label{PE}
For all $a,b>0$, $\mathcal {E}_{a,b}$ is a subspace of $\mathcal P_{a,b}$ that is invariant under both translations and modulations.
\end{proposition}
\begin{proof}
Since $\widehat{T_cf} = E_{-c}\widehat{f}$ and $\widehat{E_cf} = T_{c}\widehat{f}$ for $c\in \mathbb{R}$, we need only to establish the inclusion: $\mathcal {E}_{a,b}\subset \mathcal {P}_{a,b}$. For $g\in \mathcal {E}_{a,b}$, $\Sigma_n|g(x-na)| \int_{\mathbb{R}} |\overline{g}(x+t-na)|\ dt$ is finite for a.e. $x$ and an application of the dominated convergence theorem yields
$\check \Phi_{x}(\xi) \ = \Sigma_n\ g(x-na)\ e^{-2\pi i\xi (x-na)} \int_{\mathbb{R}}\overline{g}(u) \ e^{2\pi i\xi u}du$
\\\hspace*{1.5cm}
$= \check{\overline{g}}(\xi) \Sigma_n\ g(x-na)\ e^{-2\pi i\xi (x-na)}.$
\noindent This leads to an estimate stronger than asserted, independent of $x$:
$\Sigma_m\ | \check \Phi_{x} (\xi - mb)|
\leq \Sigma_m \ | \check{\overline{g}}(\xi - mb)| \Sigma_n\ |g(x-na)|\leq AB$.
\end{proof}
The Wiener space $W\subset L^1 \cap L^2(\mathbb{R})$ is the space of measurable functions $g$ with $\|g\|_W := \Sigma_k \|g\chi_{[k, k+1)}\|_{\infty} < \infty.$
According to experts, $W \cap \widehat W$ is a natural and practically important space for sampling (See \cite{G} for details on $W$ and its relation to sampling.) Thus, the following inclusions are of interest.
\begin{proposition}
For all $a, b>0$, the space $\mathcal {E}_{a,b}$ contains $ W\cap \widehat{W}$ and hence the Schwartz space $\mathcal S$ as well as the Feichtinger algebra $\mathcal S_0$.
\end{proposition}
\begin{proof}
For $g \in W\cap \widehat{W}$, there is a constant $C_a$ for any $a>0$ such that
\\\hspace*{2cm}$\Sigma_n |g(x-na)| \leq C_a \|g\|_W$
\\ for a.e. $x\in \mathbb R$ (\cite{C} p.221, \cite{G} p.105).
The first assertion follows since if $g = \widehat h, h\in W$, then $\widehat g(\xi) = h(-\xi)$ so that $\widehat{g} \in W$ and so, for any $b>0$ and a.e. $\xi \in \mathbb R$,
\\\hspace*{2cm}$\Sigma_m |\widehat g(\xi - mb)| \leq C_b \|\widehat{g}\|_W $.
Next we observe that $g\in W\cap \widehat{W}$ if both $g(t)$ and $\widehat g(t)$ are $ O(1/(1+|t|)^2)$.
\\
For, if $C > 0$ is such that $ |g(t)| \leq C/(1+|t|)^2$
for all $t \in \mathbb{R}$, then
\\\hspace*{2cm}
$\gamma_n:= \max \{|g(t)|: n\leq t\leq n+1\} \leq C/(1+|n|)^2, n\geq 0$
\\\noindent and similarly $\gamma_n \leq C/(1+|n-1|)^2, n\leq 0$. Thus $\Sigma_n \gamma_n <\infty$ and $g\in W$.
The same considerations for $\widehat g$ in place of $g$ give $\widehat g\in W$ and so $g\in \widehat{W}$.
In particular, $\mathcal S$ lies in $ W\cap \widehat{W}$.
Finally, the Feichtinger algebra $\mathcal S_0$ (or the modulation space $M^1$) also lies in $W \cap \widehat{W}$, by Proposition 12.1.4 of \cite{G}.
\end{proof}
Among the many interesting properties of spaces $\mathcal E_{a,b}$ and $\mathcal P_{a,b}$, we content ourselves with presenting only those that are relevant to the symbol function.
\begin{proposition} \label{FSP}
For $g \in \mathcal P_{a,b}$, $\Psi$ has an absolutely convergent expansion
\\\hspace*{2cm}
$\Psi(x, \xi) = \Sigma_m
\Sigma_n \ g(x-na)\
\check{\overline{g}}(\xi - mb) e^{-2\pi i(x-na)(\xi - mb)}$
\\\indent Further, for almost every $x,$ the function $\Psi_x$ is integrable
on $[0, b)$ and has an absolutely convergent Fourier series expansion
\\\hspace*{3cm} $ \Psi_{x}(\xi) = \Sigma_k \ c_{k}e^{2\pi i\xi (k/b)}$
\\where $c_{k} = c_k(x) = (1/b)\Sigma_n \ g(x-na) \overline{g}(x-na+k/b).$
\end{proposition}
\begin{proof}
First note that $\check \Phi_{x}\in L^{2}\cap L^1(\mathbb{R})$ by the previous proposition.
\\
Moreover $\Sigma |g(x-na)| \int_{\mathbb{R}} |\overline{g}(x+t-na)|\ dt$ is finite for a.e. $x$ and an application of the dominated convergence theorem yields the expansion for $\check \Phi_{x}$, and hence for $\Psi$:
$\check \Phi_{x}(\xi) = \int_{\mathbb{R}}\Sigma \ g(x-na)\overline{g}(x+t-na) \ e^{2\pi i\xi t}dt$
\\\hspace*{1.4cm}
$= \Sigma \ g(x-na) \int_{\mathbb{R}}\overline{g}(x+t-na) \ e^{2\pi i\xi t}dt$
\\\hspace*{1.4cm}
$= \Sigma \ g(x-na)\ e^{-2\pi i\xi (x-na)} \int_{\mathbb{R}}\overline{g}(u) \ e^{2\pi i\xi u}du$
\\\hspace*{1.4cm}
$= \check{\overline{g}}(\xi) \Sigma \ g(x-na)\ e^{-2\pi i\xi (x-na)}$,
$\Psi(x, \xi) = \Sigma_m \check \Phi_{x} (\xi - mb)$
$= \Sigma_m \Sigma_n g(x-na) \ \check{\overline{g}}(\xi - mb) e^{-2\pi i(x-na)(\xi - mb)},$
\\ the absolute convergence of the series being a consequence of the assumption on $g$. Writing $A_x = \Sigma \ |g(x-na)|$, from this we get the estimates:
$\Sigma\ | \check \Phi_{x} (\xi - mb)|
\leq A_x \ \Sigma \ | \check{\overline{g}}(\xi - mb)|$
as well as
$\int_{[0, b)} |\Psi_x(\xi)| \ d\xi \leq A_x \int_{[0, b)}
\Sigma \ | \check{\overline{g}}(\xi - mb)| \ d\xi $
$= A_x \int_{\mathbb R} | \check{\overline{g}}(\xi)| \ d\xi$
$ < \infty,$
\\the last integral being finite because
$\check{\overline{g}}\in L^1(\mathbb R)$.
The Fourier coefficients $c_{k}$ of $\Psi_{x}$ can now be evaluated without difficulty:
\\\hspace*{6mm}
$c_{k} = (1/b) \int_{[0, b)} \Psi_{x}(\xi) e^{-2\pi i\xi (k/b)}d\xi$
\\\hspace*{1cm} $=(1/b) \int_{[0,b)} \Sigma_m \ \check{\overline{g}}
(\xi - mb) \Sigma_n g(x-na) e^{-2\pi i(\xi-mb)(x-na)} e^{-2\pi i\xi(k/b)} d\xi$
\\\hspace*{1cm} $= (1/b)\Sigma_n \ g(x-na) \int_{[0,b)}
\Sigma_m\ \check{\overline{g}}(\xi - mb)e^{-2\pi i[(\xi-mb)(x-na)+\xi(k/b)]}d\xi $
\\\hspace*{1cm} $=(1/b)\Sigma_n\ g(x-na)
\int_{\mathbb{R}}\check{\overline{g}}(v)e^{-2\pi iv(x-na+k/b)}dv $
\\\hspace*{1cm} $=(1/b)\Sigma_n\ g(x-na)\overline{g}(x-na+k/b)$
\\ since the integral in the penultimate step exists and is
$\mathcal F \mathcal F^{-1}{\overline{g}}(x-na+k/b) = \overline{g}(x-na+k/b).$
The absolute convergence of the double series for $\Psi$ justifies the interchange of summations over $m$ and $n$. Analogous reasoning, using Fubini, validates taking the integral inside the summation over $n$.
To see absolute convergence, observe that
$ \Sigma_k\ |c_{k}| \leq (1/b)\Sigma_k \ \Sigma_n\ |g(x-na)||\overline{g}((x-na)+(k/b))|$
\\\hspace*{1.8cm} $ = (1/b) \Sigma_n \ |g(x-na)|( \Sigma_k\ |\overline{g}((x-na)+(k/b))|)$
$ < \infty, $
\\the last two sums being finite because $g\in L^1(\mathbb R)$.
\end{proof}
In the literature, $\Sigma_n \ g(x-na)\overline{g}(x-na+k/b)$ is usually
denoted by $G_k(x)$ for $k\in \mathbb Z$.
In our notation,
$G_k(x) = \Phi_x(k/b) = b\ c_k(x)$. Thus
$ \Psi_{x}(\xi) = (1/b) \Sigma_k G_k(x)\ e^{2\pi i\xi (k/b)}$.
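To make the symbol concrete, $\Psi_x$ can be evaluated numerically from the coefficients $G_k$ for an explicit atom; a minimal sketch with a Gaussian window and truncated sums (the truncation levels are arbitrary):
\begin{verbatim}
import numpy as np

def Psi(x, xi, a, b, g, n_terms=50, k_terms=50):
    """Psi_x(xi) = (1/b) sum_k G_k(x) exp(2 pi i xi k/b), where
    G_k(x) = sum_n g(x - n a) conj(g(x - n a + k/b))."""
    ns = np.arange(-n_terms, n_terms + 1)
    val = 0.0 + 0.0j
    for k in range(-k_terms, k_terms + 1):
        Gk = np.sum(g(x - ns * a) * np.conj(g(x - ns * a + k / b)))
        val += Gk * np.exp(2j * np.pi * xi * k / b)
    return val / b

gauss = lambda t: np.exp(-np.pi * t ** 2)   # a Schwartz-class atom
print(Psi(0.3, 0.7, a=1.0, b=0.5, g=gauss))
\end{verbatim}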
\section{Kohn-Nirenberg symbols and operators}
Now we express a Weyl-Heisenberg frame operator in terms of the Kohn-Nirenberg symbol. This leads to Kohn-Nirenberg operators. We make use of the dense subspace $A_1(\mathbb R) := \lbrace f\in L^1(\mathbb{R}): \widehat f \in L^1(\mathbb R)\rbrace$ of $L^2(\mathbb R)$.
We adopt the following definition from \cite{G} for our symbol theorem.
A \textit{pseudo-differential operator with Kohn-Nirenberg symbol $\sigma$} is an operator of the form
$ K_{\sigma}f(x) := \int_{\mathbb{R}} \sigma (x, \xi) \widehat{f}(\xi) e^{2 \pi \imath x \xi} d \xi$.
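Numerically, $K_{\sigma}$ can be approximated by Riemann sums on truncated grids; the sketch below (not used in any proof) assumes \texttt{sigma} accepts a vectorized frequency argument.
\begin{verbatim}
import numpy as np

def apply_KN(sigma, f_vals, t_grid, xi_grid, x_grid):
    """Approximate K_sigma f on x_grid: compute f_hat by quadrature
    over t_grid, then the symbol-weighted inversion over xi_grid."""
    dt = t_grid[1] - t_grid[0]
    dxi = xi_grid[1] - xi_grid[0]
    f_hat = np.array([np.sum(f_vals * np.exp(-2j * np.pi * xi * t_grid)) * dt
                      for xi in xi_grid])
    return np.array([np.sum(sigma(x, xi_grid) * f_hat
                            * np.exp(2j * np.pi * x * xi_grid)) * dxi
                     for x in x_grid])
\end{verbatim}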
\begin{theorem}
Let $g \in \mathcal P_{a,b}$ and suppose that $(g, a, b )$ is a Bessel sequence. Then its preframe operator $S$ is given by
\\\hspace*{2cm}$Sf(x) = \int_{\mathbb{R}} \Psi (x, \xi) \widehat{f}(\xi) e^{2 \pi \imath x \xi} d \xi = \mathcal {F}^{-1}M _{\Psi_{x}}\mathcal {F}f(x)$ \\for almost every $x \in \mathbb{R}$ and for all $f$ in the dense subspace $A_1(\mathbb R)$. Thus, on $A_1(\mathbb R)$, $S$ is the pseudo-differential operator with Kohn-Nirenberg symbol $\Psi$.
\end{theorem}
\begin{proof}
For convenience, we write $e(x)$ for $e^{2\pi \imath x}$ in this proof.
Since $g \in \mathcal P_{a,b}$, we have $\Sigma_m \ |\check \Phi_{x}(\xi - mb)| \leq B_x < \infty $
for some $B_x>0$ and a.e. $x,\xi \in \mathbb{R}$. Fix such an $x$ and consider the bounded measurable function $\Psi_{x}$
associated with the triplet $(g,a,b).$
For $f \in A_1(\mathbb{R})$ we have $\mathcal {F}f = \widehat{f}$ and
$\int_{\mathbb{R}} \Sigma_m\ | \check \Phi_{x} (\xi-mb)|
|\widehat{f}(\xi)| d \xi \ \leq B_x \| \widehat{f}\|_1 < \infty$.
\\ Thus an application of dominated convergence theorem is valid and yields
$\mathcal {F}^{-1}M _{\Psi_{x}}\mathcal {F}f(x)
= \int_{\mathbb{R}}(\Sigma_m \check \Phi_{x}(\xi -mb))\widehat{f}(\xi)e(\xi x)\ d\xi$
\\\hspace*{2cm}
$= \Sigma_m\ \int_ {\mathbb{R}} \check \Phi_{x} (\xi -mb) \widehat{f}(\xi)e(\xi x)\ d\xi$
\\\hspace*{2cm} $= \Sigma_m\ \int_ {\mathbb{R}} \check \Phi_{x} (\xi -mb)
(\int_{\mathbb{R}} f(t)e(-\xi t) dt) e(\xi x)\ d\xi.$ (*)
Now $\int_ {\mathbb{R}}| \check \Phi_{x} (\xi -mb)|\ d\xi \int_{\mathbb{R}} |f(t)|\ dt $ $= \|\check \Phi_{x}\|_{1} \|f\|_{1} < \infty$,
\noindent so an application of Fubini's theorem below is justified and we compute:
$ \int _ {\mathbb{R}}\check \Phi_{x} (\xi -mb)\int_{\mathbb{R}} f(t) e(-\xi t)\ dt\
e(\xi x) d \xi$
\\\hspace*{2cm}
= $\int_{\mathbb{R}} f(t) \int_ {\mathbb{R}} \check \Phi_{x}(\xi-mb) e(\xi(x-t)) d\xi \ dt$
\\\hspace*{2cm}
= $\int_{\mathbb{R}} f(t) \int_ {\mathbb{R}} \check \Phi_{x}(u)e((u+mb)(x-t))\ du\ dt$
\\\hspace*{2cm}
= $\int_{\mathbb{R}} f(t)\int_ {\mathbb{R}} \check \Phi_{x}(u) e(-u(t-x))\ du\ e(mb(x-t))\ dt$
\\\hspace*{2cm}
= $\int_{\mathbb{R}} f(t) \ \mathcal F \mathcal F^{-1} \Phi_x (t-x)\ e(mb(x-t)) \ dt$
\\\hspace*{2cm}
= $\int_{\mathbb{R}} f(t) \Phi_{x} (t-x) e(mb(x-t))\ dt$
\\\hspace*{2cm}
= $e(mbx)\int_{\mathbb{R}}f(t) \Phi_{x} (t-x) e(-mbt)\ dt$
\\\hspace*{2cm}
= $e(mbx)\int_{\mathbb{R}} f(t) \Sigma_n g(x-na)\overline{g} (t-na) e(-mbt)\ dt$
\\\hspace*{2cm}
= $e(mbx) \int_{\mathbb{R}} f(t) \Sigma_n \ T_{na}g(x)\overline{T_{na}g} (t) e(-mbt)\ dt$.
\noindent But $|f|, | T_{na}\overline{g}| \in L^{2}(\mathbb{R})$ so we have
$\int_{\mathbb{R}}|f(t)| | \overline{T_{na}g}(t)|\ dt < \infty$ by Schwarz inequality and consequently
$\Sigma_n \ |T_{na}g(x)| \int_{\mathbb{R}} |f(t)| | \overline{T_{na}g}(t)|\ dt < \infty$.
This validates an application of dominated
convergence theorem, and we get
$\int _ {\mathbb{R}}\check \Phi_{x} (\xi -mb)( \int_{\mathbb{R}} f(t) e(-\xi t) dt)
e(\xi x)\ d\xi$
\\\hspace*{2cm}
= $e(mbx)\Sigma_n \ T_{na}g(x) \int_{\mathbb{R}}f(t) e(-mbt) \overline{T_{na}g}(t)\ dt$
\\\hspace*{2cm}
= $e(mbx) \Sigma_n \ T_{na}g(x) \int_{\mathbb{R}}f(t) \overline{E_{mb}T_{na}g}(t)\ dt$
\\\vspace*{2mm}\hspace*{2cm}
= $\Sigma_n e(mbx) \ T_{na}g(x) \langle f, E_{mb} T_{na}g \rangle$
\\\vspace*{2mm}\hspace*{2cm}
= $\Sigma_n E_{mb}\ T_{na}g(x) \langle f, E_{mb} T_{na}g \rangle $
\\\vspace*{2mm}\hspace*{2cm}
= $\Sigma_n \langle f, E_{mb} T_{na}g \rangle E_{mb}\ T_{na}g(x).$
\noindent Substituting the expression on the right in (*), we thus get
\\\hspace*{2cm}
$\mathcal {F}^{-1}M _{\Psi_{x}}\mathcal {F}f(x)$
$= \Sigma_{m,n} \langle f, E_{mb} T_{na}g \rangle E_{mb}\ T_{na}g(x)$\\
for all $x$ in a set $E_1$ of full measure.
\\But $\Sigma_{m,n} \langle f, E_{mb} T_{na}g \rangle E_{mb}\ T_{na}g = Sf$
on a set $E_2$ of full measure.
Thus $\mathcal {F}^{-1}M _{\Psi_{x}}\mathcal {F}f(x) = Sf(x)$ for all $x$
in the set $E_1\cap E_2$ of full measure, thereby completing the proof.
\end{proof}
Motivated by the symbol theorem above, we define the \textit{Kohn-Nirenberg operator} $K_{\Psi}$, corresponding to the symbol function $\Psi$ for a $g\in \mathcal P_{a,b}$, by
\\\hspace*{2cm} $ K_{\Psi} (f)(x) = \mathcal F^{-1}M_{\Psi_x} \mathcal{F}f(x)$
\\\noindent for $f\in A_1(\mathbb{R})$ and a.e. $x \in \mathbb{R}$.
An important problem for pseudo-differential operators is their $L^2$ boundedness. We identify situations in which a Kohn-Nirenberg operator is a bounded linear operator on $L^2(\mathbb{R})$ and yields a Weyl-Heisenberg preframe operator.
\begin{theorem} \label{KN1}
Suppose $g \in \mathcal P_{a,b}$ satisfies any one of the following conditions:
i) $\underset{m,n}{\Sigma} \ | \langle f, E_{mb}T_{na}g\rangle | \leq \beta \|f\|_2$ for some $\beta >0$ and for all $f \in A_1(\mathbb{R})$;
ii) $\underset{n}{\Sigma} |g(x-na)| \leq A$ and $\underset{k}{\Sigma} |g(x-k/b)| \leq B$ for a.e. $x\in \mathbb{R}$ for some positive constants $A, B$ and $0< ab \leq 1$.
In each of these cases, both of the following assertions hold.
a) $K_{\Psi}$ is defined on a dense subspace $D$ of $L^2(\mathbb{R})$,
$K_{\Psi}(f) \in L^2(\mathbb{R})$ for all $f \in D$ and $K_{\Psi}$ extends to a positive, bounded linear operator on $L^2(\mathbb{R})$.
b) $(g,a,b)$ is a Bessel sequence with preframe operator $S = K_{\Psi}$.
\end{theorem}
\begin{proof}
We prove a) in each case and b) will follow easily from known results.
\\i) Take $D$ as the dense subspace $A_1(\mathbb{R})$ of $L^2(\mathbb{R})$. Since $g \in \mathcal P_{a,b}$, for all $f\in D$, as in the proof of the representation theorem, we have
\hspace{2cm}$\mathcal F^{-1}M_{\Psi_{x}}\mathcal Ff(x) = \Sigma_{m,n}\ \langle f, E_{mb}T_{na}g\rangle \ E_{mb}T_{na}g(x)$
\\for a.e. $x \in \mathbb{R}$. Using this and applying the Schwarz inequality we have
\\
\noindent $\int_{\mathbb{R}} |K_{\Psi}(f)(x)|^2 \ dx$
\\\hspace*{1cm} $= \int_{\mathbb{R}} |\mathcal F^{-1}M_{\Psi_{x}}\mathcal{F}f(x)|^2 \ dx $
\\\hspace*{1cm} $= \int_{\mathbb{R}} |\Sigma_{m,n} \ \langle f, E_{mb}T_{na}g\rangle \ E_{mb}T_{na}g(x)|^2 \ dx $
\\\hspace*{1cm} $\leq \int_{\mathbb{R}} \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle| \ |E_{mb}T_{na}g(x)|
\Sigma_{k,l} \ | \langle f, E_{kb}T_{la}g\rangle| |E_{kb}T_{la}g(x)| \ dx $
\\\hspace*{1cm} $= \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle| \Sigma_{k,l} \ | \langle f, E_{kb}T_{la}g\rangle|
\int_{\mathbb{R}} |E_{mb}T_{na}g(x)||E_{kb}T_{la}g(x)| \ dx $
\\\vspace*{2mm}\hspace*{1cm} $\leq \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle| \ \Sigma_{k,l} \ |\langle f, E_{kb}T_{la}g\rangle| \|E_{mb}T_{na}g\|_2 \ \|E_{kb}T_{la}g\|_2$
\\\vspace*{2mm}\hspace*{1cm} $= (\Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle|)^2 \|g\|_2^2 $
\\\vspace*{2mm}\hspace*{1cm} $\leq \beta^2 \|f\|_2^2 \|g\|_2^2$.
\noindent Thus $K_{\Psi}f \in L^2(\mathbb{R})$ and $K_{\Psi}$ is a bounded linear operator on $D$. By denseness of $D$, it extends to a bounded operator on $L^2(\mathbb{R})$.
Now if $T$ is the preframe operator of the Bessel sequence $(g,a,b)$, we have, for $f\in D$,
$ \langle K_{\Psi}(f), f\rangle = \int_{\mathbb{R}} \mathcal F^{-1}M_{\Psi_x}\mathcal F f(x)\ \bar f(x)\ dx$
\\\hspace*{2.1cm} $=\int_{\mathbb{R}} \Sigma_{m,n} \ \langle f, E_{mb}T_{na}g\rangle \ E_{mb}T_{na}g(x)\ \bar f(x) \ dx$
\\\hspace*{2.1cm} $= \int_{\mathbb{R}} Tf(x) \bar{f}(x) dx$
$= \langle Tf, f \rangle $
\\\vspace*{2mm}\hspace*{2.1cm} $= \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle|^2$
$\geq 0.$
\\\noindent This shows that $K_{\Psi}$ is a positive operator.
ii) The assumed conditions on $g$ give the estimate
\\\vspace*{2mm}\hspace*{1cm}
$\Sigma_k |G_k(x)| \leq \Sigma_{k,n} |g(x-na) \bar g(x-na+k/b)|$
\\\vspace*{2mm}\hspace*{2.4cm} $= \Sigma_n |g(x-na)| \Sigma_k |\bar g(x-na+k/b)|$
$\leq AB,$
\noindent and this, in turn, yields the estimate
$\Sigma_k |G_k(x)|^2 \leq (\Sigma_k |G_k(x)|)^2 \leq (AB)^2.$
In this case the dense subspace $D$ we consider is the space of compactly supported, bounded functions in $L^2(\mathbb{R})$. By Proposition 2.4 of \cite{CCJ} the series $(1/b) \Sigma_k (T_{k/b}f) G_k$ converges unconditionally in the norm of $L^2(\mathbb{R})$ and
\vspace*{2mm} $\langle (1/b) \Sigma_k (T_{k/b}f) G_k, f\rangle = \Sigma_{m,n} |\langle f, E_{mb}T_{na}g\rangle|^2, f\in D$.
\vspace*{2mm} But
$(1/b) \Sigma_k (T_{k/b}f)(x) G_k(x) = \mathcal F^{-1}M_{\Psi_x}\mathcal F f(x)
= K_{\Psi}(f)(x)$ for a.e. $x$.
\noindent Hence $K_{\Psi}(f) \in L^2(\mathbb{R})$ for $f\in D$ and (see \cite{CCJ}, proof of Proposition 2.4)
\noindent $\|K_{\Psi}(f) \|_2 ^2 = \|(1/b) \Sigma_k (T_{k/b}f) G_k\|_2^2 $
$\leq \int \ |f(x)|^2 \Sigma_k|G_k(x)|^2 \ dx$
$\leq (AB)^2 \|f\|_2^2$.
\noindent \vspace*{2mm}Thus $\|K_{\Psi}(f) \|_2 \leq AB \|f\|_2$, and $K_{\Psi}$ is clearly linear and bounded on $D$ and so extends to the whole of $L^2(\mathbb{R})$. The operator is positive
since
\\\hspace*{2cm}
$\langle K_{\Psi}(f), f\rangle = \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle|^2 \ \geq 0$
for $f\in D$.
\vspace*{2mm}To see b), note that the upper frame inequality is a consequence of the assumption on $g$ in case i). In case ii), $\Sigma_k |G_k(x)|$ is bounded almost everywhere and so by a well known result of Casazza and Christensen (Theorem 9.1.5, \cite{C}), $(g,a,b)$ is a Bessel sequence. The last assertion is clear since, in both cases,
\vspace*{2mm}$\langle K_{\Psi}(f), f\rangle = \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle|^2 = \langle Sf, f\rangle$.
\end{proof}
If either $g \in \mathcal E := \cap \{ \mathcal {E}_{a,b} : 0 < ab < 1 \}$ (note that $\mathcal E$ itself is a large space containing the Schwartz space) or is a compactly supported bounded function in $\mathcal P_{a,b}$ and $0<ab \leq 1$, then condition ii) of Theorem \ref{KN1} is satisfied so that $K_{\Psi}$ extends to a bounded, positive linear operator on $L^2(\mathbb{R})$ and becomes the preframe operator of the Bessel sequence $(g,a,b)$. In particular, the assertion holds if $g \in C_c^{\infty}(\mathbb{R})$. Thus for a large class of generators of Gabor frames in $L^2(\mathbb{R})$, the corresponding frame operator is the Kohn-Nirenberg operator $K_{\Psi}$, whose Kohn-Nirenberg symbol $\Psi$ is explicitly given in terms of the Gabor atom $g$ and the frame parameters $a$ and $b$ by
$$\Psi(x, \xi) = \Sigma_m\ \Sigma_n\ g(x-na)\ \check{\overline{g}}(\xi - mb) e^{-2\pi i(x-na)(\xi - mb)}.$$
In this expression of $\Psi$, the symmetry in time and frequency aspects, the equal importance given to both the frame parameters $a$ and $b$ and the independence of the adjoint lattice parameters $1/a$ and $1/b$ are significant.
Even if $g \in L^2(\mathbb{R})$ is not meeting the requirements of Theorem \ref{KN1}, as discussed in \cite{EP1}, it is possible to approximate the frame operator of a Weyl-Heisenberg frame $(g,a,b)$ in $ L^2(\mathbb{R})$ by preframe operators generated by window functions chosen from the class $C_c^{\infty}(\mathbb{R})$. Since $C_c^{\infty}(\mathbb{R})$ functions meet the requirements of Theorem \ref{KN1}, the corresponding preframe operators are all Kohn-Nirenberg operators. Hence in view of Lemma 5 in \cite{EP1}, we have the following:
\begin{corollary}
For every Weyl-Heisenberg frame operator $S$ on $L^2(\mathbb{R})$, there is a sequence $\{K_{\Psi_j}\}$ of Kohn-Nirenberg operators on $L^2(\mathbb{R})$ such that $\lim_j \langle K_{\Psi_j} f, f \rangle = \langle Sf, f \rangle$ for $f\in B_c(\mathbb R)$.
\end{corollary}
Analogously, other results on approximations subsequent to Lemma 5 in \cite{EP1} can also be restated in terms of Kohn-Nirenberg operators on $L^2(\mathbb{R})$.
The next corollary, showing that the operator $K_{\Psi}$ is useful for characterising Weyl-Heisenberg frames in $L^2(\mathbb{R})$, is immediate from Theorem \ref{KN1}.
\begin{corollary}\label{KN2}
Let $g$ be as in the Theorem \ref{KN1}. Then $(g,a,b)$ is a Weyl-Heisenberg frame if and only if the Kohn-Nirenberg operator $K_{\Psi}$ is bounded below: there is a positive constant $\alpha$ such that $ \langle K_{\Psi}f, f\rangle \geq \alpha \|f\|_2^2,f\in L^2(\mathbb{R})$.
\end{corollary}
As a simple application of our methods, we look at the Walnut representation of the Weyl-Heisenberg frame operator $S$, presented in \cite{C} as
\\\hspace*{2cm}
$ Sf(x) = (1/b) \underset{k \in \mathbb{Z}}{\Sigma} (T_{k/b} f)(x) G_{k}(x)$ for all $f\in L^2(\mathbb{R})$,
\\\noindent where the series is absolutely convergent for almost all $x\in \mathbb{R}$.
A thorough discussion on this can also be found in \cite{CCJ}.
\begin{proposition}
Let $g \in \mathcal P_{a,b}$ generate a frame $(g, a, b)$ with $S$ as its frame operator. Then the representation of $S$ as the pseudo-differential operator with symbol $\Psi$ yields the Walnut representation and conversely.
\end{proposition}
\begin{proof}
It is easy to see that
$\mathcal F^{-1} M_{\Psi_{x}} \mathcal F f
= (1/b) \Sigma G_{k}(x) T_{(k/b)} f$, using the absolutely convergent Fourier expansion of
$\Psi$ obtained in Proposition \ref{FSP}.
If $f\in A_1(\mathbb{R})$, then the right side is uniformly convergent and so each side is
a continuous function. Evaluating
at $x$, we get the Walnut representation.
Conversely, from the Walnut representation we can get
$ Sf(x) \ \ =((1/b)\Sigma G_{k}(x)\mathcal F^{-1} \mathcal F T_{k/b}) f(x)$
\\\hspace*{1.5cm} $ =\mathcal F^{-1} ((1/b)\Sigma G_{k}(x) \mathcal F T_{k/b}) f(x)$
\\\hspace*{1.5cm} $= \mathcal F^{-1} ((1/b)\Sigma G_{k}(x) E_{k/b} \mathcal F)f(x)$
\\\hspace*{1.5cm} $= \mathcal F^{-1} \Psi_{x}\mathcal F(f)(x)$, again by Proposition \ref{FSP}.
\end{proof}
The series $Sf = (1/b) \Sigma_k (T_{k/b} f) G_{k}$ in the Walnut representation converges in norm for all $f\in L^2(\mathbb{R})$ whenever $g \in W$, and in this case the operator is norm bounded (see 6.3.2 of \cite{G}). The first part of the proof above shows that the same conclusions hold if $g \in \mathcal {P}_{a,b}$ satisfies the conditions of Theorem \ref{KN1}.
The following observation highlights the significance of the symbol $\Psi$ in characterising Weyl-Heisenberg frames.
\begin{theorem}\label{CWF}
If $g \in \mathcal E_{a,b}$ is such that $\Psi(x,\xi)$ is a function of $\xi$ alone, say $ \psi_2 (\xi) = \Psi (x,\xi)$ a.e., then $(g,a,b)$ is a frame if and only if there are constants $\alpha,\beta$ such that
$0<\alpha \leq \Psi(x, \xi) \leq \beta$ for a.e. $x,\xi \in \mathbb{R}$.
\end{theorem}
\begin{proof}
Under the hypothesis, $ \langle \mathcal {F}^{-1} M _{\Psi_{x}}\mathcal {F}f, f \rangle = \langle \psi_{2}\widehat{f}, \widehat{f} \rangle, f \in L^{2}(\mathbb{R})$.
If $(g,a,b)$ is a frame with frame operator $S$, then
$\alpha \|f\|^{2} \leq \langle Sf, f \rangle \leq \beta \| f \|^{2}$, say,
for all $f \in L^{2}(\mathbb{R})$.
But $Sf(x) = \mathcal {F}^{-1}M_{\psi_{2}}\mathcal {F}f(x)$, so $S = \mathcal {F}^{-1}M_{\psi_{2}}\mathcal {F}$. Thus $\alpha I \leq M_{\psi_2} \leq \beta I$
and so $\psi_2$ satisfies the asserted inequalities.
Conversely, suppose $0<\alpha \leq \Psi(x,\xi) \leq \beta$ for a.e. $x,\xi \in \mathbb{R}$.
Since $\|f\|^{2} = \|\widehat{f}\|^{2}$, we have
$\alpha \|f\|^{2} \leq \langle \psi_{2}\widehat{f}, \widehat{f} \rangle \leq \beta \| f \|^{2}$ for all
$f \in L^{2}(\mathbb{R})$ and
\begin{align*}
\mathcal {F}^{-1} M_{\psi_{2}}\mathcal {F}f(x) &=
\Sigma_{m,n} \ \langle f, E_{mb}T_{na}g\rangle\, E_{mb}T_{na}g(x),\\
\langle \psi_{2}\widehat{f}, \widehat{f} \rangle = \langle \mathcal {F}^{-1} M _{\psi_{2}}\mathcal {F}f, f \rangle
&= \Sigma_{m,n} \ | \langle f, E_{mb}T_{na}g\rangle |^{2}.
\end{align*}
\noindent These inequalities, in tandem, prove that $(g,a,b)$ is a frame.
It remains to get the series for $\psi_2$. By arguments analogous to those used in the proof of Proposition \ref{FSP},
$\Psi^{\xi}(x) = \Psi(x,\xi)$ has an absolutely convergent Fourier series
$(1/a) \Sigma_k \gamma_{k} e^{2 \pi i(k/a) x}$
where the coefficients are given by
$\gamma_k = \gamma_{k} (\xi) = \Sigma_m \
\check{\overline{g}}(\xi-mb) \check{\overline{g}}(\xi-mb+k/a)$.
\noindent If $\Psi$ is independent of $x$, this Fourier series reduces to the constant
$\gamma_0 = \psi_2(\xi) = (1/a) \Sigma_m |\check{\overline{g}}(\xi - mb)|^{2}$.
\end{proof}
A similar observation can be made when $\Psi(x,\xi)$ is a function of $x$ alone; consequently, a version of the famous \textit{painless nonorthogonal expansion} (\cite{DGM}) can be given when the Gabor atom lies in $\mathcal {P} :=\cap \{\mathcal {P}_{a,b}: ab < 1\}$. Another version of this is also possible using Theorem \ref{CWF}, when the Gabor atom is in $\mathcal {E}$ and the support condition is imposed in the \textit{Fourier domain}.
\begin{corollary}
Suppose $g \in \mathcal E$ is such that the support of $\widehat{g}$ lies in the interval $ [0, L]$. Then for any $a,b$ with $ b \leq L <1/a$, the associated $\Psi$ is independent of the first variable $x$: $\Psi(x,\xi) = \psi_2(\xi)$ for almost all $x$ and for all $\xi$ for which the series for $\psi_2$ converges, and the conclusions of the last theorem hold.
\end{corollary}
\begin{proof} Observe that
$\Psi$ is bounded with respect to both $x$ and $\xi$, since
\[
|\Psi(x,\xi)|
\leq \underset{n \in \mathbb{Z}}{\Sigma} \ | g(x-na)| \ \underset{m \in \mathbb{Z}}{\Sigma}\ | \check{\overline{g}}(\xi - mb)|
\leq AB.
\]
Since $\Psi$ is $a$-periodic in $x$, we can consider its Fourier series as above. The assumed properties of the support of $\widehat{g}$ imply that $\gamma_{k}(\xi) = 0$ for
$k \neq 0$ as before, so that
$ \Psi(x,\xi) = \gamma_{0}(\xi) = (1/a)\, \Sigma_m |
\check{\overline{g}}(\xi - mb)|^{2}$.
The last part is clear.
\end{proof}
\bibliographystyle{amsplain}
\begin{document}
\twocolumn[
\aistatstitle{Beyond the Policy Gradient Theorem \\for Efficient Policy Updates in Actor-Critic Algorithms}
\aistatsauthor{ Romain Laroche \And R\'emi Tachet des Combes }
\aistatsaddress{ \hspace{0.15cm} Microsoft Research Montr\'eal \hspace{2.5cm} Microsoft Research Montr\'eal } ]
\begin{abstract}
In Reinforcement Learning, the optimal action at a given state is dependent on policy decisions at subsequent states. As a consequence, the learning targets evolve with time and the policy optimization process must be efficient at unlearning what it previously learnt. In this paper, we discover that the policy gradient theorem prescribes policy updates that are slow to unlearn because of their structural symmetry with respect to the value target. To increase the unlearning speed, we study a novel policy update: the gradient of the cross-entropy loss with respect to the action maximizing $q$, but find that such updates may lead to a decrease in value. Consequently, we introduce a modified policy update devoid of that flaw, and prove its guarantees of convergence to global optimality in $\mathcal{O}(t^{-1})$ under classic assumptions. Further, we assess standard policy updates and our cross-entropy policy updates along six analytical dimensions. Finally, we empirically validate our theoretical findings.
\end{abstract}
\section{INTRODUCTION}
The policy gradient theorem, derived for the first time in~\cite{Williams1992}, is seminal to all of policy gradient theory~\citep{sutton2000policy,konda2000actor,Ahmed2019,kumar2019sample,zhang2020global,qiu2021finite}, and to actor-critic algorithmic innovations~\citep{mnih2016asynchronous,silver2016mastering,vinyals2019grandmaster}. In this paper, we discover that if, starting from a uniform policy, $n$ policy gradient updates have been performed with respect to some values $q$, then at least as many policy gradient updates with respect to opposite values will be needed to return to a uniform policy. We argue that unlearning in $\mathcal{O}(n)$ is too slow for efficient Reinforcement Learning~\citep[RL,][]{Sutton1998}. Indeed, the optimal action at a given state is dependent on policy decisions at subsequent states. As a consequence, the learning targets evolve with time and an efficient policy optimization process must be fast at unlearning what it previously learnt. We show further that the unlearning slowness of policy gradient updates critically compounds with the number of such chained decisions, as well as with decaying learning rates and/or state visitations. The structural flaw of the policy gradient lies in its symmetry with respect to the $q$-function.
We therefore look for alternative solutions. As described in \cite{Agarwal2019}, the direct parametrization update also applies a classic policy gradient update, but follows it with a projection on the probability simplex. This projection breaks the symmetry, like a wall preventing the parameters from going further forward but still allowing them to go backwards. Unfortunately, an adaptation of the direct parametrization to non-tabular settings, \textit{i.e.} to function approximation, remains an open problem because the projection cannot be readily differentiated.
To overcome this limitation, we study a novel policy update that improves the unlearning speed: the cross-entropy policy update. It consists in updating the parameters with the gradient of the cross-entropy loss between the output of a softmax parametrization and the current local optimal action $a_q(s)=\argmax_{a\in\mathcal{A}}q(s,a)$. This policy update displays a consistent empiric convergence to global optimality in our experiments. But unfortunately, our analysis reveals that such updates may at times lead to a decrease in value, which is a serious dent in its theoretical grounding. We conjecture its convergence and global optimality to be true, but they remain an open problem.
In the meantime, we propose to alter the cross-entropy loss gradient in order to guarantee monotonicity of the value function. We prove that the resulting modified cross-entropy policy update converges in $\mathcal{O}(t^{-1})$ to a global optimal under the set of assumption/conditions made in \cite{Laroche2021}. We pursue our theoretical analysis with an overview of the main policy updates used in the literature, and analyze them along six axes: convergence to global optimality, asymptotic convergence rates, sensitivity to the gravity well exposed in \cite{Mei2020b}, unlearning speed, compatibility with stochastic updates, and adaptability to function approximation. Due to space limitations, proofs for all propositions and theorems were moved to Appendix \ref{app:theory}.
Finally, we empirically validate our analysis on diverse finite MDPs. The results show that the cross-entropy softmax updates are as efficient as the direct parametrization updates on hard planning tasks, on which policy gradient methods fail to converge to optimality in a reasonable amount of time. Due to space limitations, some details regarding implementation choices, application domains, and experimental results were moved to Appendix \ref{app:Experiments}. Moreover, the code is attached to the proceedings.
Our contributions are summarized below:
\begin{itemize}
\item We identify the slow unlearning behaviour of policy updates following the policy gradient theorem.
\item We develop two novel policy updates based on the cross-entropy loss tackling the aforementioned issue.
\item We assess standard policy updates and our policy updates along six analytical dimensions.
\item We empirically validate our theoretical findings.
\end{itemize}
\section{FAMILIES OF POLICY UPDATES}
\label{sec:updates}
The objective for an agent consists in maximizing the sum of discounted rewards:
\begin{align}
\!\!\mathcal{J}(\pi) &\doteq \mathbb{E}\!\!\left[\sum_{t=0}^\infty \gamma^t R_t\bigg|\begin{array}{cc}
\!\!\!\!S_0\sim p_0, S_{t+1}\sim p(\cdot|S_t,A_t),\!\!\!\! \\
\!\!\!A_t\sim \pi(\cdot|S_t), R_t\sim r(\cdot|S_t,A_t)\!\!\!\!\!
\end{array} \right]\!\!,\!\! \label{eq:obj}
\end{align}
where state $S_0$ is sampled from the initial state distribution $p_0$, and at each time step $t\geq 0$, action $A_t$ is sampled from the current policy $\pi$, reward $R_t$ is sampled according to the reward kernel $r$, and next state $S_{t+1}$ is sampled according to the transition kernel $p$. $0\leq\gamma<1$ is the discount factor ensuring that the infinite sum of rewards converges.
In this paper, we will consider policies $\pi_\theta$, parametrized by $\theta$, that get recursively updated as follows:
\begin{align}
\theta' \leftarrow U(\theta, d, q,\eta),
\end{align}
where $d(s)$ is classically the state-visit distribution induced by the current policy $\pi_\theta$, but may be any state distribution following the generalized policy update in \cite{Laroche2021}, $q$ is the current state-action value function estimate for $\pi_\theta$, and $\eta$ is a learning rate scalar.
\subsection{PG updates: $U_\textsc{pg}$-$U_\textsc{pg-sm}$-$U_\textsc{pg-es}$}
A natural approach is to take the gradient of $\mathcal{J}(\pi_\theta)$ with respect to its parameters. This corresponds to the update prescribed by the policy gradient theorem:
\begin{align}
U_\textsc{pg}(\theta, d, q,\eta)\! \doteq\! \theta \!+\! \eta \sum_{s\in\mathcal{S}} d(s)\! \sum_{a\in\mathcal{A}} q(s, a) \nabla_{\theta} \pi_\theta(a|s), \label{eq:pg}
\end{align}
where $\nabla_{\theta} \pi_\theta$ denotes the gradient of $\pi_\theta$ with respect to its parameters $\theta$.
In practice, the vast majority of practitioners use update $U_\textsc{pg}$ in Eq. \eqref{eq:pg} with a softmax parametrization:
\begin{align}
\pi_\theta(a|s) &\doteq \frac{\exp(f_\theta(s,a))}{\sum_{a'\in\mathcal{A}}\exp(f_\theta(s,a'))} \label{eq:softmax}\\
\nabla_\theta \pi_\theta(a|s) &= \pi_\theta(a|s) [\nabla_\theta f_\theta(s,a) \nonumber \\
&\hspace{1.5cm}- \sum\nolimits_{a'} \nabla_\theta f_\theta(s,a') \pi_\theta(a'|s)]\label{eq:softmax_grad}
\end{align}
where $f_\theta:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is typically a neural network parametrized by its weights $\theta$. We let $U_\textsc{pg-sm}$ denote the update resulting from this parametrization.
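As a concrete illustration, a minimal tabular NumPy sketch of the expected update $U_\textsc{pg-sm}$ of Eqs. \eqref{eq:pg}--\eqref{eq:softmax_grad} follows; it is our own illustrative reconstruction (the names and array shapes are our choices), not the code attached to the proceedings.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def u_pg_sm(theta, d, q, eta):
    # theta, q: (S, A) arrays; d: (S,) state distribution.
    pi = softmax(theta)
    v = (pi * q).sum(axis=1, keepdims=True)  # state values under pi
    # softmax policy gradient direction: d(s) pi(a|s) (q(s,a) - v(s))
    return theta + eta * d[:, None] * pi * (q - v)
\end{verbatim}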
Recently, \cite{Mei2020b} discovered that softmax's policy gradient has two issues: slow convergence (aka \textit{damping}) and sensitivity to parameter initialization (aka \textit{gravity well}). They propose the escort transform to address them:
\begin{align}
\pi_\theta(a|s) &\doteq \frac{|f_\theta(s,a)|^p}{\lVert f_\theta(s,\cdot) \rVert_p^p} \\
\nabla_\theta \pi_\theta(a|s) &\propto \pi_\theta(a|s)^{1-\frac{1}{p}} [\nabla_\theta f_\theta(s,a) \nonumber \\
&\hspace{-0.6cm}- \sum\nolimits_{a'} \nabla_\theta f_\theta(s,a') \pi_\theta(a|s)^{1 / p} \pi_\theta(a'|s)^{1-\frac{1}{p}}] \label{eq:escort-grad}
\end{align}
where $p$ is the hyperparameter for the transformation, usually set to 2, $\lVert \cdot \rVert_p$ denotes the $p$-norm, and $\propto$ means proportional to, hiding factors in $p$ and $\lVert f_\theta(s,\cdot) \rVert_p$. We let $U_\textsc{pg-es}$ denote the update resulting from this parametrization.
We will argue in Section \ref{sec:symmetry} that updates of the form of $U_\textsc{pg}$ generally take too much time to unlearn their past steps. Consequently, we investigate other policy updates.
\subsection{Direct parametrization update: $U_\textsc{di}$}
The direct parametrization is arguably the simplest one:
\begin{align}
\pi_\theta(a|s) \doteq f_\theta(s,a) \quad&\quad \nabla_\theta \pi_\theta = \nabla_\theta f_\theta,
\end{align}
but, since $f_\theta(s,\cdot)$ must live on the probability simplex $\Delta_\mathcal{A}$, an orthogonal projection on $\Delta_\mathcal{A}$ is required after each update $U_\textsc{pg}$:
\begin{align}
U_\textsc{di}(\theta, d, q,\eta)\doteq \text{Proj}_{\Delta_\mathcal{A}}[U_\textsc{pg}(\theta, d, q,\eta)]
\end{align}
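In the tabular case, this projected step can be sketched directly in policy space; the hypothetical NumPy snippet below reuses the conventions of the sketch above and the standard sort-based Euclidean projection onto the simplex.
\begin{verbatim}
def project_simplex(v):
    # Euclidean projection of v onto the probability simplex (sort-based).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def u_di(pi, d, q, eta):
    # Gradient step taken directly in policy space, then projected back.
    return np.apply_along_axis(project_simplex, 1,
                               pi + eta * d[:, None] * q)
\end{verbatim}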
The direct parametrization has been studied quite extensively in the context of finite MDPs. We will argue in Section \ref{sec:deepimpl} that $U_\textsc{di}$ cannot be readily applied with function approximation. We thus investigate other policy updates.
\begin{table*}[t!]
\begin{tcolorbox}[tab2,tabularx={c|l||c|c|c|c|c|c}]
Id & Name & Conv. \& Opt. & Rates & Gravity Well & Unlearning* & Stochastic & Deep Impl. \\\hline\hline
$U_\textsc{pg-sm}$ & Policy gradient softmax & yes & $\mathcal{O}(t^{-1})$ & \textcolor{red}{strong} & \textcolor{red}{slow}* & yes & yes \\\hline
$U_\textsc{pg-es}$ & Policy gradient escort & yes & $\mathcal{O}(t^{-1})$ & weak & \textcolor{red}{slow}* & yes & yes \\\hline
$U_\textsc{di}$ & Direct parametrization & yes & exact & none* & fast* & \textcolor{red}{no} & \textcolor{red}{no} \\\hline
$U_\textsc{ce}$ & Cross-entropy softmax* & \textcolor{red}{no}* & $\mathcal{O}(t^{-1})$ & none* & fast* & \textcolor{red}{no} & yes \\\hline
$U_\textsc{mce}$ & Modified C-E softmax* & yes* & $\mathcal{O}(t^{-1})$* & none* & fast* & \textcolor{red}{no} & yes
\end{tcolorbox}
\caption{Summary of the theoretical analysis. The five updates are evaluated across six dimensions. Conv. \& Opt. relates to the existence of a proof of the convergence and global optimality of the update (see Section \ref{sec:conv}). Rates measures the asymptotic speed of convergence (see Section \ref{sec:asymp}). Gravity Well reports how strongly they suffer from the gravitational pull (see Section \ref{sec:gravity}). Unlearning evaluates the speed at which an update can recover from its own past updates (see Section \ref{sec:symmetry}). Stochastic reports whether the update is compatible with stochastic updates (see Section \ref{sec:expected}). Deep Impl. assesses whether the method can be implemented with function approximation (see Section \ref{sec:deepimpl}). Entries have a red font when the answer is considered disadvantageous. Entries are marked with a star when the answer is novel.}
\label{tab:analysis}
\end{table*}
\subsection{Cross-entropy update: $U_\textsc{ce}$}
Instead of the gradient of the objective function, we propose to follow the gradient derived from the classification problem of selecting the best action according to the current value function $q$: the cross-entropy loss on a softmax parametrization (see Eq. \eqref{eq:softmax}):
\begin{align}
\!\!\!U_\textsc{ce}(\theta, d, q,\eta) &\doteq \theta + \eta \sum_{s\in\mathcal{S}} d(s) \nabla_{\theta} \log \pi_\theta\left(a_q(s)|s\right), \label{eq:ce1}\\
(U_\textsc{ce})_{s,a} &= \theta + \eta d(s) \left(\mathds{1}_{a=a_q(s)}-\pi_\theta\left(a|s\right)\right), \label{eq:ce2}
\end{align}
where $a_q(s) = \argmax_{a\in\mathcal{A}} q(s,a)$ is the action that maximizes $q$ in state $s$. The cross-entropy update can be seen as a soft version of the SARSA algorithm (reached as $\eta$ tends to infinity). Unfortunately, the cross-entropy update does not guarantee a monotonic increase of the value. Indeed, an imbalance of the policy across suboptimal actions may lead to an increase in the policy for some of them, and potentially to a decrease in the policy value.
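Concretely, a tabular sketch of the update in Eq. \eqref{eq:ce2}, under the same hypothetical conventions as above (reusing the softmax helper):
\begin{verbatim}
def u_ce(theta, d, q, eta):
    # Cross-entropy step towards a_q(s) = argmax_a q(s, a).
    pi = softmax(theta)
    onehot = np.eye(q.shape[1])[q.argmax(axis=1)]  # 1 at a_q(s), else 0
    return theta + eta * d[:, None] * (onehot - pi)
\end{verbatim}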
\begin{restatable}[\textbf{Non-monotonicity of $U_\textsc{ce}$}]{proposition}{nonmonotonicity}
Updating a policy with $U_\textsc{ce}$ may decrease its value.
\label{prop:non-monotonicity}
\end{restatable}
We will consider $U_\textsc{ce}$ because it works well in practice but the non-monotonicity of its value compromises our proofs of convergence and optimality.
\subsection{Modified cross-entropy update: $U_\textsc{mce}$}
In order to solve the monotonicity issue with $U_\textsc{ce}$, we propose to modify its update such that all suboptimal actions get penalised equally, thereby correcting the imbalance responsible for the non-monotonicity of the update:
\begin{align}
(U_\textsc{mce})_{s,a_q(s)} &\doteq \theta + \eta d(s) \left(1-\pi_\theta\left(a_q(s)|s\right)\right) \label{eq:mce1} \\
(U_\textsc{mce})_{s,a\neq a_q(s)} &\doteq \theta - \frac{\eta d(s)}{|\mathcal{A}|-1} \left(1-\pi_\theta\left(a_q(s)|s\right)\right). \label{eq:mce2}
\end{align}
Note that updating $a\neq a_q(s)$ is not necessary but allows $\sum_a \theta_{s,a}$ to remain constant over time, and thus prevents the weights from diverging artificially. We prove its monotonicity under the true updates.
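In code, the only change relative to the $U_\textsc{ce}$ sketch above is that the mass removed from the suboptimal actions is spread equally among them (same hypothetical conventions):
\begin{verbatim}
def u_mce(theta, d, q, eta):
    # Modified cross-entropy step: equal penalty on all a != a_q(s).
    pi = softmax(theta)
    s_idx = np.arange(q.shape[0])
    a_star = q.argmax(axis=1)
    gap = 1.0 - pi[s_idx, a_star]                # 1 - pi(a_q(s)|s)
    upd = np.tile((-eta * d * gap / (q.shape[1] - 1))[:, None],
                  (1, q.shape[1]))
    upd[s_idx, a_star] = eta * d * gap
    return theta + upd
\end{verbatim}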
\begin{restatable}[\textbf{Monotonicity of $U_\textsc{mce}$}]{proposition}{monotonicity}
Updating a policy with $U_\textsc{mce}$ increases its value.
\label{prop:monotonicity}
\end{restatable}
\subsection{Cross-entropy related updates}
In the context of transfer and multitask learning, \cite{parisotto2016actormimic} train a set of experts on various tasks and then distill the learnt policies into a single agent via the cross-entropy loss. The idea of increasing the probability of greedy actions, while decreasing the probability of bad actions, has also been used in the field of Conservative Policy Iteration~\citep{Kakade2002,Pirotta2013,scherrer2014approximate}. Finally, the Pursuit family of algorithms introduced in the automata literature also shares some common ground with the idea~\citep{Thathachar1986EstimatorAF,Agache2002}.
\section{THEORETICAL ANALYSIS}
\label{sec:analysis}
In this section, we analyze the five policy updates presented in Section \ref{sec:updates} across six dimensions summarized in Table \ref{tab:analysis}. We first check whether there exists proof of their convergence to global optimality in Section \ref{sec:conv}. In Section \ref{sec:asymp}, we look at their asymptotic convergence rates. In Section \ref{sec:gravity}, we investigate their sensitivity to the gravity well~\citep{Mei2020b}. In Section \ref{sec:symmetry}, we define the unlearning setting and assess the updates' unlearning speed. In Section \ref{sec:expected}, we discuss the compatibility of the update rules to stochastic updates. Finally, we discuss their deep implementations in Section \ref{sec:deepimpl}.
\subsection{Convergence and global optimality}
\label{sec:conv}
The convergence and global optimality of $U_\textsc{pg-sm}$ has been proved under different sets of assumptions/conditions in \cite{Agarwal2019,Mei2020a,Laroche2021}. The convergence and global optimality of $U_\textsc{pg-es}$ has been proved in \cite{Mei2020b}. The convergence and global optimality of $U_\textsc{di}$ has been proved under different sets of assumptions/conditions in \cite{Agarwal2019,Laroche2021}.
While $U_\textsc{ce}$ displays a consistent empiric convergence to optimality in our experiments, the non-monotonicity of its value function is a dent in its theoretical grounding. We conjecture its convergence and global optimality to be true, but they remain an open problem.
Using tools from \cite{Laroche2021}, we prove the convergence of $U_\textsc{mce}$ to the global optimum on finite MDPs:
\begin{restatable}[\textbf{Convergence and optimality of $U_\textsc{mce}$}]{theorem}{optimality}
Starting from an arbitrary set of parameters $\theta_0$, we consider the process induced by $\theta_{t+1} = U_\textsc{mce}(\theta_t,d_t,q_t,\eta_t),$
where $q_t=q_{\pi_{\theta_t}}$ is the state-action value of current policy $\pi_{\theta_t}$. Then, under the assumption that the optimal policy is unique, the condition that each state $s$ is updated with weights that sum to infinity over time: $\sum_{t=0}^\infty \eta_t d_t(s)=\infty$,
is necessary and sufficient to guarantee that the sequence of value functions $(q_t)$ converges to global optimality.
\label{thm:optimality}
\end{restatable}
The necessary and sufficient condition on the infinite sum of weights is identical to that for the convergence of $U_\textsc{pg-sm}$ and $U_\textsc{di}$ in the same framework~\citep{Laroche2021}. The optimal policy uniqueness assumption is not required for $U_\textsc{pg-sm}$ and $U_\textsc{di}$, but we conjecture that this is only required for technical reasons and that the theorem holds true even without uniqueness. We run in Section \ref{sec:chainexp} an experiment to empirically support this conjecture.
\subsection{Asymptotic convergence rates}
\label{sec:asymp}
The asymptotic convergence rate of $U_\textsc{pg-sm}$ is well documented to be in $\mathcal{O}(t^{-1})$~\citep{Mei2020a,Laroche2021}. \cite{Mei2020b} prove a convergence rate in $\mathcal{O}(t^{-1})$ for $U_\textsc{pg-es}$. \cite{Laroche2021} prove that $U_\textsc{di}$ converges to an optimal policy in a finite number of steps.
The convergence of the softmax parametrization under the cross-entropy loss has been studied in the past, with a rate of $\mathcal{O}(t^{-1})$~\citep{soudry2018implicit}. Under the assumption that $U_\textsc{ce}$ converges, its rate should be the same. Theorem \ref{thm:rates} shows that the same holds true for $U_\textsc{mce}$.
\begin{restatable}[\textbf{Convergence rates of $U_\textsc{mce}$}]{theorem}{rates}
The process, assumption, and condition defined in Theorem \ref{thm:optimality} guarantee that the sequence of value functions $(q_t)$ asymptotically converges in $\mathcal{O}\left(\left(\sum_{t'=0}^t \eta_{t'} \min_{s\in\mathcal{S}} d_{t'}(s)\right)^{-1}\right)$.
\label{thm:rates}
\end{restatable}
This is the same rate as for $U_\textsc{pg-sm}$, and it reaches $\mathcal{O}\left(t^{-1}\right)$ when the learning rate is kept constant and off-policy updates prevent the state visitation from decaying, i.e. maintain $d_{t}(s)\geq d_{\mathrel{\scalebox{0.5}{$\bot$}}}>0$.
The various update rules all converge at least in $\mathcal{O}(t^{-1})$. Given that our setting of interest is RL, where the minimal theoretical cumulative regret is known to be $\mathcal{O}(\log t)$, there is no point in converging faster than $\mathcal{O}(t^{-1})$. For this reason, we consider all the updates converge \textit{fast enough}.
\subsection{Gravity well}
\label{sec:gravity}
\cite{Mei2020b} identify the softmax gravity well issue, \textit{whereby gradient ascent trajectories are drawn towards suboptimal corners of the probability
simplex and subsequently slowed in their progress toward the optimal vertex}. A condition for this to happen is for the action $a_{q_t}$ maximizing $q_t$ not to incur the maximal update step in policy space. Formally, the following condition guarantees the absence of gravity well:
\begin{align}
a_{q_t}(s) \in \argmax_{a\in\mathcal{A}}\left(\pi_{t+1}(a|s)-\pi_{t}(a|s)\right). \label{eq:gravity_condition}
\end{align}
Theorem \ref{thm:gravitywell} analyses condition \eqref{eq:gravity_condition} for all five updates $U_\textsc{pg-sm}$, $U_\textsc{pg-es}$, $U_\textsc{di}$, $U_\textsc{ce}$, and $U_\textsc{mce}$.
\begin{restatable}[\textbf{Gravity well}]{theorem}{gravitywelll}
$U_\textsc{di}$ and $U_\textsc{mce}$ are guaranteed to satisfy Eq. \eqref{eq:gravity_condition}.
$U_\textsc{pg-sm}$, $U_\textsc{pg-es}$, and $U_\textsc{ce}$ may transgress Eq. \eqref{eq:gravity_condition}. Furthermore, $U_\textsc{pg-sm}$ may not even satisfy $\pi_{t+1}(a_{q_t}(s)|s)-\pi_{t}(a_{q_t}(s)|s)\geq 0$.
\label{thm:gravitywell}
\end{restatable}
With policy gradient softmax $U_\textsc{pg-sm}$, we see in the proofs of Th.~\ref{thm:gravitywell} that if $\pi_\theta(a_{q_t}(s)|s)$ is close to 0, it is possible, and even rather easy, for $U_\textsc{pg-sm}$ to induce a larger update step in policy space for a suboptimal action than for $a_{q_t}(s)$ itself. This can last for a significant amount of time because $\pi_\theta(a_{q_t}(s)|s)$ will only observe a small update, hence many steps will be needed to escape the gravity well, as \cite{Mei2020b} empirically observed. Even worse, it may happen that the policy for $a_{q_t}(s)$ decreases.
Similarly to $U_\textsc{pg-sm}$, $U_\textsc{pg-es}$ may induce a larger update step for a suboptimal action than for the optimal one (only when $p>1$). However, that effect is dampened by the $1 - \frac{1}{p}$ power on $\pi_\theta(a|s)$ seen in Eq. \eqref{eq:escort-grad}, and the setting in which condition \eqref{eq:gravity_condition} fails to be satisfied is much more restricted. Consequently, the gravity well effect exists but is not strong enough to compromise the update efficiency, as predicted by \cite{Mei2020b}.
Regarding $U_\textsc{ce}$, it is rather easy to construct examples where condition \eqref{eq:gravity_condition} is not satisfied. However, we argue that this differs from the gravity well issue, as the action benefiting from a higher update step is not the action with the highest policy. Our experiment confirms that $U_\textsc{ce}$ does not suffer from the gravity well.
Finally, condition \eqref{eq:gravity_condition} is guaranteed to be satisfied with policy updates $U_\textsc{di}$ and $U_\textsc{mce}$.
\subsection{Unlearning what has been learnt}
\label{sec:symmetry}
In RL, the optimal action at a given state is dependent on policy decisions at subsequent states. As a consequence, the learning targets evolve with time and the policy optimization process must be efficient at unlearning what it previously learnt. We use two settings to investigate this property: the unlearning setting measures how fast each algorithm recovers from bad preliminary $q$ targets, and the domino setting evaluates how this compounds when the bad targets are chained. Our analysis reveals that the symmetry of $U_\textsc{pg}$ with respect to the $q$ target strongly slows down unlearning, and that this pitfall compounds geometrically in hard planning tasks.
\subsubsection{Unlearning setting (constant weights)}
To evaluate the ability to unlearn convergence stemming from bad preliminary $q$ targets, we consider a single-state MDP with two actions and a tabular parametrization $\theta \in \mathbb{R}^2$. Starting from an initial set of parameters $\theta_0$, we apply $n$ updates with $(q(a_1)=1,q(a_2)=0)$, and then measure the number $n'$ of ``opposite'' updates, i.e. with $(q(a_1)=0,q(a_2)=1)$, necessary to unlearn these steps, that is, to recover a policy such that $\pi_{\theta_{n+n'}}(a_1)\leq\pi_{\theta_0}(a_1)$.
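This experiment is straightforward to reproduce numerically; the hypothetical snippet below (reusing the tabular update sketches of Section \ref{sec:updates}) counts $n'$ for any softmax-parametrized update rule.
\begin{verbatim}
def unlearn_steps(update, n, eta=0.1):
    theta = np.zeros((1, 2))                 # uniform initial policy
    d = np.ones(1)
    for _ in range(n):                       # learn towards a_1
        theta = update(theta, d, np.array([[1.0, 0.0]]), eta)
    n_prime = 0
    while softmax(theta)[0, 0] > 0.5:        # unlearn back to uniform
        theta = update(theta, d, np.array([[0.0, 1.0]]), eta)
        n_prime += 1
    return n_prime
\end{verbatim}
In this sketch, $n'$ grows linearly with $n$ for \texttt{u\_pg\_sm} but only logarithmically for \texttt{u\_ce}, in line with the theorem below.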
\begin{restatable}[\textbf{Unlearning setting -- constant weights}]{theorem}{unlearn}
In the setting where $\eta$ is constant:
\begin{itemize}
\item[(i)] Under assumptions enunciated below--and verified by $U_\textsc{pg-sm}$ and $U_\textsc{pg-es}$, $U_\textsc{pg}$ needs $n' \geq n$ updates.
\item[(ii)] $U_\textsc{di}$ needs $n' = \min\{n\:;\lceil\frac{1}{\eta}\rceil\}$ updates.
\item[(iii)] $U_\textsc{ce}$ and $U_\textsc{mce}$ need $n' \leq 2 + \frac{1}{\eta}\log(1+2\eta n)$ updates.
\end{itemize}
\label{thm:unlearn}
\end{restatable}
Letting $(\theta)_1$ and $(\theta)_2$ denote $\theta$'s components, Theorems \ref{thm:unlearn}(i), \ref{thm:unlearndecay}(i), and \ref{thm:domino}(i) require the following mild assumptions:
\begin{align*}
&\text{Invariance w.r.t. }\theta\text{: } \pi_{\theta_1} = \pi_{\theta_2} \implies \nabla_\theta \pi_{\theta_1} = \nabla_\theta \pi_{\theta_2}\\
&\text{Monotonicity: } \pi_\theta(a_1)\! \geq \!\pi_0(a_1) \!\!\! \implies \!\!\! \left\{\!\!\! \begin{array}{l}
(\nabla_\theta \pi_{\theta}(a_1))_{1}\geq 0 \\
(\nabla_\theta \pi_{\theta}(a_1))_{2}\leq 0
\end{array}\right.\\
&\text{Concavity: } \pi_\theta(a_1)\! \geq \!\pi_0(a_1) \!\!\! \implies \!\!\! \left\{\!\!\!\! \begin{array}{l}
(\nabla_{\pi}(\nabla_\theta \pi_{\theta}(a_1))_{1})_{a_1}\!\! \leq \!0 \\
(\nabla_{\pi}(\nabla_\theta \pi_{\theta}(a_1))_{2})_{a_1}\!\! \geq \!0
\end{array}\right.
\end{align*}
Invariance w.r.t. $\theta$ states that any two parameters implementing the same policy have equal gradients--our theorems do not deal with overparametrization. Monotonicity states that the gradient is monotonic with $\pi$: higher components imply higher policy. It is fairly standard to expect each parameter to be assigned to an action. Concavity states that, as the policy for $a_1$ grows, the absolute value of its gradient with respect to $\theta$ diminishes. Since $\pi_\theta(a_1)$ is a function of $\theta$ from $\mathbb{R}^2$ to $[0,1]$, it is expected that the gradient diminishes as $\pi_\theta(a_1)$ grows to 1.
Theorem \ref{thm:unlearn} states that to unlearn $n$ steps towards a certain action, $U_\textsc{pg}$ requires a number of updates $n'$ at least as large as the number of steps taken initially. As shown in our experiments, this may significantly slow down convergence to the optimal policy. In contrast, $U_\textsc{di}$ updates display no asymptotic dependency in $n$, while $U_\textsc{ce}$ and $U_\textsc{mce}$ updates require a logarithmic number of steps to recover.
\subsubsection{Unlearning setting (decaying weights)}
In practice, the learning weight $\eta_t d_t(s)$ applied to the gradient is likely to decay over time, because the learning rate $\eta_t$ must decay to guarantee convergence of stochastic policy updates, and/or because the state density $d_t(s)$ decays (depending on $s$) as the policy converges.
In order to model this scenario, we reproduce the unlearning setting with a decaying learning rate: $\forall t\geq 1, \eta_t = \frac{\eta_1}{\sqrt{t}}$ for some $\eta_1>0$. From time $t=1$ to $t=n$, updates with $\eta_t$ and $(q(a_1)=1,q(a_2)=0)$ are performed; from time $t=n+1$ onwards, updates with $\eta_t$ and $(q(a_1)=0,q(a_2)=1)$ are applied. We then study $n'$ such that $\pi_{\theta_{n+n'}}(a_1)\leq\pi_{\theta_0}(a_1)$.
\begin{restatable}[\textbf{Unlearning setting -- decaying weights}]{theorem}{unlearndecay}
In the setting where $\eta_t = \frac{\eta_1}{\sqrt{t}}$:
\begin{itemize}
\item[(i)]
Under the assumptions enunciated above--verified by $U_\textsc{pg-sm}$ and $U_\textsc{pg-es}$, $U_\textsc{pg}$ needs $n' \geq 3n - 4\sqrt{n} + 1$.
\item[(ii)] $U_\textsc{di}$ updates require at most $n'$ equal to\\ $\hspace*{-5pt}\textstyle\min\left\{3n + 4\sqrt{n} + 1\:\:;\: \left(\frac{1}{\eta_1}+1\right)^{2} +\sqrt{n}\left(2+\frac{2}{\eta_1}\right)\right\}.$
\item[(iii)] $U_\textsc{ce}$ and $U_\textsc{mce}$ updates require at most $n'$ equal to \\$\hspace*{0.6cm}\textstyle\left(4+\frac{\log(1+4\eta_1 \sqrt{n})}{2\eta_1}\right)^{2} +\sqrt{n}\left(8+\frac{\log(1+4\eta_1 \sqrt{n})}{\eta_1}\right).$
\end{itemize}
\label{thm:unlearndecay}
\end{restatable}
All updates are severely affected by the decaying weights. Since the decay can stem from the learning rate actually decaying and/or from the state visit $d_t(s)$ decreasing as the behavioural policy converges, these worsened recovery rates are an argument in favor of (i) using expected updates in action space~\citep{Ciosek2018} to allow the use of a constant learning rate, and (ii) performing off-policy updates~\citep{Laroche2021} to properly control the policy update state visitation density.
\subsubsection{Domino setting}
Next, we argue that the number of updates compounds exponentially with the horizon of the task in the following sense: for a $q$ estimate to flip in one state, it is required that all future states have flipped beforehand. To illustrate this effect, we consider the domino setting: several binary decisions are taken sequentially in states $s\in\mathcal{S}=\{s_k\}_{k\in[|\mathcal{S}|]}$. The $q$-function is artificially designed as follows:
\begin{align*}
q(s_{|\mathcal{S}|},a_1) = 0 \quad\text{and}\quad &q(s_{|\mathcal{S}|},a_2) = 1 \\
\text{if } \pi_{\theta_t}(a_2|s_{k+1}) \leq \pi_{\theta_0}(a_2|s_{k+1}),\;&\left\{\begin{array}{l}
q(s_k,a_1) = 1 \\
q(s_k,a_2) = 0,
\end{array}\right. \\
\hspace{-0.2cm}\text{otherwise,}\;&\left\{\begin{array}{l}
q(s_k,a_1) = 0 \\
q(s_k,a_2) = 1.
\end{array}\right.
\end{align*}
Intuitively, the decision in state $s_{k+1}$ needs to be correct for the gradient in state $s_k$ to point in the right direction. When this happens, we say that state $s_{k+1}$ has flipped. We say that the domino setting has been solved in $t$ steps when $\pi_{\theta_t}(a_2|s_{1}) > \pi_{\theta_0}(a_2|s_{1})$.
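The designed $q$-function is easy to simulate; in the hypothetical sketch below (reusing the update rules above), $|\mathcal{S}|$ should be kept small for $U_\textsc{pg}$, whose step count grows geometrically.
\begin{verbatim}
def domino_steps(update, n_states, eta=0.5):
    theta = np.zeros((n_states, 2))           # rows are s_1, ..., s_|S|
    pi0 = softmax(np.zeros((n_states, 2)))
    d, t = np.ones(n_states), 0
    while softmax(theta)[0, 1] <= pi0[0, 1]:  # until s_1 flips
        flipped = softmax(theta)[:, 1] > pi0[:, 1]
        q = np.zeros((n_states, 2))
        q[:-1, 0] = (~flipped[1:]).astype(float)  # a_1 until s_{k+1} flips
        q[:-1, 1] = flipped[1:].astype(float)
        q[-1, 1] = 1.0                            # s_|S| always prefers a_2
        theta = update(theta, d, q, eta)
        t += 1
    return t
\end{verbatim}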
\begin{restatable}[\textbf{Domino setting}]{theorem}{domino}
Under the assumption that $\eta$ and $d(s_k)>0$ are constant, in order to solve the setting,
\begin{itemize}
\item[(i)] $U_\textsc{pg}$ updates require at least $2^{|\mathcal{S}|-1}$ steps.
\item[(ii)] $U_\textsc{di}$ updates require at most $1+\frac{|\mathcal{S}|-1}{\eta}$ steps.
\item[(iii)] $U_\textsc{ce}$ and $U_\textsc{mce}$ updates require at most: \\$\hspace*{2cm}\frac{32e^{8\eta+3}}{\eta^3}(|\mathcal{S}|-1)\log (|\mathcal{S}|-1)\text{ steps}.$
\end{itemize}
\label{thm:domino}
\end{restatable}
\myuline{Disclaimer:} The domino setting is a thought experiment for the backpropagation of an optimal policy through a decision chain. However, in practice, the updates will not flip this way. Moreover, its implementation requires a reward function of amplitude $2^{|\mathcal{S}|-1}$. We acknowledge that the domino setting makes things look worse than they really are, but we claim, with the validation of our empirical experiments, that the unlearning slowness of the policy gradient updates is a critical burden for hard planning tasks.
\subsection{Stochastic versus expected updates}
\label{sec:expected}
In Section \ref{sec:symmetry}, we showed drawbacks of $U_\textsc{pg}$'s symmetry property w.r.t. the $q$ function. It can, however, also be an asset, as it allows for stochastic updates. $U_\textsc{di}$'s projection on the simplex breaks the symmetry but, fortunately, only at the frontier of its domain. So, $U_\textsc{di}$ also allows for stochastic updates as long as they obey the classic Robbins-Monro conditions. In contrast, the cross-entropy updates $U_\textsc{ce}$ and $U_\textsc{mce}$ are asymmetric everywhere. Moreover, they ignore the $q$-value gaps, and are thus biased with stochastic updates. Therefore, they must use expected updates~\citep{Ciosek2018}. Given Theorem \ref{thm:unlearndecay}'s analysis, we argue that applying expected updates is actually necessary no matter the update type, so that constant learning rates can be used, and efficient unlearning performance attained.
\subsection{Implementation with function approximation}
\label{sec:deepimpl}
To the best of our knowledge, all actor-critic algorithms with function approximation have implemented policy gradient updates $U_\textsc{pg}$, with or without the true state visitation density: often omitting the discount factor in the state visitation~\citep{Thomas2014,Nota2020,Zhang2020deeper}, and sometimes not correcting off-policy updates~\citep{wang2016sample,espeholt2018impala,vinyals2019grandmaster,schmitt2020off,zahavy2020self}. A vast majority of the implementations also chose by default the softmax parametrization $U_\textsc{pg-sm}$. However, given their shape, implementing cross-entropy updates $U_\textsc{ce}$ and $U_\textsc{mce}$ should be straightforward. The assessment of their actual efficiency is left for future work. Implementing $U_\textsc{pg-es}$ ought not to pose any challenge either. Nevertheless, since it was already an issue with tabular representations, the instability of $U_\textsc{pg-es}$ when the parameters cross 0 may be a source of concern with function approximation.
The direct parametrization $U_\textsc{di}$ remains to be discussed for function approximation. Sparsemax~\citep{Martins2016,Laha2018} could be seen as an implementation, but, since it omits the projection step, it misses one of its fundamental features: asymmetry. Indeed, if an action is already assigned a 0 probability, a proper direct update would either decrease it and then project it back to 0, or increase it. In contrast, Sparsemax exhibits a null gradient, and therefore immobility, in both directions. Adapting $U_\textsc{di}$ to function approximation remains an open problem to this date.
\begin{figure*}[t]
\begin{center}
\scalebox{0.75}{
\begin{tikzpicture}[->, >=stealth', scale=0.6 , semithick, node distance=2cm]
\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=1]
\node[state] (x0) {$s_0$};
\node[state] (x1)[right of=x0] {$s_1$};
\node[state] (x2)[right of=x1] {$s_2$};
\node[state] (x3)[right of=x2] {$s_3$};
\node[state] (x4)[right of=x3] {$s_4$};
\node[state] (x5)[right of=x4] {$s_5$};
\node[state] (x6)[right of=x5] {$s_6$};
\node[state] (x7)[right of=x6] {$s_7$};
\node[state] (x8)[right of=x7] {$s_8$};
\node[state,accepting] (x9)[right of=x8] {$s_9$};
\node[state,accepting] (x-1)[above of=x4] {$s_9$};
\node[] (y)[above of=x5] {$r=0.8\gamma^8$};
\path
(x0) edge[above] node{$a_1$} (x-1)
(x1) edge[above] node{$a_1$} (x-1)
(x2) edge[above] node{$a_1$} (x-1)
(x3) edge[above] node{$a_1$} (x-1)
(x4) edge[above] node{$a_1$} (x-1)
(x5) edge[above] node{$a_1$} (x-1)
(x6) edge[above] node{$a_1$} (x-1)
(x7) edge[above] node{$a_1$} (x-1)
(x8) edge[above] node{$a_1$} (x-1)
(x0) edge[above] node{$a_2$} (x1)
(x1) edge[above] node{$a_2$} (x2)
(x2) edge[above] node{$a_2$} (x3)
(x3) edge[above] node{$a_2$} (x4)
(x4) edge[above] node{$a_2$} (x5)
(x5) edge[above] node{$a_2$} (x6)
(x6) edge[above] node{$a_2$} (x7)
(x7) edge[above] node{$a_2$} (x8)
(x8) edge[above] node{$a_2$} (x9)
(x8) edge[below] node{$r=1$} (x9);
\end{tikzpicture}
}
\caption{Deterministic chain with $|\mathcal{S}|=10$. Initial state is $s_0$. Reward is 0 except when entering final state $s_9$, which is represented with two icons for clarity but is a single state. $r(\cdot,a_1)$ is set such that $q(s_0,a_1)=0.8 q_\star(s_0,a_2)$.}
\label{fig:chain-env}
\end{center}
\vspace{-20pt}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfloat[\textsc{NoExplo} ($\approx$1000 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/randommdps/noexplo/results.pdf}
\label{fig:randommdpsno}
}
\subfloat[\textsc{LowOffPol} ($\approx$1000 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/randommdps/lowoffpol/results.pdf}
\label{fig:randommdpslow}
}
\subfloat[\textsc{HiOffPol} ($\approx$1000 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/randommdps/hioffpol/results.pdf}
\label{fig:randommdpshi}
}
\caption{Random MDPs: number of steps to obtain performance equal to 95\% of the gap between $\pi_\star$ and $\pi_u$.}
\label{fig:randommdps}
\end{figure*}
\section{EMPIRICAL ANALYSIS}
\label{sec:empirical}
This section compares the five updates studied so far in RL experiments (with $p=2$ for $U_\textsc{pg-es}$). In RL, many confounding factors, such as exploration or the nature of the environment, may compromise the empirical analysis. We endeavour to control these factors by:
\paragraph{Investigating three exploratory and off-policy update schemes} designed within the J\&H algorithm\footnote{The precise J\&H implementation is detailed in Appendix \ref{app:Algorithms}.}~\citep{Laroche2021}, where $\epsilon_t$ denotes the probability of giving control to Hyde, a pure exploratory agent, at the beginning of each trajectory, and where $o_t$ denotes the probability of updating the actor with a sample collected by Hyde (and therefore likely to be off-policy):
\begin{itemize}
\item \textsc{NoExplo}: no exploration: $\epsilon_t=0$ and $o_t=0$,
\item \textsc{LowOffPol}: exploration and low off-policy updates: $\epsilon_t=\min\{1\:;\frac{10}{\sqrt{t}}\}$ and $o_t=\min\{1\:;\frac{10}{\sqrt{t}}\}$,
\item \textsc{HiOffPol}: exploration and high off-policy updates: $\epsilon_t=\min\{1\:;\frac{10}{\sqrt{t}}\}$ and $o_t=\frac{1}{2}$.
\end{itemize}
\paragraph{Evaluating on three RL environments}
\begin{itemize}
\item \myuline{Random MDPs:} procedurally generated environments where efficient planning is unlikely to matter and where stochasticity plays an important role.
\item \myuline{Chain domain:} a deterministic domain with rewards misleading towards suboptimal policies, and thus where planning is the main challenge. The chain domain evaluates the unlearning ability of the updates.
\item \myuline{Cliff domain:} a slight modification of the chain domain in order to create gravity wells. In addition to their unlearning abilities, the cliff domain should evaluate the updates' resilience to the gravity well pitfall.
\end{itemize}
\subsection{Random MDPs experiments}
A Random MDP environment with $|\mathcal{S}|=100$ and $|\mathcal{A}|=4$ is procedurally generated at the start of each run. Each state-action pair is stochastically connected to two states uniformly sampled from $\mathcal{S}$, with transition probabilities generated from a uniform partition of the segment $[0,1]$. In most cases, the obtained MDP is strongly connected and exploration is barely an issue. However, stochasticity is strong and an accurate $q$ estimate is necessary to find a policy with a good performance.
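For reference, one plausible reading of this generator is sketched below (rewards are omitted; the exact generator is in the code attached to the proceedings, and the names here are ours):
\begin{verbatim}
import numpy as np

def random_mdp(n_states=100, n_actions=4, seed=0):
    rng = np.random.default_rng(seed)
    p = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            s1, s2 = rng.choice(n_states, size=2, replace=False)
            w = rng.uniform()         # random split of the segment [0, 1]
            p[s, a, s1] += w
            p[s, a, s2] += 1.0 - w
    return p                          # p[s, a] sums to 1 for every (s, a)
\end{verbatim}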
We evaluate the policies with the number of steps (each step is a transition and an update) they need to reach a normalized performance $\overline{\mathcal{J}}_\pi$ equal to 95\% of the gap between the performance of the optimal policy $\pi_\star$ and that of the uniform one $\pi_u(\cdot|\cdot)=\frac{1}{|\mathcal{A}|}$, formally $\overline{\mathcal{J}}_\pi = \frac{\mathcal{J}(\pi)-\mathcal{J}(\pi_u)}{\mathcal{J}(\pi_\star) - \mathcal{J}(\pi_{u})}$.
The result of the Random MDPs experiments is presented on Figure \ref{fig:randommdps}, where we display the number of steps to reach $\overline{\mathcal{J}}(\pi)=0.95$ versus the learning rate for the actor. The three exploration/off-policiness settings are shown on three separate subfigures: \textsc{NoExplo} in \ref{fig:randommdpsno}, \textsc{LowOffPol} in \ref{fig:randommdpslow}, and \textsc{HiOffPol} in \ref{fig:randommdpshi}.
The similarity of \textsc{NoExplo} and \textsc{LowOffPol} confirms that exploration plays a minimal role in this setting. More generally, the policy gradient updates $U_\textsc{pg-sm}$ and $U_\textsc{pg-es}$ are mostly unaffected by the presence of off-policy updates, perhaps indicating that their slowness to change policies makes them less sensitive to fine-grained $q$ optimality. It is also worth noting that $U_\textsc{pg-es}$ has the narrowest learning rate bandwidth inside which it is efficient: below it, the updates are too slow; above it, the updates get too large and induce divergence because of the \textit{bouncing} behaviour at 0.
$U_\textsc{di}$ gets the best performance in all settings.
$U_\textsc{di}$, $U_\textsc{ce}$, and $U_\textsc{mce}$ all converge to SARSA when $\eta$ tends to $\infty$; by extrapolation, we may imagine where their curves would meet and therefore estimate SARSA's performance. Thus, we observe that all the policy update algorithms do much better than SARSA on \textsc{NoExplo} and \textsc{LowOffPol}. $U_\textsc{ce}$ and $U_\textsc{mce}$ perform equally, but their strong similarity to SARSA makes them the worst on \textsc{NoExplo} and \textsc{LowOffPol}. Indeed, the stochasticity in the environment makes the $q$ predictions of the critic unstable, which prevents the cross-entropy updates from converging. Off-policy updates help because they allow bad $q$ predictions to be fixed quickly. With high off-policy updates, $U_\textsc{ce}$ and $U_\textsc{mce}$ perform much better, over a wide range of learning rates.
\begin{figure*}[t!]
\centering
\subfloat[\textsc{NoExplo}: $|\mathcal{S}|=5$ (100 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/chain/noexplo/results.pdf}
\label{fig:chainno}
}
\subfloat[\textsc{LowOffPol}: $|\mathcal{S}|=7$ (100 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/chain/lowoffpol/results.pdf}
\label{fig:chainlow}
}
\subfloat[\textsc{HiOffPol}: $|\mathcal{S}|=10$ (100 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/chain/hioffpol/results.pdf}
\label{fig:chainhi}
}
\caption{Chain: number of steps to obtain performance equal to 50\% of the gap between $\pi_\star$ and $\pi_s$.}
\label{fig:chain}
\vspace{-10pt}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfloat[\textsc{LowOffPol}: ($\approx$20 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/chain/nbstates/lowoffpol/results.pdf}
\label{fig:nbchainlow}
}
\subfloat[\textsc{HiOffPol}: ($\approx$20 runs)]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/chain/nbstates/hioffpol/results.pdf}
\label{fig:nbchainhi}
}
\subfloat[\textsc{HiOffPol}: with duplicate actions]{
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/rebuttal/duplicate-actions.pdf}
\label{fig:duplicates}
}
\caption{Chain: number of steps to obtain performance equal to 50\% of the gap between $\pi_\star$ and $\pi_s$.}
\label{fig:nbchain}
\vspace{-10pt}
\end{figure*}
\begin{figure}[b!]
\centering
\includegraphics[trim = 5pt 5pt 5pt 20pt, clip, width=0.32\textwidth]{figures/cliff/hioffpol/results.pdf}
\caption{Cliff experiment \textsc{HiOffPol}: $|\mathcal{S}|=7$.}
\label{fig:nbcliffhi}
\end{figure}
\subsection{Chain experiment}
\label{sec:chainexp}
Framing the domino setting as an RL task would have been possible, but we opted for a less processed environment, already existing in the literature. The chain domain, depicted in Figure \ref{fig:chain-env}, implements a deterministic walk along a line, starting from $s_0$ towards $s_{|\mathcal{S}|-1}$. In any state, it is possible to jump off the chain (action $a_1$) and get an immediate reward, but the optimal policy consists in walking (action $a_2$) throughout the chain. We expect the policy updates to first guide the agent towards $a_1$, and exploration and further updates to then propagate the optimal policy $\pi_\star(a_2|s)\approx 1$ and its value through the chain. Thus, starting from the end of the chain, the policy in each state will need to be flipped in order to choose the best action in the first state.
We evaluate the policies with the number of steps (each step is a transition and an update) they need to reach a normalized performance $\overline{\mathcal{J}}_\pi$ that is equal to half the gap between that of the optimal policy $\pi_\star$ and that of the suboptimal one $\pi_{s}(a_1|\cdot)=1$, formally $\overline{\mathcal{J}}_\pi = \frac{\mathcal{J}(\pi)-\mathcal{J}(\pi_s)}{\mathcal{J}(\pi_\star) - \mathcal{J}(\pi_{s})}$.
Similarly to the Random MDPs, we first look at the influence of the actor learning rate on each update rule. The results are presented in Figure \ref{fig:chain}. First, we notice that the size of the chain had to be adapted to each setting. Indeed, with no exploration and no off-policy updates, policy updates tend to completely converge to the suboptimal policy and stop updating the subsequent states much more easily. In all the settings, the cross-entropy updates $U_\textsc{ce}$ and $U_\textsc{mce}$ yield the best results, even better than $U_\textsc{di}$. In \textsc{LowOffPol} (Figure \ref{fig:chainlow}) and \textsc{HiOffPol} (Figure \ref{fig:chainhi}), this might just be a hyperparameter shift, but in \textsc{NoExplo} (Figure \ref{fig:chainno}), the advantage is real, which was unexpected to us since they unlearn a bit more slowly in theory. This might be due to the fact that they can both learn and unlearn fast while still exploring. In contrast, either $U_\textsc{di}$ has a small learning rate and is slow to unlearn, or it has a large learning rate and completely stops exploring too fast. This analysis seems confirmed by the fact that $U_\textsc{di}$ yields a performance similar to that of $U_\textsc{pg-sm}$ and $U_\textsc{pg-es}$, suggesting it never benefits from its projection step's ability to break the symmetry of the policy gradient.
We conduct an additional experiment where we observe how each update behaves when the length of the chain grows (Figures \ref{fig:nbchainlow} and \ref{fig:nbchainhi}). We set their best learning rate:
$\eta_\textsc{pg-sm}=10,\eta_\textsc{pg-es}=1,\eta_\textsc{di}=1,\eta_\textsc{ce}=1,$ and $\eta_\textsc{mce}=1$ for \textsc{LowOffPol} and \textsc{HiOffPol}. We observe that $U_\textsc{pg-sm}$ and $U_\textsc{pg-es}$ have approximately the same behaviour and are much slower to converge to the optimal policy. $U_\textsc{pg-es}$ has more visible variance because it sometimes diverges (and fewer runs have been performed for this experiment). With sufficient exploration, $U_\textsc{di}$, $U_\textsc{ce}$, and $U_\textsc{mce}$ have the same behaviour granted that their learning rate is set sufficiently high (setting $\eta_\textsc{di}=1$ was slightly low, and this is the reason why $U_\textsc{di}$ is a bit slower).
To further investigate the uniqueness assumption of Theorem \ref{thm:optimality}, we run our chain environment ($|\mathcal{S}|=10$) where optimal actions have been duplicated $|\mathcal{A}|-1$ times. The results are displayed on Figure \ref{fig:duplicates}. They show no convergence delay for $U_\textsc{ce}$, as compared with $U_\textsc{di}$, and a slight delay for $U_\textsc{mce}$. Overall, this experiment supports our conjecture that the uniqueness assumption is an artefact of our proof technique and not a structural flaw of the policy update.
\subsection{Cliff experiment}
The cliff experiment is similar to the chain except that it includes a third action $a_3$, which jumps into the abyss: $r(s,a_3)=0$ and the episode terminates. It has been designed to create gravity wells: the policy will first converge to $a_1$, and will eventually be able to converge to the optimal $a_2$ with enough exploration and updates. As with the chain domain, we report the time to reach $\overline{\mathcal{J}}_\pi=0.5$.
The results are presented in Figure \ref{fig:nbcliffhi}. As expected, despite the cliff being rather short: $|\mathcal{S}|=7$, $U_\textsc{pg-sm}$ is very slow to converge to the optimal policy, which we interpret as a manifestation of its gravity well. $U_\textsc{pg-es}$ does significantly better but still hits a performance wall with high learning rates because of its divergence behaviour. With its best learning rate, $U_\textsc{pg-es}$ shows more or less the same relative performance gap with the best updates as in the chain domain. We thereby attribute the gap to its unlearning slowness rather than to a gravity well effect.
$U_\textsc{di}$, $U_\textsc{ce}$, and $U_\textsc{mce}$ display the best performance, in a similar fashion to the chain domain, except that $U_\textsc{ce}$ and $U_\textsc{mce}$ do not behave exactly identically (though still similarly). Indeed, with small learning rates, $U_\textsc{mce}$ does a bit better than $U_\textsc{ce}$. We naturally attribute this to its ability to perform monotonic updates, which should help it converge faster when starting from a bad parameter initialization caused by past convergence to the suboptimal solution.
\section{CONCLUSION}
In this paper, we identified the unlearning slowness implied by policy gradient updates in actor-critic algorithms. We proposed several alternatives to these updates: the direct parametrization, and two novel cross-entropy-based updates, including one for which we prove convergence to optimality at a rate of $\mathcal{O}(t^{-1})$, under classic assumptions. We further extend the analysis to the study of their ability to enter a gravity well, their unlearning speed, and various implementation constraints. Finally, we empirically validate our theoretical findings on finite MDPs.
In the future, we would like to extend convergence/optimality proofs of policy updates to the stochastic and approximate case, as results of this kind are starting to emerge for softmax policy gradient updates~\citep{zhang2021global,zhang2022chattering}. We would also like to study other non-policy-gradient policy updates, and find some that solve both the policy gradient unlearning slowness and the stochasticity issue related to the cross-entropy updates. Finally, we would like to investigate the relationship of these policy updates with the function approximators used in complex tasks.
\bibliographystyle{apalike}
\section{Introduction}
Blue straggler stars (BSSs) are brighter and bluer (hotter) than the
main sequence (MS) turnoff and they mimic a rejuvenated stellar
population in globular clusters (GCs). From their position in the
colour-magnitude diagram (CMD) and from direct measurements (Shara et
al. 1997; see also Ferraro, Fusi Pecci \& Bellazzini 1995), they are
known to be more massive than the normal MS stars, thus indicating
that some process able to increase the initial mass of single stars
must be at work: BSSs could be generated by collision-induced stellar
mergers (COL-BSSs; Hills \& Day 1976), or they may form by the
mass-transfer activity in a binary system (MT-BSSs; McCrea 1964),
possibly up to the complete coalescence of the two companions. BSSs in
different environments could have different origins (Fusi Pecci et
al. 1992). In particular, BSSs in loose GCs might be produced from the
coalescence of primordial binaries, whereas in high-density GCs BSSs
might arise mostly from stellar collisions, particularly those
involving binaries. Moreover, there is evidence that both mechanisms
could act simultaneously within the same cluster (see the two distinct
sequences of BSSs recently discovered in M30 by Ferraro et al. 2009),
also depending on the different stellar densities at various distances
from the cluster center. This is suggested by the bimodality of the
BSS radial distribution detected in several GCs (Ferraro \& Lanzoni
2009 for a review). While theoretical predictions about the
properties of BSSs generated by different formation channels are still
uncertain and controversial, the search for chemical patterns on the
BSS surface seems to be the most promising route to discriminate
between the two scenarios. In fact, hydrodynamic simulations (Lombardi
et al. 1995; Sarna \& DeGreve 1996) suggest that very little mixing
should occur between the inner cores and the outer envelopes of the
colliding stars (hence COL-BSSs should not show any abundance
anomaly), while depleted surface abundances of carbon (C) and oxygen
(O) are expected for MT-BSSs, since the accreted material should come
from the core region of a peeled parent star where nuclear processing
has occurred. Indeed this seems to be the case for the sub-sample of
BSSs with significant CO-depletion discovered in 47 Tucanae (47Tuc) by
Ferraro et al. (2006, hereafter F06).
In this framework, we are conducting an extensive survey of surface
abundance patterns and kinematical properties of BSSs in a selected
sample of GCs, by using the Very Large Telescope (VLT) of the European
Southern Observatory (ESO). In this letter we report on new findings
of this project concerning the GC M4.
\section{Observations}
The observations were performed at the ESO-VLT during three nights in
June 2008, using the multi-object facility FLAMES-GIRAFFE (Pasquini et
al. 2002). The sample includes 20 BSSs, 53 subgiant branch stars
(SGBs) and 38 red giant branch stars (RGBs). The spectroscopic target
selection has been performed on a photometric catalog obtained by
combining ACS@HST data for the central region (i.e., at radial
distances $r<100\arcsec$) and WFI@ESO observations for the outer
region. We have also taken into account the stellar proper motions for
the wide-field sample (Anderson et al. 2006), and discarded all the
stars having a source with a comparable or a brighter luminosity at a
distance smaller than $3\arcsec$. The selected BSS sample represents
$\sim 70\%$ of the entire population within $800\arcsec$ ($\approx 11$
core radii) from the cluster centre (Lanzoni et al. 2010, in
preparation), with $\sim 50\%$ of them being located at
$r<100\arcsec$. The RGBs and SGBs have been selected from the WFI
sample at $100\arcsec< r<800\arcsec$.
Three different setups were used for the spectroscopic observations:
HR15, HR18 and HR22, suitable to sample the H$\alpha$ line, the \Oi
triplet at $\lambda \simeq 7774$ \AA, and the \Ci line at $\lambda
\simeq 9111$ \AA, respectively. Exposure times amount to one hour for
the HR15 setup, and two hours each for the HR18 and HR22. Spectra
have been pre-reduced using the standard GIRAFFE ESO pipeline. The
accuracy of the wavelength calibration has been checked by measuring
the position of a number of emission telluric lines (Osterbrock et
al. 1996). Then, we subtracted the mean sky spectrum from each
stellar spectrum. By combining the exposures, we finally obtained
median spectra with signal-to-noise ratios S/N$\simeq 50-100$ for the
selected BSSs and SGBs, and S/N$\simeq 100-300$ for the RGBs.
\section{Analysis and results}
The procedure adopted to derive the radial and rotational velocities,
and the [O/Fe], [C/Fe] and [Fe/H] abundance ratios for the observed
sample is summarized below. Table \ref{bss} lists the values obtained
for the BSSs, together with the adopted temperatures and gravities.
\noindent
{\bf Radial velocities --} Radial velocities ($V_{\rm rad}$) were
measured using the IRAF task \textit{fxcor} that performs the Fourier
cross-correlation between the target spectra and a template of known
radial shift, following the prescriptions by Tonry \& Davis (1979). As
a template for the samples of BSSs, SGBs and RGBs we used the
corresponding spectra with the highest S/N ratio, for which we
computed $V_{\rm rad}$ by measuring the wavelength position of a few
tens of metallic lines. Radial velocity values obtained from the three
different setups are consistent with each other within the uncertainties,
which are of the order of $0.5\kms$ for the SGB stars and most of the
BSSs (see Table 1), and $0.15\kms$ for the RGBs. For the fast
rotating stars (see below) $V_{\rm rad}$ has been estimated from the
centroid of the H$\alpha$ line, which is almost unaffected by
rotation.
Figure \ref{vradisto} shows the derived $V_{\rm rad}$ distribution for
the giants (RGBs+SGBs) and the BSSs. The mean radial velocity of the
total sample is $\langle V_{\rm rad}\rangle = 71.28\pm0.50\kms$, with
a dispersion $\sigma=5.26 \kms$. The distribution for the giant stars
is peaked at nearly the same value, $\langle V_{\rm rad}\rangle =
71.25\pm0.43\ (\sigma=4.08)\ \kms$, which we adopt as the systemic
velocity of M4. This is in good agreement with previous determinations
(Peterson et al. 1995, Harris 1996, Marino et al. 2008, Sommariva et
al. 2009, Lane et al. 2009). The average of the BSS radial velocity
distribution is also similar, $\langle V_{\rm rad}\rangle
=71.40\pm2.01\kms$, but the dispersion is larger ($\sigma=9\kms$) due
to the discordant values measured for five of these stars (see Table
1).
\noindent
{\bf Rotational velocities --} To derive the stellar rotational
velocities we have used the method described by Lucatello \& Gratton
(2003). In particular we have computed the Doppler broadening due to
the rotation of the object ($I_{\rm rot}$), which is linked to the
rotational velocity ($v_{\rm rot} \sin i$) by means of a coefficient
$\alpha$: $I_{\rm rot} = v_{\rm rot} \sin i/\alpha$ (see equation 3 in
Lucatello \& Gratton 2003, where $I_{\rm rot}$ is indicated as
``$r$''). In principle, the coefficient $\alpha$ should be calibrated
by using rotational velocity standards observed with the same
instrumental configuration. Although we have not performed such a
calibration, the comparison with synthetic spectra computed for
different rotational velocities indicates that $\alpha$ is close to
unity. We have adopted $1.5\pm 0.5\kms$ for the micro-turbulent
velocity\footnote{While this value has been directly measured for only a
few SGB stars, the impact of such an assumption on the derived
rotational velocities and chemical abundances is negligible.}, and
the relation obtained by Gray (1992) for the
macro-turbulence. Temperatures have been derived by fitting the wings
of the H$\alpha$ line, which are insensitive to other parameters like
metallicity and gravity (Fuhrmann 1993). In particular, we performed a
$\chi^2$ minimization between the observed H$\alpha$ profile and a
grid of synthetic spectra computed with different temperatures, by
using SYNTHE code (Kurucz 1993). Typical uncertainties for the
temperatures are of the order of $\sim 50$ K for the BSSs, and $\sim
100-150$ K for the giants. Since the adopted technique is efficient
only for rotational velocities up to $\simeq 50 - 60\kms$, the values
of $I_{\rm rot}$ for faster stars have been estimated by comparing the
observations in the \Oi triplet region with rotationally broadened
synthetic spectra.
Figure \ref{rotisto} presents the derived rotational index
distributions: that of the giants is peaked at $I_{\rm rot}=0.0 \kms$,
with the highest value being $13.4\pm 3.4\kms$ for an SGB. The
rotational index distribution for the BSSs (see Table 1) is quite
different, with eight stars (40\% of the total) being fast rotators,
i.e., rotating at more than $50\kms$ (while normal F-G type stars
typically spin at less than $\sim 30\kms$; Cort\'es et
al. 2009). Interestingly, three (out of five) BSSs with anomalous
$V_{\rm rad}$ are also fast rotators.
\noindent
{\bf Chemical abundances --} As the reference population needed to
identify possible anomalies in the BSS surface abundances, we have
considered the SGBs, since episodes of mixing and dredge-up may have
modified the primordial abundance patterns in the RGBs. Chemical
abundances have been derived from the equivalent width measurements by
using the WIDTH9 code (Kurucz 1993; Sbordone et al. 2004). Gravities
have been determined (within 0.2 dex) by comparing the target position
in the CMD with a grid of evolutionary tracks extracted from the
\textit{BaSTI Library} (Pietrinferni et al. 2006)\footnote{The
derived values for BSSs may be affected by systematic
uncertainties, since these stars could be underluminous for their
masses (van den Bergh et al. 2001; Sandquist et al. 2003; Mathieu
\& Geller 2009). While we do not have direct spectroscopic
indicators for gravity, uncertainties of 0.2 dex translate into
abundance variations smaller than 0.1 dex.}. This also yielded
a mass distribution for the observed BSSs, which peaks at $\sim 1
M_\odot$, with the most massive object being at $\sim 1.3 M_\odot$.
Abundance errors have been computed by taking into account the
uncertainties on the atmospheric parameters and those on the
equivalent width measurements. For each star they typically are of the
order of $0.1-0.2$ dex.
The iron content has been derived from the equivalent widths of about
ten \Fei lines for the SGBs and of 2--7 \Fei lines for the BSSs. For
the SGBs the resulting average iron abundance is [Fe/H]$= -1.10 \pm
0.01$, with a dispersion $\sigma = 0.07$ about the mean, in good
agreement with previous values (ranging between $-1.20$ and $-1.07$;
Harris 1996; Ivans et al. 1999; Marino et al. 2008; Carretta et
al. 2009). Because of the significant deformation of the spectral
line profiles, no iron abundance has been derived for the eight fast
rotators. Moreover, technical failures in the spectrograph fiber
positioning prevented us from measuring it for two additional objects (see
Table 1). The iron abundance obtained for the remaining ten BSSs has
a mean value of $-1.27$ and a dispersion $\sigma=0.10$, consistent,
within the errors, with the values derived for the SGBs.
Oxygen and carbon abundances have been computed from the equivalent
widths of the \Oi triplet and the \Ci lines respectively, and the
derived values have then been corrected for non-local thermodynamic
equilibrium effects. For O abundances these corrections were derived
by interpolating the grid by Gratton et al. (1999); for C abundances
we adopted the empirical relation obtained by interpolating the values
listed by Tomkin et al. (1992). The resulting average abundances for
the SGB sample are [C/Fe]$=-0.16\pm 0.02$ ($\sigma = 0.17$) and
[O/Fe]$=0.29\pm 0.02$ ($\sigma = 0.17$). No measurements have been
possible for the very fast rotating BSSs and for a few other objects
(see Table 1). Hence, we were able to measure both the C and O
abundances only for 11 BSSs, out of 20 observed. Figure \ref{abb}
shows the results obtained in the [C/Fe]$-$[O/Fe] plane. The values
measured for the 11 BSSs are in agreement with those of the SGBs, with
no evidence of depletion either in carbon or in oxygen. We finally
note that in the three cases for which only the oxygen \textit{or}
the carbon abundance has been measured (see Table 1), the values
obtained also agree with those of the SGBs.
\section{Discussion}
Before discussing the main findings of the present work in detail, it
is necessary to verify whether any of the investigated stars might not
belong to the cluster. In particular, five BSSs have been found to
display anomalous $V_{\rm rad}$, which may cast some doubt about
their membership. However, BSSs \#42424 and \#64677 have measured
proper motions well in agreement with those of the cluster members
(Anderson et al. 2006). All the other objects have measured rotational
velocities significantly larger than expected for normal stars of the
same spectral type, thus making it unlikely that they belong to the
field. We have also used the Besan\c{c}on Galactic model (Robin et
al. 2003) to derive the radial velocity and metallicity distributions
of the Galactic field stars in the direction of M4, within the same
magnitude and colour intervals shared by our BSS sample. The $V_{\rm
rad}$ distribution is peaked at $-14.6\kms$ and has a dispersion
$\sigma = 50.7\kms$. As a consequence, the probability that the BSSs
with anomalous radial velocity belong to the field is always smaller
than 1.7\%. The theoretical metallicity distribution of field stars
is peaked at [Fe/H]$=-0.17\pm 0.02$ ($\sigma = 0.45$). The iron
abundance measured in two of the five BSSs with anomalous radial
velocity ([Fe/H]$=-1.35$ for \#42424, and $-1.23$ for \#64677) is
clearly inconsistent with the field value and concordant, within
the errors, with that of M4 stars. Based on these considerations we
therefore conclude that all the BSSs with anomalous $V_{\rm rad}$ are
indeed members of M4. We notice that if the discrepancies were caused
by the orbital motion in binary systems, under realistic assumptions
about the total mass, the orbits would be reasonable for a GC, with
separations ranging from a few to 10--20 AU. These BSSs do not show
any evidence of photometric variability (Kaluzny et al. 1997), and
variations of $V_{\rm rad}$ with full amplitudes exceeding $3\kms$ on
a time interval of 72 hours are excluded by our observations for BSSs
\#42424 and \#64677 (no such information is available for the fast
rotators). However, these results do not disprove that they
are in binary systems. For instance, they are still consistent with
what is expected for $\sim 90$\% of binaries characterized by an
eccentricity-period distribution similar to that recently observed
by Mathieu \& Geller (2009) and populating the tail of the velocity
distribution in M4 (Mathieu \& Geller, private
communication). Indeed, further observations are urgently needed to search for
clear-cut signatures of binarity.
The fact that none of the 14 BSSs for which we measured C and/or O
abundances shows signatures of depletion is quite intriguing. Out of
the 42 BSSs investigated in 47Tuc, F06 found that 6 (14\%) are
C-depleted, with 3 of them also displaying O-depletion. Accordingly,
in M4 we could have expected 1-2 BSSs with depleted carbon abundance,
and 0-1 BSS with both C and O depletion. Hence, the resulting
non-detection may just be an effect of low statistics and is still
consistent with the expectations. Alternatively the lack of chemical
anomalies in M4 BSSs might point to a different formation process:
while at least 6 BSSs (the CO-depleted ones) in 47Tuc display surface
abundances consistent with the MT formation channel, all the
(investigated) BSSs in M4 may derive from stellar collisions, for
which no chemical anomalies are expected. Finally, it is also possible
that the CO-depletion is a transient phenomenon (F06) and (at least
part of) the BSSs in M4 are indeed MT-BSSs which have already evolved back
to normal chemical abundances.
The most intriguing result of this study is the discovery that a large
fraction (40\%) of the investigated BSSs in M4 are fast rotators, with
rotational velocities ranging from $\sim 50\kms$, up to more than
$150\kms$. We emphasize that this is the largest population of fast
rotating BSSs ever found in a cluster. Approximately $30\%$ of the
BSSs in the old open cluster NGC188 have recently been found to spin
faster (at $20-50\kms$) than MS stars of the same colour (Mathieu \&
Geller 2009), while BSSs in younger open clusters are found to rotate
slower than expected for their spectral type (e.g., Shetrone \&
Sandquist 2000; Sch\"{o}nberner et al. 2001). For GCs only scarce and
sparse data have been collected to date. The most studied case is that
of 47Tuc, where 3 (7\%) BSSs out of the 45 measured objects (Shara et
al. 1997; De Marco et al. 2005; F06) have rotational velocities larger
than $50\kms$, up to $\sim 155\kms$. The object studied by Shara et
al. (1997) is the second brightest BSS in 47Tuc, and all the others
are located at the low-luminosity end of the BSS region in the CMD. In
addition they span almost the entire range of surface temperatures and
distances from the cluster centre. For comparison, apart from being
more numerous, the fast rotating BSSs in M4 are also found at all
luminosities, temperatures and radial distances (see Figure
\ref{cmd}). There is a weak indication that the rapidly rotating BSSs
in M4 tend to be more centrally segregated than normal BSSs, even if
the small number of stars in our sample prevents a statistically
robust result (following a Kolmogorov-Smirnov test, the probability
that the two distributions are extracted from the same parent
population is $\sim 44\%$). The fastest rotating BSS in M4
(\#2000121, with $I_{\rm rot} \sim 150-200\kms$) corresponds to star
V53 of Kaluzny et al. (1997), which is classified as a W UMa contact
binary. Interestingly, even in the F06 sample of 47Tuc the fastest BSS
(spinning at $\sim 100\kms$) is a W UMa binary. These two stars also
occupy very similar positions in the CMD, at the high-temperature
and low-luminosity side of the BSS region.
Unfortunately, from the theoretical point of view, rotational
velocities cannot be easily interpreted in terms of BSS formation
processes. In fact, fast rotation is expected for MT-BSSs because of
angular momentum transfer, but some braking mechanisms may then
intervene with efficiencies and time-scales that are still unknown.
The predictions about the rotational velocities of COL-BSSs are controversial
(Benz \& Hills 1987; Leonard \& Livio 1995; Sills et al. 2005). In
particular, the latter models show that angular momentum losses
through disk locking are able to decrease the BSS rotational velocity
from values as high as $\sim 100\kms$ to $20\kms$. Hence, the fast
rotating BSSs observed in M4 could be generated either through mass
transfer activity or through stellar collisions, on condition that no
significant braking has (still) occurred.
Three of the BSSs (namely \#52702, \#53809 and \#1000214) of the five
with anomalous $V_{\rm rad}$ also are fast rotators. Such a high
rotation is difficult to account for by synchronization or mass
transfer effects in binary systems, since the orbital separation would
not be small enough. A fascinating alternative is that such anomalies
in the radial and rotational velocities are due to three- and
four-body interactions that occurred in the cluster core: these could
have produced fast-spinning BSSs and kicked them out to the external
regions at high speed (apart from star \#1000214, the other two are
located well beyond the cluster core radius). Interestingly enough,
the dynamical history of the cluster could support such a scenario. In
fact, although its stellar density profile is well reproduced by a
King model, recent Monte-Carlo simulations (Heggie \& Giersz 2008)
suggest that M4 could be a post-core collapse cluster, its core being
sustained by the ``binary burning'' activity. The fast rotating and
high velocity BSSs could be the signature of such an activity.
While bringing a wealth of information, the results obtained to date
for 47Tuc and M4 are still too scarce to provide a clear overall
picture. Collecting additional data on rotational velocities and
chemical abundances for a significant number of BSSs in a sample of
GCs with different structural parameters is indeed a crucial
requirement for finally understanding the formation mechanisms of
these puzzling stars and their link with the cluster dynamical
history.
\acknowledgements We acknowledge R. Bedin and Y. Momany for useful
discussions, and the referee, Robert D. Mathieu, for helpful
suggestions in improving the paper. This research was supported by
the Agenzia Spaziale Italiana (under contract ASI-INAF I/016/07/0), by
the Istituto Nazionale di Astrofisica (INAF, under contract
PRIN-INAF2008) and by the Ministero dell'Istruzione, dell'Universit\`a
e della Ricerca.
|
1,116,691,500,321 | arxiv | \section{Introduction}
Boolean models are often used to obtain insights into the dynamical properties of physical systems composed of elements that appear to execute binary logic. A paradigmatic case is the behavior of digital circuits in which physical gates are designed with the specific intention of executing Boolean logic. In circuits designed to carry out well-defined computations, one typically introduces an external clock that determines when each gate is to be updated. By making the time between ticks of the clock sufficiently long, it is possible to make devices that accurately carry out any desired Boolean operations (von Neumann 1956).
Computation is not the only possible use for digital circuitry, however. For some applications, such as private communications, remote sensing, or random number generation, one may want circuits that generate chaos with an ultra-broadband spectrum. One approach to creating such circuits is to do away with the clock and allow each gate to respond continuously to its inputs. The analog characteristics of the response at very high frequencies, along with the different signal propagation delays between the gates, can then lead to complicated dynamics that may or may not be captured by Boolean models. We call such devices {\em autonomous digital circuits}. We wish to understand the potential sources of chaotic dynamics in autonomous digital circuits and other physical systems involving interactions with similar characteristics.
An autonomous Boolean network (ABN) is a set of nodes with binary values coupled by links with associated time delays. Each node is updated continuously according to a designated Boolean function of the values of its inputs at the appropriate previous times. If node $A$ receives an input from node $B$, we refer to $A$ as a ``target'' of $B$. In principle, the time delay between the switching of node $B$ and its target $A$ is due to a signal propagation time on the link, which may depend on whether the switch was a rise (from {\sc off}\ to {\sc on}) or a fall (from {\sc on}\ to {\sc off}). It is also possible that the processing time at $A$ depends on the state of all the other inputs to $A$ at the time the signal arrives. In general, we expect a nominal time delay associated with each link and slight adjustments depending on what information is being transmitted and whether it induces activation or decay of the target node. Moreover, it is known that the set of attractors of a network can be influenced by memory effects (Norrell {\it et al.}\ 2007), which can be modeled in ABNs as a dependence of the time delays on the amount of time that the receiving node has been in its current state.
The present communication is motivated by recent experiments by Zhang {\it et al.}\ (2009), who constructed an autonomous digital circuit using commercially-available, high-speed electronic logic gates. The topology of their Boolean network is shown in Fig.\ \ref{fig:network}(a). It consists of three nodes that each have two inputs and one output that propagates to two different nodes. The time it takes a signal to propagate to node $j$ from node $i$ is denoted by $\tau_{ji}$ ($i,j=1,2,3$). Nodes 1 and 2 execute the Exclusive-{\sc or} ({\sc xor}) logic operation, while node 3 executes the {\sc xnor} (see truth tables in the Fig.~\ref{fig:network}(a)). There is no clock in the system; the logic elements process input signals whenever they arrive, to the extent that they are able. They observed that the temporal evolution of the voltage at any given point in the circuit has a non-repeating pattern with clear Boolean-like state transitions, displays exponential sensitivity to initial conditions, and has a broad power spectrum extending from dc to beyond 2 GHz. Fig.~\ref{fig:network}(b) shows the voltage at the output of node 2. Because the circuit includes feedback loops with incommensurate time delays, it spontaneously evolves to dynamical states with the shortest possible pulse widths, a regime in which time-delay variations generate chaos.
\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{3nodes_net.eps}}
\end{center}
\caption{\label{fig:network} (a) Topology of the chaotic Boolean network investigated by Zhang {\it et al.}\ (2009) and truth table for logic operation performed by the nodes 1, 2 ({\sc xor}), and 3 ({\sc xnor}) on their respective inputs.
(b) Temporal evolution of the voltage at one point of the chaotic network.}
\end{figure}
Zhang {\it et al.}\ (2009) also demonstrated using numerical simulations that an ABN model can account for the major features observed in their network. Our goal here is to use analysis and numerical simulations to identify the possible sources of chaos in simple ABNs and thereby clarify the origins of chaotic dynamics observed in autonomous digital circuits. In this paper, we focus on the simplest network that yields nontrivial behavior -- a single {\sc xor}\ gate with two output links that both feed back to its own inputs, shown schematically in Fig.~\ref{fig:xor}.
\begin{figure}
\begin{center}
\resizebox{6cm}{!}{\includegraphics{XOR_2delays}}
\end{center}
\caption{The network of primary interest in this paper.
\label{fig:xor}}
\end{figure}
There are two generic effects in Boolean circuits that dramatically alter the attractor structure. First, the time delay on the link from $B$ to $A$ depends on whether $A$ and $B$ are switching {\sc on}\ or {\sc off}\ (Norrell {\it et al.}\ 2007), as mentioned above.
We refer to the special case in which time delays do {\em not} depend on the direction of the switches as {\em symmetric}. Second, the nodes in the network cannot process pulses of arbitrarily short duration; pulses shorter than some cutoff duration $\tau_{\rm spr}$ are filtered out so that they never reach the next target node (Klemm \& Bornholdt 2005, Norrell {\it et al.}\ 2007).
A pulse is defined as two consecutive state transitions of a single node. The pulse width is the temporal separation between those transitions.
If a transition turning a node {\sc off}\ (or {\sc on}) were to occur earlier than $\tau_{\rm spr}$ after a transition turning that node {\sc on}\ (or {\sc off}), the system evolves as if neither transition had ever occurred. We refer to this effect as {\em short-pulse rejection} and assume it is present in all cases.
We use the term {\em symmetric} ABN, or SABN, to refer to a system with a short-pulse rejection mechanism and time delays that are independent of input and target states and their histories. In the limit $\tau_{\rm spr} \rightarrow 0$, our SABN is equivalent to the Boolean Delay Equations discussed by Ghil and collaborators (Dee \& Ghil 1984, Ghil \& Mullhaupt 1985, and Ghil \textit{et al.} 2008).
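For concreteness, the action of short-pulse rejection on a given record of switching times of a single node can be sketched in a few lines of Python. The function and the example sequence below are our own illustration; in the network dynamics the rejection acts causally, as each pulse is generated.
\begin{verbatim}
def reject_short_pulses(switch_times, tau_spr):
    # switch_times: sorted transition times of one node.  Consecutive
    # transitions have opposite signs, so dropping a close pair keeps
    # the remaining sequence alternating correctly.
    kept = []
    for t in switch_times:
        if kept and t - kept[-1] < tau_spr:
            kept.pop()   # pulse narrower than tau_spr: drop both edges
        else:
            kept.append(t)
    return kept

# The pulse between t = 2.00 and t = 2.03 is filtered out:
print(reject_short_pulses([0.0, 1.0, 2.00, 2.03, 3.0], 0.1))
# -> [0.0, 1.0, 3.0]
\end{verbatim}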
A third effect that turns out to be quite important is the dependence of the time delay along a link on the state of the input node and its recent history. This has been termed the ``degradation'' effect because it typically takes the form of a variation in delay time for switches at the trailing edge of pulses near the short-pulse-rejection limit (Bellido-D\'{i}az \textit{et al.} 2000). We will see below that this effect is necessary and often sufficient to generate chaos in simple networks.
This paper is organized as follows. In Section~\ref{sec:singleloop} we establish notation and some useful definitions and discuss the periodic behavior of networks with only a single feedback loop with no degradation effect. In Section~\ref{sec:twoloops}, we show rigorously that the symmetric two-loop {\sc xor}\ system with no degradation can have only periodic (or fixed point) attractors. In Section~\ref{sec:divergence}, we argue that chaotic behavior should not be expected in any system in the absence of a degradation effect, though we cannot rule out the possibility entirely. In Section~\ref{sec:chaos}, we present a numerical model of a degradation effect in the simplest possible feedback system --- a single copier node with a self-input --- and show that it can produce chaos. Finally, in Section~\ref{sec:degradation_xor}, we present numerical results elucidating the nature of the chaos that appears in the {\sc xor}\ system when degradation effects are included.
\section{Single loops}\label{sec:singleloop}
Assuming that there are no time-varying external inputs to the network, a purely feed-forward network will have only fixed point attractors. Any persistent periodic or chaotic oscillations in the system must be driven by some sort of feedback loop or combination of multiple feedback loops, where a feedback loop is defined as a ring of any number of nodes that allows a signal generated at one node to propagate back to the input of that node. The simplest case is a single node that is its own target.
The basic structure of a network containing a single loop is one ring of nodes, each of which either copies or inverts its input. The system may also contain additional nodes that are targets of one or more nodes on the ring, and these targets may themselves have additional targets forming feed-forward subnetworks that lead to dead ends. Finally, the nodes on the ring may have additional inputs that are controlled by feed-forward chains. These nodes may determine whether each node on the ring acts as a copier or an inverter. (See Fig.~\ref{fig:singlering}.)
\begin{figure}
\begin{center}
\resizebox{7cm}{!}{\includegraphics{single_loop}}
\end{center}
\caption{A network with one feedback loop. White nodes form the loop. Dark nodes may determine whether nodes in the loop act like copiers or inverters. Light gray nodes are slaved to the dynamics of the loop and do not influence it in any way.
\label{fig:singlering}}
\end{figure}
To explain the behavior of SABNs containing only one feedback loop, we introduce some definitions and notation. Consider a loop of $N$ nodes in which node $i$ is an input to node $i+1$ and node $N$ is an input to node $1$. Let $\tau_{BA}$ designate the time delay associated with the link with input $B$ and target $A$. We refer to each switch from {\sc off}\ to {\sc on}\ in a time series of any given node as a {\em positive kink} and each switch from {\sc on}\ to {\sc off}\ as a {\em negative kink}. A kink may be thought of as propagating along a link and then being processed by the target node.
A {\em copier} transmits the kink to its outputs and an {\em inverter} changes the sign of the kink before transmitting it. At any given instant, there may be many kinks on the loop. There is a topological constraint, however, depending on whether the loop has an even or odd number of inverters. We use the terms {\em even loop} and {\em odd loop} to distinguish these cases. On an even loop, the single-valued nature of each node forces the number of kinks to be even. On an odd loop, the number of kinks must be odd.
To specify a state of the system, we must specify a continuous time series for each node over a time interval $[t-\tau_{i-1,i},t]$. We will see later that this is important for measurements of trajectory divergence. For now, we simply note that all $N$ such time series must be specified as an initial condition in order to determine the subsequent dynamics. The system state specified by the initial conditions may be visualized as a number of kinks that are propagating along their respective links at time $t$ and will reach their targets before any signal from their input nodes can.
Let $\tau_{\rm loop}\equiv \tau_{N,1}+\sum_{i=1}^{N-1}\tau_{i,i+1}$ be the sum of the time delays on all links in the loop. For any initial condition, if $\tau_{\rm spr}$ is set to zero, then every kink present moves around the loop in time $\tau_{\rm loop}$. On an even loop, the system will return to its original configuration at this time. On an odd loop, the
configuration at $t=\tau_{\rm loop}$ will be the inversion of the original and the system will be periodic with period $2\tau_{\rm loop}$. Setting $\tau_{\rm spr}$ to a nonzero value results in the elimination of pairs of opposite-sign kinks that are separated in time by less than $\tau_{\rm spr}$. Once all such pairs have been eliminated, the short-pulse rejection mechanism plays no further role and the system is periodic.
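The simplest odd loop, a single inverter with a self-input of delay $\tau$ (so that $\tau_{\rm loop}=\tau$), illustrates this periodicity directly. The following Python sketch is our own; the initial kink pattern is an arbitrary choice with the required odd number of kinks. Each traversal of the loop delays a kink by $\tau$ and flips its sign, so the initial pattern recurs after $2\tau_{\rm loop}$.
\begin{verbatim}
tau = 1.0
kinks = [(0.00, +1), (0.35, -1), (0.80, +1)]  # (time, sign), odd count

def advance(kinks, tau, n_traversals):
    out = list(kinks)
    frontier = list(kinks)
    for _ in range(n_traversals):
        # one traversal: delay by tau, and the inverter flips the sign
        frontier = [(t + tau, -s) for (t, s) in frontier]
        out.extend(frontier)
    return sorted(out)

for t, s in advance(kinks, tau, 2):
    print("t = %.2f  sign = %+d" % (t, s))
# At t = 2.00, 2.35, 2.80 the initial pattern (+, -, +) recurs.
\end{verbatim}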
Allowing different delay times on each link for kinks of different signs causes a dramatic reduction in the number of attractors. The time required for a positive kink beginning at a given site to make a full circuit and return to that site will be different (in the absence of fine tuning) from the time required for a
negative kink to return. Thus, if there are two kinks in the system, one will catch up to the other, eventually leading to a pulse of width smaller than $\tau_{\rm spr}$, which will be annihilated. In an even loop, all pulses will eventually annihilate and the attractors will always be fixed points in which all nodes hold steady values consistent with their input.
The case of odd loops is more complicated. When a positive kink propagates around the loop once, it is converted to a negative kink. Thus, a kink of either sign will take exactly the same time to propagate around the loop twice. If a pulse is wide enough to avoid annihilation during the time required for the two traversals, it will return with no change in its width. A pulse of this type, however, is only marginally stable. If the width of the pulse is perturbed, there is no mechanism for restoring it to its original value. In a system where noise causes small random fluctuations in the delay times, the width of a pulse will execute a random walk and the pulse will eventually collapse due to short-pulse rejection. Because of the topological constraint, there will always be one kink left that cannot be annihilated, and it will propagate, creating a unique oscillatory attractor.
Thus, we see that SABNs are special in that they admit a large set of marginally stable attractors that collapse to a much smaller set for arbitrarily small symmetry breaking (in the even loop case) or noise (in the odd loop case). We will return to the asymmetric case later. For now, we continue the discussion of SABNs.
\section{Two loops}\label{sec:twoloops}
We next consider the simplest possible (nontrivial) system with two feedback loops: a single {\sc xor}\ gate whose two inputs come directly from its own output, shown earlier in Fig.~\ref{fig:xor}. The output of an {\sc xor}\ gate changes its value every time one of its inputs changes, so that kinks propagating through this network can be annihilated only through short-pulse rejection. If the {\sc xor}\ is replaced with any two-input logic function other than {\sc not xor}, the dynamics leads quickly to a fixed point or a very simple oscillation, as can be checked by inspection. The {\sc not xor} case is identical to the {\sc xor}\ under exchange of the meaning of {\sc on}\ and {\sc off}.
The dynamics of the system is defined by a Boolean delay equation for the state $x(t)$, together with a procedure for rejecting short-pulses.
The Boolean delay equation is
\begin{equation}
x(t) = x(t-\tau_1) \oplus x(t-\tau_2).
\label{eq:xor}
\end{equation}
Let $\epsilon$ be an infinitesimal duration.
To implement short-pulse rejection, we adjust $x(t)$ as follows. If $x(t) \neq x(t-\epsilon)$, indicating that a switch has occurred at time $t$, and $\int_0^{\tau_{\rm spr}} x(t-s) \oplus x(t-\epsilon) ds \neq 0$,
indicating that the gate has switched sometime during the past short-pulse rejection interval, then $x(t-s) = x(t-\tau_{\rm spr})$ for all $s$ in $[0,\tau_{\rm spr})$.
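A direct, time-stepped realization of Eq.~(\ref{eq:xor}) together with this rejection rule is straightforward. The Python sketch below is our own; the delays, the step size and the initial history are illustrative choices, with the grid fine enough to resolve $\tau_{\rm spr}$ and with $\tau_1,\tau_2\gg\tau_{\rm spr}$.
\begin{verbatim}
import numpy as np

dt = 1e-3                                  # must resolve tau_spr
tau1, tau2, tau_spr = 1.0, (1 + 5 ** 0.5) / 2, 0.04
n1, n2, ns = round(tau1 / dt), round(tau2 / dt), round(tau_spr / dt)
steps = 50000
x = np.zeros(steps, dtype=np.int8)
x[n2 - 500:n2] = 1                         # initial history: one wide pulse
for i in range(n2, steps):
    x[i] = x[i - n1] ^ x[i - n2]           # the Boolean delay equation
    if x[i] != x[i - 1] and np.any(x[i - ns:i] != x[i - 1]):
        x[i - ns:i + 1] = x[i - ns]        # short-pulse rejection
print("switches:", int(np.count_nonzero(np.diff(x[n2:]))))
\end{verbatim}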
In the two-loop case, it is not obvious that the attractors must be periodic. Ghil and co-workers have studied the case of no short-pulse rejection and noted that the system exhibits an ultraviolet catastrophe in which kinks become dense in time and pulse widths tend toward zero (Ghil \& Mullhaupt 1985, Ghil \textit{et al.} 2008). The short-pulse rejection mechanism in our SABNs eliminates kinks that are too close and thereby regularizes the divergence. The following theorem shows that what remains can only be periodic. We count the trivial always-{\sc off}\ fixed point as a degenerate case of a periodic attractor.
\begin{theorem}\label{thm:xor}
For a SABN consisting of a single {\sc xor}\ with two self-inputs having delays $\tau_1$ and $\tau_2$ (as shown in Figure~\ref{fig:xor}), the attractors are always periodic.
\end{theorem}
{\em Proof:} We will first show that the attractor reached when the system is initiated with a single kink is always periodic. We will then show that the introduction of additional kinks in the initial condition cannot alter this result.
Assume, without loss of generality, that $\tau_1<\tau_2$. Let $t_s$, for $s=1,2,\ldots$ represent the times that the output of the gate switches, and define the intervals between switching times as $\Delta_s\equiv t_s-t_{s-1}$. Now note that the future of the system is determined if a past sequence of switching events spanning a duration of $\tau_2$ (the extent of the memory encoded in the longest delay line) is specified. The strategy is to show that the set of possible sequences $\Delta_s$ for $s_1<s<s_2$ is finite for values of $s_2-s_1$ corresponding to $t_{s_2}-t_{s_1}<\tau_2$. If this is true, the system must eventually revisit some sequence that is long enough to determine its future behavior, which immediately implies periodicity. (Any deterministic system with a finite number of states must have only periodic or fixed point attractors.)
Consider a system initialized by a single kink at time $t_0=0$. That is, the gate is assumed to be {\sc off}\ for all times less than $t_0$ and switched on at $t_0$. Each time the kink propagates around one of the delay lines, it causes the output of a new kink. Thus, kinks could conceivably be generated at times
\begin{equation}\label{eqtn}
t_{i,j} = i\tau_1 + j\tau_2
\end{equation}
for any positive integers $i$ and $j$. It is convenient to represent these times as sites of a lattice -- the dots in Fig.~\ref{fig:lattice} (Ghil \& Mullhaupt 1985).
The vertical axis in the figure represents time. The horizontal axis is not physical. It simply provides a way to visualize the causal processes that generate kinks. Moving down and to the left from a given site leads to another site a time $\tau_1$ later. Moving down and to the right leads to another site a time $\tau_2$ later. Note that the full set of sites is an infinite wedge subset of a Bravais lattice. We refer to a given site and its corresponding event by the pair $(i,j)$ that specifies the time the event occurs according to Eq.~(\ref{eqtn}).
Event $(i,j)$ may {\em not} actually occur because some kinks are annihilated by the short-pulse rejection mechanism. If events are generated at $(i,j)$ and $(k,\ell)$ for which $|t_{i,j}-t_{k,\ell}|<\tau_{\rm spr}$, a pulse will be created that is too short to pass through the gate. Thus, those two events will not generate any future events. To trace out the dynamics on this lattice, we begin by circling the top site, $(0,0)$. We then circle $(0,1)$ and $(1,0)$, indicating that events will occur at the corresponding times. For each event $(i,j)$ that occurs, we circle the two events $(i,j+1)$ and $(i+1,j)$. The sites are circled in chronological order, and, if the time interval between a newly circled site and the last one circled is less than $\tau_{\rm spr}$, both circles are removed, indicating that neither event actually occurs. A degenerate case arises when a single site gets circled twice -- once from each of its upstream neighbors. This represents two kinks arriving simultaneously, which does not cause a switch in the output of the {\sc xor}\ gate, so the site does not get circled. Figure~\ref{fig:lattice} shows an example.
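This circling procedure is easy to implement by processing candidate events chronologically with a heap. The sketch below is our own illustration; it assumes $\tau_1,\tau_2>\tau_{\rm spr}$, so that the descendants of an annihilated event are always still unprocessed when they must be erased.
\begin{verbatim}
import heapq

def circle_lattice(tau1, tau2, tau_spr, n_events=100, eps=1e-9):
    heap = [(0.0, 0, 0)]          # candidate events (time, i, j)
    circled = []                  # accepted events, chronological order
    while heap and len(circled) < n_events:
        t, i, j = heapq.heappop(heap)
        if heap and heap[0][0] - t < eps:    # site circled twice:
            heapq.heappop(heap)              # no switching event
            continue
        if circled and t - circled[-1][0] < tau_spr:
            t0, i0, j0 = circled.pop()       # short pulse: remove both
            for child in ((t0 + tau1, i0 + 1, j0),
                          (t0 + tau2, i0, j0 + 1)):
                if child in heap:            # erase the earlier event's
                    heap.remove(child)       # scheduled descendants
            heapq.heapify(heap)
            continue
        circled.append((t, i, j))
        heapq.heappush(heap, (t + tau1, i + 1, j))
        heapq.heappush(heap, (t + tau2, i, j + 1))
    return circled

for t, i, j in circle_lattice(1.0, (1 + 5 ** 0.5) / 2, 0.04, 12):
    print("%.3f  (i, j) = (%d, %d)" % (t, i, j))
\end{verbatim}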
\begin{figure}
\begin{center}
\includegraphics[scale=0.75]{figlattice.eps}
\end{center}
\caption{Example of a lattice of possible switching times and the
pattern of actual switching times after short-pulse rejection
effects are taken into account. The vertical spacings corresponding
to $\tau_1$, $\tau_2$, and $\tau_{\rm spr}$ are shown. The two arrows near
the top of the lattice indicate two kinks that arrive simultaneously
at a dot and therefore produce no outgoing kink. The grey vectors
indicate minimal rejection pairs. The circled dots represent the
events that actually occur.\label{fig:lattice}}
\end{figure}
We now show that there is an upper bound to the horizontal distance
between two circled sites that are vertically separated by less than
$\tau_2$. That is, if $|t_{i,j}-t_{k,\ell}|<\tau_2$, then
$|k-i|+|j-\ell|$ is bounded from above. Let $D$ be the smallest
horizontal distance between two events that are vertically separated
by less than $\tau_{\rm spr}$; \textit{i.e.}, $D$ is the smallest value of $m+n$ for
which $|m\tau_1-n\tau_2|<\tau_{\rm spr}$. In the case shown in
Fig.~\ref{fig:lattice}, $D=5$ (from $m=3$ and $n=2$). The vector
joining two such sites is $\Vec{v}_{\rm min}\equiv(D,m\tau_1-n\tau_2)$.
We will refer to a pair of sites separated by $\Vec{v}_{\rm min}$ as a
``minimal rejection pair.''
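The pair $(m,n)$ and the distance $D$ are easily found by direct search. In the Python sketch below (our own), the delays are illustrative values chosen so that $D=5$ with $(m,n)=(3,2)$, as in the example of Fig.~\ref{fig:lattice}.
\begin{verbatim}
def minimal_rejection_pair(tau1, tau2, tau_spr, max_dist=1000):
    # smallest D = m + n with |m*tau1 - n*tau2| < tau_spr
    for d in range(2, max_dist):
        for m in range(1, d):
            n = d - m
            if abs(m * tau1 - n * tau2) < tau_spr:
                return d, m, n
    return None

print(minimal_rejection_pair(1.0, 1.49, 0.05))   # -> (5, 3, 2)
\end{verbatim}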
If $\tau_2/\tau_1$ is rational, the lattice contains multiple sites that occur at exactly the same time and are separated by a minimal horizontal distance $W$. ($W$ is equal to $D$ if and only if $\tau_{\rm spr}$ is sufficiently small.) We may identify such points, thus turning the lattice into a strip of width $W$ having periodic boundary conditions -- a cylinder of circumference $W$. At any given time $t$, the configuration of circled sites lying in a band between $t-\tau_2$ and $t$ determines the future evolution uniquely. Without loss of generality, assume that $t$ coincides with a lattice site. Because the sites form a Bravais lattice, the configuration of lattice sites within the band is finite (and unique). The number of possible configurations of circled sites is therefore finite, which ensures that the system must eventually revisit some configuration that it has already passed through. The subsequent evolution will then cycle periodically through that configuration.
For the case of irrational $\tau_2/\tau_1$, there are no pairs of sites that occur at exactly the same time. Nevertheless, we can prove that the set of circled sites must be confined to a region whose width never exceeds $D$, so the number of accessible configurations is again finite. The proof proceeds by contradiction. Assume that there are two circled sites separated vertically by less than $\tau_2$ and horizontally by more than $D$. Because no two sites sit at exactly the same time, each circled site must be connected to $(0,0)$ by an unbroken chain of circled sites that caused it. But any two such chains that begin from sites separated by more than $D$ must contain a minimal rejection pair. To see this, consider any two paths starting from the same site.
Let $(i_r,j_r)$ with $r=0,1\ldots R$ denote the sites on one path, and $(k_s,\ell_s)$ with $s=0,1\ldots S$ be sites on the other. Define $(\mu,\nu)_{r,s}\equiv(k_s-i_r,\ell_s-j_r)$, where $(\mu,\nu)_{0,0}=(0,0)$. Each step on either trajectory either changes $\mu$ or $\nu$ by $\pm 1$.
Thus, the set $\{(\mu,\nu)\}$ must contain every possible pair with $\mu < k_s-i_r$ and $\nu<\ell_s-j_r$. Now, assume that the paths contain sites corresponding to times that differ by less than $\tau_2$ and have a horizontal separation greater than $D$. Let $(i_R,j_R)$ be the site on the left and $(k_S,\ell_S)$ be the site on the right, so that $i_R-k_S$ and $\ell_S-j_R$ are both positive.
We then have
\begin{equation} \label{eqn:horizontalsep}
(i_R-k_S) + (\ell_S-j_R)>D
\end{equation}
and
\begin{equation} \label{eqn:verticalsep}
|(i_R-k_S)\tau_1-(\ell_S-j_R)\tau_2| < \tau_2.
\end{equation}
Recall that $(m,n)$ gives the minimal rejection pair. If $(i_R-k_S)<m$, Eq.~(\ref{eqn:horizontalsep}) requires $(\ell_S-j_R)>n$, which implies that Eq.~(\ref{eqn:verticalsep}) must be violated, as can be seen immediately by comparing to the known relation $m\tau_1-n\tau_2<\tau_{\rm spr}$, with $\tau_{\rm spr}<\tau_2$. Similarly, $(\ell_S-j_R)<n$ requires $(i_R-k_S)>m$, which again implies a violation. Thus, we must have $(i_R-k_S)\geq m$ and $(\ell_S-j_R)\geq n$, which implies in turn that there must be some pair $(r,s)$ for which $(\mu,\nu)_{r,s} = (m,n)$. When these sites were circled, however, they would have been subject to short-pulse rejection. Hence the configuration of two chains cannot represent a possible trajectory of the system.
We have proven that the dynamics initiated by a single kink must be confined to a tube of width $D$ on the lattice. The tube need not be vertical or even straight, but it cannot have a horizontal width greater than $D$ at any time. The periodicity of the trajectory then follows immediately from the fact that the number of configurations of circled dots that can be covered by a rectangle of height $\tau_2$ and width $D$ is finite, which guarantees that some configuration will be repeated after a sufficiently long time, and the trajectory between these repeated configurations will then be repeated {\it ad infinitum}. Figure~\ref{fig:lattice} shows an example in which the trajectory repeatedly returns to a configuration in which only one kink is present.
To complete the proof that all attractors are periodic, we must consider the possibility of initiating the system with two or more kinks. To analyze such cases, we overlay the lattices originating from each kink in the initial interval of duration $\tau_2$. The $(0,0)$ sites from the different lattices will be displaced vertically by times between $0$ and $\tau_2$. The lattices emanating from each $(0,0)$ site will all be translated copies of the same lattice.
To see that the entire set of events must still be confined to a tube of finite width, first note that any event must be connected to one of the $(0,0)$ sites by a chain of events contained entirely in one lattice. By the reasoning used in proving Theorem~\ref{thm:xor}, the events on any single lattice must be confined to a tube of width $D$. The problem, then, is to show that there is an upper bound on the horizontal distance between circled sites on two different lattices when their vertical separation is less than $\tau_2$.
If $\tau_2/\tau_1$ is rational, the lattices can all be represented on a finite-radius cylinder (or infinite plane with periodic boundary conditions at the sides) and it is clear that the number of configurations in a band of any given height is finite. If $\tau_2/\tau_1$ is irrational, the proof follows essentially the same reasoning as the single lattice case. Consider events on any two of the lattices. There are now two minimal rejection vectors $\Vec{v}_{\rm min}$ depending on which lattice contains the trajectory on the right (with larger $n-m$). Nevertheless, given any two widely separated points, the paths to each point must pass through one of the minimal rejection separations for that pair of lattices. This must be true for each distinct pair of lattices, so the entire trajectory must be confined to a finite tube. Because short-pulse rejection prevents initialization with arbitrarily close kinks, the number of initial kinks is bounded. This means that the number of overlaid lattices is bounded, which implies again a finite number of configurations within a rectangle of height $\tau_2$ and a specified width, so the full trajectory must be periodic. {\bf Q.E.D.}
\begin{theorem}
For a SABN consisting of a single {\sc xor}\ with two self-inputs having delays $\tau_1$ and $\tau_2$, and with $\tau_{\rm spr}$ sufficiently small that no collapse to the always-{\sc off}\ state occurs before $t = \tau_2$, the trajectory will never reach the always-{\sc off}\ state. \label{thm:nocollapse}
\end{theorem}
\begin{figure}
\begin{center}
\resizebox{10cm}{!}{\includegraphics{fignocollapse.eps}}
\end{center}
\caption{
A portion of the event lattice relevant for the proof of
Theorem~\ref{thm:nocollapse}. The long grey vector indicates
short-pulse rejection. The other arrows show events required to
annihilate additional kinks generated from events $A$ and $B$.
\label{fig:nocollapse}}
\end{figure}
{\em Proof:} The only way to reach the always-{\sc off}\ state would be for the two last events to annihilate by short-pulse rejection. That is, for the pattern of circled lattice sites to produce two candidates at the minimal rejection distance without producing any other circled dots on the interior of the minimal width tube
containing the trajectory. Figure~\ref{fig:nocollapse} shows that this cannot happen. When event $A$ generates event $C$, it must also generate $E$, which occurs later than $C$ and $D$. (We assume that $E$ and $D$ are distinct points. If $\tau_{\rm spr}$ is too large, the collapse can occur immediately because $C$ and $D$ are both derived from $A$). In order for $E$ to be annihilated, event $F$ must be present, which in turn would generate $G$. Note that $F$ has the same value of $m+n$ as $A$. Repeated application of this reasoning shows that annihilation of all further events through exact coincidences would require an infinite line of events at the same value of $i+j$ as $A$, but such a line is impossible, both because it would eventually reach the edge of the triangular wedge of possible lattice points and because it would require events occurring at later times than our supposed final annihilation of $C$ and $D$. Thus, the always-{\sc off}\ collapse cannot occur. {\bf Q.E.D.}
We now turn to the asymmetric case, in which the time delays associated with positive ($\tau_1^{\mathrm{ON}}$ and $\tau_2^\mathrm{ON}$) and negative kinks ($\tau_1^{\mathrm{OFF}}$ and $\tau_2^{\mathrm{OFF}}$) on any given link may be different.
The evolution is calculated as follows. We construct a queue of times at which the gate switches states. Let $t_1$ be a time at which the gate switches from $x$ to $X$. When $t_1$ is the earliest time in the queue, it is removed (processed) and two future times $t_1+\tau_1^{X}$ and $t_1+\tau_2^{X}$ are added. Let the next time after $t_1$ in the queue be $t_2$. The switch at $t_2$ causes the gate to return to the state $x$. If $t_2-t_1<\tau_{\rm spr}$, then $t_2$ is removed from the queue along with any times just added due to the switch at $t_1$. Otherwise, only $t_2$ is removed and $t_2+\tau_1^{x}$ and $t_2+\tau_2^{x}$ are added. If $t_2+\tau_i^{x} < t_1+\tau_i^{X}$, however, then both are removed from the queue, as this implies that the trailing edge of a pulse overtook the leading edge as it propagated along the link.
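This queue evolution is implemented directly in the Python sketch below (our own). All delays are assumed to exceed $\tau_{\rm spr}$, and the membership guard in \texttt{discard} is needed because a previously scheduled echo may already have been annihilated. The example at the end uses the delays of Fig.~\ref{fig:abn}.
\begin{verbatim}
import heapq

def simulate_asymmetric(tau_on, tau_off, tau_spr, t_max):
    # tau_on = (tau1_ON, tau2_ON), tau_off = (tau1_OFF, tau2_OFF)
    queue = [0.0]              # a single positive kink at t = 0
    state = 0                  # gate output, initially OFF
    prev = [None, None]        # echoes scheduled by the last switch
    switches = []

    def discard(e):            # remove a scheduled time, if still queued
        if e in queue:
            queue.remove(e)

    while queue and queue[0] < t_max:
        t = heapq.heappop(queue)
        taus = tau_on if state == 0 else tau_off  # delays for new state
        echo = [t + taus[0], t + taus[1]]
        for e in echo:
            heapq.heappush(queue, e)
        if queue[0] - t < tau_spr:    # short pulse: remove the partner
            heapq.heappop(queue)      # kink and the echoes just added
            for e in echo:
                discard(e)
            heapq.heapify(queue)
            continue
        for i in (0, 1):              # trailing edge overtakes the
            if prev[i] is not None and echo[i] < prev[i]:
                discard(echo[i])      # leading edge: drop both echoes
                discard(prev[i])
                echo[i] = None
        heapq.heapify(queue)
        switches.append(t)
        state = 1 - state
        prev = echo
    return switches

phi = (1 + 5 ** 0.5) / 2
s = simulate_asymmetric((1.0, phi), (1.117, 1.117 * phi), 0.04, 400.0)
print(len(s), "switches up to t = 400")
\end{verbatim}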
The lattice picture now becomes more complicated. We need a 4D lattice with one basis vector for each possible delay time. Each time a site is circled, the state of the {\sc xor}\ gate determines which two lattice directions are available for the next step, and the state of the gate is determined by the parity of the number of steps that have been taken up to the time in question. Note that the number of steps cannot simply be counted by tracing the single path leading to the transition of interest. All of the events above the one under consideration must be counted to determine the current state.
The proof of the bounded width of a trajectory on the 2D lattice breaks down for higher dimensions. The difficulty is that it becomes possible for trajectories to avoid hitting pairs of points at the minimal rejection distance by moving in the third (or fourth) dimension. We do not (yet) have a proof that periodicity is necessary for the asymmetric case, but extensive numerical simulations have failed to turn up any counterexamples. Figure~\ref{fig:abn} shows a typical trajectory with the 4D lattice projected onto a plane. A step one unit to the right indicates traversal of link 1, with the two possible vertical displacements $\tau_1^{\mathrm{ON}}$ or $\tau_1^{\mathrm{OFF}}$; similarly, steps to the left correspond to traversing link 2, with delay either $\tau_2^{\mathrm{ON}}$ or $\tau_2^{\mathrm{OFF}}$. Only the circled sites are shown.
\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{figabn1.eps}}
\caption{Typical trajectory of a two-loop ABN initialized with a single positive kink.
The columns are successive time intervals of 100 units and the periodic
attractor is evident in the fourth column. The time delays for this simulation are $\tau_1^{\mathrm{ON}}=1.000$, $\tau_1^{\mathrm{OFF}}=1.117$, $\tau_2^{\mathrm{ON}}= \phi$, and $\tau_2^{\mathrm{OFF}}=1.117 \phi$, where $\phi$ is the golden ratio, $1.618\ldots$. The short-pulse rejection time was $\tau_{\rm spr} = 0.040$. The system was initialized with a single positive kink at one of the inputs at $t=0$.
\label{fig:abn}}
\end{figure}
We also could not prove that the asymmetric system will never collapse
to the always-{\sc off}\ state. We cannot rule out the possibility that
two short-pulse rejections can annihilate all four of the events
emanating from two events separated by more than the minimal
short-pulse rejection distances on the 4D lattice.
\section{Trajectory divergence}\label{sec:divergence}
In the absence of a proof that all attractors of ABNs are periodic, it is useful to ask how one might detect chaos in a simulation. Spectral analysis of the time series may be useful, but it is also possible for the periodic attractors to be extremely long and it may be difficult to resolve the peaks. We consider two methods for following the divergence of trajectories that initially differ by a small perturbation. In both cases, we introduce a perturbation by artificially delaying (or accelerating) the arrival of one kink by a small amount $\epsilon$. In the first method, we construct the sequence of switching times $t_n$ for the original trajectory and $t'_n$ for the perturbed one. We then plot $\delta_n=|t'_n-t_n|$ vs.\ $t_n$. In the second, we define the Boolean difference at a given time to be 0 if the gate is in the same state on both trajectories and 1 if the states are different. We integrate the Boolean difference between the original and perturbed trajectories over a time window $\tau_2$ (the longest delay time on a link). We denote the integral by $d(s)$, where $s$ represents the time between the applied perturbation and the end of the integration interval.
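The second measure is simple to evaluate from two sampled output records; a minimal sketch (our own) is:
\begin{verbatim}
import numpy as np

def boolean_distance(x, y, dt, tau2):
    # x, y: Boolean outputs of the two trajectories, sampled every dt.
    # Returns d at each time: the Boolean difference integrated over
    # a trailing window of duration tau2.
    diff = (np.asarray(x) != np.asarray(y)).astype(float)
    w = round(tau2 / dt)
    csum = np.concatenate(([0.0], np.cumsum(diff) * dt))
    return csum[w:] - csum[:-w]

# Plot np.log(boolean_distance(x, y, dt, tau2)) against time to test
# for exponential divergence of the two trajectories.
\end{verbatim}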
It is immediately clear that the only source of trajectory divergence as measured by the first method is the rejection of a short-pulse in one trajectory but not the other. There is no other mechanism that increases or decreases the time difference between corresponding kinks. If there is a short-pulse rejection in one system but not the other, however, the pulse that makes it through could generate subsequent events that cause the two trajectories to diverge. In the limit of infinitesimal $\epsilon$, short-pulse rejection differences will never occur and there can be no trajectory divergence.
In the second method, the distance between trajectories can grow because several kinks emanating from the perturbed one will also be perturbed by $\epsilon$. This distance cannot grow larger than $n \epsilon$, where $n$ is the number of kinks that can be accommodated in an interval of duration $\tau_2$. Note that $n$ is finite for any nonzero value of $\tau_{\rm spr}$, so again there is no exponential divergence in the limit of infinitesimal $\epsilon$. Further divergence requires short-pulse rejection differences as in the first method.
On the other hand, for any given $\epsilon>0$, there may be a pulse eventually generated with duration closer than $\epsilon$ to the short-pulse rejection threshold. This effect could conceivably lead to exponential divergence in $d$ over an intermediate scale between the short-pulse rejection time and the time required for a kink to generate $n$ new ones --- roughly $n \tau_1$.
The growth rate of the number of kinks generated by a single kink or pulse injected into a SABN is known to be polynomial when there is no short-pulse rejection, with an exponent of $\ell-1$, where $\ell$ is the minimum number of incommensurate delay times necessary to express the network evolution (Ghil \& Mullhaupt 1985).
Short-pulse rejection will reduce the rate of growth as the number of kinks increases, however, so there will be no exponential divergence in SABNs.
In the asymmetric case, we study numerically whether a single short-pulse rejection occurring in one trajectory but not the other can lead to exponential divergence in $d$ over an intermediate scale.
Figure~\ref{fig:polynomial_growth} shows measurements of $\ln d(s)$ obtained from simulations with $\tau_{\rm spr} = 1.0\times10^{-3}$, where a perturbation $\epsilon = 0.5\times 10^{-3}$ is applied at $t_0$.
The transition time $t_0$ is chosen so that the original pulse covering $t_0$ was just filtered by the short-pulse rejection, while the pulse with transition time $t_0+\epsilon$ just barely survived. We average $\ln d(s)$ over 30 pairs of trajectories with different initial conditions. As we see in Fig. \ref{fig:polynomial_growth}, the distance grows polynomially in this case also.
In Section~\ref{sec:twoloops}, we proved that the SABN consisting of a single {\sc xor}\ gate with two self-inputs must yield only periodic attractors, and we have numerical evidence suggesting that, in the absence of memory effects (other than short-pulse rejection), the ABN will not yield exponential divergence of trajectories either.
\begin{figure}[htb]
\begin{center}
\resizebox{8cm}{!}{\includegraphics{Fig7}}
\end{center}
\caption{An average of $\ln d(s)$ for the {\sc xor}\ gate with two delayed self-inputs and short-pulse rejection. The solid line is simulation data and the dashed line a fit to a power-law form of $d(s)$ with exponent 1.2.
\label{fig:polynomial_growth}}
\end{figure}
\section{Boolean chaos}\label{sec:chaos}
The next place to look for a source of chaos in the autonomous digital circuit is in the memory effects associated with the response of the gate to two successive kinks that form a pulse. In principle (and in the circuit of Zhang {\it et al.}\ 2009), the delay time for a kink that is the trailing edge of a pulse can depend on the width of the pulse. This dependence is called the {\em degradation effect} (Bellido-D\'{i}az \textit{et al.} 2000).
We now present numerical evidence showing that the degradation effect can produce chaos in the simplest of all networks, a single copier with one self-input. The dependence of delay times on pulse width introduces nontrivial dynamics in the durations of successive pulses which can lead to stabilization of pulse widths and numbers of kinks, and opens the possibility of chaotic sequences of pulse widths.
To understand the origin of the degradation effect requires that we consider
the underlying analog signal that has finite rise and fall times between {\sc on}\
and {\sc off}\ states; the inherent propagation delays of signals propagating
through the logic gates; and the associated Boolean idealization of the
waveform. See Fig.~\ref{fig:degradation}.
One method for implementing the degradation effect in the autonomous
Boolean network is as follows. Let $r$ be the time of occurrence of a rising kink and $f$ be the time of the subsequent falling kink. Let $r'$ and $f'$ be the times of the rising and falling kink induced by the kinks at $r$ and $f$. These times represent the arrival of a kink at the gate. The actual time when the gate variable switches is slightly later and depends on the state of the gate at the time of the arrival of the kink. That is, the actual switching time associated with the kink at $r$ is $r'-\tau_r$, where $\tau_r$ is the propagation delay. The difference between $f'-\tau_f$ and $f$ is a function of the time that the gate was {\sc on}; \textit{i.e.}, a function of $f-(r'-\tau_r)$. We define a degradation function $g\left(f-(r'-\tau_r)\right)$ that gives the delay $f'-f$.
\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{figdegradation.eps}}\\ \\
\caption{Implementation of the degradation effect. The top curve is a schematic
illustration of the temporal evolution of a state variable. When the variable is above a threshold, the gate is considered to be {\sc on}. The lower curve
shows the Boolean approximation of the input to the gate. For simplicity of illustration, we assume the gate is its own target. The rising kink $r_i$ causes the state variable to begin a transition. The indication that it has crossed the threshold is transmitted through a link with time delay $\tau_r$ and generates the rising kink at the input at $r_{i+1}$. The situation for falling kinks is similar.
\label{fig:degradation}}
\end{figure}
For the case of a single pulse cycling through a single copier with a
self-input, we consider the evolution equations for the sequences $r_i$
and $f_i$.
If the pulse is not rejected by the short-pulse rejection mechanism, $f_i$ will
generate a new kink $f_{i+1}$, with
\begin{equation} \label{eqn:fj}
f_{i+1} = f_i + g_f(f_i - (r_{i+1}-\tau_r)).
\end{equation}
Similarly, for a rising kink, we have
\begin{equation} \label{eqn:rj}
r_{i+1} = r_i + g_r(r_i-(f_i-\tau_f)).
\end{equation}
In principle, the constants $\tau_r$ and $\tau_f$ need not be equal
and the functions $g_f$ and $g_r$ may be different. Because we are
interested only in showing the existence of chaos, we will consider only
the most tractable case: $\tau_r = \tau_f \equiv \tau$ and $g_r = g_f \equiv g$.
It is convenient to rewrite the evolution in terms of the pulse widths.
Defining $w_i\equiv f_i-r_i$ and $v_i\equiv r_{i+1}-f_i$, Eqs.~(\ref{eqn:fj}) and~(\ref{eqn:rj}) give
\begin{eqnarray}
w_{i+1} & = & w_i + g(\tau-v_i) - g(\tau-w_i)\,, \label{eqn:wv}\\
v_{i+1} & = & v_i + g(\tau-w_{i+1}) - g(\tau-v_i)\,. \label{eqn:wv2}
\end{eqnarray}
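(These follow directly from Eqs.~(\ref{eqn:fj}) and~(\ref{eqn:rj}); for instance, $w_{i+1} = f_{i+1}-r_{i+1} = w_i + g\bigl(\tau-(r_{i+1}-f_i)\bigr) - g\bigl(\tau-(f_i-r_i)\bigr) = w_i + g(\tau-v_i) - g(\tau-w_i)$.)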
From Fig.~\ref{fig:degradation}, one sees that $w_i$ determines $v_i$ in a manner required by
Eq.~(\ref{eqn:rj}): $g(\tau-w_i) - w_i = v_i$, which implies a constraint on the possible initial values and allows the map to be reduced to a single variable, say $v$.
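Explicitly, Eq.~(\ref{eqn:rj}) gives
\begin{equation*}
r_{i+1}-r_i = g\bigl(\tau-(f_i-r_i)\bigr) = g(\tau-w_i),
\qquad\text{while}\qquad
r_{i+1}-r_i = w_i+v_i
\end{equation*}
by the definitions of $w_i$ and $v_i$; equating the two expressions yields the stated constraint.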
Substituting this relation into the expression for $w_{i+1}$ (Eq.~(\ref{eqn:wv})) and inserting the results into Eq.~(\ref{eqn:wv2}) gives
\begin{equation} \label{eqn:vmap}
v_{i+1} = v_i + g(\tau + v_i - g(\tau-v_i)) - g(\tau-v_i) \equiv h(v_i)\,.
\end{equation}
A stable fixed point of the map (\ref{eqn:vmap}) corresponds to periodic behavior in
which every pulse has the same width $w^*$ and every dip has the same
width $v^*$. Equation~(\ref{eqn:wv}) implies $g(\tau-v^*) = g(\tau-w^*)$,
but this does not necessarily imply $w^* = v^*$.
The function $g(x)$ should satisfy two constraints. First, $g(x)\geq\tau$ for all $x$
because $\tau$ is the minimum delay time required for traversing the link.
Second, $g(x)$ should asymptote to some constant $\tau_0>\tau$ for large $x$
because the gate will settle into a fixed state if its input is held constant for
a long enough time. By trial and error (based on intuition gleaned from experimental
studies of the response of real electronic gates), we
identify a function $g(x)$ that generates chaos in the simple copier model considered here.
It is given by
\begin{equation}
g(\tau-v) = \tau +a + b\left(\tau-v-c\right)\exp\left(-(\tau-v)/A\right),
\label{eq:g_of_x}
\end{equation}
with $\tau = 1.3$, $a=0.26$, $b=13.0$, $c=0.02$, and $A=0.18$.
\begin{figure}[htb]
\begin{center}
\resizebox{10cm}{!}{\includegraphics{Fig9.eps}}
\end{center}
\caption{
(a) The degradation function $g(x)$ for the chaotic copier.
(b) The sequence of values $v_i$ is chaotic.
(c) The plot of $v_{i+1}$ {\it vs.} $v_i$ shows a clear 1D relation.
(d) Average of $\ln d(s)$ over 121 trajectories. The dashed line is a fit to the region of exponential scaling.
}
\label{fig:g_of_x}
\end{figure}
Figure~\ref{fig:g_of_x} shows the form of $g(x)$, the sequence of values $v_i$ that it generates through Eq.~(\ref{eqn:vmap}), and
a scatter plot of $v_{i+1}$ as a function of $v_i$. The latter falls on the 1D map given by Eq.~(\ref{eqn:vmap}).
Note that neither $v$ nor $w$ ever takes on a value smaller than $\approx 0.5$,
so the dynamics is consistent with a short-pulse
rejection mechanism with $\tau_{\rm spr}$ less than this value.
We have run the map for $10^6$ iterates and have not seen any evidence of periodicity.
The Lyapunov exponent for 1D maps can be calculated by
\begin{equation}
\lambda_{\mathrm{discrete}} =
\lim_{N\rightarrow \infty}\frac{1}{N}\sum^N_{i=1} \ln \left| \dfrac{dh(v_i)}{dv_i}\right|,
\label{eq:lyapunov}
\end{equation}
where $v_i$ is a fiducial trajectory, calculated numerically from the map evolution equation (Eq. (\ref{eqn:vmap})).
With $N=10^4$ we obtained $\lambda_{\mathrm{discrete}} \approx 0.787$ for the same parameters used in Fig.~\ref{fig:g_of_x}. As $\lambda_{\mathrm{discrete}}$ gives the average expansion rate per cycle of the continuous system, we relate this Lyapunov exponent in discrete time to the maximum positive Lyapunov exponent $\lambda$ in continuous time by $\lambda = \lambda_{\mathrm{discrete}}/\overline{T}$, where $\overline{T}$ is the average cycle duration ($T_i = w_i+v_i$).
Using this relation we can test the assumption of Zhang {\it et al.}\ (2009) that the Boolean distance $d(s)$ grows as $\exp(\lambda s)$.
We measure $\overline{T} \approx 2.12$ from the discrete time series and, from the Boolean variable in continuous time, we calculate the average of $\ln d(s)$, shown in Figure \ref{fig:g_of_x}(d). In this section, however, instead of applying a perturbation to pairs of trajectories, as described in the previous section, we look for segments of the time series with an initially close Boolean difference, to reproduce the method in Zhang {\it et al.}\ (2009). In the region of exponential growth, $\left<\ln d(s) \right>$ is fit to a straight line with slope $\lambda = 0.37$, in good agreement with the ratio $\lambda_{\mathrm{discrete}}/\overline{T} \approx 0.371$.
Details of the use of $d(s)$ for precise measurement of the largest Lyapunov exponent have yet to be worked out, but the good agreement between $\lambda$ and $\lambda_{\mathrm{discrete}}/\overline{T}$ indicates that the exponential growth of $d(s)$ is a reliable indicator of chaos.
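For readers who wish to reproduce the value of $\lambda_{\mathrm{discrete}}$ quoted above, the following minimal Python sketch (our own illustration, not the code used to generate the figures) iterates the map of Eq.~(\ref{eqn:vmap}) with the degradation function of Eq.~(\ref{eq:g_of_x}) and accumulates the sum in Eq.~(\ref{eq:lyapunov}), approximating $dh/dv$ by a centered finite difference; the initial condition and transient length are arbitrary choices, assumed to place the iterates on the chaotic attractor.
\begin{verbatim}
import math

tau, a, b, c, A = 1.3, 0.26, 13.0, 0.02, 0.18

def g(x):                 # degradation function, Eq. (eq:g_of_x)
    return tau + a + b * (x - c) * math.exp(-x / A)

def h(v):                 # one-dimensional map, Eq. (eqn:vmap)
    return v + g(tau + v - g(tau - v)) - g(tau - v)

v, eps = 1.0, 1e-7        # assumed initial condition on the attractor
for _ in range(1000):     # discard the transient
    v = h(v)

s, N = 0.0, 10**4
for _ in range(N):
    dh = (h(v + eps) - h(v - eps)) / (2 * eps)  # centered difference
    s += math.log(abs(dh))
    v = h(v)

print(s / N)  # should be close to lambda_discrete ~ 0.787
\end{verbatim}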
Our simulations show that a simple copier with an appropriate form for the degradation effect can generate a chaotic sequence of pulse widths. We have observed chaos with other parameter values and other choices of the degradation function $g(x)$, but the analysis of the map in Eq.~(\ref{eqn:vmap}) for arbitrary $g(x)$ is difficult. Two criteria are necessary for chaos (though not sufficient): the trajectory must not visit the vicinity of a stable fixed point of the map; and it must not visit $v\leq \tau_{\rm spr}$ or a value of $v$ such that the corresponding $w\leq\tau_{\rm spr}$ for whatever choice of $\tau_{\rm spr}$ one takes to be of interest. Analytic expressions for these conditions are not easily determined.
In this section, we have shown that a simple, single-loop Boolean network (a copier with self-feedback) shows chaos when the degradation effect is taken into account. In related work, chaos has also been observed in hybrid models with continuous variables governed by equations of the form $dx_i(t)/dt = F_i({\bf x}) - x_i(t)$, where $F$ is a binary-valued function (Mestl {\it et al.}\ 1996). From the perspective adopted in the present paper, such models may be thought of as explicit definitions of the underlying dynamics that produces degradation effects. They are special cases, however, in that they do not include explicit time delays (or, alternatively, explicit descriptions of the propagation of signals along links).
Glass {\it et al.}\ (2005) have studied an electronic circuit explicitly designed to implement the hybrid model dynamics proposed as a model of a 5-node gene regulation network. In their circuit, the output of each logic gate is used to charge a capacitor, thereby simulating one specific form of a degradation effect. They derive an approximate analytical solution for the response of the analog voltages in the circuit and show that the resulting map produces chaos.
The 10 ms time constant introduced by the capacitor in their circuit is much slower than the switching times associated with the gates that implement the functions $F_i$ or the signal propagation times (typically on the order of tens of nanoseconds) and thus avoids the necessity of accounting for time delays explicitly. Formally, this is equivalent to setting the time delay parameter $\tau$ in our model to zero. The full dynamics of the hybrid system can be analyzed in this case without the need for a Boolean delay model of the type discussed in the present work.
Here, we treat the system variables as Boolean and account for the analog behavior through the effect of degradation on the time delays between gates. Our method is potentially more flexible because we can incorporate a wider range of degradation functions that might arise in any particular model of the underlying dynamics and is applicable in cases where propagation delays are important. Except for pulses near the short-pulse rejection limit, the variations due to degradation are small compared to the explicit delay $\tau$. Further investigation of the relation between the circuit and models of (Glass {\it et al.}\ 2005) and (Mestl {\it et al.}\ 1996) is beyond our present scope, but may prove interesting.
\section{Boolean chaos in two loops}\label{sec:degradation_xor}
Returning to the {\sc xor}\ system, we now show that a simple form of the
degradation effect can generate chaos there as well. The situation is
not entirely analogous to the single copier studied in the previous
section because the two loops tend to create more pulses of rather short duration and
more possibilities for short-pulse rejections, both of which could
strongly influence the trajectory divergence rate.
In our simulations of the symmetric ABN with a single {\sc xor}\ gate, we use the degradation functions shown in Fig.~\ref{fig:xordivergence}(a) for the two links. Here, for ease of simulation, the delay times are given by a piecewise-linear function of the input pulse width.
After calculating the time of a transition event using Eq.\ (\ref{eq:xor}) we calculate the input pulse width and correct the time delays according to the equation
\begin{equation}
g(x) = \left\{
\begin{array}{lll}
\tau_k +A (x-x_A), & \mathrm{for} & x_A < x \leq 0.50\\
\tau_k -A (x-x_B), & \mathrm{for} & 0.50 < x \leq x_B\\
\tau_k, & \mathrm{for} & x > x_B
\end{array}\right. ,
\label{eq:degradation}
\end{equation}
where $x$ is a pulse width, $A= 1.50$, $x_A= 0.10$, $x_B= 0.90$, and $\tau_k$ assumes either the value $\tau_1 = 9.58$ or $\tau_2 = 10.75$, depending on which link is being traversed. Pulses shorter than $\tau_{\rm spr} = 0.10$ are cut off, as described previously.
We find that the details of the shape of the functions do not matter for the results presented below.
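For concreteness, the delay-correction rule of Eq.~(\ref{eq:degradation}) together with the short-pulse cutoff can be sketched in Python as follows (a minimal rendering of the rule stated above; the function name and the convention of returning \texttt{None} for rejected pulses are our own choices):
\begin{verbatim}
A, x_A, x_B = 1.50, 0.10, 0.90
tau_spr = 0.10

def corrected_delay(x, tau_k):
    """Delay for a kink whose input pulse width is x, Eq. (eq:degradation).
    tau_k is tau_1 = 9.58 or tau_2 = 10.75, depending on the link traversed.
    Returns None if the pulse is rejected (x <= tau_spr)."""
    if x <= tau_spr:      # short-pulse rejection
        return None
    if x <= 0.50:
        return tau_k + A * (x - x_A)
    if x <= x_B:
        return tau_k - A * (x - x_B)
    return tau_k
\end{verbatim}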
Our simulations show that the system oscillates indefinitely without falling onto a periodic orbit or collapsing to the always-{\sc off}\ or -{\sc on}\ states. We use the two methods described in Sec.\ \ref{sec:divergence} to analyze the trajectory divergence. An extremely small perturbation $\epsilon = 10^{-6}$ is applied to a given orbit. Figure~\ref{fig:xordivergence}(b) shows the Boolean distance $d(s)$ averaged over 100 pairs, clearly indicating a substantial exponentially-increasing regime.
Figure~\ref{fig:xordivergence}(c) shows the evolution of the difference $\delta_n=t'_n-t_n$ between one pair of original and disturbed trajectories.
The difference $\delta_n$ increases exponentially, as expected for an adequate definition of distance between trajectories in a chaotic system.
Therefore, we conclude that chaos, as defined by an exponential sensitivity to differences in the initial conditions, can be achieved by either a one-loop or a two-loop ABN only in the presence of degradation effects. Both $\delta_n$ and $d(s)$ are useful for distinguishing the qualitative dynamics and for estimating the largest Lyapunov exponent of the system. Both quantities grow exponentially in the chaotic case and polynomially in the periodic case.
\begin{figure}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{Fig10}}
\end{center}
\caption{(a) Piecewise-linear degradation functions used in the two-loop system simulations. The solid and dashed lines are the delays of links 1 and 2, respectively. (b) Average over 100 pairs of trajectories of the logarithm of the Boolean distance $d(s)$. The dashed line is a fit to a linear function of slope $\lambda = 0.071$.
(c) Logarithm of the timing difference $\delta_n$ between one pair of perturbed and unperturbed trajectories. \label{fig:xordivergence}}
\end{figure}
\section{Conclusions}
Motivated by the recent experiments of Zhang \textit{et al.} (2009) where chaos was observed in an autonomous Boolean network, we studied systematically the dynamics of various simple Boolean networks. From previous work on Boolean Delay Equations, it is known that Boolean networks whose nodes obey ideal Boolean rules display steady or periodic behavior, or display an ultraviolet catastrophe where the number of kinks circulating in the network per unit time grows as a power law. Hence, chaos is not possible for such a network. The ultraviolet catastrophe is obviously prevented in experiments due to the non-ideal behavior of the logic gates. These effects include short-pulse rejection, asymmetry between the logic states, and the degradation effect. We first considered only the effect of short-pulse rejection for a network consisting of a single node and a single loop (link). We proved that only periodic behavior is possible. We then considered the case of a network consisting of a single node executing the {\sc xor}\ function with two loops, which is known to display an ultraviolet catastrophe. Even in this case, short-pulse rejection renders the behavior periodic. The situation is less clear when we take into account both short-pulse rejection and asymmetry between the logic states. While we were unable to prove that the two-loop network is always periodic, numerical simulations suggest that this is the case. Finally, we showed, through numerical simulations, that chaos is possible for even the simplest network, consisting of a copier and a single loop, when the degradation effect is included in the model. Chaos is also displayed in the two-loop {\sc xor}\ network when the degradation effect is taken into account.
Given that a degradation effect is present at some level in any real network, our results strongly suggest that there exists a class of experimental Boolean-like networks, containing at least one {\sc xor}\ connective and feedback loop, whose elements would display deterministic chaos.
\section{Acknowledgements}
We thank Andrew Mullhaupt for discussions of this work. JESS gratefully acknowledges the financial support of the NSF under grant PHY-0417372. HLDSC, DJG and RZ gratefully acknowledge the financial support of the US Office of Naval Research under MURI award \#N00014-07-1-0734.
\section*{Abstract}
The coercivity of La$_{1-x}$Sr$_x$MnO$_3$ thin films can be enhanced by Ru substitution for Mn. In order to elucidate its mechanism, we performed soft x-ray absorption and magnetic circular dichroism measurements at the Ru M$_{2,3}$ and Mn L$_{2,3}$ edges. We found that the spin direction of Ru and Mn are opposite and that Ru has a finite orbital magnetic moment. Cluster-model analysis indicated that the finite orbital magnetic moment as well as the reduced spin moment of Ru result from local lattice distortion caused by epitaxial strain from the SrTiO$_3$ substrate in the presence of spin-orbit interaction.
\vspace{24pt}
Hole-doped manganese oxides with the perovskite-type structure have been known as materials in which interplay between the spin, orbital, and charge degrees of freedom leads to various phases and functionalities \cite{JonkerPhysica1950, TokuraGMR, TokuraScience2000}. Among them, La$_{1-x}$Sr$_{x}$MnO$_{3}$ (LSMO) with 0.1 $\leq$ {\it x} $\leq$ 0.5 is a half-metal which shows giant magnetoresistance \cite{PD_LSMObulk}. LSMO thin films have high potential for spintronics applications, but there is a serious problem that the coercivity ({\it H}$_{\rm C}$) is too small: less than 10 Oe at 300 K. Yamada {\it et al.} \cite{Yamada} reported that one can enhance the coercivity of LSMO thin films by substituting Ru for Mn. They also observed a decrease of the magnetization with Ru doping, and attributed this to antiferromagnetic coupling between Mn and Ru. They proposed that charge transfer occurs from Mn$^{4+}$ to Ru$^{4+}$ (Mn$^{4+}$+Ru$^{4+}$ $\to$ Mn$^{3+}$+Ru$^{5+}$). In order to elucidate the mechanism of the coercivity enhancement, one needs microscopic information about the Ru atom such as the valence, spin, and orbital states in the LSMO matrix.
X-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism (XMCD) are powerful element-specific probes of the electronic and magnetic properties of complex materials. XMCD is the difference between XAS taken with light of positive and negative helicities. In this Letter, we have performed XAS and XMCD measurements at the Ru M$_{2,3}$ and Mn L$_{2,3}$ absorption edges of Ru-doped La$_{0.6}$Sr$_{0.4}$MnO$_{3}$ thin film in order to investigate the electronic and magnetic states of each element, in particular of Ru, and to elucidate the mechanism of the coercivity enhancement.
We fabricated a Ru-doped La$_{0.6}$Sr$_{0.4}$MnO$_{3}$ thin film by the pulsed laser deposition (PLD) method on a (001)-oriented SrTiO$_{3}$ (STO) substrate. 5 $\%$ of the Mn atoms were replaced by Ru atoms, and therefore, the composition was La$_{0.6}$Sr$_{0.4}$Mn$_{0.95}$Ru$_{0.05}$O$_{3}$. The thickness of this sample was 50 nm. The details of the sample fabrication were described elsewhere \cite{Yamada}. XAS and XMCD measurements were performed at BL23-SU of SPring-8. The spectra were recorded in the total electron yield (TEY) mode at the temperature {\it T} = 10 K, well below the Curie temperature {\it T}$_{\rm C}$ $\sim$ 355 K, and in the magnetic field {\it H} = 3 T applied perpendicular to the film surface. In order to detect the weak XMCD signals of the Ru M$_{2, 3}$ edges, we used the helicity-switching mode operated at 1 Hz \cite{1Hz}.
The XAS and XMCD spectra at the Mn L$_{2, 3}$ edge are shown in Figs. \ref{Fig123} (a) and (b), respectively. The spectra agree well with those of a pure La$_{0.6}$Sr$_{0.4}$MnO$_{3}$ bulk sample reported in Ref. \cite{KoideLSMObulk}. The XAS and XMCD spectra at the Ru M$_{2, 3}$ edge are shown in Figs. \ref{Fig123} (c), (d), and (e). They are the raw Ru M$_{2, 3}$ XAS data, the XAS spectra with the background subtracted, and the XMCD spectrum, respectively.
The XMCD intensity was recorded at each photon energy in the helicity switching mode, which enabled us to detect the weak XMCD signals at the Ru M$_{2,3}$ edge. The XMCD spectrum was then averaged between the two magnetic directions parallel and antiparallel to the photon propagation vector.
From the opposite signs of the XMCD signals between the Mn L$_{2, 3}$ and Ru M$_{2, 3}$ edges, one can unambiguously conclude that the spin direction of Ru is opposite to that of Mn, consistent with Yamada {\it et al.}'s observation of the decreased magnetization upon Ru doping \cite{Yamada}.
We have applied the XMCD sum rules \cite{TholePRL1992, CarraPRL1993} to the measured spectra and deduced the spin and orbital magnetic moments of Mn and Ru, as summarized in Table \ref{spin_orbital_m_Ru-LSMO_and_SRO}. One can see that the orbital moment of Ru is finite and parallel to the spin moment, in contrast to the negligibly small orbital magnetic moment in bulk SrRuO$_{3}$ (SRO) \cite{OkamotoPRB}.
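As an illustration of this step, the sketch below evaluates the orbital and spin sum rules in the form commonly used for $p \to d$ absorption edges, neglecting the magnetic dipole term $\left<T_z\right>$; the edge boundary \texttt{E\_split}, the hole count \texttt{n\_h}, and the background treatment are placeholder assumptions, not a record of the actual analysis behind Table \ref{spin_orbital_m_Ru-LSMO_and_SRO}.
\begin{verbatim}
import numpy as np

def sum_rule_moments(E, mu_p, mu_m, E_split, n_h):
    """Schematic XMCD sum rules for p -> d edges (L2,3 or M2,3).
    E          : photon energies (eV); background-subtracted spectra assumed
    mu_p, mu_m : XAS for positive / negative helicity
    E_split    : energy separating the L3 (M3) and L2 (M2) regions
    n_h        : number of d holes (an assumed input)"""
    dmu = mu_p - mu_m                 # XMCD
    smu = mu_p + mu_m                 # summed XAS
    low = E < E_split
    p = np.trapz(dmu[low], E[low])    # XMCD integral over the L3 (M3) edge
    q = np.trapz(dmu, E)              # XMCD integral over both edges
    r = np.trapz(smu, E)              # XAS integral over both edges
    m_orb = -4.0 * q * n_h / (3.0 * r)
    m_spin = -(6.0 * p - 4.0 * q) * n_h / r   # <T_z> neglected
    return m_orb, m_spin
\end{verbatim}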
Figures \ref{SRORuLSMO} (a) and (b) show a comparison of the XAS and XMCD spectra of Ru-doped LSMO with those of bulk SRO. In the case of Ru-doped LSMO, one can recognize structures which can be attributed to the unoccupied {\it e}$_{g}$ and {\it t}$_{2g}$ orbitals of the Ru 4{\it d} manifold. Corresponding features are seen in the spectra of SRO, which means that the {\it e}$_{g}$-{\it t}$_{2g}$ crystal-field splitting of Ru-doped LSMO is similar to that of SRO. The spin and orbital magnetic moments of Ru-doped LSMO and SRO are indicated in Table \ref{spin_orbital_m_Ru-LSMO_and_SRO} \cite{OkamotoPRB}. For Ru-doped LSMO, the spin magnetic moment {\it m}$_{\rm spin}$ = 0.64 {\it $\mu$}$_{\rm B}$ is as small as that ({\it m}$_{\rm spin}$ = 0.6 {\it $\mu$}$_{\rm B}$) of SRO but the orbital magnetic moment {\it m}$_{\rm orb}$ = 0.14 {\it $\mu$}$_{\rm B}$ is much larger than that ({\it m}$_{\rm orb}$ = 0.04 {\it $\mu$}$_{\rm B}$) of SRO. As we shall see below, the values of {\it m}$_{\rm spin}$ and {\it m}$_{\rm orb}$ sensitively depend on the strengths of the spin-orbit interaction and epitaxial strain.
Tetragonal local lattice distortion around the Ru atom is expected to be present for the epitaxial thin film \cite{KonishiJPSJ} and to split the {\it t}$_{2g}$ level of Ru into the {\it d}$_{xy}$ and {\it d}$_{zx}$/{\it d}$_{yz}$ sub-levels as shown in Fig. \ref{egSO} (a). Here, the splitting of the {\it t}$_{2g}$ level is denoted by {\it D}$_{t_{2g}}$. (For the splitting of the $e_g$ level, $D_{e_g}/D_{t_{2g}} =$ 2 has been assumed.) The spin-orbit coupling constant of the Ru 4{\it d} orbitals is {\it $\zeta$} = 0.1 eV \cite{MizokawaPRL,Herman}, which is larger than those ($\sim$0.01 eV) of the 3{\it d} orbitals of the first-series transition-metal atoms. {\it $\zeta$} also causes the energy splitting between the {\it J}$_{\rm eff}$ = 1/2 and 3/2 sub-levels \cite{Kim06032009}. If there were no tetragonal crystal field, the spin magnetic moment would be more dramatically reduced because the four 4{\it d} electrons would occupy the {\it J}$_{\rm eff}$ = 3/2 sub-levels. In the presence of the tetragonal crystal field, the ground state becomes a mixture of {\it J}$_{\rm eff}$ = 1/2 and 3/2 states and has finite spin and orbital magnetic moments. In order to interpret the {\it m}$_{\rm spin}$ and {\it m}$_{\rm orb}$ of Ru doped in LSMO, we have performed cluster-model calculations of the Ru M$_{2, 3}$-edge XAS and XMCD spectra. The other parameters and their values used for the calculation are taken from Ref. \cite{parameta} and listed in Table \ref{parameta}. (Because the multiplet splitting became too large if the tabulated Slater integrals for the Ru 4{\it d} and 3{\it p} orbitals \cite{Mann} were used, the Slater integrals were reduced to 25 \% of their tabulated values, representing the strong {\it p-d} hybridization and/or the delocalization of the 4{\it d} electrons in 4{\it d} transition-metal oxides.) The XAS and XMCD spectra for Ru$^{4+}$ calculated for various parameter sets are shown in Figs. \ref{egSO} (b) to (e). In panels (b) and (c), the {\it D}$_{t_{2g}}$ dependence of the spectra with fixed $\zeta$ = 0.1 eV is shown. The spectra for {\it D}$_{t_{2g}}$ = 0.4 eV best reproduce the experimental XAS and XMCD spectra concerning the line shapes and the {\it m}$_{\rm spin}$ and {\it m}$_{\rm orb}$ values.
(The splitting of the M$_{3}$ edge, however, could not be eliminated within the physically reasonable parameter range, probably because the {\it e}$_{g}$ orbitals of Ru are too itinerant to be properly described using the cluster model.)
If we change {\it $\zeta$}, agreement with experiment becomes worse as shown in Figs. \ref{egSO} (d) and (e).
The observed small spin ($m_{\rm spin}=$ 0.64 {\it $\mu$}$_{\rm B}$) and finite orbital magnetic moment ($m_{\rm orb}=$ 0.14 {\it $\mu$}$_{\rm B}$) of the Ru$^{4+}$ ion can thus be attributed to the combined effect of the epitaxial strain and spin-orbit interaction of the Ru$^{4+}$ ({\it d}$^{ 4}$) ion. In order to test the possibilities of Ru valences other than 4+, we performed calculations for Ru$^{3+}$ and Ru$^{5+}$, too, as shown in Figs. \ref{Fig56} (a) and (b), but no better agreement with experiment could be obtained: If Ru$^{5+}$ is assumed, {\it m}$_{\rm spin}$ becomes too large (Table \ref{spin_orbital_m_Ru-LSMO_and_SRO}) and therefore the XMCD spectrum becomes much stronger than experiment. If Ru$^{3+}$ is assumed, the direction of the orbital magnetic moment becomes opposite to the spin magnetic moment, in disagreement with experiment (Table \ref{spin_orbital_m_Ru-LSMO_and_SRO}).
The positive {\it D}$_{t_{2g}}$ value deduced in the present study indicates that the RuO$_{6}$ octahedron is compressed within the {\it a-b} plane. At first glance, this appears to contradict the tensile strain of the LSMO thin film grown on the STO substrate \cite{KonishiJPSJ}. However, because the radius of the Ru$^{4+}$ ion is larger than the average radius of the Mn$^{3+}$/Mn$^{4+}$ ions as well as the radius of the Ti$^{4+}$ ion in the STO substrate \cite{Shannon}, the RuO$_{6}$ octahedron in the LSMO thin film epitaxially grown on the STO substrate will be compressed within the {\it a-b} plane while it may be elongated along the flexible {\it c}-direction.
In fact, SrRuO$_3$ thin films grown on SrTiO$_3$ substrates exhibit an easy magnetization axis perpendicular to the film \cite{xia}, analogous to the appearance of the orbital magnetic moment in the present case.
We therefore consider that interplay between the epitaxial strain and spin-orbit interaction gives rise to the finite orbital magnetic moment of Ru$^{4+}$, which enhances the magnetic anisotropy.
Since magnetic anisotropy generally enhances the hysteretic behavior of the magnetization and hence the coercivity, the coercivity of the LSMO thin film is enhanced by Ru doping.
In conclusion, we have performed XAS and XMCD studies of the Ru-doped LSMO thin film in order to investigate the origin of the enhanced coercivity. From the XAS and XMCD spectra, it has been found that the spin direction of Ru is opposite to that of Mn and that Ru has a finite orbital magnetic moment unlike that in SRO. On the basis of the cluster-model analysis of the spectra, we conclude that the valence of Ru is 4+ and the finite orbital magnetic moment arises from the spin-orbit interaction and tetragonal crystal field arising from the epitaxial strain from the STO substrate. Such a Ru atom would be magnetically anisotropic and hence enhance the coercivity of the Ru-doped LSMO thin films. In order to confirm this scenario, measurement of the orbital magnetic moment for different magnetic field directions as well as measurements of hysteresis using the element-specific XMCD technique are desired.
We would like to thank J.-M. Chen for informing us of the cluster-model calculations on the Ru 2{\it p} core-level spectra of LaCe$_{0.5}$Ru$_{0.5}$O$_{3}$ prior to publication.
This work was supported by a Grant-in-Aid for Scientific Research from JSPS (S22224005) and the Quantum Beam Technology Development Program from JST.
The experiment was performed under the approval of the Photon Factory Program Advisory Committee (proposal No. 2010G187) and under the Shared Use Program of JAEA Facilities (Proposal No. 2011A3840, No. 2012A3824/BL23SU).
\begin{figure}[b]
\begin{center}
\includegraphics[width=12cm]{1.eps}
\caption{(Color online) (a), (b) XAS and XMCD spectra at the Mn L$_{2,3}$ edge of La$_{0.6}$Sr$_{0.4}$Mn$_{0.95}$Ru$_{0.05}$O$_{3}$ thin film under the magnetic field of {\it H} = 3 T at temperature {\it T} = 10 K.
(c), (d), (e) Corresponding XAS and XMCD spectra at the Ru M$_{2,3}$ edge. (c) shows raw data and (d) shows data with the smooth background subtracted.}
\label{Fig123}
\end{center}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{2.eps}
\caption{(Color online) XAS and XMCD spectra at the Ru M$_{2,3}$ edge of bulk SrRuO$_{3}$ polycrystal \cite{OkamotoPRB} (a) compared with those of the Ru-doped LSMO thin film (b).}
\label{SRORuLSMO}
\end{center}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{3.eps}
\caption{(Color online) Cluster-model calculations of the XAS and XMCD spectra at the Ru M$_{2,3}$ edge for the Ru$^{4+}$ valence state. (a) One-electron energy-level diagram of the Ru$^{4+}$ ion in the presence of both spin-orbit (SO) interaction and tetragonal lattice distortion, which split the $t_{2g}$ level by $D_{t_{2g}}$. The degeneracy of each level is indicated in brackets. (b), (c) Tetragonal crystal-field ({\it D}$_{t_{2g}}$) dependence of the spectra. The spin-orbit parameter {\it $\zeta$} is fixed at 0.1 eV and the other parameters are given in Table \ref{parameta}. (d), (e) Spin-orbit interaction ($\zeta$) dependence of the spectra. {\it D}$_{t_{2g}}$ is fixed at 0.4 eV.}
\label{egSO}
\end{center}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.8cm]{4.eps}
\caption{(Color online)
Comparison of the experimental Ru M$_{2,3}$ XAS (a) and XMCD (b) spectra of La$_{0.6}$Sr$_{0.4}$Mn$_{0.95}$Ru$_{0.05}$O$_{3}$ with cluster-model calculations for the Ru$^{3+}$, Ru$^{4+}$, and Ru$^{5+}$ valence states. The calculation for Ru$^{4+}$ is in better agreement with experiment than those for Ru$^{5+}$ and Ru$^{3+}$. If Ru$^{5+}$ is assumed, {\it m}$_{\rm spin}$ becomes too large and therefore the XMCD spectrum becomes much stronger than experiment. If Ru$^{3+}$ is assumed, the direction of the orbital magnetic moment becomes opposite to the spin magnetic moment, in disagreement with experiment.}
\label{Fig56}
\end{center}
\end{figure}
\clearpage
\renewcommand{\thetable}{\Roman{table}}
\begin{table}[t]
\begin{center}
\caption{Spin and orbital magnetic moments ({\it m}$_{\rm spin}$, {\it m}$_{\rm orb}$) of Mn and Ru in La$_{0.6}$Sr$_{0.4}$Mn$_{0.95}$Ru$_{0.05}$O$_{3}$ (Ru-doped LSMO) in units of {\it $\mu$}$_{\rm B}$ deduced from the XAS and XMCD spectra using the sum rules. The deduced values are compared with those derived from the cluster-model calculation for the Ru$^{3+}$, Ru$^{4+}$, and Ru$^{5+}$ valence states. The {\it m}$_{\rm spin}$ and {\it m}$_{\rm orb}$ of SrRuO$_{3}$ (SRO) \cite{OkamotoPRB} are also indicated for comparison. The spin directions of Mn and Ru are taken to be positive and negative, respectively, corresponding to those in Ru-doped LSMO. }
\vspace{1cm}
\begin{tabular}{ccccccc}
\hline
& \multicolumn{2}{c}{Expt.} & \multicolumn{3}{c}{Cluster-model} & Expt. \\
& \multicolumn{2}{c}{Ru-doped LSMO} & \multicolumn{3}{c}{Calculation} & SRO \cite{OkamotoPRB} \\
& Mn & Ru & Ru$^{3+}$ & Ru$^{4+}$ & Ru$^{5+}$ & Ru \\ \hline
{\it m}$_{\rm spin}$ & 3.58 & -0.64 & -0.86 & -0.50 & -2.7 & -0.6 \\
{\it m}$_{\rm orb}$ & 0.02 & -0.14 & 0.98 & -0.23 & 0.08 & -0.04 \\ \hline
\label{spin_orbital_m_Ru-LSMO_and_SRO}
\end{tabular}
\end{center}
\end{table}
\clearpage
\begin{table} [t]
\begin{center}
\caption{Electronic structure parameters used for the cluster-model calculation of the Ru M$_{2,3}$ XAS and XMCD spectra for Ru$^{3+}$, Ru$^{4+}$, and Ru$^{5+}$ in units of eV. 10{\it Dq}: Octahedral crystal-field splitting, {\it $\Delta$}: {\it p}-to-{\it d} charge-transfer energy, {\it U}$_{4d-4d}$: On-site 4{\it d}-4{\it d} Coulomb energy, {\it U}$_{4d-3p}$: On-site 4{\it d}-3{\it p} core Coulomb energy, {\it pd}{\it $\sigma$}: Hopping integral between the Ru 4{\it d} and oxygen {\it p} orbitals (Slater-Koster parameter). These parameter values are taken from Ref. \cite{parameta}.}
\vspace{1cm}
\begin{tabular}{cccccc}
\hline
& 10{\it Dq} & {\it $\Delta$} & {\it U}$_{4d-4d}$ & {\it U}$_{4d-3p}$ & {\it pd}{\it $\sigma$} \\ \hline
Ru$^{3+}$ & 3.5 & 5 & 3 & 3.6 & -2.31 \\
Ru$^{4+}$ & 3.5 & 2 & 3 & 3.6 & -2.52 \\
Ru$^{5+}$ &3.5 & -1 & 3 & 3.6 & -2.78 \\ \hline
\label{parameta}
\end{tabular}
\end{center}
\end{table}
\clearpage
\section{Introduction}
Many engineers have a justifiable predilection for coordinate-based descriptions of the world.
However, the use of coordinate-free descriptions is consistently leveraged in Jerry Marsden's work to gain insights which would otherwise have been clouded by the complexities which coordinates bring with them.
For example, proving anything non-trivial about the inviscid fluid equations
\begin{align}
\partial_t u^i + u^j \partial_j u^i + \partial^i p = 0 \qquad , \qquad \partial_i u^i = 0 \quad , \quad u \in \mathfrak{X}(\mathbb{R}^d) \label{eq:Euler}
\end{align}
is notoriously difficult.
However, with the publication of \cite{Arnold1966}, differential geometers were permitted to replace \eqref{eq:Euler} with a right-trivialized geodesic equation on a Lie group (i.e. an Euler-Poincar\'e equation).
In particular, if one was willing to use geometry, one could study inviscid fluids \emph{without the need to invoke \eqref{eq:Euler} directly!}
Only three years later, the proof of local existence-uniqueness was realized by David Ebin and Jerry Marsden using these coordinate-free notions \cite{EbinMarsden1970}.
In studying swimming in the mid-Reynolds regime, one is confronted with coupling a solid body to a Navier-Stokes fluid.
It is just as true today as it was in 1966 that the Navier-Stokes equations are difficult to work with.
A very modest extension of \cite{Arnold1966} allows us to view fluid-structure problems as forced Lagrangian systems on principal bundles \cite{JaVa2013}.
In this paper, we will use this geometric characterization of fluid-structure interaction to study swimming in viscous flows.
We will use these geometric tools to explore the question: \emph{Can we reasonably interpret swimming as a limit cycle?}
Unfortunately, we will not be able to answer this question in full.
However, we will be able to clarify the crucial role which $\SE(d)$-symmetry will play in the final answer.
\emph{A limit-cycle interpretation of swimming is valuable because it would conform with an existing body of knowledge derived from laboratory and computer experiments.
Moreover, this simple characterization of swimming could be of interest to control engineers who desire to use passive mechanisms to achieve robust behavior with simple open-loop control algorithms.}
\paragraph{\bf Main Contributions}
We will understand the system consisting of a body immersed in a fluid as a dissipative system evolving on a phase space $P$. One observes that the system is invariant with respect to the group of isometries of $\mathbb{R}^d$, i.e. the special Euclidean group $\SE(d)$. This observation suggests that one can describe the system evolving on the quotient manifold $[P] = \frac{P}{\SE(d)}$. Given this reduction, the main contributions of this paper are:
\begin{itemize}
\item Under reasonable assumptions on the Lagrangian and the viscous frictions, we will prove the existence of an asymptotically stable point for the dynamics in $[P]$.
\item We will illustrate how relative limit cycles are produced by exponentially stable equilibria in finite-dimensional dynamical systems under sufficiently small time-periodic perturbations.
\item For sufficiently small time-periodic internal body forces, we will speculate on the existence of loops in $[P]$ which approximately satisfy the dynamics on $[P]$.
\item We illustrate how loops in $[P]$ are lifted to paths in $P$, where each period is related to the previous by a rigid rotation and translation.
\end{itemize}
\subsection{Background}
There exists a substantial body of knowledge in the form of computational and biological experiments which are consistent with the hypothesis that swimming could be interpreted as a limit cycle.
For example, experiments involving tethered dead fish immersed in a flow behind a bluff body suggest an ability to passively harvest energy from the surrounding vorticity of the flow.
The same studies also provide a relevant example of oscillatory behavior as a stable state for an unactuated system \cite{Beal2006}. Moreover, in living fish, periodic motor neuron actuation has been recorded directly and periodic internal elastic forces have been approximated via linear elasticity models \cite{Shadwick1998}. Finally, the notion of central pattern generators\footnote{Central pattern generators (CPGs) are neural networks which produce time-periodic signals.} has become widely accepted among biologists studying locomotion \cite{Delcomyn1980,GillnerWallen1985,OttoFriesen1994}. In particular, a central pattern generator for lamprey swimming has been identified and EMG readings have been recorded in-vitro to verify that the swimming mechanism does not rely solely on feedback \cite{WallenWilliams1984} (see Figure \ref{fig:emg}). These experiments and observations from biology suggest that passive mechanisms might play a significant role in understanding swimming.
\begin{figure}[h]
\centering
\includegraphics[width = 3in]{./images/wallen}
\caption{An in-vitro EMG recording of a lamprey spinal chord in ``fictive swimming'' \cite{WallenWilliams1984}.}
\label{fig:emg}
\end{figure}
Additionally, numerical experiments involving rigid bodies with oscillating forces suggest that uniform motion (i.e. flapping flight) is an attracting state for certain pairs of frequencies and Reynolds numbers \cite{Alben_Shelley_2005,Zhang2010}. Closer to what will be demonstrated here, numerical simulations of a 2-dimensional model of a lamprey at high (but not infinite) Reynolds numbers illustrate swimming as an emergent phenomenon arising asymptotically from time-periodic internal body forces. Trajectories of this system converge to cyclic behavior after very few oscillations when starting from rest \cite{Tytell2010}.
A similar study was carried out to understand the difference between periodic control forces and prescribed kinematics in \cite{Wilson2011}. Here, regular periodic behavior was observed for both. Moreover, the prescribed kinematic swimmers were unable to swim in the inviscid regime due to time-reversibility, while coherent locomotion was consistently observed for both the forced and prescribed kinematic swimmers at $Re = 70,140,350,560,700$.
Finally, after the initial submission of this article, a series of numerical experiments to test this ``limit-cycle hypothesis'' was performed for n-linked swimmers.
Here, the authors viewed swimming as analogous to the emergence of limit cycles in a forced-damped harmonic oscillator in what they refer to as the ``forced-damped-oscillation framework.''
The numerical experiments consisted of placing an n-linked chain with an elastic restoring force on the joint angles into a Navier-Stokes fluid using the immersed body method.
The results consistently suggested that the dynamics admit a stable relative limit cycle \cite{Bhalla2013}.
Of course, vorticity shedding plays a fundamental role in middle- and high-Reynolds-number swimming \cite{Vogel}.
\begin{figure}[h]
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./images/PLOS1}
\caption{A plot taken from \cite{Bhalla2013} of the horizontal velocity, $U$, and vertical velocity, $V$, of n-linked swimmers with time-periodic internal body forces.}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./images/PLOS2}
\caption{A vorticity isosurface of an n-linked swimmer courtesy of \cite{Bhalla2013}.}
\end{minipage}
\end{figure}
In this paper we will approach the problem of fluid-structure interaction in Navier-Stokes fluids as an instance of Lagrangian reduction by symmetry \cite{CeMaRa2001}.
Recent work based upon a modest generalization of \cite{Arnold1966} has accomplished this reduction by viewing the configuration space of fluid-structure interaction as a $\Diff_{\vol}( \ensuremath{\aquarius}_{b_0})$-principal bundle,
where $\Diff_{\vol}( \ensuremath{\aquarius}_{b_0})$ is the diffeomorphism group for the reference-domain of the fluid \cite{JaVa2013}.
In particular, the standard equations of motion for a passive body immersed in a Navier-Stokes fluid can be seen as dissipative Lagrange-Poincar\'e equations.
Following Professor Marsden's tradition of giving credit to Jean le Rond d'Alembert for his formulation of the Lagrange-d'Alembert principle, it would be fair to label the equations of motion for a body immersed in a Navier-Stokes fluid as an instance of \emph{Lagrange-Poincar\'{e}-d'Alembert equations}.
Just as \cite{Arnold1966} allowed geometers to replace the coordinate-based description of an Euler fluid with an Euler-Poincar\'e equation, \cite{JaVa2013} will serve as a sanity check for us, and allow us to replace the equations for a Navier-Stokes fluid coupled to an elastic solid with a Lagrange-Poincar\'e-d'Alembert equation.
It is worth noting that the constructions to be presented in this paper are different from those typically employed in applications of differential geometry to fluid-structure interaction.
In the low Reynolds regime, one frequently encounters geometric constructions initially articulated in \cite{ShapereWilczek1989}.
Similarly, in the potential flow regime, a similar set of constructions was described in \cite{Lewis1986}.
Both of these constructions lead to a number of insights in aquatic locomotion at extreme Reynolds numbers \cite{Ehlers2011,Kanso2005,Kelly1996,Kelly2000,Koiller1996,Munnier2011}.
Principal connections are crucial for these constructions, but interpolating between these extreme Reynolds regimes has proven difficult.
In particular, \emph{there will be absolutely no principal connections in this paper}.
\subsection{Conventions and Notation}
All objects and morphisms will be assumed to be sufficiently smooth.
Moreover, we will not address the existence or uniqueness of solutions for fluid-structure systems and all algebraic manipulations will be interpreted formally.
If $M$ is a smooth manifold then we will denote the tangent bundle by $\tau_M :TM \to M$, and the tangent lift of a map $f:M \to N$ will be denoted $Tf : TM \to TN$.
The set of vector fields on $M$ will be denoted $\mathfrak{X}(M)$ and the set of time-periodic vector fields on $M$ will be denoted $\mathfrak{X}(M)^{S^1}$.
A deformation of a vector field $X \in \mathfrak{X}(M)$ is a continuous\footnote{We view $\mathfrak{X}(M)$ as a Fr\'echet vector space.} family of vector fields $X_{\varepsilon} \in \mathfrak{X}(M)$ parametrized by a real parameter $\varepsilon \in \mathbb{R}$ which takes values in a neighborhood of $0 \in \mathbb{R}$ and is such that $X_0 = X$.
Given that $\mathfrak{X}(M)$ is contained in $\mathfrak{X}(M)^{S^1}$, we can consider time-periodic deformations of vector fields as well.
The flow of a vector-field, $X$, (perhaps time-dependent) will be denoted by $\Phi_t^X$.
Lastly, given any map $f:M \to N$, the map $f^{-1}:N \to \mathrm{Set}(M)$ is the set-valued map defined by $f^{-1}(n) = \{ m \in M \vert f(m) = n \}$.
\section{Limit cycles} \label{sec:LC}
Let $M$ be a finite-dimensional Riemannian manifold with norm $\| \cdot \| : TM \to \mathbb{R}$.
We can use the norm to define the notion of exponential stability.
Informally, an exponentially stable equilibrium is an equilibrium for which nearby trajectories are attracted to at an exponential rate.
Formally, we say an equilibrium is \emph{exponentially stable} if the spectrum of the linearized system lies \emph{strictly} in the left half of the complex plane.
However, the following (and equivalent) definition will be of greater use.
\begin{definition}
Let $x^* \in M$ be an equilibrium of the vector field $X \in \mathfrak{X}(M)$. Let $TX \in \mathfrak{X}(TM)$ be the tangent lift of $X$. We call $x^*$ an \emph{exponentially stable equilibrium} if there exists a $\lambda < 0$ such that
\[
\frac{d}{dt} \| v(t) \| < \lambda \| v(t) \| \quad , \quad \forall t > 0
\]
where $v(t)$ is a solution curve of $TX$.
\end{definition}
If one prefers to view exponential stability in terms of flows, we can use the Riemannian distance metric $d: M \times M \to \mathbb{R}$.
Then, an exponentially stable equilibrium $x^* \in M$ of a vector field $X \in \mathfrak{X}(M)$ is an equilibrium where there exists a neighborhood $U \subset M$ containing $x^*$ such that for any integral curve $x(t)$ with $x(0) \in U$ the equation
\[
d(x(t) , x^*) < e^{\lambda t} d(x(0) , x^*) \quad , \quad \forall t > 0
\]
holds for some $\lambda < 0$.
A special property of exponentially stable equilibria is what some control theorists call \emph{robustness} \cite{ZhouDoyle} and what some dynamical systems theorists call \emph{persistence} \cite{Fenichel1971,Hirsch77,Eldering2013}.
Let $X_\varepsilon \in \mathfrak{X}(M)$ be a deformation of the vector field $X \in \mathfrak{X}(M)$. Given an exponentially stable point $x^* \in M$ of $X$, we can assert the existence of exponentially stable equilibria of $X_\varepsilon$ for sufficiently small $\varepsilon$. This robustness of behavior can be vastly generalized by considering normally hyperbolic invariant manifolds.
\begin{definition}[Normally hyperbolic invariant manifold\footnote{This definition was taken from the introduction of \cite{Eldering2013} and is equivalent to the definition used in \cite{Hirsch77}.}]
Let $N \subset M$ be a compact invariant submanifold of the vector field $X \in \mathfrak{X}(M)$, and let $\Phi_t^X$ be the flow of $X$.
We call $N$ a \emph{normally hyperbolic invariant manifold} if there exists a $T\Phi^X_t$-invariant splitting $T_{N}M \equiv TN \oplus E^s \oplus E^u$ and rates $\rho_s < - \rho < 0 < \rho < \rho_u$ such that
\begin{align}
\| T\Phi^X_t(v) \| \leq C \cdot e^{ \rho |t|} \| v \| \quad , \quad \forall v \in TN , t \in \mathbb{R} \\
\| T \Phi^X_t(v)\| \leq C_u \cdot e^{ \rho_u t } \| v \| \quad , \quad \forall v \in E^u , t \leq 0 \\
\| T \Phi^X_t(v)\| \leq C_s \cdot e^{ \rho_s t } \| v \| \quad , \quad \forall v \in E^s , t \geq 0
\end{align}
for some constants $C,C_u,C_s > 0$. If $E^u$ has trivial fibers, then we call $N$ an \emph{exponentially stable invariant manifold}.
\end{definition}
Now that we are equipped with the definition of a normally hyperbolic invariant manifold, we can state the persistence theorem (a.k.a. Fenichel's theorem).
\begin{thm}[see Theorem 1 of \cite{Fenichel1971} or section 4 of \cite{Hirsch77}] \label{thm:persistence}
Let $X_{\varepsilon} \in \mathfrak{X}(M)$ be a deformation of $X \in \mathfrak{X}(M)$ and let $N \subset M$ be a compact normally hyperbolic invariant manifold of $X$. Then for sufficiently small $\varepsilon > 0$ there exists a normally hyperbolic invariant manifold $N_\varepsilon \subset M$ of $X_\varepsilon$, which is diffeomorphic to $N$ and contained in a neighborhood of $N$.
\end{thm}
We will not need Theorem \ref{thm:persistence} in its full generality because we will only be concerned with a special instance of normally hyperbolic invariant manifolds. In particular, we will be concerned with exponentially stable limit cycles.
\begin{definition}[Exponentially stable limit cycle]
An exponentially stable invariant manifold which is homeomorphic to $S^1$ is called an \emph{exponentially stable limit cycle}.
\end{definition}
We can alternatively define an exponentially stable limit cycle using the distance metric $d:M \times M \to \mathbb{R}$. Given a periodic trajectory $x^*(t)$, the orbit $\Gamma$ is an exponentially stable limit cycle if there exists a neighborhood $U$ of $\Gamma$ and a contraction rate $\lambda < 0$ such that
\[
d( x(t) , \Gamma) \leq e^{\lambda t} d( x(0) , \Gamma) \quad \forall t > 0
\]
for all solution curves $x(t)$ with $x(0) \in U$. In any case, a direct corollary of Theorem \ref{thm:persistence} is the persistence of exponentially stable limit cycles. That is to say:
\begin{cor} \label{cor:ESLC}
Let $\Gamma$ be an exponentially stable limit cycle of $X \in \mathfrak{X}(M)$ and let $X_\varepsilon \in \mathfrak{X}(M)$ be a deformation of $X$. Then for sufficiently small $\varepsilon > 0$ there exists an exponentially stable limit cycle $\Gamma_\varepsilon$ of $X_\varepsilon$ which is in a neighborhood of $\Gamma$.
\end{cor}
Given a time-periodic vector field $Y \in \mathfrak{X}(M)^{S^1}$, we can consider the \emph{autonomous} vector field on the time-augmented phase space $M \times S^1$ given by $Y \times \partial_\theta \in \mathfrak{X}(M \times S^1)$.
In particular, the vector field $Y \times \partial_\theta$ corresponds to the autonomous dynamical system
\begin{align*}
\dot{\theta} = 1 \qquad , \qquad \dot{x} = Y(x,\theta).
\end{align*}
If the vector field $Y \times \partial_\theta$ admits an exponentially stable limit cycle $(x(t) , \theta(t) ) \in M \times S^1$, then $\theta(t) = t \bmod 2\pi$ and $x(t)$ is $2\pi$-periodic.
This observation justifies the following definition.
\begin{definition}
Let $Y \in \mathfrak{X}(M)^{S^1}$. Given a periodic solution curve $x(t) \in M$, we call the orbit $\Gamma := x(S^1)$ a \emph{non-autonomous exponentially stable limit cycle} if $\Gamma \times S^1$ is an exponentially stable limit cycle for $Y \times \partial_\theta$.
\end{definition}
Given the definition of a non-autonomous exponentially stable limit cycle, we can specialize Corollary \ref{cor:ESLC} to the case of time-periodic dynamical systems. In particular, we arrive at:
\begin{prop} \label{prop:NAESLC}
Let $x^* \in M$ be an exponentially stable equilibrium of $X \in \mathfrak{X}(M)$ and let $X_\varepsilon \in \mathfrak{X}(M)^{S^1}$ be a time-periodic deformation of $X$. Then for sufficiently small $\varepsilon > 0$ the vector field $X_\varepsilon$ admits a non-autonomous exponentially stable limit cycle in a neighborhood of $x^*$.
\end{prop}
\begin{proof}
Because $x^*$ is an exponentially stable equilibrium of $X$, we can see that $(x^* , \theta)$ for $\theta \in S^1$ is a solution curve of $X \times \partial_\theta$ with orbit $\{x^*\} \times S^1$.
In particular, $\{ x^* \} \times S^1$ is an exponentially stable limit cycle with a contraction rate $\rho_s$ equal to the contraction rate of $x^*$ in the dynamical system defined by $\dot{x} = X(x)$.
By Corollary \ref{cor:ESLC}, the vector field $X_\varepsilon \times \partial_\theta \in \mathfrak{X}(M \times S^1)$ also exhibits a limit cycle, $(x_\varepsilon(\theta) , \theta)$, in a neighborhood of $\{x^*\} \times S^1$.
This means that $x_\varepsilon(\theta)$ is a non-autonomous exponentially stable limit cycle for $X_\varepsilon$ in a neighborhood of $x^*$.
\end{proof}
The significance of Proposition \ref{prop:NAESLC} is that we can time-periodically deform systems with exponentially stable equilibria to produce non-autonomous exponentially stable limit cycles.
\begin{example}
Consider the equations of motion for a perturbed linear damped mass-spring system,
\begin{align}
\frac{d}{dt} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} y \\ -x -y \end{bmatrix} + \varepsilon \begin{bmatrix} 0 \\ \sin(t) \end{bmatrix}. \label{eq:example1}
\end{align}
We see that for $\varepsilon = 0$, the system admits an exponentially stable point $(x,y) = (0,0)$.
When $\varepsilon > 0$, the non-autonomous limit cycle of Proposition \ref{prop:NAESLC} emerges.
Typical trajectories of the system for $\varepsilon = 0,1$ are shown in Figures \ref{fig:SP} and \ref{fig:LC}.
\end{example}
\begin{figure}[h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width = \textwidth]{./images/SP.pdf}
\caption{A trajectory of \eqref{eq:example1} with $\varepsilon = 0$}
\label{fig:SP}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width = \textwidth]{./images/LC.pdf}
\caption{A trajectory of \eqref{eq:example1} with $\varepsilon = 1$}
\label{fig:LC}
\end{minipage}
\end{figure}
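As a quick numerical illustration of this example (a minimal sketch; the explicit Euler integrator, step size, and initial condition are arbitrary choices), one can integrate Eq.~(\ref{eq:example1}) and observe the transition from the stable point to the non-autonomous limit cycle:
\begin{verbatim}
import math

def rhs(t, x, y, eps):      # right-hand side of Eq. (eq:example1)
    return y, -x - y + eps * math.sin(t)

def integrate(eps, x=2.0, y=0.0, dt=1e-3, T=60.0):
    t = 0.0
    while t < T:            # explicit Euler, adequate for illustration
        dx, dy = rhs(t, x, y, eps)
        x, y, t = x + dt * dx, y + dt * dy, t + dt
    return x, y

print(integrate(eps=0.0))   # spirals into the equilibrium (0, 0)
print(integrate(eps=1.0))   # settles onto a 2*pi-periodic limit cycle
\end{verbatim}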
\section{Relative limit cycles} \label{sec:RLC}
In this section we will consider dynamical systems with Lie group symmetries.
Let $G$ be a Lie group which acts on $M$ by a left action.
The $G$-orbit of a point $x \in M$ is the set $[x] := \{ g \cdot x \vert g \in G \} \equiv G \cdot x$.
We denote the quotient space by $[M] := \{ [x] : x \in M \}$ and we call the map $\pi: x \in M \mapsto G \cdot x \in [M]$ the quotient projection.
If the action of $G$ is free and proper, then the quotient projection is a smooth surjection and the triple $(M,[M], \pi)$ is a fiber bundle known as a \emph{principal $G$-bundle} \cite[Proposition 4.1.23]{FOM}.
We now present the natural notion of $G$-invariance for functions on $M$.
Note that for any $[f] \in C^{\infty}([M])$, we can define the smooth function $[f] \circ \pi \in C^{\infty}(M)$.
Moreover, $[f] \circ \pi$ is $G$-invariant because
\[
[f] \circ \pi(g \cdot x) = [f]( G \cdot (g \cdot x) ) = [f]( G \cdot x) = [f] \circ \pi(x).
\]
Conversely, for a $G$-invariant function $f \in C^{\infty}(M)$, we see that $f( G \cdot x) = f(x)$.
Noting that the left-hand side of this equation involves the application of $f$ to a $G$-orbit, we have apparently found a function $[f] \in C^{\infty}( [M] )$ such that $[f] \circ \pi = f$.
In other words, the set of $G$-invariant functions on $M$ is identifiable with the set of smooth functions on $[M]$.
This $G$-invariance for functions on $M$ extends to $G$-invariant vector fields.
We do this by extending the action on $M$ to an action on $TM$ by the tangent lift.
In particular, the action of $g$ on a point $v = \frac{dx}{dt} \in TM$ is given by
\[
g \cdot v := \left. \frac{d}{dt} \right|_{t=0} (g \cdot x(t) ).
\]
In this case, we call $X \in \mathfrak{X}(M)$ a $G$-invariant vector field if
\begin{align}
g \cdot X(x) = X(g \cdot x) \quad \forall g \in G, x \in M. \label{eq:Gvf}
\end{align}
Moreover, the flow $\Phi^X_t$ is $G$-invariant as well.
For if $X$ is $G$-invariant and $x(t) \in M$ is a solution curve, then
\[
\frac{d}{dt} (g \cdot x(t)) = g \cdot \dot{x}(t) = g \cdot X(x(t)) = X( g \cdot x(t)).
\]
Thus, $g \cdot x(t)$ is a solution and so $g \circ \Phi^X_t = \Phi^X_t \circ g$.
\begin{prop}\label{prop:reduced_vf}
If $X \in \mathfrak{X}(M)$ is $G$-invariant then there exists a unique vector field $[X] \in \mathfrak{X}([M])$ such that $T\pi \cdot X = [X] \circ \pi$. Moreover, the flow of $[X]$ is $\pi$-related to the flow of $X$. In other words, the diagrams
\[
\begin{tikzcd}
M \arrow{r}{X} \arrow{d}{\pi} & TM \arrow{d}{T\pi} \\
{[M]} \arrow{r}{[X]} & {T[M]}
\end{tikzcd}
\qquad , \qquad
\begin{tikzcd}
M \arrow{r}{\Phi^X_t} \arrow{d}{\pi} & M \arrow{d}{\pi} \\
{[M]} \arrow{r}{ \Phi^{[X]}_t} & {[M]}
\end{tikzcd}
\]
are commutative.
\end{prop}
Given a $G$-invariant vector field $X \in \mathfrak{X}(M)$ (i.e. one satisfying \eqref{eq:Gvf}) and the corresponding $[X] \in \mathfrak{X}([M])$ with $T\pi \cdot X = [X] \circ \pi$, we call $[X]$ the reduced vector field and $X$ the unreduced vector field. This correspondence between $X$ and $[X]$ allows us to discuss relative periodicity.
\begin{definition}[relative periodicity]
Let $X \in \mathfrak{X}(M)$ be a $G$-invariant vector field on the $G$-principal bundle $\pi:M \to [M]$. Let $[X] \in \mathfrak{X}([M])$ be the reduced vector field. The orbit of a solution curve $x(t)$ of $X$ is called a \emph{relatively periodic orbit} if $\pi(x(t))$ is a periodic orbit of $[X]$.
\end{definition}
A remarkable characteristic of relative periodic orbits is the following.
\begin{prop} \label{prop:regular}
Let $X \in \mathfrak{X}(M)$ be a $G$-invariant vector field. If $x(t)$ is a relative periodic orbit of period $T$, then there exists some $g \in G$ such that $x(T) = g \cdot x(0)$. Moreover, $x(kT) = g^k \cdot x(0)$ for each $k \in \mathbb{Z}$.
\end{prop}
We call the element $g \in G$ of Proposition \ref{prop:regular} the \emph{phase shift} of the periodic orbit $\pi(x(t))$.
To hint at the relevance of this concept to locomotion, we should mention that if $G = \SE(d)$, the phase shift implies that the system undergoes regular and periodic changes in position and orientation. We now seek to study exponentially stable manifestations of relative periodicity. This brings us to the notion of a relative limit cycle.
\begin{definition}
An orbit $x(t) \in M$ of a $G$-invariant vector field $X \in \mathfrak{X}(M)$ is called a \emph{relative exponentially stable limit cycle} if $\pi(x(t))$ is an exponentially stable limit cycle for the reduced vector field $[X]$. Finally, if $Y \in \mathfrak{X}(M)^{S^1}$ is $G$-invariant with reduced vector field $[Y] \in \mathfrak{X}_{S^1}([M])$, then we call the orbit of a trajectory $x(t) \in M$ a \emph{non-autonomous exponentially stable relative limit cycle} if $\pi(x(t)) \in [M]$ is a non-autonomous exponentially stable limit cycle.
\end{definition}
\begin{prop} \label{prop:stable}
Let $X \in \mathfrak{X}(M)$ be $G$-invariant, and let $[X] \in \mathfrak{X}([M])$ be the reduced vector field of $X$.
Let $\Gamma \subset [M]$ be a limit cycle of $[X]$. Then there exists an open neighborhood $U$ of $\pi^{-1}( \Gamma) \subset M$ wherein each point is attracted towards a relative limit cycle contained in $\pi^{-1}( \Gamma)$.
\end{prop}
Before we provide the proof of this proposition, it is useful to illustrate the following lemma which relates the distance metric on $M$ with the natural distance metric on $[M]$.
\begin{lem} \label{lem:metric}
If the Riemannian metric on $M$ is $G$-invariant, then the distance metric $d: M \times M \to \mathbb{R}$ is $G$-invariant as well. The function on $[M] \times [M]$ given by $[d] ( [x] , [y] ) := d( G \cdot x , G \cdot y)$ is a metric and satisfies the equality $d( x , G \cdot y ) = [d]( [x] , [y])$.
\end{lem}
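In the rotation example above, $[d]$ is simply the Euclidean distance on $(0,\infty)$: the distance from a point $x$ to the orbit $G \cdot y$ is $| \, \|x\| - \|y\| \, |$, realized at the point of the circle $G \cdot y$ closest to $x$.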
Equipped with Lemma \ref{lem:metric}, we are now ready to prove Proposition \ref{prop:stable}.
\begin{proof}[proof of Proposition \ref{prop:stable}]
Let $[U]$ be a neighborhood of $\Gamma$ contained in its basin of attraction, on which $d( \pi(x(t)) , \Gamma ) \leq C e^{- \lambda t} \, d( \pi(x(0)) , \Gamma )$ along solutions for some $C, \lambda > 0$. Then $U = \pi^{-1}([U]) \subset M$ is an open set as well, since $\pi$ is continuous. Therefore, for a solution curve $x(t)$ starting in $U$, Lemma \ref{lem:metric} yields $d( x(t) , \pi^{-1}(\Gamma) ) = d(\pi(x(t)),\Gamma) \leq C e^{-\lambda t} \, d( x(0), \pi^{-1}( \Gamma) )$. Thus, the solution is attracted towards $\pi^{-1}(\Gamma)$. However, $\pi^{-1}(\Gamma)$ is foliated by relative limit cycles.
\end{proof}
Later we will want to see how time-periodic perturbations generate stable and relatively periodic behavior. This motivates us to state the following proposition.
\begin{prop}\label{prop:NAESRLC}
Let $X \in \mathfrak{X}(M)$ be $G$-invariant and let $[X] \in \mathfrak{X}([M])$ be the reduced vector field of $X$.
Let $q^*$ be an exponentially stable equilibrium of $[X]$.
If $X_\varepsilon \in \mathfrak{X}(M)^{S^1}$ is a time-periodic $G$-invariant deformation of $X$,
then for sufficiently small $\varepsilon > 0$ the vector field $X_\varepsilon$ admits a non-autonomous exponentially stable relative limit cycle.
\end{prop}
\begin{proof}
Let $[X_\varepsilon] \in \mathfrak{X}( [M] )^{S^1}$ be the reduced vector field corresponding to $X_\varepsilon$ for each $\varepsilon$.
We can then verify that $[X_\varepsilon]$ is a deformation of $[X]$.
By Proposition \ref{prop:NAESLC}, the vector field $[X_\varepsilon]$ admits a non-autonomous exponentially stable limit cycle for sufficiently small $\varepsilon$.
It follows that $X_\varepsilon$ must admit non-autonomous exponentially stable relative limit cycles.
\end{proof}
\begin{example} \label{ex:3D}
Consider the system on $\mathbb{R}^3$ given by
\begin{align}
\frac{d}{dt} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} y \\ -x - y \\ y - x^2 - x y \end{bmatrix} + \varepsilon \begin{bmatrix} 0 \\ \sin(t) \\ \cos(t) \end{bmatrix}. \label{eq:example2}
\end{align}
We see that this system is invariant under translations in the $z$-coordinate.
This is an $(\mathbb{R},+)$-symmetry and the quotient projection is given by $\pi(x,y,z) = (x,y)$.
The reduced vector field is given by equation \eqref{eq:example1}.
By Proposition \ref{prop:NAESRLC}, \eqref{eq:example2} must admit relative limit cycles for sufficiently small $\varepsilon > 0$.
Moreover, as the symmetry of the system is along the $z$-axis, by Proposition \ref{prop:regular} each period of the relative limit cycle should be related to the previous period by a constant vertical shift.
Typical trajectories for $\varepsilon = 0$ and $\varepsilon = 1$ are depicted in Figures \ref{fig:RSP} and \ref{fig:RLC}, respectively.
\end{example}
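The relative periodicity in Example \ref{ex:3D} is also easy to check numerically.
The following minimal sketch (an illustration only; it assumes NumPy and SciPy are available) integrates \eqref{eq:example2} and compares the state across successive forcing periods:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps = 1.0

def f(t, s):
    x, y, z = s
    return [y,
            -x - y + eps * np.sin(t),
            y - x**2 - x * y + eps * np.cos(t)]

T = 2 * np.pi  # period of the forcing
sol = solve_ivp(f, (0.0, 42 * T), [1.0, 0.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

s0, s1, s2 = sol.sol(40 * T), sol.sol(41 * T), sol.sol(42 * T)
print(s1[:2] - s0[:2])               # ~ (0, 0): (x, y) is periodic
print(s1[2] - s0[2], s2[2] - s1[2])  # nearly equal vertical shifts
\end{verbatim}
After transients decay, the $(x,y)$ components return to themselves after each period, while $z$ advances by a fixed increment per period; this increment is the phase shift of Proposition \ref{prop:regular}.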
\begin{figure}[h]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[clip=true,trim=2cm 1cm 2cm 1cm, width = \textwidth]{./images/RSP.pdf}
\caption{A trajectory of \eqref{eq:example2} with $\varepsilon = 0$}
\label{fig:RSP}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[clip=true,trim=2cm 1cm 2cm 1cm, width = \textwidth]{./images/RLC.pdf}
\caption{A trajectory of \eqref{eq:example2} with $\varepsilon = 1$}
\label{fig:RLC}
\end{minipage}
\end{figure}
Example \ref{ex:3D} illustrates how an $(\mathbb{R},+)$-symmetry leads to a system with a stable non-autonomous relative limit cycle wherein each period is related to the previous by a constant translation along the $z$-axis.
The goal of this article is to characterize swimming as a stable non-autonomous relative limit cycle with respect to an $\SE(d)$-symmetry wherein each period is related to the previous by a constant translation and rotation of space.
In order to do this, we must express fluid-structure problems in a geometric formalism.
In particular, we will follow the constructions of \cite{JaVa2013} to do this, using the Lagrange-d'Alembert formalism.
\section{Lagrange-d'Alembert formalism} \label{sec:LDA}
In this section, we review the Lagrange-d'Alembert formalism for simple mechanical systems.
If $Q$ is equipped with a Riemannian metric, $\ensuremath{\langle} \cdot , \cdot \ensuremath{\rangle}_Q: TQ \oplus TQ \to \mathbb{R}$, then it is customary to consider Lagrangians of the form
\begin{align}
L(q,\dot{q}) = \frac{1}{2} \ensuremath{\langle} \dot{q} ,\dot{q}\ensuremath{\rangle}_Q - U(q), \label{eq:SM_Lag}
\end{align}
where $U: Q \to \mathbb{R}$. We call a Lagrangian of this form a \emph{simple mechanical Lagrangian}. For simple mechanical Lagrangians, and external force fields $F:TQ \to T^{\ast}Q$, the Lagrange-d'Alembert equations take the form
\begin{align}
\frac{D \dot{q}}{Dt} = - \nabla U(q) + \sharp \left( F(q,\dot{q}) \right), \label{eq:LDA}
\end{align}
where $\frac{D}{Dt}$ is the Levi-Civita covariant derivative, $\nabla U$ is the gradient of $U$, and $\sharp : T^{\ast}Q \to TQ$ is the sharp operator induced by the Riemannian metric \cite[Proposition 3.7.4]{FOM}.
It is notable that \eqref{eq:LDA} is equivalent to the Lagrange-d'Alembert variational principle
\[
\delta \int_0^T L(q,\dot{q}) dt = \int_0^{T}{ \ensuremath{\langle} F(q,\dot{q}) , \delta q \ensuremath{\rangle} dt }
\]
with respect to variations $\delta q$ with fixed end points \cite[Chapter 7]{MandS}.
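As an elementary check of the sign conventions in \eqref{eq:LDA}, take $Q = \mathbb{R}$ with the Euclidean metric, the simple mechanical Lagrangian $L(q,\dot{q}) = \frac{1}{2} \dot{q}^2 - \frac{k}{2} q^2$, and the friction force $F(q,\dot{q}) = -c \dot{q} \, dq$ with $c > 0$.
Then \eqref{eq:LDA} reads $\ddot{q} = -k q - c \dot{q}$, the damped harmonic oscillator, and the energy $E = \frac{1}{2} \dot{q}^2 + \frac{k}{2} q^2$ satisfies $\dot{E} = - c \dot{q}^2 \leq 0$.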
We denote the vector field associated to \eqref{eq:LDA} by $X_{TQ} \in \mathfrak{X}(TQ)$, and its flow is given by $\Phi^{TQ}_t$.
\section{Fluid-structure interaction} \label{sec:passive}
In this section, we will place fluid-structure interactions in the Lagrange-d'Alembert formalism.
Specifically, we will understand a body immersed in a fluid as a simple mechanical Lagrangian system with a dissipative force field, in the sense of \eqref{eq:LDA}.
This is commonly referred to as the ``material description'' in fluid mechanics.
Moreover, we will reduce the system by a particle relabeling symmetry, so that the fluid is described in the ``spatial description'' via the Navier-Stokes equations.
Finally, we will identify `frame-invariance' (a.k.a objectivity) as a left $\SE(d)$-symmetry.
\subsection{Navier-Stokes fluids in the Lagrange-d'Alembert formalism} \label{sec:passive fluids}
In this paper, we seek to understand swimming in the mid-Reynolds regime.
Specifically, this entails invoking the Navier-Stokes equations with non-zero viscosity.
It was discovered in \cite{Arnold1966} that the Navier-Stokes equations with zero viscosity could be handled in the Euler-Poincar\'e formalism.
Moreover, it is mentioned in \cite[Chapter 1, section 12]{ArKh1992} that the Navier-Stokes equations can be viewed in this framework with the simple addition of a dissipative force.
In this section, we will describe this formulation of the Navier-Stokes equations.
Consider the manifold $\mathbb{R}^d$ with the standard flat metric and volume form $dx = dx^1 \wedge \cdots \wedge dx^d$.
One can consider the infinite-dimensional Lie group of volume-preserving diffeomorphisms, $\Diff_{\vol} (\mathbb{R}^d)$, where the group multiplication is simply the composition of diffeomorphisms.\footnote{This is a pseudo Lie group. We will assume that all diffeomorphisms approach the identity as $\|x\| \to \infty$ sufficiently rapidly for all computations to make sense. In particular, the existence of a Hodge-decomposition for our space is important. Sufficient conditions for our purposes are provided in \cite{Cantor1975} and \cite{Troyanov2009}.}
The configuration of a fluid flowing on $\mathbb{R}^d$ relative to some reference configuration is described by an element $\varphi \in \Diff_{\vol}(\mathbb{R}^d)$.
Given a curve $\varphi_t \in \Diff_{\vol}(\mathbb{R}^d)$, one can differentiate it to obtain a tangent vector $\dot{\varphi}_t = \frac{d}{dt} \varphi_t \in T \Diff_{\vol}(\mathbb{R}^d)$.
One can interpret $\dot{\varphi}$ as a map from $\mathbb{R}^d$ to $T\mathbb{R}^d$ by the natural definition $\dot{\varphi}(x) = \frac{d}{dt} \varphi_t(x)$.
Therefore, a tangent vector, $\dot{\varphi} \in T \Diff_{\vol}(\mathbb{R}^d)$, over a diffeomorphism $\varphi \in \Diff_{\vol}(\mathbb{R}^d)$ is simply the smooth map $\dot{\varphi}: \mathbb{R}^d \to T\mathbb{R}^d$, such that $\tau_{\mathbb{R}^d} \circ \dot{\varphi} = \varphi$ where $\tau_{\mathbb{R}^d} : T\mathbb{R}^d \to \mathbb{R}^d$ is the tangent bundle projection.
Moreover, $\dot{\varphi} \circ \varphi^{-1}$ is a smooth divergence-free vector field on $\mathbb{R}^d$.
We call $\dot{\varphi}$ the \emph{material} representation of the velocity, while $\dot{\varphi} \circ \varphi^{-1} \in \mathfrak{X}_{\vol}(\mathbb{R}^d)$ is the spatial representation.
The Lagrangian, $L: T( \Diff_{\vol}(\mathbb{R}^d) ) \to \mathbb{R}$, is the kinetic energy of the fluid,
\[
L( \varphi, \dot{\varphi}) := \frac{1}{2} \int_{\mathbb{R}^d}{ \| \dot{\varphi}(x) \|^2 dx}.
\]
One can derive the Euler-Lagrange equations on $\Diff_{\vol}(\mathbb{R}^d)$ with respect to the Lagrangian $L$ to obtain the equations of motion for an ideal fluid. However, this Lagrangian exhibits a symmetry.
\begin{prop}[\cite{Arnold1966}]
The Lagrangian $L$ is symmetric with respect to the right action $\Diff_{\vol}(\mathbb{R}^d)$ on $T \Diff_{\vol}(\mathbb{R}^d)$.
\end{prop}
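Indeed, for any $\psi \in \Diff_{\vol}(\mathbb{R}^d)$, the change of variables $y = \psi(x)$, which preserves $dx$, gives
\[
L( \varphi \circ \psi , \dot{\varphi} \circ \psi ) = \frac{1}{2} \int_{\mathbb{R}^d}{ \| \dot{\varphi}( \psi(x) ) \|^2 dx} = \frac{1}{2} \int_{\mathbb{R}^d}{ \| \dot{\varphi}(y) \|^2 dy} = L( \varphi , \dot{\varphi} ).
\]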
Moreover, it is simple to verify the following proposition:
\begin{prop} \label{prop:MC}
The action of $\Diff_{\vol}( \mathbb{R}^d )$ on $T \Diff_{\vol}( \mathbb{R}^d)$ is free and proper.
The quotient space $T \Diff_{\vol}(\mathbb{R}^d) / \Diff_{\vol}( \mathbb{R}^d) = \mathfrak{X}_{\rm div}(\mathbb{R}^d)$ and the quotient projection is the right Maurer-Cartan form,
\[
\rho: (\varphi, \dot{\varphi}) \in T \Diff_{\vol}( \mathbb{R}^d) \mapsto u = \dot{\varphi} \circ \varphi^{-1} \in \mathfrak{X}_{\rm div}( \mathbb{R}^d).
\]
\end{prop}
This symmetry is referred to as the \emph{particle relabeling symmetry}.
As a result of this symmetry, Proposition \ref{prop:reduced_vf} suggests that we can write equations of motion on $\mathfrak{X}_{\rm div}( \mathbb{R}^d )$.
It was the discovery of \cite{Arnold1966} that these equations could be written as
\[
\partial_t u + u \cdot \nabla u = - \nabla p \quad , \quad \mathrm{div}(u) = 0,
\]
which one will recognize as the inviscid fluid equations. Moreover, if we define the linear map, $f_\mu: \mathfrak{X}_{\vol}(\mathbb{R}^d) \to \mathfrak{X}_{\vol}^*(\mathbb{R}^d)$ given by
\[
\ensuremath{\langle} f_\mu(u) , w \ensuremath{\rangle} = \mu \int_{\mathbb{R}^d }{ \Delta u(x) \cdot w(x) dx},
\]
then we derive the Lagrange-d'Alembert equations by lifting $f_\mu$ (via the right Maurer-Cartan form) to obtain a force field $F: T (\Diff_{\vol}(\mathbb{R}^d)) \to T^{\ast}( \Diff_{\vol}(\mathbb{R}^d))$.
If we do this, then reduction by $\Diff_{\vol}(\mathbb{R}^d)$ yields a spatial velocity field, $u(t)$, which satisfies the Navier-Stokes equations
\[
\partial_t u + u \cdot \nabla u = - \nabla p + \mu \Delta u \quad , \quad \mathrm{div}(u) = 0.
\]
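With this sign convention the viscous term is genuinely dissipative: for $u$ decaying sufficiently rapidly at infinity, integration by parts together with $\mathrm{div}(u) = 0$ gives
\[
\frac{d}{dt} \frac{1}{2} \int_{\mathbb{R}^d}{ \| u \|^2 dx} = \mu \int_{\mathbb{R}^d}{ u \cdot \Delta u \, dx} = - \mu \int_{\mathbb{R}^d}{ \| \nabla u \|^2 dx} \leq 0,
\]
since the transport and pressure terms contribute nothing to the energy balance.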
\subsection{Solids} \label{sec:passive solids}
Let $\ensuremath{\mathcal{B}}$ be a compact manifold with boundary $\partial \ensuremath{\mathcal{B}}$ and volume form $d\vol_{\ensuremath{\mathcal{B}}}$. Let $\Emb(\ensuremath{\mathcal{B}})$ denote the set of embeddings of $\ensuremath{\mathcal{B}}$ into $\mathbb{R}^d$. Finally, let $\SE(d)$ denote the set of isometries of $\mathbb{R}^d$.
We view each $b \in \Emb(\ensuremath{\mathcal{B}})$ as a map $b:\ensuremath{\mathcal{B}} \hookrightarrow \mathbb{R}^d$, while viewing $z \in \SE(d)$ as a map $z : \mathbb{R}^d \to \mathbb{R}^d$.
We can compose these maps to obtain a new map $z \circ b : \ensuremath{\mathcal{B}} \hookrightarrow \mathbb{R}^d$, which itself embeds $\ensuremath{\mathcal{B}}$ into $\mathbb{R}^d$.
That is to say, the assignment $b \mapsto z \circ b$ is a left action of $\SE(d)$ on $\Emb(\ensuremath{\mathcal{B}})$.
It is elementary to observe that this action is free and proper, and makes $\Emb(\ensuremath{\mathcal{B}})$ into an $\SE(d)$-principal bundle.
The configuration manifold for the body is given by an $\SE(d)$-invariant submanifold $B \subset \Emb(\ensuremath{\mathcal{B}})$ (possibly finite-dimensional).
Therefore, the quotient space $[B] = \frac{B}{\SE(d)}$ is a smooth manifold and $\pi^{B}_{[B]}: B \to [B]$ is an $\SE(d)$-principal bundle as well.
We call $[B]$ the \emph{shape-space}, following \cite{MaMoRa1990}.
The Lagrangian for the body, $L_B: TB \to \mathbb{R}$, will be that of a simple mechanical system.
The reduced-potential energy will be given by a function $[U] : [B] \to \mathbb{R}$,
and the potential energy is defined as $U := [U] \circ \pi^B_{[B]}$.
Equivalently, we may define $U:B \to \mathbb{R}$ first, with the assumption that we choose something which is $\SE(d)$-invariant.
To define the kinetic energy, we must first understand the tangent bundle $TB \subset T \Emb(\ensuremath{\mathcal{B}})$.
By applying the dynamic definition of tangent vectors, we can derive that a $(b,\dot{b}) \in T \Emb(\ensuremath{\mathcal{B}})$ must be a pair of maps, $b \in \Emb(\ensuremath{\mathcal{B}})$ and $\dot{b} : \ensuremath{\mathcal{B}} \hookrightarrow T \mathbb{R}^d$, such that $\dot{b}(x)$ is a vector over $b(x)$ for all $x \in \ensuremath{\mathcal{B}}$.
Moreover, a $(b,\dot{b}) \in TB$ is an element of $T \Emb(\ensuremath{\mathcal{B}})$ tangential to $B \subset \Emb(\ensuremath{\mathcal{B}})$.
We see that for each $z \in \SE(d)$, we can consider the map $Tz : T \mathbb{R}^d \to T \mathbb{R}^d$, and we define the action of $z$ on $TB$ by the assignment $(b,\dot{b}) \in TB \mapsto (z \circ b , Tz \circ \dot{b}) \in TB$.
This defines a free and proper left $\SE(d)$ action on $TB$ so that $TB$ is an $\SE(d)$-principal bundle.
We will assume the existence of an $\SE(d)$-invariant Riemannian metric $\ensuremath{\langle} \cdot , \cdot \ensuremath{\rangle}_{B} : TB \oplus TB \to \mathbb{R}$,
and that the kinetic energy is $K(b,\dot{b}) = \frac{1}{2} \langle (b,\dot{b}) , (b,\dot{b}) \rangle_{B}$.
Finally, without any dissipation, our solid body could ``jiggle'' forever due to conservation of energy. To amend this, we will include a dissipative force given by a fiber-bundle map, $F_{\ensuremath{\mathcal{B}}}: T [ B] \to T^{\ast} [B]$, such that the storage function
\begin{align}
\ensuremath{\langle} F_{\ensuremath{\mathcal{B}}}( v_{[b]} ) , v_{[b]} \ensuremath{\rangle} : T[ B] \to \mathbb{R} \label{eq:concavity}
\end{align}
is concave on each fiber of $T[B]$ and reaches a maximum at zero, where it vanishes.
For example, a negative definite quadratic form would be admissible.
Such a force has the effect of dampening the rate of change in the shape of the body, but it will not dampen motions induced by the action of $\SE(d)$. In other words, we assume that a jiggling body eventually comes to rest with some shape $s_{\min} \in [B]$ by the dissipation of energy.
\begin{example} \label{ex:two_link}
Consider a two-link body in $\mathbb{R}^2$.
The configuration manifold $B$ consists of rigid embeddings of the two links into $\mathbb{R}^2$ such that the embeddings respect the constraint that the links are joined at the hinge (see Figure \ref{fig:two_link}).
In particular, $B$ is isomorphic to $S^1 \times S^1 \times \mathbb{R}^2$ if we let the tuple $(\phi_1,\phi_2,x,y ) \in S^1 \times S^1 \times \mathbb{R}^2$ denote a configuration where $\phi_1,\phi_2 \in S^1$ are the angles between the links and the $x$-axis, while $(x,y) \in \mathbb{R}^2$ is the location of the hinge. Under this identification, the action of an element $(\theta, X,Y) \in \SE(2)$ on $(\phi_1,\phi_2,x,y) \in B$ is given by
\[
(\theta , X , Y) \cdot \begin{pmatrix} \phi_1 \\ \phi_2 \\ x \\ y \end{pmatrix} = \begin{pmatrix} \theta + \phi_1 \\ \theta+ \phi_2 \\ \cos(\theta) x - \sin(\theta)y + X \\ \sin(\theta)x + \cos(\theta) y + Y \end{pmatrix}.
\]
Under this action, we find that the shape space is $[B] = S^1$ and that the quotient projection from $B$ to $[B]$ is given by $\pi^B_{[B]}( \phi_1, \phi_2,x,y) = \phi_1 - \phi_2$.
In other words, the shape of the body is described by the interior angle of the hinge.
Finally, we may consider a potential energy derived from a torsional spring at the hinge, given by $U( \phi_1, \phi_2, x,y) = \frac{k}{2} ( \phi_1 - \phi_2 - \bar{\theta} )^{2}$ for some constant equilibrium interior angle $\bar{\theta} \in S^1$.
It should be evident that this potential energy is $\SE(2)$-invariant. The kinetic energy of the $i^{\rm th}$ body is
\[
K_i = \frac{I_i}{2} \dot{\phi}_i^2 + \frac{M_i}{2} \left( [\dot{x} - \sin(\phi_i) \dot{\phi_i}]^2 + [\dot{y} + \cos(\phi_i) \dot{\phi_i} ]^2 \right),
\]
where $M_i$ and $I_i$ are the mass and rotational inertia of the $i^{\rm th}$ body, respectively. The Lagrangian is therefore $L_B = K_1 + K_2 - U$.
Lastly, the force $F_B = -( \dot{\phi}_1 - \dot{\phi}_2 ) ( d\phi_1 - d \phi_2 )$ provides an $\SE(2)$-invariant elastic friction force.
The effect of $F_B$ is to dampen changes in the interior angle $\theta = \phi_1 - \phi_2$, since the power it produces along a motion is $-\dot{\theta}^2 \leq 0$.
In particular, $\theta$ parametrizes the shape space of this body, and so $F_B$ can be said to dampen changes in shape.
\end{example}
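The $\SE(2)$-invariance claimed above can also be checked symbolically.
The following short sketch (illustrative only; it assumes SymPy is available) verifies that the kinetic energy of a single link is unchanged by the tangent-lifted action of $(\theta,X,Y) \in \SE(2)$, under which the angle shifts by $\theta$ and the hinge velocity rotates while the translation drops out:
\begin{verbatim}
import sympy as sp

phi, dphi, dx, dy, th = sp.symbols('phi dphi dx dy theta')
I1, M1 = sp.symbols('I1 M1', positive=True)

def K(phi, dphi, dx, dy):
    return (I1/2) * dphi**2 + (M1/2) * ((dx - sp.sin(phi)*dphi)**2
                                        + (dy + sp.cos(phi)*dphi)**2)

# tangent-lifted SE(2) action on the hinge velocity
dx2 = sp.cos(th)*dx - sp.sin(th)*dy
dy2 = sp.sin(th)*dx + sp.cos(th)*dy
diff = K(phi + th, dphi, dx2, dy2) - K(phi, dphi, dx, dy)
print(sp.simplify(sp.expand_trig(diff)))  # prints 0
\end{verbatim}
The potential energy and the friction force depend only on $\phi_1 - \phi_2$ and $\dot{\phi}_1 - \dot{\phi}_2$, so their invariance is immediate.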
\begin{figure}[h]
\centering
\includegraphics[width=3in]{./images/two_link.pdf}
\caption{A diagram of the swimmer from Example \ref{ex:two_link}.}
\label{fig:two_link}
\end{figure}
\begin{example}
The theory of linear elasticity assumes $\ensuremath{\mathcal{B}}$ to be a Riemannian manifold with a mass density $\rho \in \bigwedge^n (\ensuremath{\mathcal{B}})$ and metric $\langle \cdot , \cdot \rangle_{\ensuremath{\mathcal{B}}}: T\ensuremath{\mathcal{B}} \oplus T\ensuremath{\mathcal{B}} \to \mathbb{R}$.
Here, the configuration manifold is $B = \Emb(\ensuremath{\mathcal{B}})$ and the potential energy is
\[
U(b) = \frac{1}{2} \int_{b(\ensuremath{\mathcal{B}})}{ \mathrm{trace}\left( [I - C_b]^T \cdot [I - C_b] \right) b_*d\vol_{\ensuremath{\mathcal{B}}}},
\]
where $C_b$ is the push-forward of the metric $\ensuremath{\langle} \cdot , \cdot \ensuremath{\rangle}_{\ensuremath{\mathcal{B}}}$ by $b:\ensuremath{\mathcal{B}} \hookrightarrow \mathbb{R}^d$, a.k.a. the \emph{right Cauchy-Green strain tensor}.
The $\SE(d)$-invariant kinetic energy, $K:TB \to \mathbb{R}$, is given by
\[
K(b,\dot{b}) = \frac{1}{2} \int_{\ensuremath{\mathcal{B}}} \| \dot{b}(x) \|^2 \rho(x) dx.
\]
This Lagrangian yields the standard model of linear elasticity and is known to be $\SE(d)$-invariant, a.k.a. \emph{objective} \cite{MFOE}.
As we can not easily coordinatize $B$ in this example, we cannot expect to easily obtain a concrete description of the shape space, $[B]$.
Nonetheless, by the $\SE(d)$-invariance of $U$, there must exist a function $[U] : [B] \to \mathbb{R}$ such that $U = [U] \circ \pi^B_{[B]}$.
\end{example}
The above examples are merely instances of possible models we may choose for the body.
Identifying a physical model of the solid body is a necessary precondition for understanding the effect of internal body forces on the system.
In particular, this is the approach taken in \cite{Tytell2010} and \cite{Bhalla2013}.
To quote \cite{Tytell2010}, ``the motion of the body emerges as a balance between internal muscular force and external fluid forces.''
The emphasis on the importance of the internal mechanics of the swimming body can become fairly sophisticated.
These sophisticated solid-mechanical concerns can be important for understanding the role of passive mechanisms in biomechanics.
For example, fibered structures can exhibit a ``counter-bend phenomenon'' in which an increased curvature in one region of a structure yields a decrease elsewhere in ways which aid swimming \cite{Gadelha2013}.
These advanced topics will not be addressed here, but we recall them only to put this work in a proper context.
\subsection{Fluid-solid interaction} \label{sec:passive fsi}
Let $\ensuremath{\mathcal{B}}$, $B$, $L_{B}$, $F_{\ensuremath{\mathcal{B}}}$ be as described in the previous section. Given an embedding $b \in B$, let $\ensuremath{\aquarius}_{b}$ denote the set
\[
\ensuremath{\aquarius}_{b} = \mathrm{closure} \left\{ \mathbb{R}^d \backslash b\left( \ensuremath{\mathcal{B}} \right) \right\}.
\]
The set $\ensuremath{\aquarius}_{b}$ is the region which will be occupied by the fluid given the embedding of the body $b$. If the body configuration is given by $b_0 \in B$ at time $0$ and $b \in B$ at time $t$, then the configuration of the fluid is given by a volume-preserving diffeomorphism from $\ensuremath{\aquarius}_{b_0}$ to $\ensuremath{\aquarius}_{b}$, i.e. an element of $\Diff_{\vol}\left( \ensuremath{\aquarius}_{b_0}, \ensuremath{\aquarius}_{b} \right)$. Given a reference configuration $b_0 \in B$ for the body, we define the configuration manifold as
\begin{align*}
Q := \{ (b , \varphi) \quad \vert \quad & b \in B, \varphi \in \Diff_{\vol}\left( \ensuremath{\aquarius}_{b_0}, \ensuremath{\aquarius}_{b} \right) \}.
\end{align*}
One should note that the manifold $Q$ has some extra structure. In particular, the Lie group $G := \Diff_{\vol}( \ensuremath{\aquarius}_{b_0} )$ represents the symmetry group for the set of particle labels, and acts on $Q$ on the right by sending
\[
(b,\varphi) \in Q \mapsto (b,\varphi \circ \psi) \in Q
\]
for each $\psi \in G$ and $(b,\varphi) \in Q$.
Given this action, the following proposition is self-evident.
\begin{prop} \label{prop:Q}
The projection $\pi^Q_B :Q \to B$ defined by $\pi^{Q}_{B}(b,\varphi) = b$ makes $Q$ into a principal $G$-bundle over $B$.
\end{prop}
Now we must define the Lagrangian. To do this, it is useful to note that the system should be invariant with respect to particle relabelings of the fluid, and so the Lagrangian should be invariant with respect to the right action of $G$ on $TQ$ given by
\[
(b,\dot{b},\varphi,\dot{\varphi}) \in TQ \mapsto (b,\dot{b}, \varphi \circ \psi , \dot{\varphi} \circ \psi ) \in TQ
\]
for each $\psi \in G$. As a result, we can define a Lagrangian on the quotient space $TQ / G$. Incidentally, this quotient space is much closer to the space typically encountered in fluid-structure interaction.
\begin{prop}[Proposition 2.2 of \cite{JaVa2013}] \label{prop:TQ/G}
The quotient space $TQ / G$ can be identified with the set
\begin{align}
P := \{ (b,\dot{b} , u) \quad \vert \quad & (b,\dot{b}) \in T B , \nonumber \\
& u \in \mathfrak{X}_{\rm div}( {\ensuremath{\aquarius}_{b}} ) , \nonumber \\
& u(b(x)) = \dot{b}(x) , \forall x \in \partial \ensuremath{\mathcal{B}} \}.
\end{align}
Under this identification, the quotient map $\pi^{TQ}_{/G}: TQ \to P$ is given by $\pi_{/G}(b,\dot{b},\varphi,\dot{\varphi}) = (b,\dot{b}, \dot{\varphi} \circ \varphi^{-1})$.
Moreover, $P$ is naturally equipped with the bundle projection $\tau( b,\dot{b},u) = b$ and the vector bundle structure $(b,\dot{b}_1,u_1) + (b,\dot{b}_2, u_2) = (b, \dot{b}_1 + \dot{b}_2, u_1+u_2)$,
for all $(b,\dot{b}_1,u_1),(b,\dot{b}_2,u_2) \in \tau^{-1}(b)$ and all $ b \in B$.
\end{prop}
\begin{proof}
Observe that $\pi^{TQ}_{/G}( v \circ \psi) = \pi^{TQ}_{ / G}( v )$ for all $\psi \in G$ and $v \in TQ$. Therefore, $\pi^{TQ}_{/G}$ maps the coset $v \cdot G$ to a single element of $P$.
Conversely, given an element $(b,\dot{b}, u) \in P$, we see that $(\pi^{TQ}_{/G})^{-1}(b,\dot{b},u)$ is the set of elements $(b,\dot{b},\varphi , \dot{\varphi}) \in TQ$ such that $u = \dot{\varphi} \circ \varphi^{-1}$.
However, this set of elements is just the coset $v \cdot G$, where $v$ is any element such that $\pi^{TQ}_{/G}(v) = (b,\dot{b},u)$. Thus, $\pi^{TQ}_{/G}$ induces an isomorphism between $TQ/G$ and $P$.
Additionally, we can check that $\pi^{TQ}_{/G}(v+w) = \pi^{TQ}_{/G}(v) + \pi^{TQ}_{/G}(w)$ and $\tau( \pi_{/G}( v ) ) = \tau_{Q}(v) \cdot G$.
Therefore, the desired vector bundle structure is inherited by $P$ as well, and $\pi_{/G}$ becomes a vector bundle morphism.
Finally, the map $\rho(b,\dot{b},u) = (b,\dot{b})$ is merely the map $T\pi_B^Q : TQ \to TB$ modulo the action of $G$. That is to say, $\rho \circ \pi^{TQ}_{/G} = T \pi^Q_B$. This equation makes $\rho$ well-defined because $T \pi^Q_B$ is $G$-invariant.
\end{proof}
As a guide for the reader, we provide the following commutative diagram.
\[
\begin{tikzcd}
TQ \arrow{rr}{T\pi^{Q}_{B} } \arrow{dr}{ \pi^{TQ}_{/G}} \arrow{dd}{\tau_Q} & & TB \arrow{dd}{\tau_B} \\
& P \arrow{ur}{\rho} \arrow{dr}{\tau}& \\
Q \arrow{rr}{\pi^Q_{B}} & & B
\end{tikzcd}
\qquad \qquad
\begin{tikzcd}
(b,\dot{b},\varphi,\dot{\varphi}) \arrow[mapsto]{rr}{T\pi^Q_{B} } \arrow[mapsto]{dr}{ \pi^{TQ}_{/G}} \arrow[mapsto]{dd}{\tau_Q} & & (b,\dot{b}) \arrow[mapsto]{dd}{\tau_B} \\
& (b,\dot{b}, \dot{\varphi} \circ \varphi^{-1}) \arrow[mapsto]{ur}{\rho} \arrow{dr}{\tau}& \\
(b,\varphi) \arrow[mapsto]{rr}{\pi^Q_{B}} & & b
\end{tikzcd}
\]
Note that the fluid velocity component $u \in \mathfrak{X}_{\rm div}( \ensuremath{\aquarius}_b )$ for a $(b,\dot{b},u) \in P$ may point in directions transverse to the boundary of the fluid domain $\ensuremath{\aquarius}_{b}$.
This reflects the fact that the boundary is time-dependent. The condition $\dot{b}(x) = u(b(x))$ on the boundary states that the boundary of the body moves with the fluid, and is the mathematical description of the no-slip condition.
We now define the reduced Lagrangian $\ell: P \to \mathbb{R}$ by
\[
\ell (b,\dot{b},u) = L_{B}(b,\dot{b}) + \frac{1}{2} \int_{\ensuremath{\aquarius}_{b} }{ \| u(x) \|^2 dx }.
\]
This induces the standard Lagrangian
\begin{align}
L:= \ell \circ \pi^{TQ}_{/G}: TQ \to \mathbb{R} \label{eq:total_Lagrangian},
\end{align} which is a simple mechanical Lagrangian consisting of the total kinetic energy of the body and fluid minus the potential energy of the body described in \S \ref{sec:passive solids}. Moreover, $L$ is $G$-invariant by construction.
Additionally, we wish to add a viscous force on the fluid, $F_{\mu} : TQ \to T^{\ast}Q$. Given a coefficient of viscosity, $\mu$, we can define the reduced viscous friction force field $f_{\mu}: P \to P^{\ast}$ by
\[
\ensuremath{\langle} f_{\mu}( b, v_b, u) , (b, w_b , w) \ensuremath{\rangle} = \mu \int_{\ensuremath{\aquarius}_{b}}{ \Delta u (x) \cdot w(x) d x },
\]
and define the unreduced force $F_{\mu}: TQ \to T^{\ast}Q$ by
\[
\ensuremath{\langle} F_{\mu}( v ) , w \ensuremath{\rangle} = \ensuremath{\langle} f_{\mu}( \pi_{/G}(v) ) , \pi_{/G}(w) \ensuremath{\rangle}.
\]
We finally define the total force on our system to be
\begin{align}
F = F_{\mu} + (F_\ensuremath{\mathcal{B}} \circ T ( \pi^{B}_{[B]} \circ \pi^Q_B ) ), \label{eq:total_force}
\end{align}
where $F_\ensuremath{\mathcal{B}}$ is the dissipative force on the shape of the body mentioned in \S \ref{sec:passive solids}. This total force $F$ descends via $\pi_{/G}$ to a reduced force $F_{/G} : P \to P^{\ast}$, where $P^*$ is the dual vector bundle to $P$. The reduced force is given explicitly in terms of $f_\mu$ and $F_\ensuremath{\mathcal{B}}$ by $F_{/G} = f_\mu + (F_\ensuremath{\mathcal{B}} \circ T\pi^{B}_{[B]} \circ \rho)$. One can verify directly from this expression that $\ensuremath{\langle} F( v) , w \ensuremath{\rangle} = \ensuremath{\langle} F_{/G}( \pi_{/G}(v) ) , \pi_{/G}(w) \ensuremath{\rangle}$.
We now introduce a consequence which follows from the $G$-invariance of $F$ and $L$.
\begin{prop} \label{prop:evolution1}
Let $X_{TQ} \in \mathfrak{X}(TQ)$ denote the Lagrange-d'Alembert vector field, and let $\Phi_t^{X_{TQ}} : TQ \to TQ$ denote the flow map associated with the Lagrangian $L:TQ \to \mathbb{R}$ and the force $F: TQ \to T^{\ast}Q$. Then there exists a vector field $X_P \in \mathfrak{X}(P)$ and a flow map $\Phi_t^{X_P} : P \to P$ which are $\pi^{TQ}_{/G}$-related to $X_{TQ}$ and $\Phi_t^{X_{TQ}}$.
\end{prop}
\begin{proof}
Let $q:[0,t] \to Q$ be a curve such that the time derivative $(q,\dot{q}): [0,t] \to TQ$ is an integral curve of the Lagrange-d'Alembert equations with initial condition $(q,\dot{q})(0)$ and final condition $(q,\dot{q})(t)$. Then the Lagrange-d'Alembert variational principle states that
\[
\delta \int_{0}^{t}{ L( (q,\dot{q})(\tau)) d\tau } = \int_{0}^{t}{ \ensuremath{\langle} F((q,\dot{q})(\tau)) , \delta q(\tau) \ensuremath{\rangle} d\tau}
\]
for all variations of the curve $q( \cdot)$ with fixed endpoints. Note that for each $\psi \in G$, the action satisfies $\int_{0}^{t}{ L((q,\dot{q})(\tau)) d\tau} = \int_{0}^{t}{ L( (q,\dot{q})(\tau) \circ \psi ) d\tau}$, and the variation on the right hand side of the Lagrange-d'Alembert principle is
\begin{align*}
\int_{0}^{t}{ \ensuremath{\langle} F((q,\dot{q})(\tau)) , \delta q (\tau) \ensuremath{\rangle} d\tau} &= \int_{0}^{t}{ \ensuremath{\langle} F_{/G}( \pi_{/G}( (q,\dot{q})(\tau) ) ) , \pi_{/G}( \delta q (\tau)) \ensuremath{\rangle} d\tau} \\
&= \int_0^t{ \ensuremath{\langle} F( (q,\dot{q})(\tau) \circ \psi ) , \delta q(\tau) \circ \psi \ensuremath{\rangle} d\tau}.
\end{align*}
Therefore, we observe that
\[
\delta \int_{0}^{t}{ L( (q,\dot{q})(\tau) \circ \psi) d\tau } = \int_{0}^{t}{ \ensuremath{\langle} F((q,\dot{q}) (\tau) \circ \psi) , \delta q (\tau) \circ \psi \ensuremath{\rangle} d\tau}
\]
for arbitrary variations of the curve $q( \cdot )$ with fixed end points. However, the variation $\delta q \circ \psi$ is merely a variation of the curve $q \circ \psi (\cdot)$ because
\[
\delta q(\tau) \circ \psi = \left. \pder{}{\epsilon} \right|_{\epsilon = 0}( q(\tau, \epsilon) \circ \psi ),
\]
and if $q(\tau,\epsilon)$ is a deformation of $q( \tau)$, then $q(\tau,\epsilon) \circ \psi$ is a deformation of $q(\tau) \circ \psi$ by construction.
Therefore,
\[
\delta \int_{0}^{t}{ L( (q,\dot{q})(\tau) \circ \psi) d\tau} = \int_{0}^{t}{ \ensuremath{\langle} F((q,\dot{q})(\tau) \circ \psi) , \delta (q \circ \psi) \ensuremath{\rangle} d\tau}
\]
for arbitrary variations of the curve $q \circ \psi$ with fixed end points.
This last equation states that the curve $(q,\dot{q}) \circ \psi$ satisfies the Lagrange-d'Alembert principle.
Thus, the flow $\Phi_t^{X_{TQ}}$ is $G$-invariant, as is the vector field $X_{TQ}$.
By Proposition \ref{prop:reduced_vf}, there exists a $\pi^{TQ}_{/G}$-related flow and vector field on $TQ/G$.
By Proposition \ref{prop:TQ/G}, we obtain the desired vector field $X_{P} \in \mathfrak{X}(P)$ and its flow $\Phi_t^{X_{P}}:P \to P$.
\end{proof}
Now that we know there exists a flow on $P$, one can ask for the equations of motion.
\begin{prop}
The flow map of $P$ mentioned in Proposition \ref{prop:evolution1} for the Lagrangian $L$ and force $F$ is identical to the flow of the Lagrange-Poincar\'e-d'Alembert equation:
\begin{align*}
u_t + u \cdot \nabla u &= - \nabla p + \mu \Delta u \\
\frac{D\dot{b}}{Dt} + \nabla U(b) &= \sharp (F_{\ensuremath{\mathcal{B}}}(b,\dot{b}) + F_{\partial \ensuremath{\mathcal{B}}}),
\end{align*}
where $F_{\partial \ensuremath{\mathcal{B}}} : P \to T^* B$ is the force that the fluid exerts on the body in order to satisfy the no-slip boundary condition.
\end{prop}
\begin{proof}
This is Theorem 7.1(c) of \cite{JaVa2013} paired with \eqref{eq:LDA}.
Roughly speaking, one can obtain these equations of motion by performing Lagrange-Poincar\'e-d'Alembert reduction following \cite{CeMaRa2001}.
This involves choosing a principal connection $A: TQ \to \mathfrak{g}$.
The spatial velocity field is reconstructed by $u = h^{\uparrow}(b , \dot{b} , \varphi) + \varphi_*A( b, \dot{b} ,\varphi , \dot{\varphi})$,
where $h^{\uparrow}$ is the horizontal lift.
\end{proof}
\subsection{Reduction by frame invariance} \label{sec:SE}
Consider the group of isometries of $\mathbb{R}^d$ denoted $\SE(d)$.
Each $z \in \SE(d)$ sends $(b,\dot{b}, u) \in P$ to $(z \cdot (b,\dot{b}) , z_* u ) \in P$, where $z_*u \in \mathfrak{X}_{\rm div}( \ensuremath{\aquarius}_{z \circ b})$ is the push-forward of the fluid velocity field $u \in \mathfrak{X}(\ensuremath{\aquarius}_{b})$.
This action is free and proper, so that $P$ is an $\SE(d)$-principal bundle with projection $\pi_{[P]}^{P} : P \to [P]$, where $[P] := \frac{P}{\SE(d)}$ \cite[Prop 4.1.23]{FOM}.
Additionally, $\SE(d)$ acts by vector-bundle morphisms, which are isomorphisms on each fiber.
Therefore, $[P]$ inherits a vector-bundle structure from $P$.
\begin{prop}
There exists a unique vector-bundle projection $[\tau]: [P] \to [B]$ and a map $[\rho] : [P] \to T [B]$ such that $[\tau] \circ \pi^P_{[P]} = \pi^{B}_{[B]} \circ \tau$ and $[\rho] \circ \pi^{P}_{[P]} = T\pi^{B}_{[B]} \circ \rho$.
\end{prop}
\begin{proof}
We see that $\tau( z \cdot \xi ) = z \cdot \tau( \xi)$ for any $z \in \SE(d)$ and $\xi \in P$.
Therefore $\tau$ maps each $\SE(d)$-coset in $P$ to an $\SE(d)$-coset in $B$, and the map $[\tau]: [P] \to [B]$ is well-defined by the condition $[\tau] \circ \pi^{P}_{[P]} = \pi^{B}_{[B]} \circ \tau$.
The same argument makes the map $[\rho]: [P] \to T[B]$ well-defined.
The vector-bundle structure on $[P]$ can be observed directly.
\end{proof}
As everything in sight is $\SE(d)$-invariant, it is not surprising that we can express reduced equations of motion on $[P]$.
\begin{prop} \label{prop:evolution2}
There exists a vector field $X_{[P]} \in \mathfrak{X}([P])$ and a flow map $\Phi_t^{[P]}:[P] \to [P]$ which are $\pi^{P}_{[P]}$-related to $X_P$ and $\Phi_t^{X_P}$, respectively.
\end{prop}
\begin{proof}
Let $q:[0,T] \to Q$ be a curve such that the time derivative $(q,\dot{q}): [0,T] \to TQ$ is an integral curve of the Lagrange-d'Alembert equations with initial condition $(q,\dot{q})_0$ and final condition $(q,\dot{q})_T$. Then the Lagrange-d'Alembert variational principle states that
\[
\delta \int_{0}^{T}{ L( q,\dot{q}) dt } = \int_{0}^{T}{ \ensuremath{\langle} F(q,\dot{q}) , \delta q \ensuremath{\rangle} dt}
\]
for all variations of the curve $q( \cdot)$ with fixed endpoints. Note that for each $\psi \in G$ and $z \in \SE(d)$, the action satisfies $\int_{0}^{T}{ L(q,\dot{q}) dt} = \int_{0}^{T}{ L( z \cdot (q,\dot{q}) \circ \psi ) dt}$ and the virtual work is
\begin{align*}
\int_{0}^{T}{ \ensuremath{\langle} F(q,\dot{q}) , \delta q \ensuremath{\rangle} dt} &= \int_{0}^{T}{ \ensuremath{\langle} F_{/G}(z \cdot \pi_{/G}( q,\dot{q}) ) , z \cdot \pi_{/G}( \delta q) \ensuremath{\rangle} dt} \\
&= \int_0^T{ \ensuremath{\langle} F( z \cdot (q,\dot{q}) \circ \psi ) , z \cdot \delta q \circ \psi \ensuremath{\rangle} dt}.
\end{align*}
Therefore we observe that
\[
\delta \int_{0}^{T}{ L( z \cdot (q,\dot{q}) \circ \psi) dt } = \int_{0}^{T}{ \ensuremath{\langle} F( z \cdot (q,\dot{q}) \circ \psi) , z \cdot \delta q \circ \psi \ensuremath{\rangle} dt}
\]
for arbitrary variations of the curve $q( \cdot )$ with fixed end points. However, the variation $z \cdot \delta q \circ \psi$ is merely a variation of the curve $z \cdot q \circ \psi (\cdot)$. Therefore,
\[
\delta \int_{0}^{T}{ L( z \cdot (q,\dot{q}) \circ \psi) dt } = \int_{0}^{T}{ \ensuremath{\langle} F( z \cdot (q,\dot{q}) \circ \psi) , \delta (z \cdot q \circ \psi) \ensuremath{\rangle} dt}
\]
for arbitrary variations of the curve $z \cdot q \circ \psi$ with fixed end points. This last equation states that the curve $z \cdot (q,\dot{q}) \circ \psi$ satisfies the Lagrange-d'Alembert principle. Thus, $\Phi_T^{TQ}(z \cdot (q,\dot{q})_0 \circ \psi)= z \cdot \Phi_{T}^{TQ}(q,\dot{q}) \circ \psi$. Since $\psi \in G$ was arbitrary, we may apply $\Phi_T^{TQ}$ to the entire coset $z \cdot (q,\dot{q}) \cdot G$ to find $\Phi^{TQ}_T( z \cdot (q,\dot{q} ) \cdot G) = z \cdot \Phi_T^{TQ}(q,\dot{q}) \cdot G$.
This map from $G$-cosets to $G$-cosets is the defining condition for $\Phi_T^P$.
Therefore, the last equation states that $\Phi^P_T(z \cdot \xi) = z \cdot \Phi_T^P(\xi)$, where $\xi = \pi^{TQ}_{/G}((q,\dot{q})_0)$.
In other words, $\Phi_T^P$ is $\SE(d)$-invariant.
Therefore, by Proposition \ref{prop:reduced_vf} the theorem follows.
\end{proof}
\section{Asymptotic Behavior} \label{sec:asy}
It is commonplace to assume that the asymptotic behavior of a simple mechanical system with dissipation approaches a state of minimum energy.
In this section, we will verify that the asymptotic behavior of the Lagrangian system described in the previous section tends towards the minimizers of the elastic potential energy, $U$.
\begin{prop} \label{prop:stable_set}
Consider the Lagrangian $L$ of \eqref{eq:total_Lagrangian} and the external force $F$ of \eqref{eq:total_force}. Let $q:[0,\infty) \to Q$ be a curve such that the time derivative $(q,\dot{q}) : [0, \infty) \to TQ$ is an integral curve of the Lagrange-d'Alembert equations for the Lagrangian $L$ and the force $F$. Then the $\omega$-limit set of $(q,\dot{q})( \cdot )$ is contained in the set $dU^{-1}(0) := \{ (q,0) \in TQ \quad \vert \quad dU(q) = 0 \}$.
\end{prop}
\begin{proof}
The energy is the function $E: TQ \to \mathbb{R}$ given by
\begin{align*}
E(q,\dot{q}) &:= \ensuremath{\langle} \mathbb{F}L(q,\dot{q}) , \dot{q} \ensuremath{\rangle} - L(q,\dot{q}).
\end{align*}
Given any Lagrangian system on a Riemannian manifold where the Lagrangian is the kinetic energy minus the potential energy, the time derivative of the generalized energy under the evolution of the Lagrange-d'Alembert equations is given by $\dot{E} = \ensuremath{\langle} F(\dot{q}) , \dot{q} \ensuremath{\rangle}$. In this case, we find
\[
\dot{E}(q,\dot{q}) = \ensuremath{\langle} F_{\ensuremath{\mathcal{B}}}(\dot{b}) , \dot{b} \ensuremath{\rangle} + \ensuremath{\langle} F_{\mu}( q,\dot{q}), (q,\dot{q}) \ensuremath{\rangle}.
\]
However, by \eqref{eq:concavity} and the dissipative nature of the viscous force, $\dot{E}$ is non-positive and vanishes only on the zero-section of $TQ$.
Therefore the $\omega$-limit set of $(q,\dot{q})( \cdot )$, denoted $M^{\omega}$, must be a subset of the zero section of $TQ$.
Moreover, the Lagrange-d'Alembert equations state
\[
\frac{D \dot{q}}{Dt} = - \nabla U(q) + \sharp( F(q,\dot{q}) ),
\]
\]
where $\sharp:T^{\ast}Q \to TQ$ is the sharp map associated with the metric on $Q$. However, $F(q,\dot{q}) = 0$ when $\dot{q} = 0$, which is the case for points in $M^{\omega}$. Thus, the vector field on $M^{\omega}$ must satisfy
\[
\frac{D \dot{q}}{Dt} = - \nabla U(q).
\]
However, $M^\omega$ is an invariant set.
Therefore, the Lagrange-d'Alembert vector field must be tangential to $M^\omega$.
As $M^\omega$ is contained in the zero-section of $TQ$, the second derivative of $q(t)$ must vanish in order to remain in the zero section.
Thus, we find $\frac{D \dot{q}}{Dt} = 0$ on $M^{\omega}$ which implies $\nabla U = 0$ on $M^{\omega}$.
That is to say, $M^{\omega} \subset dU^{-1}(0)$.
\end{proof}
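A finite-dimensional caricature may clarify the argument: for a particle on $\mathbb{R}^n$ with $L = \frac{1}{2} \| \dot{q} \|^2 - U(q)$ and friction force $F = -c \, \dot{q}^{\flat}$ with $c > 0$, we have $\dot{E} = -c \| \dot{q} \|^2 \leq 0$, so the $\omega$-limit set lies in $\{ \dot{q} = 0 \}$; invariance of the $\omega$-limit set then forces $\ddot{q} = 0$ there, and the equations of motion yield $\nabla U(q) = 0$.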
\begin{cor} \label{cor:stable_point}
Let $[U]: [B] \to \mathbb{R}$ be the unique function on the shape-space of the body such that $[U]( [ b] ) = U(b)$ for all $b \in B$. Assume that $[U]$ has a unique minimizer $s_{\min} \in [B]$, and let $(s_{\min})^{0}_{\uparrow} \in [P]$ denote the element of the zero section of $[\tau]: [P] \to [B]$ above $s_{\min} \in [B]$. If $(q,\dot{q}): [0,\infty) \to TQ$ is an integral curve of the Lagrange-d'Alembert equations, then $[\xi] (t) = [ \pi_{/G}((q,\dot{q})(t))]$ must approach $(s_{\min})^{0}_{\uparrow} \in [P]$. If the flow of the Lagrange-d'Alembert equations is complete, this means that $(s_{\min})^0_{\uparrow}$ is a globally (though perhaps not exponentially) asymptotically stable fixed point for the vector field $X_{[P]}$.
\end{cor}
\begin{proof}
In Proposition \ref{prop:stable_set}, we showed that solutions approach the set $dU^{-1}(0)$ asymptotically. This implies that the dynamics on $[P]$ must approach $d[U]^{-1}(0)$ asymptotically. However, there is only one such point.
\end{proof}
In the next section, we will periodically perturb this stable equilibrium to obtain a loop in $[P]$.
\begin{example}
Consider the swimmer of Example \ref{ex:two_link} and Figure \ref{fig:two_link}. Corollary \ref{cor:stable_point} asserts that the state where the swimmer and the water are stationary and $\phi_1 - \phi_2 = \bar{\theta}$ is asymptotically stable.
\end{example}
\section{Swimming} \label{sec:swimming}
In order to understand swimming, let us consider a time-periodic internal body force, $F_{\rm swim}: T [B] \times S^1 \to T^{\ast}[B]$.
Such a force should be designed to model the time-periodic activation of muscles in a fish, or control forces for an underwater vehicle.
This force can be lifted via the map $\pi^{B}_{[B]} \circ \pi^Q_B: Q \to [B]$ to obtain a $G,\SE(d)$-invariant force on $Q$.
\emph{The addition of this time-periodic force on $Q$ alters the Lagrange-d'Alembert equations linearly by the addition of a $G,\SE(d)$-invariant time-periodic vector field $X_{\rm swim}$.}
To consider small perturbations, we scale this time-periodic force by a real parameter $\varepsilon \in \mathbb{R}^+$, so that the Lagrange-d'Alembert vector field is now given by the time-periodic vector field $X_{TQ,\varepsilon} = X_{TQ} + \varepsilon X_{\rm swim} \in \mathfrak{X}(TQ)^{S^1}$.
This deformed vector field is also $G,\SE(d)$-invariant; thus, there exists a reduced vector field $X_{P,\varepsilon} \in \mathfrak{X}(P)^{S^1}$ which is $\pi^{TQ}_{/G}$-related to $X_{TQ,\varepsilon}$, and a vector field $X_{[P],\varepsilon} \in \mathfrak{X}([P])^{S^1}$ which is $\pi^{P}_{[P]}$-related to $X_{P,\varepsilon}$.
The vector field $X_{[P],\varepsilon}$ is a time-periodic deformation of $X_{[P]}$.
As $X_{[P]}$ admits an asymptotically stable point by Corollary \ref{cor:stable_point}, we are reasonably close to being able to prove the existence of an $\SE(d)$-relative limit cycle for $X_{P,\varepsilon}$ for small $\varepsilon > 0$.
{\bf Desideratum:}
{\it For sufficiently small $\varepsilon > 0$, the vector field $X_{[P],\varepsilon}$ admits a non-autonomous exponentially stable limit cycle.
Moreover, $X_{P,\varepsilon}$ admits a stable relative limit cycle. }
If we were to assume Proposition \ref{prop:NAESRLC} held for infinite-dimensional dynamical systems, then as $X_{[P],\varepsilon}$ is a deformation of $X_{[P]}$ we could deduce that $X_{[P],\varepsilon}$ admits a non-autonomous exponentially stable limit cycle for sufficiently small $\varepsilon > 0$.
As a result, this would imply that $X_{P,\varepsilon}$ admits $\SE(d)$-relative limit cycles which are $\pi^{P}_{[P]}$-related to the limit cycle in $[P]$.
Unfortunately, we are unable to do this because Proposition \ref{prop:NAESRLC} is limited to finite-dimensional manifolds and vector fields with exponentially stable points.
The vector field $X_{[P]}$ is defined on an infinite-dimensional space, and we have only proven asymptotic stability.
We will not overcome this difficulty; however, we are at least able to speculate on how to deal with this.
For example, there do exist extensions of normal hyperbolicity and persistence to infinite-dimensional dynamical systems on Hilbert manifolds \cite{Jones1999,Bates1998}.
Alternatively, we could consider finite-dimensional models for fluid-structure interaction, such as the immersed boundary method \cite{Peskin2002}.
In the next section, we will informally speculate on this latter approach.
\subsection{Analytic concerns and approximate relative limit cycles} \label{sec:analytic}
Up until now, the paper has been fairly rigorous and complete.
The start of this section marks the end of this theorem-proof formalism.
Instead, we provide a more speculative discussion on how one can overcome the challenges to obtaining relative limit cycles in $P$.
There are two issues of concern. The first is the lack of a ``spectral gap'' with respect to the equilibrium associated to $s_{\min} \in [B]$.
That is to say, it is not immediately obvious if there exists a convergence bound $\rho > 0$ with respect to $s_{\min}$, as is required in order to use Theorem \ref{thm:persistence} and its offspring, Proposition \ref{prop:NAESRLC}.
It is possible that there does not exist any such $\rho$.
For simple mechanical systems, $\rho$ is related to the spectrum of the Rayleigh dissipation function.
In our case, this spectrum includes the spectrum of the Laplace-operator on a non-compact domain, which \emph{does not contain a spectral gap!}
The second issue is the non-completeness of $[P]$. As $Q$ is an infinite-dimensional Fr\'echet manifold, so is $[P]$.
This is a concern because both Theorem \ref{thm:persistence} and Proposition \ref{prop:NAESRLC} require completeness in order to provide an existence-uniqueness result.
There do exist generalizations of Theorem \ref{thm:persistence} to infinite-dimensional Banach manifolds, but not Fr\'echet manifolds \cite{Jones1999,Bates1998}.
Therefore, using the persistence theorem directly will not allow us to assert the existence of a relative limit cycle on $P$.
Perhaps other methods besides normal hyperbolicity theory could be employed, but this would be an exploration for another paper.
However, we can consider an option which is morally the converse of an idea illustrated in \cite{Jones1999}, wherein discrete approximations are invoked.
There exists a number of finite-dimensional models for the space $P$ used by engineers to study fluid-structure interaction.
It is fairly common to approximate the fluid velocity field on a finite-dimensional space and model the solid using a finite element method (e.g. \cite{Peskin2002}).
Let us call this finite-dimensional space $P_{\discrete}$.
Moreover, one can usually act on $P_{\discrete}$ by $\SE(d)$ by simply rotating and translating the finite elements and the grid.
If the model on $P_{\discrete}$ converges as the time step and spatial resolution go to zero, then we could reasonably restrict ourselves to methods which dissipate energy at a rate which is quadratic and positive definite in the state velocity.
This is not too much to expect, as a good method ought to converge.\footnote{The immersed boundary method \cite{Peskin2002} and smooth-particle hydrodynamics \cite{Monaghan1977,Lucy1977} are both candidates.}
By the same arguments as before, the dynamics will exhibit hyperbolically stable equilibria on the quotient space $[P_{\discrete}] = \frac{ P_{\discrete} }{\SE(d)}$.
Upon adding a periodic perturbation to the dynamics on $[P_{\discrete}]$, one could apply Proposition \ref{prop:NAESRLC} directly to assert the existence of a non-autonomous exponentially stable relative limit cycle $\gamma_{\discrete}(t) \in P_{\discrete}$. In particular, by Proposition \ref{prop:regular}, $\gamma_{\discrete}(t)$ must satisfy
\[
\gamma_{\discrete}(t) = z^{ \lfloor t \rfloor } \cdot \gamma_{\discrete}(t - \lfloor t \rfloor)
\]
for some $z \in \SE(d)$, where $\lfloor t \rfloor = \sup \{ k \in \mathbb{Z} : k \leq t \}$.
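In such a computation, the phase shift $z$ can be estimated directly from the output.
The following minimal sketch for the planar case (illustrative only; the helper is hypothetical and assumes poses $(\theta, x, y)$ extracted from the discrete trajectory, composed as in Example \ref{ex:two_link}) recovers $z$ from two poses one period apart:
\begin{verbatim}
import numpy as np

def se2_phase_shift(pose0, pose1):
    """Solve pose1 = z * pose0 for z = (dtheta, T) in SE(2).

    Poses are (theta, x, y); z rotates the position, shifts
    the angle, and translates, as in the two-link example."""
    th0, p0 = pose0[0], np.asarray(pose0[1:], dtype=float)
    th1, p1 = pose1[0], np.asarray(pose1[1:], dtype=float)
    dth = th1 - th0
    R = np.array([[np.cos(dth), -np.sin(dth)],
                  [np.sin(dth),  np.cos(dth)]])
    return dth, p1 - R @ p0

# e.g. poses gamma(0) and gamma(1), one forcing period apart:
dth, T = se2_phase_shift((0.10, 1.0, 2.0), (0.35, 1.4, 2.2))
\end{verbatim}
Applying the recovered $z$ repeatedly then extrapolates the relatively periodic motion over many periods, in the sense of Proposition \ref{prop:regular}.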
If the model on $P_{\discrete}$ converges, then there exists a trajectory $\gamma(t) \in P$ which is well-approximated by $\gamma_{\discrete}(t)$ over a single time period, by the definition of ``convergence.''
Then, the equation $\gamma(t) = z^{ \lfloor t \rfloor } \cdot \gamma(t - \lfloor t \rfloor)$ would hold up to numerical error.
In other words, the immersed body would move in an \emph{approximately} relatively periodic fashion, reminiscent of swimming.
\section{Conclusion and Future work} \label{sec:Conclusion}
It is widely observed that steady swimming is periodic, and this observation inspired the question, ``Is it possible to interpret swimming as a limit cycle?''
In this paper, we have illustrated the crucial role played by $\SE(d)$-reduction in answering this question.
Moreover, we have posed a possible answer, accurate up to the spatial discretization error of a numerical method.
The existence of these hypothetical relative limit cycles would provide robustness to mechanisms of locomotion, and conform with behavior observed in real systems \cite{Alben_Shelley_2005,Bhalla2013,Liao2003a,Liao2003b,Tytell2010,Wilson2011}.
\emph{Given the complexity of fluid-structure interaction, it is not immediately clear that one could expect such orderly behavior.}
This potential orderliness could be exploited in a number of applications.
\begin{enumerate}
\item {\bf Robotics and Optimal Control}
The interpretation of swimming as a limit cycle may permit a non-traditional framework for controller design.
For example, if our control forces are parametrized by a space $C$, then we may consider the set of loops, $\mathrm{loop}(C)$.
The limit cycle hypothesis would imply the existence of a subset $W \subset {\rm loop}(C)$ and a map $\Gamma: W \to \mathrm{loop}( [P] )$ which outputs the periodic limit cycle in $[P]$, resulting asymptotically from the time-periodic control signals in $W$. Given $\Gamma$, we may define a control cost functional on $W$ based upon a reward function on ${\rm loop}([P])$. As such a cost functional would only respond to the asymptotic behavior of the system, one could surmise that it would not overreact to transient dynamics.
\item {\bf Transient dynamics} Although trajectories may approach a limit cycle, the transient dynamics are still important.
The transient dynamics would re-orient and translate the body before orderly periodic behavior takes effect.
Therefore, if one desires to create locomotion through periodic control inputs, one should try to get onto a limit cycle quickly in order to minimize the duration where transient dynamics dominate.
\item {\bf Pumping}
In the current setup, one could consider a reference frame attached to the body. In this reference frame, ``swimming'' manifests as fluid moving around the body in a regular fashion. This change in our frame of reference describes pumping.
\item {\bf Passive Dynamics}
This paper does \emph{not} address the dual problem. By the dual problem, we mean: ``Given a constant fluid velocity at infinity, what periodic motion (if any) will a tethered body approach as time goes to infinity?'' In this dual problem, the motion of the body is given first, and parameters such as the period of the limit cycle are emergent phenomena. In particular, the dual problem of a flapping flag immersed in a fluid with a constant velocity at infinity has received much attention in the applied mathematics community (see \cite{Shelley2005} and references therein). Here, it is generally not the case that a limit cycle will emerge, and the system is capable of admitting chaos.
\item {\bf Other types of locomotion}
The notion that walking may be viewed as a limit cycle is fairly common \cite{GaChaRuCo98,Hobbelen,McGreer1990}. Moreover, it is conceivable that flapping flight is a limit cycle as well \cite{LiuRistroph2012}. However, for both of these systems, $\SE(3)$ symmetry is broken by the direction of gravity.
Because of this, it is not immediately clear that one can import the methods used here to understand flapping flight and terrestrial locomotion. However, perhaps this is merely a challenge to be overcome. In particular, these systems still exhibit $\SE(2)$ symmetry. For the case of 2D bipedal walkers, we have an $\mathbb{R}$-symmetry and the stability problems due to falling will not manifest. Here, one can find limit cycles using regularized models of the ground \cite{JaEl2013}.
\end{enumerate}
\subsection{Acknowledgements}
The notion of swimming as a limit cycle was initially introduced to me by Erica J. Kim while she was studying hummingbirds.
Additionally, Sam Burden, Ram Vasudevan, and Humberto Gonzales provided much insight into how to frame this work for engineers.
I would also like to thank Professor Shankar Sastry for allowing me to stay in his lab for a year and meet people who are outside of my normal research circle.
I would like to thank Eric Tytell for suggesting relevant articles in neurobiology,
Amneet Pal Singh Bhalla for allowing me to reproduce figures from \cite{Bhalla2013}, and Peter Wallen for allowing me to reproduce figures from \cite{WallenWilliams1984}.
An early version of this paper was written in the context of Lie groupoid theory, where the guidance of Alan Weinstein was invaluable.
Jaap Eldering and Joris Vankerschaver have given me more patience than I may deserve by reading my papers and checking my claims.
Major contributions to the bibliography and the overall presentation of the paper were provided by Jair Koiller.
Finally, the writing of this paper was solidified with the help of Darryl Holm.
This research has been supported by the European Research Council Advanced Grant 267382 FCCA and NSF grant CCF-1011944.
\bibliographystyle{amsalpha}
\section{Introduction}
Massive stars spend a significant part ($\gtrsim10\%$) of their lives embedded in their parental molecular cloud, making it difficult to investigate their early evolutionary stages.
The discovery of IR-dark clouds \citep[IRDCs; e.g., ][]{Perault+96, Egan+98} seen in absorption against the mid-IR Galactic background made it possible to identify the most likely birthplaces of high-mass stars. These clouds are usually filamentary, hosting complexes of cold ($T\lesssim25\usk\kelvin$) and dense ($n\gtrsim\pot{5}\usk\centi \metre^{-3}$) clumps, with high H$_2$ column densities ($N\gtrsim\pot{23}\usk\centi \metre^{-2}$) and masses that usually exceed $100\msun$ (though not all of them will form massive stars; e.g., \citealt{KauffmannPillai10}).
Clouds with such low temperatures have a spectral energy distribution (SED) that peaks at far-IR (FIR) wavelengths and are optically thin in the millimetre/submillimetre regime. The emission at these wavelengths usually matches the IR absorption very well \citep[e.g., ][]{Rathborne+06, Pillai+06}, and makes it easy to identify the cold and dense gas concentrations. Some clumps within IRDCs show signs of active star formation, such as $24\usk\micro\metre$ emission, presence of extended excess emission at $4.5\usk\micro\metre$\footnote{The excess at $4.5\usk\micro\metre$, typically named Extended Green Object or ``green fuzzy'' is commonly interpreted as arising from $\mathrm{H_2}$ and CO lines, likely tracing shocks \citep[e.g., ][]{Noriega-Crespo+04, Marston+04}.}, masers and SiO emission from outflows (e.g., \citealt{Beuther+05, Rathborne+05, Chambers+09}).
The pre-/proto-stellar phase for high-mass stars is very short ($\lesssim3\times\pot{4}\yr$), according to statistical studies \citep{Motte+07}, when compared to the low-mass regime \citep[$\sim 3\times\pot{5}\yr$,][]{Kirk+05}, because the accretion timescale is longer than the Kelvin-Helmholtz timescale and nuclear fusion starts before the star has reached its final mass. Therefore, the entire life of the protostar and part of the main sequence are spent inside the parental clump. Objects in these early phases of evolution, and their influence on the surrounding material, can be investigated at frequencies that can penetrate the cocoon in which the objects are enshrouded.
In the last two decades we have made a thorough investigation of a sample of luminous IRAS sources distributed over the whole sky, selected on the basis of FIR colours typical of YSOs \citep[e.g., ][]{Palla+91}. Our expectation that this sample contains high-mass YSOs in different evolutionary stages has been supported by a large number of observations at both low- and high-angular resolution \citep[e.g., ][]{Molinari+96, Molinari+98a, Molinari+98b, Brand+01, Fontani+05, Beltran+06}.
Those with $\delta<30\degree$ have been observed with the SEST in the continuum at 1.2-mm (SIMBA) and in CS \citep{Fontani+05, Beltran+06}. The mm-continuum maps often show the presence of several clumps around a single IRAS source. A comparison with MSX (and later Spitzer) images revealed that some of these clumps are associated with mid-IR emission, while others appear IR-dark \citep{Beltran+06}.
\begin{table}[tbp]
\centering
\caption{Central coordinates of the observed fields, names of the clumps in the field and their coordinates.}
\label{tab:positions}
\tiny
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{Phase Centre (J2000)} & Clump & \multicolumn{2}{c}{Clump Coordinates (J2000)} \\
\midrule
RA & DEC & & RA & DEC \\
\midrule
08:49:35.13 &$-$44:11:59.0 & 08477-4359c1 & 08:49:35.13 &$-$44:11:59.0 \\
09:00:40.50 &$-$47:25:55.0 & 08589-4714c1 & 09:00:39.71 &$-$47:26:11.0 \\
10:10:41.70 &$-$57:44:36.0 & 10088-5730c2 & 10:10:41.70 &$-$57:44:36.0 \\
12:32:52.10 &$-$61:35:42.0 & 12300-6119c1 & 12:32:49.86 &$-$61:35:34.0 \\
13:07:09.40 &$-$63:47:12.0 & 13039-6331c1 & 13:07:08.19 &$-$63:47:12.0 \\
{13:59:33.04} &$ -$61:49:13.0& 13560-6133c1 & 13:59:31.91 &$-$61:48:41.0 \\
&$ $ & 13560-6133c2 & 13:59:33.04 &$-$61:49:13.0 \\
13:59:55.50 &$-$61:24:25.0 & 13563-6109c1 & 13:59:57.73 &$-$61:24:33.0 \\
{14:20:21.74} &$ -$61:31:13.0& 14166-6118c1 & 14:20:19.50 &$-$61:31:53.0 \\
&$ $ & 14166-6118c2 & 14:20:21.74 &$-$61:31:13.0 \\
14:22:21.54 &$-$61:06:42.0 & 14183-6050c3 & 14:22:21.54 &$-$61:06:42.0 \\
15:07:32.52 &$-$58:40:33.0 & 15038-5828c1 & 15:07:32.52 &$-$58:40:33.0 \\
15:11:07.90 &$-$59:06:30.0 & 15072-5855c1 & 15:11:08.94 &$-$59:06:46.0 \\
{15:31:44.17} &$ -$56:32:08.0& 15278-5620c1 & 15:31:45.13 &$-$56:30:48.0 \\
&$ $ & 15278-5620c2 & 15:31:44.17 &$-$56:32:08.0 \\
15:48:40.82 &$-$53:40:35.0 & 15454-5335c2 & 15:48:40.82 &$-$53:40:35.0 \\
15:51:28.24 &$-$54:31:42.0 & 15470-5419c1 & 15:51:28.24 &$-$54:31:42.0 \\
15:51:01.62 &$-$54:26:46.0 & 15470-5419c3 & 15:51:01.62 &$-$54:26:46.0 \\
15:50:56.12 &$-$54:30:38.0 & 15470-5419c4 & 15:50:56.12 &$-$54:30:38.0 \\
{15:59:36.20} &$ -$52:22:58.0& 15557-5215c1 & 15:59:40.57 &$-$52:23:30.0 \\
&$ $ & 15557-5215c2 & 15:59:36.20 &$-$52:22:58.0 \\
15:59:39.70 &$-$52:25:14.0 & 15557-5215c3 & 15:59:39.70 &$-$52:25:14.0 \\
16:01:52.83 &$-$53:11:57.0 & 15579-5303c1 & 16:01:46.60 &$-$53:11:41.0 \\
16:02:08.86 &$-$53:08:53.0 & 15579-5303c3 & 16:02:08.86 &$-$53:08:53.0 \\
16:10:06.61 &$-$50:50:29.0 & 16061-5048c1 & 16:10:06.61 &$-$50:50:29.0 \\
16:09:57.30 &$-$50:57:09.0 & 16061-5048c2 & 16:10:02.38 &$-$50:49:33.0 \\
16:10:06.61 &$-$50:57:09.0 & 16061-5048c4 & 16:10:06.61 &$-$50:57:09.0 \\
16:13:05.20 &$-$50:23:05.0 & 16093-5015c1 & 16:13:01.85 &$-$50:22:41.0 \\
{16:12:55.46} &$ -$51:43:22.0& 16093-5128c1 & 16:12:49.45 &$-$51:43:30.0 \\
&$ $ & 16093-5128c2 & 16:12:55.46 &$-$51:43:22.0 \\
16:12:49.45 &$-$51:36:34.0 & 16093-5128c8 & 16:12:49.45 &$-$51:36:34.0 \\
{16:20:24.51} &$ -$49:35:34.0& 16164-4929c2 & 16:20:18.75 &$-$49:34:54.0 \\
&$ $ & 16164-4929c3 & 16:20:24.51 &$-$49:35:34.0 \\
16:20:31.92 &$-$49:35:26.0 & 16164-4929c6 & 16:20:31.92 &$-$49:35:26.0 \\
16:20:24.33 &$-$48:44:58.0 & 16164-4837c2 & 16:20:24.33 &$-$48:44:58.0 \\
16:29:00.89 &$-$48:50:31.0 & 16254-4844c1 & 16:29:00.89 &$-$48:50:31.0 \\
16:47:01.70 &$-$41:15:18.0 & 16428-4109c1 & 16:47:01.70 &$-$41:15:18.0 \\
16:46:46.81 &$-$41:14:22.0 & 16428-4109c2 & 16:46:46.81 &$-$41:14:22.0 \\
16:47:33.13 &$-$45:22:51.0 & 16435-4515c3 & 16:47:33.13 &$-$45:22:51.0 \\
16:51:44.59 &$-$44:46:50.0 & 16482-4443c2 & 16:51:44.59 &$-$44:46:50.0 \\
17:00:33.38 &$-$42:25:18.0 & 16573-4214c2 & 17:00:33.38 &$-$42:25:18.0 \\
17:07:58.78 &$-$40:02:24.0 & 17040-3959c1 & 17:07:58.78 &$-$40:02:24.0 \\
17:23:00.30 &$-$38:13:54.0 & 17195-3811c1 & 17:23:00.98 &$-$38:13:54.0 \\
{17:23:00.30} &$ -$38:14:58.0& 17195-3811c2 & 17:23:00.30 &$-$38:14:58.0 \\
&$ $ & 17195-3811c3 & 17:23:00.98 &$-$38:15:38.0 \\
17:38:49.87 &$-$32:43:27.0 & 17355-3241c1 & 17:38:49.87 &$-$32:43:27.0 \\
\bottomrule
\end{tabular}
\end{table}
A first attempt to exploit such a large amount of data to define an evolutionary sequence for the clumps and their embedded sources was carried out by \citet{Molinari+08}, who distinguished three different types of objects, on the basis of their mm and IR properties: {\bf (Type 1)} objects with dominant mm emission, and not associated with a mid-IR source; {\bf (Type 2)} objects with both IR and mm emission; and {\bf (Type 3)} objects with clearly dominant IR emission.
Using a simple model, the authors could explain these different types in terms of an evolutionary scenario, in order of increasing age: {(Type 1)} starless cores and/or high-mass proto-stars embedded in dusty clumps; {(Type 2)} deeply embedded Zero-Age Main Sequence (ZAMS) OB star(s) still accreting material from the parental clump, and {(Type 3)} OB stars surrounded only by the remnants of the molecular cloud.
From our recent ATCA 1.3~cm continuum and line (H$_2$O maser at 22~GHz) observations \citep{Sanchez-Monge+13} of a large number ($\sim \,200$) of these massive clumps selected from the SEST mm-continuum observations, we found that Type 1 sources are rarely associated with cm-continuum emission (8\%), while Types 2 (75\%) and 3 (28\%) are associated with it more frequently. At the same time, H$_2$O maser emission was found associated with 13\%, 26\%, and 3\% of sources of Type 1, 2 and 3, respectively. These findings corroborate the evolutionary sequence derived by \citet{Molinari+08}.
In this paper we will explore how the gas and dust properties in massive clumps depend on the evolutionary phase, as determined from the source type and the presence of signposts of high-mass star formation.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/neg_lobes.pdf}
\caption{Typical map for a source in which we filter out the extended emission. The contours are $\pm3\%$ of the intensity peak of $628.3 \usk\milli\mathrm{Jy}$.}
\label{fig:neg_lobes}
\end{figure}
The paper is organized as follows: in Sect.~\ref{sec:sample} we briefly describe the sample selection, in Sect.~\ref{sec:obs} we describe the observations performed with the Australia Telescope Compact Array, and describe the data reduction procedure; in Sect.~\ref{sec:results} we show the results for the quantities directly derived from the observations; in Sect.~\ref{sec:discussion} we discuss how the sample is divided into ``star-forming'' and ``quiescent'' clumps, and into clumps likely hosting a ZAMS star (Types 2 and 3) and clumps that are starless or with a deeply embedded protostar (Type 1). The mean clump properties are investigated to search for differences as a function of evolutionary phase. In Sect.~\ref{sec:sketch} we give a sketch of the different classes of objects identified and finally in Sect.~\ref{sec:summary} we summarize our findings.
\section{Sample and tracer}\label{sec:sample}
The 39 fields considered in this paper were selected from the \citet{Beltran+06} survey at $1.2\usk\milli\metre$, carried out with SEST/SIMBA towards IRAS sources, and contain 46 massive millimetre clumps. The coordinates of the field centres and of the clumps are listed in Table~\ref{tab:positions}. Each field contains at least one massive clump.
The selection of fields was done according to simple criteria: (i) source declination $\delta<-30\degree$; (ii) comparable numbers of MSX-dark and -bright sources; (iii) clumps as isolated as possible, i.e. with a separation greater than 1 SEST beam ($24\arcsec$) between MSX and non-MSX emitters to limit confusion; (iv) masses in excess of $\sim40\msun$ in the \citet{Beltran+06} catalogue.
In this work we will use the gas temperature derived from ammonia observations and the dust temperature derived from a modified black-body fit to the SED to obtain a more accurate estimate of the mass and related quantities. As a spectroscopic tool we selected ammonia, an ideal tracer of cold, dense gas: it does not deplete up to high number densities ($\gtrsim\pot{6}\usk\centi \metre^{-3}$) and is an excellent thermometer \citep{HoTownes83}.
The ammonia inversion transitions are split into five electric quadrupole hyperfine components: a main one at the centre of the spectrum and four satellites. From their intensity ratio one can derive the optical depth $\tau$, which allows a direct estimate of the column density.
The five lines are further split into several closely spaced components by magnetic interactions between the nuclei; these components, however, are typically not resolved observationally.
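For illustration, $\tau$ can be recovered numerically from the observed main-to-satellite intensity ratio. A minimal Python sketch (assuming a common excitation temperature for all hyperfine groups and the LTE opacity ratio $\tau_\mathrm{sat}/\tau_\mathrm{main}=5/18$ of the inner satellites; not the actual CLASS implementation) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tau_main(R_obs, a=5.0/18.0):
    """Main-line optical depth from the observed main-to-satellite
    brightness ratio R_obs, assuming a common excitation temperature.
    a = tau_sat/tau_main (5/18 for the inner satellites in LTE)."""
    f = lambda tau: (1.0 - np.exp(-tau)) / (1.0 - np.exp(-a*tau)) - R_obs
    return brentq(f, 1e-4, 50.0)   # R_obs must lie between 1 and 1/a

# e.g. tau_main(2.0) -> ~2.1 ; the optically thin limit gives R_obs = 1/a
\end{verbatim}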
\section{Observations and data reduction} \label{sec:obs}
The fields were observed in the NH$_3$(1,1) and (2,2) inversion transitions ($23694.50 \usk \mega\hertz$ and $23722.63 \usk \mega\hertz$, respectively) and in the H$_2$O($6_{16}-5_{23}$) maser line ($22235.08 \usk \mega\hertz$), with the Australia Telescope Compact Array (ATCA).
The observations were performed between the 4th and 8th of March 2011, for a total telescope time of 48 hours. We used the array in configuration 750D, providing baselines from $31\usk \mathrm{m}$ to $4469\usk \mathrm{m}$. The primary beam of the telescope at these frequencies is $\sim 2\overset{\prime}{.} 5$. The flux density scale was determined by observing the standard primary calibrator PKS1934$-$638 ($0.78\usk\mathrm{Jy}$ at $23650 \mega\hertz$), with an uncertainty expected to be $\lesssim10\%$. Gain calibration was performed through frequent observations of nearby compact quasars; 0537$-$441 was used as the bandpass calibrator. Pointing corrections were derived from nearby quasars and applied online. Weather conditions were generally good, with a weather path noise $\sim 400\usk \micro\mathrm{m}$ or better.
The total time on source was divided into a series of snapshots of between 3 and 5 minutes each, spread over as large a range in hour angle as possible, to improve the uv-coverage for each target. As a consequence of the observing strategy, the total on-source integration time varies, and is typically between $\sim 30\usk$min and $\sim 1 \usk$hour. The CABB correlator provided two zoom bands of $64\usk \mega\hertz$ each, with a spectral resolution of $32\usk$kHz ($\sim 0.4 \usk\kilo\metre\usk\second^{-1}$ at $\sim23.7\usk\mathrm{GHz}$). The two ammonia inversion transitions were observed in one band, and the other was centred on the H$_2$O maser line.
The data were edited and calibrated with the MIRIAD software package, following standard procedures. Deconvolution and imaging were performed in AIPS with the ``imagr'' task, applying natural weighting to the visibilities. Ammonia emission lines were visible only on the shortest baselines, thus we discarded all baselines $\gtrsim 30 \usk \mathrm{k \lambda}$. In order to obtain images with the same angular resolution, we reconstructed all of them with a clean circular beam of diameter $20^{\prime\prime}$, except for 17195$-$3811, 17040$-$3959c1 and 16428$-$4109c1. These sources have a poorer uv-coverage, resulting in a beam of roughly $20^{\prime\prime}$ $\times 40 ^{\prime\prime}$. Moreover, 16254$-$4844c1 and 16573$-$4214c2 were observed over a very limited range in hour angle, making deconvolution (``clean'') impossible.
Ammonia spectra were extracted from the data cubes in two different ways: from a circular area of diameter $20^{\prime\prime}$ centred at the peak emission, or averaged over the (larger) region enclosed in the $3\sigma$ contour of the NH$_3$(1,1) integrated emission.
The spectra extracted from the data cube were imported in CLASS\footnote{Part of the GILDAS (Grenoble Image and Line Data Analysis Software \url{http://iram.fr/IRAMFR/GILDAS/}) package}, and the lines were fitted using METHOD NH3 for the NH$_3$(1,1) inversion transition, which takes into account the hyperfine splitting of the line and thus also outputs the optical depth of the main line. This method was also used for the (2,2) transition, in order to obtain a better estimate of the full width at half maximum (FWHM) linewidth $\Delta V$ and $\tau$ for the $9$ sources for which we detected the (2,2) hyperfine structure. The spectral rms ranges from $3$ to $55\usk \milli\mathrm{Jy}$, with typical values around $10\usk \milli\mathrm{Jy}$. The value of the rms for each spectrum is given in Table~\ref{tab:line_prop}.
H$_2$O maser emission is detected on all baselines, allowing us to achieve the highest angular resolution allowed by the array configuration ($\sim1-2^{\prime\prime}$). For 16254$-$4844c1 and 16573$-$4214c2, we could only establish whether there is maser emission or not, and we do not derive positions for the maser spots.
\section{Results and analysis} \label{sec:results}
\subsection{Ammonia line profiles and properties}\label{ssec:line_prop}
The NH$_3$(1,1) integrated emission (zeroth moment) is shown in panels (a) and (b) of Fig.~\ref{fig:mom0_sest} together with the SEST $1.2\usk\milli\metre$ emission.
Of the $46$ clumps listed in Table~\ref{tab:positions}, $36$ were detected in both NH$_3$(1,1) and (2,2); $43$ have been detected in NH$_3$(1,1).
Three clumps were not detected in NH$_3$ at all: 15454$-$5335c2, 14166$-$6118c1 and 16164$-$4929c2. We discuss them in more detail in Appendix~\ref{app:ind_sou}.
It is evident from the data that we filter out extended emission for some objects: Figure~\ref{fig:neg_lobes} shows that the lack of information on the largest spatial scales of emission causes the persistence of negative features in the corresponding maps.
The general morphology of the ammonia emission traces the mm-continuum emission well.
The peaks of NH$_3$(1,1) may show significant displacement with respect to the millimetre peak. For some sources this may be caused by a low signal-to-noise ratio of the NH$_3$ emission. Alternatively, the offset could be the result of optical depth or chemical effects. Twelve clumps have optical depths larger than $1.5$ in the main line and a reliable map, but only 3 of them show an offset. We also made maps of the emission of one of the satellite lines (these are likely to be optically thin, as their optical depth is $\sim 4$ times smaller than that of the main line) for the 7 sources showing the largest displacement between the ammonia and millimetre peaks. In only one source (15470-5419c1) is the peak of the satellite line emission coincident with the millimetre peak; in the others the offset remains unchanged. Thus, optical depth effects cannot be the dominant cause for the offset.
\begin{figure}[tb]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/rfwhm_tkin.pdf}
\caption{Ratio of the line FWHM $\Delta V$ for the NH$_3$(2,2) and (1,1) lines as a function of kinetic temperature, derived from our data assuming $\Delta V$(2,2)$=\Delta V$(1,1).}
\label{fig:11/22}
\end{figure}
The peak flux of the two ammonia transitions, the rms of the spectra, $\Delta V$, the optical depth $\tau$, the systemic velocity $V_\mathrm{{LSR}}$ of the line, the rotation temperature $T_\mathrm{{rot}}$, the kinetic temperature $T_\mathrm{{K}}$ and the ammonia column density $N(\mathrm{NH_3})$ are listed in Table~\ref{tab:line_prop}.
The optical depth of the (1,1) transition ranges from $\ll1$ to $\sim4$, showing that ammonia emission is moderately thick in these objects.
Figure~\ref{fig:11/22} shows that the FWHM $\Delta V$ of the (2,2) transition is on average slightly larger than that of the (1,1), indicating that the two transitions do not trace exactly the same volume of gas, the (2,2) transition being more sensitive to regions with a higher degree of turbulence due to its higher energy. The same result is found by \citet{Rygl+10}.
To estimate a rotation temperature \citep[e.g., ][]{Mangum+92, Busquet+09} from the line ratio, the two ammonia transitions must trace the same volume of gas. Thus, if this assumption is correct, the $\Delta V$ of the two lines should be equal. The difference in $\Delta V$ that we measure is sufficiently small not to invalidate our assumption that the emission from NH$_3$(1,1) and (2,2) comes from the same region. We thus consider our temperature estimates reliable.
The kinematic distances were recomputed for the clumps with the $V_\mathrm{{LSR}}$ derived from NH$_3$ and the rotation curve of \citet{BrandBlitz93}, and were found to be in agreement with those given in \citet{Beltran+06}, except for 08477$-$4359c1, 16061$-$5048c1 and c2, 16093$-$5128c1, and 17040$-$3959c1. We chose to use the \citet{BrandBlitz93} rotation curve instead of the more recent one of \citet{Reid+09} because it is still the best sampled in terms of heliocentric and Galactocentric distances and Galactocentric azimuth. A comparison shows that the kinematic distances derived with the \citet{Reid+09} curve are systematically smaller by $< 10-15\%$ for virtually all of our sources.
In the inner Galaxy, objects along the line-of-sight on either side of the tangent point have the same radial velocity, which leads to an ambiguity in the kinematic distance (``near'' and ``far'') for several of our targets. Thus, we checked all our sources for associated $8 \usk\micro\metre$ absorption features in the Spitzer/GLIMPSE images, for H\textsc{i} self-absorption observations towards them in the literature and for the height with respect to the Galactic midplane. The near distance is chosen if the complex is observed in absorption against the Galactic mid-InfraRed (MIR) background or if the source at the far distance is further than $150\usk\mathrm{pc}$ from the Galactic plane \citep[$\sim 2-3$ times the scaleheight of the molecular gas distribution; see][]{Dame+87, BrandBlitz93, DameThaddeus94}.
Twenty-two of our sources meet the first criterion, and 8 targets would be located at more than $150\usk\mathrm{pc}$ from the midplane of the Milky Way at the far distance.
Finally, \citet{GreenMcClureGriffiths11} report H\textsc{i} self-absorption measurements for 7 H\textsc{ii}\ regions near our observed fields (containing 12 clumps in total). They locate 3 H\textsc{ii}\ region/clump complexes at the far distance. However, we are confident that 2 (15557-5215 and 17040-3959) of those 3 are instead at the near distance as the $8\usk\micro\metre$ images show a clear absorption patch, and this is unlikely if the sources were on the far side of the Galactic centre. Hence the far distance was assigned to only 1 of our fields (containing 1 clump). The distances adopted are listed in Table~\ref{tab:line_prop}. Where the near-far ambiguity could not be resolved (8 sources), the near distance was assumed.
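The essence of the kinematic method can be summarized in a few lines of Python. The sketch below assumes a flat rotation curve with IAU constants, rather than the \citet{BrandBlitz93} fit actually used, and applies to inner-Galaxy lines of sight:
\begin{verbatim}
import numpy as np

R0, TH0 = 8.5, 220.0   # IAU constants (kpc, km/s); illustrative only --
                       # the analysis uses the Brand & Blitz (1993) curve

def kinematic_distances(l_deg, v_lsr):
    """Near and far heliocentric distances (kpc) for an inner-Galaxy
    line of sight, assuming a flat rotation curve."""
    sl = np.sin(np.radians(l_deg))
    cl = np.cos(np.radians(l_deg))
    R = R0 / (1.0 + v_lsr / (TH0 * sl))     # Galactocentric radius
    disc = R**2 - (R0 * sl)**2
    if disc < 0.0:                          # velocity beyond the tangent
        return R0 * cl, R0 * cl             # point: place at the tangent
    return R0 * cl - np.sqrt(disc), R0 * cl + np.sqrt(disc)

# e.g. kinematic_distances(327.0, -50.0) -> (~3.3, ~10.9) kpc
\end{verbatim}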
\subsection{Temperatures from ammonia}\label{ssec:temp}
We derive the rotation temperature ($T_\mathrm{{rot}}$), and the molecular column density, following the method described in \citet{Busquet+09}. This assumes that the transitions between the inversion doublets can be approximated as a two level system \citep[see ][]{HoTownes83}, and that the excitation temperature and line widths are the same for both the (1,1) and (2,2) transition (see Sect.~\ref{ssec:line_prop}).
The kinetic temperature $T_\mathrm{{K}}$ was then extrapolated from $T_\mathrm{{rot}}$ using the empirical method outlined in \citet{Tafalla+04}. This relation gives results accurate to a $5\%$ level for temperatures $\lesssim20\usk\kelvin$.
This procedure was used to derive the gas temperature both from the spectra extracted from an area equal to that of the beam around the peak of the ammonia emission, and from those averaged over the whole area of NH$_3$(1,1) emission.
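For reference, a minimal Python sketch of the standard two-level expression \citep[cf.][]{Busquet+09} and of the \citet{Tafalla+04} conversion, with purely illustrative input values, is:
\begin{verbatim}
import numpy as np

def t_rot(r, tau):
    """Rotation temperature (K) from the two-level approximation
    (cf. Busquet et al. 2009); r = T_B(2,2)/T_B(1,1), tau = main-line
    opacity of the (1,1) transition."""
    return -41.5 / np.log(-0.282/tau
                          * np.log(1.0 - r*(1.0 - np.exp(-tau))))

def t_kin(tr):
    """Kinetic temperature (K) from the Tafalla et al. (2004) relation."""
    return tr / (1.0 - tr/42.0 * np.log(1.0 + 1.8*np.exp(-21.5/tr)))

# e.g. r = 0.35, tau = 1.5  ->  T_rot ~ 14.7 K,  T_K ~ 16.8 K
\end{verbatim}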
In order to also have an estimate of the uncertainty, the method to derive $T_\mathrm{{rot}}$, $T_\mathrm{{K}}$ and ammonia column density $N(\mathrm{NH_3})$ was implemented in JAGS\footnote{\url{http://mcmc-jags.sourceforge.net/}} (Just Another Gibbs Sampler). JAGS is a program for the analysis of Bayesian models, based on Markov Chain Monte Carlo simulations. It computes the \textit{posterior} probability distribution, summarizing our knowledge of the quantities considered, given a user-defined model (i.e. the equations and the assumptions of Gaussianity for the quantities directly derived from the fit in our case), the data and our prior knowledge of the quantities involved \citep{Andreon11}. This program was used to derive $T_\mathrm{{rot}}$, $T_\mathrm{{K}}$ and $N(\mathrm{NH_3})$ and their uncertainty, propagating the Gaussian uncertainty of the parameters of the fit, as given by CLASS. Constant \textit{priors} were used on these parameters, i.e. $T\tau$ and $\tau$. To check the dependency of the results on the choice of the \textit{prior}, we also used a Gaussian \textit{prior} with a large $\sigma$. The results show that the derived parameters are virtually independent of the \textit{prior} choice.
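The simplest flavour of this propagation can be mimicked with a direct Monte Carlo draw over the fitted parameters, reusing the helpers defined in the previous sketch (input values again illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50000                              # number of samples
# Gaussian draws over the CLASS fit results (illustrative values):
tau = rng.normal(1.5, 0.2, n)          # (1,1) main-line opacity
r   = rng.normal(0.35, 0.05, n)        # T_B(2,2)/T_B(1,1)

tk = t_kin(t_rot(r, tau))              # helpers from the sketch above
lo, med, hi = np.percentile(tk, [16, 50, 84])   # 68% interval
\end{verbatim}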
The temperatures and $N(\mathrm{NH_3})$ derived from the peak spectrum in this way are listed in Table~\ref{tab:line_prop}, with their uncertainties. On the other hand, Table~\ref{tab:line_prop_mean} shows the observed spectral parameters for the spectra averaged over the whole NH$_3$(1,1) emission, the rotation and kinetic temperatures and the average ammonia column density, with the respective uncertainties.
The values of $T_\mathrm{{K}}$ obtained both from the spectra extracted at the peak of the NH$_3$ emission and from those averaged over the whole area of emission lie in the range between $\sim10$ and $\sim28 \usk\kelvin$. $T_\mathrm{{rot}}$ and $T_\mathrm{{K}}$ calculated from the two sets of ammonia spectra agree very well in most cases.
Few exceptions exist, where the $68\%$ credibility intervals for the kinetic temperature do not overlap (3 cases), but with differences of $\sim5\usk\kelvin$ at most, possibly due to dilution of the NH$_3$(2,2) emission, averaged over the same area as the (1,1). Thus, in the following we use the $T_\mathrm{{K}}$ derived at the peak of the ammonia emission.
The gas temperatures derived from ammonia imply that the average $\Delta V$ of the ammonia lines (between $\sim0.7$ and $3.7\usk\kilo\metre\usk\second^{-1}$) is well in excess of the thermal broadening in such cold gas ($\sim0.15\usk\kilo\metre\usk\second^{-1}$ for $T_\mathrm{{K}}=20\usk\kelvin$), indicating that turbulence may play a major role in supporting the clumps.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/dist_mass.pdf}
\caption{Mass of the clumps as a function of distance. The black solid line indicates the typical mass sensitivity for the SEST images (see text). The sources with signs of active star formation are shown as red filled circles, those without as black open squares (see Sect.~\ref{sec:discussion}). Associated MSX emission is indicated as a black plus, and radio-continuum emission as a black cross. The white cross indicates 17195-3811c1 (see text).}
\label{fig:m_dist}
\end{figure}
\subsection{Ammonia abundances}
To determine characteristic ammonia abundances we used the spectra averaged over all the NH$_3$ emission.
We derived $N(\mathrm{H_2})$ from the average $1.2\usk\milli\metre$ emission, collected over the same area as the ammonia, assuming that the clump is homogeneous (see Sect.~\ref{ssec:mass_dens_size}), and divided the NH$_3$ column density by $N(\mathrm{H_2})$. The total range of abundances for all the sources in the sample lies between $\sim\pot{-9}$ and $\sim\pot{-7}$. For most of the objects the abundances are in the typical range of $\sim \pot{-8}-\pot{-7}$ \citep[cf.][and references therein]{Wienen+12}. We compared the NH$_3$ abundance derived in this way with the abundance derived at the peak of ammonia emission: we find that the latter quantity is typically slightly greater than the former, with ratios in the range $\sim 0.6-10$ and mean and median values of $2.5$ and $1.2$, respectively. A sub-sample of the clumps observed in ammonia was also observed in C$^{18}$O\ and N$_2$H$^+$ with APEX \citep{Fontani+12}. The carbon monoxide was found to be heavily depleted in these sources, showing that they are in an early phase of evolution. On the contrary, the observed ammonia abundances indicate that NH$_3$ is not depleted on a large scale in these clumps, in agreement with studies of clumps in low-mass star-forming regions, where NH$_3$ remains undepleted while CO is depleted \citep[e.g., ][]{Tafalla+02}.
\begin{table*}
\centering
\caption{Parameters of the ammonia spectra extracted from an area equal to that of the beam, around the NH$_3$ emission peak. The columns indicate the clump name, the peak flux of the (1,1) transition and the rms of the spectrum, the $V_\mathrm{{LSR}}$ of the emission, the $\Delta V$ of NH$_3$(1,1), the opacity of the (1,1) line and its uncertainty, the peak flux of the (2,2) transition and the rms of the spectrum, and the $\Delta V$ of NH$_3$(2,2), $T_\mathrm{{rot}}$, $T_\mathrm{{K}}$ and ammonia column density, with their uncertainties, the near and far kinematic distance. The clumps above the horizontal line are those classified as star-forming, while the clumps below it are those classified as quiescent (see Sect.~\ref{sec:discussion}).}
\label{tab:line_prop}
\includegraphics[width=0.8\textwidth]{./figures/land_tables/tab_line_prop-crop.pdf}
\end{table*}
\subsection{Clump masses, diameters and gas densities}\label{ssec:mass_dens_size}
Determining accurate masses for the clumps is crucial to determine the evolutionary phase of the clump from a mass-luminosity plot \citep{Molinari+08}, and to see if the clump is massive enough to form high-mass stars.
With our temperature determination (assuming that the gas, dust and kinetic temperatures $T_g$, $T_\mathrm{{d}}$ and $T_\mathrm{{K}}$ are equal) we are able to compute more accurate masses than those listed in \citet{Beltran+06}, which were derived assuming $T_d=30\usk\kelvin$. The clump masses are calculated from the integrated $1.2\usk\milli\metre$ ($250\,\mathrm{GHz}$) flux through \citep{Hildebrand83}:
\begin{equation}
\mathrm{M_{gas}} = \gamma \frac{\mathrm{S_{250}} D^2}{\kappa_{250} \mathrm{B_{250}}(T_\mathrm{{d}})},
\label{eq:dustmass}
\end{equation}
where $\mathrm{S_{250}}$ is the total flux density at $250\usk\mathrm{GHz}$, $D$ is the distance, $\gamma$ is the gas-to-dust ratio, $\mathrm{B_{250}}(T_\mathrm{{d}})$ is the emission of a black body with temperature equal to $T_\mathrm{{d}}$ at $250\usk\mathrm{GHz}$, and $\kappa_{250}\equiv\kappa_0(250\usk\giga\hertz/\nu_0[\giga\hertz])^\beta$ is the dust opacity per unit mass at the indicated frequency. We used $\kappa_0=0.8\usk\centi \metre^2\usk\gram^{-1}$ at $\nu_0=230.6\usk\giga\hertz$, as recommended by \citet{OssenkopfHenning94}.
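A minimal numerical sketch of Eq.~(\ref{eq:dustmass}) in Python, assuming a gas-to-dust mass ratio $\gamma=100$ (a common choice, not specified above), could read:
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10      # cgs constants

def planck(nu, T):
    """Planck function B_nu(T), erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0*h*nu**3/c**2 / np.expm1(h*nu/(k*T))

def dust_mass(S_jy, D_pc, T_d, beta=2.0, gamma=100.0):
    """Gas mass (Msun) from Eq. (1); kappa_0 = 0.8 cm^2 g^-1 at
    230.6 GHz. gamma = 100 is an assumed gas-to-dust mass ratio."""
    nu = 250e9
    kappa = 0.8 * (nu/230.6e9)**beta          # cm^2 g^-1 (of dust)
    S = S_jy * 1e-23                          # Jy -> cgs
    D = D_pc * 3.086e18                       # pc -> cm
    return gamma * S * D**2 / (kappa * planck(nu, T_d)) / 1.989e33

# e.g. dust_mass(1.0, 3000.0, 15.0) -> ~240 Msun
\end{verbatim}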
The index $\beta$ was derived from the modified black-body fit to the spectral energy distribution of the clumps, using only the SEST and Hi-GAL fluxes (see Sect.~\ref{ssec:sed}), where the data were available, otherwise we chose $\beta=2$, as in \citet{Beltran+06}.
The masses and their uncertainties are again estimated with JAGS, taking into account the probability distribution of $T_\mathrm{{K}}$, as derived from the ammonia observations, the uncertainty of the integrated $1.2$-mm flux (determined with standard techniques from the flux density rms in the images) and a $15\%$ calibration uncertainty.
We find systematically higher masses than \citet{Beltran+06}, because the temperatures are always lower than $30\usk\kelvin$. The masses of the clumps tend to increase with the distance (see Fig.~\ref{fig:m_dist}), as a result of the fact that nearby high-mass clumps are rare and that at large distances one cannot always separate individual clumps. In Fig.~\ref{fig:m_dist} we show the minimum detectable mass from the $1.2\usk\milli\metre$ maps, calculated from Eq.~\ref{eq:dustmass}, with a typical $3\sigma$ flux density of $100\usk$mJy/beam and a $T_\mathrm{{d}}=15\usk\kelvin$.
In this work we adopted the mass computed within the FWHM contour, in order to consider only the inner regions of the clump, excluding the external envelope (see Sect.~\ref{sec:discussion}), and because the measured diameter depends on the signal-to-noise ratio. Therefore when we generically speak of the mass we refer to masses computed within the FWHM contour. In this way the mass could be underestimated by a factor of 2, if the source is Gaussian and the envelope contribution is negligible. Our mass estimates are thus conservative, and the possible variation is indicated in Figs.~\ref{fig:m-r_diagram} and \ref{fig:m-l_plot} for comparison. The masses are listed in Table~\ref{tab:fwhm}. For completeness, masses and densities computed within the $3\sigma$ contour are shown in Table~\ref{tab:3s}.
Angular diameters were derived from $1.2\usk\milli\metre$ maps. The beam-corrected angular diameters $\theta$ of the clumps at FWHM level are estimated assuming that the sources are Gaussian, using the relation $\theta=\sqrt{\mathrm{FWHP^2}-\mathrm{HPBW^2}}$, with $\mathrm{FWHP}=2\sqrt{\mathrm{A}/\pi}$, where $A$ is the area within the contour at half peak intensity, and $\mathrm{HPBW}$ is the SEST half-power beam width. If the angular size derived in this way is less than half the beam size, the source is deemed unresolved and we set an upper limit to its size equal to half the HPBW \citep{Wilson09ToRA}.
The linear diameters at FWHM level range from $\sim0.2$ to $\sim2.0\usk\mathrm{pc}$ (Table~\ref{tab:fwhm}).
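A sketch of the beam-correction step described above, in Python:
\begin{verbatim}
import numpy as np

def deconvolved_diameter(area_arcsec2, hpbw=24.0):
    """Beam-corrected FWHM angular diameter (arcsec) of a Gaussian
    source; returns HPBW/2 as an upper limit if unresolved."""
    fwhp = 2.0 * np.sqrt(area_arcsec2 / np.pi)
    theta2 = fwhp**2 - hpbw**2
    theta = np.sqrt(theta2) if theta2 > 0.0 else 0.0
    return hpbw/2.0 if theta < hpbw/2.0 else theta

# e.g. deconvolved_diameter(700.0) -> ~17.8 arcsec
\end{verbatim}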
\citet{KauffmannPillai10} derived an empirical relation between mass and radius to separate the clumps that are able to form high-mass stars from those that are not. With the mass now much better constrained, we can use this relation to test if our clumps have the potential of forming massive stars.
In Fig.~\ref{fig:m-r_diagram} we show the mass and size of our clumps, compared to the \citet{KauffmannPillai10} relation, scaled to the same dust opacity as used in the present work. From the figure we observe that the vast majority of our sources lie above the \citet{KauffmannPillai10} relation, indicated as a dashed line, corroborating the idea that the whole sample consists of similar objects and suggesting that virtually all of them could form massive stars. This makes our sample a good one to study the evolution of massive clumps potentially able to form massive stars.
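For reference, the threshold test reduces to one line; a Python sketch:
\begin{verbatim}
def can_form_massive_stars(M_msun, r_pc):
    """Kauffmann & Pillai (2010) threshold, rescaled to our opacity:
    clumps with M > 870 (r/pc)^1.33 can form massive stars."""
    return M_msun > 870.0 * r_pc**1.33

# e.g. can_form_massive_stars(500.0, 0.3) -> True
\end{verbatim}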
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/mass_radius.pdf}
\caption{Mass-radius plot; $r_\mathrm{FWHM}$ is the beam-corrected radius at FWHM level. The symbols are the same as in Fig.~\ref{fig:m_dist}.
The boundary for massive-star formation derived by \citet{KauffmannPillai10} ($M[\msuntab]=870(r/\usk\mathrm{pc})^{1.33}$, rescaled to match our dust opacity) is indicated as a dashed line. Sources above this line are able to form massive stars. The uncertainty in mass is shown for each point. A variation of a factor of 2 in mass (see text) is indicated in the bottom right corner.}
\label{fig:m-r_diagram}
\end{figure}
The column- and volume-densities of molecular hydrogen were calculated using the mass and the diameters, assuming spherical symmetry and correcting for helium \citep[$\sim8\%$ in number; e.g.,][]{Allen73}.
These two quantities are found to lie between $\sim0.1-6\times\pot{23}\usk\centi \metre^{-2}$ and $0.2-20\times\pot{5}\usk\centi \metre^{-3}$, respectively: values like these are typical of IRDCs \citep[e.g., ][]{Egan+98, Carey+98, Carey+00, Pillai+06}.
The surface density $\Sigma$ was determined by averaging the mass over the deconvolved FWHM area of emission at $1.2\usk\milli\metre$.
$\Sigma$ for the clumps in this sample is found to lie between $0.03$ and $1.5\usk\gram\usk\centi \metre^{-2}$.
The mass, volume-, column-, and surface densities with their $68\%$ credibility intervals for all the clumps are listed in Table~\ref{tab:fwhm}.
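The density estimates reduce to simple geometry; a Python sketch for a homogeneous sphere (assuming a mean molecular weight per H$_2$ of $\mu=2.8$, a common choice close to the $8\%$ He abundance quoted above) is:
\begin{verbatim}
import numpy as np

def clump_densities(M_msun, theta_arcsec, D_pc, mu=2.8):
    """Mean n(H2) [cm^-3], N(H2) [cm^-2] and Sigma [g cm^-2] for a
    homogeneous sphere of FWHM angular diameter theta at distance D.
    mu = 2.8 is an assumed mean molecular weight per H2 (He included)."""
    Msun, mH, pc = 1.989e33, 1.6726e-24, 3.086e18
    R = 0.5 * np.radians(theta_arcsec/3600.0) * D_pc * pc   # radius, cm
    M = M_msun * Msun
    n     = M / (4.0/3.0*np.pi*R**3) / (mu*mH)
    N     = M / (np.pi*R**2) / (mu*mH)
    Sigma = M / (np.pi*R**2)
    return n, N, Sigma

# e.g. clump_densities(500., 20., 3000.) -> (~5.6e5, ~3.3e23, ~1.6)
\end{verbatim}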
\subsection{Spectral energy distribution} \label{ssec:sed}
Important insights in the evolutionary state of a source can be gained through its $L/M$ ratio, as proposed by \citet{Saraceno+96} for the low-mass regime and by \citet{Molinari+08} for the high-mass regime. We constructed the Spectral Energy Distribution (SED) for the sources in our sample complementing the SEST data with Herschel\footnote{\citealt{Pilbratt+10}. Here we use the PACS \citep{Poglitsch+10} and SPIRE \citep{Griffin+10} instruments.}/Hi-GAL \citep[$500\usk\micro\metre$, $350\usk\micro\metre$, $250\usk\micro\metre$, $160\usk\micro\metre$, $70\usk\micro\metre$; ][]{Molinari+10}, MIPSGAL \citep[$24\usk\micro\metre$; ][]{Carey+09}, MSX \citep[band A $8.28\usk\micro\metre$, C $12.13 \usk\micro\metre$, D $14.65 \usk\micro\metre$, E $21.30 \usk\micro\metre$; ][]{Price+01} and GLIMPSE \citep[$8.0\usk\micro\metre$, $5.8\usk\micro\metre$, $4.5\usk\micro\metre$, $3.6\usk\micro\metre$; ][]{Benjamin+03, Churchwell+09} data. We smoothed all the images to a common resolution of $25^{\prime\prime}$ (the approximate resolution of the $350 \usk\micro\metre$ image) except that at $500\usk\micro\metre$, which has a resolution of $\sim36^{\prime\prime}$. Two different polygons were defined for each wavelength: one to derive the flux density of the source, and the other for the background in the region.
The bolometric luminosity was calculated by integrating the fluxes over frequency, interpolating linearly between the measured fluxes at different frequencies in logarithmic space. The uncertainty was estimated by repeating the integration using the lower and upper limits of the $68\%$ credibility intervals of the fluxes.
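A Python sketch of this piecewise power-law integration (i.e. linear interpolation in log-log space) is:
\begin{verbatim}
import numpy as np

def bolometric_luminosity(nu_hz, S_jy, D_pc):
    """L (Lsun) from 4 pi D^2 * int S_nu dnu, treating the SED as
    piecewise power laws (linear interpolation in log-log space)."""
    order = np.argsort(nu_hz)
    nu = np.asarray(nu_hz, float)[order]
    S  = np.asarray(S_jy, float)[order] * 1e-23          # Jy -> cgs
    alpha = np.log(S[1:]/S[:-1]) / np.log(nu[1:]/nu[:-1])
    flat = np.isclose(alpha, -1.0)                       # S_nu ~ 1/nu
    a1 = np.where(flat, np.nan, alpha + 1.0)
    seg = (S[:-1]*nu[:-1]/a1
           * ((nu[1:]/nu[:-1])**a1 - 1.0))               # power-law piece
    seg[flat] = (S[:-1]*nu[:-1]*np.log(nu[1:]/nu[:-1]))[flat]
    D = D_pc * 3.086e18
    return 4.0*np.pi*D**2 * seg.sum() / 3.828e33
\end{verbatim}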
The fluxes at the longest wavelengths in the SED can be used to infer the typical properties of the dusty envelope. With this in mind we fitted the SED fluxes, down to $70\usk\micro\metre$, with a modified black body. The point at $70\usk\micro\metre$ was included in the fit to better constrain the temperature in the case where the SED has its peak shortward of $160\usk\micro\metre$. The inclusion of the flux at $70\usk\micro\metre$ implies that we will measure a higher $T_\mathrm{{d}}$, because we are tracing the warm dust layers near the embedded (proto-)star. The fit procedure is described in Appendix~\ref{sec:app_SEDfit}. The results of the modified black-body fit, the luminosity derived integrating the SED from $1.2\usk\milli\metre$ down to $3.6\usk\micro\metre$, and the uncertainties in these quantities for each clump are listed in Table~\ref{tab:greybody}. In the table we list only the clumps with data in all five Herschel/Hi-GAL bands. Figures~\ref{fig:sed_sfs} and \ref{fig:sed_qs} show the SED with the fit results. 17195-3811c1 is on the edge of the Herschel/HiGAL $160/70\usk\micro\metre$ maps, and is therefore not included here. However, we use the lower limits on the fluxes at these wavelengths in the fit of this source with the Robitaille models (see below).
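A minimal sketch of such a fit with \texttt{scipy}, using an optically thin modified black body and synthetic fluxes for illustration (the actual procedure is described in Appendix~\ref{sec:app_SEDfit}), is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-27, 1.381e-16, 2.998e10

def planck(nu, T):
    return 2.0*h*nu**3/c**2 / np.expm1(h*nu/(k*T))

def greybody(nu, logA, T, beta):
    """Optically thin modified black body, S_nu = A (nu/nu0)^beta
    B_nu(T), with nu0 = 230.6 GHz; A absorbs mass, distance and
    opacity."""
    return 10.0**logA * (nu/230.6e9)**beta * planck(nu, T)

# SEST + Hi-GAL bands, 1.2 mm to 70 um; synthetic fluxes for illustration
nu = np.array([250e9, 600e9, 857e9, 1199e9, 1874e9, 4283e9])
S  = greybody(nu, -10.3, 18.0, 1.9)
(logA, T_d, beta), _ = curve_fit(greybody, nu, S, p0=[-10.0, 15.0, 2.0])
\end{verbatim}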
\begin{table*}[tbp]
\centering
\caption{Parameters derived from the modified black-body fit of the SED down to $70\usk\micro\metre$, and luminosity of the clumps (integrated from $1.2\usk\milli\metre$ and $3.6\usk\micro\metre$). The clumps above the horizontal line are those classified as star-forming, while the clumps below it are those classified as quiescent (see Sect.~\ref{sec:discussion}). The columns show the dust temperature ($T_\mathrm{{d}}$), the mass ($M$) of the gas and the dust emissivity index $\beta$ from the modified black-body fit, and the luminosity of the clumps derived integrating the SED, with their uncertainties.}
\label{tab:greybody}
\begin{tabular}{lrrrrrrrr}
\toprule
Clump & $T_\mathrm{{d}}$ & $68\%$ int. & $M$ & $68\%$ int. & $\beta$ & $68\%$ int. & $L$ & $68\%$ int. \\
\midrule
& {\scriptsize (K)} & {\scriptsize (K)} & {\scriptsize ($\pot{2}\times\msuntab$)} & {\scriptsize ($\pot{2}\times\msuntab$)} & & & {\scriptsize ($\pot{2}\times\mathrm{L_\odot}$)} & {\scriptsize ($\pot{2}\times\mathrm{L_\odot}$)} \\
\midrule
13560-6133c1 &$ 24.3$&$ 22.8- 25.5$&$ 6.2 $&$ 5.3 - 7.2 $&$ 1.4$&$ 1.2- 1.5$&$ 30.1$&$ 25.6- 34.3$ \\
13563-6109c1 &$ 22.0$&$ 20.8- 23.3$&$ 2.4 $&$ 2.0 - 2.9 $&$ 1.8$&$ 1.6- 2.0$&$ 27.3$&$ 24.2- 30.4$ \\
15072-5855c1 &$ 26.5$&$ 25.0- 28.0$&$ 0.5 $&$ 0.4 - 0.6 $&$ 1.7$&$ 1.5- 1.9$&$ 11.4$&$ 10.2- 12.6$ \\
15278-5620c1 &$ 27.8$&$ 26.0- 29.3$&$ 6.0 $&$ 5.1 - 7.0 $&$ 1.8$&$ 1.6- 1.9$&$ 266.2$&$ 241.1- 291.1$ \\
15278-5620c2 &$ 11.2$&$ 9.8- 12.3$&$ 4.4 $&$ 3.5 - 5.3 $&$ 2.1$&$ 1.7- 2.4$&$ 1.4$&$ 0.8- 1.8$ \\
15470-5419c1 &$ 16.6$&$ 15.8- 17.3$&$ 4.0 $&$ 3.5 - 4.8 $&$ 1.5$&$ 1.3- 1.7$&$ 4.0$&$ 3.2- 4.5$ \\
15470-5419c3 &$ 19.2$&$ 18.3- 20.0$&$ 3.6 $&$ 3.1 - 4.3 $&$ 1.6$&$ 1.4- 1.8$&$ 7.9$&$ 6.7- 8.9$ \\
15557-5215c1 &$ 23.8$&$ 22.3- 25.0$&$ 5.3 $&$ 4.5 - 6.3 $&$ 1.8$&$ 1.6- 2.0$&$ 76.2$&$ 68.0- 84.4$ \\
15557-5215c2 &$ 15.6$&$ 14.8- 16.8$&$ 3.6 $&$ 2.9 - 4.5 $&$ 2.1$&$ 1.8- 2.3$&$ 7.4$&$ 5.9- 8.5$ \\
15579-5303c1 &$ 24.5$&$ 23.0- 25.5$&$ 5.3 $&$ 4.6 - 6.1 $&$ 1.9$&$ 1.7- 2.0$&$ 84.8$&$ 75.9- 93.6$ \\
16061-5048c1 &$ 19.9$&$ 19.0- 20.8$&$ 6.0 $&$ 5.1 - 7.0 $&$ 2.0$&$ 1.8- 2.1$&$ 33.8$&$ 29.7- 37.6$ \\
16061-5048c2 &$ 21.3$&$ 20.3- 22.3$&$ 5.9 $&$ 5.1 - 7.0 $&$ 2.2$&$ 2.0- 2.3$&$ 86.3$&$ 77.0- 95.5$ \\
16061-5048c4 &$ 9.5$&$ 8.8- 10.3$&$ 8.5 $&$ 7.0 - 10.2 $&$ 2.5$&$ 2.2- 2.8$&$ 2.1$&$ 1.4- 2.6$ \\
16093-5015c1 &$ 20.8$&$ 19.8- 21.8$&$ 20.9 $&$ 17.8 - 24.3 $&$ 1.6$&$ 1.4- 1.8$&$ 90.0$&$ 77.5- 101.5$ \\
16093-5128c1 &$ 23.8$&$ 22.5- 25.0$&$ 4.6 $&$ 4.0 - 5.3 $&$ 2.2$&$ 2.0- 2.4$&$ 256.9$&$ 234.4- 279.3$ \\
16093-5128c8 &$ 13.8$&$ 12.8- 15.3$&$ 1.8 $&$ 1.5 - 2.2 $&$ 2.3$&$ 2.0- 2.6$&$ 3.5$&$ 2.4- 4.2$ \\
16254-4844c1 &$ 17.6$&$ 16.8- 18.3$&$ 2.6 $&$ 2.2 - 3.1 $&$ 1.6$&$ 1.5- 1.8$&$ 4.1$&$ 3.4- 4.6$ \\
16573-4214c2 &$ 16.0$&$ 15.3- 16.5$&$ 2.1 $&$ 1.8 - 2.4 $&$ 1.8$&$ 1.6- 1.9$&$ 2.6$&$ 2.2- 3.0$ \\
17040-3959c1 &$ 17.2$&$ 16.5- 17.8$&$ 0.5 $&$ 0.4 - 0.6 $&$ 2.3$&$ 2.2- 2.5$&$ 2.5$&$ 2.2- 2.8$ \\
17355-3241c1 &$ 23.8$&$ 22.5- 24.8$&$ 0.35 $&$ 0.3 - 0.4 $&$ 2.1$&$ 1.9- 2.2$&$ 19.0$&$ 17.1- 21.0$ \\
\midrule
14166-6118c2 &$ 18.1$&$ 15.3- 20.8$&$ 0.5 $&$ 0.4 - 0.6 $&$ 1.7$&$ 1.2- 2.0$&$ 1.3$&$ 0.8- 1.7$ \\
14183-6050c3 &$ 16.1$&$ 15.0- 17.0$&$ 1.0 $&$ 0.7 - 1.2 $&$ 1.9$&$ 1.6- 2.3$&$ 2.3$&$ 1.5- 2.7$ \\
15038-5828c1 &$ 12.2$&$ 11.3- 13.3$&$ 5.4 $&$ 4.5 - 6.5 $&$ 2.1$&$ 1.8- 2.4$&$ 3.3$&$ 1.9- 3.8$ \\
15470-5419c4 &$ 11.1$&$ 10.3- 12.0$&$ 6.2 $&$ 5.1 - 7.5 $&$ 2.4$&$ 2.0- 2.6$&$ 2.9$&$ 1.9- 3.7$ \\
15557-5215c3 &$ 9.6$&$ 8.5- 10.5$&$ 3.6 $&$ 2.9 - 4.5 $&$ 2.9$&$ 2.5- 3.3$&$ 1.8$&$ 1.1- 2.6$ \\
15579-5303c3 &$ 15.6$&$ 14.5- 17.0$&$ 3.4 $&$ 2.7 - 4.3 $&$ 1.6$&$ 1.3- 1.9$&$ 3.9$&$ 2.1- 4.8$ \\
16093-5128c2 &$ 10.4$&$ 9.0- 11.5$&$ 5.1 $&$ 4.0 - 6.1 $&$ 2.7$&$ 2.3- 3.0$&$ 3.2$&$ 1.7- 4.0$ \\
16164-4929c3 &$ 9.2$&$ 8.3- 10.3$&$ 4.7 $&$ 3.8 - 5.7 $&$ 2.6$&$ 2.2- 2.9$&$ 1.0$&$ 0.6- 1.7$ \\
16164-4929c6 &$ 11.0$&$ 10.3- 12.0$&$ 1.1 $&$ 0.9 - 1.4 $&$ 2.4$&$ 2.1- 2.7$&$ 0.7$&$ 0.4- 1.0$ \\
16164-4837c2 &$ 8.1$&$ 7.5- 8.8$&$ 3.2 $&$ 2.7 - 3.9 $&$ 3.0$&$ 2.7- 3.4$&$ 0.8$&$ 0.5- 1.0$ \\
16435-4515c3 &$ 10.5$&$ 9.8- 11.3$&$ 4.3 $&$ 3.5 - 5.1 $&$ 2.7$&$ 2.4- 2.9$&$ 2.9$&$ 1.8- 3.6$ \\
16482-4443c2 &$ 8.7$&$ 8.0- 9.5$&$ 1.9 $&$ 1.6 - 2.3 $&$ 3.1$&$ 2.8- 3.5$&$ 0.8$&$ 0.5- 1.0$ \\
\bottomrule
\end{tabular}
\end{table*}
From Fig.~\ref{fig:tk_td} we can see that the characteristic $T_\mathrm{{d}}$ obtained from the fit of a modified black body to the SED down to $70\usk\micro\metre$ is usually in good agreement with $T_\mathrm{{K}}$ derived from the ammonia observations. Seven sources have $|T_\mathrm{{K}}-T_\mathrm{{d}}|\ge5\usk\kelvin$ and uncertainties not large enough to explain this difference, implying a statistically significant discrepancy. Four of these objects have $T_\mathrm{{d}}>T_\mathrm{{K}}$: these are also the cases where $T_\mathrm{{d}}$ is high, always above $20\usk\kelvin$. The discrepancy may arise from a combination of different causes: the fact that the ratio of the two lowest transitions of ammonia is optimal to derive temperatures only up to $20-25\usk\kelvin$, that ammonia and dust emission are probing different regions of the clump, and that the strong emission in these clumps from warm dust, heated by the central star and visible at $70\usk\micro\metre$, is biasing the modified black-body fit towards higher $T_\mathrm{{d}}$.
The gas masses obtained from the modified black-body fit usually agree, within the uncertainties, with those derived simply from the $1.2\usk\milli\metre$ continuum, and lie between the mass within the $3\sigma$ and that within the FWHM contour (Fig.~\ref{fig:msed_msest}). As not all sources have a complete SED, and considering the reasonable agreement between masses determined from the $1.2\usk\milli\metre$ integrated flux and from the modified black-body fit to the SED, we decided to use the former in the following analysis, so that we could also assign a size to the source consistently.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/tk_td.pdf}
\caption{Comparison between the kinetic temperature ($T_\mathrm{{K}}$) derived from ammonia and the dust temperature ($T_\mathrm{{d}}$) from the SED-fit. The dashed line indicates equal temperatures and the yellow-shaded region shows a difference of $\pm 5 \usk\kelvin$ between the two temperatures.}
\label{fig:tk_td}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/m12_msed.pdf}
\caption{Comparison between the mass derived from the $1.2\usk\milli\metre$ continuum and from the SED fit. The uncertainties in $M_\mathrm{SED}$ are indicated. The bars for the $1.2\usk\milli\metre$ continuum range from the lower limit of the mass within the FWHM contour to the upper limit of the $3\sigma$ contour. The dashed line indicates equal masses and the yellow-shaded region shows a difference of a factor of two.}
\label{fig:msed_msest}
\end{figure}
\smallskip
\citet{Robitaille+06} developed a code to compute the SED of axisymmetric YSOs. This code considers a central YSO with a rotationally flattened infalling envelope, the presence of bipolar cavities and a flared accretion disk,
making use of a Monte Carlo radiation transfer algorithm to compute the flux of the object at wavelengths from the mm- to the near-IR regimes.
A vast range of stellar masses and evolutionary phases is covered, from the earliest stages of strong infall to the late phase where only the circumstellar disk remains around the central object and the envelope is completely dispersed.
Following the discussion in \citet{Molinari+08}, we use these models only for clumps that clearly contain an embedded source, and with detectable fluxes shortward of $70\usk\micro\metre$ in the smoothed images. Otherwise, only the modified black-body fit is done. To fit the SED with the Robitaille models, we make use of the online SED fitting tool\footnote{\url{http://caravan.astro.wisc.edu/protostars/}} \citep{Robitaille+07}.
In the tool, we allowed the foreground interstellar extinction to range between $1$ and $2\usk\mathrm{mag}\usk\kilo\mathrm{pc}^{-1}$ \citep[e.g., ][]{Allen73, Lynga82, Scheffler82}.
The stellar masses obtained from the fit range between $\sim5$ and $\sim30\msun$, corresponding to spectral types approximately B7-O8. The luminosities derived vary between $\sim600$ and $65000\usk\mathrm{L_\odot}$, agreeing with those derived by simply integrating the SED, interpolating linearly in the log-log space. Our estimate of $L$ tends to be lower, as the linear interpolation in the log-log space gives a lower limit for the luminosity and because of the model assumptions. However, for consistency, in the following we will use our estimate of the luminosity for all sources.
The envelope mass from the fit of the Robitaille models is greater than that derived either from the modified black-body fit or from the $1.2\usk\milli\metre$-continuum.
In this regard, \citet{Offner+12} compared synthetic SEDs of deeply embedded proto-stars, derived from simulated observations obtained with a 3D radiative transfer code for dust emission, for a vast range of parameters, with the best fit obtained from the standard grid of Robitaille models. These authors showed that usually the fit recovers the true luminosity and stellar mass, although with large uncertainties, but systematically overestimates the mass of the envelope, mainly due to the assumption of a different dust model. The difference is more pronounced in the mm-regime, strongly influencing the mass determination. Our assumption of dust opacity is similar to that used by \citet{Offner+12} in the mm-regime, explaining the discrepancy between our mass estimates and the envelope mass from the fit with the online SED fitting tool.
A summary of the results of the fits with the Robitaille models is shown in Table~\ref{tab:robitaille}.
\begin{table}
\centering
\caption{Summary of the properties derived from the fit of the Robitaille models to the SED of the objects with significant mid-IR emission. Our estimated range in $L$ is shown for comparison. The ranges in $M_*$, $L_{Rob}$ and $M_\mathrm{env}$ are those spanned by the best ten models. The masses are derived with a different dust model, and thus deviate from our estimate.}
\tiny
\label{tab:robitaille}
\begin{tabular}{lcccc}
\toprule
Clump & $M_*$ & $L_{Rob}$ & $L$ & $M_\mathrm{env}$ \\
& ${\scriptsize (\msuntab)}$ & ${\scriptsize (\pot{2}\times\mathrm{L_\odot})}$ & ${\scriptsize (\pot{2}\times\mathrm{L_\odot})}$ & ${\scriptsize (\pot{2}\times\msuntab)}$ \\
\midrule
13560-6133c1 & $9.6-15.4$ & $25-43$ & $26-34$ & $15.0-36.0$ \\
13563-6109c1 & $9.7-13.8$ & $27-49$ & $24-30$ & $5.5-18.0$ \\
15072-5855c1 & $5.9-8.8$ & $6-28$ & $10-13$ & $1.6-15.0$ \\
15278-5620c1 & $14.7-22.9$ & $307-665$ & $241-291$ & $16.0-28.0$ \\
15557-5215c1 & $10.1-27.4$ & $37-234$ & $68-84$ & $5.2-23.0$ \\
15557-5215c2 & $6.0-9.6$ & $6-15$ & $6-9$ & $7.3-15.0$ \\
15579-5303c1 & $13.4-17.8$ & $149-334$ & $76-94$ & $14.0-34.0$ \\
16061-5048c1 & $12.5-18.5$ & $52-116$ & $30-38$ & $23.0-40.0$ \\
16061-5048c2 & $11.0-24.4$ & $83-781$ & $77-96$ & $16.0-21.0$ \\
16093-5015c1 & $12.8-21.9$ & $45-209$ & $78-102$ & $19.0-52.0$ \\
16093-5128c1 & $14.8-24.2$ & $104-638$ & $234-279$ & $9.8-40.0$ \\
17195-3811c1 & $9.3-12.9$ & $23-138$ & $-$ & $7.2-31.0$ \\
17355-3241c1 & $7.6-8.2$ & $21-34$ & $17-21$ & $2.2-2.5$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Stability and dynamics of the clumps}\label{ssec:dynamics}
To investigate the stability of the clumps, we performed the simplest virial analysis, without taking into account magnetic or rotational support.
To derive the virial mass we assumed a constant density profile, using Eq.~$(3)$ in \citet{MacLaren88}, which implies that $M_\mathrm{vir}[\msuntab]=210 \usk R\mathrm{[\mathrm{pc}]} \usk \Delta V\mathrm{[\kilo\metre\usk\second^{-1}]}^2$, where $R$ is the radius and $\Delta V$ is the FWHM of the line. In this way we obtain an upper limit for the virial mass. Power law radial profiles for the gas volume density in the clumps can reduce the virial mass: the steeper the profile, the lower the virial mass. A radial dependence like $n(\mathrm{H_2})\propto\left( r/r_0 \right)^{-2}$ reduces the virial mass by about a factor of $2$ \citep[cf. ][]{MacLaren88}. The virial parameter $\alpha=M_\mathrm{vir}/M$ is used as an indicator for gravitational stability; $\alpha<2$ implies that the clumps are gravitationally bound, and $\alpha=1$ indicates virial equilibrium. Because we assume homogeneous clumps to derive the virial mass, the values we obtain for the virial parameter $\alpha$ are upper limits. Figure~\ref{fig:virial} shows that for virtually all the clumps we find $\alpha\lesssim 1$, implying that they are dominated by gravity. In Sect.~\ref{ssec:alpha} we discuss in detail the observed values of $\alpha$.
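The computation of $\alpha$ is trivial; for reference:
\begin{verbatim}
def virial_parameter(R_pc, dv_kms, M_msun):
    """alpha = M_vir/M for a homogeneous sphere, with
    M_vir[Msun] = 210 R[pc] (Delta V[km/s])^2 (MacLaren et al. 1988)."""
    return 210.0 * R_pc * dv_kms**2 / M_msun

# e.g. virial_parameter(0.3, 2.0, 500.0) -> ~0.5 (gravitationally bound)
\end{verbatim}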
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/virial.pdf}
\caption{Virial parameter $\alpha=M_\mathrm{vir}/M$ as a function of $M$. The symbols are the same as in Fig.~\ref{fig:m_dist}. The dashed lines indicate $\alpha=2$ (clump gravitationally bound) and $\alpha=1$ (clump in virial equilibrium).}
\label{fig:virial}
\end{figure}
First order moment maps reveal the presence of velocity gradients, typically around $1-2\usk\kilo\metre\usk\second^{-1}\usk\mathrm{pc}^{-1}$. A more detailed analysis for the most interesting clumps is the subject of a forthcoming paper.
\subsection{Water maser emission}
Thirteen clumps in our sample show maser emission. The spectra were extracted from the data cubes at each position where emission was detected (see Fig.~\ref{fig:maser_spec}) within a polygon comparable to the beam dimensions (between $\sim 1^{\prime\prime}$ and $\sim 2^{\prime\prime}$), and imported in CLASS. The lines were then fitted with Gaussians.
Two sources have very strong lines, reaching $\sim100\usk\mathrm{Jy}$. We typically find multiple velocity components (up to 22) towards a single clump. A summary of the maser emission properties is shown in Table~\ref{tab:maser}. The water maser range of velocities usually straddles the systemic velocity of the clump, as shown in Fig.~\ref{fig:maser_v} (cf. \citealt{Brand+03}). The positions of the maser spots are indicated in Fig.~\ref{fig:sest+24mum} as white open squares. A comparison with \citet{Sanchez-Monge+13} shows that 2 sources detected in their study are not detected in our observations, while 2 targets that we detect were not detected in \citet{Sanchez-Monge+13} (cf. Table~\ref{tab:sf_signs}), as expected because of the well-known variability of water masers \citep[e.g., ][]{Felli+07}. All of these sources show other signs of active star formation.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/maser_v.pdf}
\caption{Range of velocity of the water maser \textit{vs.} $V_\mathrm{{LSR}}$ of the clump. The dashed line indicates $V_\mathrm{H_2O}=V_\mathrm{{LSR}}$, i.e. a maser velocity equal to the systemic velocity of the clump.}
\label{fig:maser_v}
\end{figure}
\begin{table*}[tbp]
\centering
\caption{Summary of the emission characteristics of H$_2$O masers. The columns show the clump name, the number of Gaussian components in the spectrum, the range of $V_\mathrm{{LSR}}$ over which we detect emission, the flux density peak, the integrated emission, the water maser luminosity, the $V_\mathrm{{LSR}}$ and $\Delta V$ of the strongest component (S.C.), and the offset of the maser spot with respect to the phase centre (Ph.C.). No correction was performed for the primary beam.}
\tiny
\label{tab:maser}
\begin{tabular}{lrrrrrrrr}
\toprule
Clump & Maser & $V_\mathrm{{LSR}}(\mathrm{min,max})$ & $F_\mathrm{peak}$ & $\int F d\nu$ & $L_\mathrm{H_2O}$ & $V_\mathrm{{LSR}}$ S.C. & $\Delta V$ S.C. & Offset (Ph.C.) \\
\midrule
& & {\scriptsize ($\kilo\metre\usk\second^{-1} $)}& {\scriptsize ($\mathrm{Jy}$)} & {\scriptsize ($\mathrm{Jy} \usk\kilo\metre\usk\second^{-1}$)} & {\scriptsize $\pot{-7}\times\mathrm{L_\odot}$} & {\scriptsize ($\kilo\metre\usk\second^{-1}$)} & {\scriptsize ($\kilo\metre\usk\second^{-1}$)} & {\scriptsize ($^{\prime\prime} $)} \\
\midrule
08589-4714c1 &$ 3$&$ 0.5; 14.8$& $1.67$ & $2.4$ & $1.3$ & $4.8$ & $1.2$ & $2,-10$ \\
13560-6133c1 &$ 22$&$ -78.2; 26.0$& $5.78$ & $37.4$ & $272.0$ & $-53.3$ & $1.1$ & $-15,35$ \\
15278-5620c1 &$ 6$&$ -68.2; -43.5$& $84.5$ & $85.4$ & $228.9$ & $-47.5$ & $0.8$ & $10,79$ \\
15470-5419c1 &$ 1$&$ -64.6; -61.2$& $1.95$ & $2.1$ & $8.2$ & $-63.0$ & $1.0$ & $26,-8$ \\
15470-5419c3 &$ 4$&$ -63.7; -51.9$& $3.93$ & $9.3$ & $36.3$ & $-60.0$ & $1.1$ & $-2,10$ \\
&$ 2$&$ -75.5; -64.1$& $0.57$ & $1.2$ & $4.7$ & $-66.4$ & $1.3$ & $-31,-50$ \\
15557-5215c1 &$ 2$&$ -68.4; -52.4$& $3.54$ & $7.5$ & $33.7$ & $-57.2$ & $1.6$ & $42,-7$ \\
&$ 2$&$ -67.9; -60.4$& $9.03$ & $11.3$ & $50.7$ & $-65.4$ & $1.0$ & $6,-57$ \\
15557-5215c2 &$ 1$&$ -71.0; -67.8$& $2.56$ & $2.3$ & $9.4$ & $-69.4$ & $0.8$ & $32,33$ \\
&$ 1$&$ -69.4; -67.1$& $0.18$ & $0.15$ & $0.7$ & $-67.1$ & $0.8$ & $7,4$ \\
15579-5303c1 &$ 9$&$ -63.1; -27.7$& $75.4$ & $120.4$& $643.3$ & $-47.7$ & $0.9$ & $-53,13$ \\
&$ 1$&$ -38.5; -33.7$& $0.59$ & $0.67$ & $3.6$ & $-35.8$ & $1.0$ & $-54,15$ \\
16061-5048c1 &$ 6$&$ -78.7; -50.9$& $19.6$ & $23.3$ & $104.6$ & $-67.0$ & $0.8$ & $-3,2$ \\
&$ 2$&$ -71.2; -68.1$& $0.47$ & $0.51$ & $2.3$ & $-69.0$ & $0.8$ & $-9,4$ \\
16061-5048c2 &$ 2$&$ -79.5; -69.6$& $1.50$ & $2.7$ & $12.1$ & $-77.7$ & $1.6$ & $45,-27$ \\
&$ 3$&$-135.8; -63.5$& $0.54$ & $1.8$ & $8.1$ & $-65.1$ & $0.9$ & $35,-20$ \\
&$ 3$&$ -79.6; -52.0$& $0.46$ & $0.75$ & $3.4$ & $-54.8$ & $1.8$ & $42,-24$ \\
16061-5048c4 &$ 3$&$ -35.6; -25.1$& $0.70$ & $2.1$ & $6.3$ & $-33.2$ & $1.5$ & $-5,-2$ \\
16573-4214c2 &$ 5$&$ -39.6; -20.2$& $1.30$ & $2.5$ & $3.9$ & $-29.9$ & $0.7$ & $-$\tablefootmark{a} \\
17195-3811 &$ 5$&$ -39.7; -29.5$& $8.39$ & $10.4$ & $31.3$ & $-32.2$ & $1.0$ & $-$\tablefootmark{a} \\
\bottomrule
\end{tabular}
\tablefoot{
\tablefoottext{a}{These sources have poor uv-coverage, thus no position for the maser spots is given.}
}
\end{table*}
\section{Discussion} \label{sec:discussion}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figures/sf_signs_summ-crop.pdf}
\caption{Summary of specific star formation indicators in the clumps. The labels correspond to green fuzzies (GF), mid-IR emission (IR), H$_2$O maser (M), and radio continuum emission (RC) (see Sect.~\ref{sec:discussion} for details). For example, the small central circle shows the combination IR+GF+RC, while the ``triangular'' area with a darker shade surrounding the small circle represents the presence of all four signposts of star formation.}
\label{fig:summ_sf}
\end{figure}
In order to investigate how the clump properties depend on their evolutionary state, the sources were first separated into two sub-samples, according to the presence or absence of signposts of active star formation. In particular, we considered the presence of water maser(s), ``green fuzzies'', $24\usk\mu\mathrm{m}$ and radio-continuum emission.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.75\columnwidth]{./figures/lm_plot-crop.pdf}
\caption{Mass-Luminosity plot for the sources in our sample with Hi-GAL observations. The mass is computed within the FWHM contour of $1.2\usk\milli\metre$ emission. The symbols are the same as in Fig.~\ref{fig:m_dist}. The black solid line indicates the ZAMS locus, according to \citet{Molinari+08}, while the dashed line indicates the ZAMS locus as determined by \citet[][]{Urquhart+13}. The grey lines show the evolution of cores of different masses; the lines are labeled with the final mass of the most massive star (in $\msun$). Time increases from bottom to top and from right to left, as indicated. Radio-continuum emission and MSX emission are found nearly exclusively in clumps near the ZAMS. A variation of a factor of 2 in mass (see text) is indicated in the bottom left corner.}
\label{fig:m-l_plot}
\end{figure}
\begin{itemize}
\item $24\usk\micro\metre$ emission: We overlaid the Spitzer MIPSGAL images at $24\usk\mu\mathrm{m}$ and the SEST $1.2\usk\milli\metre$ maps of \citet{Beltran+06}, in order to identify those clumps with and without IR emission. If a $24\usk\micro\metre$ source is found at the location of the $1.2\usk\milli\metre$ emission peak, it is considered to be associated with the clump.
MIPSGAL images cover 41 of the clumps in this sample, the remaining 5 are covered by MSX images. Twenty-five clumps are IR-bright at $24\usk\micro\metre$ as shown in Fig.~\ref{fig:sest+24mum}, and 3 among those without Spitzer $24\usk\micro\metre$ data show emission in the $21\usk\micro\metre$ MSX image.
\item ``Green Fuzzies'': extended $4.5\usk\micro\metre$ emission produced by shock-excited molecular lines, commonly associated with Class II CH$_3$OH masers \citep{Cyganowski+09}. To identify the ``green fuzzies'', we followed the procedure described in \citet{Chambers+09}. Again, only 41 clumps out of the 46 are covered by GLIMPSE data.
Sixteen clumps show the presence of extended, excess $4.5\usk\micro\metre$ emission.
\item Water masers: 13 clumps in our sample show maser emission. The water maser is a known indicator of star formation, thought to appear in the early stages of the process \citep{BreenEllingsen11}, and is observed both in low- and high-mass star formation regions.
\item Radio-continuum: 40 of the clumps in our sample were observed with ATCA at $22\usk\giga\hertz$ and $18\usk\giga\hertz$ \citep{Sanchez-Monge+13}. Twelve sources were detected, and 3 more have a tentative detection at about $3\sigma$ at one of the frequencies. The radio-continuum alone is not always considered sufficient to classify a source as star forming. This is because we find one source (16061$-$5048c4) in the sample where the radio emission is likely to come from an ionization front in the outer layers of the clump, based on the morphology of the emission or the lack of an IR source in the Spitzer/Hi-GAL images. This special case is discussed in the Appendix.
\end{itemize}
If any of these signposts is observed (except radio continuum in the special case discussed in the Appendix), a clump is indicated as star-forming.
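A sketch of this decision rule in Python (the flag for radio emission attributable to an external ionization front is our shorthand for the special case above):
\begin{verbatim}
def is_star_forming(maser, green_fuzzy, mir24, radio,
                    radio_from_front=False):
    """True if any signpost of star formation is present; radio
    continuum alone is not sufficient when it is attributable to an
    external ionization front (e.g. 16061-5048c4)."""
    if radio and radio_from_front and not (maser or green_fuzzy or mir24):
        return False
    return maser or green_fuzzy or mir24 or radio
\end{verbatim}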
Based on these criteria, of the 46 objects observed, 31 were classified as star-forming and 15 as quiescent. A summary of the clumps with specific indicators of ongoing star formation is given in Fig.~\ref{fig:summ_sf}. All the star formation signposts in each clump are presented in Table~\ref{tab:sf_signs}. Figure~\ref{fig:sest+24mum} shows all the observed fields of view and clumps (dashed and solid red circles, respectively), with the SEST emission (contours) superimposed on the MIPS $24\usk\micro\metre$ image, and the position of maser spots (white open squares) and ``green fuzzies'' (green open circles).
Removing those clumps with no NH$_3$(2,2) detection, $26$ and $10$ clumps remain in the star-forming and in the quiescent sub-samples, respectively.
\smallskip
In the following, we compare the average properties of the clumps using the parameters we have derived, for the two sub-samples, to look for systematic differences in the physical properties characterizing the two classes. Note that hereafter the quiescent and star-forming sub-samples will be denoted with the acronyms \textbf{QS} and \textbf{SFS}, respectively.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/histo_lumi.pdf}
\caption{Normalized histogram of the luminosities for the SFS (green) and for the QS (black). $500\usk\mathrm{L_\odot}$ and $8000\usk\mathrm{L_\odot}$ are indicated by the dashed lines.}
\label{fig:luminosities}
\end{figure}
\subsection{The mass-luminosity plot}\label{ssec:ml_plot}
To refine the separation in different evolutionary phases we make use of the mass-luminosity ($M-L$) plot, which is an efficient and well-established diagnostic tool to disentangle the different evolutionary phases of star formation in the low-mass regime \citep{Saraceno+96}. \citet{Molinari+08} proposed that it could also be used for high-mass stars, under the hypothesis that star formation at high- and low-mass proceeds in a similar fashion, with accretion from the surrounding environment playing a major role \citep[e.g., ][]{Krumholz+09}. \citet{Molinari+08} built a simple model for the evolution of a clump, based on the turbulent core prescriptions of \citet{McKeeTan03}, ranging from the early collapse phase to the complete disruption of the dusty envelope by the central object.
Figure~\ref{fig:m-l_plot} shows the $M-L$ plot for the sources in our sample for which we were able to derive the luminosity (see Sect.~\ref{ssec:sed} and Table~\ref{tab:greybody}) from the integration of the SED (with the linear interpolation in the log-log space) and the mass derived from the $1.2\usk\milli\metre$ emission, for which we used the temperature determination obtained from the ammonia observations (Sect.~\ref{ssec:temp}). The black solid line indicates the ZAMS locus, according to \citet{Molinari+08}, while the dashed line indicates the ZAMS locus as determined by \citet[][]{Urquhart+13}. The latter authors consider clumps showing methanol maser emission and with a luminosity from the Red MSX Sources survey \citep{Urquhart+08}. $90\%$ of the sources studied by \citet{Urquhart+13} have $L>\pot{3}\usk\mathrm{L_\odot}$; the remaining $10\%$, with $L<\pot{3}\usk\mathrm{L_\odot}$, have low gas masses ($\pot{1}-\pot{2}\msun$), and could be forming intermediate-mass stars. Objects classified either as YSOs or as UCH\textsc{ii}\ regions were used to determine the ZAMS locus, as no statistically significant difference is found between the two classes. The difference in slope between the two ZAMS-lines could be due to the fact that in \citet{Molinari+08} the luminosities were determined from the Robitaille models, using IRAS fluxes in the FIR, and could thus be overestimated.
The grey curves show the evolution of the source predicted by the simple model, for different final masses, from $\sim 6$ to $\sim30\msun$. Time increases in the direction of the arrows shown in the upper right part of the plot. In the collapse phase, before the central object reaches the ZAMS, the mass of the core envelope does not change by much, but the luminosity increases rapidly. After the central star reaches the ZAMS, the luminosity does not vary much, and the energetic radiation and wind begin to destroy the parental clump, visible as a steady decrease in envelope mass.
Comparing the distribution of sources in the plot with the evolutionary tracks of the \citet{Molinari+08} model, our objects span the total range in envelope masses, for final masses of the star between about $6$ and $30\msun$.
The stellar masses are in agreement with those derived from the fit of the SED with the Robitaille models for the objects near the ZAMS, for which the final mass is similar to the current mass, as the main accretion phase is over. Massive stars appear to form virtually always in clusters \citep[e.g., ][]{LadaLada03}. In both the \citet{Molinari+08} and the Robitaille models the luminosity is considered as being dominated by the most massive object formed in the cluster. The derived bolometric luminosity and stellar mass could thus be overestimated. A detailed study of the stellar population in these clumps will be carried out in a subsequent paper, by means of mid-IR and near-IR data.
\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{./figures/panels_histo_tkin-crop.pdf}
\caption{Normalized histogram (to the total number of sources in each class) of kinetic temperature for \textbf{(a)} the SFS (green), and the QS (black); \textbf{(b)} the same as \textbf{(a)}, but the SFS-2, with $L>\pot{3}\usk\mathrm{L_\odot}$ (in red) are separated from the SFS-1 (in blue), the red triangle shows the temperature for the Type 3 source; \textbf{(c)} the same as \textbf{(a)}, but the SFS was divided into clumps with (magenta) and without (cyan) radio-continuum emission. Mean and median values of the temperature are indicated in each panel. The total number of sources in each class is shown above each panel.}
\label{fig:histo_tk}
\end{figure*}
A first macroscopic difference between the SFS and the QS sources is the luminosity. As can be seen in Fig.~\ref{fig:luminosities}, the clumps in the QS have low luminosities, distributed between $\sim100$ and $500\usk\mathrm{L_\odot}$. On the other hand, the sources in the SFS show a peak at $\sim500\usk\mathrm{L_\odot}$, but also a second peak at $L\sim8\times\pot{3}\usk\mathrm{L_\odot}$ (both indicated as dashed lines in the figure).
The distribution of the sources in the $M-L$ plot indicates that part of the sources in the SFS (10 out of 20, including 17195-3811c1, with the mean luminosity derived with the online SED fitting tool) are likely hosting a ZAMS star and have stopped the accelerating accretion phase, according to the model of \citet{Molinari+08}. The sources with signs of active star formation, but well below the ZAMS loci, are essentially indistinguishable from the quiescent ones in terms of luminosities.
Figure~\ref{fig:m-l_plot} shows that in our sample all sources with $L>\pot{3}\usk\mathrm{L_\odot}$ have strong IR (MSX) and/or radio continuum emission, while those below this threshold do not.
The radio emission from the sources near the ZAMS locus, with final masses greater than $8\msun$ shows that the interpretation of the $M-L$ plot is essentially correct, and that the prediction of the end of the accelerating accretion phase is reasonably good.
The small range of luminosity and its low average value for the QS shows that this is a homogeneous sample, with all the clumps in an early phase of evolution.
On the other hand, the SFS appear to include clumps in widely different evolutionary stages: the points in the diagram go from clumps similar to those of the QS, to clumps containing a ZAMS star and beyond, where the star is dispersing the envelope.
For our sample, a simple criterion in luminosity is sufficient to separate the sources that likely have an embedded ZAMS star from the rest. Thus, in the following we will refer to objects containing a ZAMS star as those with $L>\pot{3}\usk\mathrm{L_\odot}$. In Fig.~\ref{fig:m-l_plot} these objects fall in the region encompassed by the ZAMS loci, except 13560$-$6133c1 and 16093$-$5015c1, slightly below the \citet[][]{Urquhart+13} ZAMS locus.
We can easily divide the clumps in this sample in Type 1, 2 and 3 according to the $M-L$ plot \citep[see ][]{Molinari+08}; the classification of sources in one of the three types is shown in Table~\ref{tab:sf_signs}. The clumps between the two ZAMS loci, with $L>\pot{3}\usk\mathrm{L_\odot}$ would be Type 2 clumps (including the two sources slightly below the Urquhart et al. ZAMS locus, 13560$-$6133c1 and 16093$-$5015c1, the former of which shows also radio continuum emission), while the rest of the SFS and the QS would be Type 1.
Type 1 sources include both pre-stellar and proto-stellar sources in early stages of evolution, for which the development of an H\textsc{ii}\ region may be quenched by the high accretion rates.
The leftmost point in the mass luminosity plot (Fig.~\ref{fig:m-l_plot}) identifies the most evolved source in our sample, 17355$-$3241c1, with a relatively low mass and a high luminosity. This clump is the only Type 3 source in our sample, and has very strong emission even in the IRAC bands. 17355$-$3241c1 exemplifies the last phase of the evolution in the $M-L$ plot, when the parent cloud is dispersed by the destructive action of the central star. This source is discussed in more detail in the Appendix.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.35\textwidth]{./figures/tkin_l.pdf}
\caption{Correlation between luminosity and kinetic temperature. The uncertainties for $L$ and $T_\mathrm{{K}}$ are indicated in the figure.}
\label{fig:tk_l}
\end{figure}
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{./figures/panels_lin_diam-crop.pdf}
\caption{Histogram of the diameters of the clumps. Panel \textbf{(a)} shows the diameters of the SFS (green) \textit{vs.} the QS (black). Panel \textbf{(b)} shows the diameters with the SFS-2 indicated in red, the SFS-1 in blue, and the QS in black. The total number of sources in each class is shown above each panel.}
\label{fig:lin_diam}
\end{figure*}
Thus, the $M-L$ plot and the signposts of active star formation give complementary information about the evolutionary state of the clump, allowing us to refine the \citet{Molinari+08} classification, separating objects likely hosting a ZAMS star from the other sources in the SFS. Our original sample is thus finally divided into three different classes (without considering Type~3 objects): Type 1 quiescent clumps, apparently starless; Type 1 clumps with signs of active star formation, but still a low luminosity; and Type 2 sources, hereafter denoted QS, SFS-1 and SFS-2, respectively. Table~\ref{tab:mean_zams_nozams} shows explicitly that SFS-1 and SFS-2 have very different luminosity-to-mass ratios. The clumps in the SFS-1 can be in a very early phase of the process of formation of a high-mass object; alternatively, the signposts of active star formation could be generated by more evolved lower-mass stars.
\subsection{Properties of sources in different stages of evolution}
\subsubsection{Temperatures}
The {temperature} of gas and dust may be influenced by the presence of a (proto-)star deeply embedded in a clump.
Figure~\ref{fig:histo_tk}a shows the histogram of temperatures for the two samples. The normalized counts of the SFS are shown in green, and those of the QS in black. We find that it is possible to observe temperature differences on a large scale, comparing the average values of $T_\mathrm{{K}}$ and $T_\mathrm{{d}}$ of the QS and the SFS. The typical $T_\mathrm{{K}}$ of the star-forming clumps is greater than that of the QS, with mean values of $T_\mathrm{{K}} = 19.5\unc{-2.9}{+1.5}\usk\kelvin$ and $14.1\unc{-3.2}{+1.8}\usk\kelvin$, for the SFS and QS, respectively. Thus, the average temperature increases as evolution proceeds.
In Fig.~\ref{fig:histo_tk}b we can see that the SFS-2 (in red) show a slightly higher $T_\mathrm{{K}}$ ($T_\mathrm{{K}} = 21.4\unc{-3.8}{+1.7}\usk\kelvin$) than the SFS-1 (in blue) ($T_\mathrm{{K}}=18.3\unc{-3.0}{+1.4}\usk\kelvin$). Figure~\ref{fig:histo_tk}b shows also the QS, to underline that both the SFS-1 and SFS-2 sources on average have a higher $T_\mathrm{{K}}$. Figure~\ref{fig:histo_tk}c shows that the SFS with $1.3$~cm radio-continuum emission are hotter than those without it.
A similar conclusion is found by \citet{Sanchez-Monge+13b}, studying NH$_3$(1,1) and (2,2) at high angular resolution in cores in clustered high- and intermediate-mass star forming regions. They find that starless cores have an average $T\sim15\usk\kelvin$, lower than the $T\sim21\usk\kelvin$ found for proto-stellar cores. These values are very close to those found in this work. \citet{Sanchez-Monge+13b} show that the higher temperatures of starless cores in clustered environments with respect to more isolated cases can be explained by considering external heating from the nearby massive stars. Also \citet{Rygl+10} and \citet{Urquhart+11} find that actively star-forming clumps are slightly hotter than the quiescent ones.
A behaviour similar to that of the kinetic temperature is observed for $T_\mathrm{{d}}$ as a function of evolutionary phase, not unexpected given the good agreement of the two temperatures (see Sect.~\ref{ssec:sed}; Fig.~\ref{fig:tk_td}).
The mean values are $T_\mathrm{{d}}=24.0\unc{-1.6}{+1.5}; 15.8\unc{-1.6}{+1.4}; 11.8\unc{-1.5}{+1.7} \usk\kelvin$ for SFS-2, SFS-1 and QS, respectively.
Disregarding the 7 points with $|T_\mathrm{{K}}-T_\mathrm{{d}}|\ge5\usk\kelvin$ and non-overlapping $68\%$ credibility intervals, we find a correlation between the luminosity and the kinetic temperature, shown in Fig.~\ref{fig:tk_l}. A correlation between $T_\mathrm{{K}}$ and $L_\mathrm{bol}$ was also found by e.g., \citet{Churchwell+90}, \citet{Wu+06}, \citet{Urquhart+11}, and by \citet{Sanchez-Monge+13b}, for cores in clustered environments, using the luminosity of the whole region.
\smallskip
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90, width=0.35\textwidth]{./figures/prof_comparison.pdf}
\caption{$1.2\usk\milli\metre$ intensity level as a function of the area within the contour for a typical QS (black) and SFS-2 (red) source, respectively.}
\label{fig:12mm_extrap}
\end{figure}
\subsubsection{Sizes}\label{sssec:sizes}
From panel (a) of Fig.~\ref{fig:lin_diam} we note that the SFS tend to have smaller FWHM diameters than the QS. From the distribution of sizes shown in panel (b) we note that the SFS-2 have a peak at the smallest linear dimensions.
Plotting the area within a specific intensity contour of $1.2\usk\milli\metre$ emission as a function of the intensity level and extrapolating linearly to zero intensity, we get an idea of the source extent we would measure if we could observe with infinite sensitivity \citep[cf. ][]{BrandWouterloot94}. Figure~\ref{fig:12mm_extrap} shows the $1.2\usk\milli\metre$ intensity level as a function of the area within the contour for representative sources in QS and SFS-2. For the SFS-2 we ignore the central emission peak for the fit. This method allows us to estimate the effect of the lower temperature on the clump size at the typical noise levels of the SEST maps.
This procedure shows that with our noise levels we miss $\sim 30\%$ of the emission area for a typical QS, while only $\sim10-15\%$ is lost for a typical SFS-2. The SFS-1 usually shows an intermediate behaviour. The larger fraction of emission area below the noise level for the QS confirms that we are not able to detect the external envelope of the coldest clumps, and that the actual linear size of QS sources is very similar to that of SFS-2 objects. On the other hand, the area within the FWHM contour is smaller for SFS-2 sources, possibly indicating that sources hosting a ZAMS object are more compact and centrally concentrated. We investigate an alternative possibility, namely that the observed FWHM size may be $T$-dependent, by performing a simple test using a 1D simulation with RATRAN \citep{HogerheijdeVanDerTak00}, constructing a clump with a typical radial dependence of the density \citep[$\propto r^{-1.7}$, cf. ][]{Beuther+02b, Mueller+02}, and comparing the continuum at $250 \usk \giga\hertz$ with and without a central luminous heating source. The radial temperature dependence is assumed to be $\propto (r/r_0)^{-0.4}$ \citep[e.g., ][]{WolfireCassinelli86}. We observe that the clump with the embedded source has a smaller (by $20-30\%$) FWHM. This could explain the smaller sizes derived for SFS-2.
\citet[][]{Urquhart+13} also conclude that high-mass star forming clumps showing methanol maser emission are more compact and centrally concentrated than the rest of the sources in the ATLASGAL survey \citep{Schuller+09}, comparing the ``compactness'' of the sources by means of the ratio of the peak and integrated sub-mm flux, for a much larger sample.
\subsubsection{Densities}
The mean {column-, volume- and surface-densities} are compared for QS and SFS, and for SFS-1 and SFS-2 in Fig.~\ref{fig:histo_dens_fwhm} and \ref{fig:histo_dens_fwhm_zams}, respectively.
\begin{figure}[btp]
\centering
\includegraphics[angle=-90,width=0.33\textwidth]{./figures/histo_nh2_fwhm.pdf}
\caption{Normalized histogram of volume density of molecular hydrogen averaged within the FWHM contour. We show the SFS in green and the QS in black. The total number of sources in each class is shown above the panel.}
\label{fig:histo_dens_fwhm}
\end{figure}
\begin{figure*}[btp]
\centering
\includegraphics[width=\textwidth]{./figures/panels_densities-crop.pdf}
\caption{Normalized histograms of \textbf{(a)} column-, \textbf{(b)} volume- and \textbf{(c)} surface-densities of molecular hydrogen averaged within the FWHM contour. We show the SFS-2 in red, SFS-1 in blue and the QS in black. The total number of sources in each class is shown above the panels.}
\label{fig:histo_dens_fwhm_zams}
\end{figure*}
The histogram of the SFS is shifted towards higher densities; \citet{Chambers+09} obtain the same result, with star-forming sources on average denser than the quiescent ones. Taking into account the separation into SFS-1 and SFS-2, the density histograms show that the SFS-2 usually have higher values of column-, volume- and surface-densities than either QS or SFS-1; the latter two classes have density distributions peaking at similar values, with that of the SFS-1 showing a tail with values similar to those of the SFS-2. Thus, the clumps hosting a ZAMS star have higher densities, indicating that as star formation proceeds, the clumps appear to become denser in the central parts. The same is found by \citet{ButlerTan12}, comparing their sample of starless cores to a sample of more evolved objects \citep[from ][]{Mueller+02}. We caution that, as the source size could be underestimated due to the presence of a central heating source, causing the clump to appear more centrally peaked (see Sect.~\ref{sssec:sizes}), the densities could be overestimated for the SFS-2, thus explaining the observed differences with both QS and SFS-1.
\citet{Rygl+13} argue that star formation signposts are not present in clumps with a column density below a value of $4\times\pot{22}\usk\centi \metre^{-2}$. No such threshold effect for the onset of star formation is observed in our sample, but only two of our clumps of any type have column densities well below $4\times\pot{22}\usk\centi \metre^{-2}$ (cf. Table~\ref{tab:fwhm}).
We note that the values of the mass surface density are typically lower than the theoretical threshold of $\Sigma=1\usk\gram\usk\centi \metre^{-2}$ for massive star formation, on average by a factor $2-6$. The theoretical threshold for $\Sigma$ given by \citet{KrumholzMcKee08} holds for a single core, stabilized against fragmentation only by radiative heating.
\citet{LopezSepulcre+10} show that massive star formation, indicated by the presence of massive molecular outflows, most probably driven by massive YSOs, is occurring also in clumps with a much lower mass surface density, of the order of $\Sigma\sim0.3\usk\gram\usk\centi \metre^{-2}$. \citet{ButlerTan12} show that similar values of the mass surface density $\Sigma$ are typical also of massive starless cores, assuming a gas-to-dust ratio of $150$, thus consistent with our average value of $\sim0.2\usk\gram\usk\centi \metre^{-2}$ for the QS, for a gas-to-dust ratio of $100$.
\begin{figure*}[tbp]
\centering
\includegraphics[angle=-90, width=0.35\textwidth]{./figures/15579c3_pl_profile.pdf} \hspace*{1cm}
\includegraphics[angle=-90, width=0.35\textwidth]{./figures/15579c1_pl_profile.pdf}
\caption{Average $1.2\usk\milli\metre$ flux, normalized to the maximum, as a function of the radius of the largest circle of the annulus for a typical QS (left) and SFS-2 (right) source. The uncertainty on the average flux is indicated. The beam FWHM size is indicated as an errorbar in $x$. In the bottom left corner the slope $m$ is indicated.}
\label{fig:density_pl}
\end{figure*}
We investigate in more detail the possibility that the radial dependence of the density changes in different classes of objects, by fitting a power law to the average radial $1.2\usk\milli\metre$ intensity profile calculated in concentric annuli centered on the clump. Figure~\ref{fig:density_pl} shows the normalized $1.2\usk\milli\metre$ emission as a function of the angular radius of the largest circle of the annulus. The beam FWHM size is indicated as an errorbar in $x$. Following the notation in \citet{Ward-Thompson+94} (and references therein), the power law indices of the $\usk\milli\metre$ emission $m$, of the density $p$ and of the temperature $q$ are related by $m=p+Q(\nu,T)q-1$, where $Q(\nu,T)$ is a coefficient depending on the wavelength and the temperature, close to unity for $h\nu/(k_B T)\ll1$ \citep[see ][]{Adams91}. We find typical power law indices for mm emission $m \simeq 0.2-0.4$ for QS and $\sim 1.0-1.3$ for SFS-2 clumps, respectively. We assumed that quiescent clumps are isothermal and that Type 2 sources are likely to have a radial gradient in temperature, described by $T\propto (r/r_0)^{q}$, with $q=-0.4$ \citep[e.g., ][]{WolfireCassinelli86}. This implies that the power law index $p$ for volume density is $\sim 1.5-1.8$ in SFS-2 sources, steeper than in QS sources, with $p\sim 1.2-1.4$. The value of $p$ is highly uncertain, because of the assumptions made and because of the very limited number of points for each source; however, we are mainly interested in the difference between the QS and SFS-2, and not in the specific value of $p$. As the clumps in QS and SFS have similar masses and total sizes (see Sect.~\ref{ssec:mass_dens_size} and \ref{sssec:sizes}), this difference in density profile suggests that SFS-2 clumps may indeed be denser in the central regions. Other authors investigated the differences in the density power law index for sources in different stages of evolution. \citet{Beuther+02b} also derive an average radial power law dependence for density with $p \sim 1.5-2.0$. These authors find that the density distribution is flatter for sources in the very early stages ($p\sim1.5$), then it becomes steeper during the collapse and accretion phase ($p\sim1.9$), and finally it flattens again ($p\sim1.5$) in the dispersal phase. A similar result is reported by \citet{ButlerTan12}, who compared power law indices for density in starless clumps ($p \sim 1.1$), obtained with a very different technique, with the same quantity derived by \citet{Mueller+02} for more evolved objects ($p \sim 1.8$).
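To make the inversion explicit, the following Python sketch (not part of the original analysis; the profile values are invented for illustration) fits the emission index $m$ to a radial profile and converts it to the density index $p$ through $m=p+Qq-1$, using the positive-index convention $\rho\propto r^{-p}$, $T\propto r^{-q}$, so that $q=0.4$ corresponds to the temperature gradient adopted above:
\begin{verbatim}
import numpy as np

def density_index(radius, flux, q=0.4, Q=1.0):
    """Density power-law index p from a radial mm profile.

    radius, flux: annulus radii and normalized 1.2 mm fluxes
    q: temperature index (0 for isothermal QS, ~0.4 for SFS-2)
    Q: coefficient, close to unity for h*nu << k_B*T
    """
    # slope of the log-log profile gives the emission index m
    m = -np.polyfit(np.log10(radius), np.log10(flux), 1)[0]
    return m + 1.0 - Q * q   # invert m = p + Q*q - 1

r = np.array([10.0, 20.0, 30.0, 40.0])   # arcsec (mock values)
f = (r / r[0]) ** -1.1                   # mock profile with m = 1.1
print(density_index(r, f, q=0.4))        # -> 1.7, SFS-2-like
\end{verbatim}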
\smallskip
\subsubsection{Mass}
The masses measured for our clumps are in the range $10-2000\msun$, which indicates that we are probing the low-mass end of the \citet{KauffmannPillai10} relation (cf. their Fig.~2b and Fig.~\ref{fig:m-r_diagram} in this paper).
With respect to more massive clumps, those in this sample will probably form a very limited number of stars with $M>8\msun$, making the assumption of a single massive star in the clump plausible. On the other hand, we know that massive stars are being formed in some objects, as we observe compact radio continuum emission, likely arising in H\textsc{ii}\ regions \citep{Sanchez-Monge+13}.
Comparing the source size and densities (Fig.~\ref{fig:lin_diam} and \ref{fig:histo_dens_fwhm_zams}) SFS-2 sources are typically the most compact, suggesting that an evolution of the clumps towards more compact entities might indeed occur. However, as time proceeds and the massive stars dissociate the molecular gas, they move down and to the right in the $M-r$ plot (Fig.~\ref{fig:m-r_diagram}), as shown by 17355$-$3241c1.
\smallskip
\subsubsection{Velocity and linewidth}
The velocity gradients found for the clumps ($1-2\usk\kilo\metre\usk\second^{-1}\usk\mathrm{pc}^{-1}$; Sect.~\ref{ssec:dynamics}) are comparable for the SFS and QS, and also the fraction of clumps showing gradients is similar: about $50\%$ of the objects.
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90, width=0.45\textwidth]{./figures/2nd_mom_example.pdf}
\caption{Example of NH$_3$(1,1) second moment map. Left panel shows Spitzer/MIPSGAL $24\usk\micro\metre$ image in colourscale and NH$_3$(1,1) integrated emission in contours, while the right panel shows the second moment map.}
\label{fig:ex_2nd_mom}
\end{figure}
From the second moment map we find that the clumps in the SFS have larger linewidths, with an average value of $2.2\usk\kilo\metre\usk\second^{-1}$ compared to $1.6\usk\kilo\metre\usk\second^{-1}$ for the sources in the QS. For 08477$-$4359c1, 13560$-$6133c1, 15557$-$5215c1 and c2, 15579$-$5303c1 and 16093$-$5015c1 we find an increase of the linewidth between the position of a $24\usk\micro\metre$ source and the rest of the clump of $\sim10-30\%$. Figure~\ref{fig:ex_2nd_mom} shows an example of this increase in linewidth at the location of a $24\usk\micro\metre$ source. These sources always have a $\Delta V$ of the (2,2) line larger than that of the (1,1) at this position. This indicates that we are probing the regions where the embedded source is injecting turbulence. All these sources have a luminosity in excess of $\pot{3}\usk\mathrm{L_\odot}$, except 15557$-$5215c2 ($L=740\usk\mathrm{L_\odot}$), and 08477$-$4359c1, for which we do not have Herschel data. Similar linewidths are found in dense cores located within clustered massive star forming regions \citep{Sanchez-Monge+13b}: $\sim1.2\usk\kilo\metre\usk\second^{-1}$ and $\sim2.0\usk\kilo\metre\usk\second^{-1}$ for starless and proto-stellar cores, respectively.
\subsubsection{Virial parameter} \label{ssec:alpha}
Figure~\ref{fig:virial} presents the {virial parameter} $\alpha\equiv M_\mathrm{vir}/M$ as a function of $M$. As already noted, all clumps in our sample appear dominated by gravity. Moreover, these are upper limits for $\alpha$ as the virial mass should be reduced by up to a factor of 2 \citep[see ][]{MacLaren88} as a consequence of the density gradients found in the clumps; another factor of 2 may arise because the mass was computed within the FWHM contour. Two sources in the SFS have $\alpha\gtrsim2$: this could be due to the action of the embedded YSO(s) disrupting the parental cloud.
The value of the virial parameter tends to decrease as the clump mass increases.
Let us explore a few possibilities to explain $\alpha < 1$ for such a large number of sources.
The first is that we might be underestimating the gas/dust temperature, and are thus overestimating the mass from the $1.2\usk\milli\metre$ continuum. As NH$_3$(1,1) and (2,2) essentially trace cold gas and they could be optically thick, this may be a possibility. An independent determination of the temperature in the clumps is given by the dust temperature derived from the modified black-body fit of the mm-FIR part of the SED. Dust emission is optically thin in this regime, thus probing even the inner regions of the clump. Comparing $T_\mathrm{{K}}$ and $T_\mathrm{{d}}$ we find that the two temperatures are usually in good agreement, as discussed in Sect.~\ref{sec:results} and shown in Fig.~\ref{fig:tk_td}. Therefore we do not expect the bulk of the gas in the clump to have temperatures systematically higher than those adopted here, and consequently lower masses. Moreover, also quiescent clumps have $\alpha \ll 1$, and the temperature in this case should not be underestimated, as no heating source is present in these sources. Even if it were the case, it is difficult to account for a factor of $3-10$. The independent estimate of the mass through the SED essentially confirms that the mass of the clumps is correct.
Another possibility is that we are underestimating the radius of the clump, leading us to underestimate the virial mass. \citet{PanagiaWalmsley78} show that if the source is not Gaussian, we may underestimate the radius by even a factor of 2.
However, it is not clear why the most massive clumps should be different from the others. The same holds for the uncertainty connected to the gas-to-dust ratio, assumed to be 100.
\citet{Fontani+02} and \citet{LopezSepulcre+10} reach opposite conclusions regarding the stability of the clumps in their samples. In the former work the authors show that the ratio $M_\mathrm{vir}/M$ reaches values as low as ours, using CH$_3$CCH as a tracer, while \citet{LopezSepulcre+10} obtain $\alpha\sim1$ using the mass derived from dust emission and the virial mass from the C$^{18}$O line emission. Also \citet{Hofner+00} find $\alpha<1$ using the less abundant C$^{17}$O.
Some of the clumps in our sample have been observed in C$^{18}$O$(3-2)$ by \citet{Fontani+12}, and these authors suggest that the clumps are in virial equilibrium or even have virial masses greater than the clump mass (i.e., $\alpha\gtrsim1$). Comparing the C$^{18}$O and the NH$_3$ linewidths we find that the former are larger, typically by a factor of $1.6$ and even up to $\sim2.6$. As a consequence, since $M_\mathrm{vir}\propto\Delta V^{2}$, the virial masses calculated using the C$^{18}$O linewidth would be larger by a factor $2.7$ (up to $6.8$), and $\alpha$ would increase accordingly.
\citet{Sanchez-Monge+13b} find linewidths for cores similar to those found in this study.
The difference in linewidth between NH$_3$ and C$^{18}$O may be explained by the fact that C$^{18}$O traces a more extended and diffuse region, where $\Delta V$ may be larger due to the effects of the environment in which the clump is embedded, to the presence of several gas concentrations along the line of sight (i.e., additional indistinguishable velocity components in the spectra) not dense/massive enough to be visible in NH$_3$, or to the presence of velocity gradients across the clumps. Moreover, C$^{18}$O is more prone to being entrained in outflows driven by objects already formed in the clump. In conclusion, $\alpha$ may be underestimated; nevertheless, even taking into account the possibilities discussed above, $\alpha$ would still be $<1$ for the sources with the highest masses. Moreover, the contribution of external pressure acts together with gravity.
If these clumps were supported only by turbulence and thermal pressure, they would collapse on the timescale of the free-fall time. Given the short timescales set by the free-fall time $t_{ff}=\sqrt{3\pi/(32 G \rho)} \sim 5\times\pot{4}\yr$, this may suggest that the magnetic field plays a significant role in stabilizing the clumps.
Following the relation for virial equilibrium given in \citet{McKee+93}, neglecting the surface pressure term, we get the expression for the equilibrium magnetic field
\begin{equation}
\begin{split}
B = & 1.4\times\pot{-5} \left[ \left( \frac{M}{100\msun} \right) \left( \frac{R}{1\usk\mathrm{pc}} \right)^{-4} \right]^{0.5} \times \\
& \left[ \left( \frac{M}{100\msun} \right) - 0.71 \left( \frac{R}{1\usk\mathrm{pc}} \right) \left( \frac{\Delta V}{2\usk\kilo\metre\usk\second^{-1}} \right)^2 \right]^{0.5} [G],
\end{split}
\end{equation}
where $M$ is the clump mass, $R$ is the clump radius and $\Delta V$ is the FWHM of the line.
Using appropriate numbers for these parameters we find that $|\overrightarrow{B}|\approx 0.1-1\usk\milli\mathrm{G}$ is sufficient to stabilize the clumps. Such values of the magnetic field have been observed towards regions undergoing massive star formation \citep[e.g., ][]{Crutcher05, Girart+09}.
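For reference, the free-fall time and the equilibrium field given above can be evaluated with a few lines of Python; the clump parameters in the example below are illustrative, not a fit to any specific source:
\begin{verbatim}
import numpy as np

G_CGS = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.6726e-24    # hydrogen atom mass [g]
YR = 3.156e7        # seconds per year

def free_fall_time(n_h2, mu=2.8):
    """Free-fall time [yr] for a mean H2 density n_h2 [cm^-3]."""
    rho = mu * M_H * n_h2
    return np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho)) / YR

def equilibrium_b_field(mass, radius, dv):
    """Equilibrium field [G]; mass [Msun], radius [pc], dv [km/s]."""
    m, r, v = mass / 100.0, radius, dv / 2.0
    return 1.4e-5 * np.sqrt(m / r**4) * np.sqrt(m - 0.71 * r * v**2)

print(free_fall_time(4e5))                  # ~5e4 yr
print(equilibrium_b_field(500, 0.4, 2.0))   # ~4e-4 G, i.e. ~0.4 mG
\end{verbatim}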
\begin{figure}[tbp]
\centering
\includegraphics[angle=-90,width=0.75\columnwidth]{./figures/dgc_abund.pdf}
\caption{Ammonia abundance as a function of Galactocentric distance. The symbols are the same as in Fig.~\ref{fig:m_dist}.}
\label{fig:dgc_abund}
\end{figure}
\section{Observational classification of high-mass clumps} \label{sec:sketch}
Figure~\ref{fig:sketch_ev_ph} shows a sketch of the different evolutionary phases identified in this work, with representative values for the observed properties indicated in the yellow rectangles. The arrows connecting the different source types show how the evolution proceeds. In addition to the properties listed in the figure, we find that the SFS-2 have smaller FWHM diameters, that can be due to an intrinsically smaller size and/or just an observational effect caused by the presence of a temperature gradient generated by the heating of the embedded massive (proto-)stars (see Sect.~\ref{sssec:sizes}). The density profile seems to be steeper in the SFS-2 clumps than in the QS, even when allowing for a temperature gradient in the former sub-sample. Thus the central density may indeed be higher for clumps with a similar size and mass. More detailed studies are needed to confirm this result.
Mean and median values for various parameters of the QS and SFS, and of the SFS-1 and SFS-2 are listed in Tables~\ref{tab:mean_sfs_qs_fwhm} and \ref{tab:mean_zams_nozams}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{./figures/sketch_phases-crop.pdf}
\caption{A simple sketch of the evolutionary phases considered in this paper for massive star formation. The representative properties of clumps in the different evolutionary stages as derived in this work are listed in the yellow rectangles.}
\label{fig:sketch_ev_ph}
\end{figure*}
\section{Summary and conclusions} \label{sec:summary}
From ATCA NH$_3$ observations of 46 clumps previously observed with the SEST in the $1.2$-mm continuum we derived the average properties of the gas for a sample of 36 of these, detected in both NH$_3$(1,1) and (2,2).
With a reliable and independent temperature estimate through the NH$_3$ (1,1) and (2,2) line ratio, we determined the mass of the gas.
We performed the simplest virial analysis to investigate the stability of the clumps against gravity. All sources but one show a virial parameter $\alpha\lesssim2$ (Fig.~\ref{fig:virial}), showing that gravity is the dominating force. The most massive clumps typically show $\alpha < 1$, and we showed that this is likely to be real (see Sect.~\ref{ssec:alpha}). The role of the magnetic field in stabilizing the sources against gravitational collapse is thus a major one. The required strength of the field was estimated to be $|\overrightarrow{B}|\approx 0.1-1\usk\milli\mathrm{G}$, in agreement with the sparse measurements in regions of high-mass star formation (Sect.~\ref{ssec:alpha}).
We find ammonia abundances to be in the range $\pot{-7}-\pot{-9}$, but within the canonical values of $\sim\pot{-7}-\pot{-8}$ for the vast majority of the sample, showing that this molecule is not depleted, as opposed to CO in these clumps \citep{Fontani+12}.
These data were complemented with Herschel/Hi-GAL, Spitzer/MIPSGAL, MSX and Spitzer/IRAC data, to construct the SEDs of the sources (Sect.~\ref{ssec:sed}).
From the SEDs we derived the luminosity of 32 sources (i.e. those with Herschel/Hi-GAL data), out of which we have 29 sources with a reliable ammonia detection in both lines, so that we could locate the clumps in a $M-L$ plot (Fig.~\ref{fig:m-l_plot}).
The sample was divided into sub-samples of clumps in different phases of evolution on the basis of the presence or absence of signs of ongoing star formation and on the location of the sources in the $M-L$ plot. To classify a clump as star-forming, we considered the presence of at least one of the following tracers: $24\usk\micro\metre$ emission, $1.3\usk\centi \metre$ continuum emission, water masers, and ``green fuzzies'' (excess emission at $4.5\usk\micro\metre$). The star-forming sub-sample (SFS) includes these sources, while the quiescent sub-sample (QS) contains those without detectable signs of ongoing star formation.
We used the $M-r$ relation found by \citet{KauffmannPillai10} to assess whether the clumps in our sample without clear signs of ongoing massive star formation are also potentially able to form high-mass stars. Virtually all sources lie above the empirical relation, confirming that our sample is well suited to study evolution in the first stages of the formation of high-mass stars.
We explored if and how the average properties of a clump depend on the presence of active star formation and on its evolutionary phase (Sect.~\ref{sec:discussion}). The information from the $M-L$ plot and that on the presence of ongoing star formation are complementary: we find that Type 1 sources include both the star forming clumps with a low luminosity, well below the ZAMS loci in the $M-L$ plot, and the clumps in the QS, while Type 2 sources are the clumps in the SFS hosting a ZAMS star. For our sample, a convenient criterion based on $L$ was enough to separate Type 1 and Type 2 objects (cf. Fig.~\ref{fig:m-l_plot}): sources with $L>\pot{3}\usk\mathrm{L_\odot}$ are likely to host a ZAMS star. This idea is corroborated by the large fraction of these sources that are encompassed by the two determinations of the ZAMS locus \citep[][]{Molinari+08, Urquhart+13}, and show radio-continuum and strong mid-IR emission, i.e. an UCH\textsc{ii}\ region. Therefore we define the following classes of objects using both the information obtained from the $M-L$ plot and from classical signposts of star formation: QS for quiescent Type 1 sources, SFS-1 for Type 1 sources with signs of active star formation, and SFS-2 for Type 2 sources.
A sketch of these phases with the typical values for the physical parameters derived in this work is shown in Fig.~\ref{fig:sketch_ev_ph}.
Analyzing the typical properties of the clumps in our sample we find that they depend on the evolutionary phase of the source. The differences found can be summarized as follows:
\begin{itemize}
\item SFS-2 sources always show radio continuum or strong mid-IR emission, suggesting the presence of an H\textsc{ii}\ region, while QS and SFS-1 objects do not.
\item The average temperatures (both kinetic and dust) of the three evolutionary classes slowly increase from the QS, to SFS-1, to SFS-2 sources, with typical values of $\sim13\usk\kelvin$, $17\usk\kelvin$ and $\sim23\usk\kelvin$, respectively.
\item The temperature of the clumps appears to be correlated with the luminosity of the source (Fig.~\ref{fig:tk_l}).
\item SFS-2 objects have smaller FWHM diameters (median values of $0.5$ \textit{vs.} $0.8\usk\mathrm{pc}$ for SFS-2 and QS, respectively), due to the presence of strong and compact peaks of emission at mm wavelengths. This could be caused by the presence of a temperature gradient in the SFS-2.
\item As a consequence, clumps classified as SFS-2 on average have higher volume-, column- and surface-densities inside the FWHM intensity contour of the $1.2\usk\milli\metre$ continuum emission.
\item Assuming that density (for all clumps) and temperature (for SFS-2 clumps) both vary as power laws as a function of clump radius, we derived the power law indices for density, and found them to be steeper in more evolved sources. Typical power law indices for the molecular hydrogen volume density are $p\sim1.2-1.4$ for QS and $\sim1.5-1.8$ for SFS-2 sources. These results indicate that more evolved sources are indeed denser and more centrally concentrated.
\item The fact that SFS-2 sources are the most extreme in terms of compactness suggests that QS sources are still contracting.
\end{itemize}
\begin{acknowledgements}
The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This research made use of data products from the Midcourse Space Experiment. Processing of the data was funded by the Ballistic Missile Defense Organization with additional support from NASA Office of Space Science. This research has also made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research made use of the NASA ADS database.
\end{acknowledgements}
\bibliographystyle{bibtex/aa}
\label{sec: introduction}
With the deployment of first releases of 5G 3GPP wireless networks standards, the research community is already defining the scenarios and directions for Beyond 5G and 6G future systems~\cite{6GWhitePaper,harish20206g}.
Interest in Artificial Intelligence techniques has surged, prompting a rethinking of many staple techniques commonly used in communications~\cite{hoydis2020special}. At the same time, some of the old challenges remain pressing, like enforcing network flexibility with a variety of possible Quality of Service (QoS) demands by its users~\cite{ABKKM-EAMSWA,maeder2016scalable,MABK-SNSC}.
Heterogeneous QoS profiles and increasing traffic demands require the implementation of a proper prioritization framework in base stations. In wireless networks, the prevention of cell congestion cannot be delegated to real-time operations, where simpler algorithms, e.g., radio resource scheduling, are running. The typical first line of defence is a proper Admission Control (AC) mechanism.
It can be described as an agent that decides whether or not to accept a new user equipment (UE) connection request, based on the current cell load and the QoS profile of the ongoing traffic and of the new request, de facto aiming at preventing congestion while serving as much traffic as possible.
The most relevant performance metrics for AC are the blocking probability, i.e., the probability that a new UE connection request is blocked, and the dropping probability, i.e., the probability that an existing UE connection is terminated due to insufficient available resources at the base station.
The AC problem in cellular systems already received a lot of attention and is well studied~\cite{DM-CALS,RNT-OCAC,MHT-CACCDMA}.
The AC problem for 5G wireless networks is attracting renewed interest because of the new complexities within these networks, see~\cite{AACA-CACUDN, HNLF-ACNS}. Most notable are the state-of-the-art solutions using reinforcement learning (RL). Unlike model-based algorithms, RL does not require specific state transition models, which is a very important feature when considering large wireless networks supporting various types of UEs. In particular, agents based on Q-learning, see~\cite{SBP-DQLAC, TB-ACAC}, and on the use of neural networks, see~\cite{BKO-TASAC, MG-CAC}, have been studied. In~\cite{MG-CAC} it is assumed that the channel rate of the UE is constant over time, while~\cite{BKO-TASAC} considers a scenario with network slices whose stochastic resource requirements are fixed over time.
As alluded to above, the goal of the AC agent is to strike an optimal trade-off between the blocking and dropping probability, all in the presence of varying channel rates of the UE.
The novelty of our approach compared to the closest prior art~\cite{BKO-TASAC, MG-CAC} is the consideration of UE mobility that impacts the large scale components of the wireless channel, according to~\cite{TR38.901}.
So it may be that, at the time of a new UE connection request, the base station has enough available resources to serve it. However, due to varying channel rates given by different (i) UE position and (ii) serving cell, the required resources to guarantee the same throughput QoS may change. Therefore, a UE connection may be dropped since the base station is not capable of providing enough resources to all the UEs.
We consider two approaches for solving the AC problem: i) a threshold policy and ii) a reinforcement learning policy. Extensive simulation experiments are conducted to analyze the performance of both policies. The results show that the reinforcement learning policy outperforms the threshold-based policies, being able to generalize its operations in the scenario with heterogeneous time-varying arrival rates and multiple UE types.
The remainder of the paper is organized as follows. In Section~\ref{sec: methods} we explain the AC policies. In Section~\ref{sec: simulation environment} we provide a description of the simulation environment and Section~\ref{sec: simulation results} reports on extensive simulation experiments, in which the performance of all the policies is compared. Section~\ref{sec: conclusion} contains conclusions and some suggestions for further research.
\section{Considered Methods}
\label{sec: methods}
In this section we discuss the various AC policies that are proposed in this paper.
Before introducing the threshold-based and reinforcement learning policies, we first discuss the reward framework.
The purpose of this reward framework is threefold: i) it is needed for updating the Q values in the reinforcement learning policy, ii) it enables us to compare the various policies based on the total (discounted) reward, where the goal for each policy is to maximize the total (discounted) reward, and iii) it allows us to distinguish and prioritize between UE types. Indeed, the operator can construct the rewards in such a way that dropping a certain type is highly unfavorable relative to the other types. Let $r_{\mathrm{x},C(i)}$ be the reward of the event $\mathrm{x}$ for the $i$th UE, where $C: I \rightarrow M$ is a function that maps the UE to its type, with $I$ denoting the set of UEs and $M$ the set of types. Accepting a new UE connection request yields a (positive) reward $r_{\mathrm{A},C(i)}$, while for blocking this UE connection we pay a (negative) penalty $r_{\mathrm{B},C(i)}$. Moreover, for dropping an existing UE connection we pay a penalty, i.e., receive a negative reward $r_{\mathrm{D},C(i)}$. From a user experience perspective it is more bothersome when the connection is abruptly terminated than not being able to establish the connection in the first place.
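As a minimal sketch of how such a reward framework can be encoded (the UE type names and all numerical values below are hypothetical, chosen only for illustration):
\begin{verbatim}
# Hypothetical reward table r_{x,C(i)} per UE type; dropping is
# penalized more heavily than blocking, as motivated above.
REWARDS = {
    "type_1": {"accept": 1.0, "block": -1.0, "drop": -5.0},
    "type_2": {"accept": 2.0, "block": -2.0, "drop": -20.0},
}

def reward(event, ue_type):
    """Return r_{x,C(i)} for event x in {accept, block, drop}."""
    return REWARDS[ue_type][event]
\end{verbatim}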
\subsection{Threshold policies}
\label{sec: threshold policies}
We consider two threshold-based policies which are the prior art, see for example~\cite{CBVCA-NSGRS}. In the \textit{threshold UE} policy a new UE connection request is accepted if the total number of UEs served by the base station is less than a certain specified threshold $\tau_{\mathrm{U}}$.
In the \textit{threshold resource} policy a new UE connection request is accepted if the total occupied resource of the base station plus the requested resource of the new UE connection is less than a certain specified threshold $\tau_{\mathrm{R}}$. Equivalently, this policy ensures that the fraction of occupied resources after accepting the UE connection does not exceed $\tau_{\mathrm{R}}/B$, with $B$ denoting the total bandwidth of the base station. Observe that in both policies the number of UEs or the total occupied resource can exceed the threshold in a specific base station at some point in time, since we allow for mobility of the UEs.
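Both rules reduce to a one-line acceptance test, as in the following Python sketch (the variable and function names are ours):
\begin{verbatim}
def threshold_ue(n_active, tau_u):
    """Accept iff the number of UEs currently served is below tau_u."""
    return n_active < tau_u

def threshold_resource(occupied, requested, tau_r):
    """Accept iff occupied resources plus the request stay below
    tau_r (i.e., the occupied fraction after acceptance is below
    tau_r / B)."""
    return occupied + requested < tau_r
\end{verbatim}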
\subsection{Q-learning}
Q-learning is a reinforcement learning algorithm that learns the quality of actions under a generic set of circumstances, captured in the system state \cite{WD-QL}. Note that this algorithm is model-free, meaning that it does not require the formalization of any underlying model.
Let the action space be denoted by $\mathcal{A}$ and the state space by $\mathcal{S}$. At each execution time the AC agent can perform two actions: either block or accept new UE connection requests, i.e., $\mathcal{A} = \{\text{block}, \text{accept}\}$ which we denote by $a^{-}$ and $a^{+}$, respectively. The set of features defining the state space will be varied to investigate the effect of additional features on the performance of the agent.
Let $s_{i} \in \mathcal{S}$ denote the state upon request of a new connection by UE $i$. For this UE connection the AC agent will take the action $a_{i} \in \mathcal{A}$ which has the highest Q value, i.e., $a_{i} = \text{arg}\max_{a \in \mathcal{A}} Q(s_{i},a)$.
The Q values for each action are stored in a so-called look-up table and are updated based on the reward framework.
The Q values represent the expected discounted reward and are updated according to the Bellman equation given by,
\begin{align}
\label{eq: updating Q table}
Q^{\mathrm{new}}(s_{i},a_{i}) &= Q(s_{i},a_{i}) + \alpha \Big( r(s_{i},a_{i}) \nonumber \\
& \quad + \gamma^{\Delta t} \max_{a \in \mathcal{A}}Q(s_{i+1},a) - Q(s_{i},a_{i}) \Big),
\end{align}
where $\alpha$ is the learning rate, $\gamma$ the discount factor, $\Delta t$ the time until the next decision making point and the discounted reward
\begin{align*}
r(s_{i},a_{i}) =
\begin{cases}
r_{\mathrm{B},C(i)} & \text{ if } a_{i} = a^{-}, \\
r_{\mathrm{A},C(i)} + \gamma^{\Delta t_{\mathrm{D},i}} r_{\mathrm{D},C(i)} \mathbbm{1}_{\text{UE $i$ dropped}} & \text{ if } a_{i} = a^{+},
\end{cases}
\end{align*}
where $\mathbbm{1}$ denotes the indicator function and $\Delta t_{\mathrm{D},i}$ the time between accepting and dropping the connection of UE $i$.
This updating rule in Q-learning requires a discrete state space $\mathcal{S}$ and action space $\mathcal{A}$. Since our state space is continuous, we quantize it. Observe that there is a trade-off between minimizing the quantization error and the inefficiency of learning, which is due to the curse of dimensionality.
In this paper, we allocate the (negative) discounted reward of dropping $\gamma^{\Delta t_{\mathrm{D},i}} r_{\mathrm{D},C(i)}$ to the time and state where the action of accepting that same UE was taken, see Figure~\ref{fig: visualization dropping} for the visualization of this dropping allocation.
However, one might argue that the UE that was last accepted caused the overload of the base station and therefore should be penalized or that all the UEs present at the moment of dropping are to blame. Note that in this latter case also UEs that are not dropped are penalized. We leave the comparison of all these different dropping allocations for further research.
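A minimal Python sketch of this tabular update, with a single UE type, hypothetical rewards and the drop penalty discounted back to the accepting decision as described above (action 0 encodes blocking, 1 accepting; exploration, e.g., epsilon-greedy, is omitted for brevity):
\begin{verbatim}
import numpy as np
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99                # illustrative hyperparameters
R_ACC, R_BLK, R_DRP = 1.0, -1.0, -10.0  # hypothetical rewards
Q = defaultdict(lambda: np.zeros(2))    # quantized state -> Q values

def act(s):
    """Greedy action in state s: 0 = block, 1 = accept."""
    return int(np.argmax(Q[s]))

def discounted_reward(a, dropped=False, dt_drop=0.0):
    """r(s_i, a_i); a drop penalty is discounted back to the
    state in which the dropped UE was accepted."""
    if a == 0:
        return R_BLK
    return R_ACC + (GAMMA**dt_drop * R_DRP if dropped else 0.0)

def td_update(s, a, r, s_next, dt):
    """Bellman update; dt is the time to the next decision point."""
    Q[s][a] += ALPHA * (r + GAMMA**dt * Q[s_next].max() - Q[s][a])
\end{verbatim}
Note that, with this dropping allocation, the update for an accepted UE can only be performed once it is known whether that connection finishes or is dropped.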
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{DroppingRuleAllocation.pdf}
\caption{Visualization of the dropping rule that allocates the penalty for dropping to the state when the dropped UE was accepted.}
\label{fig: visualization dropping}
\end{figure}
\subsection{Deep Q-learning}
As already mentioned above, the drawbacks of Q-learning are the discretization of the state space and the curse of dimensionality of the look-up table when increasing the number of features.
One of the solutions to this problem is Deep Q-learning (DQL) \cite{GBC-DL}, since this approach allows for a continuous state and action space.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{ArchitectureNN.pdf}
\caption{Visualization of the architecture of the Neural Network.}
\label{fig: visualization neural network}
\end{figure}
In Deep Q-learning the look-up table is replaced by a (Deep) Neural Network (NN), see Figure~\ref{fig: visualization neural network} for the NN that we used for conducting the simulation experiments in Section~\ref{sec: simulation results}.
Similar to the Q-learning policy, for the new UE connection request in state $s_{i}$ we take the action $a_{i}$ that gives the highest Q value, where the Q values are obtained by forwarding the input state $s_{i}$ through the Prediction Neural Network (PNN), see Figure~\ref{fig: updating neural network}.
Updating the weights in the PNN and thus indirectly the Q values is different from the Q-learning method. Figure~\ref{fig: updating neural network} gives a schematic overview of this updating. In the figure it can be seen that besides the PNN there is also a Target Neural Network (TNN).
In Deep Q-learning we replace the value function with a function approximator. As a consequence, in Q-learning we update exactly one Q value, see Equation~\eqref{eq: updating Q table}, whereas in Deep Q-learning we update many due to the fact that all outputs depend on all neurons' weights in the network. This causes the problem that the updating affects the Q values for the next decision. Thus, the purpose of the TNN is to have a fixed neural network for the PNN to converge to, which ensures robustness and more stable convergence to the correct Q values.
The temporal-difference error of the PNN is given by,
\begin{align}
\label{eq: updating NN}
r(s_{i},a_{i}) + \gamma^{\Delta t} \max_{a \in \mathcal{A}}Q(s_{i+1},a) - Q(s_{i},a_{i}).
\end{align}
As can be seen in Figure~\ref{fig: updating neural network}, we use the mean squared error (MSE) of this temporal-difference error as the loss for updating the weights of the PNN via back-propagation (with the Adam optimizer and learning rate $0.0001$).
For the pseudo code corresponding to the DQL policy we refer to Appendix~\ref{app sec: pseudocode}.
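A minimal PyTorch sketch of this PNN/TNN update is given below; the layer sizes are an assumption for illustration (Figure~\ref{fig: visualization neural network} shows the architecture actually used), while the learning rate matches the value quoted above:
\begin{verbatim}
import torch
import torch.nn as nn

def make_net(n_features):
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 2))       # Q(block), Q(accept)

pnn = make_net(4)                                # prediction network
tnn = make_net(4)                                # target network
tnn.load_state_dict(pnn.state_dict())            # TNN starts as a copy
opt = torch.optim.Adam(pnn.parameters(), lr=1e-4)
GAMMA = 0.99

def dql_update(s, a, r, s_next, dt):
    """One MSE/TD update of the PNN; the TNN gives a fixed target."""
    q_pred = pnn(s)[a]
    with torch.no_grad():                        # do not train the TNN
        target = r + GAMMA**dt * tnn(s_next).max()
    loss = nn.functional.mse_loss(q_pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# periodically: tnn.load_state_dict(pnn.state_dict())
\end{verbatim}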
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{UpdatingWeightsNN.pdf}
\caption{Visualization of the updating of the neural network.}
\label{fig: updating neural network}
\end{figure}
Some comments are in order regarding the training of the DQL policy.
The performance of this policy is highly sensitive to the hyperparameters, and we observed that the DQL policy did not always converge to a network that provides the right decision for accepting or blocking a new UE connection request.
\subsection{Clairvoyant}
Normally, clairvoyant policies assume full knowledge of the system to make optimal decisions. However, in our AC model we allow for time-varying channel rates that depend, among other parameters, on the occupied resources of each base station, and in addition we allow for multiple UE types with possibly different rewards. Due to the complexity of the dependencies it is NP-hard to derive the true optimal AC policy. Next, we describe the clairvoyant policy that we consider, which gives a total reward that approximates the true optimal total reward.
In the \textit{no dropping clairvoyant} (NDC) policy all UE connection requests are initially accepted. However, accepting all these connections may lead to overload of the base station and therefore dropping of UE connections. Whenever a UE connection is dropped, we block this UE connection request in hindsight, see Algorithm~\ref{alg: clairvoyant policy}. Note that blocking this UE connection changes the evolution of the simulation compared to the simulation in which the UE was accepted; however, we do not take this into account. This is one of the reasons why the clairvoyant policy is not truly optimal. Furthermore, observe that the dropping probability in the clairvoyant policy is zero.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\IF{new UE connection request}
\STATE Accept UE connection
\STATE Add reward for accepting the UE connection
\ENDIF
\IF{finishing UE connection}
\STATE Do nothing
\ENDIF
\IF{dropping UE connection}
\STATE Subtract reward for accepting the UE connection
\STATE Add penalty for blocking the UE connection
\ENDIF
\end{algorithmic}
\caption{No dropping clairvoyant policy}
\label{alg: clairvoyant policy}
\end{algorithm}
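The reward bookkeeping of Algorithm~\ref{alg: clairvoyant policy} can be sketched in a few lines of Python (a single UE type and hypothetical reward values, for brevity):
\begin{verbatim}
def ndc_total_reward(events, r_acc=1.0, r_blk=-1.0):
    """Total reward of the no-dropping clairvoyant policy.

    events: (kind, ue_id) tuples in simulation order, where kind
    is "request", "finish" or "drop"; rewards are hypothetical.
    """
    total = 0.0
    for kind, ue in events:
        if kind == "request":
            total += r_acc           # initially accept every request
        elif kind == "drop":
            total += r_blk - r_acc   # block this UE in hindsight
    return total
\end{verbatim}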
\section{Simulation Setup}
\label{sec: simulation environment}
The simulation environment considered in this work reproduces the downlink channel between every active base station and every UE in the system, according to the 3D Urban Macro (3D-UMa) scenario defined by 3GPP~\cite{TR38.901, TR36.873}. We do not consider the effect of fast fading, but only generate the effects of path loss, spatially coherent line/non-line-of-sight conditions and shadowing, since we are interested in the average resource demand of UEs and not in stringent delay requirements. Accordingly, we assume isotropic single antenna transmitters and receivers.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{HexagonLayout.png}
\caption{Visualization of hexagon $7$ cell layout with wraparound.}
\label{fig: hexagon layout}
\end{figure}
We consider a hexagon $7$ cell layout with wraparound, see Figure~\ref{fig: hexagon layout}.
New UE connection requests come into the system according to a Poisson point process with rate $\lambda$ and once accepted the UE travels throughout the cells with a fixed velocity $v$ and fixed linear trajectory, determined upon acceptance. Each UE requires a certain amount of resources from the base station by which it is served. Observe that the traffic we consider is drop-sensitive and that one can always operate at full capacity by also accepting best-effort connections.
We highlight that metrics corresponding to the $i$-th UE connection carry a subscript $i$. The amount of resources occupied by UE $i$ is based on 3GPP TR 38.901~\cite{TR38.901} and is determined as follows.
\begin{figure*}[htbp]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{SignalStrength_user.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{Resource_user.pdf}
\end{minipage}
\caption{Realization of the signal strength and corresponding occupied resources for two UEs in the scenario with $f_{c}=2$ GHz and $h_{\mathrm{UT},i}=1.5$ m. Both UEs start at the centre of cell $1$ and move with unit speed to the right according to Figure~\ref{fig: hexagon layout}.}
\label{fig: signal strength resource}
\end{figure*}
Let $\textit{PL}_{\mathrm{LOS},i}$ denote the line of sight (LOS) pathloss of UE $i$, which is given by
\begin{align*}
\textit{PL}_{\mathrm{LOS},i} = 22 \log_{10}(d_{\mathrm{3d},i}) + 28 + 20 \log_{10} (f_{c}),
\end{align*}
where $d_{\mathrm{3d},i}$ is the 3-dimensional distance (in metres) between this UE and the base station by which it is served and $f_{c}$ the carrier frequency in GHz.
Moreover, let $\textit{PL}_{\mathrm{NLOS},i}$ denote the non-line of sight (NLOS) pathloss of UE $i$ given by
\begin{align*}
\textit{PL}_{\mathrm{NLOS},i} &= 13.54 + 39.88 \log_{10}(d_{\mathrm{3d},i}) + 20 \log_{10} (f_{c}) \\
&\qquad - 0.6(h_{\mathrm{UT},i}-1.5),
\end{align*}
where $h_{\mathrm{UT},i}$ is the height of this UE in metres. In Figure~\ref{app fig: pathloss} (Appendix~\ref{sec app: simulatin results}) both the LOS and NLOS pathloss are depicted as a function of $d_{\mathrm{3d},i}$, with $f_{c}=2$ GHz and $h_{\mathrm{UT},i}=1.5$ m.
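For reference, the two formulas translate directly into code (a minimal Python sketch, assuming $d_{\mathrm{3d},i}$ in metres and $f_{c}$ in GHz as above):
\begin{verbatim}
import math

def pathloss_los(d3d, fc):
    """LOS pathloss in dB (d3d in metres, fc in GHz)."""
    return 22.0 * math.log10(d3d) + 28.0 + 20.0 * math.log10(fc)

def pathloss_nlos(d3d, fc, h_ut=1.5):
    """NLOS pathloss in dB for a UE at height h_ut metres."""
    return (13.54 + 39.88 * math.log10(d3d)
            + 20.0 * math.log10(fc) - 0.6 * (h_ut - 1.5))
\end{verbatim}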
Upon acceptance of the new UE connection the UE either has LOS or NLOS pathloss with probability
\begin{align*}
\mathbb{P}(\textit{PL}_{i} = \textit{PL}_{\mathrm{LOS},i}) =
\begin{cases}
1 \qquad \qquad \qquad \text{ for } d_{\mathrm{2D-out},i} \leq 18,\\
\frac{18}{d_{\mathrm{2D-out},i}} + (1-\frac{18}{d_{\mathrm{2D-out},i}}) e^{- \frac{d_{\mathrm{2D-out},i}}{63}} \\ \qquad \qquad \qquad \text{ for } d_{\mathrm{2D-out},i} > 18,
\end{cases}
\end{align*}
and $1 - \mathbb{P}(\textit{PL}_{i} = \textit{PL}_{\mathrm{LOS},i})$, respectively,
where $d_{\mathrm{2D-out},i}$ is the distance in the horizontal plane between the base station by which the UE is served and the outer wall of the building in which the UE is located, see also~\cite[Figure~7.4.1-2]{TR38.901}.
We assume that after $d_{\mathrm{cor}}$ meters the pathloss of the UE is completely uncorrelated, meaning that again the UE has $\textit{PL}_{\mathrm{LOS},i}$ or $\textit{PL}_{\mathrm{NLOS},i}$ with probability $\mathbb{P}(\textit{PL}_{i} = \textit{PL}_{\mathrm{LOS},i})$ and $1-\mathbb{P}(\textit{PL}_{i} = \textit{PL}_{\mathrm{LOS},i})$, respectively. In between, we apply linear interpolation to compute the pathloss $\textit{PL}_{i}$ of the UE $i$.
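The LOS probability and the redraw-and-interpolate scheme can be sketched as follows (Python; the function names are ours, and we assume the interpolation acts on the pathloss values in dB):
\begin{verbatim}
import math, random

def p_los(d2d_out):
    """LOS probability for outdoor 2D distance d2d_out in metres."""
    if d2d_out <= 18.0:
        return 1.0
    return (18.0 / d2d_out
            + (1.0 - 18.0 / d2d_out) * math.exp(-d2d_out / 63.0))

def redraw_los(d2d_out):
    """Redraw the LOS/NLOS state after d_cor metres of travel."""
    return random.random() < p_los(d2d_out)

def interpolate_pl(pl_old, pl_new, travelled, d_cor=37.0):
    """Linearly interpolate the pathloss (dB) between two redraws."""
    w = min(travelled / d_cor, 1.0)
    return (1.0 - w) * pl_old + w * pl_new
\end{verbatim}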
Let $X_{i}$ denote the shadowing component of UE $i$ which has auto-correlation function
\begin{align*}
R(\Delta x_{i}) = e^{-\frac{\Delta x_{i}}{d_{\mathrm{cor}}}},
\end{align*}
where $d_{\mathrm{cor}}$ denotes the correlation length and $\Delta x_{i}$ the distance in the horizontal plane between the current position of the UE and the location upon establishing the connection. In Figure~\ref{fig: signal strength resource} (left) a realization of the signal strength is depicted, i.e., the sum of the pathloss and shadowing component ($\textit{PL}_{i} + X_{i}$).
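One standard way to realize a Gaussian process with this exponential autocorrelation is a first-order autoregressive recursion over distance steps; the following Python sketch (the sampling step \texttt{dx} is our own choice) keeps the marginal $\mathcal{N}(0,\sigma_{\mathrm{s}}^{2})$ and the autocorrelation $e^{-\Delta x/d_{\mathrm{cor}}}$:
\begin{verbatim}
import math, random

def shadowing_track(n_steps, dx, sigma_s=4.0, d_cor=37.0):
    """Shadowing samples in dB, one every dx metres of travel."""
    rho = math.exp(-dx / d_cor)   # per-step correlation
    x = random.gauss(0.0, sigma_s)
    track = [x]
    for _ in range(n_steps - 1):
        x = rho * x + math.sqrt(1.0 - rho**2) * random.gauss(0.0, sigma_s)
        track.append(x)
    return track
\end{verbatim}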
For deriving the signal-to-interference-plus-noise ratio (SINR) of UE $i$, denoted by $\textit{SINR}_{i}$, we first have to derive the received power, the noise power and the interference power.
The received power of UE $i$ is given by\begin{align*}
P^{R}_{i} = P^{T} \frac{B_{i}}{\textit{PL}_{i} X_{i}},
\end{align*}
where $B_{i}$ is the allocation fraction of resources to UE $i$.
The noise power is given by
\begin{align*}
P^{N}_{i} = N_{0} B B_{i},
\end{align*}
where $N_{0} = -174$ dBm/Hz and $B$ the total bandwidth in hertz.
The interference power is given by
\begin{align*}
P^{I}_{i} = \sum_{j \in J_{-i}} P^{T} \frac{B_{i}}{\textit{PL}^{j}_{i} X^{j}_{i}} B^{j},
\end{align*}
where $PL^{j}_{i}$ and $X^{j}_{i}$ denote the pathloss and the shadowing component, respectively, of UE $i$ at base station $j$, $J_{-i}$ the set of all base stations except the one of UE $i$ and $B^{j}$ the occupied resource of base station $j$.
Now the signal-to-interference-plus-noise ratio is given by
\begin{align*}
\textit{SINR}_{i} = \frac{P^{R}_{i}}{P^{N}_{i} + P^{I}_{i}}.
\end{align*}
The channel rate of the UE is given by
\begin{align*}
c_{u,i} = \log_{2}(1+\textit{SINR}_{i}),
\end{align*}
which in the simulation is capped between $0.32$ and $7.6$ bit/s/Hz, corresponding to SINR values of $-6$ dB and $22.9$ dB, respectively.
The resources needed from the base station $j_{i}$ for UE $i$ are equal to $\gamma_{\mathrm{T},i} / c_{u,i}$, where $\gamma_{\mathrm{T},i}$ is the minimum rate requirement of UE $i$. The base station that serves the UE is changed when another base station has a better combined signal strength by a margin of $3$ dB.
For a realization of the occupied resource of an UE see Figure~\ref{fig: signal strength resource} (right) (the corresponding channel rate can be found in Appendix~\ref{sec app: simulatin results}, Figure~\ref{app fig: channel rate}).
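Putting the pieces together, the per-UE resource demand can be sketched as follows (Python; the names are ours, powers and losses are converted from dB to linear units, and the demand is returned as a fraction of the total bandwidth $B$; in the simulator this computation is presumably iterated, since the demand enters the cell loads on which the SINR itself depends):
\begin{verbatim}
import math

def db2lin(x_db):
    return 10.0 ** (x_db / 10.0)

def resource_demand(pl_db, sh_db, b_i, interferers,
                    p_tx_dbm=46.0, n0_dbm_hz=-174.0,
                    bw_hz=10e6, gamma_t=1e6):
    """Resource fraction of one UE; interferers is a list of
    (pl_db, sh_db, load) triples for the other base stations."""
    p_tx = db2lin(p_tx_dbm)                        # transmit power
    p_rx = p_tx * b_i / db2lin(pl_db + sh_db)      # received power
    p_n = db2lin(n0_dbm_hz) * bw_hz * b_i          # noise power
    p_i = sum(p_tx * b_i / db2lin(pl_j + sh_j) * load_j
              for (pl_j, sh_j, load_j) in interferers)
    sinr = p_rx / (p_n + p_i)
    c = min(max(math.log2(1.0 + sinr), 0.32), 7.6) # capped rate
    return gamma_t / (c * bw_hz)                   # fraction of B
\end{verbatim}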
After a UE connection has been accepted, it either finishes service or it is dropped because the total occupied resources at the base station by which it is currently served exceed the total bandwidth. In this paper we consider the \textit{cost per resource} dropping rule. As the name suggests, the UE connection at the base station of interest that has the lowest cost, i.e., the smallest penalty, per resource is terminated (see the sketch after the list below). Other dropping rules might include:
\begin{enumerate}[i)]
\item \textit{random} in which a random UE connection is terminated,
\item \textit{channel rate} in which the UE connection with the lowest channel rate is terminated,
\item \textit{last acceptance} in which the UE that established the connection the latest is terminated.
\end{enumerate}
We leave the comparison of different dropping rules for further research.
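A minimal sketch of the cost per resource rule (Python; we represent a connection by its dropping penalty, taken as a positive magnitude, and its occupied resource fraction):
\begin{verbatim}
def drop_until_feasible(connections, capacity=1.0):
    """While the cell is overloaded, terminate the connection
    with the smallest penalty per occupied resource."""
    dropped = []
    while sum(c["resource"] for c in connections) > capacity:
        victim = min(connections,
                     key=lambda c: c["penalty"] / c["resource"])
        connections.remove(victim)
        dropped.append(victim)
    return dropped
\end{verbatim}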
An overview of all the input parameters can be found in Table~\ref{tab: input parameters}.
\begin{table}[h]
\caption{Input parameters for the simulation.}
\label{tab: input parameters}
\centering
\begin{tabular}{|l|l|}
\hline
Arrivals as Poisson point process & $\lambda$ \\ \hline
Inter Site Distance & $400$ m \\ \hline
Velocity of UEs & $v \sim \mathrm{Unif}[1,5]$ m/s \\ \hline
Holding time of UEs & $X_{\mathrm{B}} \sim \mathrm{Exp}(\mu=0.005)$ s \\ \hline
Carrier frequency & $f_{c}=2$ GHz \\ \hline
Transmit power & $P^{T}=46${ dBm} \\ \hline
Base station height & $h_{\mathrm{bs}}=25$ m \\ \hline
UE height & $h_{\mathrm{UT}}=1.5$ m \\ \hline
Shadowing component & $X_{\mathrm{s}} \sim \mathcal{N}(0,\sigma_{\mathrm{s}}^{2})$ with $\sigma_{\mathrm{s}}=4$ dB \\ \hline
Distance correlation for shadowing & $d_{\mathrm{cor}}=37$ m \\ \hline
Throughput & $\gamma_{\mathrm{T}}=1$ Mb/s \\ \hline
Total bandwidth cell & $B = 10$ MHz \\ \hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{ThresholdResource.pdf}
\caption{Discounted reward for the threshold policy based on the occupied resource of the base station for several threshold values $\tau_{\mathrm{R}}/B$ in the scenario with homogeneous arrivals, one UE type with rewards $r_{\mathrm{A}} = 10$, $r_{\mathrm{B}} = 0$ and $r_{\mathrm{D}} = -100$, and cost per resource dropping.}
\label{fig: discounted cost threshold FullCell types1 seed1 CPR}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{Homogeneous_vs_heterogeneous.pdf}
\caption{Discounted reward for various policies for the scenario with homogeneous (left) and heterogeneous (right) arrival rates, one UE type with rewards $r_{\mathrm{A}} = 10$, $r_{\mathrm{B}} = 0$ and $r_{\mathrm{D}} = -100$, and cost per resource dropping. Note that both the QL and DQL policy are trained to include the arrival rate (as input).}
\label{fig: discounted cost ALL FullCell types1 seed1 fixed rates}
\end{figure*}
\section{Simulation results}
\label{sec: simulation results}
In this section we study the performance of the AC policies described in Section~\ref{sec: methods} in various scenarios. First, in Sections~\ref{sec: results threshold} and \ref{sec: results NN}, we will consider a scenario in which the rates for new UE connection requests are spatially homogeneous, meaning that the rates in all cells are equal. Moreover, we conduct every simulation with $10^{5}$ UE connection requests and consider the \textit{cost per resource} dropping rule where the (negative) penalty of dropping is allocated to the UE that is dropped.
\subsection{Threshold policy}
\label{sec: results threshold}
In Section~\ref{sec: methods} we distinguish between two threshold policies. In Figure~\ref{fig: discounted cost threshold FullCell types1 seed1 CPR} the threshold policy based on the occupied resources is depicted for various threshold values (the threshold policy based on the number of UEs in the cell shows similar behavior). It can be seen that the best threshold value differs depending on the rate of the UE connection requests.
Interestingly, Figure~\ref{fig: discounted cost threshold FullCell types1 seed1 CPR} shows that the best resource-based threshold policy accepts a UE connection when the load of drop-sensitive traffic is between 40\% and 60\%, depending on the rate. Note that, due to the time-varying channel rates, the fraction of occupied resources of this traffic may exceed the specified threshold value, as already mentioned when the threshold-based policies were introduced.
Observe that the discounted reward increases with the mean interarrival time, since a larger mean interarrival time means that within a certain time interval there are fewer UE connection requests. Fewer UE connection requests per time interval in turn means that a higher fraction of them can be accepted, as fewer UE connections are served by the base station.
In the remainder of this paper we will only depict the \textit{frontier} of both threshold policies, i.e., the highest discounted reward for each mean interarrival time.
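For reference, both threshold decisions are one-liners (a sketch; $\tau_{\mathrm{R}}$ is the resource threshold of Figure~\ref{fig: discounted cost threshold FullCell types1 seed1 CPR}, while the name $\tau_{\mathrm{N}}$ for the UE-count threshold is ours):
\begin{verbatim}
def accept_resource_threshold(occupied_fraction, tau_r):
    """Accept iff the drop-sensitive load is at most tau_r."""
    return occupied_fraction <= tau_r

def accept_count_threshold(n_ues, tau_n):
    """Accept iff at most tau_n UEs are served by the cell."""
    return n_ues <= tau_n
\end{verbatim}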
\subsection{Deep Q-learning policy}
\label{sec: results NN}
For the Deep Q-learning policy we first study the effect of adding features to the state-space description.
The versions, each corresponding to a different set of features used as state-space description, can be summarized as follows, where each version cumulatively adds the following feature(s):
\begin{enumerate}[1)]
\item total resource usage of the cell,
\item resource usage of the arriving UE,
\item total resource usage of the neighboring cells,
\item quality of the UEs in the cell.
\end{enumerate}
These features are carefully chosen to potentially increase the performance, but also to represent a practical implementation of the policy. For example, adding features such as the trajectory and velocity of the UE might increase the performance, but this would amount to over-fitting the simulation model. The aim of our AC policy is to employ it in wireless networks, in which the trajectory and velocity of a UE are in most cases unknown. Note that the UE type is included as a feature using one-hot encoding, see the sketch below.
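The resulting state vectors can be assembled as in the following sketch (Python; the names are ours, and we assume, as the wording above suggests, that the features are added cumulatively per version):
\begin{verbatim}
def build_state(version, ue_type, n_types, cell_load, ue_resource,
                neighbour_loads, ue_qualities, arrival_rate=None):
    state = [1.0 if t == ue_type else 0.0  # one-hot UE type
             for t in range(n_types)]
    state.append(cell_load)                # version 1
    if version >= 2:
        state.append(ue_resource)          # version 2
    if version >= 3:
        state.extend(neighbour_loads)      # version 3
    if version >= 4:
        state.extend(ue_qualities)         # version 4
    if arrival_rate is not None:           # variant with known rate
        state.append(arrival_rate)
    return state
\end{verbatim}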
Surprisingly, simulation experiments (not included) demonstrated that all the versions perform equally well.
One would think that adding, for example, the total resource usage of neighboring cells as a feature would improve the DQL policy. However, observe that the state space does not include the trajectories and speeds of all the UEs. Consider the situation where one neighboring cell is fully loaded while all other neighboring cells are empty. On the one hand, if the DQL policy accepts a new UE connection, there is a probability of $1/7$ that the UE moves to the fully loaded cell, leading to a drop. On the other hand, if the DQL policy blocks the new UE connection, it does not use the full capacity of the base station corresponding to this UE. In addition, simulation experiments showed that the fraction of occupied resources at the base station fluctuates strongly.
\subsection{Comparison of various policies}
We compare the performance of the various policies for three different scenarios: spatially homogeneous (equal rates in all cells), heterogeneous (different rates in the cells) and time-varying heterogeneous arrival rates. The time-varying arrival rates are modeled by drawing new heterogeneous arrival rates for the cells every $t_{\text{var}}$ seconds. Note that for both threshold policies, i.e., based on the number of UEs and on the occupied resource, only the frontier is depicted. For the QL and DQL policies we trained the agent in such a way that it is able to cope with general arrival rates, meaning that we have a look-up table and an NN for the QL and DQL policy, respectively, in which the state space includes the arrival rate.
Figure~\ref{fig: discounted cost ALL FullCell types1 seed1 fixed rates} shows that the DQL policy performs equally well as the frontier of the threshold-based policies, both for fixed homogeneous and heterogeneous arrival rates, whereas the QL policy performs slightly worse. The lower performance of the QL policy might be due to the quantization error.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Timevarying.pdf}
\caption{Discounted reward for various policies for the scenario with fast ($t_{\text{var}}=1000$), medium ($t_{\text{var}}=5000$) and slow ($t_{\text{var}}=10000$) time-varying heterogeneous arrival rates, two UE types with rewards $\boldsymbol{r}_{\mathrm{A}} = (10,1)$, $\boldsymbol{r}_{\mathrm{B}} = (-10,-1)$ and $\boldsymbol{r}_{\mathrm{D}} = (-100,-10)$, and cost per resource dropping. Note that the DQL policy is trained to capture various arrival rates.}
\label{fig: discounted cost neuralnetwork FullCell types2 seed1 CPR timevarying}
\end{figure}
We cannot say with certainty whether these results indicate that the DQL policy is unable to discover and/or capture system characteristics that would improve the performance, or whether no such improvement is possible, in which case this is not a weakness of the policy.
Note that for the threshold policies we manually set the correct threshold value to achieve the frontier depicted in Figure~\ref{fig: discounted cost ALL FullCell types1 seed1 fixed rates} for a specific mean interarrival time, which a practical system would first need to predict from past arrivals, with the corresponding errors. Both the QL and DQL policies are more general than the threshold-based policies in the sense that they adaptively learn the decision that achieves this performance. In addition, both the QL and DQL policies are able to capture general arrival rates, whereas the best threshold value depends on the mean interarrival time and again would have to be set manually.
Figure~\ref{fig: discounted cost neuralnetwork FullCell types2 seed1 CPR timevarying} shows the performance of the AC policies in the most realistic scenario, i.e., multiple UE types with heterogeneous time-varying arrival rates. For this scenario the DQL policy outperforms the threshold-based policies. Note that in the threshold policies we manually set one threshold for every UE type (independent of the rate of new UE connection requests) once and for all in the simulation. Adapting the threshold for the various UE types is feasible, but would require tuning with even more complexity. The (well-trained) DQL policy is able to capture multiple UE types and the time-varying rate of new UE connection requests. Note that the DQL policy can be arbitrarily extended and trained with the same procedure, independently of the number of UE types.
The reward framework allows us to compare the performance of various policies, but it does not provide insight into the important performance metrics within AC, such as the blocking and dropping probabilities. In Table~\ref{tab: probabilities timevarying} these latter performance metrics are shown. Note that the scenario we considered has one UE type with a high reward for acceptance, whereas the other UE type has a low reward for acceptance. The table shows that the threshold-based policies do not differentiate between the two UE types. In contrast, the DQL policy significantly favors the high-reward UE type over the UE type with low reward. Moreover, observe that the DQL policy achieves a lower dropping probability for both UE types. According to~\cite{ITU-E.807} the drop rate should be at most $3\%$. Note that the dropping probability achieved by the algorithms can be decreased by increasing the penalty of dropping, and vice versa.
\begin{table}[h]
\caption{Acceptance and dropping probabilities for various policies in the same scenario as Figure~\ref{fig: discounted cost neuralnetwork FullCell types2 seed1 CPR timevarying} with fast ($t_{\text{var}}=1000$) varying heterogeneous arrival rates.}
\label{tab: probabilities timevarying}
\centering
\begin{tabular}{l|l|l|l|l}
& \multicolumn{2}{l|}{UE type $1$} & \multicolumn{2}{l}{UE type $2$} \\ \cline{2-5}
& Accept & Dropping & Accept & Dropping \\ \hline
Clairvoyant & $0.906$ & $0.0$ & $0.644$ & $0.0$ \\ \hline
Threshold resource & $0.646$ & $0.008$ & $0.646$ & $0.042$ \\ \hline
Threshold UEs & $0.724$ & $0.009$ & $0.718$ & $0.05$ \\ \hline
Deep Q-learning & $0.738$ & $0.005$ & $0.426$ & $0.023$
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec: conclusion}
In this paper we studied the performance of various policies for admission control in wireless networks. For fixed load and one UE type, the results indicate that the prior-art threshold policies perform equally well as our proposed Q-learning (QL) and Deep Q-learning (DQL) policies. However, note that both the QL and DQL policies adaptively learn the decision, whereas for the threshold-based policies we have to manually set the correct threshold, which is highly impractical.
For time-varying heterogeneous arrival rates and multiple UE types the DQL policy outperforms the threshold-based policies, due to its ability to capture general arrival rates and multiple UE types.
For further research one could investigate additional features for the DQL policy that were not considered in this paper.
Another extension of the DQL policy would be to add a long short-term memory (LSTM) cell to the NN, see~\cite{HS-LSTM}.
Lastly, it is widely agreed that network slicing is a key requirement for 5G mobile networks, see for example~\cite{ABKKM-EAMSWA, CBVCA-NSGRS, MABK-SNSC}. Note that our models can be easily extended and adapted to cope with the network slicing framework by splitting users of different slices into different types, each with different reward policies.
\section*{Acknowledgments}
The authors gratefully acknowledge insightful discussions with Sem Borst and Thorsten Wild.
\bibliographystyle{plain}
\section{Introduction and Main Results} \label{intro}
In ordinary Quantum Field Theory (QFT) with mass gap the notion of particle is recovered
from that of interacting local field as a consequence of Infra-Red (IR) asymptotic dynamics: a
near-mass-shell pole singularity in each of the momenta incoming any Green
function (guaranteed in Lagrangian QFT's by the possibility of imposing
on-shell normalization conditions on both mass and wave function
renormalizations) ensures the existence of the Lehmann-Symanzik-Zimmerman asymptotic limit of the
field $\cite{BLT}$. One is thus provided with an ordinary free Fock field, by means of
which an irreducible representation {\em \`a la} Wigner of the Poincar\'e
group, sitting on an isolated mass hyperboloid, is in turn constructed.
In this context the fact that the field/particle may or may not
carry quantum numbers associated with some unbroken global internal
symmetry is irrelevant.
In gauge theories (we will always have in mind QED and QCD in continuum
Minkowski 4-dimensional space-time with unbroken electric and colour
charges) things go in a different way. Indeed, the issue is one about which,
as yet, there is no general consensus.
On the one side QED -- with the exception of its zero charge sector -- still
is only a theory of inclusive cross sections, in which all the theoretical
set-up of quantum mechanics (states, observables, representation of
symmetries and the like) has no satisfactory {\em explicit} representation,
in spite of the general model-independent investigations $\cite{FMS,B1}$
that have delimited, so to speak, a possible battlefield: the battle is not yet won
and one could, in a provocative way, summarize the situation by saying that
the question: ``what is an electron in QED'' is still open.
On the other side there is, in QCD, the problem of confinement of coloured
gluons and quarks, about which there is even less to say. Many mechanisms
and criteria have been proposed over the years: some
(as {\em e.g.} the Wilson loop area behaviour $\cite{W}$, or
the fundamental role of topology leading to the
dual Meissner effect $\cite{tH1}$) are so suggestive that have become common language;
others (the $1/(k^2)^2$ IR behavior of the full gluon propagator $\cite{BZ,B3PZ}$, $\cite{ADJS,AB}$,
the quartet mechanism $\cite{KO}$ and the metric confinement $\cite{N1}$ both based on the existence of
LSZ asymptotic limits for colour fields,
violation of asymptotic completeness $\cite{dEM1}$,
the obstruction in the IR dressing due to Gribov ambiguities $\cite{LM}$,
and so many others that it would be impossible - and nonsensical -
to quote them all here) do not share the same popularity, but time and again are
reconsidered and revived.
However, so far none of these criteria has led to a systematic and generally accepted
description of what confinement is.
Prudentially we regard confinement as a delicate, multi-faceted subject
one can look at from different standpoints. We try here just to offer
a further standpoint, not necessarily in conflict with others, but
endowed with the possibility of a sound mathematical verification based
on the only input of implementing in QCD the symmetries that we believe
relevant: local gauge invariance and Poincar\'e.
It is convenient to state the terms of the problem of the particle content
of charged sectors in gauge theories within the framework of the Lagrangian
approach. We shall also assume that all the fields entering the Lagrangian
are local fields. These will be referred to as the basic fields of
the model. Ref. $\cite{NO}$ gives in detail the local covariant formulation
of the theory we shall rely on in the sequel. In particular the adjective
``physical'' will be referred to the fields that commute with -- or to states
that are annihilated by -- the Becchi--Rouet--Stora--Tyutin generator (the choice
of the local covariant formulation deserves a further comment: the fact that
manifest covariance is necessary to implement the renormalization procedure
may be regarded as a technical complication; to our knowledge, however,
a proof of renormalizability is given only in this context $\cite{tH2}$: that
is why we stick to it).
In this context it is convenient to distinguish four steps, all relevant in
designing the relationship between field and particle. We will try to
keep these steps as non-overlapping as possible:
(i) form of physical (composite) charged fields;
(ii) IR asymptotic dynamics;
(iii) existence of asymptotic limits and particle content;
(iv) $S$ matrix.
In this paper we will be mainly concerned with only (i) and (ii).
As for (i), it is well known that physical fields that are localized
functions of the basic fields transform trivially ({\em i.e.} have zero
charge) under any charge operator associated to a current obeying a Gauss
law: $j_\nu = \partial^\mu\,F_{\mu\nu}$. Indeed, in intuitive terms, thanks to the latter, the action of the
charge on any field $\Phi$ takes the form
\begin{eqnarray}
\delta \Phi = \lim_{R\to\infty} ~\left [ \int_{S_R}d{ S}_i\, { F}_{0 i}~,~ \Phi
\right ]
\label{GL}
\end{eqnarray}
where $S_R$ is the surface of the sphere of radius $R$ in 3-space.
Therefore, if $\Phi$ is (or the fields in terms of which it is constructed
are) smeared with functions of compact support, thanks to locality,
$\delta\Phi$ vanishes for $R$ large enough.
To avoid this the field $\Phi$ must have a ``tail'' through the sphere at
infinity in Minkowski space (whether only in space-like or even in
time-like directions is a subject to be taken up in the next section).
In this sense, as long as one is interested in physical nontrivially
charged fields, only nonlocalized functions of the basic fields ought
to be considered.
Since the above statement has been given the status of a theorem $\cite{FPS}$,
there is little to add and there is general agreement about it.
The theorem gives no hint, however, about the explicit form of such fields.
According to the terminology also recently used in Ref. $\cite{LM}$, such
nonlocalized functions will be shortly referred to as ``dressed'' fields:
a physical, {\em interacting} electron should be dressed with a cloud of
photons, as well as with its own Coulomb field.
Dirac $\cite{D}$ was the first to show, in an explicit way, how the dressing could
be done in order to endow an electron with its own Coulomb field.
His aim was a quantization of QED that would involve only those degrees of
freedom that actually contribute to the dynamic evolution of the system.
In retrospect, it does not come as a surprise that he gave up the manifest
covariance properties of the physical fields under Lorentz transformations:
it was well known, after the Gupta-Bleuler formulation, that, even when
restricting to the zero charge sector, such manifestly covariant
formulations do involve indefinite metric, {\em i.e} extra degrees of
freedom irrelevant to the dynamic evolution.
After Dirac other authors have investigated different ways of dressing the
basic fields, with different motivations and with different aims.
The list given by $\cite{BB}-\!\!\cite{HLM}$ only gives some references that
are closer in spirit to the present article and, in any event, has no pretension
to completeness.
Ref. $\cite{LM}$ provides a much more comprehensive bibliography,
whereas $\cite{HLM}$ provides its updating.
On the same footing as Dirac, covariance is given up also in the model
investigations of Steinmann $\cite{S1}$, who has the same aim as Dirac, and of Ref. $\cite{LM}$
and other works by the same group,
who instead think of the dressed fields as composite operators within the
usual formulation of the gauge theory.
The non-implementability of Lorentz boosts in the charged sectors of QED is
indeed, after the model independent investigations of $\cite{FMS,B1}$, taken for
granted to the point that, once the symmetry is broken by hand from the very beginning of
the construction, no attempt is made to restore it.
The only exception is provided, to our knowledge, by the attempts of one of
the present authors and collaborators $\cite{dEMfp,dEC}$.
Needless to say, the effort of restoring Lorentz symmetry at some stage in
our construction of dressed fields will be made also in the present paper.
We have to state clearly that situating our results about QED within the
general framework of $\cite{FMS,B1}$ is a non trivial subject, particularly because
our results only concern some two-point functions and admittedly are,
for now, incomplete.
It is true, on the other hand, that the results of the present and the
following papers $\cite{dEMi}$ open the possibility of performing systematic
model calculations that could help an explicit and exhaustive comparison:
this is one of the several open, possibly not insurmountable, questions
to be discussed in the conclusions.
Among the references we have cited, the work by Steinmann deserves a special
mention, for not only has it been close in spirit to ours over the years, but
it has also been constantly inspiring.
We feel it is not by chance that another part of Steinmann's and collaborator's work,
not immediately connected with the problems specific to gauge
theories, is invaluable to the approach presented here. Indeed, it turns out
that the usual Dyson expansion formula for the calculations of
Vacuum Expectation Values (VEV) of the type $\langle T(\cdots)\rangle$ is not sufficient for our purposes.
The composite fields we will introduce will themselves be
$T^\pm$-ordered formal power series. So the calculation of their correlation functions
will demand the ability at computing -- in Perturbation Theory (PT) -- both Wightman functions and,
more in general, multi-time-ordered
VEV's of the type $\langle T^\pm(\cdots)\cdots T^\pm(\cdots)\rangle$. Ref.s
$\cite{O,S3,S4}$ exactly provide the algorithm for doing all this.
Our attitude in the present paper is that we do not want to make any
{\em a priori} assumption about IR asymptotic dynamics, with the exception
of enforcing symmetries: local gauge, translations and Lorentz in particular.
IR asymptotic dynamics should, hopefully, emerge by itself, {\em i.e.} only
by our ability at calculating the near-mass-shell behaviour of correlation
functions, once a particular gauge invariant charged field has been selected
within the framework of step (i) above.
In other words the main point is that (i) leaves a remarkable freedom and
evidently any choice made in selecting the form of physical charged fields
may, and indeed does, affect the outcomes of (ii)-(iv).
Our work will, as a consequence, consist in exploiting all the freedom (i)
leaves to see whether there exists a field with a near-mass-shell behaviour
mild enough to enable one to eventually face point (iii) and (iv).
In case the motivations about the necessity of having fields
with a mild near-mass-shell behaviour should be recapitulated in more
intuitive and physical terms, we have found the discussion given in
$\cite{St}$ particularly sound.
Our expectation is that, playing, as we will do, twice the same game,
one should have different results in QED and QCD, respectively. There is
in fact no doubt that the electron is not confined,
whereas the quark should be.
Now, while in QED it is more or less generally accepted that the
non-confinement of the electron should result in the existence of some kind
of asymptotic (possibly not LSZ) limit, the way the confinement of quarks
and gluons should show up is less generally agreed upon: we have already
recalled some among the many, sometimes conflicting, mechanisms that have been
proposed in the literature: needless to say, we will come out with
a mechanism different from all the others!
So, in order to directly compare QED and QCD, we will construct
``dressed electron''
$e(x)$ and ``quark'' $q(x)$ fields (we could also construct the ``gluon''
$\cite{dEC}$, but the investigation of its behaviour in higher orders is better postponed to
future work, for a
comparison with its QED analogue would be less stringent: the photon has no
charge) whose two-point functions up to the fourth order in the coupling
constant -- the simplest place where a difference between QED and QCD may
emerge -- have the following properties:
(1) they are independent of the gauge-fixing parameter;
(2) ultraviolet divergences brought about by the compositeness of the
dressing are cured by a single renormalization constant introduced in the
definition;
(3) on-shell normalization conditions can be imposed, in the IR regularized
theory, on the single IR divergent graphs with two different outcomes.
(3a) In QED a complete cancellation of IR divergences takes place, and the
two-point function is given by
\begin{eqnarray}
&& W(p) = \int d^4 x e^{i\,p\cdot(x-y)}\langle e(x) \overline{e}(y)\rangle \quad ,
\label{WEdef} \\
&& W(p) = W_0(p) + {\alpha\over \pi}\, W_1 + \left({\alpha\over \pi}\right)^2 W_2 + \cdots \quad ,
\label{WEres}
\end{eqnarray}
where
\begin{eqnarray}
W_0(p) = (\not\! p + \mu)\,(2\pi)\,\theta(p^0)\,\delta(p^2-\mu^2)
\label{Wfree}
\end{eqnarray}
is the Wightman function of the free spinor field, whereas the higher order terms,
described by the two invariant functions $a_i(p^2/\mu^2)$ and $b_i(p^2/\mu^2)$:
\begin{eqnarray}
W_i(p)\, =\, \theta(p^0)\, \theta (p^2-\mu^2)\,{1\over \mu^2}\, (a_i \not\! p + b_i \,\mu)
\quad , \quad i \ge 1\quad ,
\label{W2par}
\end{eqnarray}
are given, to the first order, by
\begin{eqnarray}
a_1 = {\mu^2\over 2\, p^2} \,\left (1 - {\mu^2\over p^2}\right ) \quad , \quad b_1 = 0 \quad ,
\label{W1}
\end{eqnarray}
and, to the second order (whose full form is given in Section \ref{sec6}), have the
near-mass-shell asymptotic form
\begin{eqnarray}
a_2 \simeq
{5\over 9}\, {\rm{r}} + {1\over 6} \, {\rm{r}}^2 \ln {\rm{r}}
- {7\over 4}\, {\rm{r}}^2 + \cdots ~ ,
\qquad
b_2 \simeq
-{7\over 36}\, {\rm{r}} - {1\over 6} \,{\rm{r}}^2 \ln {\rm{r}}
+ {5\over 24}\, {\rm{r}}^2 + \cdots ~,
\label{W2}
\end{eqnarray}
with ${\rm{r}} = p^2/\mu^2 -1 \to 0 $.
(3b) In QCD, assuming dimensional regularization for IR divergences $-$
{\em i.e.} $D=4\to 4 + 2\,\epsilon$ $\cite{GM,MS}$ $-$ the latter do not cancel,
but obey the factorization equation
\begin{eqnarray}
\epsilon \,{\partial \over \partial \epsilon} \, w_2(p,\epsilon) =
+ \left ({1\over 2\, \epsilon}\right )\, {11\over 6}\, C_{_A} \, w_1(p,\epsilon) \quad ,
\label{ede}
\end{eqnarray}
where $w = \sum (\alpha/\pi)^n\,w_n$ is defined by the amputation of
the interacting part
\begin{eqnarray}
&& W_{ij}(p,\epsilon) = \int d^4 x e^{i\,p\cdot (x-y)}\langle q_i(x) \overline{q}_j(y)\rangle \quad ,
\label{WQdef} \\
&& W_{ij}(p,\epsilon) = \delta_{ij}\,W_0(p)~+~
{i\over \not\! p - \mu + i\,0}\,\,\delta_{ij}\, w(p,\epsilon)\,\,
{-i\over \not\! p - \mu - i\,0} \quad .
\label{wdef}
\end{eqnarray}
The first evident comment about the above results is that the game,
played twice with the same rules, gives two qualitatively different results.
It is sufficient, {\em per se}, to state that the IR asymptotic dynamics of
the two models is different (this was expected), even in perturbation
theory (possibly, this is a less widespread belief).
Concerning $(\ref{WEres})$, although the result is that the singularity of
the free field theory is not altered by the radiative corrections that
vanish on the mass-shell, it is true that the mass hyperboloid $p^2=\mu^2$
is not isolated as in the mass gap case.
This result, expected on the basis of simple physical intuition, is in
agreement with the observation made in Ref. $\cite{B2}$, where it has been pointed
out that, in the case of gauge theories, the particle content might be recovered at the cost of
abandoning Wigner notion of an irreducible representation of Poincar\'e group
sitting on an isolated mass hyperboloid.
The investigation of this point pertains to step (iii) above.
We will not pursue it in this article.
Concerning the second result $(\ref{ede})$, we find it intriguing for two
reasons. The first is that it is simple - we mean the factorization.
The second is the occurrence of the celebrated ${11\over 6} \, C_{_A}$ factor,
{\em with the plus sign}.
We cannot therefore resist the temptation of commenting on the consequences
$(\ref{ede})$ would have, were it true in any order of PT.
In the latter case its integration would yield
\begin{eqnarray}
&& w(p,\epsilon) = e^{-{\alpha\over \pi}\,\Delta(\epsilon)}\,w(p) \,
\stackrel{\epsilon \to 0}{\longrightarrow} \, 0 \quad ,
\label{IRlim} \\
&& \Delta(\epsilon) = {11\over 6}\, C_{_A}\,{1\over 2\, \epsilon} \quad ,
\label{Delta}
\end{eqnarray}
with $w(p)$ IR finite.
This hints at a different scenario, in which the Heisenberg ``quark'' field,
{\em as a result of IR asymptotic dynamics}, is a free field not asymptotically, but at {\em any}
momentum $p$.
It may be useful to recall the example of the Faddeev-Popov (FP) ghost in QED: in that case
the field is free by construction and there is a factorization of
correlation functions involving the ghost into a bunch of free ghost two-point-functions times
a connected correlation function only involving fields with zero ghost
number. Could one say that the ghost number is confined?
In QCD, even if $(\ref{IRlim})$ were true, one could not immediately
conclude, as in the case of local fields $\cite{SW}$ (we recall that dressed fields do
not share the locality property), that the $q_i(x)$ is a free field.
Nonetheless a working hypothesis could be to check whether, as a consequence
of IR asymptotic dynamics, the factorization of quark and gluon free
two-point functions, out of connected correlation functions only involving
colour singlets, does indeed take place.
The existence of the asymptotic limit would, in the latter case, be by far
simpler than in QED -- it would be trivial.
Of course, it is not necessary for the above scenario to take really place
that the function $\Delta(\epsilon)$
preserves, on possibly going from ($\ref{ede}$) to an exact result, the
specific form given by equation ($\ref{Delta}$) suggested by our fourth order
calculation. It might dress up even as a full
series in $\alpha$, provided that
$\Delta\to + \infty$ for $\epsilon \to 0^+$ continues
to hold.
We are aware that, on extrapolating our result $(\ref{ede})$ to
$(\ref{IRlim})$, we have
raised more questions (all orders, gluon, IR asymptotics of
many-point-functions) than we will answer in this article.
But, in the framework we will set up, these questions do not seem to us
prohibitively out of the range of traditional and well established tools
of QFT.
The paper is organized as follows.
In Section \ref{sec2} the freedom one has in dressing the basic fields is analyzed
in detail on the level of classical fields.
Section \ref{sec3} sets the stage for the calculation of quantum correlation
functions: it is argued that an algorithm for computing VEV with several time orderings, {\em i.e.} of the type
$\langle T^{\pm}(\cdots)\cdots T^{\pm}(\cdots)\rangle$,
is needed and the exhaustive work of Ostendorff and Steinmann $\cite{O,S3,S4}$,
giving such an algorithm, is summarized.
Section \ref{sec4} systematically explores in PT the lowest order
of the two-point functions relative to the fields constructed in Section \ref{sec2},
and the full form of $W_1$, equation ($\ref{W1}$), is established.
Section \ref{sec6} gives a concise outlook of the fourth order calculations:
the full form of $W_2$, equation ($\ref{W2}$), is given together with a description of the way we follow
to calculate it and to obtain equation ($\ref{ede}$). The full derivation
of the latter results, as well as the proofs of their properties (1)-(3)
above, are left for forthcoming papers.
In Section \ref{sec5} we give a retrospective of the construction we have done and pinpoint
the open problems that, in our opinion, most urgently should be faced in order to give
the further, necessary support to such a construction.
\section{Classical Fields} \label{sec2}
Let $\psi(x)$ denote a multiplet of Dirac fields transforming as the
fundamental representation ${\cal R}$ of the colour group $SU(N)$
(the extension to whatever compact semi-simple Lie group being trivial).
We shall denote by ${\bf A}_\mu(x) = t^a A_\mu^a(x)$ the Yang-Mills
potentials. Here $t^a$, $a = 1 ,\cdots,\, N^2 - 1$, are the hermitian
generators in ${\cal R}$, satisfying the commutation relations
$[t^a , t^b]= i\,f^{abc} \,t^c$,
\mbox{$t_{il}^a \, t_{lj}^a = C_{_F} \delta_{ij}$},
\mbox{$C_{_F} = (N^2 - 1)/(2 N)$};
whereas the structure constants $f^{abc}$ are real, completely antisymmetric
and obey \mbox{$f^{acd} f^{bcd} = C_{_A} \delta^{ab}$}, \mbox{$C_{_A} = N$}.
The scalar and wedge products in ${\cal R}$ are accordingly defined by
\mbox{${\bf A} \cdot {\bf B}=2 \mathop{\rm Tr}\limits({\bf A} {\bf B})$},
\mbox{${\bf A} \wedge {\bf B}= -i\,[{\bf A} \,,\, {\bf B}]$}.
It will be understood that the dynamics of the above fields is defined by the
Lagrangian ${\cal L}$ given, {\it ${\it e.g.}$}, in $\cite{NO}$, in which the
gauge-fixing term $- {\xi/ 2} \,(\partial {\bf{A}}) \cdot (\partial{\bf{A}})$ as
well as the Faddeev--Popov ghosts have been introduced and the
Becchi--Rouet--Stora--Tyutin symmetry is at work.
All the fields in ${\cal L}$ are assumed to be local fields.
Let ${\bf C}(x) = t^a\, C^a(x) \in {\cal R}$ be the FP ghost field, satisfying
\begin{eqnarray}
{\bf C}(t,{\bf x}) \to 0~, \qquad
{\rm{for}}~\vert{\bf x}\vert \to \infty~.
\label{Cdef}
\end{eqnarray}
We shall call local gauge transformations of $\psi$, $\overline{\psi}$ and
${\bf{A}}_\mu$ the following:
\begin{eqnarray}
\left. \begin{array}{c}
\delta {\bf A}_\mu = \partial_\mu {\bf C} + g\,{\bf A}_\mu \wedge {\bf C}~, \\
\delta \psi = + ig\, {\bf C}\, \psi~,
\qquad
\delta \overline{\psi} = - ig\, \overline{\psi}\, {\bf C}~.
\end{array} \right. \label{BRSbasic}
\end{eqnarray}
Consider now the formal power series $\cite{S2}$:
\begin{eqnarray}
&& V(y;f) = \sum_{{_{N=0}}}^{+\infty}\,(+ig)^{_N} \!\!
\int \! d^4\eta_1 \!\cdots\! \int \! d^4\eta_{_N}
f_{_N}^{\nu_1\cdots\nu_{_N}}(y-\eta_1,\!\cdots\!,y-\eta_{_N})
{\bf A}_{\nu_1}(\eta_1) \!\cdots\!{\bf A}_{\nu_N}(\eta_N)\,,
\label{Vydef1} \\
&& V^\dagger(x;f) = \sum_{{_{M=0}}}^{+\infty}\,(-ig)^{_M}\!\!
\int \! d^4\xi_1 \! \cdots \!\int \! d^4\xi_{_M}
f_{_M}^{\mu_1\cdots\mu_{_M}}(x-\xi_1,\!\cdots\!,x-\xi_{_M})
{\bf A}_{\mu_{_M}}(\xi_{_M})\!\cdots\! {\bf A}_{\mu_1}(\xi_1)
\label{Vxdef1}
\end{eqnarray}
where the terms $M , N = 0$ are by definition $1$.
We claim that one can choose {\em real} kernel functions $f$'s such that $V$
and $V^\dagger$ transform under $(\ref{BRSbasic})$ according to
\begin{eqnarray}
\delta V =+ i\,g~{\bf C}~V~, \qquad
\delta V^\dagger =- i\,g~ V^\dagger~ {\bf C}~. \label{BRSV}
\end{eqnarray}
Before we proceed to enforce the transformation properties $(\ref{BRSV})$,
two comments are in order about the multiple convolutions displayed in
$(\ref{Vydef1})$ and $(\ref{Vxdef1})$.
(i) The first is that they are mandatory if one is interested, as we are, in
obtaining translation covariant solutions to $(\ref{BRSV})$.
(ii) The second is that the convolutions extending to the whole Minkowski
space explicitly expose the fact that $V$ and $V^\dagger$ may be
non-localized functions of the basic local fields ${\bf{A}}$, provided the
support of the $f$'s is suitably chosen. In view of the discussion about
$(\ref{GL})$, this is quite welcome because we are aiming at constructing
locally gauge invariant fields that carry nontrivial global colour numbers:
indeed, concerning local gauge transformations, once $(\ref{BRSV})$ are
satisfied, the spinor fields:
\begin{eqnarray}
\Psi_f(x)=V^{\dagger}(x;f)\psi(x) \quad ,\quad
\overline{\Psi}_f(y)=\overline{\psi}(y)V(y;f)
\label{PSIdef1}
\end{eqnarray}
are obviously invariant under $(\ref{BRSbasic})$ while they transform as
${\cal R}$ and ${\cal \overline{R}}$ when ${\bf{C}}$ is not chosen according
to $(\ref{Cdef})$, but is constant with respect to $x$.
Let us go back to enforcing $(\ref{BRSV})$. Steinmann has faced this problem
in Ref. $\cite{S2}$.
He assumes that, on introducing $(\ref{BRSbasic})$ into $(\ref{Vydef1})$ and
$(\ref{Vxdef1})$, the derivatives can be integrated by parts.
While this can be justified for space derivatives, thanks to the boundary
conditions $(\ref{Cdef})$ on the ghost, the thing is less justifiable for the
time derivatives, as one has no {\em a priori} control on asymptotic behaviour
in time. In electrodynamics there is a way out: since the ghost is free, one
can choose suitable solutions of the d'Alembert equation $\cite{dEMfp}$ that
justify the neglect of boundary terms. In the non-abelian case the problem is
there: we shall, as in $\cite{S2}$, just ignore it, recalling however the
statement (1) of the introduction that, in the case of quantum fields, we
will be able to prove the $\xi$-independence of correlation functions.
With this {\em proviso}, Steinmann has shown that the requirement that
$(\ref{BRSV})$ be satisfied by $(\ref{Vydef1})$ and $(\ref{Vxdef1})$ order by
order in $g$ leads to a linear inhomogeneous recursive system for the
$f$'s. The Fourier transforms of the first of the equations he gives is:
\begin{eqnarray}
k_\nu~\hat{f}_1^{\nu} (k)~ = ~i~,
\label{STFourier1}
\end{eqnarray}
whereas the $f$ with $N>1$ arguments is determined in terms of the $f$ with
$N-1$ arguments by
\begin{eqnarray}
&& \left \{
\matrix{
(k_1)_{\nu_{1}} {\hat f}_{{_N}}^{\nu_{1} \cdots \nu_{{_N}}}
(k_{1},\cdots, k_{{_N}}) = i\,
[{\hat f}_{{_{N-1}}}^{\nu_{2} \cdots \nu_{{_N}}}
(k_{2},\cdots, k_{{_N}})
- {\hat f}_{{_{N-1}}}^{\nu_{2} \cdots \nu_{{_N}}}
(k_{1}+k_{2},k_{3},\cdots, k_{{_N}})]~, \hfill \cr
\cr
(k_{\alpha})_{\nu_{\alpha}} {\hat f}_{{_N}}^{\nu_{1} \cdots
\nu_{\alpha} \cdots \nu_{{_N}}}
(k_{1},\cdots,k_{\alpha}, \cdots , k_{_N}) = \hfill \cr
\cr
\qquad \qquad
+i \,\bigl[{\hat f}_{{_{N-1}}}^{\nu_{1} \cdots \nu_{\alpha -1}
\nu_{\alpha +1} \cdots \nu_{_N}}
(k_{1},\cdots, k_{\alpha -2}, k_{\alpha -1}+
k_{\alpha}, k_{\alpha +1},\cdots,k_{_N})~+ \hfill \cr
\cr
\qquad \qquad
~~~-~{\hat f}_{{_{N-1}}}^{\nu_{1}\cdots \nu_{\alpha- 1}
\nu_{\alpha +1}\cdots\nu_{_N}}
(k_{1}, \cdots, k_{\alpha -1}, k_{\alpha}+ k_{\alpha +1},
k_{\alpha +2},\cdots,k_{_N})\bigr]~, \hfill \cr
\cr
(k_{{_N}})_{\nu_{{_N}}} {\hat f}_{{_N}}^{\nu_{1} \cdots
\nu_{{_N}}}
(k_{1}, \cdots, k_{{_N}}) =
i\,{\hat{f}}_{{_{N- 1}}}^{\nu_{1} \cdots \nu_{{_{N-1}}}}
(k_{1},\cdots, k_{{_{N-2}}}, k_{{_{N-1}}}+k_{{_N}})~.\hfill\cr
} \right.
\label{STFourierJ}
\end{eqnarray}
with $2 \le \alpha \le N-1$. We also take from $\cite{S2}$ that the solutions of $(\ref{STFourier1})$ and $(\ref{STFourierJ})$ that, for any integer $N$, satisfy:
\begin{eqnarray}
&&\sum_{_{J = 0}}^{{_N}}(-1)^{_J}~
\hat{f}_{_J}^{~\nu_1 \cdots \nu_{_J}} (k_1,\cdots, k_{_J})~ \hat{f}_{{_{N - J}}}^{~\nu_{{_N}} \cdots
\nu_{{_{J+1}}}} (k_{_N},\cdots, k_{_{J + 1}})~=~0 \quad ,
\label{UNIa} \\
&&\sum_{_{J = 0}}^{{_N}}(-1)^{{_{N - J}}}~
\hat{f}_{_J}^{~\nu_{_J} \cdots \nu_{1}} (k_{_J},\cdots, k_1)~ \hat{f}_{{_{N - J}}}^{~\nu_{_{J +1}} \cdots
\nu_{{_N}}} (k_{{_{J + 1}}}, \cdots, k_{_N})~=~0 \quad ,
\label{UNIb}
\end{eqnarray}
give rise to unitary series \mbox{$V(x;f)~V^\dagger(x;f) = V^\dagger(x;f)~V(x;f) = 1$}.
Let us first focus on $(\ref{STFourier1})$. A family of solutions to this
equation that also satisfies $(\ref{UNIa})$ and $(\ref{UNIb})$ is
\begin{eqnarray}
\hat f_1^\nu (k;c) =
i\,n^\nu~{1\over 2}\left({1+c\over n\cdot k - i\,0}+
{1-c\over n\cdot k + i\,0}\right ) \quad ,
\label{f1c}
\end{eqnarray}
where $c$ is a real parameter and $n^\nu$ is a 4-vector that we leave, for
the moment, unspecified.
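As an elementary check, contracting $(\ref{f1c})$ with $k_\nu$ and using the distributional identity $x\,(x \mp i\,0)^{-1} = 1$ (recall $x\,\delta(x)=0$), one verifies that $(\ref{STFourier1})$ is indeed solved for any real $c$:
\begin{eqnarray}
k_\nu~\hat f_1^{\nu} (k;c) = {i\over 2}\left({(1+c)\,n\cdot k\over n\cdot k - i\,0}+
{(1-c)\,n\cdot k\over n\cdot k + i\,0}\right) = {i\over 2}\,\bigl[(1+c)+(1-c)\bigr] = i \quad . \nonumber
\end{eqnarray}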
Two particular solutions from $(\ref{f1c})$ are
\begin{eqnarray}
&& \hat f_{+\,1}^\nu(k;n) = {i\,n^\nu\over n\cdot k - i\,0} \label{f1+} \quad , \\
&& \hat f_{-\,1}^\nu(k;n) = {i\,n^\nu\over n\cdot k + i\,0} \label{f1-} \quad .
\end{eqnarray}
It can be verified that the two following sets of functions $\hat{f}_{+{_N}}$
and $\hat{f}_{-{_N}}$, given by
\begin{eqnarray}
\hat{f}_{\pm{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;n) =
{{i\, n^{\nu_1}} \over {n \cdot (k_1 + \cdots + k_{{_N}}) \mp i\, 0}}~
\cdots {{i\, n^{\nu_{_N}}} \over {n \cdot k_{{_N}} \mp i\, 0}} \label{-ie}
\end{eqnarray}
separately satisfy all equations $(\ref{STFourierJ})$-$(\ref{UNIb})$.
These solutions also fulfil the factorization property
\begin{eqnarray}
&& \sum_{\rm perm} \,
\hat{f}_{\pm{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;n) ={1\over N!}\,
\hat{f}_{\pm{_1}}^{~\nu_1}(k_1;n)\cdots\hat {f}_{\pm{_1}}^{~\nu_{_N}}
(k_{_N};n) \label{eik}
\end{eqnarray}
well known as eikonal identity, as well as the $n-$reflection exchange
relation
\begin{eqnarray}
\hat{f}_{\pm{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;n) =
\hat{f}_{\mp{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;-n) \quad .
\label{f-m}
\end{eqnarray}
We will also need a third set of solutions, that extend to higher orders the
lowest order solution obtained by setting $c=0$ in $(\ref{f1c})$:
\begin{eqnarray}
\hat f_{0\,1}^\nu (k;n) =
{i\,n^\nu~\over 2}\left({1\over n\cdot k - i\,0}+
{1\over n\cdot k + i\,0}\right )
\label{f10}
\end{eqnarray}
with Principal Value prescription (PV) for the $n\cdot k$ denominator.
This is evidently connected with the problem of exposing a family of
solutions that interpolates between $\hat{f}_{+{_N}}$ and $\hat{f}_{-{_N}}$.
We have found that, with $n^\nu$ kept fixed and even after imposing the
unitarity constraints $(\ref{UNIa})$ and $(\ref{UNIb})$, the higher the ${N}$
the higher the number of complex parameters due to the occurrence of
Poincar\'e-Bertrand terms.
However, if also the eikonal identity $(\ref{eik})$ is enforced, the
interpolating family only depends on the real parameter $c$ appearing in
$(\ref{f1c})$.
Just to give a flavour of the thing, it is found that
\begin{eqnarray}
&& \hat f^{\nu_1\nu_2}_2 (k_1,k_2;c) =
\hat f_1^{\nu_1} (k_1+k_2;c) ~ \hat f_1^{\nu_2} (k_2;c) ~
+{\pi^2\over 2}(1-c^2)\,
n^{\nu_1}\,n^{\nu_2} \delta(n\cdot k_1)\,\delta(n\cdot k_2) \quad .
\end{eqnarray}
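As a consistency check on the interpolation, note that for $c = \pm 1$ the Poincar\'e-Bertrand term above vanishes, $1-c^2 = 0$, and $\hat f_2^{\nu_1\nu_2}(k_1,k_2;\pm 1)$ reduces to the factorized solutions $\hat f_{\pm\,2}$ of $(\ref{-ie})$, as it must.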
We have explicitly found up to $f_4(k_1,\cdots,k_4;c)$ and we also have a
guess about $f_N(k_1,\cdots,k_N;c)$ for generic ${N}$.
But, for the sake of conciseness we will no longer elaborate on this topic,
also because higher orders will not be needed in the perturbative
calculations we will perform in later sections.
The important point for the sequel is that there exists a solution, denoted by
$\hat f^{{\nu_1}\cdots{\nu_N}}_{0{_N}}(k_1,\cdots,k_N;n)$,
that extends $(\ref{f10})$ to any order ${N}$. In connection with
$(\ref{f-m})$, note that the solution $\hat f_0$, in addition to satisfying
the eikonal identity, is also invariant under $n-$reflection
\begin{eqnarray}
\hat{f}_{0{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;n) =
\hat{f}_{0{_N}}^{\nu_1\cdots\nu_N}(k_1,\cdots,k_N;-n) \quad .
\label{f0-m}
\end{eqnarray}
The relationship between the present approach and other ones
$\cite{LM,M,S1,S2}$ can now be clarified.
Consider, to this purpose, $V_-(y;n)$, {\em i.e.} the $V$ obtained by
inserting the solution $\hat f_{-{_N}}$ ({\em i.e.} $(\ref{f1-})$ and
$(\ref{-ie})$ with the $-$ sign) into $(\ref{Vydef1})$. It is useful to
represent all the denominators in $\hat f_{-{_N}}$ by means of the
one-parameter integral representation
\mbox{ $(b + i \,0)^{-1} = -i\int_{0}^{+\infty}d \omega\,
{\rm{exp}}\,[i \omega\,(b + i \, 0)]$}.
In this way it is possible to explicitly perform the $d^4 k_j$
integrations in the anti-Fourier transform of the $\hat f$'s.
These integrations give rise to
$\delta^4(y - \eta_j - n\,\sum_{i \le j} \omega_i)$
that allow, in turn, for the elimination of the $d^4 \eta_j$ integrations in
$(\ref{Vydef1})$. Some further obvious manipulations convert $(\ref{Vydef1})$
into
\begin{eqnarray}
& V_-(y;n) &= \sum_{{_{N = 0}}}^\infty \, ( i \,g)^{_N} \!
\int_0^{+\infty} \!\! d \omega_1 \,\cdots
\int_{\omega_{_{N - 1}}}^{+\infty} \!\! d \omega_{_N}
\, n \cdot {\bf{A}} (y - n\, \omega_1)\cdots n \cdot
{\bf{A}} (y - n\, \omega_{_N}) ~= \nonumber \\
& & = {\cal P^{+}}~ \exp \left [
{{ i\,g \int_0^{+\infty} \!d \omega\,n
\cdot {\bf{A}}(y - n\, \omega)}}\right] \quad .
\label{V-string}
\end{eqnarray}
The r.h.s. of the above formula is the usual definition of the path-ordering
symbol ${\cal P^{+}}$. If $n$ is chosen to be a space-like vector, the above
representation clarifies that $V_-$ is nothing but a rectilinear string
operator {\em \`a la} Mandelstam $\cite{M}$ extending to
space-like infinity. The case of space-like $n$ may also serve to accommodate the case considered by Buchholz
$\cite{B3}$. For this reason we will generically refer to all the $V$ and $V^\dagger$
operators as to the string operators, regardless of whether $n$ is space-like
or time-like.
In the same way one finds that
\begin{eqnarray}
V^\dagger_+(x;n) =
{\cal P^{+}} \exp\left [{\displaystyle{- i\,g \int_0^{+\infty}\! d \omega\,n
\cdot {\bf{A}}(x + n \omega)}}\right]
\label{V+string}
\end{eqnarray}
(again a ${\cal P^{+}}$, for the order of the $n\cdot{\bf A}$ factors in
$V^\dagger$ is reversed with respect to $V$).
It is now convenient to introduce the decomposition of the Minkowski 4-space
${\cal M}_4$ into the future and past light cones, and their complement:
\begin{equation}
{\cal M}_4 = {\cal C}_+\cup {\cal C}_0 \cup {\cal C}_-\label{Mink}
\end{equation}
and in the sequel, referring to the above decomposition, the indices $\sigma$ and $\tau$ will always take the values $\pm,\,0$.
Suppose now that we choose $n\in {\cal C}_{\pm}$, {\em i.e.} in the
future/past light-cone.
Then the statement that, respectively, $n^0 \gtrless 0$ is Lorentz invariant,
whence also
$x^0 - \omega_i\,n^0 = \tau_i \lessgtr \tau_{i-1} \lessgtr \cdots \lessgtr \tau_0 = x^0$.
In view of this, $(\ref{V-string})$ can be written using the $T^\pm$ chronological ordering symbols
\begin{eqnarray}
&& V_-(x;n) =
{ T^{\pm}} \exp\left [{\displaystyle
{+i\,g \int_0^{+\infty}\! d \omega\,n \cdot {\bf{A}}(x - n\, \omega)}}\right ] \quad ,
~~n\in{\cal C}_{\pm} \quad ,
\label{V-stringt}
\end{eqnarray}
and likewise for $(\ref{V+string})$
\begin{eqnarray}
&& V^\dagger_+(x;n) =
T^{\pm} \exp\left [{\displaystyle
{-i\,g \int_0^{+\infty}\! d \omega\,n \cdot {\bf{A}}(x + n\, \omega)}}\right ] \quad,
~~n\in{\cal C}_{\pm} \quad .
\label{V+stringt}
\end{eqnarray}
So far this makes no big difference: the ordering operators, either
${\cal P^{\pm}}$ or ${ T^{\pm}}$, only order the colour matrices
$t^{a_1},\cdots,t^{a_N}$ in the $N$-th term of the above series, whereas the
fields $A^{a_1}_{\nu_1},\cdots,A^{a_N}_{\nu_N}$, inasmuch as classical fields,
are not sensitive to this ordering.
In the case of classical electrodynamics -- $t^a\to 1$ -- such operators are
simply useless.
The role of the $T^{\pm}$ ordering will instead become crucial when we
keep it in the definition of the quantum Heisenberg operators.
We have also to consider string operators in which the string vector $n$ is
chosen space-like. In this case the difference between the arguments of two
neighbouring $A$'s is space-like, so only the colour matrices are
sensitive to the ordering, whereas even the Heisenberg fields of the quantum
case commute with one another, due to locality.
The solutions we will consider for $n \in {\cal C}_0$ are
$V_0(x;n)$ and $V^\dagger_0(x;n)$, {\em i.e.} the ones corresponding to the
solution $\hat f_{0_{N}}$ that extends
$(\ref{f10})$ and fulfils the $n$-reflection invariance property $(\ref{f0-m})$.
Let us now introduce the characteristic functions
\begin{eqnarray}
\chi_\sigma(n) = \left \{ \begin{array}{lll}
1 & {\rm if} & n\in {\cal C}_{\sigma} \\
0 & ~ & {\rm{otherwise}}
\end{array} \right. \quad,
\qquad \qquad
\sigma = \pm 1, \, 0
\label{chi}
\end{eqnarray}
and correspondingly the fields
\begin{eqnarray}
&& \left. \begin{array}{l}
\Psi_\pm(x;n) = \chi_\pm(n) ~~T^\pm\bigl[ V^\dagger_\pm (x;n) \, \psi(x)\bigr] \quad , \\
\Psi_0 (x;n) = \chi_0 (n) ~~ V^\dagger_0 (x;n) \, \psi(x) \quad ,
\end{array} \right. \label{psi+-0} \\
&& \left. \begin{array}{l}
\overline \Psi_\pm(x;n) = \chi_\pm(n)~~ T^\pm \bigl [\,\overline \psi(x) \, V_\pm(x;n) \bigr] \quad , \\
\overline \Psi_0 (x;n) = \chi_0 (n) ~~ \overline \psi(x) \, V_0 (x;n) \quad .
\end{array} \right. \label{psibar+-0}
\end{eqnarray}
These fields fulfil the Dirac conjugation properties
\begin{eqnarray}
\left. \begin{array}{l}
\overline {\Psi_\pm}(x;n) = \overline \Psi_\mp(x;-n) \quad , \\
\overline {\Psi_0} (x;n) = ~\overline \Psi_0 (x,-n) \quad ,
\end{array} \right. \label{DiracConj+-0}
\end{eqnarray}
that follow from the $n$-reflection properties $(\ref{f-m})$ and $(\ref{f0-m})$.
As a consequence the composite fields
\begin{eqnarray}
\left. \begin{array}{l}
\Psi(x;n) = z_+\,\Psi_+ + z_-\,\Psi_- + z_0\, \Psi_0 , \\
\overline \Psi(x;n) = z_-\, \overline \Psi_+ + z_+\,\overline \Psi_- + z_0\, \overline \Psi_0
\end{array} \right. \label{zitazeta}
\end{eqnarray}
(with the complex constants $z$'s satisfying
$\overline z_\pm = z _\mp$, $\overline z_0 = z_0$, and to be specified later, for the quantum fields, when effecting renormalization)
satisfy the Dirac conjugation relation
\begin{eqnarray}
\overline \Psi(x; n) = \Psi(x;-n)^\dagger \, \gamma_0
\label{DiracConjx}
\end{eqnarray}
that, in Fourier transform, takes the form
\begin{eqnarray}
\overline {\hat \Psi}(p; n) = {\hat {\overline\Psi}}(-p;-n) \quad .
\label{DiracConjp}
\end{eqnarray}
All the constructions done so far, to go from $\psi$ to $\Psi$, can be crudely
summarized in this way:
one has traded the gauge-variance of $\psi$ for the dependence of $\Psi$ on
the string vector $n$.
We will refer to this fact as a breaking, put in by hand, of the original
Lorentz symmetry -- an unpleasant feature one would like to get rid of.
We dedicate the rest of this section to give a heuristic description of how we
will try to accomplish this task.
The Dirac equation for the ordinary $\psi$ in linear covariant gauges is first
converted into the equation of motion for ${\Psi}(x;n)$.
We write it in momentum representation:
\begin{eqnarray}
&& (\not\! p - \mu) \, \hat{\Psi}(p;n) =
g~\gamma_\alpha\,t^a \int d^4 k \sum_\sigma\,z_\sigma\,
T_\sigma^{ \alpha\beta}(k;n)~
{\hat{A}}_\beta^a(k) \, \hat{\Psi}_\sigma(p- k;n) = \nonumber \\
&& \hspace{1.15 in} = g~\gamma_\alpha\,t^a\, Q^{a\alpha}(p;n) \quad ,
\label{mDirac}
\end{eqnarray}
where the index $\sigma$ refers to the decomposition of $\Psi$ with respect to
the light-cone of $n$, equation
$(\ref{psi+-0})$. Accordingly, the projectors $T$ are given by
\begin{eqnarray}
\left. \begin{array}{l}
T^{ \alpha\beta}_\mp(k;n) =
g^{\alpha \beta} - \displaystyle{ {{k^\alpha n^\beta}~ \over {n\cdot k \pm i\, 0}} } , \\
\\
T^{ \alpha\beta}_0(k;n) =
g^{ \alpha\beta} - \displaystyle{ {\rm PV}~{ {{k^\alpha n^\beta}}\over {n\cdot k }} }
\end{array} \right. \label{Ts}
\end{eqnarray}
and satisfy
\begin{eqnarray}
n_\alpha~T^{ \alpha\beta}_\sigma(k;n) = 0~,
\qquad
T^{ \alpha\beta}_\sigma(k;n)~k_\beta = 0 ~. \label{mTk}
\end{eqnarray}
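Both identities are immediate from the explicit form $(\ref{Ts})$: for instance, for
the $\mp$ prescriptions,
\begin{eqnarray*}
n_\alpha~T^{ \alpha\beta}_\mp(k;n) = n^\beta -
{n\cdot k \over n\cdot k \pm i\, 0}~n^\beta = 0~,
\qquad
T^{ \alpha\beta}_\mp(k;n)~k_\beta = k^\alpha -
{n\cdot k \over n\cdot k \pm i\, 0}~k^\alpha = 0~,
\end{eqnarray*}
since $x/(x \pm i\,0) = 1$ in the sense of distributions ($x\,\delta(x)=0$); the PV
case works in the same way.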
Thanks to the second of equations $(\ref{mTk})$, the longitudinal degrees of freedom of $A_\beta^a$ are expected to
decouple. Thanks to the first of equations $(\ref{mTk})$, the vector field to which $\Psi$
is coupled is ${\cal A}^{a\alpha}=T^{\alpha\beta}\,A^a_\beta$ that satisfies
$n\cdot {\cal A}^a=0$.
Were it not for the subtleties due to the $\pm i\,0$ prescriptions
({\em i.e.} to the light cone decomposition of the field with respect to $n$),
this formally is the equation satisfied by the Dirac field in the axial gauge.
One could try to take this as a substitute of the ordinary Dirac equation in
linear covariant gauges and $\Psi(x;n)$ (with $n$, as in a gauge-fixing,
chosen once and for all) as the variable substituting for $\psi$ and in terms of which
to attempt a gauge-invariant formulation of the theory -- much in the spirit of
$\cite{D,M,S1}$.
We will not take this attitude. We will continue to think of $\Psi(x;n)$ as a
composite field in a theory where $\psi$ and $A_\alpha^a$ play the role of
basic dynamic variables.
This point of view leaves open the possibility of choosing different $n$'s for
different $\Psi$'s.
More clearly, we want to leave open the possibility of computing quantum
correlation functions of the type $\langle\Psi(x;m)\overline\Psi(y;n)\rangle$,
in which each field has its own string and with no restriction
on whether both $m$ and $n$ are taken either time-like or space-like.
This also is the point where we can explain how we will recover the lost
Lorentz symmetry.
We will discuss the possibility of taking the limit
\begin{eqnarray}
n\to p
\label{ntop}
\end{eqnarray}
in equation $(\ref{mDirac})$.
A serious warning about this limit is that
its very existence is far from being trivial: we will give some positive
evidence in favour of it only in the case of quantum fields in Section \ref{sec4}.
For now we will just forget about any mathematical rigor and assume its
existence: this enables us to draw some conclusions and formulate some
expectations about quantum fields.
The first consideration about $(\ref{ntop})$ is that it does not mess up the
Dirac conjugation properties of $\Psi$, as evident from equation
$(\ref{DiracConjp})$.
Let us then call
\begin{eqnarray}
\hat q(p) = \hat \Psi(p;p) \quad .
\label{defq}
\end{eqnarray}
Then, by setting $n = p$ in $(\ref{mDirac})$, one obtains:
\begin{eqnarray}
&& (\not\! p - \mu) \,\hat{q}(p) =
g\,\gamma_\alpha\,t^a \int d^4 k \sum_\sigma\,z_\sigma\,
T_\sigma^{\alpha\beta}(k;p)\,
{\hat{A}}_\beta^a(k) \, \hat{\Psi}_\sigma(p - k;p)
\label{eq Dirac2}
\end{eqnarray}
that makes it evident why we have kept our point of view:
unlike $\hat \Psi(p;n)$, the field $\hat q$ may exist only as a
composite field: in the r.h.s. of the above equation $\hat \Psi$ appears
with two different values of its arguments, so the field $\hat q$ does not
satisfy a closed equation.
For $\hat q$, as already for $\hat \Psi(p;n)$, it is expected that the
unphysical degrees of freedom of $A_\alpha^a$ decouple: the second of equations
$(\ref{mTk})$ still applies.
But this is not the end of the story.
If, according to a well known argument, the near-mass-shell behaviour of the
field is driven by the classical currents responsible for the interaction
with soft gluons/photons, we can make a guess about it by making the
replacement $\gamma^\alpha \to \mu\, p^\alpha/p\cdot k$
within the integration in the r.h.s. of $(\ref{eq Dirac2})$.
It is then seen that, thanks now to the first of equations $(\ref{mTk})$ with $n=p$,
also the classical currents decouple and no longer drive the asymptotic IR dynamics
of $q$. As a result, the near-mass-shell behaviour of the field $q$ we have
defined should be at least milder than that of both the gauge-variant $\psi$
and the $n$-dependent $\Psi$.
The observation above, finally, clarifies why we have constructed strings
allowing for the choice of a time-like vector: in the classical currents
the momentum is close to the mass-shell: $p^2\simeq\mu^2>0$.
All these expectations for the quantum fields will find confirmation in the
following sections.
This means that we will give meaning, to some extent, to the heuristic formula
\begin{eqnarray}
q(x) = \int {d^4p\over (2\pi)^4}\, e^{-i\,p\cdot x}\,
\left [ \int d^4y\,e^{i\,p\cdot y}\,\Psi(y;n) \right ]_{n=p}
\end{eqnarray}
with $\Psi(y;n)$ given by $(\ref{zitazeta})$.
The utility of this formula is to clarify that the kind of delocalization
involved in $q(x)$ is by far more complicated than that, recalled in connection with
($\ref{GL}$), of a field with a
``tail'' going to infinity along a string that
is rectilinear in coordinate representation, as is the case for $\Psi(y;n)$.
In pictorial terms the strings contributing to $q(x)$ are spread out all
over $x$-space: this happens when a string, rectilinear in $p$-space,
is integrated upon with $\exp(-i\,p\cdot x)$ as weighting factor.
The field $q(x)$ thus rather resembles a kind of space-time candy-sugar
cloud centered at $x$.
\section{Perturbation theory for Quantum Fields} \label{sec3}
The present section is devoted to setting up diagrammatic rules for the
calculation, in perturbation theory, of the correlation functions of the
quantum gauge invariant charged fields we have sketched in Section \ref{sec2}.
We define the quantum field corresponding to $(\ref{zitazeta})$ in the
following way:
\begin{eqnarray}
&& \Psi(x;m) = \int \!{d^4p\over (2\pi)^4}~ e^{-i\,p\cdot x}\,
\Biggl \{\sum_{M=0}^\infty (-ig)^M \,~t^{a_M}\cdots t^{a_1}\,
\prod_{j=1}^M \int {d^4k_j\over (2\pi)^4}
~\times \label{PsiPT} \\
&& \times \, \sum_{\sigma=\pm,0}\, \chi_\sigma(m)\, \zeta_\sigma^{1/2} \,
\langle V^\dagger_\sigma \rangle ^{-1} \,
\hat f_{\sigma{_M}}^{\mu_1\cdots\mu_M}(k_1,\cdots,k_M;m) \,\, T^\sigma \,
\Bigl [\hat A_{\mu_M}^{a_M}(k_M)\cdots \hat A_{\mu_1}^{a_1}(k_1) \,
\hat{\psi} \Bigl(p-\sum_{j=1}^M k_j\Bigr)\Bigr] \Biggr\}. \nonumber
\end{eqnarray}
In the above formula the time-ordering operators $T^\pm$ and the identity
operator $T^0 = 1$ act on the Heisenberg fields in the square bracket. Moreover
$ \zeta_+ = \zeta_-$ and $\zeta_0$
will play the role of real renormalization constants, introduced to take care of
the compositeness of $\Psi$.
In addition, the factors $\langle V^\dagger_\sigma \rangle^{-1}$
are also constants, whose values will be fixed later, when the need for them --
to avert some ill-defined one particle reducible graphs -- will become clear.
For now, all one needs to know is that the $\zeta_\sigma$ and the
$\langle V^\dagger_\sigma \rangle^{-1}$ have the right conjugation properties
such that
$\zeta_\sigma\,\langle V^\dagger_\sigma \rangle^{-1}$
can be identified with
the $z_\sigma$ of equation ($\ref{zitazeta})$: in this way $\overline \Psi(x;m)$ is, in
turn, obtained by taking the straightforward Dirac conjugate of
($\ref{PsiPT}$).
It should be finally noted that the structure of formula $(\ref{PsiPT})$ is
slightly different from $(\ref{zitazeta})$.
In fact, in the latter case one can recognize the time-ordering of the fields
only after performing, as we have done in Section \ref{sec2}, the $d^4\xi_j$
integrations of $(\ref{Vxdef1})$. Here, instead, the $d^4k_j$
integrations involving the $f$'s, which are in turn responsible for
this ordering, are indicated but not yet performed:
the $T^{\pm,0}$ are there simply by definition.
The light-cone decomposition of the field with respect to $m$ --
the second line of the above
formula -- makes it evident that, depending on the choice of the string vector
$m$ relative to any single field, one must be able to compute VEV's of the type
$\langle T^{\sigma_n}(\cdots)\cdots T^{\sigma_1}(\cdots)\rangle$,
with $\sigma_i=\pm,\,0$. This observation entails that the usual Dyson perturbation theory formula for the expansion of
a single $T$-ordered product is not sufficient for our purposes.
An extension of the Dyson algorithm is therefore needed and,
fortunately for us, such an extension is already available, thanks to the
work of Ostendorff $\cite{O}$ and Steinmann $\cite{S3}$.
We recapitulate their results for the reader's convenience (reporting more or
less {\em verbatim} the content of the appendix of Ref. $\cite{S4}$).
Let us denote by $X=\{x_1,\cdots,x_r\}$ a set of 4-vectors and let $\Phi$ stand for any basic field ($A_\alpha^a,\,\psi$ {\em etc}.)
of interest for us. Let also $T^\sigma(X)$ denote the corresponding product of
the fields $\Phi(x_1)\cdots \Phi(x_r)$. In the multi-time-ordered vacuum expectation value
\begin{eqnarray}
W(X_n,\sigma_n |~...~| X_2,\sigma_2 | X_1,\sigma_1)=\langle T^{\sigma_n}(X_n)\cdots T^{\sigma_1}(X_1)\rangle
\end{eqnarray}
any $\sigma_i$ may take the value $\pm$ only
(the case $T^0=1$ of no ordering will be included
later). The perturbative contribution to order $g^N$ to $W$ is obtained as follows.
$\bullet$ {\em{Graphs}}: All the graphs with $\sum\, r_i$ external points and a number
of internal points suitable to match the order $N$ in PT are drawn.
$\bullet$ {\em{Partitions}}: Each of the above graphs is partitioned into non-overlapping
subgraphs -- the ``sectors'' -- such that all the external points of
$X_i$ belong to the same sector, called an external sector.
In general, there exist sectors not containing external points, called internal
sectors. Internal points may belong to external as well as to internal sectors,
depending on the partition considered.
$\bullet$ {\em{Sector numbers}}: To any sector $S$, a number $s(S)$ is assigned
according to the following rules.
(i) For the sector containing the external points $X_i$: $s = i$.
(ii) For an internal sector $S$, $s(S)$ is a non-integer number between the maximum
and the minimum sector numbers relative to the neighbouring sectors
({\em i.e.} the sectors connected to $S$ by at least one line of the graph).
(iii) If $\sigma_i \ne \sigma_{i+1}$ there is no internal sector with $i<s(S)<i+1$.
$\bullet$ {\em{Equivalence}}: If two partitions only differ in the numbering of the
sectors -- not in their topology -- they are inequivalent if for at least one pair
of neighbouring sectors $S^\prime,\,S^{\prime\prime}$ one has
$s(S^\prime)>s(S^{\prime\prime})$ in the first
partition, $s(S^\prime)<s(S^{\prime\prime})$ in the second.
$\bullet$ {\em{Type}}: The sectors are either $T^+$ or $T^-$ sectors in the following
way:
the external sector with number $i$ is a $T^{\sigma_i}$ sector; the internal
sector with $i<s(S)<i+1$ and $\sigma_i = \sigma_{i+1}$ is a $T^{\sigma_i}$
sector as well.
$\bullet$ {\em{Diagrammatic rules}}: Any partition is converted into an analytical
expression according to the following.
(i) Inside a $T^+$ sector ordinary Feynman rules for propagator and vertices
apply.
(ii) Inside a $T^-$ sector the complex conjugate of Feynman rules hold.
(iii) Any internal sector contributes a $(-1)$ factor.
(iv) Finally, a line connecting two different sectors $S^\prime$ and $S^{\prime\prime}$
corresponds, in momentum space, to a factor
\begin{eqnarray}
\delta_{ij}\,(\not\! p + \mu) \, 2\pi\,\theta(\pm p_0) \,
\delta(p^2-\mu^2) \qquad {\rm quarks} \quad , \label{cutf}
\end{eqnarray}
\begin{eqnarray}
\delta_{ab}\,\Bigl( -g_{\mu\nu} \, 2\pi\,\theta(\pm k_0)\,
\delta(k^2) + k_\mu k_\nu \cdots \Bigr ) \quad {\rm gluons} \quad ,
\label{cutb}
\end{eqnarray}
where the dots in the second stand for gauge terms that decouple in all the
$W$ functions we will calculate and the $\pm$ applies according to whether the
sector numbers satisfy $s(S^\prime) \gtrless s(S^{\prime\prime})$.
$\bullet$ {\em{Sum}}: The contribution of order $g^N$ to $W$ is obtained by summing
the contribution of all inequivalent partitions so obtained and multiplying
the result by the appropriate combinatorial factor.
The inclusion of the case $T^0=1$ of no ordering is taken into
account by the following observation.
Single fields $\Phi(x)$ are included in the above scheme by allowing external
sectors with only one field as argument: $\Phi(x)=T^\pm\bigl (\Phi(x)\bigr )$. In this way the single partitions of a graph do
depend on the choice of the sign, but the sum, expectedly, does not. This,
in particular, provides the algorithm for computing Wightman functions in PT.
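As the simplest illustration of the rules, consider the free two-point function,
written as $\langle T^+ [\psi(x)]\,T^-[\overline \psi(y)]\rangle$: at order $g^0$
the only partition consists of the two external sectors joined by a single cut
fermion line, and rule (iv), taken with the $+$ sign, gives in momentum
representation
\begin{eqnarray*}
W_0(p) = (\not\! p + \mu)\, 2\pi\,\theta(p_0)\,\delta(p^2-\mu^2) \quad ,
\end{eqnarray*}
the free Wightman function that will repeatedly appear in the following.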
Some comments about the above Steinmann rules are in order. Their iterative derivation is based on the following inputs:
(i) the equations of motion of the model;
(ii) Wightman axioms
for the Wightman functions (including locality, but excluding positivity);
(iii) on-shell normalization
conditions.
Within these assumptions the solution provided by the above rules is shown to
be unique.
Concerning the last point we emphasize that, whenever needed, an IR regulator
must be at work (which one is suitable for the models considered here will be
discussed later).
Moreover Steinmann himself emphasizes that no use of the asymptotic condition
is ever made. This is most welcome for us since, in the contrary case, it would imply some
assumption on the IR asymptotic dynamics: this is exactly what we
do not want to do.
The above rules provide the tool necessary for computing in PT, at least in
principle, all the correlation functions of the gauge invariant charged
fields, such as the ``quark'' $(\ref{PsiPT})$: this algorithm provides us immediately
with the ``quantum part'' of the calculation, {\em i.e.} that part that only
involves the quantum fields in the r.h.s of $(\ref{PsiPT})$.
About this part one should also observe that all the degrees of freedom,
physical as well as unphysical, are associated with local fields that propagate
in a causal way.
However, there remains the ``classical part'' of the calculation, consisting in
checking whether the $d^4k_j$ integrations involving both the VEV's and
the $\hat f$'s we have chosen
(which should provide the decoupling of the unphysical degrees of freedom)
are well defined.
We face this problem in the next section where we only consider two-point
functions, because the rules we have reported above are somewhat unusual and
more complicated than the Feynman rules everybody is used to: we had better start
learning the new game in the simplest case.
\section{Two-point Functions} \label{sec4}
Our aim is to see how the algorithm given in the preceding section works in
the case of the two-point function
\widetext
\begin{eqnarray}
&& \int d^4 x \,e^{i\,p\cdot x}~
\int d^4 y \,e^{i\,q\cdot y}~
\langle \Psi(x;m)\overline \Psi(y;n)\rangle
= (2\pi)^4 \delta_4(p+q)~W(p,m;q,n) = \nonumber \\
&& = \sum_{M,N=0}^\infty \quad \sum_{\sigma,\tau=\pm,0} (-ig)^M(ig)^N\,
\prod_{i=1}^M \int {d^4 k_i \over (2\pi)^4}~
\prod_{j=1}^N \int {d^4 \ell_j\over (2\pi)^4} \quad \times \label{Wpmqn} \\
&& \times ~~{\zeta_\sigma^{1/2}} \, \langle V^\dagger_\sigma\rangle^{-1} \,
\chi_\sigma(m)\,\hat f_{\sigma\,M}^{\mu_1\cdots \mu_M}(k_1,\cdots,k_M;m) ~~~~
{\zeta_\tau^{1/2}} \,\langle V_\tau \rangle^{-1} \,
\chi_\tau(n)\, \hat f_{\tau\,N}^{\nu_1\cdots \nu_N}(\ell_1,\cdots,\ell_N;n)~\times \nonumber \\
&& \times~~ \bigl \langle T^\sigma
\bigl [ \hat {\bf A}_{\mu_M}(k_M) \cdots
\hat {\bf A}_{\mu_1}(k_1)\,\hat \psi \bigl (p-\sum k_i\bigr) \bigr]\,
T^\tau \bigl [ \hat {\overline \psi}(q-\sum \ell_j)\,
\hat {\bf A}_{\nu_1}(\ell_1)\cdots \hat {\bf A}_{\nu_N}(\ell_N) \bigr ]\,
\bigr \rangle \quad . \nonumber
\end{eqnarray}
\narrowtext
Due to the presence of the string vectors $m$ and $n$, this two-point function extends
equation $(\ref{WQdef})$, which will be recovered at the end of this section.
In analogy to $(\ref{wdef})$ we will denote the amputation of
$(\ref{Wpmqn})$ by
\begin{eqnarray}
(2\pi)^4\delta_4(p+q)\,w(p,m;q,n) =\gamma_\alpha\,t^a\,\bigl
\langle Q^{a\alpha}(p,m)~ \overline Q^{\,b\beta}(q,n)\bigr \rangle\,\gamma_\beta \, t^b \quad ,
\end{eqnarray}
where the $Q$'s are the currents defined in $(\ref{mDirac})$.
Consistently with Steinmann's assumptions, we assume that the QCD
Lagrangian $\cite{NO}$ has been IR regulated and
renormalized with on-shell normalization conditions.
Up to order $g^2$ the calculation is essentially abelian:
the colour matrices in the two vertices contract to $C_{_F}$($\to 1$ for QED)
and there is no three-gluon vertex.
To this order, therefore, one can think of regularizing IR divergences by
giving a mass $\lambda$ to the photon/gluon and UV
divergences by dimensional regularization:
$4 \to 4 - 2\,\varepsilon,\,\varepsilon>0$.
As a matter of fact, on going to order $g^4$ it will be seen in $\cite{dEMi}$ that
the mass regularization is not adequate and we shall use dimensional
regularization $4 \to 4 + 2\,\epsilon,\,\epsilon>0$ for the IR $\cite{GM,MS}$
(this IR $\epsilon$ should not be
confused with the UV $\varepsilon$, anyway they will never be
simultaneously used) and non-lagrangian Pauli-Villars $\cite{BD}$ for UV.
Details about the problems connected with the choice of the regularizations are given in
Section \ref{sec6}.
It is convenient to group the graphs contributing to the VEV in
$(\ref{Wpmqn})$ in the following way:
(1) Usual or local graphs: those with the $M=N=0$ in the above double series, {\em i.e.}
the graphs contributing to the Wightman function
$\langle \psi(x)\overline\psi(y)\rangle$.
(2L) Left graphs: $M>0,\,N=0$.
(2R) Right graphs: $M=0,\,N>0$, specular to the left graphs.
(3) Left/Right graphs: both $M>0$ and $N>0$.
This is exemplified by the four graphs in FIG. $\ref{fig: Fig1}$, which display the graphs
contributing to order $g^2$.
The sector partitions of the above graphs depend on whether either
$m$ or $n$ are chosen in ${\cal C}_\pm$ or in the complement ${\cal C}_0$
of the light cone.
To cover all the nine possibilities, it would be sufficient to consider only
five cases, thanks to the Dirac conjugation properties of the fermion field, equation
$(\ref{DiracConjx})$. However, we will further restrict ourselves
to the three cases that are most interesting for our purposes:
(A) $m\in{\cal C}_+$, $n\in{\cal C}_-$;
(B) $m\in{\cal C}_+$, $n\in {\cal C}_0$;
(C) $m,\,n\in {\cal C}_0$.
The discussion of the remaining cases is, after these, a simple exercise.
In any event, the lowest order graph, common to all cases, contributes the
free two-point Wightman function of the spinor field, equation $(\ref{Wfree})$.
\subsection{$m\in{\cal C}_+$, $n\in{\cal C}_-$} \label{sec4a}
Only the term of $(\ref{Wpmqn})$ with $\sigma = +,\,\tau =-$ contributes:
there are only two external sectors, sector 1 on the right, which is a $T^-$
sector, and sector 2 on the left, which is a $T^+$ sector.
Since the two sectors are of different type, there can be no internal
sectors.
The partitions of the above six graphs are thus obtained by drawing a
cutting vertical line in all possible positions.
In the cut lines we adopt the convention that momentum always flows from right to left,
{\em i.e.} from sector 1 to sector 2 so that the replacement rules
$(\ref{cutf})$ and $(\ref{cutb})$ are always taken with the plus sign.
All this resembles, and is nothing else but, the familiar Cutkosky-Veltman
cutting rules. It should be noted that this regards only the VEV in the last line of
$(\ref{Wpmqn})$. The $\hat f$-vertices contributed by the string operators, not even drawn in
FIG. $\ref{fig: Fig1}$, are not touched upon by Steinmann rules: their denominators are
instead prescribed by our definition $(\ref{PsiPT})$.
In addition, this identification of Steinmann rules with Cutkosky-Veltman
rules happens only thanks to the choice made for $m$ and $n$.
Different choices, as well as VEV's with more
than two external sectors, are covered only by Steinmann rules.
The partitions drawn in FIG. $\ref{fig: Fig2}$-$\ref{fig: Fig4}$ refer to $(\ref{Wpmqn})$,
{\em i.e.} to the whole $\langle \Psi \overline \Psi \rangle$,
not only to the VEV in $(\ref{PsiPT})$: the vertical lines represent the
string denominators of the $\hat f$'s, whereas each vertex on a vertical
line -- an empty circle -- contributes a factor proportional to either
$ g\, m_\mu $ or $ g\, n_\nu $;
there also is a 4-dimensional integration for the loop.
Concerning graph A, its partitions are given in FIG. $\ref{fig: Fig2}$.
Only the one marked {\tt (a)} is non-zero, the other
two vanish thanks to both mass and wave function
on-shell normalization conditions.
Graph BL has the two partitions given in FIG. $\ref{fig: Fig3}$ and named {\tt (bL)} and {\tt (}$\zeta${\tt L)}.
There are the two specular and
complex conjugate partitions {\tt (bR)} and {\tt(}$\zeta${\tt R)} from BR.
Graph C has the only partition {\tt (c)} given in FIG. $\ref{fig: Fig4}$.
Graph TL too has only the partition {\tt (tL)} given in FIG. $\ref{fig: Fig5}$.
There also is the partition {\tt (tR)} complex conjugate of the above.
We start with discussing the last graph.
It is ill-defined because its contribution to $W_1(p,m;q,n)$ is proportional
to the integral
\begin{eqnarray}
\int d^4 k~{{ g~m^\mu } \over{m \cdot k - i \,0}}~
{{ g~m^\nu } \over{m \cdot (k - k) - i \,0}}~
{{- i\, g _{\mu\nu} + \cdots} \over{k^2 - \lambda^2 + i\, 0}}
\label{4.3}
\end{eqnarray}
that is not defined. Even in QED, where, due to the absence of colour matrices,
one could take for $\hat{f}_2$ the symmetrized form
\begin{eqnarray}
\hat{f}_{+2}^{\mu \nu}(k_1,k_2;m) =
{1 \over 2!}~{i\,m^\mu \over {m\cdot k_1 - i \,0}}~
{i\,m^\nu \over{m \cdot k_2 - i \,0}}~, \nonumber
\end{eqnarray}
the momentum conservation $k_1 = - k_2$ from the photon propagator would
yield the integral
\begin{eqnarray}
\int d^4 k~{1 \over {k^2 - \lambda^2 + i \,0}}~
{1 \over {m \cdot k - i \,0}}~{1 \over {m\cdot k + i \,0}}
\nonumber
\end{eqnarray}
plagued with a pinch singularity. So one has to get rid of it.
This is exactly the task of the
factors $\langle V^\dagger_\sigma \rangle^{-1}$ in $(\ref{Wpmqn})$, as we
now explain.
The initial observation is that, thanks to translation invariance, the VEV
of $V(x;m)$ cannot depend on $x$. So it may only
be an (ill-defined) constant times the identity matrix in colour space.
Imagine now that the theory has been
provisionally regularized by defining it on a space-time of finite volume
$\Omega$: translation invariance is temporarily broken and momentum conservation
does not hold, so that $(\ref{4.3})$ is now well defined: all the graphs depend
on $\Omega$ and tend to the expression that the above rules provide for them in the limit
$\Omega \to \infty$. However, before the limit is taken and up to order
$g^2$, the factor $\langle V^\dagger_+ \rangle^{-1}$ times $W_0(p)$
provides exactly the partition {\tt (tL)}, but with opposite sign.
Independently of any heuristic explanation, the factors
$\langle V^\dagger_\sigma \rangle^{-1}$ are the instruction to neglect
all the graphs including self-interactions of the strings, such as the one
given in FIG. $\ref{fig: Fig8}$,
{\em i.e.} the graphs that can be disconnected by a single cut in the
string associated with the field $\Psi(x;m)$ -- one could call them One String Reducible graphs.
Likewise, $\langle V_\sigma \rangle^{-1}$ operates on the One String
Reducible graphs associated with the string of the field
$\overline{\Psi}(y;n)$.
We thus arrive at the conclusion that to order $g^2$ only the six partitions
{\tt (a)}, {\tt (bL)}, {\tt (bR)}, {\tt (c)} and {\tt(}$\zeta${\tt L)}, {\tt(}$\zeta${\tt R)} survive, as well
as the counterterms coming from the expansion of
$\zeta_+ = \zeta_- \simeq 1 + {\alpha/\pi}\, \zeta_1$.
For example the partition {\tt(}$\zeta${\tt L)} can be parameterized in the form:
\begin{eqnarray}
{\tt (}\zeta {\tt L)} = {\alpha\over \pi}\, C_{_F}\,
\left [ \gamma(\beta(m,p);{\rm UV},{\rm IR}) + \delta (\beta(m,p))\,
{\not\! m \,\mu \over m \cdot p} \right ]\,W_0(p) \label{zetal}
\end{eqnarray}
where, in terms of the ultraviolet and infra-red cutoffs
\begin{eqnarray}
&& {\rm UV} = {1\over \varepsilon} - \gamma_{_{E}} + \ln{ 4\pi \kappa^2 \over \mu^2}
\hspace{3.0 cm} {\rm{\small{dimensional~regularization}}}~, \label{UV} \\
&& {\rm IR} = \ln {\lambda \over \mu}
\hspace{5.6 cm} {\rm{\small{mass~term~for~the~vector~meson}}}~, \label{IR}
\end{eqnarray}
and of the functions
\begin{eqnarray}
&& \beta(m,p) = \sqrt{ 1-{ {m^2\,p^2} \over (m\cdot p)^2} }~, \quad
m,\,p\in {\cal C}_\pm \Rightarrow 0<\beta<1 ~, \quad
m \to p \Rightarrow \beta \to 0 ~, \label{beta} \\
&& B(\beta) = {1 \over 2\, \beta}\,
\ln \,\left \vert {1+\beta \over 1-\beta}\right \vert ~, \label{B} \\
&& \Xi(\beta) =
\left [{1\over \beta} \,{\rm Li}_2(\beta)
+{1\over 2\beta}\, {\rm Li}_2\left (-{1+\beta\over 1-\beta}\right ) \right ] +
[\beta \to - \beta] \quad , \label{Xi}
\end{eqnarray}
the calculation of the invariant functions $\gamma$ and $\delta$ gives the result:
\begin{eqnarray}
&& \gamma(\beta;{\rm UV}, {\rm IR}) = {1 \over 2}\,
\Biggl \{ {1\over 2}\,{\rm UV} + 1 + B(\beta)\,
\left ( 2 \,{\rm IR} + \ln{1-\beta^2\over 4} + 1
\right )-~\Xi(\beta) + \label{gamma} \\
&& \hspace{3.2 cm} +\left ( {1\over \xi}-1\right )
\left ({1\over 4}\, {\rm UV} + {\rm IR} + {7\over 4} -
{1\over 2}\,{\ln \xi\over 1-\xi} \right)
\Biggr \} ~ , \nonumber \\
&& \delta (\beta) = - {1\over 2}\,B(\beta) \quad . \label{delta}
\end{eqnarray}
In $(\ref{gamma})$ the contributions
of the $g_{\mu\nu}$ and of the longitudinal terms of the vector meson
propagator are the first and the second line, respectively.
Likewise, the calculation of {\tt(}$\zeta${\tt R)} is obtained by $(\ref{zetal})$ with the
replacement:
\begin{eqnarray}
{\tt(}\zeta {\tt R)} = {\alpha \over \pi}\, C_{_F}\,W_0(p) \,\Bigl [ m \to n \Bigr ] \quad . \label{zetar}
\end{eqnarray}
Obviously, the choice of $\zeta_1$ can only
modify the invariant function $\gamma$.
The first thing to note about the above graphs is that the coefficient of
UV in $\gamma$ does not depend either on $p$ or on $m,\,n$. Therefore, this
dependence (as well as the dependence on $\xi$) can be renormalized away.
The second thing to note is that the coefficient of IR -- proportional to
$B(\beta(m,p))+B(\beta(n,p))$ -- does depend on $p$:
the infra-red divergence cannot be eliminated by renormalization.
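Note, in this respect, that in the limit $m \to p$ one has $\beta(m,p)\to 0$ with
\begin{eqnarray*}
B(\beta) = 1 + {\beta^2 \over 3} + O(\beta^4) \quad ,
\end{eqnarray*}
so the coefficient of IR tends to the finite, non-vanishing value $B(0)=1$: the
divergence survives the limit and can only be disposed of by a suitable choice
of $\zeta_1$, as we now do.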
We choose
\begin{eqnarray}
{\alpha\over \pi}\,\zeta_1 = {\alpha\over \pi}\,C_{_F}\,
\left \{-{1\over 2}\,{\rm UV} - 2 \, {\rm IR} + 1
+ \left ( {1\over \xi}-1\right )\left ({1\over 4}\, {\rm UV} + {\rm IR} +
{7\over 4} - {1\over 2}\,{\ln \xi\over 1-\xi}\right)\right \}~,
\end{eqnarray}
in which the finite part of $\zeta_1$ has been chosen in such a way that when both
the limits \mbox{$m\to p,\,n\to -p$} are taken in $(\ref{zetal})$ and
$(\ref{zetar})$ respectively, one obtains
\begin{eqnarray}
{\tt(}\zeta {\tt L)} + {\tt(}\zeta {\tt R)} +
{\alpha\over \pi}\,\zeta_1\, W_0(p) ~
\stackrel {m,-n\,\to\,p}{\longrightarrow} 0~.
\label{lrz}
\end{eqnarray}
We now have to discuss the sector partitions {\tt (a)}, {\tt (bL)}, {\tt (bR)}, {\tt (c)}.
They have in common the 2-body phase-space
\begin{eqnarray}
& \Gamma_2(p) &= \int d\Gamma_2 = \int \! d^4 k\,\theta(k_0)\,\delta(k^2)\,\theta(p_0-k_0) \, \delta((p-k)^2-\mu^2) = \nonumber \\
& & = {\pi\over 2}\,\theta(p_0)\,\theta(p^2-\mu^2)\,\left (1-{\mu^2\over p^2}\right ) \label{2bps}
\end{eqnarray}
and their contribution to the amputated two-point function $(\ref{wdef})$ is
\begin{eqnarray}
&& {w}_1(p,m;-p,n) = {g^2 \over (2\pi)^2}\,C_{_F}\, \int d \Gamma_2~N(p;m,n) \label{dgammaN}
\end{eqnarray}
where
\begin{eqnarray}
N(p;m,n) =
\left[ \gamma^\mu - (\not\! p - \mu) {m^\mu \over {m \cdot k}} \right]
(\not\! p - \not\! k + \mu)
\left[ \gamma^\nu - (\not\! p - \mu) {n^\nu \over {n \cdot k}} \right]
(- g_{\mu \nu} ) \label{N}
\end{eqnarray}
In $N(p;m,n)$ the contribution of each sector partition is clearly identifiable.
The following comments should help.
(a) The factors $(\not\! p - \mu)$ in the square brackets of $(\ref{N})$
are due to the amputation.
(b) The contribution of the spurious degrees of freedom in the gluon
propagator is obtained by the replacements
\[ \delta(k^2)\to \delta(k^2-\lambda^2) - \delta(k^2-\lambda^2/\xi) \]
in the two-body phase-space $(\ref{2bps})$ and
\[ - g_{\mu \nu} \to k_\mu k_\nu/\lambda^2 \]
in $(\ref{N})$. The latter converts each of the square brackets into
$(\not\! p \,- \not\! k - \mu)$ that in turn, on multiplying the factor
$(\not\! p \,- \not\! k + \mu)$, gives
zero -- thanks to the delta function in the fermion phase-space.
(c) The prescriptions $\pm i\,0$ in the string vertex denominators have been
omitted, as they are irrelevant to $(\ref{N})$: indeed, $m$, $-n$ and also
$k$, thanks to $(\ref{2bps})$, belong to ${\cal C}_+$,
so that both $m \cdot k$ and $-n \cdot k$ are strictly positive on the
two-body phase-space.
(d) For the same reason there is no need for IR regularization in
$(\ref{dgammaN})$.
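As a cross-check of $(\ref{2bps})$, the phase-space integral can be computed in the
rest frame $p=(\sqrt{p^2},\vec 0\,)$: the factor $\theta(k_0)\,\delta(k^2)$ reduces
$d^4 k$ to $d^3 k/(2\,\vert \vec k \vert)$ with $k_0 = \vert \vec k\vert$, and the
remaining delta function, $\delta(p^2 - 2\sqrt{p^2}\,\vert\vec k\vert - \mu^2)$,
fixes $\vert\vec k\vert = (p^2-\mu^2)/(2\sqrt{p^2})$, so that
\begin{eqnarray*}
\Gamma_2(p) = {4\pi\,\vert\vec k\vert^2 \over 2\,\vert\vec k\vert}\,
{1\over 2\sqrt{p^2}} = {\pi\,\vert\vec k\vert \over \sqrt{p^2}} =
{\pi\over 2}\left (1-{\mu^2\over p^2}\right ) \quad ,
\end{eqnarray*}
in agreement with $(\ref{2bps})$.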
Use of covariance shows that $w_1(p,m;-p,n)$ can be expressed in terms of
three integrals: one is $\Gamma_2(p)$, equation $(\ref{2bps})$. As for the others,
it is convenient for later use to define
\begin{eqnarray}
I(p;m) = \int d\Gamma_2\,{\Pi(m)\over(m\cdot k)} ~ , \label{Idef}
\end{eqnarray}
where $\Pi(m)$ is a prescription:
\begin{eqnarray}
\Pi(m) = \left \{ \begin{array}{lll}
\pm 1 & {\rm if} & m\in {\cal C}_{\pm} \\
{\rm PV} & {\rm if} & m\in {\cal C}_0
\end{array} \right. ~, \label{Pi}
\end{eqnarray}
and likewise
\begin{eqnarray}
J(p;m,n) = \int d\Gamma_2\,{\Pi(m)\over (m\cdot k)}\,
{\Pi(n) \over (n\cdot k)} ~ . \label{Jdef}
\end{eqnarray}
The results of the calculations, for any $m$ and $n$, are
\begin{eqnarray}
&& I(p;m)=\Gamma_2(p)\, {2\over p^2-\mu^2}\,
{p^2\over \vert p\cdot m \vert }\,B( \beta(m,p) ) ~ , \label{Ires} \\
&& J(p;m,n)=\Gamma_2(p)\,\,{4\over (p^2-\mu^2)^2}\,
{p^2\over \vert m\cdot n \vert}\,B( \beta(m,n) ) ~ , \label{Jres}
\end{eqnarray}
with $B$ given by $(\ref{B})$.
Only one observation is relevant about the above integrals, namely that the
limits $m\to -n$, $m\to p$, $n\to-p$ exist, commute with one another and commute
with the phase-space integration to give
\begin{eqnarray}
&& I(p;p) = \Gamma_2(p)\,{2\over p^2-\mu^2} ~ , \\
&& J(p;m,-m) = J(p;p,-p) = \Gamma_2(p)\,{4\over (p^2-\mu^2)^2} ~ .
\end{eqnarray}
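These limiting values can in fact be read off without any integration: on the
support of $d\Gamma_2$ the two delta functions enforce $k^2=0$ and
$(p-k)^2=\mu^2$, whence $p\cdot k = (p^2-\mu^2)/2$ identically, so that
\begin{eqnarray*}
I(p;p) = \int d\Gamma_2\,{1\over p\cdot k} = \Gamma_2(p)\,{2\over p^2-\mu^2}~,
\qquad
J(p;p,-p) = \int d\Gamma_2\,{(+1)\,(-1)\over (p\cdot k)\,(-p\cdot k)} =
\Gamma_2(p)\,{4\over (p^2-\mu^2)^2}~,
\end{eqnarray*}
the signs being the prescriptions $\Pi(p)=+1$ and $\Pi(-p)=-1$ of $(\ref{Pi})$.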
Since the contribution of sector
partitions {\tt (a)}, {\tt (bL)}, {\tt (bR)} and {\tt (c)} is infra-red finite,
the lesson to be learned from adding this to
the contribution of partitions {\tt(}$\zeta${\tt L)} and {\tt(}$\zeta${\tt R)} (given by $(\ref{zetal})$,
$(\ref{zetar})$) is that the perturbative theory for the field $\Psi(x;m)$,
$m\in {\cal C}_+$, and its Dirac conjugate is plagued with the same
IR pathology as for the gauge dependent $\psi(x)$.
Should one stop here, nothing would have been gained.
The only way to get rid of the IR divergence given by sector partitions
{\tt(}$\zeta${\tt L)} and {\tt(}$\zeta${\tt R)} is to take both the limits $m\to p$ {\em and}
$n\to -p$. In this case, due to the last two formulae, the contribution of sector partitions {\tt (a)}, {\tt (bL)}, {\tt (bR)}, {\tt (c)}
simplifies to
\begin{eqnarray}
w_1(p)
=C_{_F}\,\theta(p_0)\,\theta(1 - \varrho) \,(1 - \varrho)
\left ( \not\! p \, {{1 + \varrho}\over 2} - \mu \right ) \label{w1}
\end{eqnarray}
where
\begin{eqnarray}
\varrho = \mu^2/p^2 \label{rho} \quad ,
\end{eqnarray}
whence, on reinserting the external propagators omitted for the amputation,
taking into account $(\ref{lrz})$ and setting $C_{_F}=1$, one obtains the
$W_1(p)$ appearing in $(\ref{WEres})$ and given by
equations ($\ref{W2par}$), ($\ref{W1}$).
This is the piece of evidence that we can give in this paper, working to
order $g^2$, about the existence -- and, to some extent, the necessity -- of
taking the limit $(\ref{ntop})$, discussed in the Section \ref{sec2}.
The extension of $(\ref{w1})$ to the region $0<p^2 < \mu^2$ is legitimate and trivial.
Also the extension of $(\ref{w1})$ to the region $p^2 < 0$ is trivial --
it also gives zero. But in this case there is a problem of consistency between
this extension, on the one side, and the Steinmann rules and the
limits $m, \, -n\to p$ on the other side. In this region, in fact, the taking
of the limits requires that $m$ and/or $n$ be space-like from the outset
and this, in turn, changes the sector partitions contributing to $w_1$.
It is however plausible to expect that the naive extrapolation of $(\ref{w1})$ to $p^2<0$
is correct: indeed all the sector partitions, even when calculated with the Steinmann
rules suited for $m$ and/or $n$ space-like, should display
either a $\Gamma_2(p)$ or a $W_0(p)$ factor, as encountered in the present
section. If really so, setting $p^2<0$ gives zero, due to the support
properties of these factors, and the limits $m,\,-n \to p$ are quite safe.
We feel however that, in order to check the above mentioned consistency,
presenting the results of the explicit calculation is more convincing.
Also because, should one be interested in the perturbative
theory of the fields with $m\ne p$, there arise some difficulties connected
with renormalizability that are better inspected explicitly.
This is dealt with in the next subsections.
\subsection{$m\in{\cal C}_0$, $n\in{\cal C}_-$} \label{sec4b}
There is again the contribution of local graphs, namely those contributing to the
ordinary Wightman function $\langle \psi(x)\overline \psi(y)\rangle$, {\em i.e.} graph
A of FIG. $\ref{fig: Fig1}$. This is expected to be the same as in the previous
section, as it is independent of the string vectors $m$ and $n$.
Indeed, as commented after the last Steinmann rule in Section \ref{sec3}, we have
the freedom to assign a time-ordering label to each field, being sure that
the final result does not depend on the assignment.
We choose to write
$\langle \psi(x)\overline \psi(y)\rangle=\langle T^+[\psi(x)]\,
T^-[\overline \psi(y)]\rangle$,
that takes us back to the case discussed in the previous section: only
sector partition {\tt (a)} of FIG. $\ref{fig: Fig2}$ gives a non-vanishing contribution.
Let us now discuss the sector partitions of graph BL of FIG. $\ref{fig: Fig1}$. Here the three external vertices must be given sector numbers
as in FIG. $\ref{fig: Fig6}$
and the number $s$ can be given values $1 \le s \le 3$.
So in principle there are five inequivalent partitions.
The two partitions in which $s$ is non-integer have three on-shell lines
joining in the same vertex, so their contribution is zero.
There remain the three sector partitions labelled by
$s=1,\,2,\,3$.
The first -- $s=1$ -- is again {\tt (bL)} in FIG. $\ref{fig: Fig3}$, so its contribution is easily
recovered from $(\ref{dgammaN})$ and $(\ref{N})$, provided the integral
$(\ref{Idef})$ is taken, according to $(\ref{Wpmqn})$
and $(\ref{f10})$, with the ${\rm PV}$ prescription.
In fact, in this case the denominator $m\cdot k$ is no longer positive on the
two-body phase-space. The result is still given by $(\ref{Ires})$.
It is now convenient to consider the sector partitions of graph C
in FIG. $\ref{fig: Fig1}$, postponing to later the
sector partitions of FIG. $\ref{fig: Fig6}$ labelled by $s$ = 2, 3. The sector numbers can only be assigned as in FIG. $\ref{fig: Fig7}$.
Therefore this is again partition {\tt (c)} of FIG. $\ref{fig: Fig4}$, easily recovered from
$(\ref{dgammaN})$ and $(\ref{N})$, provided that in the integral
$(\ref{Jdef})$ the $m\cdot k$ denominator is PV prescribed.
Again the result is provided by $(\ref{Jres})$.
Finally, the only sector partition of the one string reducible graph
TL in FIG. $\ref{fig: Fig1}$ (sector numbers 1 to 4 clockwise from the right vertex) is again
disposed of, thanks to the $\langle V^\dagger_0 \rangle^{-1}$
instruction in $(\ref{Wpmqn})$.
Going back to the other two sector partitions $s=2,\,3$ of FIG. $\ref{fig: Fig6}$,
they both have a $W_0(p)$ factor (the fermion line on the right) and so take
the place of {\tt(}$\zeta${\tt L)} of the previous section.
The contribution of these partitions is parameterized, in analogy with
$(\ref{zetal})$, by
\begin{eqnarray}
{\tt{(zL)}} = {\alpha\over \pi}\, C_{_F}\, \left [ c(\beta(m,p);{\rm UV},{\rm IR}) + d(\beta(m,p);{\rm UV})\,
{\not\! m\,\mu\over m \cdot p} \right ]\,W_0(p)
\label{zl}
\end{eqnarray}
where the invariant functions $c$ and $d$ depend on $m$ and $p$ through
the variable $\beta(m,p)$
(defined in $(\ref{beta})$ and now, with $m \in {\cal C}_0$, satisfying $\beta > 1$)
and on the cutoffs $(\ref{UV})$, $(\ref{IR})$: the result of the calculation gives
\begin{eqnarray}
&c(\beta;{\rm UV},{\rm IR}) &= {\rm UV}\left ({1\over 2} +{3\over 8}\,\beta^{-2}+{1\over 8}(1-\beta^{-2})\,B(\beta) \right )
+{\rm IR}\,B(\beta) + \nonumber \\
& & +{5-\beta^{-2}\over 16}\,\Upsilon(\beta)
+{2\,\beta^{-2} -1\over 4}\,B(\beta) - {3\over 4}\,\beta^{-2} +\nonumber \\
& & -{1\over 4}\left ({1\over \xi}-1\right) \left ( {\rm UV} - 2\,{\rm IR} +1\right ) + {1\over 4\, \xi}\,\ln \xi \quad ,
\label{c} \\
& d(\beta;{\rm UV}) &= {\rm UV}\left (-{3\over 8}\,\beta^{-2} - {1\over 8}\,(1-\beta^{-2})\,B(\beta) \right ) + \nonumber\\
& &+{1\over 4}\,\beta^{-2} - {1\over 4}\,B(\beta) -{1-\beta^{-2}\over 16}\,\Upsilon(\beta) \quad ,
\label{d}
\end{eqnarray}
in which, we recall, $B(\beta)$ is given by $(\ref{B})$ and
\begin{eqnarray}
\Upsilon(\beta) = \left [ {1\over 2\,\beta}\,\ln^2{1+\beta\over 2\, \beta} + {1\over \beta}\,
{\rm Li}_2\left ({1+\beta\over 2\, \beta}\right ) \right ]
+ [\beta \to -\beta] \quad .\label{Upsilon}
\end{eqnarray}
Comparison of the above formulae with the corresponding $(\ref{gamma})$,
$(\ref{delta})$ shows that -- contrary to $\delta$ -- the invariant function
$d$ does depend on UV: this means that no choice of the renormalization
constant $\zeta_0$ introduced in $(\ref{PsiPT})$ (and giving rise to
counterterms proportional to $W_0(p)$, not to $\not\! m\, W_0(p)$)
can cure the divergence. In addition, the coefficients of UV and IR in $c$
both depend on $p$ through $\beta$. As for the IR divergence associated
with a space-like string, the result is not new $\cite{S1}$.
There is a way out of this problem: choosing $m_0=0$ in the rest frame, in which only $p_0 = \mu\ne 0$.
In this case, in fact, {\tt{(zL)}}$\to \alpha \, C_{_F}/\pi\, W_0(p)\cdot ({\rm UV}/2\,+\,$last line of ($\ref{c}$)$)$,
so that a suitable choice of $\zeta_0$ to order $g^2$ removes the divergence.
Unfortunately, the choice $m_0=0$ spoils Lorentz invariance and we will not stick
to it.
To summarize: the perturbative theory involving a charged field, dressed
with a string in a space-like direction, is non-renormalizable -- at least at
finite orders -- due to the $m$-string.
In addition, there survive IR divergences carried by both the $m$- and the $n$-string.
\subsection{$m\in{\cal C}_0$, $n\in{\cal C}_0$} \label{sec4c}
The discussion of local graphs as well as of the sector partitions with
only one string vertex (FIG. $\ref{fig: Fig2}$; FIG. $\ref{fig: Fig6}$ and its analogue giving rise to
a partition {\tt (bR)} and to a contribution {\tt{(zR)}} obtained from ($\ref{zl}$), with a replacement analogous to ($\ref{zetar}$))
presents no novelty with respect to the preceding subsection.
The only novel feature is given by graph C of FIG. $\ref{fig: Fig1}$, where the only
partition is given by assigning sector numbers from 1 to 4 with clockwise
movement, starting from the top right vertex.
This is again recovered from $(\ref{dgammaN})$ and $(\ref{N})$, provided
the integral $(\ref{Jdef})$ is now taken with both $m$ and $n$ space-like,
{\em i.e} with both denominators prescribed by PV.
The result is once more $(\ref{Jres})$.
In this case also, thanks to the {\tt{(zL)}} and {\tt{(zR)}} contributions,
there are UV as well IR divergences due to each string.
Choosing both $m_0=n_0=0$ in the rest frame would eliminate the two problems.
However, once more we refrain from breaking Lorentz symmetry,
also because there is another way out of this impasse.
This is provided exactly by the double limit $m=-n \to p$. The latter has
to be effected by first dragging $p$
into ${\cal C}_0$: this makes the whole two-point function vanish,
due to the support of the $\delta(p^2-\mu^2)$ in
the {\tt{(zL)}} and {\tt{(zR)}} contributions, and to the support of the $\Gamma_2(p)$
two-body phase-space in all the other ones.
At this point taking the limit is safe and gives zero, in agreement with
the naive extrapolation of $(\ref{w1})$ discussed at the end of
subsection \ref{sec4a}.
\section{Outlook of Fourth Order Calculations} \label{sec6}
The calculations of Section \ref{sec4} should make it evident that the
only two-point function free from both UV and IR problems is that
relative to the field ($\ref{defq}$): they provide evidence for the necessity,
rather than the possibility, of taking the limit $n\to p$, whose
meaning and implications we have discussed in the final part of Section \ref{sec2}.
It is also clear that, in order to obtain the result ($\ref{w1}$),
commuting the limit $n\to p$ with the loop integration makes the
calculation by far simpler and that, consistently, only the
diagrammatic rules of subsection \ref{sec4a} have to be used in order to
get a non-vanishing two-point function.
Exactly in this way, we have performed the two-loop calculation of
$W_2$, equation ($\ref{WEres}$), in QED. This receives contributions from 12
graphs for a total of 19 non-vanishing partitions, 10 of which
involve the two-body phase-space, the other 9 the three-body phase-space.
The graphs with only one external fermion line on shell -- as the partitions
{\tt(}$\zeta${\tt L)} or {\tt(}$\zeta${\tt R)} of Section \ref{sec4a} -- have not been included in the counting,
because, much as in the case of ($\ref{lrz}$),
they can be renormalized away with a suitable choice of the
fourth order contribution $\zeta_2$ to the renormalization constant
$\zeta_\pm$.
Several graphs exhibit an IR divergence proportional to $\ln \lambda$.
In this QED calculation, the photon mass regularization has been adopted, for it is well known
not to interfere with either BRST symmetry or unitarity.
Indeed, we have a diagrammatic ({\em i.e.}
without the need of analytic calculations) proof of the decoupling of
unphysical degrees of freedom, in the form of gauge-fixing
parameter independence:
\begin{eqnarray}
\xi \,{\partial \over \partial \xi}\, W_2 = 0
\label{xi}
\end{eqnarray}
Furthermore, we have found that the Grammer-Yennie
$\cite{GY}$ method of control of IR divergences can be extended, in a
straightforward way, to the graphs that include the eikonal string vertices.
This results in a diagrammatic proof (analogous to that of $\cite{KAdR}$)
of a complete cancellation between the IR divergences coming from
the three-body cut graphs and the two-body cut graphs.
What is left is the explicit result of the calculation that we
report below to give concreteness to what we have said,
although in its full form it is not illuminating.
For $p^2<9\,\mu^2$ ({\em i.e.} omitting graphs involving
closed fermion loops, which are irrelevant for the near-mass-shell
asymptotics) the two structure functions, defined by ($\ref{W2par}$) with $i=2$, are given by
\begin{eqnarray}
&& a_2={\frac {\varrho \,\left( -39 - 82\,\varrho + 37\,{{\varrho }^2}\right) }
{16\,\left( 1 - \varrho \right) } } +
{\frac {{{\varrho }^3}}
{2\,\left( 1 - \varrho \right) } }\,\ln (1 - {\sqrt{1 - \varrho }})\,
\ln (1 + {\sqrt{1 - \varrho }}) + \nonumber \\
&&\hspace{0.6 in}+ {\frac {{{\varrho }^2}}
{2\,{\sqrt{1 - \varrho }}} }\,\ln \,{\frac{1 + {\sqrt{1 - \varrho }}}
{1 - {\sqrt{1 - \varrho }}}} +
{\frac {\varrho \,\left( 3 - 31\,\varrho + 2\,{{\varrho }^2} - 2\,{{\varrho }^3} + 2\,{{\varrho }^4} \right) }
{8\,{{\left( 1 - \varrho \right) }^2}}} \,\ln \,\varrho + \nonumber \\
&& \hspace{0.6 in}+ {\frac { \varrho \,\left( -2 + 5\,\varrho - 5\,{{\varrho }^2} + 3\,{{\varrho }^3} - {{\varrho }^4}+
( - 7 - 2\,\varrho - {{\varrho }^2} + 2\,{{\varrho }^3}) \,\ln \,\varrho\right) }
{4\,{{\left( 1 - \varrho \right) }^2}}}\,\ln (1 - \varrho ) + \nonumber \\
&& \hspace{0.6 in}+ {\frac {{{\varrho }^2}\,\left( -1 + 2\,\varrho \right) }
{8\,\left( 1 - \varrho \right) }}\,\ln^2 \,\varrho +
{\frac {\varrho \,\left( -5 - 6\,\varrho + 3\,{{\varrho }^2} \right)}
{4\,{{\left( 1 - \varrho \right) }^2}}}\,\left( {\rm Li}_2(\varrho )- {\rm Li}_2(1)\right) \\
&& \nonumber \\
&& b_2= {\frac {\varrho \,\left( 5 + \varrho \right) \,\left( -9 + 2\,\varrho \right) }
{8\,\left( 1 - \varrho \right) }} -
{\frac {{{\varrho }^2}}
{2\,\left( 1 - \varrho \right) }} \ln (1 - {\sqrt{1 - \varrho }})\,
\ln (1 + {\sqrt{1 - \varrho }}) + \nonumber \\
&& \hspace{0.6 in}- {\frac {\,\varrho }
{2\,\sqrt{1 - \varrho} }} \,\ln {\frac {1 + {\sqrt{1 - \varrho }}}
{1 - {\sqrt{1 - \varrho }}}}\, +
{\frac {\varrho \,\left( -2 - 13\,\varrho + 6\,{{\varrho }^2} - 5\,{{\varrho }^3} + {{\varrho }^4} \right) }
{4\,{{\left( 1 - \varrho \right) }^2}}}\,\ln \varrho\, + \nonumber \\
&& \hspace{0.6 in}- {\frac {{{\varrho }^2}}
{8\,\left( 1 - \varrho \right) }}\,{{\ln^2 \varrho }} \,+
{\frac { \varrho \,\left( 3\,\varrho - 7\,{{\varrho }^2} + 5\,{{\varrho }^3} - {{\varrho }^4} +
( - 13 + 4\,\varrho + {{\varrho }^2} )\,\ln \varrho \right) }
{4\,{{\left( 1 - \varrho \right) }^2}}}\,\ln (1 - \varrho )\, + \nonumber \\
&& \hspace{0.6 in}+ {\frac {\varrho \,\left( -13 + 2\,\varrho + 3\,{{\varrho }^2}
\right) }{4\,
{{\left( 1 - \varrho \right) }^2}}}\,\left(
{\rm Li}_2(\varrho )-{\rm Li}_2(1) \right)
\end{eqnarray}
whose asymptotic form for $\varrho= \mu^2/p^2\to 1$ is equation ($\ref{W2}$). It should also be noted
that in the ultraviolet regime $p^2\to +\infty$, {\em i.e.} for $\varrho \to 0$, $a_2$ and $b_2$ vanish respectively
as $(\ln p^2)/p^2$ and $1/p^2$. So, when dispersed in $p^2$, they need no subtraction.
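For instance, the leading small-$\varrho$ behaviour of $a_2$ comes entirely from
its $\ln\varrho$ term, all the other contributions being $O(\varrho)$ or smaller:
\begin{eqnarray*}
a_2 = {3\over 8}\,\varrho\,\ln \varrho + O(\varrho) =
-\,{3\over 8}\,{\mu^2\over p^2}\,\ln{p^2\over \mu^2} + O\left ({\mu^2\over p^2}\right ) \quad ,
\end{eqnarray*}
consistently with the quoted $(\ln p^2)/p^2$ falloff.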
Concerning the QCD counterpart of the above calculation, stated
in equation ($\ref{ede}$) of Section \ref{intro}, apart from the contribution
of the non-planar QED graphs (those where the colour matrices occur
in the sequence $t^at^bt^at^b=C^2_{_F}- {1\over 2}\,C_{_A}C_{_F}$), there are 15 more graphs giving rise to 23
partitions, 8 of them involving two-body phase-space, the other 15 the
three-body phase-space.
A due remark concerns the IR regularization:
giving a mass to the gluon, even
according to $\cite{CF}$, preserves BRST symmetry, but only formally preserves unitarity
in the limit $\lambda \to 0$.
As a matter of fact we have verified that, in this limit, the l.h.s. of ($\ref{xi}$)
does not vanish.
We have therefore abandoned this
regularization, adopting dimensional regularization for the IR
$\cite{GM,MS}$ and replacing dimensional regularization for the UV with
non-lagrangian Pauli-Villars $\cite{BD}$.
With these regularizations, equation ($\ref{xi}$) indeed holds also in the
nonabelian case.
In writing equation ($\ref{ede}$), we have recalculated the abelian part (proportional to
$C^2_{_F}$) with the new IR and UV regularizations, with the expected
result that the cancellation of IR divergences
holds also in the new scheme. These details
will be given in a forthcoming paper on the above described two-loop
calculation $\cite{dEMi}$.
Concerning instead the non-abelian part, we can say,
referring to the factor ${11\over 6}$ appearing in ($\ref{ede}$), that ${5\over 6}$ comes from the sum of all
the graphs that include gluon self-energy corrections; a further 1 comes from the
$C_{_A}C_{_F}$ part of the non-planar abelian graphs. For the other graphs (whose
sector partitions are, in some cases, IR divergent even as $1/\epsilon^2$, not just as $1/\epsilon$)
there is a complete
cancellation between the two-body cut and the three-body cut contributions to each of them.
\section{Conclusions} \label{sec5}
We have shown how to construct BRST invariant
composite fermion fields that carry the global quantum numbers of the
electron and of the quark in QED and QCD.
The construction consists in
dressing the ordinary Dirac field with a rectilinear string whose
space-time direction is characterized by a 4-vector that, provisionally,
breaks the Lorentz covariance properties of the field.
In perturbation theory the string generates new graphs characterized by the
occurrence of eikonal vertices. These new vertices
require prescriptions (either $\pm i\, 0$ or PV) whose choice is uniquely
dictated by the Dirac conjugation properties of the field.
Furthermore, after going to the momentum representation, the 4-vector
characterizing the string must be chosen proportional to the 4-momentum of the
field.
This choice:
(i) restores Lorentz covariance,
(ii) averts some IR as well as some nonrenormalizable UV divergences.
The second point indicates that, as a matter of fact, there is little choice.
The whole construction survives the check of a fourth order calculation
of the two-point function in PT, performed both in QED and QCD.
If these fields are to survive further and more stringent
verifications, one can conclude that global charges associated with a Gauss law
imply, for the fields carrying such charges, delocalization properties considerably more involved than the single
1-dimensional string in 3-space, somewhat popular in the literature: since the string is rectilinear in 4-momentum space, in coordinate representation
the fields rather appear spread out all over Minkowski space, exhibiting a kind of
candy-sugar structure.
The construction gives -- as an extra bonus -- different
results for the IR asymptotic dynamics of QED and QCD respectively. In particular,
it hints at a mechanism of confinement according to which
the quark so constructed seems to behave as a free field at any momentum
scale.
The construction presented raises several problems; but
the algorithm we have given in this paper also provides the possibility of
facing them. Among others, a few still involve two-point
functions:
\begin{itemize}
\item Extending to any order in PT the above results about:
(i) IR cancellation in QED and
(ii) IR non-cancellation and factorization in QCD.
\item The verification that any gauge invariant coloured field, first of all
the gluon, has the same behaviour as the quark.
\end{itemize}
Other problems pertain instead to the study of correlation functions with more than
two points:
\begin{itemize}
\item The verification that such fields do indeed carry the
expected global charges, {\em e.g.} that they satisfy in PT at least the
weak commutation relations
\[
\langle \, e(x)\,[ Q\,, \overline e(y)]\, \rangle =
\langle \, e(x)\, \overline e(y) \,\rangle ~,
\]
{\em etc.}, where $\displaystyle{Q=\int d^3 x :\!\overline \psi\, \gamma_0 \, \psi \!:\!(x)}$
is the electric charge (and the analogue in QCD).
\item A study of the VEV of the algebra of Lorentz generators (the boosts
in particular), in order to represent, in explicit way, the mechanism that
prevents a unitary implementation of Lorentz symmetry in the charged
superselection sector generated by $e(x)$
-- or, alternatively, to show how this mechanism is evaded by the fields we have proposed.
\item A comparison of the present approach with the well established results
of QED such as those about inclusive cross sections or the electron $g-2$ and, in general, the
impact of this construction -- if any -- on the $S$ matrix.
\item For QCD, the proof of the scenario we have hinted at in Section \ref{intro}, namely
that amplitudes involving gauge invariant coloured fields either vanish
or disconnect into the product of free two-point functions relative to
coloured fields times an amplitude that only involves colour singlet fields.
\end{itemize}
These problems are already under investigation and we will report
on them in the near future.
\acknowledgements
The authors are greatly indebted to Dr. M. Mintchev, Dr. G. Morchio and Dr. D. McMullan for extensive
discussions on these topics. Dr. Mintchev is also acknowledged for having carefully and thoroughly read the manuscript.
E.d'E. is grateful to Dr. B.R. Webber and Prof. J.B. Griffiths for the warm
hospitality at the Cavendish Laboratory, University of Cambridge (GB), and at the Department
of Mathematical Sciences, Loughborough University (GB) respectively, where great part of this
work was done. S.M. also wishes to thank Prof. J.B. Griffiths for the encouragement and patient
support during the preparation of this work.
\section{Introduction}
\subsection{Background}
In many basic problems in high-dimensional statistics and machine learning, there appear to be fundamental gaps between the performance of the information-theoretically best estimator and the best estimator that can be computed in polynomial time. These are called {\em computational vs. statistical tradeoffs}. Recently, there has been an effort
to study these gaps in a systematic fashion, in particular by forging reductions between some of these problems.
For example, finding sparse directions with large variance in the spiked covariance model turns out to be at least as hard as finding small planted cliques, see e.g.~\cite{BerthetRigollet:13,MaWu:15,BrBrHu:18}.
However, these reductions leave much to be desired as there are relatively few examples where reductions are known that map natural distributions on one problem to natural distributions on another.
In this paper, we will explore other popular methodologies for predicting where average-case problems become hard, which come from statistical physics and revolve around a powerful algorithm called belief propagation. Our key example originates from the following special case of community detection in the stochastic block model. We start with a fixed partition of $n$ nodes into $q$ (almost) equal sized communities. The probability of connecting any pair of nodes with an edge is
$k q \theta /n + k(1-\theta)/n$
if they belong to the same community and otherwise is
$k(1-\theta)/n$,
where edges in the graph are sampled independently. It is easy to see that the average degree in this graph is $k$ and that $\theta$ is a measure of the strength of the communities.
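To fix ideas, here is a minimal sketch of the sampling procedure just described
(in Python; the function name and the balanced assignment of nodes to communities
are our own illustrative choices, not part of the model):
\begin{verbatim}
import random

def sample_sbm(n, q, k, theta):
    # Fixed (almost) balanced partition of the n nodes into q communities.
    sigma = [i % q for i in range(n)]
    p_in = k * q * theta / n + k * (1 - theta) / n   # same community
    p_out = k * (1 - theta) / n                      # different communities
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if sigma[u] == sigma[v] else p_out
            if random.random() < p:
                edges.append((u, v))
    return sigma, edges
\end{verbatim}
A quick simulation confirms that the average degree concentrates around $k$.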
The goal is, given a graph sampled from this model, to find a $q$-partition of its nodes whose parts have non-trivial correlation (i.e. better than random) with the true communities. A striking prediction from statistical physics~\cite{DKMZ:11} is that the problem is efficiently solvable when
$k \theta^2 > 1$
while the information-theoretic threshold for the problem is different for large values of $q$.
By now the existence of efficient algorithms when $k\theta^2 > 1$ has been established \cite{MoNeSl:15,Massoulie:14,MoNeSl:18,BoLeMa:15,AbbeSandon:15} as well as the fact that for $q \geq 5$, the information-theoretic threshold is strictly below this bound \cite{AbbeSandon:15,BMNN:16}.
The threshold of
$k \theta^2 > 1$
is called the {\em Kesten-Stigum bound} and will play an important role in our paper. It is believed that for some problems, like the block model, the structure of the space of solutions changes in a fundamental way beneath the Kesten-Stigum bound, and this is the basis for the predictions about computational hardness.
Fundamentally, these predictions of computational difficulty all revolve around studying the behavior of belief propagation. In what follows we will explain some of the intuition behind belief propagation along with how computational versus statistical phase transitions are predicted. See also \cite{MezardMontanari:06,KMRSZ:07}.
The way to think about belief propagation in the stochastic block model is to start with a local view around a node. With high probability, its neighborhood will be tree-like. In fact, we can model it (along with which community each node belongs to) as a Markov process on a tree. This model is called the {\em broadcast tree model}. We start with a complete $k$-regular tree of height $d$ (or alternatively we generate a random tree of height $d$ in which the number of children of each node is a Poisson random variable with expectation $k$). The root is assigned one of the $q$ possible labels at random. Next we propagate labels from the root to the leaves by, at each step, assigning a child the same label as its parent with probability $\theta$ and otherwise assigning it a uniformly random label. At the end, we are given the labels of the leaves and the goal is to use this information to guess the label of the root. We want our guess to be correct with some advantage over random guessing, and we want the advantage to be bounded away from zero independently of $d$. Belief propagation is an iterative algorithm that provably computes the posterior distribution on the label of the root given the labels of the leaves. So when belief propagation fails to guess the label of the root with some nonzero advantage that is independent of $d$, it is because the problem is information-theoretically impossible. Belief propagation is based on the idea that conditioned on the label of some node, the labels of its neighbors are independent. This is exactly true on a tree and approximately true in a sparse random graph with few short cycles.
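The following sketch (again in Python, with illustrative names of our own
choosing) generates leaf labels from the broadcast tree model and then runs
belief propagation bottom-up; since the graph is a tree, the message passing
below computes the exact posterior on the root:
\begin{verbatim}
import random

def broadcast(k, theta, q, d):
    # Root gets a uniform label; each child copies its parent with
    # probability theta and otherwise gets a fresh uniform label.
    root = random.randrange(q)
    level = [root]
    for _ in range(d):
        nxt = []
        for parent in level:
            for _ in range(k):
                nxt.append(parent if random.random() < theta
                           else random.randrange(q))
        level = nxt
    return root, level               # true root label, k**d leaf labels

def bp_root_posterior(leaves, k, theta, q, d):
    # Channel matrix: M[a][b] = P(child label = b | parent label = a).
    M = [[theta + (1 - theta) / q if a == b else (1 - theta) / q
          for b in range(q)] for a in range(q)]
    # A leaf sends up the indicator of its observed label.
    msgs = [[1.0 if a == c else 0.0 for a in range(q)] for c in leaves]
    for _ in range(d):
        nxt = []
        for i in range(0, len(msgs), k):     # the k children of one node
            up = [1.0] * q
            for m in msgs[i:i + k]:
                for a in range(q):
                    up[a] *= sum(M[a][b] * m[b] for b in range(q))
            z = sum(up)
            nxt.append([u / z for u in up])
        msgs = nxt
    return msgs[0]   # posterior P(root = a | leaves), uniform prior
\end{verbatim}
Guessing the root as the coordinate maximizing the returned vector, one can
check empirically that for $k\theta^2 > 1$ the advantage over random guessing
does not decay as $d$ grows.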
\begin{quote} {\em The key to using belief propagation to locate phase transitions is that it has its own intrinsic notions of complexity.}
\end{quote}
In the broadcast tree model, the Kesten-Stigum bound is the threshold $k \theta^2 > 1$. (The Kesten-Stigum bound in the stochastic block model is usually stated in terms of the within- and between-community parameters $a$ and $b$, but the two formulations are actually the same, which can be seen by relating $a$, $b$, $\theta$ and $k$.) It turns out that the Kesten-Stigum bound coincides with where linear statistics stop working. In fact, in seminal work, Kesten and Stigum \cite{KestenStigum:67,KestenStigum:66}
showed that above this bound it is possible to guess the label of the root (and beat random guessing) just by tallying the number of labels of each type among the leaves. Moreover, it is not too hard to deduce from their results~\cite{MosselPeres:03} that below the Kesten-Stigum bound, this method fails.
Perhaps surprisingly, it is still possible to guess the label of the root and beat random guessing beneath the Kesten-Stigum bound when $q \geq 5$. However, this requires using {\em higher-order} information about which labels appear where in the tree~\cite{Mossel:01,Sly:09,Sly:09a}.
Alternatively, the Kesten-Stigum bound can be thought of through the lens of robustness. Suppose we inject random noise at the leaves. In particular, suppose we overwrite the label of each leaf to a random value with probability $\eta$. Then above the Kesten-Stigum bound, reconstructing the root in the face of noise is still possible, but beneath the Kesten-Stigum bound it is information-theoretically impossible~\cite{JansonMossel:04}. Thus the Kesten-Stigum bound is the location in parameter space where the typical posterior distribution on the label of the root becomes highly sensitive to noise.
Fundamentally, each of these methodologies represents a way to extract information from belief propagation about where the posterior distribution on the label of the root becomes highly complex. The notion of complexity is expressed in many different ways \--- for example, the failure of linear statistics, lack of robustness, or (in the physics language) stability of the trivial fixed point. In this paper, we take an approach that is grounded in computational complexity for studying the posterior distribution in the broadcast tree model. (Alternatively, we take a circuit complexity approach to studying the complexity of the problem that belief propagation is actually solving).
\begin{quote} {\em We establish some tantalizing parallels between phase transitions (in the traditional meaning of the phrase, where it refers to changes in the structure of the solution space) and phase transitions in the circuit complexity of the inference problem. }
\end{quote}
\subsection{Our Results}
In this paper, we study the circuit complexity of various tasks performed by belief propagation on the broadcast tree model. We will be interested in four main problems: $(1)$ detection, where the goal is to guess the label of the root, given leaves generated at random, with probability $1/q + \epsilon$ for some $\epsilon > 0$ independent of the depth; $(2)$ inference, where the goal is to compete with the Bayes optimal predictor asymptotically in an average-case sense over samples from the model; $(3)$ computing the posterior, which is the analogous question for worst-case inputs on the labels of the leaves; and finally $(4)$ the complexity of the forward problem of generating samples from the model. These tasks can all naturally be solved in $\mathbf{NC}^1$, the class of logarithmic depth circuits with AND, OR and NOT gates. However, it will turn out that in some cases (conjecturally) weaker classes with constant depth suffice and in others logarithmic depth is inherently necessary.
It is well known that for the broadcast tree model on two labels \--- also called the Ising model on trees \--- beneath the Kesten-Stigum bound detection is information-theoretically impossible. What this means is that taking the majority vote of the labels of the leaves solves the detection problem whenever it is information-theoretically possible to do so. However it is also well-known that majority vote is suboptimal in how often it guesses the label of the root correctly. Intuitively, this is because there is more information about the label of the root contained not just in the number of labels of each type but also in the structure of where in the tree they are relative to each other. We prove that there are more complex circuits, but still ones in $\mathbf{TC}^0$, that can solve the inference problem:
\begin{theorem}[informal, see Theorem~\ref{thm:tc0main}]
There is a constant $C > 1$ so that if $k \theta^2 > C$ then the inference problem in the Ising model ($q=2$) on trees can be solved in $\mathbf{TC}^0$.
\end{theorem}
Our approach is based on~\cite{MoNeSl:14b} which shows that belief propagation (suitably above the Kesten-Stigum bound) is robust to label noise. This allows us to construct a $\mathbf{TC}^0$ circuit by using majority on the leaves of each subtree to get a noisy estimate of its root. We then bootstrap these estimates to get asymptotically optimal estimates of the label of the overall root. It is conjectured that belief propagation works with noisy labels all the way down to the Kesten-Stigum bound (i.e. $k \theta^2 > 1$) in which case we could improve the above theorem analogously.
As we discussed earlier, belief propagation works even in a worst-case sense and computes the true posterior. We show that the worst-case problem is much harder and is $\mathbf{NC}^1$-complete:
\begin{theorem}[informal, see Theorem~\ref{thm:nc1main}]
There are constants $\theta$ and $k$ for which computing the posterior in the Ising model on trees is $\mathbf{NC}^1$-complete.
\end{theorem}
However there is something unsatisfying about a circuit complexity lower bound that applies to the problem of computing the posterior distribution on the label of the root for a worst-case configuration of labels on the leaves. The broadcast tree model is a generative model, and the properties of belief propagation that are used to locate phase transitions are really average-case properties \--- or rather, properties about the posterior distribution on the label of the root, for a typical realization of the labels of the leaves. Now we come to what we believe to be our most significant result. We study the average-case circuit complexity of guessing the label of the root in a broadcast tree model whose parameters are beneath the Kesten-Stigum bound. We prove:
\begin{theorem}[informal, see Theorem~\ref{thm:nc1main2}]
There is a $16$ label broadcast tree model where it is possible to guess the label of the root with probability $\geq 0.999$ but where detection is $\mathbf{NC}^1$-complete.
\end{theorem}
For a general Markov process on a $k$-regular tree with a transmission matrix $M$, the Kesten-Stigum bound is $k (\lambda_2(M))^2 > 1$ where $\lambda_2(M)$ is the second largest eigenvalue of $M$. In our construction, the transmission matrix has a second eigenvalue equal to zero and thus no matter how large $k$ is, we are operating below the Kesten-Stigum bound. (Equivalently, no matter how large $k$ is, linear statistics are not enough to guess the label of the root with positive advantage over random guessing). More broadly, we conjecture that the detection problem is $\mathbf{NC}^1$-complete {\em anywhere} beneath the Kesten-Stigum bound, which is consistent with the fractal way that information is stored in such settings \cite{Mossel:01}, but we are only able to prove it for this particular $16$ label broadcast tree model.
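As a quick sanity check, this matches the two-parameter model used so far: the symmetric channel on $q$ labels has transmission matrix $M = \theta I + \frac{1-\theta}{q} J$, which fixes the all-ones vector and scales every vector orthogonal to it by $\theta$, so
\[
\lambda_2(M) = \theta \qquad \text{and} \qquad k(\lambda_2(M))^2 > 1 \iff k\theta^2 > 1 .
\]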
Barrington famously showed that the word problem over nonsolvable groups is $\mathbf{NC}^1$-complete \cite{barrington1989bounded}. This leads to a natural average-case $\mathbf{NC}^1$-complete problem via telescopically multiplying by random group elements. We construct a model where the labels of the children can be multiplied to get the labels of the parents. While we can solve detection by multiplying group elements in some way, what is less obvious is how to show that any circuit for detection can be used to solve the word problem. The key idea is that we can define an alternative but equivalent generation procedure that starts by labelling the root implicitly as the product of many group elements, and as we follow the process down the levels of the tree, the product simplifies and involves fewer elements until at the leaves it is a random function of a single group element. In this way, the generative process expresses the label of the root as a random function of the labels of the leaves, as opposed to the other way around. This is our most challenging result and perhaps the most surprising.
Finally, we study the circuit complexity of some of the remaining tasks associated with the broadcast tree model to complete the picture. First, it is natural to wonder if weaker circuit models can solve the detection problem. We show an unconditional lower bound against $\mathbf{AC}^0$:
\begin{theorem}[informal, see Theorem~\ref{thm:ac0main}]
For any $0 < \theta < 1$, there is no $\mathbf{AC}^0$ circuit for solving the detection problem in the Ising model on trees.
\end{theorem}
\noindent The proof is based on the observation that the generative process for the broadcast tree model can itself be thought of as a random restriction \--- a classic tool for proving circuit lower bounds \cite{furst1984parity}. The main difference is that we do not get to choose the parameters of the restriction ourselves; they are dictated by the model, and each restriction only sets a constant fraction of the inputs as we go up one level of the tree. Luckily, we can define an alternative generative process that is equivalent to the broadcast tree model but uses random restrictions as an intermediary step.
Despite the fact that $\mathbf{AC}^0$ circuits do not solve even the most basic type of inference problem in any interesting range of parameters, it turns out that, somewhat surprisingly, they can solve the forward problem of generation.
\begin{theorem}[informal, see Theorem~\ref{thm:genmain}]
For any $\theta = a/2^b$ where $a$ and $b$ are integers, given uniformly random bits as input, there is an $\mathbf{AC}^0$ circuit for sampling from the Ising model on trees.
\end{theorem}
Thus the broadcast tree model on two labels is an interesting example where there is a wide discrepancy between the depth needed for generation vs. inference. This is reminiscent of the work of Babai \cite{babai1987random} and Boppana and Lagarias \cite{boppana1987one} who show that, while $\mathbf{AC}^0$ cannot compute parity on the uniform distribution, there is a depth one circuit whose outputs depend on two bits each that samples from a distribution whose first $n$ bits are uniform and whose last bit is their parity.
\subsection{More Related Work}
We note that while our depth lower bound results apply to a natural inference problem, the results proving logarithmic depth lower bounds are conditional (on the assumption that $\mathbf{NC}^1 \neq \mathbf{TC}^0$).
This should be compared to the unconditional lower bounds for deep nets~\cite{Telgarsky:16} and to worst case~\cite{Hastad:87} and average case~\cite{HRRT:17} lower bounds in circuit complexity.
In fact, part of the motivation for our work comes from the work of the second author~\cite{Mossel:19deep} who suggested that the broadcast model is a particularly natural data generative model that has provable reconstruction algorithms and for which one can prove rigorously that depth is needed for inference. The reconstruction algorithms for the broadcast process are often referred to as phylogenetic reconstruction algorithms. Polynomial-time algorithms for reconstructing phylogenies were established in~\cite{ErStSzWa:99a,ErStSzWa:99b}, and phase transitions related to the Kesten-Stigum bound in the model were established in~\cite{Mossel:03,Mossel:04a} and follow-up work. The paper~\cite{Mossel:19deep} does not prove depth lower bounds in the sense of the current paper. Rather, it shows that for a range of values of $\theta$, in a semi-supervised broadcast setting, algorithms that can only access low-order moments of the labelled data are unable to classify better than random, while there exist algorithms that use high-order moments and are able to label accurately.
In a concurrent work~\cite{JKLM:19}, it was shown that message passing algorithms that use only a bounded number of bits of memory per node do not achieve the Kesten-Stigum bound even for the Ising model ($q=2$). This proves a conjecture from~\cite{EvKePeSc:00}. However, these results do not have implications for the circuit complexity of the problem.
There is also a close connection between the types of problems we study here and the coin problem in pseudorandomness \cite{brody2010coin}, which asks: Suppose we are given a coin which is promised to have bias either $1/2 + \delta$ or $1/2 -\delta$ along with $n$ independent tosses and our goal is to guess which way the coin is biased and to guess correctly with (say) probability at least $2/3$. What is the smallest $\delta$ for which a given computational model (e.g. $\mathbf{AC}^0$ \cite{shaltiel2010hardness, aaronson2009bqp}, width $w$ ROBPs \cite{brody2010coin}) can succeed? In fact we can think of this as a broadcast problem on a $n$-ary depth one tree with two labels where the label of the root represents whether the coin has positive or negative bias.
With an unrestricted computational model, the majority function is optimal. Thus the coin problem is interesting in models that cannot compute the majority function; there it leads to bounds on the Fourier coefficients of the functions that such models can compute and is a key ingredient in various PRGs. In the broadcast tree model, the labels of the leaves are no longer independent conditioned on the root but rather have a hierarchical structure to the strength of their dependencies. As it turns out, in light of our results, this problem can be much harder. We show that it is $\mathbf{NC}^1$-complete for a particular broadcast problem on $16$ labels. Optimistically, and in analogy with the coin problem, we could ask: Could proving unconditional lower bounds against $\mathbf{TC}^0$ for the broadcast tree problem lead to non-trivial PRGs?
\section{Preliminaries}
\subsection{The broadcast tree model}
In this paper we consider the classical tree broadcast model on
regular trees and binary labels. Throughout we will use the following notation. We write $T_k(d)$ for the $d$-level $k$-ary tree. We will identify such a tree $T_k(d)$ with a
subset of ${\mathbb N}^*$, the set of finite strings of natural numbers,
with the property that if
$v \in T$ then any prefix of $v$ is also in $T$. In this way, the root of the
tree is naturally identified with the empty string, which we will denote by $\rho$.
We will write $uv$ for the concatenation of the strings $u$ and $v$, and
$L_r(u)$ for the $r$th-level descendents of $u$; that is,
$L_r(u) = \{uv \in T: |v| = r\}$. Also, we will write
${\mathbb C}(u) \subset {\mathbb N}$ for the indices of $u$'s children relative to itself.
That is, $i \in {\mathbb C}(u)$ if and only if $ui \in L_1(u)$. We write $L_r$ for $L_r(\rho)$ and $\parent(v)$
for the parent of node $v$.
\begin{definition}[Broadcast process on a tree]
Given a parameter $\theta \in [-1, 1]$ and the $d$-level $k$-ary tree $T_k(d)$, the {\em broadcast process on $T$} is
a two-state Markov process $\{\sigma_u : u \in T\}$ defined as follows:
let $\sigma_\rho$ be $1$ or $0$ with probability $\frac{1}{2}$. Then, for each $u$ such that $\sigma_u$ is defined,
independently for every $v \in L_1(u)$ let $\sigma_v = \sigma_u$ with probability
$\theta + (1-\theta)/2$ and $\sigma_v = 1-\sigma_u$ otherwise.
\end{definition}
In other words, in the broadcast model, the root is randomly assigned a label in $\{0,1\}$, and then each other vertex is assigned its parent's label with probability $\theta$ and an independent uniformly chosen label with probability $1-\theta$.
Of course, this is equivalent to keeping the bit with probability $1/2 + \theta/2$ and flipping it to the opposite value with
probability $1/2 - \theta/2$.
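As an illustration, here is a direct Python sketch of this process on $T_k(d)$, encoding vertices as tuples of child indices with the empty tuple as the root (matching the string encoding above; all names are ours):
\begin{verbatim}
import random

def broadcast(k, d, theta, seed=0):
    # Sample the two-label broadcast process on the d-level k-ary tree.
    rng = random.Random(seed)
    sigma = {(): rng.randint(0, 1)}        # uniform root label
    frontier = [()]
    for _ in range(d):
        nxt = []
        for u in frontier:
            for i in range(k):
                v = u + (i,)
                if rng.random() < theta:   # copy the parent's label ...
                    sigma[v] = sigma[u]
                else:                      # ... or draw a fresh uniform one
                    sigma[v] = rng.randint(0, 1)
                nxt.append(v)
        frontier = nxt
    return sigma
\end{verbatim}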
This broadcast process has been extensively studied in probability, where the major question is
whether the labels of vertices far from the root of the tree give
any information on the label of the root~\cite{KestenStigum:66,BlRuZa:95}. See also~\cite{EvKePeSc:00,Mossel:04,MezardMontanari:06}.
A similar question was studied in various communities including bio-informatics~\cite{Felsenstein:04} and AI~\cite{Pearl88} from an algorithmic perspective, where the goal is to estimate (the posterior) of the root given the labels of vertices far from the root. It is well known that Belief Propagation is an exact linear time algorithm for computing the posterior.
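For concreteness, here is a minimal (naive, unoptimized) Python sketch of this computation, using the sampler above and the fact that conditioned on a vertex's label its subtrees are independent:
\begin{verbatim}
def posterior_root(k, d, theta, leaf_labels):
    # Exact posterior P[root = a | leaves] via upward belief propagation.
    def trans(c, a):                       # P[child = c | parent = a]
        return theta * (c == a) + (1 - theta) / 2

    def lik(u, depth):                     # P[leaves below u | label(u) = a]
        if depth == d:
            return [1.0 if leaf_labels[u] == a else 0.0 for a in (0, 1)]
        out = [1.0, 1.0]
        for i in range(k):
            child = lik(u + (i,), depth + 1)
            for a in (0, 1):
                out[a] *= sum(trans(c, a) * child[c] for c in (0, 1))
        return out

    l0, l1 = lik((), 0)                    # uniform prior on the root
    return l0 / (l0 + l1), l1 / (l0 + l1)

sigma = broadcast(k=3, d=6, theta=0.8)
leaves = {v: lab for v, lab in sigma.items() if len(v) == 6}
print(posterior_root(3, 6, 0.8, leaves))
\end{verbatim}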
We will mainly be focusing on the asymptotic behavior of the broadcast model as $d$ increases with all other parameters held constant, and we will commonly set $n=k^d$. We will be discussing the circuit complexity of multiple tasks associated with the broadcast model on the tree. To simplify notation we write $X^{(r)}$ for the vector of labels at level $r$:
$ X^{(r)} := (\sigma_v : |v| = r) $.
The most important task associated with the model is inference of the root given $X^{(d)}$. As mentioned earlier, Belief propagation is used for this task. The output of Belief propagation is a posterior distribution $\mathbb{P}[X^{(0)} = \cdot \,|\, X^{(d)} = x]$. For a fixed $d$ and $k$ the posterior is always bounded away from $0$ and $1$. Indeed if $k$ is even, the posterior can often assign equal probability to the two root values.
Rounding the posterior allows us to determine the more likely root value. The probabilistic nature of the inference problem leads to a number of complexity formulations. First, in the worst-case formulation, we are looking for circuits that estimate the root correctly whenever the posterior is far enough from $(1/2,1/2)$.
In terms of average case, there is a natural distribution over the inputs, i.e., the distribution given by the broadcast process. It is thus natural to formulate an average-case version of the problem where the inputs are drawn from the broadcast distribution and the objective is to estimate the root correctly with almost the same probability that BP does. Finally, in the average-case setup we may settle for less, i.e., inferring the root correctly with probability bounded away from $1/2$.
The formal definitions of the three problems follow.
\begin{definition}
We say that a series of functions $f_d:\{0,1\}^{L_d}\to\{0,1\}$ are {\em posterior} functions if
\[
\mathbb{P}[X^{(0)}=f_d(x)|X^{(d)}=x]\ge \mathbb{P}[X^{(0)}=\BP(x)|X^{(d)}=x]-\delta_d\]
for every $d$ and every $x\in\{0,1\}^{L_d}$, where
$\BP(x) := \argmax_{a \in \{0,1\}} \mathbb{P}[X^{(0)} = a | X^{(d)} = x]$ is the optimal Bayes posterior, i.e., the one obtained by applying Belief Propagation and rounding,
and $\delta_d \to 0$ as $d \to \infty$.
\end{definition}
\begin{definition}
We say that a series of functions $f_d:\{0,1\}^{L_d}\to\{0,1\}$ are {\em inference} functions if
\[
\mathbb{P}[f_d(X^{(d)})=X^{(0)}]\ge \mathbb{P}[\BP(X^{(d)})=X^{(0)}]-\delta_d,
\]
where $\delta_d \to 0$ as $d \to \infty$.
\end{definition}
Thus a series of functions are inference functions if they guess the root label correctly with (almost) the same overall probability as Belief Propagation does.
\begin{definition}
We say that a series of functions $f_d:\{0,1\}^{L_d}\to\{0,1\}$ are {\em detection} functions if there exists $\delta>0$ and $d_0$ such that for all $d\ge d_0$,
\[
\mathbb{P}[f_d(X^{(d)})=X^{(0)}]\ge 1/2+\delta .
\]
\end{definition}
In other words, a series of detection functions determines the root's label with accuracy $1/2+\Omega(1)$, a series of inference functions determines the root's label with an accuracy within $o(1)$ of the best possible, and a series of posterior functions
determines the root's label with an accuracy within $o(1)$ of the best possible conditioned on any possible value of $X^{(d)}$. Clearly posterior functions are also inference functions.
When the reconstruction problem is unsolvable, there are no detection functions. If it is solvable, then inference functions are also detection functions.
In addition to the inference problems, we are also interested in the generation problem; in other words, what is the computational complexity of generating $X^{(d)}$ given access to random bits? We address the generation question in Section~\ref{sec:generation}.
\subsection{Circuit Classes}
Here we give the formal definitions for the circuit classes that we will be interested in:
\begin{definition}
The circuit class $\mathbf{AC}^0$ is the class of constant depth circuits with a polynomial number of AND, OR and NOT gates, where the AND and OR gates have unbounded fan-in.
\end{definition}
It is well-known that there are explicit functions (such as the parity function) for which we can prove lower bounds against $\mathbf{AC}^0$ \cite{furst1984parity}.
\begin{definition}
The circuit class $\mathbf{NC}^1$ is the class of logarithmic depth circuits with a polynomial number of AND, OR and NOT gates, where the AND and OR gates have fan-in two.
\end{definition}
In the broadcast tree model, the depth of the tree is logarithmic in the number of leaves. It follows that the posterior distribution on the root can always be computed in $\mathbf{NC}^1$.
\begin{definition}
A linear threshold function $f: \{0, 1\}^m \rightarrow \{0, 1\}$ takes the form $f(x) = \mathbf{1}[w^T x \ge t]$ where $w \in \mathbb{R}^m$ and $t \in \mathbb{R}$. The circuit class $\mathbf{TC}^0$ is the class of constant depth circuits with a polynomial number of linear threshold function gates with unbounded fan-in.
\end{definition}
The class $\mathbf{TC}^0$ is contained in $\mathbf{NC}^1$ and can compute any symmetric function of its inputs. In many ways, $\mathbf{TC}^0$ represents the frontier in circuit complexity. Impagliazzo, Paturi and Saks \cite{impagliazzo1997size} showed that depth $d$ $\mathbf{TC}^0$ circuits with $m$ inputs need at least $m^{1+c^{-d}}$ wires to compute the parity function for some constant $c > 0$. Chen and Tell \cite{chen2019bootstrapping} showed that bootstrapping $\mathbf{TC}^0$ lower bounds just beyond this would yield super-polynomial lower bounds. Miles and Viola \cite{miles2015substitution} gave a candidate pseudorandom function computable in $\mathbf{TC}^0$ which helps explain the difficulty in proving lower bounds against $\mathbf{TC}^0$.
\section{Lower bounds against $\mathbf{AC}^0$ for detection}
We show that there is no $\mathbf{AC}^0$ circuit that solves the detection problem for any non-trivial choice of parameters. In order to prove this, we are going to define a series of random projections that preserve the probability distribution of $X^{(d)}$ but reduce any circuit in $\mathbf{AC}^0$ to a constant with high probability. For the most part, the proof that these projections reduce the circuit to a constant will be a fairly standard argument using the switching lemma \cite{furst1984parity, yao1985separating, hastad1986almost}. However, due to the nature of the $X^{(d')}$, each projection will only fix a constant fraction of the variables, which will force us to apply $\Theta(\log n)$ successive projections every time we wish to reduce the circuit depth by one. The key observation is that we can preserve the probability distribution of $X^{(d)}$ by setting each vertex's label to its parent's label with probability $\theta$ and a random value otherwise. We prove:
\begin{theorem}\label{thm:ac0main}
Let $f:\{0,1\}^{L_d}\rightarrow \{0,1\}$ be computed by an $\mathbf{AC}^0$ circuit. Then there exists $\delta>0$ such that
$\mathbb{P}[f(X^{(d)})=X^{(0)}]=1/2+O(n^{-\delta})$
\end{theorem}
\noindent We defer the proof to Appendix~\ref{app:ac0}. As usual, the key idea is to prove that $f$ can be approximated by a small DNF, although here the input to $f$ comes from the broadcast tree model.
\section{$\mathbf{NC}^1$-completeness of posterior functions}
In this section we will prove that
\begin{theorem}\label{thm:nc1main}
For all $\theta$ and $k$, there are posterior functions for the Ising model on trees in $\mathbf{NC}^1$.
Moreover, there are $\theta$ and $k$ for which computing posterior functions for the Ising model is an $\mathbf{NC}^1$-hard problem.
\end{theorem}
We begin by proving the first part of the theorem \--- i.e. that the posterior can be computed in $\mathbf{NC}^1$. The obvious approach to establish this would be to try to compute the probability distribution of each node's label based on the probability distributions of its children's labels. However, this could fail due to rounding errors. Instead, we will show that for each node, there exists a random $\mathbf{NC}^1$ function that sometimes outputs a label for that node, such that the probability distribution of the label it outputs, given that it outputs one, is the same as the probability distribution of the vertex's label. A little more precisely, for each $d'$ we will show that there exists a random function $F:\{0,1\}^{L_{d'}}\to \{0,1,?\}$ that can be computed by an $\mathbf{NC}$ circuit of depth $O(d')$ such that for every $x\in\{0,1\}^{L_{d'}}$, the probability that $F(x)= '?'$ is reasonably small and
\[
\mathbb{P}[F(x)=1|F(x)\ne'?'] = \mathbb{P}[X^{(0)}=1|X^{(d')}=x]
\]
In order to prove this, we will induct on $d'$. If we assume that we have such a probability distribution for $d'-1$, then we can use it to guess the value of $X^{(1)}$ based on the value of $X^{(d')}$. Then, we can also guess which children of the root have the same label as it, and if any choice of $X^{(0)}$ is consistent with all of these guesses, we can conclude that $X^{(0)}$ has that value. This would give a suitable probability distribution for $d'$, except that it has an excessively high probability of returning $'?'$. Fortunately, we can deal with that by trying it multiple times and returning the first value in $\{0,1\}$ that we get. We defer the proof to Appendix~\ref{app:nc1part1}.
For the second part of the theorem we interpret any node that is very likely to have a label of $1$ to be a variable that is actually set to $1$, and similarly for the label $0$. Then, we will construct gadgets for AND and OR, at which point it will be easy to translate an arbitrary $\mathbf{NC}^1$ circuit to an $\mathbf{NC}^0$ formula for $X^{(d)}$ in terms of the circuit's inputs. We defer the proof to Appendix~\ref{app:nc1part2}.
\section{A $\mathbf{TC}^0$ circuit for inference}
The previous result implies that if $\mathbf{TC}^0\ne \mathbf{NC}^1$ then no $\mathbf{TC}^0$ circuit can compute a posterior function in the Ising tree model. However we can still hope that $\mathbf{TC}^0$ circuits attempting to determine $X^{(0)}$ can still perform well in the average case and can compute an inference function.
A natural approach is to guess that the root has the same label as the majority of the leaves, which gives the right answer with probability $1/2+\Omega(1)$ if $\theta>1/\sqrt{k}$. However, this is not an inference function. In particular, it achieves worse error even in an average-case sense. Alternatively, we could compute an inference function using belief propagation, but the naive way to encode this as a circuit would lead to logarithmic depth. The key idea is that the function computed by belief propagation is robust to injecting noise at the leaves. We use this idea by first guessing that each node at depth $\lfloor\log_k(\log_2(n))\rfloor$ has the same label as the majority of the leaves descended from it. Then we guess the value of $X^{(0)}$ by computing the output of belief propagation (on the smaller depth tree) using a lookup table. We are able to prove that this circuit is indeed an inference function when $k\theta^2$ is sufficiently large and we conjecture that it is for any $k \theta^2 > 1$.
More precisely we will build a $\mathbf{TC}^0$ circuit that encodes the following algorithm.
\begin{algorithm}
{\sc LinearizedBP}(d, k, $\theta$, $X^{(d)}$, $f$)
\begin{enumerate}
\item Let $d'=\lfloor\log_k(\log_2(n))\rfloor$.
\item For each $i\in L_{d'}$, randomly select $x^\star_i\in\{0,1\}$ and set
\[x_i=
\begin{cases}
1 &\text{ if } \sum_{j\in L_{d-d'}(i)} X^{(d)}_j> k^{d-d'}/2\\
0 &\text{ if } \sum_{j\in L_{d-d'}(i)} X^{(d)}_j< k^{d-d'}/2\\
x^\star_i &\text{ if } \sum_{j\in L_{d-d'}(i)} X^{(d)}_j= k^{d-d'}/2\\
\end{cases}
\]
\item Output $f(x)$
\end{enumerate}
\end{algorithm}
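A Python sketch of this procedure follows (the helper \texttt{vertices\_at\_depth} and the argument \texttt{f}, which stands in for the lookup table, are ours; in the analysis, $f$ would be the Bayes-optimal rule on the depth-$d'$ tree):
\begin{verbatim}
import math, random
from itertools import product

def vertices_at_depth(k, r):
    # all tuples of child indices of length r (the encoding used earlier)
    return list(product(range(k), repeat=r))

def linearized_bp(k, d, theta, leaf_labels, f, seed=0):
    rng = random.Random(seed)
    n = k ** d
    dp = int(math.floor(math.log(math.log2(n), k)))  # d'
    x = {}
    for u in vertices_at_depth(k, dp):
        # majority vote over the leaves below u, random tie-break
        s = sum(leaf_labels[u + w] for w in vertices_at_depth(k, d - dp))
        half = k ** (d - dp) / 2
        x[u] = 1 if s > half else (0 if s < half else rng.randint(0, 1))
    return f(x)
\end{verbatim}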
First of all, note that each value of $n$ has a unique corresponding value of $d'$, and each of the $x_i$ can be computed from the inputs and a random bit by a threshold gate. $k^{d'}\le\log_2(n)$, so there are at most $n$ possible values of $x$. That means that we can use an AND gate to check for each possible value of $x$ and then OR together the ones for which $f(x)=1$. That means that for any fixed series of functions $f_d:\{0,1\}^{k^{\lfloor\log_k(\log_2(n))\rfloor}}\to\{0,1\}$, there is a $\mathbf{TC}^0$ circuit that computes {\sc LinearizedBP}(d, k, $\theta$, $X^{(d)}$, $f$) given access to $\log_2(n)$ random bits. Furthermore, we conjecture the following.
\begin{conjecture}
There exists a series of functions $f_d:\{0,1\}^{k^{\lfloor\log_k(\log_2(n))\rfloor}}\to\{0,1\}$ such that if $X'={\sc LinearizedBP}(d, k, \theta, X^{(d)}, f_d)$ then
\[\lim_{n\to\infty} \mathbb{P}[X'=X^{(0)}]- \mathbb{P}[\BP(X^{(d)})=X^{(0)}]=0,
\]
where $\BP(x) : \{0,1\}^{L_d} \to \{0,1\}$ returns the more likely posterior label of the root
\[
\BP(x) = a \; \mbox{ if } \; \mathbb{P}[X^{(0)} = a | X^{(d)} = x] > \mathbb{P}[X^{(0)} = 1- a | X^{(d)} = x] .
\]
\end{conjecture}
In other words, we believe that {\sc LinearizedBP} can compute $X^{(0)}$ with optimal accuracy. If $k\theta^2\le 1$, then it is known that no algorithm can compute $X^{(0)}$ from $X^{(d)}$ with nontrivial accuracy, so this algorithm uninterestingly attains optimal accuracy. In this section, we will prove that there exists $C>1$ such that {\sc LinearizedBP} can attain optimal accuracy whenever $k\theta^2>C$. The case where $1<k\theta^2\le C$ remains open. The first step towards proving that it can attain optimal accuracy for large values of $k\theta^2$ is to prove that when the algorithm is run, $x$ is a reasonably accurate approximation of $X^{(d')}$. For that, we need the following standard second moment lemma which we include for completeness in Appendix~\ref{app:deviation} (similar lemmas were proven in previous work including~\cite{EvKePeSc:00}).
\begin{lemma}\label{lem:deviation}
For any $d$, $k$, and $\theta$ such that $k\theta^2>2$,
\[\mathbb{P}\left[\sum_{i=1}^{k^d} X^{(d)}_i\le k^d/2\middle | X^{(0)}=1\right]\le\frac{1}{\theta^2k-1}\]
\end{lemma}
By symmetry, this also implies that $\mathbb{P}\left[\sum_{i=1}^{k^d} X^{(d)}_i\ge k^d/2\middle | X^{(0)}=0\right]\le\frac{1}{\theta^2k-1}$. So, that gives us a bound on $\mathbb{P}[x_i\ne X^{(d')}_i]$ when the algorithm is run. That leaves the task of showing that we can determine $X^{(0)}$ with optimal accuracy from a noisy version of $X^{(d')}$. In order to discuss the accuracy with which one can do that, we will need to define the following.
\begin{definition}
Let $0\le s\le 1/2$ and let $d$ be a positive integer. Also, let $X'\in\{0,1\}^{L_d}$ be such that for each $i$, $X'_i$ is independently set equal to $1-X^{(d)}_i$ with probability $s$ and to $X^{(d)}_i$ otherwise. Then define
\[P_{s,d}=\sum_{x\in \{0,1\}^{L_d}} \max(\mathbb{P}[X^{(0)}=0,X'=x],\mathbb{P}[X^{(0)}=1,X'=x]).\]
\end{definition}
In other words, $P_{s,d}$ is the maximum accuracy with which we can determine $X^{(0)}$ from a noisy version of $X^{(d)}$ in which each bit is flipped with probability $s$. Mossel et al. \cite{MoNeSl:16b} show the following:
\begin{proposition}\label{noisybp}\cite{MoNeSl:16b}
There exists $C>0$ such that if $k\theta^2>C$ then
\[\liminf_{s\to 1/2}\, \liminf_{d\to\infty} P_{s,d}=\liminf_{d\to\infty} P_{0,d}\]
\end{proposition}
In other words, if $k\theta^2$ is sufficiently large then the maximum accuracy with which $X^{(0)}$ can be determined from a highly noisy estimate of $X^{(d')}$ is the same as the maximum accuracy with which $X^{(0)}$ can be determined from $X^{(d')}$. That allows us to prove that {\sc LinearizedBP} is optimal for large values of $k\theta^2$. More formally, we have the following:
\begin{theorem}\label{thm:tc0main}
There exists $C'>0$ such that if $k\theta^2>C'$ then there exists a function $f$ for which {\sc LinearizedBP} run on $f$ is an inference function for the Ising model on trees.
\end{theorem}
\begin{proof}
First, let $C'=\max(C,4)$. We observe that for any $d$, when {\sc LinearizedBP} is run, each bit $x_i$ is independently set equal to $X^{(d')}_i$ with some advantage over random guessing and to the opposite value otherwise. Let $s_d=\mathbb{P}[x_i\ne X^{(d')}_i]$. Next, let $f_d$ be the function that maximizes the probability that {\sc LinearizedBP} outputs the correct label for the root, and let $q$ be the probability that it succeeds. Then we have
$$ q =\sum_{x'\in \{0,1\}^{L_{d'}}} \max(\mathbb{P}[X^{(0)}=0,x=x'],\mathbb{P}[X^{(0)}=1,x=x'])=P_{s_d,d'}$$
Now, let $s'=\frac{1}{\theta^2k-1}$. Lemma~\ref{lem:deviation} shows that $s_d\le s'$ for all $d$, and adding more noise can never make it easier to determine $X^{(0)}$, so for every $d$, it must be the case that
\[P_{0,d'}\ge P_{s_d,d'}\ge P_{s',d'}\]
Combining that with Proposition~\ref{noisybp} shows that
\[\liminf_{d'\to\infty} P_{0,d'}\ge \liminf_{d'\to\infty} P_{s',d'}\ge \liminf_{s\to 1/2}\, \liminf_{d'\to\infty} P_{s,d'}=\liminf_{d'\to\infty} P_{0,d'}\]
Also, $P_{0,d'}$ is a nonincreasing function of $d'$, so $P_{0,d'}$ converges. So,
\[\limsup_{s\to 1/2}\, \limsup_{d'\to\infty} P_{s,d'}\le \limsup_{d'\to\infty} P_{s',d'}\le \limsup_{d'\to\infty} P_{0,d'}=\lim_{d'\to\infty} P_{0,d'}\]
That implies that all of these sequences converge to $\lim_{d'\to\infty} P_{0,d'}$, and thus that {\sc LinearizedBP} computes $X^{(0)}$ with optimal accuracy.
\end{proof}
\section{$\mathbf{NC}^1$ hardness of detection with many labels}
So far, we have been assuming that there are only two labels that could be assigned to a vertex. However, we could instead have $m$ labels for arbitrary $m$. That leads to the following definition
\begin{definition}[Generalized broadcast process on a tree]
Given parameters $m>0$ and an $m\times m$ matrix $M$ with nonnegative entries and columns that add up to $1$, the {\em generalized broadcast process on $T$} is
an $m$-state Markov process $\{\sigma^\star_u : u \in T\}$ defined as follows:
let $\sigma^\star_\rho$ be drawn uniformly at random from $\{1,\cdots,m\}$. Then, for each $u$ such that $\sigma^\star_u$ is defined,
independently for every $v \in L_1(u)$ let $\sigma^\star_v = i$ with probability $M_{i,\sigma^\star_u}$ for each $i$.
\end{definition}
In other words, in the generalized broadcast model, the root is randomly assigned a label in $\{1,\cdots,m\}$, and then each other vertex is assigned a label with a probability distribution corresponding to the column of $M$ indexed by its parent's label.
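In code, one step of this process just samples each child's label from the column of $M$ indexed by its parent's label; a minimal sketch (labels are $0,\ldots,m-1$ here for indexing, and the names are ours):
\begin{verbatim}
import random

def broadcast_general(M, children, seed=0):
    # M[i][j] = probability a child gets label i given parent label j;
    # children maps each vertex to the list of its children.
    rng = random.Random(seed)
    m = len(M)
    sigma = {(): rng.randrange(m)}     # uniform root label
    stack = [()]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            col = [M[i][sigma[u]] for i in range(m)]
            sigma[v] = rng.choices(range(m), weights=col)[0]
            stack.append(v)
    return sigma
\end{verbatim}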
Note that the previous case is simply the instance of this where $m=2$ and $M=\theta I+\frac{1-\theta}{2} J$, where $J$ is the matrix with all entries equal to $1$. There is an important difference between the case when there are just two labels and when there are more. It turns out there are many natural cases where it is possible to detect the label of the root, but not by taking the majority vote of the labels of the leaves. The function computed by Belief Propagation is generally more complicated and the main result of this section is to show that this manifests as a phase transition in the circuit complexity of solving detection. When there are many labels, we will show that the problem becomes $\mathbf{NC}^1$ hard.
First, we need a problem that is $\mathbf{NC}^1$-hard in the average case. In a celebrated result, Barrington showed that the word problem over a finite nonsolvable group (i.e., deciding whether a given word is the identity or not) is $\mathbf{NC}^1$-complete \cite{barrington1989bounded}. We will work with the alternating group $A_5$:
\begin{proposition}\label{prop:barr} \cite{barrington1989bounded}
For every $c\in A_5$ such that $c\ne 1$, determining whether a product of elements of $A_5$, $\prod_{i=1}^n \sigma_i$, is $c$ or the identity, given that it is one of them, is $\mathbf{NC}^1$-complete.
\end{proposition}
Conveniently, this problem has a simple worst-case to average-case reduction:
\begin{theorem}
Let $f_r: A_5^r\rightarrow A_5$ be a family of functions. Suppose there exists $\epsilon>0$ independent of $r$ such that when $\Sigma_1,\cdots,\Sigma_r$ are independently drawn from $A_5$ according to the uniform distribution, $$\mathbb{P}[f_r(\Sigma_1,\cdots,\Sigma_r)=\prod_{i=1}^r \Sigma_i]\ge 1/60+\epsilon.$$ Then, if $\mathbf{TC}^0\neq \mathbf{NC}^1$, there is no $\mathbf{TC}^0$ circuit that computes $f$.
\end{theorem}
\begin{proof}
For the sake of contradiction, we will assume that there is a $\mathbf{TC}^0$ circuit that computes $f$. Let $h_n:\{0,1\}^n\rightarrow \{0,1\}$ be an $\mathbf{NC}^1$-complete family of functions. Consider the following randomized algorithm attempting to compute $h_n(x)$. First, generate a random $c\in A_5\backslash \{1\}$.
Next, the completeness of $h_n$ implies that there exist $r$ polynomial in $n$ and $\sigma\in A_5^r$ such that $\prod_{i=1}^r \sigma_i=c$ if $h_n(x)=1$ and $\prod_{i=1}^r \sigma_i=1$ if $h_n(x)=0$ (note that $\sigma$ depends on $c$ and $x$ and the computation of $\sigma$ is in $\mathbf{NC}^0$).
Now
randomly select $b_i\in A_5$ for each $1\le i\le r$. Next compute
$$f(\sigma_1 b_1, b_1^{-1} \sigma_2 b_2, b_2^{-1} \sigma_3 b_3,\cdots, b_{r-1}^{-1} \sigma_r b_r).$$
If it is equal to $b_r$, conclude that $h_n(x)=0$; if it is $c b_r$ then conclude that $h_n(x)=1$; and output nothing otherwise. No matter what the value of $\sigma$ is, the probability distribution of $(\sigma_1 b_1, b_1^{-1} \sigma_2 b_2,\cdots, b_{r-1}^{-1} \sigma_r b_r)$ is the uniform distribution on $A_5^r$. Hence we have that $$\mathbb{P}[f(\sigma_1 b_1,\cdots,b_{r-1}^{-1} \sigma_r b_r)=\sigma_1\sigma_2\cdots\sigma_r b_r]\ge 1/60+\epsilon$$ Thus, this algorithm computes $h_n(x)$ correctly with a probability of at least $1/60+\epsilon$. Furthermore, $c$ is independent of $(\sigma_1 b_1,\cdots, b_{r-1}^{-1} \sigma_r b_r)$, and thus of what $f$ will return if it computes the product incorrectly. So, this algorithm computes $h_n(x)$ incorrectly with a probability of at most $1/60$. Thus if we repeat this process a large polynomial number of times and take the majority vote, we can compute $h_n(x)$ correctly with probability at least $1-o(2^{-n})$. Thus there must be some choices of our random variables for which this computes $h_n(x)$ correctly for every $x$. This whole procedure can be carried out by a $\mathbf{TC}^0$ circuit, so $\mathbf{TC}^0$=$\mathbf{NC}^1$.
\end{proof}
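To illustrate the randomization step concretely, here is a Python sketch over $A_5$ (permutations of $\{0,\ldots,4\}$ as tuples; all names are ours). The key invariant is that the output is uniform on $A_5^r$ no matter what the input word was, while its product still reveals $\prod_i \sigma_i$ once $b_r$ is known:
\begin{verbatim}
import random
from functools import reduce

def compose(p, q):                 # group multiplication: (p * q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_even(p):                    # parity via inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return inv % 2 == 0

def random_a5(rng):                # rejection-sample a uniform even permutation
    while True:
        p = list(range(5))
        rng.shuffle(p)
        if is_even(tuple(p)):
            return tuple(p)

def randomize(sigmas, rng):
    # returns (s1 b1, b1^-1 s2 b2, ..., b_{r-1}^-1 s_r b_r) and b_r
    bs = [random_a5(rng) for _ in sigmas]
    out, prev = [], None
    for s, b in zip(sigmas, bs):
        left = s if prev is None else compose(inverse(prev), s)
        out.append(compose(left, b))
        prev = b
    return out, bs[-1]

rng = random.Random(0)
sig = [random_a5(rng) for _ in range(8)]
masked, b_last = randomize(sig, rng)
prod = lambda xs: reduce(compose, xs)
assert prod(masked) == compose(prod(sig), b_last)   # telescoping product
\end{verbatim}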
Now that we have a problem that is $\mathbf{NC}^1$-hard in the average case, we need a way to reduce this to the problem of determining the label of the root for some choice of parameters. In order to do that, we consider the following instance of the generalized broadcast process on a tree. There is one label for every ordered pair $(\sigma, \sigma')\in A_5^2$, and $k=60000$. Given a vertex with a parent with label $(\sigma,\sigma')$, we select a random $b\in A_5$. Then, we set its label to $(b,b^{-1}\sigma)$ with probability $2/3$ and $(b,b^{-1}\sigma')$ with probability $1/3$. In other words, each child of a vertex is assigned a random ordered pair that multiplies to $\sigma$ with probability $2/3$ and a random ordered pair that multiplies to $\sigma'$ with probability $1/3$. For the rest of this section, we will assume that $\sigma^\star$ was generated by the generalized broadcast process with these parameters.
Note that it is straightforward to implement this process with an $\mathbf{NC}^1$ circuit because the tree has logarithmic depth.
Moreover, we argue that detection is information-theoretically possible. The key idea is that for any $d'$, if we can determine the labels of the vertices at depth $d'$ so that each label is correct (independently) with probability $0.99$, then for any vertex at depth $d'-1$ we can determine its label with probability at least $0.99$. We do this by taking the two most common products of the elements among its children's suspected labels, and by a Chernoff bound it is easy to see that this procedure succeeds with probability at least $0.99$. Furthermore, because the subtrees of the vertices at depth $d'-1$ are disjoint, these guesses are correct independently. Now we can continue this process until we reach the root. This type of recursive reconstruction argument is by now standard; see, e.g.,~\cite{Mossel:01,MosselPeres:03}.
Next we will give an alternative procedure for sampling from the generalized broadcast tree model. This will allow us to embed the word problem for $A_5$ equivalently as the problem of guessing the label of the root.
\begin{algorithm}
productTreeConstructionAlgorithm(d):
\begin{enumerate}
\item Set $\overline{X}^{(0)}=(\sigma_1\cdot \sigma_2\cdot\cdots\cdot \sigma_{2^d},$ $\sigma_{2^d+1}\cdot \sigma_{2^d+2}\cdot\cdots\cdot \sigma_{2^{d+1}})$.
\item For $d' = 1$ to $d$
\begin{enumerate}
\item For each $i \in L_{d'}$:
\begin{enumerate}
\item There will exist a constant $1\le j\le 2^{d+1}$ and constants $b,b',b''\in A_5$ such that $$\overline{X}^{(d'-1)}_{\parent(i)}=(b'\cdot \sigma_j\cdot\cdots\cdot \sigma_{j+2^{d-d'+1}-1}\cdot b,\mbox{ }b^{-1}\cdot \sigma_{j+2^{d-d'+1}}\cdot\cdots\cdot \sigma_{j+2^{d-d'+2}-1}\cdot b'')$$
\item Randomly select $b'''\in A_5$.
\item With probability $2/3$, set $$\overline{X}^{(d')}_i=(b'\cdot \sigma_j\cdot\cdots\cdot \sigma_{j+2^{d-d'}-1}\cdot b''',\mbox{ }(b''')^{-1}\cdot \sigma_{j+2^{d-d'}}\cdot\cdots\cdot \sigma_{j+2^{d-d'+1}-1}\cdot b)$$ Otherwise, set $$\overline{X}^{(d')}_i=(b^{-1}\cdot \sigma_{j+2^{d-d'+1}}\cdot\cdots\cdot \sigma_{j+3\cdot 2^{d-d'}-1}\cdot b''',\mbox{ }(b''')^{-1}\cdot \sigma_{j+3\cdot 2^{d-d'}}\cdot\cdots\cdot \sigma_{j+2^{d-d'+2}-1}\cdot b'')$$
\end{enumerate}
\end{enumerate}
\item Return $\overline{X}^{(d)}$.
\end{enumerate}
\end{algorithm}
In step 2.a.i we asserted that every element of $\overline{X}^{(d')}$ will have the form
$$(b'\cdot \sigma_j\cdot\cdots\cdot \sigma_{j+2^{d-d'}-1}\cdot b,\mbox{ }b^{-1}\cdot \sigma_{j+2^{d-d'}}\cdot\cdots\cdot \sigma_{j+2^{d-d'+1}-1}\cdot b'')$$
It is easy to see that this is true for $\overline{X}^{(0)}$ and throughout the process, $\overline{X}^{(d'-1)}$ will always be set to an expression of this form. The key fact is:
\begin{lemma}\label{lem:gen}
Let $\sigma\in A_5^{2^{d+1}}$ and $x_0=\left(\prod_{i=1}^{2^d} \sigma_i,\prod_{i=2^d+1}^{2^{d+1}} \sigma_i\right)$.
Then for every $x\in (A_5^2)^n$,
\[\mathbb{P}\left[X^{(d)}=x\middle|X^{(0)}=x_0\right]=\mathbb{P}\left[\overline{X}^{(d)}(\sigma)=x\right]\]
\end{lemma}
Thus $productTreeConstructionAlgorithm(d)$ is an equivalent way to sample from the generalized broadcast tree model that we defined earlier.
\begin{proof}
We will prove by induction on $d'$ that the distribution of $X^{(d')}$ given $X^{(0)}=x_0$ is identical to the distribution of $\overline{X}^{(d')}(\sigma)$ for every $d'$. If $d'=0$, then $\overline{X}^{(d')}=x_0$, so the base case holds. Now, assume that it holds for $d'-1$.
It is easy to check that, given the way we have defined step 2.a.iii, every
vertex at depth $d'$ is assigned a label whose product is equal to the first permutation in its parent's label with probability $2/3$ and the second permutation in its parent's label with probability $1/3$. Moreover the pair of permutations is chosen uniformly at random subject to this constraint. Finally each element of $\overline{X}^{(d')}$ is independent conditioned on the value of its parent. These are exactly the key properties that define our generalized broadcast tree model, and hence this completes the proof.
\end{proof}
Now we are ready to prove that any algorithm for solving the detection problem for our generalized broadcast tree model can be used to solve the word problem over $A_5$ with some advantage over random guessing:
\begin{theorem}
Let $g_d: (A_5^2)^{k^d}\rightarrow A_5^2$ be a family of functions. Suppose there exists $\epsilon > 0$ independent of $d$ such that $$\mathbb{P}[g_d(X^{(d)})=X^{(0)}]\ge \frac{1}{|A_5|^2} +\epsilon.$$ If $\mathbf{TC}^0\neq\mathbf{NC}^1$ then $g$ is not in $\mathbf{TC}^0$.
\end{theorem}
\begin{proof}
For the sake of contradiction we will assume that $g\in \mathbf{TC}^0$. Let $\Sigma_1,\cdots,\Sigma_{2^{d+1}}$ be chosen randomly. We can interpret $productTreeConstructionAlgorithm(d)$ as outputting a random formula that labels the leaves of the generalized broadcast tree model. The key point is that the depth of the tree and the number of bits of randomness that determine the value at any leaf are both logarithmic. Thus $X^{(d)}$ can be computed by a $\mathbf{TC}^0$ circuit. Now let $g'_d$ be the composition of $g_d$ and $productTreeConstructionAlgorithm(d)$.
Because $g_d$ solves the detection problem we have that
\[\mathbb{P}\left[g'_d(\Sigma_1,\cdots,\Sigma_{2^{d+1}})=\left(\prod_{i=1}^{2^d} \Sigma_i,\prod_{i=2^d+1}^{2^{d+1}} \Sigma_i\right)\right]\ge \frac{1}{|A_5|^2}+\epsilon\]
where the randomness is over both the choice of the $\Sigma_i$'s and the randomness of $g'$, which comes from the generation process. For the sake of simplifying the notation, let $g'_d(\sigma)=(g^{[1]}_d(\sigma),g^{[2]}_d(\sigma))$. Now there are two cases:
In the first case suppose that $g^{[1]}_d$ gets nontrivial advantage over random guessing. In particular suppose
$$\mathbb{P}\left[g^{[1]}_d(\Sigma_1,\cdots,\Sigma_{2^{d+1}})=\prod_{i=1}^{2^d} \Sigma_i\right]\ge\sqrt{\frac{1}{|A_5|^2}+\epsilon}$$
There must exist a specific choice $\Sigma_{2^d + 1} = \sigma_{2^d +1}, \cdots, \Sigma_{2^{d+1}} = \sigma_{2^{d+1}}$ and setting of the randomness in the generation process which achieves nontrivial advantage over random guessing. Even when we fix these values, the function is still in $\mathbf{TC}^0$ and hence we conclude $\mathbf{TC}^0=\mathbf{NC}^1$.
In the second case, we must have
$$\mathbb{P}\left[g^{[2]}_d(\Sigma_1,\cdots,\Sigma_{2^{d+1}})=\prod_{i=2^d+1}^{2^{d+1}} \Sigma_i\middle|g^{[1]}_d(\Sigma_1,\cdots,\Sigma_{2^{d+1}})=\prod_{i=1}^{2^d} \Sigma_i\right]\ge\sqrt{\frac{1}{|A_5|^2}+\epsilon}$$
The idea is that we want to use $g^{[2]}_d$ to solve an $\mathbf{NC}^1$-hard problem, but to do so using the above inequality we need to decide if the output of $g^{[1]}_d$ is correct. Now we can once again use an average-case reduction to reduce to the case where we know the product of the inputs to $g^{[1]}_d$ and can thus check its output.
In particular for any $\sigma_1,\cdots,\sigma_{2^{d+1}}\in A_5$ and randomly generated $B_1,\cdots,B_{2^{d+1}}$, let
$$\Sigma'=(\sigma_1 B_1,B_1^{-1}\sigma_2 B_2, B_2^{-1} \sigma_3 B_3,\cdots,B_{2^{d+1}-1}^{-1}\sigma_{2^{d+1}}B_{2^{d+1}})$$
The distribution of $\Sigma'$ is uniform on $A_5^{2^{d+1}}$ so we have
$$\mathbb{P}\left[g^{[2]}_d(\Sigma')=B_{2^d}^{-1}\left(\prod_{i=2^d+1}^{2^{d+1}} \sigma_i\right)B_{2^{d+1}}\middle|g^{[1]}_d(\Sigma')=\left(\prod_{i=1}^{2^d} \sigma_i\right)B_{2^d}\right]\ge\sqrt{\frac{1}{|A_5|^2}+\epsilon}$$
Now we can choose $\sigma_1,\cdots,\sigma_{2^d}$ such that we already know their product and we can repeatedly generate $B_1,\cdots,B_{2^{d+1}}$ until we find one for which $$g^{[1]}_d(\Sigma')=\left(\prod_{i=1}^{2^d} \sigma_i\right)B_{2^d}$$
Now if we guess that $\prod_{i=2^d+1}^{2^{d+1}}\sigma_i$ is equal to $B_{2^d} g^{[2]}_d(\Sigma') B_{2^{d+1}}^{-1}$ we will get nontrivial advantage over random guessing. As before there must be some choice of the randomness (in this case the values of $B_1, \cdots, B_{2^{d+1}}$ and the randomness in the generation process) where the probability of computing the product is at least average. This again implies that $\mathbf{TC}^0=\mathbf{NC}^1$.
\end{proof}
So, this is a set of parameters for which one can determine the root's label from the leaves' labels with very high accuracy in the average case. However, unless $\mathbf{TC}^0$=$\mathbf{NC}^1$, there is no $\mathbf{TC}^0$ algorithm that can determine the root's label with an accuracy that is nontrivially higher than that attained by guessing blindly. With some more work, we could prove that this also holds for sufficiently slight perturbations of these parameters. In Appendix~\ref{app:labelred} we show how to reduce the number of labels to $16$ by using symmetry arguments and working with conjugacy classes of permutations instead.
\section{Difficulty of generation} \label{sec:generation}
In this paper, we are mostly concerned with depth lower (and upper) bounds for estimating $X^{(0)}$ given $X^{(d)}$. However, we also study the generation problem, i.e., the complexity of generating $X^{(d)}$ given a sequence of random bits as an input. More formally:
\begin{definition}
We say that a series of functions $f_d:\{0,1\}^{m(d)} \to \{0,1\}^{L_d}$ are {\em generation} functions if, under the uniform distribution over the inputs, $f_d(x)$ has the distribution of $X^{(d)}$ for all $d$.
We call such functions $(\delta_d)_{d=1}^{\infty}$-approximate-generation functions if the total variation distance between the distribution of $f_d(x)$ and that of $X^{(d)}$ is bounded by $\delta_d$ for all $d$.
\end{definition}
Despite the fact that the tree has logarithmic depth, it turns out that generation can be accomplished in $\mathbf{AC}^0$ easily.
\begin{theorem}\label{thm:genmain}
If $\theta$ is a dyadic number, i.e., $\theta = a/2^b$ for some integers $a$ and $b$, then there exist generation functions in $\mathbf{AC}^0$.
Moreover, for all $\theta$ and any constant $c > 0$, there exist $2^{-n^c}$-approximate-generation functions in $\mathbf{AC}^0$.
\end{theorem}
\begin{proof}
Assume first that $\theta$ and therefore $(1 \pm \theta)/2$ are dyadic.
This means that there exists a function
$g : \{0,1\}^s \to \{\pm 1\}$ of a bounded number $s$ of bits such that $\mathbb{P}[g = 1] = (\theta+1)/2$.
We apply a copy of $g$ independently for each vertex of the tree, thus obtaining a collection of independent random variables $(Y_v)$ taking values in $\{\pm 1\}$.
So, if we view the labels as elements of $\{\pm 1\}$, redefine $Y_\rho$ to be a uniformly random sign, and define $X^{\prime}_v = \prod_{w \in \path(\rho,v)} Y_w$,
then the probability distribution of $X^\prime$ is identical to the probability distribution of $X$: a vertex keeps its parent's label exactly when its own $Y$ equals $+1$, which happens with probability $(\theta+1)/2$. Furthermore, for each $v$, there are at most $d+1$ elements of $Y$ that affect the value of $X^{\prime}_v$. That means that there are only $2^{d+1}\le 2n$ possible combinations of values of the $Y_w$ that determine $X^{\prime}_v$. As such, we only need $O(n)$ gates to have an AND for every possible such combination, at which point we can OR together all of the ones that result in $X^{\prime}_v = 1$. Doing this for every $v$ merely multiplies the number of gates by $n$, and this clearly has constant depth. This proves the first part of the theorem.
The second part of the theorem is similar, except we now approximate coin tosses of bias $(\theta+1)/2$. It is easy to see that an approximation to error $2^{-n^c}$ is achievable in $\mathbf{AC}^0$ in constant depth and size polynomial in $n$.
This is done by generating a polynomial number of unbiased bits $Z_1,\ldots,Z_{n^c+\lceil \log_2(2n)\rceil}$ and considering them as the binary expansion of a number in $[0,1]$. We then declare the biased-coin toss to be $1$ if the resulting number is smaller than $(1+\theta)/2$ and $0$ otherwise. The threshold computation
$\sum_i Z_i 2^{-i} < (1+\theta)/2$ can be carried out by an OR of AND gates.
\end{proof}
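The construction in the first part of the proof can be mirrored directly in code; a sketch (labels viewed as $\pm 1$ so that flips compose multiplicatively; all names are ours):
\begin{verbatim}
import random

def ac0_style_generation(k, d, theta, seed=0):
    rng = random.Random(seed)
    Y = {(): rng.choice([1, -1])}      # uniformly random root sign
    frontier = [()]
    for _ in range(d):
        nxt = []
        for u in frontier:
            for i in range(k):
                v = u + (i,)
                # P[Y_v = +1] = (1 + theta)/2, so v flips its parent's
                # label with probability (1 - theta)/2
                Y[v] = 1 if rng.random() < (1 + theta) / 2 else -1
                nxt.append(v)
        frontier = nxt

    def label(v):                      # depends on only |v| + 1 coins
        out = Y[()]
        for r in range(1, len(v) + 1):
            out *= Y[v[:r]]
        return (1 - out) // 2          # map {+1, -1} back to {0, 1}
    return label
\end{verbatim}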
\begin{remark}
If we consider a computational model where the inputs have bias $\theta$ instead of $1/2$, then the proof above provides generation functions in $\mathbf{AC}^0$.
\end{remark}
Now that we have established that $\mathbf{AC}^0$ circuits are capable of drawing strings from the correct probability distribution, the logical next question is whether or not $\mathbf{NC}^0$ circuits can do the same. As it turns out, they generally cannot. The key issue is that each bit output by an $\mathbf{NC}^0$ circuit is affected by a constant number of input bits.
\begin{theorem}\label{thm:nc0gen}
Let $f_n:\{0,1\}^{m_n}\to\{0,1\}^{L_d}$ be a series of functions that can be computed by an $\mathbf{NC}^0$ circuit. Also, let $W_1,\cdots,W_{m_n}$ be independently generated random variables and $X'=f_n(W)$. If $0<\theta<1$ then
\[\sum_{x\in\{0,1\}^{L_d}} \min\left(\mathbb{P}[X^{(d)}=x],\mathbb{P}[X'=x]\right)=O\left(e^{-\sqrt{n}}\right)\]
\end{theorem}
We defer the proof of this theorem to Appendix~\ref{app:nc0}. It turns out to be much easier to prove the simpler result that $\mathbf{NC}^0$ fails when it is given uniformly random bits as input, just because some pairs of bits in the output of the broadcast tree model have weak but non-zero correlations.
\bibliographystyle{plain}
\section{Introduction}
The goal of click-through rate prediction is to predict the probabilities of users clicking ads or items, which is critical to many web applications such as online advertising and recommender systems.
Modeling sophisticated feature interactions plays a central role in the success of CTR prediction.
Distinct from continuous features which can be naturally found in images and audios, the features for web applications are mostly in multi-field categorical form.
For example, the four-field categorical features for movies may be: (1) \textsf{Language = \{English, Chinese, Japanese, ... \}}, (2) \textsf{Genre = \{action, fiction, ... \}}, (3) \textsf{Director = \{Ang Lee, Christopher Nolan, ... \}}, and (4) \textsf{Starring = \{Bruce Lee, Leonardo DiCaprio, ... \}} (note that there are many more feature fields in real applications).
These multi-field categorical features are usually converted to sparse one-hot encoding vectors, and then embedded to dense real-value vectors, which can be used to model feature interactions.
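For instance, the embedding step can be sketched as follows (toy vocabulary and dimensions, for illustration only):
\begin{verbatim}
import numpy as np

vocab = {"Language": ["English", "Chinese", "Japanese"],
         "Genre": ["action", "fiction"],
         "Director": ["Ang Lee", "Christopher Nolan"],
         "Starring": ["Bruce Lee", "Leonardo DiCaprio"]}
emb_dim = 8
rng = np.random.default_rng(0)
# one embedding table per field: |vocab_f| x emb_dim
tables = {f: rng.normal(size=(len(v), emb_dim)) for f, v in vocab.items()}

def embed(sample):
    # selecting a row is equivalent to multiplying the one-hot
    # encoding of the feature by the field's embedding table
    return [tables[f][vocab[f].index(val)] for f, val in sample.items()]

vecs = embed({"Language": "English", "Genre": "action",
              "Director": "Ang Lee", "Starring": "Bruce Lee"})
print(len(vecs), vecs[0].shape)    # 4 field vectors of dimension 8
\end{verbatim}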
Factorization machine (FM) \cite{rendle2010factorization} is a well-known model proposed to learn second-order feature interactions from vector inner products.
Field-aware factorization machine (FFM) \cite{juan2016field} further considers the field information and introduces field-aware embedding.
Regrettably, these FM-based models can only model second-order interactions, and the linear modeling limits their representative power.
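Concretely, FM scores an input $x$ as $\hat{y} = w_0 + \sum_i w_i x_i + \sum_{i<j} \langle v_i, v_j\rangle x_i x_j$; a numpy sketch using the standard $O(nk)$ rewriting of the pairwise term:
\begin{verbatim}
import numpy as np

def fm_score(x, w0, w, V):
    # x: (n,) features, w0: scalar, w: (n,) weights,
    # V: (n, k) latent factors, one k-dim vector per feature
    linear = w0 + w @ x
    xv = V.T @ x                       # sum_i x_i * v_i
    pair = 0.5 * (xv @ xv - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pair
\end{verbatim}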
Recently, many deep learning based models have been proposed to learn high-order feature interactions, which follow a general paradigm: simply concatenate the field embedding vectors together and feed them into DNN or other specifically designed models to learn interactions.
For example, Factorisation-machine supported Neural Networks (FNN) \cite{zhang2016deep}, Neural Factorization Machine (NFM) \cite{he2017neural}, Wide\&Deep \cite{cheng2016wide} and DeepFM \cite{guo2017deepfm} utilize DNN to model interactions.
However, these models based on DNN learn high-order feature interactions in a bit-wise, implicit fashion, which lacks good model explanations.
Some models try to learn high order interactions explicitly by introducing specifically designed networks.
For example, Deep\&Cross \cite{wang2017deep} introduces Cross Network (CrossNet) and xDeepFM \cite{lian2018xdeepfm} introduces Compressed Interaction Network (CIN).
Nevertheless, we argue that they are still not sufficiently effective and explicit, since they still follow the general paradigm of combining feature fields together to model their interactions.
The simple \emph{unstructured combination} will inevitably limit the capability to model sophisticated interactions among different feature fields in a flexible and explicit fashion.
In this work, we take the structure of multi-field features into consideration.
Specifically, we represent the multi-field features in a graph structure named \emph{feature graph}.
Intuitively, each node in the graph corresponds to a feature field and different fields can interact through edges.
The task of modeling sophisticated interactions among feature fields can thus be converted to modeling node interactions on the feature graph.
To this end, we design a novel model Feature Interaction Graph Neural Networks (Fi-GNN) based on Graph Neural Networks (GNN), which is able to model sophisticated node (feature) interactions in a flexible and explicit fashion.
In Fi-GNN, the nodes interact by communicating their states with neighbors and updating themselves in a recurrent fashion.
At every time step, each node interacts with neighbors one hop deeper.
Therefore, the number of interaction steps equals the order of feature interactions.
Moreover, the edge weights, which reflect the importance of different feature interactions, and the node weights, which reflect the importance of each feature field for the final CTR prediction, can be learnt by Fi-GNN, providing good explanations.
Overall, our proposed model can model sophisticated feature interactions in an explicit, flexible fashion and also provide good model explanations.
Our contributions can be summarized as follows:
\begin{itemize}
\item
We point out the limitation of the existing works which consider multi-field features as an unstructured combination of feature fields.
To this end, we propose to represent the multi-field features in a graph structure for the first time.
\item We design a novel model Feature Interaction Graph Neural Networks (Fi-GNN) to model sophisticated interactions among feature fields on the graph-structured features in a more flexible and explicit fashion.
\item Extensive experiments on two real-world datasets show that our proposed method can not only outperform the state-of-the-arts but also provide good model explanations.
\end{itemize}
The rest of this paper is organized as follows.
Section 2 summarizes the related work.
Section 3 provides an elaborative description of our proposed method.
The extensive experiments and detailed analysis are presented in Section 4, followed by the conclusion in Section 5.
\section{Related Work}
In this section, we briefly review the existing models that model feature interactions for CTR prediction and graph neural networks.
\subsection{Feature Interaction in CTR Prediction} \label{sect:related}
Modeling feature interactions is the key to the success of CTR prediction and is therefore extensively studied in the literature.
LR is a linear approach, which can only model first-order interactions on the linear combination of raw individual features.
FM \cite{rendle2010factorization} learns second-order feature interactions from vector inner products.
Afterwards, different variants of FM have been proposed.
Field-aware factorization machine (FFM) \cite{juan2016field} considers the field information and introduces field-aware embedding.
AFM \cite{xiao2017attentional} considers the weight of different second-order feature interactions.
However, these approaches can only model second-order interactions, which is not sufficient.
With the success of DNN in various fields, researchers have started to use it to learn high-order feature interactions, owing to its deeper structures and nonlinear activation functions.
The general paradigm is to concatenate the field embedding vectors together and feed them into DNN to learn the high-order feature interactions.
\cite{liu2015convolutional} utilizes convolutional networks to model feature interactions.
Factorisation-machine supported Neural Networks (FNNs) \cite{zhang2016deep} uses the pre-trained factorization machines for field embedding before applying DNN.
Product-based Neural Network (PNN) \cite{qu2016product} models both second-order and high-order interactions by introducing a product layer between field embedding layer and DNN layer.
Similarly, Neural Factorization Machine (NFM) \cite{he2017neural} has a Bi-Interaction Pooling layer between the embedding layer and the DNN layer to model second-order interactions, but the subsequent operation is summation instead of concatenation as in PNN.
Another line of works tries to model the second-order and high-order interactions jointly via hybrid architectures.
The Wide\&Deep \cite{cheng2016wide} and DeepFM \cite{guo2017deepfm} contain a wide part to model the low-order interaction and a deep part to model the high-order interaction.
However, all these approaches leveraging DNN learn the high-order feature interactions in an implicit, bit-wise way and therefore lack good model explainability.
Recently, some works have tried to learn feature interactions in an explicit fashion via specifically designed networks.
Deep\&Cross \cite{wang2017deep} introduces a CrossNet which takes outer product of features at the bit level.
On the contrary, xDeepFM \cite{lian2018xdeepfm} introduces a CIN to take outer product at the vector level.
Nevertheless, they still do not solve the most fundamental problem, which is that the field embedding vectors are simply concatenated together.
The simple unstructured combination of feature fields will inevitably limit the capability to model sophisticated interactions among different fields in a flexible and explicit fashion.
To this end, we propose to represent the multi-field features in a graph structure, where each node represents a field and different feature fields can interact through the edges.
Accordingly, we can model the flexible interactions among different feature fields on the graphs.
\subsection{Graph Neural Networks}
A graph is a data structure which models a set of objects (nodes) and their relationships (edges).
Recently, research on analyzing graphs with machine learning has been receiving more and more attention because of the great representational power of graphs.
Early works usually convert graph-structured data into sequence-structured data to deal with it.
Inspired by word2vec \cite{mikolov2013distributed},
the work \cite{perozzi2014deepwalk} proposed an unsupervised DeepWalk algorithm to learn node embedding in graph based on random walks.
After that, \cite{tang2015line} proposed a network embedding algorithm LINE, which preserves the first- and second-order structural information.
\cite{Grover2016node2vec} proposed node2vec which introduces a biased random walk.
However, these methods can be computationally expensive and non-optimal for large graphs.
Graph neural networks (GNN) are designed to tackle these problems, which are deep learning based methods that operate on the graph domain.
The concept of GNN is first proposed by \cite{scarselli2009graph}.
Generally, nodes in GNNs interact with neighbors by aggregating information from neighborhoods and updating their hidden states.
Many variants of GNN with various kinds of aggregators and updaters have been proposed recently.
Here we only present some representative and classical methods.
Gated Graph Neural Networks (GGNN) \cite{li2015gated} uses GRU \cite{cho2014learning} as updater.
Graph Convolutional Networks (GCN) \cite{kipf2016semi} considers the spectral structure of graphs and utilizes the convolutional aggregator.
GraphSAGE \cite{hamilton2017inductive} considers the spatial information and introduces three kinds of aggregators:
the mean aggregator, LSTM aggregator and pooling aggregator.
Graph attention network (GAT) \cite{velivckovic2017graph} incorporates the attention mechanism into the propagation step.
Some surveys \cite{wu2019comprehensive,zhou2018graph} provide a more elaborate introduction to the various kinds of GNN models.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{./pic/overview2.pdf}
\caption{
Overview of our proposed method.
The input raw multi-field feature vector is first converted to field embedding vectors via an embedding layer and represented as a feature graph, which is then fed into Fi-GNN to model feature interactions.
An attention layer is applied on the output of Fi-GNN to predict the click through rate $\hat{y}$.
Details of embedding layer and Fi-GNN are illustrated in Figure 2 and Figure 3 respectively.}
\vspace{-4mm}
\label{fig:overview}
\end{figure}
Due to its convincing performance and high interpretability, GNN has become a widely applied graph analysis method.
Recently, there have been many applications of GNN, such as neural machine translation \cite{beck2018graph}, semantic segmentation \cite{qi20173d}, image classification \cite{marino2017more}, situation recognition \cite{li2017situation}, recommendation \cite{Wu2018Session}, script event prediction \cite{Zhongyang2018Constructing} and fashion analysis \cite{cui2019dressing,li2019semi}.
GNN is intrinsically suitable for modeling node interactions on graph-structured features.
In this work, we propose a model Fi-GNN based on GGNN to model feature interactions on graph-structured features for CTR prediction.
\section{Our Proposed Method}
We first formulate the problem and then introduce the overview of our proposed method, followed by the elaborate detail of each component.
\subsection{Problem Formulation}
Suppose the training dataset consists of $m$-field categorical features ($m$ is the number of feature fields) and the associated labels $y \in \left \{ 0,1 \right \}$ which indicate user click behaviors.
The task of CTR prediction is to predict $\hat{y}$ for the input $m$-field features, i.e., to estimate the probability of the user clicking.
The key to this task is to model the sophisticated interactions among different feature fields.
\subsection{Overview}
Figure \ref{fig:overview} gives the overview of our proposed method ($m$=4).
The input sparse $m$-field feature vector is first mapped into sparse one-hot vectors and then embedded into dense field embedding vectors via the embedding layer and the multi-head self-attention layer.
The field embedding vectors are then represented as a feature graph, where each node corresponds to a feature field and different feature fields can interact through edges.
The task of modeling interactions can thus be converted to modeling node interactions on the feature graph.
Therefore, the feature graph is fed into our proposed Fi-GNN to model node interactions.
An attentional scoring layer is applied on the output of Fi-GNN to estimate the click-through rate $\hat{y}$.
In the following, we will introduce the details of our proposed method.
\subsection{Embedding Layer} \label{sect:graph}
The multi-field categorical feature $\mathbf{x}$ is usually sparse and of huge dimension.
Following previous works \cite{zhang2016deep,qu2016product,wang2017deep,guo2017deepfm,qu2018product}, we represent each field as a one-hot encoding vector and then embed it to a dense vector, noted as field embedding vector.
Let us consider the example in Section 1,
a movie \textsf{\{Language: English, Genre: fiction, Director: Christopher Nolan, Starring: Leonardo DiCaprio \}} is first transformed into high-dimensional sparse features via one-hot encoding:
\begin{center}
$ \underbrace{\left [ 1, 0, ..., 0 \right ]}_{\text{Language}}, \underbrace{\left [ 0, 1, ..., 0 \right ]}_{\text{Genre}}, \underbrace{\left [ 0, 1, ..., 0 \right ]}_{\text{Director}}, \underbrace{\left [ 0, 1, ..., 0 \right ]}_{\text{Starring}}$
\end{center}
A field-aware embedding layer is then applied upon the one-hot vectors to embed them into low-dimensional, dense real-valued field embedding vectors, as shown in Figure \ref{fig:embedding}.
Likewise, the field embedding vectors of $m$-field feature can be obtained:
\begin{center}
$ \mathbf{E} = \left [ \mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}, ..., \mathbf{e}_{m} \right ], $
\end{center}
where $\mathbf{e}_{i} \in \mathbb{R}^{d}$ denotes the embedding vector of field $i$ and $d$ denotes the dimension of field embedding vectors.
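As a concrete illustration, the embedding lookup can be sketched in a few lines of NumPy. This is only a minimal sketch of the idea, not the implementation used in our experiments; the vocabulary sizes, the random initialisation and the function names are illustrative assumptions.
\begin{verbatim}
import numpy as np

d = 16                            # embedding dimension (assumption)
vocab_sizes = [4, 8, 100, 200]    # hypothetical number of features per field
# one embedding table per field; rows indexed by the active one-hot feature
tables = [np.random.randn(v, d) * 0.01 for v in vocab_sizes]

def embed(feature_ids):
    # feature_ids[i] is the index of the active feature in field i,
    # i.e., the position of the 1 in that field's one-hot vector
    return np.stack([tables[i][f] for i, f in enumerate(feature_ids)])

E = embed([0, 1, 1, 1])           # (m, d) field embedding matrix, m = 4
\end{verbatim}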
\subsection{Multi-head Self-attention Layer}
Transformer~\cite{vaswani2017attention} is prevalent in NLP and has achieved great success in many tasks.
At the core of Transformer, the multi-head self-attention mechanism is able to model complicated dependencies between word pairs in multiple semantic subspaces.
For CTR prediction, we take advantage of the multi-head self-attention mechanism to capture the complex dependencies between feature field pairs, i.e., pairwise feature interactions, in different semantic subspaces.
Following~\cite{song2018autoint}, given the feature embeddings $\mathbf{E}$, we obtain the feature representations that cover the pairwise interactions for an attention head $i$ via the scaled dot-product:
\begin{equation}\nonumber
\mathbf{H}_{i} = \text{softmax}_{i}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{K}}})\mathbf{V},
\end{equation}
\begin{equation}\nonumber
\mathbf{Q}=\mathbf{W}_i^{(Q)}\mathbf{E}, \mathbf{K}=\mathbf{W}_i^{(K)}\mathbf{E}, \mathbf{V}=\mathbf{W}_i^{(V)}\mathbf{E}.
\end{equation}
The matrices $\mathbf{W}_i^{(Q)} \in \mathbb{R}^{d_i \times d}$, $\mathbf{W}_i^{(K)} \in \mathbb{R}^{d_i \times d}$, $\mathbf{W}_i^{(V)} \in \mathbb{R}^{d_i \times d}$ are three weight parameters for attention head $i$, $d_i$ is the dimension size of head $i$, and $\mathbf{H}_{i} \in \mathbb{R}^{m \times d_i}$.
Then we combine the learnt feature representations of each head to preserve the pairwise feature interactions in each semantic subspace:
\begin{equation}\nonumber
\mathbf{H}^1 = \text{ReLU}(\mathbf{H}_{1}\oplus \mathbf{H}_{2}\oplus \cdots \oplus \mathbf{H}_{h}),
\end{equation}
where $\oplus$ denotes the concatenation operation and $h$ denotes the number of attention heads.
The learnt feature representations $\mathbf{H}^1 \in \mathbb{R}^{m \times d'}$ are used for the initial node states of the graph neural network, where $d' = \sum_{i=1}^h d_i$.
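The following NumPy sketch illustrates one scaled dot-product head and the concatenation over heads. It is a minimal illustration rather than our actual implementation: it uses right-multiplication for row-major arrays (the transpose of the convention above), and all parameters are randomly initialised placeholders.
\begin{verbatim}
import numpy as np

def attention_head(E, Wq, Wk, Wv):
    # E: (m, d); Wq, Wk, Wv: (d, d_i); returns H_i of shape (m, d_i)
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])       # (m, m) pairwise field scores
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

def multi_head(E, heads):
    # concatenate the h heads and apply ReLU, giving H^1 of shape (m, d')
    return np.maximum(np.concatenate(
        [attention_head(E, *W) for W in heads], axis=1), 0.0)

m, d, h, d_i = 4, 16, 2, 8
E = np.random.randn(m, d)
heads = [tuple(np.random.randn(d, d_i) * 0.1 for _ in range(3))
         for _ in range(h)]
H1 = multi_head(E, heads)                        # initial node states, d' = 16
\end{verbatim}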
\subsection{Feature Graph}
Distinct from the previous works, which simply concatenate the field embedding vectors together and feed them into designed models to learn feature interactions, we represent them in a graph structure.
In particular, we represent each input multi-field feature as a \emph{feature graph} $\mathcal{G} = (\mathcal{N}, \mathcal{E})$,
where each node $n_{i} \in \mathcal{N}$ corresponds to a feature field $i$ and different fields can interact through the edges, so that $\left | \mathcal{N} \right | = m$.
Since every two fields ought to interact, it is a weighted fully connected graph whose edge weights reflect the importance of different feature interactions.
Accordingly, the task of modeling feature interactions can be converted to modeling node interactions on the feature graph.
\subsection{Feature Interaction Graph Neural Network}\label{sect:model}
Fi-GNN is designed to model node interactions on the feature graph, which is based on GGNN \cite{li2015gated}.
It is able to model the interactions in a flexible and explicit fashion.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{./pic/framework16.png}
\caption{Framework of Fi-GNN.
The nodes interact with neighbors and update their states in a recurrent fashion.
At each interaction step, each node will first aggregate transformed state information from neighbors and then update its state according to the aggregated information and history via GRU and residual connection.}
\label{fig:framework}
\end{figure}
\noindent \textbf{Preliminaries.}
In Fi-GNN, each node $n_{i}$ is associated with a hidden state vector $\mathbf{h}_{i}^{t}$, and the state of the graph is composed of these node states
\begin{center}
$\mathbf{H}^{t} = \left [ \mathbf{h}_{1}^{t}, \mathbf{h}_{2}^{t}, \mathbf{h}_{3}^{t}, ... ,
\mathbf {h}_{m}^{t} \right ],$
\end{center}
where $t$ denotes the interaction step.
The learnt feature representations by the multi-head self-attention layer are used for their initial node states $\mathbf{H}^{1}$.
As shown in Figure \ref{fig:framework}, the nodes interact and update their states in a recurrent fashion.
At each interaction step, the nodes aggregate the transformed state information from neighbors, and then update their node states according to the aggregated information and history via GRU and residual connection.
Next, we will introduce the details of Fi-GNN elaborately.
\noindent \textbf{State Aggregation.}
At interaction step $t$, each node will aggregate the state information from neighbors.
Formally, the aggregated information of node $n_{i}$ is the sum of its neighbors' transformed state information,
\begin{equation} \label{ggnn_original}
\mathbf{a}_{i}^{t} = \sum_{n_{j} \rightarrow n_{i} \in \mathcal{E}} \mathbf{A}[n_{j}, n_{i}] \mathbf{W}_{p} \mathbf{h}_{j}^{t-1},~
\end{equation}
where $\mathbf{W}_{p}$ is the transformation matrix.
$\mathbf{A} \in \mathbb{R}^{m \times m}$ is the adjacency matrix containing the edge weights.
For example, $\mathbf{A}[n_{j}, n_{i}]$ is the weight of the edge from node $n_{j}$ to $n_{i}$, which can reflect the importance of their interaction.
Apparently, the transformation matrix and the adjacency matrix determine the node interactions.
Since the interaction on each edge ought to differ, we aim to achieve edge-wise interaction, which requires a unique weight and transformation function for each edge.
(1) \textit{\textbf{Attentional Edge Weights.}}
The adjacency matrix in conventional GNN models is usually binary, i.e., it only contains 0s and 1s.
It can only reflect whether nodes are connected but fails to reflect the importance of their relations.
In order to infer the importance of the interactions between different nodes, we propose to learn the edge weights via an attention mechanism.
In particular, the weight of the edge from node $n_{i}$ to node $n_{j}$ is calculated from their initial node states, i.e., the corresponding field embedding vectors.
Formally,
\begin{equation} \label{uniform_A}
w(n_{i}, n_{j}) = \frac{\text{exp}(\text{LeakyRelu}(\mathbf{W}_{w} \left [ \mathbf{e}_{i} \left | \right | \mathbf{e}_{j} \right ])) }{\sum_{k} \text{exp}(\text{LeakyRelu}(\mathbf{W}_{w} \left [ \mathbf{e}_{i} \left | \right | \mathbf{e}_{k} \right ]))},
\end{equation}
where $\mathbf{W}_{w} \in \mathbb{R}^{2d'}$ is a weight vector and $\left | \right |$ is the concatenation operation.
The softmax function is utilized to make the weights easily comparable across different nodes.
Therefore, the adjacency matrix is,
\begin{equation} \label{a}
\mathbf{A}[n_{i}, n_{j}]=\begin{cases}
& w(n_{i}, n_{j}), \text{ if } i \neq j, \\
& 0, \text{ else }.
\end{cases}
\end{equation}
Since the edge weights reflect the importance of the different interactions, Fi-GNN can provide good explanations on the relations between the feature fields of an input instance, which will be further discussed in Section \ref{sect:explannation}.
(2) \textit{\textbf{Edge-wise Transformation.}}
As discussed before, a fixed transformation function on all the edges is unable to model the flexible interactions, so a unique transformation for each edge is essential.
Nevertheless, our feature graph is a complete graph with a huge number of edges.
Simply assigning a unique transformation matrix to each edge would consume too much parameter space and running time.
To reduce the time and space complexity and still achieve edge-wise transformation, we assign an output matrix $\mathbf{W}_{out}^{i}$ and an input matrix $\mathbf{W}_{in}^{i}$ to each node $n_{i}$, similar to \cite{cui2019dressing}.
As shown in Figure \ref{fig:framework}, when node $n_{i}$ sends its state information to node $n_{j}$, the state information will first be transformed by its output matrix $\mathbf{W}_{out}^{i}$ and then transformed by node $n_{j}$'s input matrix $\mathbf{W}_{in}^{j}$ before $n_{j}$ receives it.
The transformation function of the edge $n_{i} \rightarrow n_{j}$ from node $n_{i}$ to node $n_{j}$ can thus be written as,
\begin{equation} \label{W_inout}
\mathbf{W}_{p}^{n_{i} \rightarrow n_{j}} = \mathbf{W}_{out}^{i}\mathbf{W}_{in}^{j}.
\end{equation}
Likewise, the transformation function of the edge $n_{j} \rightarrow n_{i}$ from node $n_{j}$ to node $n_{i}$ is
\begin{equation} \label{W_inout2}
\mathbf{W}_{p}^{n_{j} \rightarrow n_{i}} = \mathbf{W}_{out}^{j}\mathbf{W}_{in}^{i}.
\end{equation}
Accordingly, Equation \ref{ggnn_original} can be rewritten as,
\begin{equation} \label{ggnn_2}
\mathbf{a}_{i}^{t} = \sum_{n_{j} \rightarrow n_{i} \in \mathcal{E}} \textbf{A}[n_{j}, n_{i}]\mathbf{W}_{out}^{j}\mathbf{W}_{in}^{i} \mathbf{h}_{j}^{t-1} + \mathbf{b}_{p}.
\end{equation}
In this way, the number of parameters is proportional to the number of nodes rather than the much larger number of edges, which greatly reduces the space and time complexity while still achieving edge-wise interaction.
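For concreteness, the whole aggregation step, i.e., the attentional edge weights of Equations (\ref{uniform_A}) and (\ref{a}) combined with the node-wise in/out transformations of Equation (\ref{ggnn_2}), can be sketched as follows. The sketch is illustrative only: the LeakyReLU slope of 0.2, the random placeholder parameters and the application order (output matrix first, then input matrix, following the description above) are assumptions.
\begin{verbatim}
import numpy as np

def leaky_relu(x, slope=0.2):          # slope value is an assumption
    return np.where(x > 0, x, slope * x)

def edge_weights(H1, Ww):
    # attentional adjacency matrix: A[i, j] = w(n_i, n_j), A[i, i] = 0
    m = H1.shape[0]
    A = np.zeros((m, m))
    for i in range(m):
        logits = np.array([leaky_relu(Ww @ np.concatenate([H1[i], H1[k]]))
                           for k in range(m)])
        w = np.exp(logits - logits.max())
        w[i] = 0.0                     # no self loop; softmax over k != i
        A[i] = w / w.sum()
    return A

def aggregate(H, A, W_out, W_in, b):
    # a_i = sum_j A[j, i] * W_in[i] @ (W_out[j] @ h_j) + b
    m = H.shape[0]
    sent = np.stack([W_out[j] @ H[j] for j in range(m)])  # outgoing messages
    return np.stack([sum(A[j, i] * (W_in[i] @ sent[j])
                         for j in range(m) if j != i) + b
                     for i in range(m)])

m, dp = 4, 16
H1 = np.random.randn(m, dp)            # initial node states
Ww = np.random.randn(2 * dp) * 0.1
W_out = [np.random.randn(dp, dp) * 0.1 for _ in range(m)]
W_in = [np.random.randn(dp, dp) * 0.1 for _ in range(m)]
A = edge_weights(H1, Ww)
a = aggregate(H1, A, W_out, W_in, np.zeros(dp))           # (m, dp)
\end{verbatim}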
\noindent \textbf{State Update.}
After aggregating state information, the nodes will update the state vectors via GRU and residual connections.
(1) \textit{\textbf{State update via GRU.}}
In traditional GGNN, the state vector of node $n_{i}$ is updated via GRU based on the aggregated state information $\mathbf{a}_{i}^{t}$ and its state at the previous step.
Formally,
\begin{equation} \label{ggnn_1}
\mathbf{h}_{i}^{t} = GRU(\mathbf{h}_{i}^{t-1}, \mathbf{a}_{i}^{t}).
\end{equation}
It can be formalized in detail as:
\begin{align}
& \mathbf{z}_{i}^{t} = \sigma(\mathbf{W}_{z} \mathbf{a}_{i}^{t} + \mathbf{U}_{z}\mathbf{h}_{i}^{t-1} + \mathbf{b}_{z}),\\
& \mathbf{r}_{i}^{t} = \sigma(\mathbf{W}_{r} \mathbf{a}_{i}^{t} + \mathbf{U}_{r}\mathbf{h}_{i}^{t-1} + \mathbf{b}_{r}),\\
& \mathbf{\tilde{h}}_{i}^{t} = tanh(\mathbf{W}_{h} \mathbf{a}_{i}^{t} + \mathbf{U}_{h}(\mathbf{r}_{i}^{t} \odot \mathbf{h}_{i}^{t-1}) + \mathbf{b}_{h}), \\
& \mathbf{h}_{i}^{t} = \mathbf{\tilde{h}}_{i}^{t} \odot \mathbf{z}_{i}^{t} + \mathbf{h}_{i}^{t-1} \odot (1-\mathbf{z}_{i}^{t}),~
\end{align}
where $\mathbf{W}_{z}$, $\mathbf{W}_{r}$, $\mathbf{W}_{h}$, $\mathbf{U}_{z}$, $\mathbf{U}_{r}$, $\mathbf{U}_{h}$, $\mathbf{b}_{z}$, $\mathbf{b}_{r}$ and $\mathbf{b}_{h}$ are the weights and biases of the updating function Gated Recurrent Unit (GRU) \cite{cho2014learning}.
$\mathbf{z}_{i}^{t}$ and $\mathbf{r}_{i}^{t}$ are the update gate vector and the reset gate vector, respectively.
(2) \textit{\textbf{State update via Residual Connections.}}
Previous works \cite{shan2016deep,song2018autoint,cheng2016wide} have shown that it is effective to combine low-order and high-order interactions.
We thus introduce extra residual connections to update node states along with GRU, which can facilitate low-order feature reuse and gradient back-propagation.
Therefore, the Eq. (\ref{ggnn_1}) can be rewritten as,
\begin{equation} \label{eq:residual}
\mathbf{h}_{i}^{t} = GRU(\mathbf{h}_{i}^{t-1}, \mathbf{a}_{i}^{t}) + \mathbf{h}_{i}^{1}.
\end{equation}
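A minimal sketch of one state-update step, combining the GRU equations above with the residual connection of Eq. (\ref{eq:residual}); the dictionary \texttt{P} holds randomly initialised placeholder weights.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def state_update(h_prev, a, h_init, P):
    # one Fi-GNN update: GRU on (aggregated info, previous state) + residual
    z = sigmoid(P['Wz'] @ a + P['Uz'] @ h_prev + P['bz'])   # update gate
    r = sigmoid(P['Wr'] @ a + P['Ur'] @ h_prev + P['br'])   # reset gate
    h_tilde = np.tanh(P['Wh'] @ a + P['Uh'] @ (r * h_prev) + P['bh'])
    h = h_tilde * z + h_prev * (1.0 - z)                    # GRU output
    return h + h_init                                       # residual connection

dp = 16
P = {k: np.random.randn(dp, dp) * 0.1
     for k in ['Wz', 'Uz', 'Wr', 'Ur', 'Wh', 'Uh']}
P.update({k: np.zeros(dp) for k in ['bz', 'br', 'bh']})
h1 = np.random.randn(dp)                                    # initial state h^1
h2 = state_update(h1, np.random.randn(dp), h1, P)           # state h^2
\end{verbatim}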
\subsection{Attentional Scoring Layer}
After $T$ propagation steps, we can obtain the node states
\begin{center}
$\mathbf{H}^{T} = \left [ \mathbf{h}_{1}^{T}, \mathbf{h}_{2}^{T}, ... , \mathbf{h}_{m}^{T} \right ].$
\end{center}
Since the nodes have interacted with their $T$-order neighbors, $T$-order feature interactions are modeled.
We need a graph-level output to predict the CTR.
\noindent{\textbf{Attentional Node Weights.}}
The final state of each field node has captured the global information; in other words, these field nodes are neighborhood-aware.
Here we predict a score from the final state of each field respectively and sum them up with an attention mechanism which measures their influence on the overall prediction.
Formally, the prediction score of each node $n_{i}$ and its attentional node weight can be estimated via two multi-layer perceptrons respectively as,
\begin{equation} \label{self_attention1}
\hat{y}_{i} = MLP_{1}(\mathbf{h}_{i}^{T}),
\end{equation}
\begin{equation} \label{self_attention2}
a_{i} = MLP_{2}(\mathbf{h}_{i}^{T}).
\end{equation}
The overall prediction is a summation of all nodes:
\begin{equation} \label{self_attention}
\hat{y} = \sum_{i=1}^{m}a_{i}\hat{y}_{i}.
\end{equation}
Note that this readout is essentially the same as the one in \cite{li2015gated}.
Intuitively, $MLP_{1}$ is used to model the prediction score of each field aware of the global information and $MLP_{2}$ is used to model the weight of each field (i.e., the importance of the field's influence on the overall prediction).
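A sketch of the attentional scoring readout is given below. Note that it is only an illustration: $MLP_1$ and $MLP_2$ are reduced to single linear layers, and the final sigmoid squashing the score into $(0,1)$ is an assumption made here so that $\hat{y}$ can be fed to the log loss of the next subsection.
\begin{verbatim}
import numpy as np

def readout(H_T, w1, b1, w2, b2):
    # per-node score y_i and attentional weight a_i, then a weighted sum
    y = H_T @ w1 + b1                      # MLP_1, reduced to a linear layer
    a = H_T @ w2 + b2                      # MLP_2, reduced to a linear layer
    s = float(a @ y)                       # overall score sum_i a_i * y_i
    return 1.0 / (1.0 + np.exp(-s))        # squash into (0, 1) -- assumption

m, dp = 4, 16
H_T = np.random.randn(m, dp)               # final node states after T steps
y_hat = readout(H_T, np.random.randn(dp), 0.0, np.random.randn(dp), 0.0)
\end{verbatim}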
\subsection{Training}
Our loss function is Log loss, which is defined as follows:
\begin{equation} \label{eqa:logloss}
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y}_{i})\right),
\end{equation}
where $N$ is the total number of training samples and $i$ indexes the training samples.
The parameters are updated via minimizing the Log Loss using RMSProp \cite{tieleman2012lecture}.
Most CTR datasets have an unbalanced proportion of positive and negative samples, which can mislead the predictions.
To balance the proportion, we randomly select an
equal number of positive and negative samples in each batch during the training process.
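The loss and the balanced mini-batch sampling can be sketched as follows; the batch size and the random seed are illustrative.
\begin{verbatim}
import numpy as np

def log_loss(y_true, y_pred, eps=1e-7):
    p = np.clip(y_pred, eps, 1.0 - eps)    # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

def balanced_batch(labels, batch_size, rng):
    # sample an equal number of positive and negative training instances
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    half = batch_size // 2
    return np.concatenate([rng.choice(pos, half), rng.choice(neg, half)])

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
batch = balanced_batch(labels, 1024, rng)  # 512 positives + 512 negatives
\end{verbatim}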
\subsubsection{\textbf{Parameter Space.}}
The parameters to be learnt mainly consist of the parameters associated with the nodes and the perceptron networks in the attention mechanisms.
For each node $n_{i}$, we have an input matrix $\mathbf{W}_{in}^{i}$ and an output matrix $\mathbf{W}_{out}^{i}$ to transform state information.
Totally we have $2m$ matrices, which are proportional to the number of nodes $m$.
Besides, the multi-head self-attention layer contains the following weight matrices $\left \{ \mathbf{W}_i^{(Q)}, \mathbf{W}_i^{(K)}, \mathbf{W}_i^{(V)} \right \}$ for each head, and the number of parameters of the entire layer is $(3dd'+hdd')$.
In addition, we have two matrices of perception networks in the self-attention mechanism and also parameters in GRU.
Overall, there are $O(2m+hdd')$ parameters.
\subsection{Model Analysis}
\subsubsection{\textbf{Comparison with Previous CTR Models.}}
As discussed before, the previous deep learning based CTR models learn high-order interactions in a general paradigm:
the raw sparse input multi-field features are first mapped into dense field embedding vectors, which are then simply concatenated together and fed into deep neural networks (DNN) or other specifically designed networks to learn high-order feature interactions.
The simple unstructured combination of feature fields inevitably limits the capability to model sophisticated interactions among different fields in a sufficiently flexible and explicit fashion.
In this way, the interaction between different fields is conducted in a fixed fashion, no matter how sophisticated the network used is.
In addition, these models lack good model explanations.
Since we represent the multi-field features in a graph structure, our proposed model Fi-GNN is able to model interactions among different fields in the form of node interactions.
Compared with the previous CTR models, Fi-GNN can model the sophisticated feature interaction via flexible edge-wise interaction function, which is more effective and explicit.
Moreover, the edge weights reflecting importance of different interactions can be learnt in Fi-GNN, which provides good model explanations for CTR prediction.
In fact, if all the edge weights are 1 and the transformation matrices on all the edges are the same, our model Fi-GNN collapses into FM.
Taking advantage of the great power of GNN, we can apply flexible interactions on different feature fields.
\subsubsection{\textbf{Comparison with Previous GNN Models.}}
Our proposed model Fi-GNN is designed based on GGNN, upon which we mainly make two improvements:
(1) we achieve edge-wise interaction via attentional edge weights and edge-wise transformation;
(2) we introduce an extra residual connection along with GRU to update states, which can help regain the low-order information.
As discussed before, the node interaction on each edge in GNN depends on the edge weight and the transformation function on the edge.
The conventional GGNN uses binary edge weights which fails to reflect the importance of the relations, and a fixed transformation function on all the edges.
In contrast, our proposed Fi-GNN can model edge-wise interactions via attention edge weights and edge-wise transformation functions.
When the interaction order is high, the node states tend to be smooth, i.e., the states of all the nodes tend to be similar.
The residual connections can help identify the nodes by adding the initial node states.
\begin{table}[h]
\centering\caption{Statistics of evaluation datasets.}
\begin{tabular}{cccc}
\hline
Dataset & \#Instances & \#Fields & \#Features (sparse) \\
\hline
Criteo & 45,840,617 & 39 & 998,960 \\
Avazu & 40,428,967 & 23 & 1,544,488 \\
\hline
\end{tabular}\label{tab::dataset}
\end{table}
\begin{table*}
\centering\caption{Performance comparison of different methods. The best performance on each dataset and metric is highlighted. Further analysis is provided in Section \ref{sect:result}.}
\begin{tabular}{llcccccccc}
\hline
\multirow{2}{*}{Model Type} & \multirow{2}{*}{Model} & \multicolumn{4}{c}{Criteo} & \multicolumn{4}{c}{Avazu}\\
& & AUC & RI-AUC & Logloss & RI-Logloss & AUC & RI-AUC & Logloss & RI-Logloss \\
\hline
\multirow{1}{*}{First-order} & LR & 0.7820 & 3.00\% & 0.4695 & 5.43\% & 0.7560 & 2.60\% & 0.3964 & 3.63\% \\
\hline
\multirow{2}{*}{Second-order} & FM~\cite{rendle2010factorization} & 0.7836 & 2.80\% & 0.4700 & 5.55\% & 0.7706 & 0.72\% & 0.3856 & 0.76\% \\
& AFM\cite{xiao2017attentional} & 0.7938 & 1.54\% & 0.4584 & 2.94\% & 0.7718 & 0.57\% & 0.3854 & 0.81\% \\
\midrule
\multirow{5}{*}{High-order}
& DeepCrossing~\cite{shan2016deep} & 0.8009 & 0.66\% &0.4513 & 1.35\% & 0.7643 & 1.53\% & 0.3889 & 1.67\% \\
& NFM~\cite{he2017neural} & 0.7957 & 1.57\% & 0.4562 & 2.45\% & 0.7708 & 0.70\% & 0.3864 & 1.02\% \\
& CrossNet~\cite{wang2017deep} & 0.7907 & 1.92\% & 0.4591 & 3.10\% & 0.7667 & 1.22\% & 0.3868 & 1.12\% \\
& CIN~\cite{lian2018xdeepfm} & 0.8009 & 0.63\% & 0.4517 & 1.44\% & 0.7758 & 0.05\% & 0.3829 & 0.10\% \\
& Fi-GNN (ours) & \textbf{0.8062} & 0.00\% & \textbf{0.4453} & 0.00\% & \textbf{0.7762} & 0.00\% & \textbf{0.3825} & 0.00\% \\
\bottomrule
\end{tabular}
\label{tab::results}
\end{table*}
\section{Experiments}
In this section, we conduct extensive experiments to answer the following questions:
\begin{itemize}
\item[\textbf{RQ1}]
How does our proposed Fi-GNN perform in modeling high-order feature interactions compared with the state-of-the-art models?
\item[\textbf{RQ2}]
Does our proposed Fi-GNN perform better than original GGNN in modeling high-order feature interactions?
\item[\textbf{RQ3}]
What are the influences of different model configurations?
\item[\textbf{RQ4}]
What are the relations between features of different fields?
Is our proposed model explainable?
\end{itemize}
We first present some fundamental experimental settings before answering these questions.
\subsection{Experiment Setup}
\subsubsection{Datasets}
We evaluate our proposed models on the following two datasets, whose statistics are summarized in Table~\ref{tab::dataset}.
\textbf{1. Criteo\footnote{https://www.kaggle.com/c/criteo-display-ad-challenge}.} This is a famous industry benchmark dataset for CTR prediction, which has 45 million users' click records in 39 anonymous feature fields on displayed ads.
Given a user and the page he is visiting, the goal
is to predict the probability that he will click on a given ad.
\textbf{2. Avazu\footnote{https://www.kaggle.com/c/avazu-ctr-prediction}.} This dataset contains users' click behaviors on displayed mobile ads.
There are 23 feature fields including user/device features and ad attributes.
The fields are partially anonymous.
For the two datasets, we remove the infrequent features appearing fewer than 10 and 5 times, respectively, and treat them as a single feature ``<unknown>''.
Since the numerical features may have large variance, we normalize numerical values by transforming a value $z$ to $log^2(z)$ if $z > 2$, which is proposed by the winner of Criteo Competition\footnote{\url{https://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf}}.
The instances are randomly split in 8:1:1 for training, validation and testing.
\subsubsection{Evaluation Metrics}
We use the following two metrics for model evaluation: AUC (Area Under the ROC curve) and Logloss (cross entropy).
\textbf{AUC} measures the probability that a positive instance will be ranked higher than a randomly chosen negative one.
A higher AUC indicates a better performance.
\textbf{Logloss} measures the distance between the predicted score and the true label for each instance.
A lower Logloss indicates a better performance.
\textbf{Relative Improvement (RI)}. It should be noted that a small improvement with respect to AUC is regarded as significant for real-world CTR tasks \cite{cheng2016wide,guo2017deepfm,wang2017deep,lian2018xdeepfm}.
In order to estimate the relative improvement our model achieves over the compared models, we measure \textbf{RI-AUC} and \textbf{RI-Logloss}, which can be formulated as,
\begin{equation}
\textit{RI}\text{-}\textit{X} = \dfrac {\left |\textit{X}(model)-\textit{X}(base) \right |}{\textit{X}(base)} *100\%~,
\end{equation}
where $\left | x \right |$ returns the absolute value of $x$, $X$ can be either AUC or Logloss, $\textit{model}$ refers to our proposed model and $\textit{base}$ refers to the compared model.
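For instance, for LR on Criteo, $\textit{RI}\text{-}\textit{AUC} = \left | 0.7820 - 0.8062 \right | / 0.8062 \times 100\% \approx 3.00\%$, which reproduces the corresponding entry in Table \ref{tab::results}; note that the table entries take the Fi-GNN score as the reference value in the denominator.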
\subsubsection{Baselines}
As described in Section \ref{sect:related},
the early approaches can be categorized into three types:
(A) Logistic Regression (LR) which models first-order interaction;
(B) Factorization Machine (FM) based linear models which model second-order interactions;
(C) Deep learning based models which model high-order interactions on the concatenated field embedding vectors.
We select the following representative methods of three types to compare with ours.
\textbf{LR} (A) models first-order interaction on the linear combination of raw individual features.
\textbf{FM}~\cite{rendle2010factorization} (B) models second-order feature interactions from vector inner products.
\textbf{AFM}~\cite{xiao2017attentional} (B) is an extension of FM, which considers the weights of different second-order feature interactions by using an attention mechanism.
It is one of the state-of-the-art models that model second-order feature interactions.
\textbf{DeepCrossing}~\cite{shan2016deep} (C) utilizes DNN with residual connections to learn high-order feature interactions in an implicit fashion.
\textbf{NFM}~\cite{he2017neural} (C) utilizes a Bi-Interaction Pooling layer to model the second-order interactions, and then feeds the concatenated second-order combinatorial features into DNNs to model high-order interactions.
\textbf{CrossNet (Deep\&Cross) }~\cite{wang2017deep} (C) is the core of Deep\&Cross model, which tries to model feature interactions explicitly by taking outer product of concatenated feature vector at the bit-wise level.
\textbf{CIN (xDeepFM)}~\cite{lian2018xdeepfm} (C) is the core of xDeepFM model, which takes outer product of stacked feature matrix at vector-wise level.
\subsubsection{Implementation Details}
We implement our method using Tensorflow\footnote{The code is released at \url{https://github.com/CRIPAC-DIG/Fi_GNN}}. The optimal hyper-parameters are determined by the grid search strategy.
Implementation of baselines follows \cite{song2018autoint}.
Dimension of field embedding vectors is 16 and batch size is 1024 for all methods.
DeepCrossing has four feed-forward layers, each with 100 hidden units.
NFM has one hidden layer of size 200 on top of Bi-Interaction layer as recommended in the paper \cite{he2017neural}.
There are three interaction layers for both CrossNet and CIN.
All the experiments were conducted on a server equipped with 8 NVIDIA Titan X GPUs.
\begin{figure*}[hbtp]
\subfigure[edge-wise interaction (E) and residual connections (R)]{
\begin{minipage}[b]{0.5\textwidth}
\label{fig:ablation_er}
\includegraphics[width=1\textwidth]{./pic/er.pdf}
\end{minipage}%
}%
\subfigure[attentional edge weight (W) and edge-wise transformation (T)]{
\begin{minipage}[b]{0.5\textwidth}
\label{fig:ablation_wt}
\includegraphics[width=1\textwidth]{./pic/wt.pdf}
\end{minipage}%
}%
\caption{Two groups of ablation studies on Fi-GNN.}
\label{fig:ablation}
\end{figure*}
\subsection{Model Comparison (RQ1)}\label{sect:result}
The performance of different methods is summarized in Table \ref{tab::results}, from which we can obtain the following observations:
\begin{itemize}
\item[(1)]
LR achieves the worst performance among these baselines, which proves that individual features are insufficient for CTR prediction.
\item[(2)]
FM and AFM, which model second-order feature interactions, outperform LR on all datasets, indicating that it is effective to model pairwise interactions between feature fields.
In addition, AFM achieves better performance than FM, which proves the effectiveness of attention over different interactions.
\item[(3)]
The methods modeling high-order interactions mostly outperform those that model only second-order interactions.
This indicates that second-order feature interactions are not sufficient.
\item[(4)]
DeepCrossing outperforms NFM, proving the effectiveness of residual connections in CTR prediction.
\item[(5)]
Our proposed Fi-GNN achieves best performance among all these methods on two datasets.
Considering the fact that improvements with respect to AUC at the \textbf{0.001-level} are regarded as significant for the CTR prediction task, our proposed method shows great superiority over these state-of-the-art models, especially on the Criteo dataset, owing to the great representational power of the graph structure and the effectiveness of GNN in modeling node interactions.
\item[(6)] Compared with these baselines, the relative improvement our model achieves on the Criteo dataset is higher than that on the Avazu dataset. This might be attributed to the fact that there are more feature fields in the Criteo dataset, which can take more advantage of the representational power of the graph structure.
\end{itemize}
\subsection{Ablation Study (RQ2)}\label{sect:comp_gnn}
Our proposed model Fi-GNN is based on GGNN, upon which we mainly make two improvements:
(1) we achieve edge-wise node interactions via attentional edge weights and edge-wise transformation;
(2) we introduce extra residual connections to update state along with GRU.
To evaluate the effectiveness of the two improvements on modeling node interactions, we conduct ablation study and compare the following three variants of Fi-GNN:
\textbf{Fi-GNN(-E/R)}:
Fi-GNN without the two above mentioned improvements: edge-wise node interactions (\textbf{E}) and residual connections (\textbf{R}).
\textbf{Fi-GNN(-E)}:
Fi-GNN without edge-wise interactions (\textbf{E}).
\textbf{Fi-GNN(-R)}:
Fi-GNN without residual connections (\textbf{R}), which is also GGNN with edge-wise interactions.
The performance comparison is shown in Figure \ref{fig:ablation_er}, from which we can obtain the following observations:
\begin{itemize}
\item[(1)]
Compared with Fi-GNN, the performance of Fi-GNN(-E) drops by a large margin, suggesting that it is crucial to model the edge-wise interactions.
Fi-GNN(-E) achieves better performance than Fi-GNN(-E/R), proving that the residual connections can indeed provide useful information.
\item[(2)]
The full model Fi-GNN outperforms the three variants, indicating that the two improvements we make, i.e., residual connections and edge-wise interactions, can jointly boost the performance.
\end{itemize}
We take two measures to achieve edge-wise node interactions in Fi-GNN: attentional edge weight (\textbf{W}) and edge-wise transformation (\textbf{T}).
To further investigate where the great improvement comes from, we conduct another ablation study and compare the following three variants of Fi-GNN:
\textbf{Fi-GNN(-W/T)}: Fi-GNN without attentional edge weights (\textbf{W}) and edge-wise transformation (\textbf{T}), i.e., it uses a binary adjacency matrix (all the edge weights are 1) and a shared transformation matrix on all the edges.
It is the same as \textbf{Fi-GNN(-E)}.
\textbf{Fi-GNN(-W)}: Fi-GNN without attentional edge weights, i.e., it uses a binary adjacency matrix.
\textbf{Fi-GNN(-T)}: Fi-GNN without edge-wise transformation,
i.e., it uses a shared transformation on all the edges.
The performance comparison is shown in Figure \ref{fig:ablation_wt}.
We can see that Fi-GNN(-T) and Fi-GNN(-W) both outperform Fi-GNN(-W/T), which proves their effectiveness.
Nevertheless, Fi-GNN(-W) achieves greater improvements than Fi-GNN(-T), suggesting that the edge-wise transformation is more effective than the attentional edge weights in modeling edge-wise interactions.
This is quite reasonable since a transformation matrix ought to have a stronger influence on interactions than a scalar attentional edge weight.
In addition, the fact that Fi-GNN achieves the best performance demonstrates that it is crucial to take both measures to model the edge-wise interactions.
\begin{figure}[t]
\centering
\subfigure[State Dimensionality]{
\begin{minipage}[b]{0.24\textwidth}
\label{fig:hidden}
\includegraphics[width=1\textwidth]{./pic/hidden.pdf}
\end{minipage}%
}%
\subfigure[Interaction Step]{
\begin{minipage}[b]{0.24\textwidth}
\label{fig:order}
\includegraphics[width=1\textwidth]{./pic/order.pdf}
\end{minipage}%
}%
\caption{AUC performance with different state dimensionality $D$ (left) and interaction step $T$ (right) on Criteo and Avazu dataset.}
\label{fig:performance}
\vspace{-4mm}
\end{figure}
\subsection{Hyper-Parameter Study (RQ3)}
\subsubsection{\textbf{Influence of different state dimensionality.}}
We first investigate how the performance changes w.r.t. the dimension of the node states $d'$, which is also the output size of the initial multi-head self-attention layer.
The results on Criteo and Avazu datasets are shown in Figure \ref{fig:hidden}.
On the Avazu dataset, the performance first increases and then begins to decrease when the dimension size reaches 32, which indicates that a state size of 32 carries enough information and that the model is overfitted when too many parameters are used.
Nevertheless, on the Criteo dataset, the performance peaks with a dimension size of 64, which is reasonable since this dataset is more complex and needs a larger dimension size to carry enough information.
\subsubsection{\textbf{Influence of different interaction steps.}}
We are interested in what the optimal highest order of feature interactions is.
Our proposed Fi-GNN can answer this question, since the interaction step $T$ equals the highest order of feature interactions.
Therefore, we conduct experiments on how the performance changes w.r.t. the highest order of feature interaction, i.e., the interaction step $T$.
The results on Criteo and Avazu datasets are shown in Figure \ref{fig:order}.
On the Avazu dataset, we can see that the performance increases as $T$ grows until it reaches 2, after which the performance starts to decrease.
By contrast, the performance peaks at $T=3$ on the Criteo dataset.
This finding suggests that 2-order and 3-order interactions are enough for the Avazu and Criteo datasets, respectively.
This is reasonable since the Avazu and Criteo datasets have 23 and 39 feature fields, respectively.
Thus the Criteo dataset needs more interaction steps for the field nodes to fully interact with the other nodes in the feature graphs.
\subsection{Model Explanation (RQ4)} \label{sect:explannation}
In this section, we answer the question of whether Fi-GNN can provide explanations.
We apply attention mechanisms on the edges and nodes in the feature graphs and obtain attentional edge weights and attentional node weights respectively, which can provide explanations from different aspects.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{./pic/attention_edge.png}
\caption{Heat map of attentional edge weights at the global-level on Avazu, which reflects the importance of relations between different feature fields.}
\label{fig:heatmap_edge}
\end{figure}
\subsubsection{\textbf{Attentional Edge weights.}}
The attentional edge weight reflects the importance of the interaction between the two connected field nodes, and can also reflect the relation of the two feature fields:
the higher the weight, the stronger the relation.
Figure \ref{fig:heatmap_edge} presents the heat map of the globally averaged adjacency matrix over all the samples in the Avazu dataset, which reflects the relations between different fields at a global level.
Since some feature fields are anonymous, we only show the 13 feature fields with real meanings.
As can be seen, some feature fields tend to have strong relations with others, such as \textsf{site\_category} and \textsf{site\_id}.
This makes sense since these two feature fields both correspond to the website where the impressions are put; they contain the main contextual information of the impressions.
\textsf{Hour} is another feature field which has close relations with others. This is reasonable since Avazu focuses on the mobile scene, where users surf online at any time of the day.
The surfing time has a strong influence on other advertising features.
On the other hand, \textsf{device\_ip} and \textsf{device\_id} seem to have weak relations with the other feature fields.
This may be because they nearly amount to a user identity, which is relatively fixed and hard to be influenced by other features.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{./pic/node_attention.pdf}
\caption{Heat map of attentional node weights at both global- and case-level on Avazu, which reflects the importance of different feature fields on the final prediction.}
\label{fig:heatmap_node}
\vspace{-5mm}
\end{figure}
\subsubsection{\textbf{Attentional Node weights.}}
The attentional node weights reflect the importance of the feature fields' influence on the overall prediction score.
Figure \ref{fig:heatmap_node} presents the heat maps of global-level and case-level attentional node weights.
The leftmost is a globally averaged one over all the samples in the Avazu dataset.
The other four are randomly selected cases, whose predicted scores are $[0.97, 0.12, 0.91, 0.99]$ and whose labels are $[1, 0, 1, 1]$, respectively.
At the global level, we can see that the feature field \textsf{app\_category} has the strongest influence on the clicking behaviors.
This is reasonable since Avazu focuses on the mobile scene, where the app is the most important factor.
At the case level, we observe that the final clicking behavior mainly depends on one critical feature field in most cases.
\section{Conclusions}
In this paper, we point out the limitations of the previous CTR models which consider multi-field features as an unstructured combination of feature fields.
To overcome these limitations, we propose to represent the multi-field features in a graph structure for the first time, where each node corresponds to a feature field and different fields can interact through edges.
Therefore, modeling feature interactions can be converted to modeling node interactions on the graph.
To this end, we design a novel model Fi-GNN which is able to model sophisticated interactions among feature fields in a flexible and explicit fashion.
Overall, we propose a new paradigm of CTR prediction: represent multi-field features in a graph structure and convert the task of modeling feature interactions to modeling node interactions on graphs, which may motivate the future work in this line.
\begin{acks}
This work is supported by National Natural Science Foundation of China (61772528, 61871378) and National Key Research and Development Program (2016YFB1001000, 2018YFB1402600).
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The exponential reduction of computers' components known as Moore's law took
computation from the classical physical domain to the quantum one. The idea of quantum computation was initially proposed in \cite{Feynman}, where Feynman states that quantum computers can simulate quantum physical systems
exponentially faster than classical computers. Some quantum algorithms also
outperform the best known classical algorithms; the most famous examples are Shor's factoring algorithm \cite{shor:97}, which is exponentially faster than the best known classical algorithm, and Grover's search algorithm \cite{grover:96}, with a quadratic gain over the best classical algorithm for unordered search. It is true that quantum computers are not yet a reality, but there has been an explosion of investment in quantum computing, the results of which are numerous proposals for quantum computers and the general belief that they will soon be realised. The use of an adiabatic quantum system with 84 quantum bits is reported in~\cite{Bian2013}, and \cite{Monz2011} reports the creation of a quantum system with 14 quantum bits. Empirical evaluation of the ideas presented in this work on real problems requires a quantum computer with the capacity to manipulate hundreds of qubits, which is impossible with current technology.
One of the main characteristics of quantum computation is quantum parallelism, which for some problems allows quantum algorithms to achieve a speedup in relation to classical algorithms. With quantum parallelism it is possible to calculate all $2^n$ values of an $n$-ary Boolean function in a single query. However, we cannot visualise these outputs
directly. A quantum measurement is necessary, and it probabilistically returns only a single, more restricted value. The quantum algorithm design problem is then to perform quantum operations that increase the probability of the desired output.
Designing quantum algorithms is not an intuitive task. One attempt to
bring quantum computing power to a greater range of problems is the development of quantum machine-learning algorithms such as decision trees \cite{Farhi1998}, evolutionary algorithms \cite{Malossini2008} and artificial
neural networks \cite{panella:09,Altaisky,oliveira:08,ventura:04,Behrman,daSilva:12,Narayanan,Oliveira2009,Liu2013}. In this paper, we are concerned with the field of quantum weightless neural networks.
Weightless neural networks (WNN) are not the most widely used model of artificial neural networks. WNN were proposed by Aleksander~\cite{Aleksander1966} as engineering tools to perform pattern classification. Applications of WNN are described in several works~\cite{Staffa2014a, Carvalho2014, Cardoso2014,esann:2014:tutorial} and quantum versions of WNN have been proposed in~\cite{oliveira:08,Oliveira2009,daSilva:12}.
The idea of quantum neural computation was proposed in the nineties~\cite{kak:95}; since then,
several models of quantum neural networks have been proposed, for instance quantum weightless neural networks
\cite{daSilva:12}, neural networks with quantum architecture \cite{panella:09}
and a simple quantum neural network \cite{ventura:04}. In all these works
\cite{daSilva:12,panella:09,ventura:04} a quantum neural network configuration
is represented by a string of qubits and quantum learning algorithms are
proposed within a common framework. The main idea of the learning algorithms in
\cite{daSilva:12,panella:09,ventura:04} is to present input data to all possible
neural networks for a given architecture in superposition and perform a quantum
search in the resulting superposition. The objective of this paper is to
generalise this idea to allow architecture selection through the training of a quantum
weightless neural network. To achieve this objective, we use a quantum weightless
neural network that stores representations of weightless neural networks with different architectures in its memory
positions, and we define a quantum learning algorithm using the non-linear operator proposed
in \cite{PhysRevLett.81.3992} and the measurement and feedback strategy~\cite{Gammelmark:09}.
Selection of a neural network architecture is an important task in neural network applications.
Normally this task requires a lot of empirical evaluation performed by an expert.
To avoid the tedious empirical evaluation process and to help inexperienced users, some algorithms have been proposed to perform automatic selection of neural network architectures. Techniques such as meta-learning~\cite{Abraham20041} and
evolutionary computation~\cite{Almeida2010} have been used for architecture selection.
In this paper, we show how to use a quantum weightless neural network with a non-linear quantum learning algorithm to find a quantum neural network architecture and parameters with a desired performance. The proposed algorithm uses the quantum superposition principle and a non-linear quantum operator. It performs a global search in the architecture and parameter space, and its computational time is polynomial in the number of training patterns, architectures and the quantum weightless network memory size.
The rest of the paper is organised as follows. Section 2 presents basic concepts of quantum computation such as quantum bits, operators, measurement and parallelism. Section
3 presents the concepts of weightless neural networks, quantum neural networks and quantum weightless neural networks.
Section 4 describes a quantum learning algorithm for weightless neural networks
and how to apply this learning algorithm to perform architecture selection.
Finally, Section 5 is the conclusion.
\section{Quantum computing}
Deep knowledge of classical physics is not required for designing classical algorithms. In the same vein, the development of quantum algorithms does not require a deep knowledge of quantum physics, and there are several books~\cite{nielsen:00, Hirvensalo2003,mermin2007quantum} that follow this approach by introducing only the knowledge of quantum physics strictly necessary for the understanding of quantum computing. In order to create a self-contained text, a brief introduction to quantum computing is presented.
The state of a quantum computer with $n$ quantum bits (or \emph{qubits}) can be mathematically represented by a unit vector of a $2^n$-dimensional complex vector space with inner product. For instance, one single qubit can be represented in the vector space $\mathbb{C}^2$ as described in Equation~\eqref{eq:qubit},
\begin{equation}
\ket{\psi}=\alpha\ket{0}+\beta\ket{1}
\label{eq:qubit}
\end{equation}
where $\alpha, \beta \in \mathbb{C}$, $\left|\alpha\right|^2+\left|\beta\right|^2 = 1$ and $\ket{0}$ and
$\ket{1}$ are the vectors described in Equation~\eqref{eq:canbasis}\footnote{We
could have used any other orthonormal basis, but in quantum computing the canonical basis, also called the \emph{computational basis}, is the most employed.}.
\begin{equation}
\ket{0} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \mbox{ and } \ket{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\label{eq:canbasis}
\end{equation}
The general state of an $n$-qubit quantum system is represented by a unit vector in the $2^n$-dimensional vector space, as described in Equation~\eqref{eq:qubits},
\begin{equation}
\sum_{i=0}^{2^n-1}\alpha_i \ket{\psi_i}
\label{eq:qubits}
\end{equation}
where the sum of the squared moduli of the amplitudes, $\sum_i |\alpha_i|^2$, is equal to one
and the set $\{\ket{\psi_0}, \ket{\psi_1}, \cdots, \ket{\psi_{2^n-1}} \}$ is an orthonormal basis of $\mathbb{C}^{2^n}$.
A \emph{quantum operator} in a quantum system with $n$ qubits is a unitary
operator\footnote{An operator (or matrix, for the finite dimensional case once
fixed a basis) $A$ is \emph{unitary} if $AA^\dagger=A^\dagger A=I$ where
$A^\dagger$ is the complex conjugate of the transpose of $A$} in the vector
space $\mathbb{C}^{2^n}$. Let $U$ be a unitary operator over $\mathbb{C}^{2^n}$
and $\ket{\psi_{t_1}}$ the state of the quantum system. After applying the quantum
operator $U$, the system will be in the state $\ket{\psi_{t_2}}$ described in
Equation~\eqref{eq:evolution}.
\begin{equation}
\ket{\psi_{t_2}} = U\ket{\psi_{t_1}}
\label{eq:evolution}
\end{equation}
In the computational basis, the matrix representations of two single-qubit quantum operators, the \emph{not operator} $X$ and the \emph{Hadamard operator} $H$, are described in Equation~\eqref{eq:exop}.
\begin{equation}
X = \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix} \mbox{ and }
H =\frac{1}{\sqrt{2}} \begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\label{eq:exop}
\end{equation}
$X$ acts on the computational basis vectors as a classical not operator ($X\ket{0}=\ket{1}$ and $X\ket{1}=\ket{0}$), while $H$ applied to a state in the computational basis creates a ``uniform" superposition (or linear combination) of the two basis states:
\begin{equation}
\begin{split}
H\ket{0} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})\\
H\ket{1} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1})
\end{split}
\label{eq:exH1}
\end{equation}
Both represent a state which is $\ket{0}$ with probability $\frac{1}{2}$ and $\ket{1}$ with probability $\frac{1}{2}$, and can be thought of as a state which is both $\ket{0}$ and $\ket{1}$. That is why one says that a qubit is able to ``store" the two classical values simultaneously. This scales up exponentially with the number of qubits $n$: $H\ket{0}\otimes \cdots \otimes H\ket{0} = H^{\otimes n}\ket{0\cdots 0}$, with $0\cdots 0$ being a sequence of $n$ $0$'s, is the superposition of all $2^n$ possible $n$-qubit basis states. Equation~\eqref{eq:exH} shows the result for $n=2$, where $H^{\otimes 2} = H\otimes H$:
\begin{equation}
\begin{split}
H^{\otimes 2} \ket{0}\ket{0} = \frac{1}{2}\left(\ket{0} + \ket{1}\right)\otimes \left(\ket{0} + \ket{1}\right)
= \\ \frac{1}{2}(\ket{0}\ket{0} + \ket{0}\ket{1} + \ket{1}\ket{0} + \ket{1}\ket{1})
\end{split}
\label{eq:exH}
\end{equation}
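The uniform superposition in Equation~\eqref{eq:exH} can be reproduced numerically with a short state-vector simulation. The following sketch uses plain NumPy and is only meant to make the amplitudes concrete:
\begin{verbatim}
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard operator
ket0 = np.array([1.0, 0.0])                              # |0>

def uniform_superposition(n):
    # build H^{(x)n} |0...0> via Kronecker products
    state, gate = ket0, H
    for _ in range(n - 1):
        state = np.kron(state, ket0)
        gate = np.kron(gate, H)
    return gate @ state

print(uniform_superposition(2))   # [0.5 0.5 0.5 0.5]: all amplitudes equal
\end{verbatim}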
\emph{Quantum parallelism} is one of the main properties of quantum computation and it is used in the majority of quantum algorithms. Let $U_f$ be a quantum operator with action described in Equation~\eqref{eq:uf},
\begin{equation}
U_f\ket{x,c} = \ket{x, c \oplus f(x)}
\label{eq:uf}
\end{equation}
where $f:B^m \rightarrow B^n$ is a Boolean function. Applying this operator to a state in superposition $\sum_i\ket{x_i,0}$, the value of $f(x_i)$ will be calculated for all $i$ in a single quantum operation, as described in Equation~\eqref{eq:parallelism}.
\begin{equation}
U_f\left(\sum_{i}\ket{x_i,0}\right) = \sum_{i}U_f \ket{x_i,0} = \sum_{i}\ket{x_i,f(x_i)}
\label{eq:parallelism}
\end{equation}
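As an illustration (our own toy construction, taking $m=n=1$), the
permutation matrix implementing $U_f$ can be built explicitly and applied to
a superposition, reproducing Equation~\eqref{eq:parallelism}:
\begin{verbatim}
# Build U_f |x,c> = |x, c XOR f(x)> on two qubits and apply it to a
# superposition over x; f(x) is evaluated in every branch at once.
import numpy as np

def U_f(f):
    U = np.zeros((4, 4))
    for x in (0, 1):
        for c in (0, 1):
            U[2 * x + (c ^ f(x)), 2 * x + c] = 1.0
    return U

f = lambda x: 1 - x                    # an example Boolean function
state = np.zeros(4)
state[0] = state[2] = 1 / np.sqrt(2)   # (|0,0> + |1,0>)/sqrt(2)
print(U_f(f) @ state)                  # (|0,f(0)> + |1,f(1)>)/sqrt(2)
\end{verbatim}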
Despite the possibility of obtaining all possible outputs of a Boolean function in a single query, quantum parallelism cannot be exploited directly.
Results in quantum computation are obtained via \emph{measurement}, which returns only limited information about the system. For instance, if a measurement is performed on a quantum state $\ket{\psi} = \sum_i \alpha_i\ket{\psi_i}$ the result will be $\ket{\psi_i}$ with probability $|\alpha_i|^2$. After measurement the state $\ket{\psi}$ collapses to the output obtained and new measurements will result in the same output.
With the definition given above, also adopted by the mainstream quantum literature such as~\cite{nielsen:00}, quantum operators are linear operators. In this paper, we suppose the viability of the nonlinear quantum operator $Q$ proposed in~\cite{PhysRevLett.81.3992}, whose action is described in Equation~\eqref{eq:nonlin} if at least one $\ket{c_i}$ is equal to $\ket{1}$, and in Equation~\eqref{eq:nonlin2} otherwise.
\begin{equation}
Q \left(\sum_i \ket{\psi_i}\ket{c_i} \right)= \left(\sum_i \ket{\psi_i}\right)\ket{1}
\label{eq:nonlin}
\end{equation}
\begin{equation}
Q \left(\sum_i \ket{\psi_i}\ket{c_i}\right) = \left(\sum_i \ket{\psi_i}\right)\ket{0}
\label{eq:nonlin2}
\end{equation}
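The action of $Q$ on the control qubit can be emulated classically on a list
of labelled branches; the sketch below is our own abstraction, which tracks
only the branch labels and control bits:
\begin{verbatim}
# Toy emulation of the nonlinear operator Q: if any control bit in the
# superposition is 1, all control bits are set to 1, otherwise to 0.
def Q(branches):                       # branches: list of (label, c_bit)
    flag = 1 if any(c == 1 for _, c in branches) else 0
    return [(label, flag) for label, _ in branches]

print(Q([("psi0", 0), ("psi1", 1)]))   # all flags become 1
print(Q([("psi0", 0), ("psi1", 0)]))   # all flags stay 0
\end{verbatim}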
The speedup obtained by the application of non-linear operators has been associated with unphysical effects; however, \cite{czachor1998remarks,czachor1998notes} present a version of this non-linear quantum operator free of unphysical influences.
\section{Classical and quantum weightless neural networks}
This work deals with quantum weightless neural networks. Weightless Neural Networks (WNN) are neural networks with no weights associated with their connections; the information is stored in a look-up table. The first WNN model, named RAM, was proposed in \cite{Aleksander1966}; since then several models have been proposed, for instance the Probabilistic Logic Neuron (PLN), the Multi-valued Probabilistic Logic Neuron (MPLN), the Goal Seeking Neuron (GSN) and the quantum RAM neuron (qRAM).
A weightless neuron with $n$ input values has a memory with $2^n$ addressable positions. The learning procedure of a WNN does not require differential calculus or any complex mathematical calculation: learning is performed simply by writing into the look-up table. This strategy is faster than techniques based on gradient descent methods and is suitable for implementation in conventional digital hardware.
Several models of weightless neural networks are described in~\cite{Ludermir1999}. In this paper we deal with the qRAM neural network. The qRAM neuron is based on the simplest weightless model, the RAM neuron; despite its simplicity, RAM neurons can be trained very rapidly. Some applications of RAM and RAM-based neurons to real-world problems are described e.g. in~\cite{Staffa2014,Cardoso2014,Carvalho2014,DeSouza2009}; for a recent review see \cite{esann:2014:tutorial}. In~\cite{Staffa2014, Carvalho2014} a WiSARD system is used to track moving objects or human beings, in~\cite{Cardoso2014} a WiSARD clustering version is proposed to perform credit analysis and in~\cite{DeSouza2009} a VG-RAM weightless neural network is used to perform multi-label text categorisation.
\subsection{RAM Node}
A RAM neuron with $n$ inputs has a memory $C$ with $2^n$ addressable positions. Each memory position stores a Boolean value and its address is a Boolean string in $\{0,1\}^n$, also called a Boolean vector. When a RAM neuron receives a Boolean vector $x=x_1\cdots x_n$ as input it produces the output $C[x]$. Learning in the RAM node is very simple and can be accomplished by updating the bits in the memory positions for each one of the patterns in the training set.
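A minimal classical implementation of this neuron (our own sketch) makes the
point that training is nothing more than table writing:
\begin{verbatim}
# A RAM neuron with n inputs: a lookup table with 2^n one-bit positions.
class RAMNeuron:
    def __init__(self, n):
        self.C = [0] * (2 ** n)     # memory, addressed by the input bits

    def _addr(self, x):             # x: tuple of n bits, most significant first
        addr = 0
        for bit in x:
            addr = (addr << 1) | bit
        return addr

    def train(self, x, y):          # learning = writing into the table
        self.C[self._addr(x)] = y

    def predict(self, x):
        return self.C[self._addr(x)]

neuron = RAMNeuron(2)
for x, y in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    neuron.train(x, y)              # one pass stores 2-bit XOR exactly
print([neuron.predict(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
\end{verbatim}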
Architectures of weightless neural networks are weakly connected as a consequence of the limited number of neuron inputs. Two common architectures are the pyramidal one, where the output of a neuron in one layer is connected to a single neuron in the next layer, and the single-layer one, where the neural network output is the sum of the neuron outputs.
\subsection{Quantum neural networks}
The notion of quantum neural networks has been proposed on several occasions~\cite{panella:09,ventura:04,Behrman}. In~\cite{panella:09, ventura:04} quantum neural models are purely abstract mathematical devices, while in~\cite{Behrman} quantum neural networks are described as a physical device. In this paper we follow the first approach, where a neural network is a mathematical model. It is also possible to classify quantum neural network models as either quantum neural models~\cite{Andrecut2002,panellaneurofuzzy, panella:09,ventura:04,Behrman,daSilva:12} or quantum-inspired models~\cite{Li2013,kouda:05}. Quantum-inspired models are classical models of computation that use ideas from quantum computing. Implementation of the quantum weightless neural network mathematically described in this paper requires a real quantum computer. Recent reviews on quantum neural networks can be found in~\cite{Schuldquest,Altaiskycurrent}.
Models of quantum weightless neural networks are proposed or analysed in~\cite{oliveira:08,Oliveira2009,daSilva:12, DaSilva2012}. Quantum weightless neural network models were first proposed in~\cite{oliveira:08}; in~\cite{daSilva:10a} a quantum version of the RAM neuron based on an associative quantum memory is presented, and in~\cite{daSilva:12} the qRAM neuron and a learning algorithm for quantum weightless neural networks are presented.
Learning algorithms for quantum neural networks are also proposed in~\cite{panella:09, ventura:04}, where a superposition of neural networks with a fixed architecture is created and a quantum search is performed to recover the best neural network. In this paper we propose a variation of this methodology to train quantum weightless neural networks. In our training strategy, weightless neural networks with different architectures are placed in a superposition. The neural network model used in this learning methodology is the qRAM neural network.
\subsection{qRAM Node}
The qRAM neuron is a quantum version of the RAM neuron. As in the classical case, an $n$-input qRAM neuron has a quantum memory with $2^n$ memory positions. The contents of the qRAM memory cannot be read out directly, because a measurement of the output could destroy the information stored in the memory. Instead, we store quantum bits in the computational basis, named selectors, and apply a quantum operator $A$ to obtain the stored qubit.
The $A$ operator used in the qRAM is the controlled-$X$ operator described in Equation~\eqref{eq:CNOT}. With this operator a quantum RAM neuron can be described as in Definition~\ref{def:qram}, where memory contents are stored in the quantum register selectors.
\begin{equation}
\begin{array}{lr}
A = \begin{pmatrix}
I & 0 \\
0 & X
\end{pmatrix}
&
\begin{array}{l}
\mbox{where}\\
A\ket{00} = \ket{0}I\ket{0}\\
A\ket{10} = \ket{1}X\ket{0}\\
\end{array}
\end{array}
\label{eq:CNOT}
\end{equation}
\begin{definition}
A qRAM node with $n$ inputs is represented by the operator $\textsf{N}$
described in Equation~\eqref{eq:N}. The inputs, selectors and outputs of
$\textsf{N}$ are organised in three quantum registers $\ket{i}$ with $n$ qubits,
$\ket{s}$ with $2^n$ qubits and $\ket{o}$ with 1 qubit. The quantum state
$\ket{i}$ describe qRAM input, and quantum state $\ket{s}\ket{o}$ describes qRAM
state.
\label{def:qram}
\end{definition}
\begin{equation}
\textsf{N} = \sum_{i=0}^{2^n-1} \ket{i}_n \bra{i}_n A_{s_i,o}
\label{eq:N}
\end{equation}
The qRAM neural network functions exactly as a RAM neural network when the selectors are in the computational basis. For instance, when the selectors quantum register of a qRAM neuron is in the state $\ket{c_{00}c_{01}c_{10}c_{11}}$ and the input $\ket{xy}$ is presented, its output is $\ket{c_{xy}}$. The difference between the qRAM and RAM neurons can be observed when the selectors are initialised with a state in superposition. Suppose the selectors quantum register is initialised with the superposition $\frac{1}{\sqrt{2}}\left( \ket{c_{00}c_{01}c_{10}c_{11}} + \ket{c'_{00}c'_{01}c'_{10}c'_{11}}\right)$. When the neuron receives an input $\ket{xy}$, the output for each configuration in the superposition will be calculated and the output quantum register will be in the state $\frac{1}{\sqrt{2}}\left(\ket{ c_{xy} } + \ket{ c'_{xy} }\right)$, a sort of parallel execution of the network.
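The following toy simulation (ours) tracks this behaviour branch by branch,
representing a selector state in superposition as a list of
amplitude--string pairs:
\begin{verbatim}
# Reading a 2-input qRAM neuron whose selectors are in superposition:
# each branch (amplitude, "c00 c01 c10 c11") contributes its stored bit.
import math

def qram_output(branches, x, y):
    pos = 2 * x + y                    # memory position addressed by |xy>
    return [(amp, s[pos]) for amp, s in branches]

a = 1 / math.sqrt(2)
branches = [(a, "0110"), (a, "1001")]  # (|0110> + |1001>)/sqrt(2)
print(qram_output(branches, 0, 1))     # output (|1> + |0>)/sqrt(2)
\end{verbatim}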
Classical and quantum weightless neurons require a memory (in the classical case) or a number of selectors (in the quantum case) exponential in the number of inputs. To avoid exponential memory requirements, classical and quantum weightless neural networks use a feed-forward, sparsely connected, pyramidal architecture. A pyramidal, feed-forward neural network with three two-input qRAM nodes is shown in Figure~\ref{fig:qRAMNet1}. A pyramidal qRAM network with $n$ inputs composed of two-input neurons will have $2^{\log_2(n)}-1 = n-1$ neurons. Each two-input neuron has a memory with 4 selectors, so the network will need $4 \cdot (n-1)$ selectors (a linear instead of exponential memory size).
The configuration of a qRAM neural network is realised by the neuron selectors. For instance, the configuration of the qRAM neural network in Figure~\ref{fig:qRAMNet1} is the state of the quantum registers $\ket{s_1,s_2,s_3}$; with the configuration $\ket{s_1}=\ket{0110}$, $\ket{s_2}=\ket{0110}$ and $\ket{s_3}=\ket{0110}$ this network can solve the 4-bit parity problem, as verified below. A superposition of qRAM neural networks with a given architecture can be obtained by initialising the qRAM configuration with a state in superposition. In Section 5, we explore the superposition of qRAM networks in the learning procedure to allow neural network architecture selection.
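The parity claim can be checked classically in a few lines (our own
verification of the configuration $\ket{s_1}=\ket{s_2}=\ket{s_3}=\ket{0110}$):
\begin{verbatim}
# The pyramidal network of the following figure with all selectors 0110
# computes 4-bit parity: each neuron stores XOR, and XOR of XORs is parity.
from itertools import product

def ram(s, a, b):                      # 2-input RAM neuron, selectors s0..s3
    return int(s[2 * a + b])

s1 = s2 = s3 = "0110"
print(all(ram(s3, ram(s1, i1, i2), ram(s2, i3, i4)) == (i1 ^ i2 ^ i3 ^ i4)
          for i1, i2, i3, i4 in product((0, 1), repeat=4)))   # True
\end{verbatim}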
\begin{center}
\begin{figure}
\center
\setlength{\unitlength}{0.7mm}
\begin{picture}(85,50)(0,0)
\put(20,24){\framebox(15,15)}
\put(26,31){$\textsf{N}_1$}
\put(0,35){$i_1$}
\put(0,30){$i_2$}
\put(5,25){$s_1$}
\put(5,35.5){\vector(1,0){10}}
\put(5,30.5){\vector(1,0){10}}
\put(10,25.5){\vector(1,0){5}}
%
\put(20,5){\framebox(15,15)}
\put(26,11){$\textsf{N}_2$}
\put(0,15){$i_3$}
\put(0,10){$i_4$}
\put(5,5){$s_2$}
\put(5,15.5){\vector(1,0){10}}
\put(5,10.5){\vector(1,0){10}}
\put(10,5.5){\vector(1,0){5}}
%
\put(35,32.5){\line(1,0){10}}
\put(45,32.5){\line(0,-1){5}}
\put(45,27.5){\vector(1,0){18}}
%
\put(35,12.5){\line(1,0){10}}
\put(45,12.5){\line(0,1){10}}
\put(45,22.5){\vector(1,0){18}}
%
\put(65,15){\framebox(15,15)}
\put(71,21){$\textsf{N}_3$}
\put(53,15){$s_3$}
\put(58,15.5){\vector(1,0){5}}
\put(80,22.5){\vector(1,0){10}}
%
\end{picture}
\caption{ qRAM Neural Network of 2 layers}
\label{fig:qRAMNet1}
\end{figure}
\end{center}
\section{Non linear quantum learning}
Nonlinear quantum operators have been used previously~\cite{panella:09, zhou:12}. In this section we show how to train a weightless neural network with a nonlinear quantum algorithm. The proposed algorithm is based on a strategy proposed in~\cite{Gammelmark:09}, where the learning procedure is performed by measurement and feedback. Figure~\ref{fig:mf} illustrates how the measurement and feedback strategy works. The input is presented to a controlled quantum operator named the quantum processor, and the result of a measurement performed on the output registers is used to update qubits in the control quantum register. The procedure is repeated until the control qubits $\ket{s}$ are set to some desired value.
\begin{figure}[ht]%
\begin{center}
\includegraphics[width=0.7\columnwidth]{mf}
\end{center}
\caption{Measurement and feedback methodology}%
\label{fig:mf}%
\end{figure}
The quantum processor in our learning strategy will be a qRAM weightless neural network with a fixed architecture.
This quantum weightless neural network can have any number of layers and neurons and must have a feed-forward architecture.
Patterns selected from a training set will be presented to several neural networks in parallel.
This step cannot be efficiently performed in a classical computer, but it can be performed in a quantum computer using quantum parallelism.
The operation performed by the quantum processor is described in Figure~\ref{fig:qp}, where each pattern $x$ is presented to all qRAM network configurations represented in the quantum register $\ket{s}$ and the performance quantum register is updated to indicate whether the neural network output is equal to the desired output $d(x)$. After the presentation of all patterns in the training set, all pairs of neural network configuration and respective performance will be in superposition.
\begin{figure}%
\resizebox{\columnwidth}{!}{
\input{framework}
}
\caption{Action of quantum processor in Figure~\ref{fig:mf} when the selector quantum register of a qRAM weightless neural network with fixed architecture is a superposition of quantum states}%
\label{fig:qp}%
\end{figure}
The control qubits of the quantum processor in the measurement and feedback strategy are the selectors of the qRAM neural network.
In the $k$th iteration of the measurement and feedback methodology a non-linear quantum operator and a measurement are performed to determine the $k$th quantum bit of the selectors quantum register $\ket{s}$.
After all iterations, the quantum register $\ket{s}$ will hold a qRAM configuration with performance greater than or equal to a given threshold $\theta$ for the given training set (if one exists).
\begin{algorithm}[ht]
\caption{Learning algorithm}
\label{alg:la}
\For{$k=1$ to $n_s$ \label{line:for1}}{
Set input quantum register $\ket{i}$ to $\ket{0}$ \label{line:initinput}\\
Set the $n_s-k+1$ last qubits in quantum register $\ket{s}$ to $H\ket{0}$ \label{line:setselect}\\
Set output quantum register to $\ket{0}$ \label{line:initoutput}\\
Set performance quantum register to $\ket{0}$ \label{line:initper}\\
Set objective quantum register to $\ket{0}$ \label{line:initobj}\\
\For{each pattern $x \in$ training set \label{line:for2}}{
Set quantum register $\ket{i}$ to $\ket{x}$ and quantum
register $\ket{d}$ to $\ket{d(x)}$\label{line:loadpattern}\\
Allow the qRAM network to produce its output in quantum register $\ket{o}$ \label{line:run}\\
\If{$\ket{o} = \ket{d(x)}$ \label{line:if}}{add 1 into quantum register $\ket{\mathit{perf}}$ \label{line:endif}}
Remove $\ket{x}$ and $\ket{d(x)}$ of quantum registers $\ket{i}$ and $\ket{d}$ \label{line:remove}\\
}
\For{$l = 0$ to 1 \label{line:for3}}{
Set quantum register objective to $\ket{1}$ if the $k$th quantum bit in the neuron representation
is equal to $l$ and the performance is greater than or equal to a given threshold $\theta$. \label{line:obj}\\
Apply the non-linear quantum operator NQ to quantum register objective. \label{line:nonlin}\\
\If{$\ket{objective} = \ket{1}$ \label{line:if2}}{
Perform a measurement on all quantum registers\\
Set $k$th bit of quantum register selectors to $l$ \label{line:endif2}
}
}
\label{line:endfor3}
}
\end{algorithm}
Algorithm~\ref{alg:la} presents the proposed learning strategy. It requires six quantum registers:
the input quantum register $\ket{i}$, used to present patterns from the training set to the qRAM network; the free parameters or selectors quantum register $\ket{s}$, used to store the qRAM neural network configuration; the output quantum register $\ket{o}$, used to store the qRAM neural network output; the desired output quantum register $\ket{d}$; the performance quantum register $\ket{\mathit{perf}}$, used to store the performance of each classifier in the superposition; and the objective quantum register $\ket{obj}$, used to mark configurations with the desired performance. The configuration of the weightless neuron during the execution of Algorithm~\ref{alg:la} will be represented using the quantum state $\ket{\psi}$ described in Equation~\eqref{eq:qr}.
\begin{equation}
\ket{\psi} = \ket{i}\ket{s}\ket{o}\ket{d}\ket{\mathit{perf}}\ket{obj}
\label{eq:qr}
\end{equation}
The for loop starting in line~\ref{line:for1} will be repeated $n_s$ times, where $n_s$ is the number of quantum bits in quantum register $\ket{s}$. At the end of the $k$th iteration a non-linear quantum operator is applied to determine the $k$th bit $l_k$ of the quantum register $\ket{s}$.
Steps~\ref{line:initinput},~\ref{line:initoutput},~\ref{line:initper} and~\ref{line:initobj} initialise the input, output, performance and objective quantum registers. Step~\ref{line:setselect} of Algorithm~\ref{alg:la} initialises the selectors quantum register. After this step, the state of quantum register $\ket{s}$ is described in Equation~\eqref{eq:init}, where the values of the first $k-1$ qubits $l_i$ were determined in the $i$th iteration of the for loop and the last $n_s-k+1$ qubits are initialised in the $H\ket{0}$ state.
\begin{equation}
\ket{s} = \left(\frac{1}{\sqrt{2}}\right)^{n_s-k+1}\ket{l_1 \cdots l_{k-1}} \left(\ket{0}+\ket{1}\right)^{\otimes(n_s-k+1)}
\label{eq:init}
\end{equation}
The for loop starting in line~\ref{line:for2} performs the quantum processor operation. It calculates the performance of all configurations in the superposition for the given architecture simultaneously, due to the principle of quantum parallelism. Step~\ref{line:loadpattern} initialises the input quantum register with a pattern $x$ from the data set, and the desired output quantum register with the desired output of $x$, named $d(x)$. These initialisation steps can be performed by unitary operators controlled by a classical system using the classical representation of $x$ and $d(x)$. Step~\ref{line:run} runs the qRAM neural network and its output quantum register is set to the calculated output $y(x,s)$ for pattern $x$ with neural network configuration $s$. Steps~\ref{line:if} to~\ref{line:endif} add 1 to the performance quantum register if $y(x,s)$ is equal to $d(x)$. After these steps, the state $\ket{\psi}$ is described in Equation~\eqref{eq:step4}, where the state $\ket{s}$ is described in Equation~\eqref{eq:init} and $\ket{\mathit{perf}(x,s)}$ is the performance of the neural network with selectors $s$ after reading the input $x$.
\begin{equation}
\ket{\psi} = \ket{x}\ket{s}\ket{y(x,s)}\ket{d(x)}\ket{\mathit{perf}(x,s)}\ket{0}
\label{eq:step4}
\end{equation}
Step~\ref{line:remove} removes $\ket{x}$ and $\ket{d(x)}$ from quantum registers $\ket{i}$ and $\ket{d}$, performing the inverse operation of Step~\ref{line:loadpattern}. After the execution of the for loop starting in line~\ref{line:for2}, the performance of each classifier $\ket{\mathit{perf}(s)}$ will be in superposition with its representation $\ket{s}$.
The for loop starting in line~\ref{line:for3} performs the measurement and feedback.
An exhaustive non-linear quantum search is performed to determine the value of the $k$th bit in quantum state $\ket{s}$.
Step~\ref{line:obj} sets the quantum register $\ket{obj}$ to $\ket{1}$ if $\mathit{perf}(s) \geq \theta$ and the $k$th bit of $s$ is equal to $l$. This step can be performed by a controlled unitary operator $U_g$ that flips the objective quantum register if and only if $\mathit{perf}(s) \geq \theta$ and the $k$th bit of $s$ is equal to $l$.
After this step the state of quantum registers $\ket{s}$, $\ket{\mathit{perf}}$ and $\ket{obj}$ is described in Equation~\eqref{eq:f},
where $\delta_{s,l,\mathit{perf}(s)}$ is equal to 1 if $\mathit{perf}(s) \geq \theta$ and the $k$th quantum bit in $\ket{s}$ is equal to $l$.
\begin{equation}
\ket{s,\mathit{perf},obj}=\ket{s,\mathit{perf}(s),\delta_{s,l,\mathit{perf}(s)}}
\label{eq:f}
\end{equation}
All previous steps can be performed using only linear quantum operators.
Step~\ref{line:nonlin} applies the non-linear quantum operator proposed in~\cite{PhysRevLett.81.3992} to the objective quantum register.
The objective quantum register will be changed to the basis state $\ket{1}$ if there is at least one configuration in the superposition with objective equal to one.
In this case, Steps~\ref{line:if2} to~\ref{line:endif2} perform a measurement on the state $\ket{\psi}$ and change the $k$th quantum bit in quantum register $\ket{s}$ to $l$.
The computational cost of Algorithm 1 depends on the number of patterns in the training set $n_t$ and on the number of qubits used in the selectors quantum register $n_s$. The for loop starting in line 1 will be repeated $n_s$ times. Steps~\ref{line:initinput} to~\ref{line:initobj} take constant computational time. The for loop in lines~\ref{line:for2} to 13 will be repeated $n_t$ times and each inner line has constant computational cost. The for loop in lines~\ref{line:for3} to \ref{line:endfor3} does not depend on $n_t$ or $n_s$ and has a constant computational cost. In this way the overall cost of Algorithm 1 is $O(n_t \cdot n_s )$; that is, Algorithm 1 runs in time polynomial in the number of qubits used to represent the qRAM neural network selectors and the number of patterns in the training set.
A concrete example of the execution of Algorithm 1 is presented to illustrate its functionality.
Without loss of generality we use a qRAM neural network composed of only one neuron with two inputs to learn the 2-bit XOR toy problem described in Equation~\eqref{eq:xor}. For this problem, the input quantum register needs two qubits, the selectors quantum register has 4 qubits, the output quantum register needs 1 qubit, the performance quantum register has 3 qubits and the objective quantum register has 1 qubit.
\begin{equation}
T=\left\{\left(\ket{00},\ket{0}\right),\left(\ket{01},\ket{1}\right),\left(\ket{10},\ket{1}\right),\left(\ket{11},\ket{0}\right)\right\}
\label{eq:xor}
\end{equation}
In Steps 2, 4, 5 and 6 the qubits in the input, output, performance and objective quantum registers are initialised with the quantum state $\ket{0}$. The number of quantum bits in the $\ket{s}$ quantum register is equal to 4 and in the first iteration $n_s-k+1$ is also equal to 4, so all four qubits in quantum register $\ket{s}$ are initialised in the state $H\ket{0}$. After these initialisation steps, the neural network configuration $\ket{\psi}$ is described in Equation~\eqref{eq:6}.
\begin{equation}
\begin{split}
\ket{\psi} = \frac{1}{4}\ket{00}\left(\ket{0}+\ket{1}\right)^{\otimes 4}\ket{0}\ket{0}\ket{000}\ket{0} = \\
\frac{1}{4}\sum_{j\in \left\{0,1\right\}^4}\ket{00}\ket{j}\ket{0}\ket{0}\ket{000}\ket{0}
\end{split}
\label{eq:6}
\end{equation}
Suppose that in the first iteration of the for loop starting in line~\ref{line:for2} $x$ assumes the value $\ket{01}$ and $d(x)$ is $\ket{1}$. Step~\ref{line:loadpattern} initialises the pattern and desired output quantum registers to $\ket{01}$ and $\ket{1}$, respectively. This initialisation can be performed through CNOT operators applied to $\ket{\psi}$, resulting in the state $\ket{\psi_1}$ described in Equation~\eqref{eq:7}.
\begin{equation}
\frac{1}{4}\sum_{j\in \left\{0,1\right\}^4}\ket{01}\ket{j}\ket{0}\ket{1}\ket{000}\ket{0}
\label{eq:7}
\end{equation}
Step~\ref{line:run} runs the neural network and its output is calculated in quantum register $\ket{o}$. After this step we obtain the state $\ket{\psi_2}$ described in Equation~\eqref{eq:8}, where $j_1$ is the qubit in memory position 01 and $\delta_{j_1,1}=1$ if and only if $j_1=1$.
\begin{equation}
\frac{1}{4}\sum_{j\in \left\{0,1\right\}^4}\ket{01}\ket{j}\ket{\delta_{j_1,1}}\ket{1}\ket{000}\ket{0}
\label{eq:8}
\end{equation}
Steps~\ref{line:if} to~\ref{line:endif} check whether the desired output is equal to the calculated output, adding one to the performance quantum register if they are equal. The resulting state $\ket{\psi_3}$ after Step~\ref{line:endif} is described in Equation~\eqref{eq:9}. These steps can be performed using a unitary operator describing the qRAM neural network and a quantum operator that adds one to the performance quantum register with controls $\ket{o}$ and $\ket{d}$.
\begin{equation}
\begin{split}
\ket{\psi_3} = \frac{1}{4}\left(\sum_{j\in \left\{0,1\right\}^4,\ j_1=0}\ket{01}\ket{j}\ket{0}\ket{1}\ket{000}\ket{0}\right. \\
\left. + \sum_{j\in \left\{0,1\right\}^4,\ j_1=1}\ket{01}\ket{j}\ket{1}\ket{1}\ket{001}\ket{0}\right)
\end{split}
\label{eq:9}
\end{equation}
Step~\ref{line:remove} removes the values of $\ket{x}$ and $\ket{d(x)}$ from quantum registers $\ket{i}$ and $\ket{d}$, allowing the initialisation of the next iteration of the for loop. After the last execution of the for loop only one configuration in the superposition, with $\ket{s}=\ket{0110}$, has performance 100\%, and the selectors and performance quantum registers are described by the quantum state in Equation \eqref{eq:10}, where $\mathit{perf}(j)<4$ for all $j \neq 0110$.
\begin{equation}
\ket{s,\mathit{perf}} = \frac{1}{4}\left(\ket{0110}\ket{4}_3 + \sum_{j\in \{0,1\}^4, j\neq 0110}\ket{j}\ket{\mathit{perf}(j)}\right)
\label{eq:10}
\end{equation}
Setting $\theta$ to 100\%, in the first iteration of the for loop ($l=0$) in line \ref{line:for3}, Step~\ref{line:obj} changes the objective register to $\ket{1}$ when the $k$th qubit of $\ket{s}$ is $\ket{0}$ and the performance is at least $\theta$. After Step 15 the selectors, performance and objective quantum registers are described in Equation~\eqref{eq:11}.
\begin{equation}
\ket{s,\mathit{perf},obj} = \frac{1}{4}\left(\ket{0110}\ket{4}_3\ket{1} + \sum_{j\in \{0,1\}^4, j\neq 0110}\ket{j}\ket{\mathit{perf}(j)}\ket{0}\right)
\label{eq:11}
\end{equation}
Step~\ref{line:nonlin} applies the nonlinear quantum operator to the objective quantum register, and the state of the selectors, performance and objective quantum registers is described in Equation~\eqref{eq:12}. The nonlinear quantum operator sets the objective quantum register to $\ket{1}$.
\begin{equation}
\ket{s,\mathit{perf},obj} = \frac{1}{4}\left(\ket{0110}\ket{4}_3 + \sum_{j\in \{0,1\}^4, j\neq 0110}\ket{j}\ket{\mathit{perf}(j)}\right)\ket{1}
\label{eq:12}
\end{equation}
Since the objective quantum register is in a basis state we can check whether $\ket{obj} = \ket{1}$ with no information loss. In Steps~\ref{line:if2} to~\ref{line:endif2} a measurement is performed on quantum register $\ket{s}$ and the first qubit of $\ket{s}$ is set to $\ket{l_1} = \ket{0}$. This qubit will not be changed in the next iterations.
At the end of the main for loop the selectors quantum register $\ket{s}$ will be in the state $\ket{0110}$ and the desired configuration has been found. The next section shows how to perform a search in the architecture space of a quantum weightless neural network.
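The search just described can be emulated classically for this toy problem;
the sketch below (ours) enumerates the 16 selector strings and mimics the
nonlinear marking step bit by bit, recovering $\ket{0110}$:
\begin{verbatim}
# Classical emulation of the bit-by-bit search of Algorithm 1 on 2-bit XOR.
from itertools import product

T = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
theta = 4                              # required performance (100%)

def perf(s):                           # s = (c00, c01, c10, c11)
    return sum(s[2 * x + y] == d for (x, y), d in T)

prefix = []
for k in range(4):                     # fix the k-th selector bit
    for l in (0, 1):                   # nonlinear step: is there a good
        good = any(perf(tuple(prefix) + (l,) + rest) >= theta
                   for rest in product((0, 1), repeat=3 - k))
        if good:                       # configuration with k-th bit = l?
            prefix.append(l)
            break
print(prefix)                          # [0, 1, 1, 0], i.e. |s> = |0110>
\end{verbatim}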
\section{Architecture learning}
The operator $A$ in a qRAM neural network is known as the controlled-not
operator. In other models of quantum weightless neural networks this operator
can assume different forms. For instance, in \cite{oliveira:08} the $A$
operators of the qPLN are represented in the computational basis by the
quantum operator $A_{qPLN}$ described in Equation~\eqref{eq:aqpln}, where
$\textsf{U}$ is an arbitrary quantum operator
\begin{equation}
\begin{split}
A_{qPLN} =\ket{00}\bra{00}\otimes \textsf{I} + \ket{01}\bra{01}
\otimes \textsf{X} + \\ \ket{10}\bra{10}\otimes\textsf{H} +
\ket{11}\bra{11}\otimes\textsf{U}
\end{split}
\label{eq:aqpln}
\end{equation}
and
the $A$ operators of a qMPLN with $n$ qubits in each memory position are
represented by the matrix described in Equation~\eqref{eq:aqmpln}, where
$\qop{U}_{p_k}$ is a rotation operator with angle $p_k$.
\begin{equation}
A_{qMPLN} = \sum_{k=0}^{n-1}\ket{k}\bra{k}\otimes \qop{U}_{p_k}
\label{eq:aqmpln}
\end{equation}
These $A$ operators are used to generate the values stored in a specific
memory position. For instance in the qPLN, instead of storing the qubit
$\frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right)$, we store the qubits $\ket{10}$ in the computational basis and use
the operator $\qop{A}_{qPLN}$ to generate the content $\frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right)$.
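As a concrete illustration (ours; we take the arbitrary operator $\qop{U}$ to
be the identity), $A_{qPLN}$ is a block-diagonal matrix, and applying it to
the selectors $\ket{10}$ with content $\ket{0}$ generates
$\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right)$ in the content qubit:
\begin{verbatim}
# Block-diagonal A_qPLN = diag(I, X, H, U) acting on |selectors, content>.
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = I                                  # arbitrary unitary; identity here

A_qPLN = np.block([
    [I, np.zeros((2, 6))],
    [np.zeros((2, 2)), X, np.zeros((2, 4))],
    [np.zeros((2, 4)), H, np.zeros((2, 2))],
    [np.zeros((2, 6)), U],
])

e100 = np.zeros(8)
e100[4] = 1.0                          # basis vector |100>
print(A_qPLN @ e100)                   # 1/sqrt(2) on |100> and |101>
\end{verbatim}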
\begin{figure}%
\includegraphics[width=\columnwidth]{architectureLearning}%
\caption{Quantum neuron representing a weightless neural networks with four different architectures}%
\label{fig:qn}%
\end{figure}
The main idea in this section is to allow a weightless neuron to store
the output of a weightless neural network with a given input $x$ and selectors $s$. In this case, the quantum
version of this weightless neuron will need a matrix $A$ representing
the weightless neural network in order to generate the network's output. The
$A$ operators are therefore replaced by operators representing weightless neural networks, and the selectors
are replaced by the neural network inputs and selectors. Figure~\ref{fig:qn} illustrates this weightless neuron with two inputs, where $\ket{a_1 a_2}$ are architecture selectors; the input pattern $x$ and the selectors are combined in one single quantum register and act as the free parameters of the neuron, and the output quantum register is shared by all weightless networks $N_0,N_1,N_2,N_3$.
\begin{figure}%
\resizebox{\columnwidth}{!}{
\input{framework2}
}
\caption{Action of quantum processor in Figure~\ref{fig:mf} when the selector and architecture selector quantum registers of a weightless neuron with some distinct architectures are in a superposition of quantum states}%
\label{fig:fram2}%
\end{figure}
With this quantum neuron the action of the quantum processor in Figure~\ref{fig:mf} can be described by Figure~\ref{fig:fram2}. Initialisation of the architecture selector quantum register with a quantum state in superposition will put different architectures, represented by the dotted boxes, into superposition, and the initialisation of the selectors quantum registers puts different configurations of each architecture into superposition. The problem of architecture selection is reduced to the problem of training the weightless neuron in Figure~\ref{fig:qn}, where the input is represented by quantum register $\ket{x}$ and the selectors are represented by quantum registers $\ket{a,s}$. In this way, Algorithm 1 can be used to learn parameters and architecture simultaneously.
The computational time of architecture selection is directly related to the computational time of Algorithm~\ref{alg:la}. Due to the linearity of quantum operators, neurons can share selectors, and under the assumption that all architectures are pyramidal and sparsely connected the network memory size (i.e., the necessary number of selectors) will be polynomial in the number of neural network inputs.
The cost of architecture selection will be $O\left(n_a+n_s+n_t\right)$, where $n_a$ is the number of architectures, $n_s$ is the number of selectors in the most complex architecture (the one with the most selectors) and $n_t$ is the number of training patterns.
\subsection{Architecture selection with SAL algorithm}
Quantum computers are not yet a reality and we cannot evaluate the SAL algorithm on real problems. In this section we present a concrete example (with low dimensionality) of the SAL algorithm in architecture selection. We
use the artificial dataset described in Table~\ref{tab:artificialDataSet},
obtained in the following way: two weightless neural network architectures were defined and an exhaustive search was performed to find a dataset that one architecture can learn and the other architecture cannot learn using selectors in the computational basis.
\begin{table}%
\begin{center}
\begin{tabular}{|cccc|c|}\hline
\multicolumn{4}{|c|}{Patterns} & Class \\ \hline
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 \\ \hline
\end{tabular}
\end{center}
\caption{Simple artificial data set}
\label{tab:artificialDataSet}
\end{table}
The architectures used in the experiment are two-layer, pyramidal qRAM weightless neural networks. The first architecture $\qop{N}_0$ has two qRAM neurons, each with two inputs, in the first layer and one qRAM neuron with two inputs in the second layer; Figure~\ref{fig:qRAMNet1} displays architecture $\qop{N}_0$. The second architecture $\qop{N}_1$ has two qRAM neurons in the first layer, where the first neuron has three inputs and the second neuron has one input, and one qRAM neuron with two inputs in the second layer.
The first architecture needs 12 quantum bits to represent the selector quantum register, 4 quantum bits to represent the input of the first layer, 2 quantum bits to represent the second-layer input, and 1 quantum bit to represent the neural network output; in this way, the first architecture representation needs 19 quantum bits. The second architecture needs 14 quantum bits to represent the selector quantum register and the same number of quantum bits used by the first architecture to represent the neuron inputs and the network output, so the second architecture representation requires 21 quantum bits.
These two qRAM neural networks are represented in a single circuit with six quantum registers: the neuron inputs quantum register $\ket{i}$ with 6 quantum bits, the selectors quantum register $\ket{s}$ with 14 quantum bits, the output quantum register $\ket{o}$ with 1 qubit, the architecture selector quantum register $\ket{a}$ with 1 qubit, the performance quantum register $\ket{\mathit{perf}}$ with 5 quantum bits, and the desired output quantum register $\ket{d}$ with 1 quantum bit.
The qRAM neural network with architecture $\qop{N}_1$ uses all qubits in the selectors, inputs and output quantum registers. The qRAM neural network with architecture $\qop{N}_0$ uses all qubits in the inputs and output quantum registers but only 12 qubits of the selectors quantum register. The architecture quantum register $\ket{a}$ is used to select the architecture: if $\ket{a}$ is $\ket{0}$, architecture $\qop{N}_0$ is used; if $\ket{a}$ is $\ket{1}$, architecture $\qop{N}_1$ is used.
After the initialisation steps of Algorithm~\ref{alg:la}, the state of quantum registers $\ket{a}\ket{s}\ket{\mathit{perf}}$ is described in Equation~\eqref{eq:aftinit}, where $\ket{a}$ and $\ket{s}$ are in a superposition of all possible values and the quantum bits in the performance quantum register are initialised to $\ket{0}$.
\begin{equation}
\ket{a}\ket{s}\ket{\mathit{perf}} = (\ket{0}+\ket{1})\sum_{k\in\{0,1\}^{14}}\ket{k}\ket{00000}
\label{eq:aftinit}
\end{equation}
After the dataset presentation to the neural network performed in Steps 7 to 14 of Algorithm~\ref{alg:la}, the state of quantum registers $\ket{a}\ket{s}\ket{\mathit{perf}}$ is described in Equation~\eqref{eq:aftpresentation}, where $\mathit{perf}(k,N_i)$ is the performance of the qRAM neural network with architecture $N_i$ and selectors $\ket{k}$.
\begin{equation}
\begin{split}
\ket{a}\ket{s}\ket{\mathit{perf}} = \\ \ket{0}\sum_{k\in\{0,1\}^{12}}\ket{k}\left(\qop{H}^{\otimes 2}\ket{00}\right)\ket{\mathit{perf}(k,N_0)} \\ +\ket{1}\sum_{k\in\{0,1\}^{14}}\ket{k}\ket{\mathit{perf}(k,N_1)}
\end{split}
\label{eq:aftpresentation}
\end{equation}
The $\qop{N}_0$ architecture cannot learn the dataset with 100\% accuracy, while $\qop{N}_1$ can learn the dataset with 100\% accuracy when its selectors are in the set
\begin{equation}\begin{split}
T = \{\ket{0 1 0 1 0 1 1 1, 0 1, 1 1 0 1},
\ket{0 1 0 1 0 1 1 1, 1 0, 1 1 1 0}, \\
\ket{1 0 1 0 1 0 0 0, 0 1, 0 1 1 1},
\ket{1 0 1 0 1 0 0 0, 1 0, 1 0 1 1} \}.\\
\end{split}\end{equation}
In the second iteration ($l=1$) of the for loop starting in line 15, the objective quantum register is set to $\ket{1}$ if and only if the performance is greater than or equal to the given threshold $\theta$. Here we use $\theta$ equal to 16 (100\% accuracy); after this operation the state of quantum registers $\ket{a}\ket{s}\ket{\mathit{perf}}\ket{obj}$ is described in Equation~\eqref{eq:afttheta}.
\begin{equation}
\begin{split}
\ket{a}\ket{s}\ket{\mathit{perf}}\ket{obj} =\\ \ket{0}\sum_{k\in\{0,1\}^{12}}\ket{k}\left(\qop{H}^{\otimes 2}\ket{00}\right)\ket{\mathit{perf}(k,N_0)}\ket{0} \\ +\ket{1}\sum_{k\in\{0,1\}^{14},\, k\notin T}\ket{k}\ket{\mathit{perf}(k,N_1)}\ket{0} \\ +\ket{1}\sum_{k\in T}\ket{k}\ket{\mathit{perf}(k,N_1)}\ket{1}
\end{split}
\label{eq:afttheta}
\end{equation}
Step 17 applies the nonlinear quantum operator and the resulting state of quantum registers $\ket{a}\ket{s}\ket{\mathit{perf}}\ket{obj}$ is described in Equation~\eqref{eq:aftnonlinear}; a measurement can then be performed, the architecture register will be in the state $\ket{1}$, and architecture $\qop{N}_1$ is chosen.
\begin{equation}
\begin{split}
\ket{a}\ket{s}\ket{\mathit{perf}}\ket{obj} = \ket{1}\sum_{k\in T}\ket{k}\ket{\mathit{perf}(k,N_1)}\ket{1}
\end{split}
\label{eq:aftnonlinear}
\end{equation}
\subsection{Discussion}
We proposed a methodology to select quantum neural network parameters and architecture using quantum weightless neural networks, in time polynomial in the number of training patterns, architectures and neural network free parameters. The proposed algorithm, named Superposition based Architecture Learning (SAL), performs a non-linear global search in the space of weightless neural network parameters and, for a given data set, returns a classifier with a desired performance $\theta$ or indicates that no such classifier exists.
A classical polynomial-time algorithm to perform neural network architecture selection is not known. Classical techniques used to perform architecture selection are heuristics that are not guaranteed to find an exact solution. Some strategies used to find near-optimal neural network architectures or parameters are evolutionary algorithms~\cite{Almeida2010} and meta-learning~\cite{Miranda201427}. Running times of evolutionary algorithms used in architecture selection are reported in~\cite{Miranda201427}, and even on benchmark problems the running time of these classical strategies can vary from 3 to 400 minutes.
In the application of the SAL algorithm to architecture selection, if there is a solution in the search space then it will be found in polynomial time. The SAL algorithm puts all neural network configurations, over several architectures, in superposition; the performance is calculated and a nonlinear operator is used to recover the configuration and architecture with the desired performance. SAL is the first algorithm to perform quantum weightless neural network architecture selection in time polynomial in the number of patterns and architectures.
The superposition principle allows the evaluation of neural network architectures in a way that is not possible for classical neural networks. In a classical neural network the architecture evaluation is biased by a choice of neural network parameters. In the SAL algorithm all neural network parameters are initialised in superposition, allowing the evaluation of a neural network architecture without the bias of a given set of parameters.
The gain in computational time of the proposed strategy is a result of the use of the non-linear quantum operator proposed in~\cite{PhysRevLett.81.3992}. Although non-linear quantum computing has been used in several works, some controversy remains, with some authors claiming that non-linear quantum operators are not physically realisable~\cite{PhysRevLett.81.3992} while other researchers claim otherwise~\cite{czachor1998remarks}.
Even if non-linear quantum operators do not become a reality, the proposed learning algorithm furnishes a framework for the development of linear quantum algorithms to perform neural network architecture selection. The proposed idea is to define a quantum weightless neural network whose memory positions store configurations of neural networks with different architectures.
\section{Conclusion}
For some problems there are quantum algorithms which are asymptotically faster than the known classical algorithms~\cite{grover:96,shor:97,trugenberger:02}. In this paper, we defined a quantum Superposition based Architecture Learning algorithm for weightless neural networks that finds architecture and parameters with polynomial time in relation to the number of training patterns, architectures and the size of the selectors quantum register. The proposed algorithm used the quantum superposition principle and a nonlinear quantum operator.
A linear version of the proposed algorithm is a challenging research topic and the subject of ongoing work. This linear version should be a quantum probabilistic algorithm, because the problem of training a weightless neural network is NP-complete. One could use the quantum processor to create a superposition of weightless neural networks with different architectures and to perform classical learning steps on these neural networks in superposition before performing the measurement and feedback.
The quantum weightless neural networks proposed in~\cite{oliveira:08} are generalisations of the classical models based on a classical RAM memory. Another possible future work is the analysis of quantum memories~\cite{altaiskymemory,ventura:98} for the development of weightless neural network models. These quantum memories have an exponential gain in memory capacity when compared with classical memories.
\section*{Acknowledgements}
This work is supported by research grants from CNPq, CAPES and FACEPE (Brazilian research agencies).
\bibliographystyle{elsarticle-num}
\section{Introduction}
The aim of this paper is to deduce the algebraic rules for determining the
dynamical characteristics of a prescribed network consisting of specified
quantum oscillator systems connected by input-output fields \cite{Gardiner},
\cite{Wiseman}. Physical models include cavity systems or local quantum
oscillators coupled to a quantum optical field. The resulting dynamics is linear,
and the analysis is carried out using transfer function techniques \cite{YK1}%
, \cite{YK2}. The rules have been recently deduced in \cite{QFN1} in the
general setting of nonlinear quantum dynamical systems by first
constructing a network Hamiltonian and transferring to the interaction
picture with respect to the free flow of the fields around the network
channels. However it is of interest to restrict to linear systems for two
main reasons. Firstly, the derivation here for linear systems proceeds by an
alternative method to the general nonlinear case, and we are able to confirm
that the restriction of the nonlinear formula to linear systems yields the same
result. Secondly, linear systems are the most widely studied models in both
classical and quantum dynamical systems theory and so it is natural to
develop these further. There has been recent interest in the development of
coherent, or fully quantum control for linear systems \cite{GJ Series}-\cite
{NJP}\ and this paper contributes by establishing the algebraic rules for
building networks of such devices.
\section{Linear Quantum Markov Models}
The dynamical evolution of a quantum system is determined by a family of
unitaries $\left\{ V\left( t,s\right) :t\geq s\right\} $ satisfying the
propagation law $V\left( t_{3},t_{2}\right) V\left( t_{2},t_{1}\right)
=V\left( t_{3},t_{1}\right) $ where $t_{3}\geq t_{2}\geq t_{1}$. The
evolution of a state from time $s$ to a later time $t$ being then given by $%
\psi \left( t\right) =V\left( t,s\right) \psi \left( s\right) $. In a Markov
model we factor the underlying Hilbert space as $\frak{h}\otimes \mathcal{E}$
representing the system and its environment respectively and the unitary $%
V\left( t,s\right) $ couples the system specifically with the degrees of
freedom of the environment acting between times $s$ and $t$. For a bosonic
environment, we introduce input processes $b_{i}\left( t\right) $ for $%
i=1,\cdots ,n$ with the canonical commutation relations, \cite{Gardiner},
\begin{equation}
\left[ b_{i}\left( t\right) ,b_{j}^{\dag }\left( s\right) \right] =\delta
_{ij}\,\delta \left( t-s\right) .
\end{equation}
It is convenient to assemble these into the following column vectors of
length $n$
\begin{equation}
\mathbf{b}^{\text{in}}\left( t\right) =\left(
\begin{array}{c}
b_{1}\left( t\right) \\
\vdots \\
b_{n}\left( t\right)
\end{array}
\right) .
\end{equation}
A Markov evolution can be described equivalently by the
chronological-ordered and Wick-ordered expressions
\begin{equation*}
V\left( t,s\right) =\;\vec{T}\exp -i\int_{s}^{t}\Upsilon \left( \tau \right)
d\tau \equiv \;:\exp -i\int_{s}^{t}\Upsilon _{\text{Wick}}\left( \tau
\right) d\tau :
\end{equation*}
where the stochastic Hamiltonian is (with $E_{ij}^{\dag }=E_{ji}$ and $%
K^{\dag }=K$)
\begin{equation*}
\Upsilon \left( t\right) =\sum_{i,j=1}^{n}E_{ij}\otimes b_{i}^{\dag }\left(
t\right) b_{j}\left( t\right) +\sum_{i=1}^{n}F_{i}\otimes b_{i}^{\dag
}\left( t\right) +\sum_{j=1}^{n}F_{j}^{\dag }\otimes b_{j}\left( t\right)
+K\otimes 1,
\end{equation*}
and the Wick-ordered generator is given by \cite{G Wong-Zakai}
\begin{eqnarray*}
-i\Upsilon _{\text{Wick}}\left( t\right) &=&\sum_{i,j=1}^{n}(S_{ij}-\delta
_{ij})\otimes b_{i}^{\dag }\left( t\right) b_{j}\left( t\right)
+\sum_{i=1}^{n}L_{i}\otimes b_{i}^{\dag }\left( t\right) \\
&&-\sum_{i,j=1}^{n}L_{i}^{\dag }S_{ij}\otimes b_{j}\left( t\right) -\left(
\frac{1}{2}\sum_{i=1}^{n}L_{i}^{\dag }L_{i}-iH\right) \otimes 1.
\end{eqnarray*}
The Wick-ordered coefficients are given by the Stratonovich-Ito conversion
formulae, see appendix,
\begin{equation}
S=\frac{1-\frac{i}{2}E}{1+\frac{i}{2}E},\quad L=-i\frac{1}{1+\frac{i}{2}E}%
F,\quad H=K+\frac{1}{2}\func{Im}F^{\dag }\frac{1}{1+\frac{i}{2}E}F.
\label{Strat-Ito}
\end{equation}
Note that $H$ is selfadjoint, and that $S$ is a unitary matrix whose entries
are operators on $\frak{h}$:$\sum_{k=1}^{n}S_{ik}S_{jk}^{\dag }=\delta
_{ij}=\sum_{k=1}^{n}S_{ki}^{\dag }S_{kj}$. In fact, we may write $S=e^{-iJ}$
with $J=2\arctan \dfrac{E}{2}$.
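As a numerical sanity check (our own sketch, taking the entries of $E$ to be
scalars as in the linear models below), the Cayley-transform form of $S$ and
the exponential form $S=e^{-iJ}$ can be verified directly:
\begin{verbatim}
# Verify that S = (1 - iE/2)(1 + iE/2)^{-1} is unitary for Hermitian E,
# and that S = exp(-iJ) with J = 2 arctan(E/2).
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = (M + M.conj().T) / 2               # a random Hermitian matrix

S = (np.eye(n) - 0.5j * E) @ np.linalg.inv(np.eye(n) + 0.5j * E)
print(np.allclose(S @ S.conj().T, np.eye(n)))      # True: S unitary

w, V = np.linalg.eigh(E)               # exp(-2i arctan(w/2)) on eigenvalues
S_exp = V @ np.diag(np.exp(-2j * np.arctan(w / 2))) @ V.conj().T
print(np.allclose(S, S_exp))                       # True
\end{verbatim}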
In differential form we have
\begin{eqnarray*}
\frac{d}{dt}V\left( t,s\right) &=&\;-i:\Upsilon _{\text{Wick}}\left(
t\right) V\left( t,s\right) : \\
&\equiv &\sum_{i,j=1}^{n}b_{i}^{\dag }\left( t\right) (S_{ij}-\delta
_{ij})V\left( t,s\right) b_{j}\left( t\right) +\sum_{i=1}^{n}b_{i}^{\dag
}\left( t\right) L_{i}V\left( t,s\right) \\
&&-\sum_{i,j=1}^{n}L_{i}^{\dag }S_{ij}V\left( t,s\right) b_{j}\left( t\right)
-\left( \frac{1}{2}\sum_{i=1}^{n}L_{i}^{\dag }L_{i}-iH\right) V\left(
t,s\right) .
\end{eqnarray*}
Note that all the creators appear on the left and all annihilators on the
right. This equation can be interpreted as a quantum stochastic differential
equation \cite{Gardiner}, \cite{HP}, \cite{partha}.
We sketch the system plus field as a two port device having an input and an
output port.
\begin{center}
\setlength{\unitlength}{.04cm}
\begin{picture}(120,45)
\label{pic1}
\thicklines
\put(45,10){\line(0,1){20}}
\put(45,10){\line(1,0){30}}
\put(75,10){\line(0,1){20}}
\put(45,30){\line(1,0){30}}
\thinlines
\put(48,20){\vector(-1,0){45}}
\put(120,20){\vector(-1,0){20}}
\put(120,20){\line(-1,0){48}}
\put(50,20){\circle{4}}
\put(70,20){\circle{4}}
\put(100,26){input, ${\bf b}^{\rm in}$}
\put(48,35){system}
\put(-10,26){output, ${\bf b}^{\rm out}$}
\end{picture}
Figure 1: input-output component
\end{center}
The output fields are defined by $b_{i}^{\text{out}}\left( t\right) =V\left(
t,0\right) ^{\dag }b_{i}\left( t\right) V\left( t,0\right) $ and we have the
input-output relation
\begin{equation*}
b_{i}^{\text{out}}\left( t\right) =\sum_{j=1}^{n}S_{ij}\left( t\right)
b_{j}\left( t\right) +L_{i}\left( t\right) ,
\end{equation*}
where $S_{ij}\left( t\right) =V\left( t,0\right) ^{\dag }S_{ij}V\left(
t,0\right) $ and $L_{i}\left( t\right) =V\left( t,0\right) ^{\dag
}L_{i}V\left( t,0\right) $. More compactly, $\mathbf{b}^{\text{out}}\left(
t\right) =S\left( t\right) \mathbf{b}^{\text{in}}\left( t\right) +L\left(
t\right) $.
Let $X$ be a fixed operator of the system and set $X\left( t,t_{0}\right)
=V\left( t,t_{0}\right) ^{\dag }XV\left( t,t_{0}\right) $, then we obtain
the Heisenberg-Langevin equation
\begin{eqnarray*}
\frac{d}{dt}X\left( t,t_{0}\right) &=&V\left( t,t_{0}\right) ^{\dag }\frac{1%
}{i}[X,\Upsilon \left( t\right) ]V\left( t,t_{0}\right) \\
&=&\sum_{i,j}b_{i}^{\dag }\left( t\right) V\left( t,t_{0}\right) ^{\dag }\left(
\sum_{k}S_{ki}^{\dag }XS_{kj}-\delta _{ij}X\right) V\left( t,t_{0}\right)
b_{j}\left( t\right) \\
&&+\sum_{i,k}b_{i}^{\dag }\left( t\right) V\left( t,t_{0}\right) ^{\dag }S_{ki}^{\dag
} \left[ X,L_{k}\right] V\left( t,t_{0}\right) \\
&&+\sum_{i,j}V\left( t,t_{0}\right) ^{\dag }[L_{i}^{\dag },X]S_{ij}V\left(
t,t_{0}\right) b_{j}\left( t\right) \\
&&+V\left( t,t_{0}\right) ^{\dag }\left\{ \frac{1}{2}\sum_{k}L_{k}^{\dag }\left[
X,L_{k}\right] +\frac{1}{2}\sum_{k}[L_{k}^{\dag },X]L_{k}-i\left[ X,H\right]
\right\} V\left( t,t_{0}\right) .
\end{eqnarray*}
Note that the final term does not involve the input noises, and that the
expression in braces is a Lindbladian. In the special case where $S=1$, this
equation reduces to the class of Heisenberg-Langevin equations introduced by
Gardiner \cite{Gardiner}.
\subsection{Linear Models}
We consider a quantum mechanical system consisting of a family of harmonic
oscillators $\left\{ a_{j}:j=1,\cdots ,m\right\} $ with canonical
commutation relations $\left[ a_{j},a_{k}\right] =0=\left[ a_{j}^{\dag
},a_{k}^{\dag }\right] $ and $\left[ a_{j},a_{k}^{\dag }\right] =\delta
_{jk} $. We collect into column vectors:
\begin{equation}
\mathbf{a}=\left(
\begin{array}{c}
a_{1} \\
\vdots \\
a_{m}
\end{array}
\right) .
\end{equation}
Our interest is in the general linear open dynamical system and here we make
several simplifying assumptions:
\begin{itemize}
\item[1)] The $S_{jk}$ are scalars.
\item[2)] The $L_{j}^{\prime }s$ are linear, i.e., there exist constants $%
c_{jk}$ such that $L_{j}\equiv \sum_{k}c_{jk}a_{k}$.
\item[3)] $H$ is quadratic, i.e., there exist constants $\omega _{jk}$ such
that $H=\sum_{jk}a_{j}^{\dag }\omega _{jk}a_{k}$.
\end{itemize}
The complex damping is $\frac{1}{2}L^{\dag }L+iH=-\mathbf{a}^{\dag }A\mathbf{%
a}$ where $A=-\frac{1}{2}C^{\dag }C-i\Omega $ with $C=\left( c_{jk}\right) $
and $\Omega =\left( \omega _{jk}\right) $. The Heisenberg-Langevin equations
for $\mathbf{a}\left( t\right) =V\left( t,0\right) \mathbf{a}V\left(
t,0\right) $ and input-output relations then simplify down to
\begin{eqnarray}
\mathbf{\dot{a}}\left( t\right) &=&A\mathbf{a}\left( t\right) -C^{\dag }S%
\mathbf{b}(t), \\
\mathbf{b}^{\text{out}}\left( t\right) &=&S\mathbf{b}\left( t\right) +C%
\mathbf{a}\left( t\right) .
\end{eqnarray}
These linear equations are amenable to Laplace transform techniques \cite
{YK1},\cite{YK2}. For any one of our stochastic processes, generically
denoted $X\left( t\right) $, we define for $\func{Re}s>0$
\begin{equation}
\hat{X}\left( s\right) =\int_{0}^{\infty }e^{-st}X\left( t\right) dt.
\end{equation}
Note that $\widehat{\mathbf{\dot{a}}}\left( s\right) =s\mathbf{\hat{a}}\left( s\right) -\mathbf{a}$. We find that
\begin{eqnarray*}
\mathbf{\hat{a}}\left( s\right) &=&-\left( sI_{m}-A\right) ^{-1}C^{\dag }S%
\mathbf{\hat{b}}^{\text{in}}\left( s\right) +\left( sI_{m}-A\right) ^{-1}%
\mathbf{a}, \\
\mathbf{\hat{b}}^{\text{out}}\left( s\right) &=&S\mathbf{\hat{b}}^{\text{in}%
}\left( s\right) +C\mathbf{\hat{a}}\left( s\right) .
\end{eqnarray*}
The operator $\mathbf{\hat{a}}\left( s\right) $ can be eliminated entirely
to give
\begin{equation}
\mathbf{\hat{b}}^{\text{out}}\left( s\right) =\Xi \left( s\right) \mathbf{%
\hat{b}}^{\text{in}}\left( s\right) +\xi \left( s\right) \mathbf{a}
\end{equation}
where the \textit{transfer matrix function} is
\begin{equation}
\Xi \left( s\right) =S-C\left( sI_{m}-A\right) ^{-1}C^{\dag }S
\end{equation}
and $\xi \left( s\right) =C\left( sI_{m}-A\right) ^{-1}$.
\bigskip
As an example, consider a single mode cavity coupling to the input field via
$L=\sqrt{\gamma }a,$ and with Hamiltonian $H=\omega a^{\dag }a$. This
implies $K=\frac{\gamma }{2}+i\omega $ and $C=\sqrt{\gamma }$. If the output
picks up an additional phase $S=e^{i\phi }$, the corresponding transfer
function is then computed to be
\begin{equation}
\Xi _{cavity}\left( s\right) =e^{i\phi }\,\frac{s+i\omega -\frac{\gamma }{2}%
}{s+i\omega +\frac{\gamma }{2}}.
\end{equation}
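This closed form is easy to verify numerically; the sketch below (ours)
evaluates the general expression $\Xi \left( s\right) =S-C\left(
sI_{m}-A\right) ^{-1}C^{\dag }S$ for the cavity parameters and checks that
$\left| \Xi \left( i\omega \right) \right| =1$, anticipating the unitarity
result of the next subsection:
\begin{verbatim}
# Single-mode cavity: Xi(s) = S - C (sI - A)^{-1} C^dag S, with
# A = -gamma/2 - i*omega, C = sqrt(gamma), S = exp(i*phi).
import numpy as np

gamma, omega, phi = 2.0, 1.5, 0.3
S = np.exp(1j * phi)
C = np.array([[np.sqrt(gamma)]])
A = np.array([[-gamma / 2 - 1j * omega]])

def Xi(s):
    return S * (np.eye(1) - C @ np.linalg.inv(s * np.eye(1) - A) @ C.conj().T)

for w in (-2.0, 0.0, 3.0):
    val = Xi(1e-12 + 1j * w)[0, 0]
    closed = S * (1j * w + 1j * omega - gamma / 2) \
               / (1j * w + 1j * omega + gamma / 2)
    print(abs(val), np.isclose(val, closed))       # 1.0 True
\end{verbatim}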
\subsection{The Transfer Matrix Function}
The models we consider are therefore determined completely by the matrices $%
\left( S,C,\Omega \right) $ with $S\in \mathbb{C}^{n\times n},C\in \mathbb{C}%
^{n\times m}$ and $\Omega \in \mathbb{C}^{m\times m}$. We shall use the
convention $\left[
\begin{tabular}{l|l}
$A$ & $B$ \\ \hline
$C$ & $D$%
\end{tabular}
\right] \left( s\right) =D+C\left( s-A\right) ^{-1}B$ for matrices $A\in
\mathbb{C}^{m\times m},B\in \mathbb{C}^{m\times n},C\in \mathbb{C}^{n\times
m}$ and $D\in \mathbb{C}^{n\times n}$, and write the transfer matrix
function as
\begin{equation}
\Xi \left( s\right) =\left[
\begin{tabular}{r|r}
$A$ & $-C^{\dag }S$ \\ \hline
$C$ & $S$%
\end{tabular}
\right] \left( s\right) , \label{TF}
\end{equation}
where $A=-\frac{1}{2}C^{\dag }C-i\Omega $. We note the decomposition
\begin{equation*}
\Xi =\left[ I_{n}-C\left( sI_{m}-A\right) ^{-1}C^{\dag }\right] S\equiv
\left[
\begin{tabular}{r|r}
$A$ & $-C^{\dag }$ \\ \hline
$C$ & $I_{n}$%
\end{tabular}
\right] S.
\end{equation*}
In the simplest case of a single cavity mode we have
\begin{equation*}
\Xi _{cavity}\left( s\right) =\left[
\begin{tabular}{r|r}
$-\frac{\gamma }{2}-i\omega $ & $-\sqrt{\gamma }e^{i\phi }$ \\ \hline
$\sqrt{\gamma }$ & $e^{i\phi }$%
\end{tabular}
\right] \left( s\right) .
\end{equation*}
\begin{lemma}
For each $\omega \in \mathbb{R}$, the transfer function $\Xi \left( i\omega
\right) \equiv \Xi \left( 0^{+}+i\omega \right) $ is unitary whenever it
exists.
\end{lemma}
\begin{proof}
Using the decomposition following $\left( \ref{TF}\right) $ and the
unitarity of $S$, we have for instance
\begin{multline*}
\Xi \left( 0^{+}+i\omega \right) \Xi \left( 0^{+}+i\omega \right) ^{\dag }=
\left[ I-C\frac{1}{\frac{1}{2}C^{\dag }C+i\Omega ^{\prime }}C^{\dag }\right] %
\left[ I-C\frac{1}{\frac{1}{2}C^{\dag }C-i\Omega ^{\prime }}C^{\dag }\right]
\\
=I-C\frac{1}{\frac{1}{2}C^{\dag }C+i\Omega ^{\prime }}\left\{ \frac{1}{2}%
C^{\dag }C+i\Omega ^{\prime }+\frac{1}{2}C^{\dag }C-i\Omega ^{\prime
}-C^{\dag }C\right\} \frac{1}{\frac{1}{2}C^{\dag }C-i\Omega ^{\prime }}%
C^{\dag },
\end{multline*}
where $\Omega ^{\prime }=\Omega +\omega $. The term in braces however
vanishes identically, leaving $\Xi \left( 0^{+}+i\omega \right) \Xi \left(
0^{+}+i\omega \right) ^{\dag }=I$. The relation $\Xi \left( 0^{+}+i\omega
\right) ^{\dag }\Xi \left( 0^{+}+i\omega \right) =I$ is similarly
established.
\end{proof}
\bigskip
Whenever appropriate, we may determine $\Xi $ from its (unitary) values on
the imaginary axis by using the Hilbert transform
\begin{equation*}
\Xi \left( s\right) =\frac{1}{2\pi i}PV\int_{-\infty }^{\infty }\frac{\Xi
\left( i\omega \right) }{\omega +is}d\omega .
\end{equation*}
In general, the real and imaginary parts of $A$ need not commute - that is, $%
\left[ C^{\dag }C,\Omega \right] $ need not vanish identically. However,
when these parts do commute we recover a multi-mode version of the cavity situation.
\begin{lemma}
If $A$ is a function of $C^{\dag }C$ then
\begin{equation*}
\Xi \left( s\right) =\frac{s+\tilde{A}^{\dag }}{s-\tilde{A}}S,
\end{equation*}
where $\tilde{A}$ is a function of $CC^{\dag }$ and $\Xi $ may be
analytically continued into the whole complex plane.
\end{lemma}
\begin{proof}
Here we must have $A=-\frac{1}{2}C^{\dag }C-i\varepsilon \left( C^{\dag
}C\right) $ where $\varepsilon $ is a real valued function. We set $\tilde{A}%
=-\frac{1}{2}CC^{\dag }-i\varepsilon \left( CC^{\dag }\right) $. From the
identity $Cf\left( C^{\dag }C\right) C^{\dag }=CC^{\dag }f\left( CC^{\dag
}\right) $ for suitable analytic functions $f$, we have
\begin{equation*}
\left( s-\tilde{A}\right) \Xi \left( s\right) =\left( s-\tilde{A}\right) %
\left[ I-\frac{1}{s-\tilde{A}}CC^{\dag }\right] S=\left( s-\tilde{A}%
-CC^{\dag }\right) S
\end{equation*}
however, $-\tilde{A}-CC^{\dag }=\tilde{A}^{\dag }$, and this gives the
result.
The hermitian matrices $C^{\dag }C$ and $CC^{\dag }$ have the same set of
non-zero eigenvalues: to see this, suppose that $\phi $ is a non-zero unit
eigenvector of $CC^{\dag }$ with non-zero eigenvalue $\gamma $, then $\psi =\gamma
^{-1/2}C^{\dag }\phi $ is a unit eigenvector of $C^{\dag }C$ with the same
eigenvalue; conversely, every eigenvector $\psi $ of $C^{\dag }C$ with
non-zero eigenvalue $\gamma $ gives rise to a nonzero eigenvector $\phi
=\gamma ^{-1/2}C\psi $ of $CC^{\dag }$.
Let $CC^{\dag }$ have the spectral form $\sum_{k}\gamma _{k}E_{k}$ with real
eigenvalues $\gamma _{k}$ and corresponding eigenprojectors $E_{k}$, then we
have
\begin{equation*}
\Xi \left( s\right) =\sum_{k}\frac{s-\frac{1}{2}\gamma _{k}+i\varepsilon _{k}%
}{s+\frac{1}{2}\gamma _{k}+i\varepsilon _{k}}E_{k}S,
\end{equation*}
where $\varepsilon _{k}=\varepsilon \left( \gamma _{k}\right) $. In
particular, the rational fraction is of modulus unity for imaginary $s\left(
=i\omega \right) $ and we may write
\begin{equation*}
\Xi \left( 0^{+}+i\omega \right) =\sum_{k}e^{i\phi _{k}\left( \omega \right)
}E_{k}S
\end{equation*}
where $\phi _{k}\left( \omega \right) =\arg \frac{i\left( \omega
+\varepsilon _{k}\right) -\gamma _{k}/2}{i\left( \omega +\varepsilon
_{k}\right) +\gamma _{k}/2}$. Note that $\Xi \left( 0^{+}+i\omega \right) $
is clearly unitary and the limit $\omega \rightarrow 0$ is well-defined.
This limit will equal $-S$ in the special case that $K$ is selfadjoint
(i.e., $\varepsilon \equiv 0$). $\Xi $ may be analytically continued into
the negative-real part of the complex plane. The poles of $\Xi $ then
coincide with the spectrum of $\tilde{A}$, with the zeroes being the
reflections of the poles about the imaginary axis.
\end{proof}
\bigskip
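The same style of check applies here; the sketch below (again our notation,
taking $\varepsilon \left( x\right) =x$ for definiteness) verifies $\Xi
\left( s\right) =\left( s-\tilde{A}\right) ^{-1}\left( s+\tilde{A}^{\dag
}\right) S$:
\begin{verbatim}
# Check Xi(s) = (s - At)^{-1} (s + At^dag) S for A a function of C^dag C,
# here with eps(x) = x, i.e. A = -C^dag C/2 - i C^dag C.
import numpy as np

m, n = 3, 2
rng = np.random.default_rng(1)
C = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
S = np.linalg.qr(rng.normal(size=(n, n))
                 + 1j * rng.normal(size=(n, n)))[0]
A  = -0.5 * C.conj().T @ C - 1j * (C.conj().T @ C)
At = -0.5 * C @ C.conj().T - 1j * (C @ C.conj().T)

s = 0.4 + 0.9j
lhs = (np.eye(n) - C @ np.linalg.solve(
    s * np.eye(m) - A, C.conj().T)) @ S
rhs = np.linalg.solve(s * np.eye(n) - At,
                      s * np.eye(n) + At.conj().T) @ S
print(np.allclose(lhs, rhs))   # True
\end{verbatim}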
\section{Introducing Connections}
The situation depicted in the figure below is one where (some of) the output
channels are fed back into the system as an input. Prior to the connection
between output port(s) $s_{\mathsf{i}}$ and input port(s) $r_{\mathsf{i}}$
being made, we may model the component as having the total input $\mathbf{b}%
^{\text{in}}=\left(
\begin{array}{c}
\mathbf{b}_{\mathsf{i}}^{\text{in}} \\
\mathbf{b}_{\mathsf{e}}^{\text{in}}
\end{array}
\right) $ and total output $\mathbf{b}^{\text{out}}=\left(
\begin{array}{c}
\mathbf{b}_{\mathsf{i}}^{\text{out}} \\
\mathbf{b}_{\mathsf{e}}^{\text{out}}
\end{array}
\right) $ where the $\mathbf{b}_{j}^{\text{in}}$ and $\mathbf{b}_{j}^{\text{%
out}}$ may be multi-dimensional noises (we in fact only require the
multiplicities to agree for $j=\mathsf{i},\mathsf{e}$ respectively).
\begin{center}
\setlength{\unitlength}{.1cm}
\begin{picture}(80,28)
\label{pic2}
\thicklines
\put(30,5){\line(0,1){15}}
\put(30,5){\line(1,0){20}}
\put(50,5){\line(0,1){15}}
\put(30,20){\line(1,0){20}}
\thinlines
\put(32,10){\vector(-1,0){15}}
\put(63,10){\vector(-1,0){15}}
\put(25,15){\line(1,0){7}}
\put(25,15){\line(0,1){10}}
\put(25,25){\line(1,0){30}}
\put(55,25){\line(0,-1){10}}
\put(55,15){\line(-1,0){7}}
\put(25,25){\vector(1,0){15}}
\put(33,10){\circle{2}}
\put(33,15){\circle{2}}
\put(47,10){\circle{2}}
\put(47,15){\circle{2}}
\put(35,10){$s_{\sf e}$}
\put(35,15){$s_{\sf i}$}
\put(42,10){$r_{\sf e}$}
\put(42,15){$r_{\sf i}$}
\end{picture}%
Figure 2: A quantum system with feedback
\end{center}
The transfer matrix function takes the general form
\begin{equation*}
\Xi \equiv \left[
\begin{tabular}{l|ll}
$A$ & $-\sum_{j}C_{j}^{\dag }S_{j\mathsf{i}}$ & $-\sum_{j}C_{j}^{\dag }S_{j%
\mathsf{e}}$ \\ \hline
$C_{\mathsf{i}}$ & $S_{\mathsf{ii}}$ & $S_{\mathsf{ie}}$ \\
$C_{\mathsf{e}}$ & $S_{\mathsf{ei}}$ & $S_{\mathsf{ee}}$%
\end{tabular}
\right] .
\end{equation*}
When we make the connection, we impose the various constraints $b_{r_{%
\mathsf{i}}\left( k\right) }^{\text{in}}\left( t\right) =b_{s_{\mathsf{i}%
}\left( j\right) }^{\text{out}}\left( t-\tau \right) $, where the output field
labelled $s_{\mathsf{i}}\left( j\right) $ is to be connected to the input
field $r_{\mathsf{i}}\left( k\right) $ and $\tau >0$ is the time delay. We
assume the idealized situation of instantaneous feedback $\tau \rightarrow
0^{+}$. To avoid having to match up the labels of the internal channels, it
is more convenient to introduce a fixed labelling and write
\begin{equation*}
\mathbf{b}_{\mathsf{i}}^{\text{out}}\left( t^{-}\right) =\eta \mathbf{b}_{%
\mathsf{i}}^{\text{in}}\left( t\right)
\end{equation*}
where $\eta $ is the adjacency matrix:
\begin{equation*}
\eta _{sr}=\left\{
\begin{array}{cc}
1, & \text{if }\left( s,r\right) \text{ is an internal channel,} \\
0, & \text{otherwise}
\end{array}
\right.
\end{equation*}
The model with the connections is then a reduction of the original and the
remaining external fields are the input $\mathbf{b}_{\mathsf{e}}^{\text{in}}$
and output $\mathbf{b}_{\mathsf{e}}^{\text{out}}$.
\begin{theorem}
Let $\left( \eta -S_{\mathsf{ii}}\right) $ be invertible. The feedback
system described above has input-output relation $\mathbf{\hat{b}}_{\mathsf{e%
}}^{\text{out}}=\Xi _{\mathrm{red}}\mathbf{\hat{b}}_{\mathsf{e}}^{\text{in}%
}+\xi _{\mathrm{red}}\mathbf{a}$ and the reduced transfer matrix function
\begin{equation*}
\Xi _{\mathrm{red}}\equiv \left[
\begin{tabular}{r|r}
$A_{\mathrm{red}}$ & $-C_{\mathrm{red}}^{\dag }S_{\mathrm{red}}$ \\ \hline
$C_{\mathrm{red}}$ & $S_{\mathrm{red}}$%
\end{tabular}
\right] ,\quad \xi _{\mathrm{red}}\equiv C_{\mathrm{red}}\frac{1}{s-A_{%
\mathrm{red}}},
\end{equation*}
where
\begin{eqnarray}
S_{\mathrm{red}} &=&S_{\mathsf{ee}}+S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii%
}}\right) ^{-1}S_{\mathsf{ie}}, \notag \\
C_{\mathrm{red}} &=&S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii}}\right)
^{-1}C_{\mathsf{i}}+C_{\mathsf{e}}, \notag \\
A_{\mathrm{red}} &=&A-\sum_{j=\mathsf{i},\mathsf{e}}C_{j}^{\dag }S_{j\mathsf{%
i}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}C_{\mathsf{i}}.
\end{eqnarray}
\end{theorem}
\begin{proof}
The dynamical equations can be written as
\begin{eqnarray*}
\mathbf{\dot{a}}\left( t\right) &=&A\mathbf{a}\left( t\right)
-\sum_{j,k}C_{j}^{\dag }S_{jk}\mathbf{b}_{k}^{\text{in}}\left( t\right) , \\
\mathbf{b}_{j}^{\text{out}}\left( t\right) &=&\sum_{k=\mathsf{i},\mathsf{e}%
}S_{jk}\mathbf{b}_{k}^{\text{in}}\left( t\right) +C_{j}\mathbf{a}\left(
t\right) .
\end{eqnarray*}
Now the constraint $\eta \mathbf{b}_{\mathsf{i}}^{\text{in}}=\mathbf{b}_{%
\mathsf{i}}^{\text{out}}$ implies that
\begin{equation*}
\mathbf{b}_{\mathsf{i}}^{\text{in}}\left( t\right) =\left( \eta -S_{\mathsf{%
ii}}\right) ^{-1}(S_{\mathsf{ie}}\mathbf{b}_{\mathsf{e}}^{\text{in}}\left(
t\right) +C_{\mathsf{i}}\mathbf{a}\left( t\right) ),
\end{equation*}
and so
\begin{eqnarray*}
\mathbf{\dot{a}}\left( t\right) &=&[A-\sum_{j=\mathsf{i},\mathsf{e}%
}C_{j}^{\dag }S_{j\mathsf{i}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}C_{%
\mathsf{i}}]\mathbf{a}\left( t\right) \\
&&-\sum_{j=\mathsf{i},\mathsf{e}}C_{j}^{\dag }\left( S_{j\mathsf{e}}+S_{j%
\mathsf{i}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}}\right)
\mathbf{b}_{\mathsf{e}}^{\text{in}}\left( t\right)
\end{eqnarray*}
or
\begin{equation*}
\mathbf{\hat{a}}\left( s\right) =-\frac{1}{s-A_{\mathrm{red}}}\sum_{j=%
\mathsf{i},\mathsf{e}}C_{j}^{\dag }\left( S_{j\mathsf{e}}+S_{j\mathsf{i}%
}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}}\right) \mathbf{%
\hat{b}}_{\mathsf{e}}^{\text{in}}\left( s\right) +\frac{1}{s-A_{\mathrm{red}}}\mathbf{a},
\end{equation*}
with $A_{\mathrm{red}}$ as above. Consequently,
\begin{eqnarray*}
\mathbf{\hat{b}}_{\mathsf{e}}^{\text{out}} &=&S_{\mathsf{ei}}\mathbf{\hat{b}}%
_{\mathsf{i}}^{\text{in}}+S_{\mathsf{ee}}\mathbf{\hat{b}}_{\mathsf{e}}^{%
\text{in}}+C_{\mathsf{e}}\mathbf{\hat{a}} \\
&=&S_{\mathrm{red}}\mathbf{\hat{b}}_{\mathsf{e}}^{\text{in}}+C_{\mathrm{red}}%
\mathbf{\hat{a}} \\
&=&\Xi _{\mathrm{red}}\mathbf{\hat{b}}_{\mathsf{e}}^{\text{in}}+\xi _{%
\mathrm{red}}\mathbf{a}
\end{eqnarray*}
where
\begin{eqnarray*}
\Xi _{\mathrm{red}} &=&S_{\mathrm{red}}-\sum_{j=\mathsf{i},\mathsf{e}}C_{%
\mathrm{red}}\frac{1}{s-A_{\mathrm{red}}}C_{j}^{\dag }\left( S_{j\mathsf{e}%
}+S_{j\mathsf{i}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}%
}\right) , \\
\xi _{\mathrm{red}} &=&C_{\mathrm{red}}\frac{1}{s-A_{\mathrm{red}}},
\end{eqnarray*}
and \ $S_{\mathrm{red}}$, $C_{\mathrm{red}}$ are as in the statement of the
theorem.
We now show that $\sum_{j=\mathsf{i},\mathsf{e}}C_{j}^{\dag }[S_{j\mathsf{e}%
}+S_{j\mathsf{i}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}%
}]=C_{\mathrm{red}}^{\dag }S_{\mathrm{red}}$. Now
\begin{eqnarray*}
\sum_{j=\mathsf{i},\mathsf{e}}C_{j}^{\dag }[S_{j\mathsf{e}}+S_{j\mathsf{i}%
}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}}] &=&C_{\mathsf{i}%
}^{\dag }[S_{\mathsf{ie}}+S_{\mathsf{ii}}\left( \eta -S_{\mathsf{ii}}\right)
^{-1}S_{\mathsf{ie}}]+C_{\mathsf{e}}^{\dag }S_{\mathrm{red}} \\
&=&C_{\mathsf{i}}^{\dag }\eta \left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{%
\mathsf{ie}}+C_{\mathsf{e}}^{\dag }S_{\mathrm{red}},
\end{eqnarray*}
while $C_{\mathrm{red}}^{\dag }S_{\mathrm{red}}=C_{\mathsf{i}}^{\dag }(\eta
^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}S_{\mathsf{ei}}^{\dag }S_{\mathrm{red}%
}+C_{\mathsf{e}}^{\dag }S_{\mathrm{red}}$. However,
\begin{equation*}
(\eta ^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}S_{\mathsf{ei}}^{\dag }S_{%
\mathrm{red}}=(\eta ^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}S_{\mathsf{ei}%
}^{\dag }(S_{\mathsf{ee}}+S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii}}\right)
^{-1}S_{\mathsf{ie}})
\end{equation*}
and using the identities $S_{\mathsf{ii}}^{\dag }S_{\mathsf{ii}}+S_{\mathsf{%
ei}}^{\dag }S_{\mathsf{ei}}=1$, $S_{\mathsf{ii}}^{\dag }S_{\mathsf{ie}}+S_{%
\mathsf{ei}}^{\dag }S_{\mathsf{ee}}=0$, this reduces to
\begin{eqnarray*}
(\eta ^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}S_{\mathsf{ei}}^{\dag }S_{%
\mathrm{red}} &=&(\eta ^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}\left[ -S_{%
\mathsf{ii}}^{\dag }S_{\mathsf{ie}}+(1-S_{\mathsf{ii}}^{\dag }S_{\mathsf{ii}%
})\left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}}\right] \\
&=&(\eta ^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}\left[ -S_{\mathsf{ii}}^{\dag
}\left( \eta -S_{\mathsf{ii}}\right) +(1-S_{\mathsf{ii}}^{\dag }S_{\mathsf{ii%
}})\right] \left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}} \\
&=&\eta \left( \eta -S_{\mathsf{ii}}\right) ^{-1}S_{\mathsf{ie}}.
\end{eqnarray*}
Therefore $\Xi _{\mathrm{red}}=S_{\mathrm{red}}-C_{\mathrm{red}}\dfrac{1}{%
s-A_{\mathrm{red}}}C_{\mathrm{red}}^{\dag }S_{\mathrm{red}}$, as required.
For consistency, we should check that we have $A_{\mathrm{red}}=-\frac{1}{2}%
C_{\mathrm{red}}^{\dag }C_{\mathrm{red}}-i\Omega _{\mathrm{red}}$
with $\Omega _{\mathrm{red}}$ selfadjoint. Indeed, setting $A=-\frac{1}{2}C_{%
\mathsf{i}}^{\dag }C_{\mathsf{i}}-\frac{1}{2}C_{\mathsf{e}}^{\dag }C_{%
\mathsf{e}}-i\Omega $ and substituting in for $C_{\mathrm{red}}$ and $A_{%
\mathrm{red}}$ we find after some algebra that
\begin{equation*}
\Omega _{\mathrm{red}}=\Omega +\func{Im}\left\{ C_{\mathsf{i}}^{\dag }S_{%
\mathsf{ii}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}C_{\mathsf{i}}\right\} +%
\func{Im}\left\{ C_{\mathsf{e}}^{\dag }S_{\mathsf{ei}}\left( \eta -S_{%
\mathsf{ii}}\right) ^{-1}C_{\mathsf{i}}\right\} .
\end{equation*}
The manipulation for this is trivial except for the calculation of the term
of the form $\frac{1}{2}C_{\mathsf{i}}^{\dag }XC_{\mathsf{i}}$ where
\begin{eqnarray*}
X &=&1+2S_{\mathsf{ii}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}-(\eta
^{\dag }-S_{\mathsf{ii}}^{\dag })^{-1}S_{\mathsf{ei}}^{\dag }S_{\mathsf{ei}%
}\left( \eta -S_{\mathsf{ii}}\right) ^{-1} \\
&\equiv &(1-\eta S_{\mathsf{ii}}^{\dag })^{-1}\left[ S_{\mathsf{ii}}\eta
^{\dag }-\eta S_{\mathsf{ii}}^{\dag }\right] \left( 1-S_{\mathsf{ii}}\eta
^{\dag }\right) ^{-1} \\
&=&2i\func{Im}\frac{S_{\mathsf{ii}}\eta ^{\dag }}{1-S_{\mathsf{ii}}\eta
^{\dag }}=2i\func{Im}\left\{ S_{\mathsf{ii}}\left( \eta -S_{\mathsf{ii}%
}\right) ^{-1}\right\}
\end{eqnarray*}
where again we use the identity $S_{\mathsf{ii}}^{\dag }S_{\mathsf{ii}}+S_{%
\mathsf{ei}}^{\dag }S_{\mathsf{ei}}=1$.
\end{proof}
\bigskip
In terms of the parameters $\left( S,L,H\right) $ with $S=\left(
\begin{array}{cc}
S_{\mathsf{ii}} & S_{\mathsf{ie}} \\
S_{\mathsf{ei}} & S_{\mathsf{ee}}
\end{array}
\right) $, $L=\left(
\begin{array}{c}
L_{\mathsf{i}} \\
L_{\mathsf{e}}
\end{array}
\right) =\left(
\begin{array}{c}
C_{\mathsf{i}}\mathbf{a} \\
C_{\mathsf{e}}\mathbf{a}
\end{array}
\right) $ and $H=\mathbf{a}^{\dag }\Omega \mathbf{a}$, we have that the
feedback system is described by the reduced parameters $\left( S_{\mathrm{red%
}},L_{\mathrm{red}},H_{\mathrm{red}}\right) $ where
\begin{eqnarray}
S_{\mathrm{red}} &=&S_{\mathsf{ee}}+S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii%
}}\right) ^{-1}S_{\mathsf{ie}} \notag \\
L_{\mathrm{red}} &=&S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii}}\right)
^{-1}L_{\mathsf{i}}+L_{\mathsf{e}}, \notag \\
H_{\mathrm{red}} &=&H+\func{Im}\left\{ L_{\mathsf{i}}^{\dag }S_{\mathsf{ii}%
}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}L_{\mathsf{i}}\right\} +\func{Im}%
\left\{ L_{\mathsf{e}}^{\dag }S_{\mathsf{ei}}\left( \eta -S_{\mathsf{ii}%
}\right) ^{-1}L_{\mathsf{i}}\right\} . \notag \\
&& \label{general feedback law}
\end{eqnarray}
The same equations have been deduced in the nonlinear case by different
arguments \cite{QFN1}. Note the identity $\func{Im}\left\{ L_{\mathsf{i}%
}^{\dag }S_{\mathsf{ii}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}L_{\mathsf{i%
}}\right\} =\func{Im}\left\{ L_{\mathsf{i}}^{\dag }\eta \left( \eta -S_{%
\mathsf{ii}}\right) ^{-1}L_{\mathsf{i}}\right\} $, which holds since $S_{%
\mathsf{ii}}\left( \eta -S_{\mathsf{ii}}\right) ^{-1}=\eta \left( \eta -S_{%
\mathsf{ii}}\right) ^{-1}-1$ and $L_{\mathsf{i}}^{\dag }L_{\mathsf{i}}$ is
selfadjoint.
\bigskip
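As a sanity check on the theorem, the following sketch (Python; the names
are ours) compares the transfer function built from the reduced triple $%
\left( S_{\mathrm{red}},C_{\mathrm{red}},A_{\mathrm{red}}\right) $ with a
direct frequency-domain elimination of the internal channels from the
unconnected transfer function:
\begin{verbatim}
import numpy as np

m, ni, ne = 3, 2, 2                 # modes, internal and external channels
n = ni + ne
rng = np.random.default_rng(2)
Ci = rng.normal(size=(ni, m)) + 1j * rng.normal(size=(ni, m))
Ce = rng.normal(size=(ne, m)) + 1j * rng.normal(size=(ne, m))
C = np.vstack([Ci, Ce])
W = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
A = -0.5 * C.conj().T @ C - 1j * (W + W.conj().T) / 2
S = np.linalg.qr(rng.normal(size=(n, n))
                 + 1j * rng.normal(size=(n, n)))[0]
Sii, Sie = S[:ni, :ni], S[:ni, ni:]
Sei, See = S[ni:, :ni], S[ni:, ni:]
eta = np.array([[0., 1.], [1., 0.]])          # swap the two internal lines

G = np.linalg.inv(eta - Sii)                  # reduced model of the theorem
Sred = See + Sei @ G @ Sie
Cred = Sei @ G @ Ci + Ce
Ared = A - (Ci.conj().T @ Sii + Ce.conj().T @ Sei) @ G @ Ci

def xi(s, A_, C_, S_):
    k = C_.shape[0]
    return (np.eye(k) - C_ @ np.linalg.solve(
        s * np.eye(A_.shape[0]) - A_, C_.conj().T)) @ S_

s = 0.3 + 1.1j
F = xi(s, A, C, S)                            # unconnected transfer function
Fii, Fie, Fei, Fee = F[:ni, :ni], F[:ni, ni:], F[ni:, :ni], F[ni:, ni:]
direct = Fee + Fei @ np.linalg.solve(eta - Fii, Fie)
print(np.allclose(direct, xi(s, Ared, Cred, Sred)))   # True
\end{verbatim}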
\begin{remark}
Let $U$ be a unitary operator on a fixed Hilbert space $\frak{H}=\frak{H}%
_{1}\oplus \frak{H}_{2}$ which decomposes as $U=\left(
\begin{array}{cc}
U_{11} & U_{12} \\
U_{21} & U_{22}
\end{array}
\right) $. The non-commutative M\"{o}bius transform $\varphi
_{U}^{2\rightarrow 1}$ is the superoperator defined by
\begin{equation*}
\varphi _{U}^{2\rightarrow 1}\left( X\right) =U_{11}+U_{12}\left(
1-XU_{22}\right) ^{-1}XU_{21}
\end{equation*}
defined on the domain of operators $X$ on $\frak{H}_{2}$ for which the
inverse $\left( 1-XU_{22}\right) ^{-1}$ exists. The transform $\varphi
_{U}^{2\rightarrow 1}$ maps unitaries on $\frak{H}_{2}$ in its domain to
unitaries in $\frak{H}_{1}$ \cite{Young}.
\end{remark}
\begin{remark}
In particular, $S_{\mathrm{red}}$ is unitary as it equals $\varphi _{S}^{%
\mathsf{i}\rightarrow \mathsf{e}}\left( \xi \right) $ where $\xi =\eta ^{-1}$
with $\eta $ being unitary. We may expand the geometric series to write
\begin{equation*}
S_{\mathrm{red}}=S_{\mathsf{ee}}+S_{\mathsf{ei}}\xi S_{\mathsf{ie}}+S_{%
\mathsf{ei}}\xi S_{\mathsf{ii}}\xi S_{\mathsf{ie}}+S_{\mathsf{ei}}\xi S_{%
\mathsf{ii}}\xi S_{\mathsf{ii}}\xi S_{\mathsf{ie}}+\cdots =S_{\mathsf{ee}%
}+\sum_{n=0}^{\infty }S_{\mathsf{ei}}\xi \left( S_{\mathsf{ii}}\xi \right)
^{n}S_{\mathsf{ie}}
\end{equation*}
which shows that $S_{\mathrm{red}}$ can be built up from contributions from
the various paths through the network. Likewise
\begin{eqnarray*}
L_{\mathrm{red}} &=&L_{\mathsf{e}}+\sum_{n=0}^{\infty }S_{\mathsf{ei}}\xi
\left( S_{\mathsf{ii}}\xi \right) ^{n}L_{\mathsf{i}},\quad \\
H_{\mathrm{red}} &=&H+\sum_{n=0}^{\infty }\func{Im}\left\{ L_{\mathsf{i}%
}^{\dag }\left( S_{\mathsf{ii}}\xi \right) ^{n}L_{\mathsf{i}}\right\}
+\sum_{n=0}^{\infty }\func{Im}\left\{ L_{\mathsf{e}}^{\dag }S_{\mathsf{ei}%
}\xi \left( S_{\mathsf{ii}}\xi \right) ^{n}L_{\mathsf{i}}\right\} .
\end{eqnarray*}
\end{remark}
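The path-sum picture can likewise be tested numerically. Reusing the
matrices of the preceding sketch, the truncated series reproduces $S_{%
\mathrm{red}}$ whenever the spectral radius of $S_{\mathsf{ii}}\xi $ is
strictly below one:
\begin{verbatim}
xi_ = np.linalg.inv(eta)
acc, term = See.copy(), Sei @ xi_
for _ in range(500):
    acc = acc + term @ Sie
    term = term @ Sii @ xi_
print(np.allclose(acc, Sred))   # True when the series converges
\end{verbatim}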
\section{Systems in Series}
As a very special case of feedback connections we consider the situation of
systems in series. This is referred to as \textit{feedforward} in
engineering.
\begin{center}
\setlength{\unitlength}{.1cm}
\begin{picture}(100,22)
\label{pic3}
\thicklines
\put(30,5){\line(0,1){10}}
\put(30,5){\line(1,0){20}}
\put(50,5){\line(0,1){10}}
\put(30,15){\line(1,0){20}}
\put(60,5){\line(0,1){10}}
\put(60,5){\line(1,0){20}}
\put(80,5){\line(0,1){10}}
\put(60,15){\line(1,0){20}}
\thinlines
\put(32,10){\vector(-1,0){15}}
\put(62,10){\vector(-1,0){14}}
\put(92,10){\vector(-1,0){14}}
\put(33,10){\circle{2}}
\put(63,10){\circle{2}}
\put(47,10){\circle{2}}
\put(77,10){\circle{2}}
\put(35,11){$s_2$}
\put(65,11){$s_1$}
\put(42,11){$r_2$}
\put(72,11){$r_1$}
\end{picture}%
Figure 3: Cascaded systems
\end{center}
The individual transfer functions before the connection $e=\left(
s_{1},r_{2}\right) $ is made are given by $\Xi _{i}=\left[
\begin{tabular}{r|r}
$A_{i}$ & $-C_{i}^{\dag }S_{i}$ \\ \hline
$C_{i}$ & $S_{i}$%
\end{tabular}
\right] $ with $A_{i}=-\frac{1}{2}C_{i}^{\dag }C_{i}-i\Omega _{i}$. These
may be concatenated to give
\begin{equation*}
\Xi =\left[
\begin{tabular}{l|ll}
$A_{1}+A_{2}$ & $-C_{1}^{\dag }S_{1}$ & $-C_{2}^{\dag }S_{2}$ \\ \hline
$C_{1}$ & $S_{1}$ & $0$ \\
$C_{2}$ & $0$ & $S_{2}$%
\end{tabular}
\right] .
\end{equation*}
To use the formula for the reduced transfer function following connection,
we must first of all identify the internal (eliminated) and external fields:
here
\begin{equation*}
\mathbf{b}^{\text{in}}=\left(
\begin{array}{c}
\mathbf{b}_{\mathsf{i}}^{\text{in}} \\
\mathbf{b}_{\mathsf{e}}^{\text{in}}
\end{array}
\right) =\left(
\begin{array}{c}
\mathbf{b}_{2}^{\text{in}} \\
\mathbf{b}_{1}^{\text{in}}
\end{array}
\right) ,\quad \mathbf{b}^{\text{out}}=\left(
\begin{array}{c}
\mathbf{b}_{\mathsf{i}}^{\text{out}} \\
\mathbf{b}_{\mathsf{e}}^{\text{out}}
\end{array}
\right) \equiv \left(
\begin{array}{c}
\mathbf{b}_{1}^{\text{out}} \\
\mathbf{b}_{2}^{\text{out}}
\end{array}
\right) ,
\end{equation*}
and
\begin{equation*}
\left(
\begin{array}{cc}
S_{\mathsf{ii}} & S_{\mathsf{ie}} \\
S_{\mathsf{ei}} & S_{\mathsf{ee}}
\end{array}
\right) \equiv \left(
\begin{array}{cc}
0 & S_{1} \\
S_{2} & 0
\end{array}
\right) ,\quad L_{\mathsf{i}}\equiv L_{1},L_{\mathsf{e}}\equiv L_{2},
\end{equation*}
with trivially $\eta =1$. The reduced transfer function is then readily
computed to be
\begin{equation*}
\Xi _{\text{series}}=\left[
\begin{tabular}{r|r}
$A_{1}+A_{2}-C_{2}^{\dag }S_{2}C_{1}$ & $-\left( C_{2}^{\dag
}S_{2}+C_{1}^{\dag }\right) S_{1}$ \\ \hline
$C_{2}+S_{2}C_{1}$ & $S_{2}S_{1}$%
\end{tabular}
\right] .
\end{equation*}
Likewise we deduce the relations
\begin{equation}
S=S_{2}S_{1},\quad L=L_{2}+S_{2}L_{1},\quad H=H_{1}+H_{2}+\func{Im}\left\{
L_{2}^{\dag }S_{2}L_{1}\right\} . \label{special feedback law}
\end{equation}
The same equations have been deduced in the nonlinear case by different
arguments \cite{GJ Series}.
\subsection{Feedforward: Cascades}
If the two systems are truly distinct systems, that is, if they are
different sets of oscillators, then we are in the situation of properly
\textit{cascaded} systems. In this case one would expect the transfer
function to factor as the ordinary matrix product $\Xi _{\text{series}%
}\equiv \Xi _{2}\Xi _{1}$. We now show that this is indeed the case.
\begin{lemma}
Let $\Xi _{j}$ be transfer functions for $m_{j}$ oscillators coupled to $n$
fields $(j=1,2)$. If we consider the ampliated transfer functions for $%
m_{1}+m_{2}$ oscillators coupled to $n$ fields
\begin{eqnarray*}
\tilde{\Xi}_{1} &=&\left[
\begin{tabular}{c|c}
$\left(
\begin{array}{cc}
A_{1} & 0 \\
0 & 0
\end{array}
\right) $ & $\left(
\begin{array}{c}
-C_{1}^{\dag }S_{1} \\
0
\end{array}
\right) $ \\ \hline
$\left( C_{1},0\right) $ & $S_{1}$%
\end{tabular}
\right] , \\
\tilde{\Xi}_{2} &=&\left[
\begin{tabular}{c|c}
$\left(
\begin{array}{cc}
0 & 0 \\
0 & A_{2}
\end{array}
\right) $ & $\left(
\begin{array}{c}
0 \\
-C_{2}^{\dag }S_{2}
\end{array}
\right) $ \\ \hline
$\left( 0,C_{2}\right) $ & $S_{2}$%
\end{tabular}
\right] ,
\end{eqnarray*}
then
\begin{equation}
\tilde{\Xi}_{\text{series}}=\Xi _{2}\Xi _{1}.
\end{equation}
\end{lemma}
\begin{proof}
We compute this directly,
\begin{eqnarray*}
\tilde{\Xi}_{\text{series}} &=&\left[
\begin{tabular}{c|c}
$\left(
\begin{array}{cc}
A_{1} & 0 \\
-C_{2}^{\dag }S_{2}C_{1} & A_{2}
\end{array}
\right) $ & $\left(
\begin{array}{c}
-C_{1}^{\dag }S_{1} \\
-C_{2}^{\dag }S_{2}S_{1}
\end{array}
\right) $ \\ \hline
$\left( S_{2}C_{1},C_{2}\right) $ & $S_{2}S_{1}$%
\end{tabular}
\right] \\
&=&S_{2}S_{1}+\left( S_{2}C_{1},C_{2}\right) \left(
\begin{array}{cc}
s-A_{1} & 0 \\
C_{2}^{\dag }S_{2}C_{1} & s-A_{2}
\end{array}
\right) ^{-1}\left(
\begin{array}{c}
-C_{1}^{\dag }S_{1} \\
-C_{2}^{\dag }S_{2}S_{1}
\end{array}
\right) \\
&=&S_{2}S_{1}-\left( S_{2}C_{1},C_{2}\right) \left(
\begin{array}{ll}
\frac{1}{s-A_{1}} & 0 \\
-\frac{1}{s-A_{2}}C_{2}^{\dag }S_{2}C_{1}\frac{1}{s-A_{1}} & \frac{1}{s-A_{2}%
}
\end{array}
\right) \left(
\begin{array}{c}
C_{1}^{\dag }S_{1} \\
C_{2}^{\dag }S_{2}S_{1}
\end{array}
\right) \\
&=&\left[ S_{2}-C_{2}\left( s-A_{2}\right) ^{-1}C_{2}^{\dag }S_{2}\right]
\times \left[ S_{1}-C_{1}\left( s-A_{1}\right) ^{-1}C_{1}^{\dag }S_{1}\right]
,
\end{eqnarray*}
giving the result.
\end{proof}
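The factorisation is readily confirmed numerically; the sketch below (our
notation) assembles the cascaded model from the block formulas above and
checks $\tilde{\Xi}_{\text{series}}\left( s\right) =\Xi _{2}\left( s\right)
\Xi _{1}\left( s\right) $:
\begin{verbatim}
import numpy as np

m1, m2, n = 2, 3, 2
rng = np.random.default_rng(3)

def rand_system(m):
    C = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    W = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    S = np.linalg.qr(rng.normal(size=(n, n))
                     + 1j * rng.normal(size=(n, n)))[0]
    A = -0.5 * C.conj().T @ C - 1j * (W + W.conj().T) / 2
    return A, C, S

(A1, C1, S1), (A2, C2, S2) = rand_system(m1), rand_system(m2)

def xi(s, A, C, S):
    return (np.eye(n) - C @ np.linalg.solve(
        s * np.eye(A.shape[0]) - A, C.conj().T)) @ S

As = np.block([[A1, np.zeros((m1, m2))],
               [-C2.conj().T @ S2 @ C1, A2]])   # series generator
Cs = np.hstack([S2 @ C1, C2])
Ss = S2 @ S1

s = 0.2 + 0.5j
print(np.allclose(xi(s, As, Cs, Ss),
                  xi(s, A2, C2, S2) @ xi(s, A1, C1, S1)))   # True
\end{verbatim}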
\section{Beam Splitters}
A simple beam splitter is a device performing physical superposition of two
input fields. It is described by a fixed unitary operator $T=\left(
\begin{array}{cc}
\alpha & \beta \\
\mu & \nu
\end{array}
\right) \in U\left( 2\right) $:
\begin{equation*}
\left(
\begin{array}{c}
\mathbf{b}_{1}^{\text{out}} \\
\mathbf{b}_{2}^{\text{out}}
\end{array}
\right) =\left(
\begin{array}{cc}
\alpha & \beta \\
\mu & \nu
\end{array}
\right) \left(
\begin{array}{c}
\mathbf{b}_{1}^{\text{in}} \\
\mathbf{b}_{2}^{\text{in}}
\end{array}
\right) .
\end{equation*}
This is a canonical transformation and the output fields satisfy the same
canonical commutation relations as the inputs. The action of the beam
splitter is depicted in the figure below. On the left we have a traditional
view of the two inputs being split into two output fields. On the right we
have our view of the beam splitter as being a component with two input ports
and two output ports: we have sketched some internal detail to emphasize how
the inputs are scattered (superimposed); however, we shall usually just draw
this as a ``black box'' component in the following.
\begin{center}
\setlength{\unitlength}{.075cm}
\begin{picture}(100,40)
\label{pic4}
\thinlines
\put(64,10){\vector(-1,0){9}}
\put(64,25){\vector(-1,0){9}}
\put(66,10){\line(1,0){13}}
\put(66,25){\line(1,0){13}}
\put(90,10){\vector(-1,0){9}}
\put(90,25){\vector(-1,0){9}}
\put(66,11){\line(1,1){13}}
\put(66,24){\line(1,-1){13}}
\put(65,10){\circle{2}}
\put(65,25){\circle{2}}
\put(80,10){\circle{2}}
\put(80,25){\circle{2}}
\thinlines
\put(17,20){\vector(1,0){12}}
\put(15,5){\vector(0,1){12}}
\put(0,20){\vector(1,0){12}}
\put(15,23){\vector(0,1){12}}
\thicklines
\put(7,12){\line(1,1){16}}
\put(88,12){${\bf b}^{\rm in}_2$}
\put(88,27){${\bf b}^{\rm in}_1$}
\put(48,12){${\bf b}^{\rm out}_2$}
\put(48,27){${\bf b}^{\rm out}_1$}
\put(18,8){${\bf b}^{\rm in}_2$}
\put(0,22){${\bf b}^{\rm in}_1$}
\put(17,35){${\bf b}^{\rm out}_2$}
\put(30,22){${\bf b}^{\rm out}_1$}
\put(62,7){\dashbox(22,20)}
\end{picture}
Figure 4: Beam-splitter component.
\end{center}
To emphasize that the beam splitter is an input-output device of exactly the
form we have been considering up to now, let us state that its transfer
matrix function is
\begin{equation*}
\Xi _{\text{beam splitter}}=\left[
\begin{tabular}{c|c}
$0$ & $0$ \\ \hline
$0$ & $T$%
\end{tabular}
\right] \equiv T.
\end{equation*}
Our aim is to describe the effective Markov model for the feedback device
sketched below where the feedback is implemented by means of a beam
splitter. Here we have a component system, called the plant, in-loop and we
assume that it is described by the transfer function $\Xi _{0}=\left[
\begin{tabular}{c|c}
$A_{0}$ & $-C_{0}^{\dag }S_{0}$ \\ \hline
$C_{0}$ & $S_{0}$%
\end{tabular}
\right] $.
\begin{center}
\setlength{\unitlength}{.05cm}
\begin{picture}(120,80)
\label{pic5}
\thicklines
\put(30,40){\line(1,1){20}}
\thinlines
\put(10,50){\line(1,0){27}}
\put(10,50){\vector(1,0){10}}
\put(40,20){\line(0,1){27}}
\put(40,20){\vector(0,1){10}}
\put(40,53){\line(0,1){13}}
\put(40,53){\vector(0,1){10}}
\put(43,50){\line(1,0){77}}
\put(43,50){\vector(1,0){10}}
\put(120,20){\line(0,1){30}}
\put(40,20){\line(1,0){32}}
\put(120,20){\line(-1,0){32}}
\thicklines
\put(70,10){\line(0,1){20}}
\put(70,10){\line(1,0){20}}
\put(90,10){\line(0,1){20}}
\put(70,30){\line(1,0){20}}
\put(87,20){\circle{2}}
\put(73,20){\circle{2}}
\put(73,33){plant}
\put(10,55){${\bf b}_{1}^{\rm in}$}
\put(40,74){${\bf b}_{1}^{\rm out}$}
\put(83,54){${\bf b}_{2}^{\rm out}$}
\put(33,14){${\bf b}_{2}^{\rm in}$}
\end{picture}
Figure 5: Feedback using a beam-splitter.
\end{center}
It is more convenient to view this as the network sketched below.
\begin{center}
\setlength{\unitlength}{.075cm}
\begin{picture}(60,50)
\label{pic6}
\thicklines
\put(26,10){$s_1$}
\put(26,20){$s_2$}
\put(26,40){$r_3$}
\put(31,10){$r_1$}
\put(31,20){$r_2$}
\put(31,40){$s_3$}
\put(20,5){\line(0,1){20}}
\put(20,5){\line(1,0){20}}
\put(40,5){\line(0,1){20}}
\put(20,25){\line(1,0){20}}
\put(20,35){\line(0,1){10}}
\put(20,35){\line(1,0){20}}
\put(40,35){\line(0,1){10}}
\put(20,45){\line(1,0){20}}
\thinlines
\put(10,20){\line(1,0){13}}
\put(24,20){\circle{2}}
\put(10,40){\vector(1,0){5}}
\put(10,10){\line(1,0){13}}
\put(24,10){\circle{2}}
\put(24,40){\circle{2}}
\put(10,10){\vector(-1,0){5}}
\put(37,20){\line(1,0){13}}
\put(36,20){\circle{2}}
\put(37,10){\line(1,0){13}}
\put(36,10){\circle{2}}
\put(36,40){\circle{2}}
\put(50,20){\vector(-1,0){5}}
\put(50,10){\vector(-1,0){5}}
\put(10,20){\line(0,1){20}}
\put(10,40){\line(1,0){13}}
\put(50,40){\line(-1,0){13}}
\put(50,20){\line(0,1){20}}
\put(2,5){$ {\bf b}_{1}^{\rm out}$}
\put(55,5){$ {\bf b}_{1}^{\rm in}$}
\put(43,43){{\it plant}}
\put(42,14){{\it beam splitter} }
\end{picture}
Figure 6: Network representation.
\end{center}
Here we have the pair of internal edges $\left( s_{2},r_{3}\right) $ and $%
\left( s_{3},r_{2}\right) $. The transfer function for the network is
\begin{equation*}
\Xi _{\text{unconn.}}=\left[
\begin{tabular}{l|lll}
$A_{0}$ & 0 & 0 & $-C_{0}^{\dag }S_{0}$ \\ \hline
0 & $T_{11}$ & $T_{12}$ & 0 \\
0 & $T_{21}$ & $T_{22}$ & 0 \\
$C_{0}$ & 0 & 0 & $S_{0}$%
\end{tabular}
\right]
\end{equation*}
with respect to the labels $\left( 0,s_{1},s_{2},s_{3}\right) $ for the rows
and $\left( 0,r_{1},r_{2},r_{3}\right) $ for the columns. This time the
external fields are $\mathbf{b}_{\mathsf{e}}^{\text{in}}=\mathbf{b}_{1}^{%
\text{in}}$, $\mathbf{b}_{\mathsf{e}}^{\text{out}}=\mathbf{b}_{1}^{\text{out}%
}\equiv T_{11}\mathbf{b}_{1}^{\text{in}}+T_{12}\mathbf{b}_{2}^{\text{in}}$
while the (matched) internal fields are
\begin{equation*}
\mathbf{b}_{\mathsf{i}}^{\text{in}}=\left(
\begin{array}{c}
\mathbf{b}_{2}^{\text{in}} \\
\mathbf{b}_{3}^{\text{in}}
\end{array}
\right) ,\quad \mathbf{b}_{\mathsf{i}}^{\text{out}}=\left(
\begin{array}{c}
\mathbf{b}_{2}^{\text{out}} \\
\mathbf{b}_{3}^{\text{out}}
\end{array}
\right) \equiv \left(
\begin{array}{c}
T_{21}\mathbf{b}_{1}^{\text{in}}+T_{22}\mathbf{b}_{2}^{\text{in}} \\
S_{0}\mathbf{b}_{3}^{\text{in}}+L_{0}
\end{array}
\right) .
\end{equation*}
That is
\begin{equation*}
\begin{tabular}{ll}
$S_{\mathsf{ii}}=\left(
\begin{array}{cc}
T_{22} & 0 \\
0 & S_{0}
\end{array}
\right) ,$ & $S_{\mathsf{ie}}=\left(
\begin{array}{c}
T_{21} \\
0
\end{array}
\right) ,$ \\
$S_{\mathsf{ei}}=\left( T_{12},0\right) ,$ & $S_{\mathsf{ee}}=T_{11},$ \\
$L_{\mathsf{i}}=\left(
\begin{array}{c}
0 \\
L_{0}
\end{array}
\right) ,$ & $L_{\mathsf{e}}=0,\quad \eta =\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) .$%
\end{tabular}
\end{equation*}
Substituting into our reduction formula we obtain
\begin{eqnarray*}
S &=&T_{11}+
\begin{array}{c}
(T_{12},0)
\end{array}
\left(
\begin{array}{cc}
-T_{22} & 1 \\
1 & -S_{0}
\end{array}
\right) ^{-1}\left(
\begin{array}{c}
T_{21} \\
0
\end{array}
\right) \\
&\equiv &T_{11}+T_{12}\left( 1-S_{0}T_{22}\right) ^{-1}S_{0}T_{21}, \\
C &=&
\begin{array}{c}
(T_{12},0)
\end{array}
\left(
\begin{array}{cc}
-T_{22} & 1 \\
1 & -S_{0}
\end{array}
\right) ^{-1}\left(
\begin{array}{c}
0 \\
C_{0}
\end{array}
\right) \\
&\equiv &T_{12}\left( 1-S_{0}T_{22}\right) ^{-1}C_{0}, \\
\Omega &=&\Omega _{0}+\func{Im}
\begin{array}{c}
(L_{0}^{\dag },0)
\end{array}
\left(
\begin{array}{cc}
-T_{22} & 1 \\
1 & -S_{0}
\end{array}
\right) ^{-1}\left(
\begin{array}{c}
0 \\
L_{0}
\end{array}
\right) \\
&\equiv &\Omega _{0}+\func{Im}C_{0}^{\dag }\left( 1-S_{0}T_{22}\right)
^{-1}C_{0}.
\end{eqnarray*}
Hence, when the connections are made, the transfer matrix function is
\begin{equation*}
\Xi _{\text{conn.}}=\left[
\begin{tabular}{l|l}
$A_{0}-C_{0}^{\dag }S_{0}T_{22}\left( 1-S_{0}T_{22}\right) ^{-1}C_{0}$ & $%
-C_{\mathrm{red}}^{\dag }S_{\mathrm{red}}$ \\
\hline
$T_{12}\left( 1-S_{0}T_{22}\right) ^{-1}C_{0}$ & $T_{11}+T_{12}\left(
1-S_{0}T_{22}\right) ^{-1}S_{0}T_{21}$%
\end{tabular}
\right] .
\end{equation*}
Note that $S=\varphi _{T}^{2\rightarrow 1}\left( S_{0}\right) $ where $%
\varphi _{T}^{2\rightarrow 1}\left( z\right) =T_{11}+T_{12}\left(
z^{-1}-T_{22}\right) ^{-1}T_{21}$ is the M\"{o}bius transformation in the complex
plane associated with $T$.
If we further set $T=\left(
\begin{array}{cc}
\alpha & \beta \\
\mu & \nu
\end{array}
\right) $, and $x+iy=S_{0}\nu $, then
\begin{gather*}
C^{\dag }C=\left| \frac{\beta }{1-S_{0}\nu }\right| ^{2}C_{0}^{\dag }C_{0}=%
\frac{1-|\nu |^{2}}{|1-S_{0}\nu |^{2}}C_{0}^{\dag }C_{0}\equiv \frac{%
1-x^{2}-y^{2}}{\left( 1-x\right) ^{2}+y^{2}}C_{0}^{\dag }C_{0}, \\
\func{Im}C_{0}^{\dag }\left( 1-S_{0}\nu \right) ^{-1}C_{0}=\func{Im}\left\{
\frac{1}{1-x-iy}\right\} C_{0}^{\dag }C_{0}=\frac{y}{\left( 1-x\right)
^{2}+y^{2}}C_{0}^{\dag }C_{0}.
\end{gather*}
In particular, if we take a single oscillator in-loop with $S_{0}=e^{i\phi
_{0}}$, then we obtain $S\equiv e^{i\phi }$ and the phase is determined by
the M\"{o}bius transformation. If we further have $L_{0}=\sqrt{\gamma _{0}}a$%
, $H_{0}=\omega _{0}a^{\dag }a$, we find that $L\equiv e^{i\delta }\sqrt{%
\gamma }a$ and $H=\omega a^{\dag }a$ where
\begin{equation*}
\gamma =\frac{1-x^{2}-y^{2}}{\left( 1-x\right) ^{2}+y^{2}}\gamma _{0},\quad
\omega =\omega _{0}+\frac{y}{\left( 1-x\right) ^{2}+y^{2}}\gamma _{0},
\end{equation*}
and $\delta $ is a real phase. In the specific case $T=\left(
\begin{array}{cc}
\alpha & \beta \\
\beta & -\alpha
\end{array}
\right) $ with $S_{0}=1,\omega _{0}=0$\ considered by Yanagisawa and Kimura
\cite{YK1}, we have $x=-\alpha $ and $y=0$, therefore we find
\begin{equation*}
\gamma =\frac{1-\alpha }{1+\alpha }\gamma _{0},\quad \omega =0
\end{equation*}
which agrees with their findings.
\bigskip
An alternative computation of $\Xi $ is given by the following argument.
We consider the input-output relations
\begin{equation*}
\mathbf{\hat{b}}_{i}^{\text{out}}=\sum_{j=1,2}T_{ij}\mathbf{\hat{b}}_{j}^{%
\text{in}},\quad \mathbf{\hat{b}}_{2}^{\text{in}}=\Xi _{0}\mathbf{\hat{b}}%
_{2}^{\text{out}}+\xi _{0}\mathbf{a}_{0},
\end{equation*}
and eliminating $\mathbf{\hat{b}}_{2}^{\text{out}}\equiv \left( 1-T_{22}\Xi
_{0}\right) ^{-1}\left[ T_{21}\mathbf{\hat{b}}_{1}^{\text{in}}+T_{22}\xi _{0}%
\mathbf{a}_{0}\right] $ yields
\begin{equation*}
\mathbf{\hat{b}}_{1}^{\text{out}}=\left[ T_{11}+T_{12}\Xi _{0}\left(
1-T_{22}\Xi _{0}\right) ^{-1}T_{21}\right] \mathbf{\hat{b}}_{1}^{\text{in}%
}+T_{12}\left( 1-\Xi _{0}T_{22}\right) ^{-1}\xi _{0}\mathbf{a}_{0}.
\end{equation*}
That is
\begin{equation*}
\Xi =T_{11}+T_{12}\left( \Xi _{0}^{-1}-T_{22}\right) ^{-1}T_{21}=\varphi
_{T}^{2\rightarrow 1}\left( \Xi _{0}\right) .
\end{equation*}
We remark that if $T_{12}$ and $T_{21}$ are invertible, then we may invert
the M\"{o}bius transformation to get
\begin{equation*}
\Xi _{0}^{-1}=T_{22}+T_{21}\frac{1}{\Xi -T_{11}}T_{12}.
\end{equation*}
To illustrate with a cavity mode in-loop, we take the beam splitter matrix
to be $T=\left(
\begin{array}{cc}
\alpha & \beta \\
\beta & -\alpha
\end{array}
\right) $ with $\alpha ^{2}+\beta ^{2}=1$, and the transfer function $\Xi
_{0}\left( s\right) =\frac{s+i\omega -\gamma /2}{s+i\omega +\gamma /2}$,
then we find
\begin{equation*}
\Xi =\frac{\alpha +\Xi _{0}}{1+\alpha \Xi _{0}}=\frac{s+i\omega -\frac{%
1-\alpha }{1+\alpha }\frac{\gamma }{2}}{s+i\omega +\frac{1-\alpha }{1+\alpha
}\frac{\gamma }{2}}.
\end{equation*}
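For a quick scalar illustration (our notation), the sketch below checks that
the in-loop cavity behaves as a cavity with the rescaled linewidth $\frac{%
1-\alpha }{1+\alpha }\gamma $:
\begin{verbatim}
import numpy as np

a, gamma, omega = 0.6, 2.0, 0.5
Xi0 = lambda s: (s + 1j*omega - gamma/2) / (s + 1j*omega + gamma/2)
g_eff = (1 - a) / (1 + a) * gamma
Xi_eff = lambda s: (s + 1j*omega - g_eff/2) / (s + 1j*omega + g_eff/2)

s = 0.1 + 0.3j
print(np.isclose((a + Xi0(s)) / (1 + a * Xi0(s)), Xi_eff(s)))   # True
\end{verbatim}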
\section{The Redheffer Star Product}
An important feedback arrangement is shown in the figure below.
\begin{center}
\setlength{\unitlength}{.1cm}
\begin{picture}(60,65)
\label{pic7}
\thicklines
\put(20,10){\line(0,1){20}}
\put(20,10){\line(1,0){20}}
\put(40,10){\line(0,1){20}}
\put(20,30){\line(1,0){20}}
\put(20,40){\line(0,1){20}}
\put(20,40){\line(1,0){20}}
\put(40,40){\line(0,1){20}}
\put(20,60){\line(1,0){20}}
\thinlines
\put(24,14){\circle{2}}
\put(24,26){\circle{2}}
\put(36,14){\circle{2}}
\put(36,26){\circle{2}}
\put(24,44){\circle{2}}
\put(24,56){\circle{2}}
\put(36,44){\circle{2}}
\put(36,56){\circle{2}}
\put(23,14){\vector(-1,0){12}}
\put(49,14){\vector(-1,0){12}}
\put(11,56){\vector(1,0){12}}
\put(37,56){\vector(1,0){12}}
\put(23,26){\line(-1,0){8}}
\put(15,26){\vector(0,1){18}}
\put(23,44){\line(-1,0){8}}
\put(37,26){\line(1,0){8}}
\put(45,44){\vector(0,-1){18}}
\put(37,44){\line(1,0){8}}
\put(28,62){$A$}
\put(0,12){${\bf b}^{\rm out}_4$}
\put(-5,35){$ {\bf b}^{\rm out}_3 ={\bf b}^{\rm in}_2$}
\put(0,56){${\bf b}^{\rm in}_1$}
\put(28,4){$B$}
\put(50,12){${\bf b}^{\rm in}_4$}
\put(50,35){${\bf b}^{\rm out}_2 = {\bf b}^{\rm in}_3$}
\put(50,56){${\bf b}^{\rm out}_1$}
\put(26,14){$s_4$}
\put(26,26){$s_3$}
\put(26,44){$r_2$}
\put(26,56){$r_1$}
\put(31,14){$r_4$}
\put(31,26){$r_3$}
\put(31,44){$s_2$}
\put(31,56){$s_1$}
\end{picture}%
Figure 7: Composite System
\end{center}
We shall now derive the matrices for this system, taking component $A$ to be
described by $\left(
\begin{array}{cc}
S_{11}^{A} & S_{12}^{A} \\
S_{21}^{A} & S_{22}^{A}
\end{array}
\right) ,$ $\left(
\begin{array}{c}
C_{1}^{A} \\
C_{2}^{A}
\end{array}
\right) ,$ $\Omega _{A}$ and $B$ by $\left(
\begin{array}{cc}
S_{33}^{B} & S_{34}^{B} \\
S_{43}^{B} & S_{44}^{B}
\end{array}
\right) ,$ $\left(
\begin{array}{c}
C_{3}^{B} \\
C_{4}^{B}
\end{array}
\right) ,$ $\Omega _{B}$. The operators of system $A$ are assumed to commute
with those of $B$. We have two internal channels to eliminate, which we can
do in sequence or simultaneously. We shall do the latter. Here we have
\begin{eqnarray*}
\mathsf{S}_{\mathtt{ee}} &=&\left(
\begin{array}{cc}
S_{11}^{A} & 0 \\
0 & S_{44}^{B}
\end{array}
\right) ,\mathsf{S}_{\mathtt{ei}}=\left(
\begin{array}{cc}
S_{12}^{A} & 0 \\
0 & S_{43}^{B}
\end{array}
\right) \\
\mathsf{S}_{\mathtt{ie}} &=&\left(
\begin{array}{cc}
S_{21}^{A} & 0 \\
0 & S_{34}^{B}
\end{array}
\right) ,\mathsf{S}_{\mathtt{ii}}=\left(
\begin{array}{cc}
S_{22}^{A} & 0 \\
0 & S_{33}^{B}
\end{array}
\right)
\end{eqnarray*}
and
\begin{equation*}
\mathsf{L}_{\mathtt{e}}=\left(
\begin{array}{c}
L_{1}^{A} \\
L_{4}^{B}
\end{array}
\right) ,\quad \mathsf{L}_{\mathtt{i}}=\left(
\begin{array}{c}
L_{2}^{A} \\
L_{3}^{B}
\end{array}
\right) ,\quad \eta =\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) .
\end{equation*}
The parameters are therefore
\begin{eqnarray*}
S_{\star } &=&\left(
\begin{array}{cc}
S_{11}^{A} & 0 \\
0 & S_{44}^{B}
\end{array}
\right) +\left(
\begin{array}{cc}
S_{12}^{A} & 0 \\
0 & S_{43}^{B}
\end{array}
\right) \left(
\begin{array}{cc}
-S_{22}^{A} & 1 \\
1 & -S_{33}^{B}
\end{array}
\right) ^{-1}\left(
\begin{array}{cc}
S_{21}^{A} & 0 \\
0 & S_{34}^{B}
\end{array}
\right) \\
&=&\left(
\begin{array}{cc}
S_{11}^{A}+S_{12}^{A}S_{33}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}S_{21}^{A} & S_{12}^{A}\left( 1-S_{33}^{B}S_{22}^{A}\right) ^{-1}S_{34}^{B}
\\
S_{43}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right) ^{-1}S_{21}^{A} &
S_{44}^{B}+S_{43}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right) ^{-1}S_{22}^{A}S_{34}^{B}
\end{array}
\right) , \\
C_{\star } &=&\left(
\begin{array}{c}
C_{1}^{A} \\
C_{4}^{B}
\end{array}
\right) +\left(
\begin{array}{cc}
S_{12}^{A} & 0 \\
0 & S_{43}^{B}
\end{array}
\right) \left(
\begin{array}{cc}
-S_{22}^{A} & 1 \\
1 & -S_{33}^{B}
\end{array}
\right) ^{-1}\left(
\begin{array}{c}
C_{2}^{A} \\
C_{3}^{B}
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
C_{1}^{A}+S_{12}^{A}S_{33}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}C_{2}^{A}+S_{12}^{A}\left( 1-S_{33}^{B}S_{22}^{A}\right) ^{-1}C_{3}^{B}
\\
C_{4}^{B}+S_{43}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}C_{2}^{A}+S_{43}^{B}S_{22}^{A}\left( 1-S_{33}^{B}S_{22}^{A}\right)
^{-1}C_{3}^{B}
\end{array}
\right) ,
\end{eqnarray*}
\begin{equation*}
\begin{array}{l}
\Omega _{\star }=\Omega _{A}+\Omega _{B}+\func{Im}\left\{ C_{3}^{B\dag
}\left( 1-S_{33}^{B}S_{22}^{A}\right) ^{-1}C_{3}^{B}+C_{3}^{B\dag }\left(
1-S_{33}^{B}S_{22}^{A}\right) ^{-1}S_{33}^{B}C_{2}^{A}\right. \\
+C_{2}^{A\dag }\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}S_{22}^{A}C_{3}^{B}+C_{2}^{A\dag }\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}C_{2}^{A} \\
+C_{1}^{A\dag }S_{12}^{A}\left( 1-S_{33}^{B}S_{22}^{A}\right)
^{-1}C_{3}^{B}+C_{1}^{A\dag }S_{12}^{A}\left( 1-S_{33}^{B}S_{22}^{A}\right)
^{-1}S_{33}^{B}C_{2}^{A} \\
\left. +C_{4}^{B\dag }S_{43}^{B}\left( 1-S_{22}^{A}S_{33}^{B}\right)
^{-1}S_{22}^{A}C_{3}^{B}+C_{4}^{B\dag }S_{43}^{B}\left(
1-S_{22}^{A}S_{33}^{B}\right) ^{-1}C_{2}^{A}\right\} .
\end{array}
\end{equation*}
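As a check on the algebra, the sketch below (scalar blocks; names ours)
verifies that $S_{\star }$ is again unitary when the component scattering
matrices are:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
def rand_u(k):
    Z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    return np.linalg.qr(Z)[0]

SA, SB = rand_u(2), rand_u(2)   # [[S11,S12],[S21,S22]] and [[S33,S34],[S43,S44]]
S11, S12, S21, S22 = SA[0, 0], SA[0, 1], SA[1, 0], SA[1, 1]
S33, S34, S43, S44 = SB[0, 0], SB[0, 1], SB[1, 0], SB[1, 1]

d = 1 - S22 * S33
Sstar = np.array([[S11 + S12 * S33 * S21 / d, S12 * S34 / d],
                  [S43 * S21 / d, S44 + S43 * S22 * S34 / d]])
print(np.allclose(Sstar @ Sstar.conj().T, np.eye(2)))   # True
\end{verbatim}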
\section{Appendix (Stratonovich to It\={o} Conversion)}
It is convenient to introduce integrated fields
\begin{equation*}
B_{i}\left( t\right) \equiv \int_{0}^{t}b_{i}\left( s\right) ds,\quad B_{i}^{\dag
}\left( t\right) \equiv \int_{0}^{t}b_{i}^{\dag }\left( s\right) ds,\quad \Lambda
_{ij}\left( t\right) \equiv \int_{0}^{t}b_{i}^{\dag }\left( s\right)
b_{j}\left( s\right) ds.
\end{equation*}
$B_{i}\left( t\right) $ and $B_{i}^{\dag }\left( t\right) $ are called the
annihilation and creation process, respectively, for the $i$th field and
collectively are referred to as a quantum Wiener process. $\Lambda
_{ij}\left( t\right) $ is called the gauge process or scattering process
from the $j$th field to the $i$th field. A noncommutative version of the Ito
theory of stochastic integration with respect to these processes can be
built up. The quantum It\={o} table giving the product of infinitesimal
increments of these process is
\begin{equation*}
\begin{tabular}{l|llll}
$\times $ & $dB_{k}$ & $d\Lambda _{kl}$ & $dB_{l}^{\dag }$ & $dt$ \\ \hline
$dB_{i}$ & 0 & $\delta _{ik}dB_{l}$ & $\delta _{il}dt$ & 0 \\
$d\Lambda _{ij}$ & 0 & $\delta _{jk}d\Lambda _{il}$ & $\delta
_{jl}dB_{i}^{\dag }$ & 0 \\
$dB_{j}^{\dag }$ & 0 & 0 & 0 & 0 \\
$dt$ & 0 & 0 & 0 & 0
\end{tabular}
.
\end{equation*}
The It\={o} equation for the unitary process is then $dV=\left( dG\right) V$
where
\begin{equation*}
dG=\sum_{i,j=1}^{n}(S_{ij}-\delta _{ij})d\Lambda
_{ij}+\sum_{i=1}^{n}L_{i}dB_{i}^{\dag }-\sum_{i,j=1}^{n}L_{i}^{\dag
}S_{ij}dB_{j}-\left( \frac{1}{2}\sum_{i=1}^{n}L_{i}^{\dag }L_{i}+iH\right) dt.
\end{equation*}
The Stratonovich form is $dV=-i\left( dE\right) \circ V$ where
\begin{equation*}
dE=\sum_{i,j=1}^{n}E_{ij}d\Lambda _{ij}+\sum_{i=1}^{n}F_{i}dB_{i}^{\dag
}+\sum_{j=1}^{n}F_{j}^{\dag }dB_{j}+Kdt,
\end{equation*}
and we define the Stratonovich differential to be $\left( dX\right) \circ
Y=\left( dX\right) Y+\frac{1}{2}\left( dX\right) \left( dY\right) $ with the
last term computed using the It\={o} table. We have the consistency
condition $dV=\left( dG\right) V\equiv -i\left( dE\right) V-\frac{i}{2}%
\left( dE\right) \left( dG\right) V$ or
\begin{equation*}
dG=-idE-\frac{i}{2}\left( dE\right) \left( dG\right) ,
\end{equation*}
and using the table we see that
\begin{eqnarray*}
S-1 &\equiv &-iE-\frac{i}{2}E\left( S-1\right) \\
L &=&-iF-\frac{i}{2}EL \\
-\frac{1}{2}L^{\dag }L-iH &=&-iK-\frac{i}{2}F^{\dag }L
\end{eqnarray*}
which can be solved to give the relations $\left( \ref{Strat-Ito}\right) $.
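For the reader's convenience, carrying out this elimination explicitly
(assuming $1+\frac{i}{2}E$ to be invertible and, as required for unitarity
of $V$, $E=E^{\dag }$ and $K=K^{\dag }$) gives the Cayley-transform solution
\begin{equation*}
S=\left( 1+\frac{i}{2}E\right) ^{-1}\left( 1-\frac{i}{2}E\right) ,\quad
L=-i\left( 1+\frac{i}{2}E\right) ^{-1}F,\quad H=K-\frac{1}{4}L^{\dag }EL,
\end{equation*}
with the last expression manifestly selfadjoint.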
\section{The New Physics Potential of GENIUS}
\subsection{Introduction}
Two outstanding problems in contemporary astro- and particle physics
are the nature of the dark matter in the Universe and the question for
the neutrino mass. There is compelling evidence on all cosmological
scales that the dominant form of matter is nonbaryonic
\cite{kolb94}. Attractive
candidates for nonbaryonic dark matter are relic elementary particles
left over from the big bang, the three most promising being neutrinos,
axions and neutralinos \cite{jkg96}.
There is significant evidence from theories of structure
formation against neutrinos as the bulk of dark matter. A mixed hot
plus cold dark matter scenario still gives the better fit to the CMB and
LSS-data \cite{eric98}. Recently $\Lambda$CDM scenarios have become
the most attractive ones \cite{turner99}.
To address both issues we propose the GENIUS experiment
\cite{Kla98,KK2}.
The optimal locations would be
the Gran Sasso or WIPP underground laboratories.
GENIUS, using
ionization in a Ge detector as detection technique, would operate naked Ge
crystals in ultrapure liquid nitrogen.
The aim is to reach the background level of 10$^{-3}$ events/kg y keV
in the low energy region, thus to cover most of the parameter space
predicted for neutralinos in the MSSM, and
to be sensitive to the low-energy pp and $^7$Be solar neutrino flux. In the energy region of the
$0\nu\beta\beta$-decay of $^{76}$Ge the goal is to reach a count rate of 0.3
events/t y keV, thus testing the effective Majorana neutrino mass down
to 0.01 eV for one ton of enriched $^{76}$Ge (0.001 eV for ten
tons). While for dark matter search only 100 kg of natural Ge are needed as
detectors, an amount of the order of one ton of (natural
or enriched) Ge would allow one to observe pp neutrinos for the first
time in a real-time measurement.
In addition to the unique information on neutrino masses and mixings
obtainable, GENIUS would allow also a breakthrough into the multi-TeV range
for many other beyond standard models of particle physics, such as
supersymmetry (R-parity breaking, sneutrino mass), compositeness, right-handed
W boson mass, test of special relativity and equivalence principle in the
neutrino sector, and others, competitive to corresponding research at future
high-energy colliders.
\subsection{Direct Dark Matter Detection}
\subsubsection{Three dark matter problems}
There is evidence on all cosmological scales that most of the matter
in our Universe is dark. The quantity and composition of dark matter
is of fundamental importance in cosmology.
We know of three so-called dark matter problems at present:
the bulk of baryonic matter is dark; the dominant form of matter
is nonbaryonic; an additional dark, exotic form of energy contributes
about 60\% to the critical density $\Omega_0$.
A precise determination of the universal baryon density is provided by
the big-bang nucleosynthesis (BBN).
Comparison of the measured primeval
deuterium abundance with its big-bang prediction yields a baryon density
of $\Omega_B$h$^2$ = 0.019 $\pm$ 0.0012 \cite {Bur99}.
However, clusters of galaxies account only for about 10\% of the baryons
\cite{per92}.
Promising candidates for the dark baryons are so-called MACHOs.
The search for MACHOs in the halo of our own galaxy, in form of
planets, white and brown dwarfs or primordial black holes, exploits the
gravitational microlensing effect, the temporary brightening
of a background star as an unseen object passes close to the line of sight.
For several years two groups have been monitoring the brightness of millions
of stars in the Magellanic clouds, the MACHO \cite {macho} and the EROS
\cite {eros98} collaborations.
Several candidates have already been
detected. If interpreted as dark matter they would make up half the
amount needed in the galactic halo. The most probable mass of
these candidates, which can be inferred from the duration of a star's
brightening together with the lens distance, is about half the solar
mass. However, no stellar candidate seems able to explain the observations.
Measurements of carbon abundances in Lyman $\alpha$ forest lines speak
against white dwarfs, even if their masses lie in the
expected region \cite{free99}. The possibility remains, that MACHOs are
an exotic form of baryonic matter, like primordial
black holes, or that they are not located in the halo of our galaxy
\cite{griest99}.
Two events which have been discovered in the direction of the Small
Magellanic Cloud underline this hypothesis; both
lenses are stars within the satellite galaxy
itself \cite{moniez}.
Clusters of galaxies provide very reliable methods of estimating
the total matter density \cite{white}.
They are the largest observed structures, which in part already attained
hydrostatic equilibrium. The
relative amount of baryons and dark matter within their hydrostatic
region provides a measure of the cosmic mix of these components.
The baryonic component of rich clusters is dominated by the X-ray
emitting intracluster gas. Using the cluster baryon fraction
determined from X-ray measurements and assuming that clusters provide
a fair sample of matter in the Universe, a matter density of about a
third of the critical density is inferred \cite{gus}.
There are many other methods of inferring the total matter density,
involving different physics.
For example, compelling evidence for both baryonic and nonbaryonic dark
matter comes from observation of the rotation curves of galaxies. In
particular the rotation curves of dwarf spirals are completely dark
matter dominated \cite{burkert}.
Also distant field galaxies, which are much
younger than nearby galaxies are entirely embedded in dark halos
\cite{fuchs}.
This is indeed expected from
theories of galaxy cosmogony, where the dark
matter haloes themselves are thought to be the sites of galaxy formation.
On the other side, there is strong evidence from structure formation
that the total matter density, $\Omega_M$, is significantly greater than the
density of baryons, $\Omega_B$ \cite{dodelson}.
Evidence for an additional dark, smoothly distributed form of energy
comes from observation of distant supernovae of type Ia.
By measuring the deviation of the Hubble law from linearity at high
redshifts, the acceleration or deceleration of the expansion can be
determined. Objects with well understood properties,
which can be observed up to very large distance (standard
candles)
are type Ia supernovae, thermonuclear explosions of white dwarfs in
binary systems. Two groups succeeded in measuring distances
to some 50 supernovae type Ia \cite{perl,riess}.
The measurements indicate that the
Universe is speeding up, the simplest explanation being a so-called
cosmological constant, a smooth contribution to the energy density of
the Universe, which cannot clump. Such a contribution is also
supported by inflation, since the dynamically determined
matter density ($\Omega_M$ $\simeq$ 0.4) is too low to yield the predicted
flat Universe ($\Omega_0$ = 1).
A summary of the matter/energy composition of the Universe is shown
in Fig. \ref{turner} (from \cite{turner99}).
\begin{figure}[h!]
\epsfysize=8cm
\hspace*{1.8cm}
\epsfbox{omega_sum.eps}
\caption{Summary of an overall accounting of matter and energy in the
Universe, from \cite{turner99}}
\label{turner}
\end{figure}
\subsubsection{Nonbaryonic dark matter candidates}
Of the nonbaryonic dark matter candidates proposed in the 1980s,
only WIMPs (weakly interacting massive particles) and axions survived
\cite{kamion}.
WIMPs, which arise in supersymmetric or
other theories beyond the standard model of particle physics, were in
thermal equilibrium with other particles during the early phase of the
Universe.
Their abundance depends only on their annihilation
cross section, but not on the WIMP mass.
Their annihilation cross section must be of the
order of a typical weak interaction cross section if their abundance is to be of
cosmological relevance today.
If supersymmetry is realized in
nature on the required energy scale, then the lightest supersymmetric
particle (LSP), the neutralino, shows exactly the properties of a WIMP
\cite{kamion}.
Axions, the other leading dark matter particle candidates, were
postulated two decades ago to explain why the strong interaction
conserves the CP-symmetry, which is violated in the standard model.
Axions could have been produced during the QCD phase
transition; their masses are constrained by experimental searches,
astrophysical and cosmological arguments to the order of 10$^{-5}$ eV.
Since axions couple to two photons, they can be detected by stimulating
their conversion to
photons in a cavity permeated by a strong static magnetic field
\cite{sikivie}.
In order to cover
a wide mass range the cavity must be tuned to different frequencies.
Two pilot experiments in Brookhaven \cite{rbf} and Florida \cite{uf}
already demonstrated the
feasibility of the cavity detection method; the second generation
experiments presently under way at the Lawrence Livermore National
Laboratory \cite{lnll} and at the Kyoto University \cite{ku}
will have a sensitivity which
is sufficient to discover axions if they populate the galactic halo.
Besides these dark matter candidates, another class of candidates,
the so-called superheavy dark matter, emerged. If one gives up the
assumption that the particle was in thermal equilibrium in the early
universe, then its present abundance is
no longer determined by the annihilation cross section and much
heavier particles, so-called WIMPZILLAs, are allowed \cite{rockyI}.
There are two necessary conditions for WIMPZILLAs, they
must be stable, or at least have a lifetime much greater than the age
of the universe and their interaction rate must be sufficiently weak
such that thermal equilibrium with the primordial plasma was never
obtained. For this, the particle must be extremely massive, of the
order of the Hubble parameter at the end of inflation and the
annihilation rate per particle must be smaller than the expansion rate
of the Universe \cite{rockyI}.
Another kind of heavy particles, which could form a natural dark matter
candidate, are stable baryonic Q-balls \cite{kusenko}.
They are predicted by supersymmetric models and could
have been produced during the baryogenesis epoch.
In this scenario, the baryonic matter and the dark matter are
produced in the same process, therefore it is easy to understand why the
observed abundances of these are in the same order of magnitude.
The way to detect Q-balls depends on their ability
to pick up electric charge as they travel through ordinary matter.
Electrically charged superballs would lose their energy in atomic
collisions, their expected signature is similar to those of
nuclearites, which are searched for in the MACRO experiment.
Non observation of these gives a lower limit on the baryon number of
dark matter Q-balls of 10$^{21}$. The signature of electrically
neutral Q-balls are similar to those expected from GUT-monopoles,
the lower limit from the Baikal experiment on their baryonic charge is
10$^{23}$. These limits will be improved by future experiments,
like AMANDA or ANTARES. However, for covering the entire
cosmologically interesting range a detector with an area of several
square kilometers would be needed \cite{kusenko}.
\subsubsection{Status of the direct search for WIMPs}
If WIMPs populate the halo of our galaxy they could be detected directly
in low background laboratory experiments or indirectly through
their annihilation products in the halo, the centre of the Sun or
Earth.
The goal of the direct detection experiments is to look for
the elastic scattering
of a WIMP off nuclei in a low-background detector. The recoil nucleus loses
its energy through ionization and thermal processes. The methods to
detect this energy loss range from scintillators, ionization
detectors and bolometers to superheated droplet or superconducting granular
detectors.
The deposited energy for neutralinos
with masses between 10 GeV and 1 TeV is below 100 keV \cite{jkg96}.
For a standard halo comprised of WIMPs with a Maxwellian velocity
distribution characterized by v$_{rms}$ = 270 km/s and a mass density
of 0.4 GeV/cm$^3$, the expected event
rates are well below 1 event per kilogram of target material and day
\cite{jkg96}.
This makes any experimental attempt to directly detect neutralinos a great
technical challenge, requiring a large detector mass, a low energy
threshold, a low background and/or an effective background discrimination
technique.
Direct detection experiments operating so far have reached
sensitivities low enough to enter the parameter space predicted for
neutralinos in the MSSM.
The best current limits on WIMP-nucleon cross section come from the
DAMA NaI experiment \cite{DAMA}, from the Heidelberg-Moscow
experiment \cite{HM98} and from CDMS \cite{cdms98}
(see Figure \ref{limits}).
\begin{figure}[h]
\epsfysize=8cm
\epsfbox{wimp100.eps}
\caption{Total measured spectrum with one enriched $^{76}$Ge detector
of the Heidelberg-Moscow experiment after an exposure of 0.69 kg yr
and
a theoretical spectrum for a 100 GeV WIMP.}
\label{ang2}
\end{figure}
The Stanford CDMS (Cold Dark Matter Search) experiment \cite{rick97}
uses thermal detectors of
ultrapure germanium and silicon which are operated at a temperature of 20
mK. The simultaneous measurement of both ionization and phonon signals
allows the discrimination of a nuclear recoil event from an electron
interaction. This represents a very effective background suppression method.
For the moment the experiment is located at the Stanford Underground Facility,
10.6 m below ground, where it is planned to obtain an exposure of 100 kg d
with two silicon and four germanium devices. For the future
it is planned to operate the detector in the Soudan
Mine with 2000 mwe overburden, which will reduce the muon flux by
five orders of magnitude and thus reduce the cosmogenic activities and
the neutron background.
Fig. \ref{limits} shows the
expected sensitivity at the Stanford site and at the Soudan site.
The DAMA experiment is running 115.5 kg NaI detectors in the Gran
Sasso Underground Laboratory \cite{Ber97}. The high obtainable statistics
opens the possibility to look for a WIMP signature, as a
variation of the event rate due to the movement of the Sun in the
galactic halo and the Earth rotation around the Sun. The analysis of
about 54 kg yr in terms of a WIMP annual modulation signature
favours a positive signal, the allowed region of WIMP masses and cross sections
is well embedded in the minimal supersymmetric parameter space
predicted for neutralinos \cite{damaevid,Bot98}.
However, a further confirmation by DAMA and by other experiments must
be awaited.
\begin{figure}[h]
\epsfysize=9cm
\epsfbox{detindet6.ps}
\caption{Schematic figure of the HDMS detector.}
\label{detindet}
\end{figure}
The Hei\-del\-berg-Moscow \cite{HM97}
and HDMS (Heidelberg Dark Matter Search) \cite{Bau97}
experiments are both located in the Gran Sasso Laboratory,
where the muon component of the cosmic rays is reduced to one part in
a million.
The Heidelberg-Moscow experiment, which also searches
for the neutrinoless double beta decay in enriched $^{76}$Ge, gives
at present the most stringent limits on the WIMP-nucleon cross section
for spin-independent interactions,
using raw data without pulse shape analysis \cite{HM98}.
Fig. \ref{ang2} shows
the measured spectrum with one enriched $^{76}$Ge detector
after an exposure of 0.69 kg yr and a calculated WIMP spectrum for a 100 GeV
WIMP.
\begin{figure}[h]
\epsfysize=7cm
\epsfbox{hdms_gs_2.ps}
\caption{The HDMS detector during its installation in the Gran Sasso
Laboratory.}
\label{hdms}
\end{figure}
HDMS, which is a
dedicated dark matter experiment \cite{Bau97},
aims to improve this limit by one
order of magnitude. As in the Heidelberg-Moscow experiment, the
aim is to look for a small ionization signal inside a high purity Ge crystal.
The actual dark matter target, a 200 g crystal made of natural Ge,
is surrounded by a
well-type Ge crystal (see Fig. \ref{detindet}).
The outer detector acts as an effective veto against
multiply scattered photons, allowing the background
originating from the latter to be suppressed by a factor of 6-10.
The HDMS prototype (see Fig. \ref{hdms}) started to measure in April 1998,
while the full HDMS experiment (inner crystal made of enriched $^{73}$Ge
and a new cryostat system of selected copper) will start measuring in
the course of the year 2000.
With the expected sensitivity (see Fig. \ref{limits}) it will be able
to test, like CDMS, the complete DAMA evidence region.
\subsubsection{GENIUS as a dark matter detector}
For an almost complete covering of the MSSM parameter space, an
increase in sensitivity by more than three orders of magnitude relative
to running experiments is required.
The GENIUS experiment could accomplish this task by operating about 40
`naked' natural Ge detectors (100 kg) in a tank of ultrapure liquid nitrogen.
The idea is to increase the target mass to 100 kg while decreasing
at the same time the absolute background by a considerable amount.
The final goal would be to reach a background level of
10$^{-3}$ events/kg y keV in the energy region relevant for dark matter
searches.
The energy threshold would be about 11 keV, the energy resolution
being better than 0.3 \%.
With 100 kg of target material, GENIUS could
also look for a WIMP signature in form of an annual modulation of
the WIMP-signal.
A comparable sensitivity can be reached in principle
only by LHC. An advantage of GENIUS is that it will be particulary
sensitive in regions of large tan$\beta$ in the minimal SUGRA space,
where conventional signals for supersymmetry in collider experiments
are difficult to detect. Thus, if the parameter tan$\beta$ is large,
then the first direct evidence for supersymmetry could come from
GENIUS, rather than from collider searches for sparticles
\cite{Bae97}. But also if SUSY will be detected by collider
experiments, it would still be fascinating - and necessary - to verify
the existence and properties of neutralino dark matter.
\begin{figure}[h]
\epsfysize=8cm
\epsfbox{dmall_all.eps}
\caption{WIMP--nucleon cross section limits as a function of the WIMP
mass. The hatched region is excluded by the Heidelberg--Moscow
\cite{HM98} and the DAMA experiment \cite{DAMA}, the dashed lines are
expectations for recently started or future experiments, like HDMS
\cite{Bau97}, CDMS \cite{cdms98} and CRESST \cite{cresst96}. The
filled contour represents the 2$\sigma$ evidence region of the DAMA
experiment \cite{damaevid}.
The thick solid
line denotes the expectation for the GENIUS project with a background
level of 0.01 counts/(keV kg y), an energy threshold of 11 keV
and an exposure of 300 kg yr.
The experimental limits are compared to
expectations (scatter plot) for WIMP--neutralinos calculated in the
MSSM framework with non--universal scalar mass unification \cite{Bed97c}.}
\label{limits}
\end{figure}
Fig. \ref{limits} shows a comparison of existing constraints and future
sensitivities of cold dark matter experiments, together with the
theoretical expectations for neutralino scattering rates
\cite{Bed97b}.
Obviously, GENIUS could easily cover the range of positive
evidence for dark matter
recently claimed by DAMA \cite{Ber97a,Bot97}.
It would also be far more
sensitive than all other dark matter experiments at present under construction
or proposed, like the cryogenic experiment CDMS. Furthermore,
GENIUS would obviously be the only experiment
that could seriously test the MSSM predictions over a large part of the
SUSY parameter space. In this way, GENIUS could compete even
with LHC in the search for SUSY, see for example the discussion
in \cite{Bae97}. It is important to note that GENIUS could reach the
sensitivity shown in Fig. \ref{limits} with only 100 kg of {\it natural} Ge
detectors in a measuring time of three years \cite{Kla98d}.
\subsection{Double Beta Decay}
Strong hints for neutrino masses are given by the atmospheric and
solar neutrino data, in particular by the SuperKamiokande confirmation
of the atmospheric neutrino deficit \cite{SuperK}. However, neutrino oscillation
experiments can measure only differences of squared neutrino masses. In view of the
SuperKamiokande data and of the dark matter problem, a determination
of the absolute neutrino mass scale should become a high priority.
A method for measuring the Majorana neutrino mass is provided by
neutrinoless double beta decay, which is at the same time a unique method of
discerning between a Majorana and a Dirac neutrino.
The current most stringent experimental limit on the effective
Majorana neutrino mass, $\langle {\rm m} \rangle < $0.2 eV, comes from the
Heidelberg-Moscow experiment \cite{Bau99a}. Future planned experiments like NEMO or
KAMLAND will, like the Heidelberg-Moscow experiment, improve this
limit at best by a factor of two.
Fig. \ref{mass_time} gives the status of the presently most sensitive double beta experiments and of future plans.
For a significant step beyond this
limit, much higher source strengths and lower background levels are
needed.
This goal could be accomplished by the GENIUS experiment operating 300
detectors made of enriched $^{76}$Ge, (1 ton) in a liquid nitrogen
shielding
(see Fig. \ref{tank_sch}).
\begin{figure}[h]
\epsfysize=10cm
\hspace*{1.8cm}
\epsfbox{genius_krauss.ps}
\caption{Schematic view of the GENIUS experiment.}
\label{tank_sch}
\end{figure}
GENIUS would search for the $0\nu\beta\beta$ decay of $^{76}$Ge at the
Q-value of 2038.56 $\pm$ 0.32 keV \cite{hyka91}. The aim is to reach a sensitivity of
$\langle {\rm m} \rangle < $0.01 eV after one year of measuring
time. In an extended version using 10 tons of $^{76}$Ge,
a sensitivity of 0.001 eV could be reached.
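The size of the required improvement can be read off from the rate formula
quoted in subsubsection 1.3.1.1 below: since the $0\nu\beta\beta$ rate scales
as $\langle m_{\nu} \rangle^2$, the half-life sensitivity must grow
quadratically with the inverse mass sensitivity (a rough orientation,
ignoring backgrounds and nuclear matrix element uncertainties):
\begin{equation}
\frac{T^{0\nu}_{1/2}(\langle m \rangle = 0.01\ {\rm eV})}
     {T^{0\nu}_{1/2}(\langle m \rangle = 0.2\ {\rm eV})}
= \Big(\frac{0.2}{0.01}\Big)^2 = 400 .
\end{equation}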
Already the first step will have a
striking influence on presently discussed neutrino mass scenarios. The
potential of GENIUS would also allow a breakthrough into the multi-TeV
range for many beyond-standard models. It will give information on
supersymmetry (R-parity breaking, sneutrino mass), leptoquarks
(leptoquark-Higgs coupling or leptoquark mass), compositeness,
the right-handed W boson mass, tests of special relativity and of the
equivalence principle in the neutrino
sector, and others, competitive with
corresponding results from future high-energy colliders.
The sensitivity of GENIUS in the neutrino sector would be larger than that of
many present terrestrial neutrino oscillation
experiments, and it would provide information complementary
to that expected from planned future experiments.
GENIUS with one ton would be able to check the LSND indication for
neutrino oscillations and GENIUS with ten tons could probe directly
the large angle solution of the solar neutrino problem. For an almost
degenerate neutrino mass scenario it could even probe the small angle
solution of the solar neutrino problem.
This potential has been described recently in various papers \cite{KK2,Kla97d,Pan99,KPS}.
In the following subsubsections we give a short background of the general potential of double beta decay, report on the status of double beta decay experiments, proposals and results, and outline the potential of GENIUS for the investigation of neutrino masses and mixings and of other beyond-standard-model physics.
\subsubsection{General new physics potential of double beta decay}
Double beta decay can occur in several decay modes (Fig. \ref{fig1-paes}):
\begin{equation}
^{A}_{Z}X \rightarrow\ ^{A}_{Z+2}X + 2 e^- + 2 {\overline \nu}_e
\end{equation}
\begin{equation}
^{A}_{Z}X \rightarrow\ ^{A}_{Z+2}X + 2 e^-
\end{equation}
\begin{equation}
^{A}_{Z}X \rightarrow\ ^{A}_{Z+2}X + 2 e^- + \phi
\end{equation}
\begin{equation}
^{A}_{Z}X \rightarrow\ ^{A}_{Z+2}X + 2 e^- + 2\phi
\end{equation}
the last three of which violate lepton number conservation by $\Delta L=2$.
For the neutrinoless mode
(2) we expect a sharp line at $E=Q_{\beta\beta}$; for the two--neutrino mode
and the various Majoron--accompanied modes, classified by their spectral index,
we expect continuous spectra (see Fig. \ref{fig2-paes}).
Important for particle physics are the decay modes (2)--(4).
\begin{figure}
\epsfxsize=90mm
\epsfbox{fig7a_2.eps}
\caption{Schematic representation of $2\nu$ and $0\nu$ double beta decay.}
\label{fig1-paes}
\end{figure}
\begin{figure}
\epsfxsize=90mm
\epsfbox{b2_1.eps}
\caption{Spectral shapes of the different modes of double beta decay;
$n$ denotes the spectral index, with $n=5$ for $2\nu\beta\beta$ decay.}
\label{fig2-paes}
\end{figure}
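The spectral index $n$ characterizes the shape of the summed electron energy
spectrum; schematically, near the endpoint,
\begin{equation}
\frac{dN}{dE} \;\propto\; (Q_{\beta\beta}-E)^{n},
\end{equation}
so that the various Majoron-accompanied modes ($n=1,3,7$) can in principle be
distinguished from the $2\nu\beta\beta$ continuum ($n=5$) by their spectral
shape alone.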
Figure \ref{fig3-paes} gives the Feynman graphs of the neutrinoless
double beta decay mode triggered by the exchange of a neutrino.
The neutrinoless mode (2) need not necessarily be connected with the
exchange of a virtual neutrino or sneutrino. {\it Any} lepton number
violating process can
in principle lead to a process with the same signature as the usual
$0\nu\beta\beta$
decay. It may be triggered by the exchange of neutralinos, gluinos, squarks,
sleptons, leptoquarks, ... (see below and \cite{KK2,Paes97,Paes99}).
Fig. \ref{fig4-paes} gives the graph of the general neutrinoless double
beta decay mode.
\begin{figure}
\vspace*{1cm}
\hspace*{10mm}
\epsfxsize=60mm
\epsfysize=50mm
\epsfbox{graph_ne.eps}
\caption{Feynman graph for neutrinoless double beta decay
triggered by exchange of a left--handed light or heavy neutrino}
\label{fig3-paes}
\end{figure}
\begin{figure}
\vspace*{1cm}
\hspace*{10mm}
\epsfxsize=100mm
\epsfbox{general.eps}
\caption{Feynman graphs of the general double beta decay rate:
the contributions (a)--(c) correspond to the long range part,
the contribution (d) is the short range part.}
\label{fig4-paes}
\end{figure}
This gives rise
to the broad potential of double beta decay for testing or yielding
restrictions on
quantities of beyond standard model physics (Table \ref{rah}).
\begin{table}
{\footnotesize
\begin{tabular}{|lll|}
\hline
Observable & Restrictions & Topics investigated\\
\hline
\hline
$0\nu$: &\underline{via $\nu$ exchange:} &
Beyond the standard model and SU(5)\\
&Neutrino mass & model; early universe, matter--antimatter\\
& \hskip 3mm Light Neutrino & asymmetry, Dark matter\\
& \hskip 3mm Heavy Neutrino & L--R --symmetric models (e.g. SO(10)),\\
&&compositeness\\
&Test of Lorentz invariance & \\
&and equivalence principle & \\
&Right handed weak currents & $ V+ A$ interaction, $W^{\pm}_{R}$ masses \\
&\underline{via photino, gluino, zino} & SUSY models: Bounds for parameter \\
&\underline{(gaugino) or sneutrino} & space beyond the range of accelerators\\
&\underline{exchange:}& \\
&R-parity breaking, & \\
&sneutrino mass & \\
&\underline{via leptoquark exchange} & leptoquark masses and models\\
& leptoquark-Higgs interaction & \\
\hline
$0\nu\chi$: &existence of the Majoron & Mechanism of (B-L) breaking\\
&&
-explicit\\
&& -spontaneous breaking of the\\
&& local/global B-L symmetry\\
&& new Majoron models\\
\hline
\end{tabular}
}
\caption {$\beta\beta$ decay and particle physics}
\label{rah}
\end{table}
There is, however, a generic relation between the amplitude of $0\nu\beta\beta$
decay and the $(B-L)$ violating Majorana mass of the neutrino. It was
recognized about 15 years ago \cite{Sch81} that if either of these two quantities
vanishes, the other one vanishes too, and, vice versa, if one of them is
non--zero, the other one also differs from zero. This Schechter-Valle theorem
is valid for
any gauge model with spontaneously broken symmetry at the weak scale,
independent of the mechanism of $0\nu\beta\beta$ decay. A generalisation
of this theorem to supersymmetry has been given recently \cite{Hir97,Hir97a}.
This theorem states for the neutrino
Majorana mass, the $B-L$ violating mass of the
sneutrino and the neutrinoless double beta decay amplitude:
if one of them is non--zero, the others are non--zero too, and vice versa,
independent of the mechanisms of $0\nu\beta\beta$ decay and (s-)neutrino
mass generation. This theorem connects double beta research with new processes
potentially observable at future colliders like NLC (next linear collider)
\cite{Hir97,Kolb1}.
\subsubsection*{1.3.1.1 Neutrino mass}
Neutrino physics has entered an era of new actuality in connection
with several possible indications of physics beyond the standard model
(SM) of particle physics: a lack of solar
($^7$Be) neutrinos, an atmospheric $\nu_{\mu}$ deficit and mixed dark matter
models could all be explained simultaneously by non--vanishing neutrino masses.
Recent GUT models, for example an extended SO(10) scenario with $S_4$
horizontal symmetry, could explain these observations by requiring
degenerate neutrino masses of the order of 1 eV
\cite{19,Moh94,20,21,22,23,12,13}.
For an overview see \cite{Smi96a,Mohneu}.
More recent theoretical discussions are given in
\cite{Kla99b,Adh98,Min97,Giu99,Ma99,Vis99,Bil99}.
From all this work it is clear that double beta decay experiments have come
into a key
position, since
the predictions of, or assumptions in, such
scenarios now start to become testable, partly already by the most advanced present experiments such as the Heidelberg-Moscow experiment.
Neutrinoless double beta decay can be triggered by exchange of a
light or heavy left-handed Majorana neutrino (see Fig. \ref{fig3-paes}).
For exchange of a heavy {\it right}--handed neutrino see below.
The propagators in the two cases show a different $m_{\nu}$
dependence: the fermion propagator $\sim \frac{m}{q^2-m^2}$ implies
\begin{equation}
a)\hskip5mm m \ll q: \hskip5mm \sim m \hskip5mm \mbox{('light' neutrino)}
\end{equation}
\begin{equation}
b)\hskip5mm m \gg q: \hskip5mm \sim \frac{1}{m} \hskip5mm \mbox{('heavy' neutrino)}
\end{equation}
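Explicitly, expanding the propagator in the two limits (with $q$ set by the
nuclear physics of the process, typically of order 100 MeV):
\begin{equation}
\frac{m}{q^2-m^2} \simeq \frac{m}{q^2} \propto m \quad (m \ll q),
\qquad
\frac{m}{q^2-m^2} \simeq -\frac{1}{m} \propto \frac{1}{m} \quad (m \gg q).
\end{equation}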
The half--life for $0\nu\beta\beta$ decay induced by exchange of a light
neutrino is given by \cite{27}
\ba{71}
[T^{0\nu}_{1/2}(0^+_i \rightarrow 0^+_f)]^{-1} &=& C_{mm}
\frac{\langle m_{\nu} \rangle^2}{m_{e}^2}
+ C_{\eta\eta} \langle \eta \rangle^2 + C_{\lambda\lambda}
\langle \lambda \rangle^2
+ C_{m\eta} \langle \eta \rangle \frac{\langle m_{\nu} \rangle}{m_e}
\nonumber \\
&&+\, C_{m\lambda}
\langle \lambda \rangle \frac{\langle m_{\nu} \rangle}{m_e}
+ C_{\eta\lambda}
\langle \eta \rangle \langle \lambda \rangle
\end{eqnarray}
or, when neglecting the effect of right--handed weak currents, by
\begin{equation}
[T^{0\nu}_{1/2}(0^+_i \rightarrow 0^+_f)]^{-1}=C_{mm}
\frac{\langle m_{\nu} \rangle^2}{m_{e}^2}
=(M^{0\nu}_{GT}-M^{0\nu}_{F})^2 G_1
\frac{\langle m_{\nu} \rangle^2}{m_e^2}
\end{equation}
where $G_1$ denotes the phase space integral, $ \langle m_{\nu} \rangle$
denotes an effective neutrino mass
\begin{equation}
\langle m_{\nu} \rangle = \sum_i m_i U_{ei}^2,
\label{obs}
\end{equation}
reflecting the possibility that the electron neutrino is a mixed state
(mass matrix not diagonal in flavor space)
\begin{equation}
|\nu_e \rangle = \sum_i U_{ei} |\nu_{i}\rangle
\end{equation}
For Majorana neutrinos, $U$ is given by
\begin{equation}
\footnotesize
{\tiny{
\left(
\begin{array}{ccc}
c_{12}c_{13} & s_{12}c_{13}e^{-i\delta_{12}} & s_{13}e^{-i\delta_{13}}\\
-s_{12}c_{23}e^{i\delta_{12}}
-c_{12}s_{23}s_{13}e^{i(\delta_{13}+\delta_{23})} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i(\delta_{23}+\delta_{13}-\delta_{12})}&
s_{23}c_{13}e^{i\delta_{23}} \\
s_{12}s_{23}e^{i(\delta_{23}+\delta_{13})}
-c_{12}c_{23}s_{13}e^{i(\delta_{23}+\delta_{13})} &
-c_{12}s_{23}e^{i\delta_{23}}
-s_{12}c_{23}s_{13}e^{i(\delta_{13}-\delta_{12})} & c_{23}c_{13} \\
\end{array}
\right),}}
\end{equation}
\normalsize
where $s_{ij}=\sin \theta_{ij}$, $c_{ij}=\cos \theta_{ij}$ and the $\delta$'s are
CP violating phases. For a given neutrino oscillation pattern the absolute
$\nu$ masses are not fixed, adding an arbitrary $m_0$
\begin{equation}
m_i \rightarrow m_i + m_0
\end{equation}
does not change the oscillation probabilities, but it does change the $0\nu\beta\beta$
rate. This means that the oscillation pattern does not fix the effective
Majorana
mass. Thus neutrinoless double beta decay is an indispensable crucial check of
neutrino mass
models.
The effective mass $\langle m_{\nu} \rangle$ could be smaller than $m_i$
for all $i$ for appropriate CP phases of the mixing coefficients $U_{ei}$
\cite{Wol81}.
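A minimal two-generation illustration of such a cancellation (the numbers are
ours, chosen only for orientation): for $m_1=m_2=1$ eV, $|U_{e1}|^2=0.7$,
$|U_{e2}|^2=0.3$ and opposite CP parities,
\begin{equation}
\langle m_{\nu} \rangle = |0.7 - 0.3| \cdot 1\ {\rm eV} = 0.4\ {\rm eV},
\end{equation}
well below the individual mass eigenvalues; for equal admixtures the
cancellation would be complete.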
In general, not too pathological GUT models yield
$m_{\nu_e}=\langle m_{\nu_e}
\rangle$ (see \cite{15}).
$\eta$, $\lambda$ describe an admixture of right--handed weak currents, and
$M^{0\nu}\equiv M_{GT}^{0\nu}-M_{F}^{0\nu}$ denotes the nuclear matrix element.
\subsubsection*{Nuclear matrix elements:}
A detailed discussion of $\beta\beta$ matrix elements for neutrino induced
transitions including the substantial (well--understood) differences
in the precision with which $2\nu$ and $0\nu\beta\beta$ rates can be
calculated, can be found in \cite{16,27,28} \cite{29,KK1,KK2}.
\subsubsection*{1.3.1.2 Supersymmetry}
Supersymmetry (SUSY) is considered a prime candidate for a theory beyond the
standard model, which could overcome some of the most puzzling questions of
today's particle physics (see, e.g., \cite{44,45,Kan97}).
Generally one can add the following R--parity violating terms
to the usual superpotential \cite{hal84}.
\begin{equation}
W_{R_P \hspace{-0.9em}/\;\:}=\lambda_{ijk}L_{i}L_{j}\overline{E}_{k}+\lambda^{'}_{ijk}
L_i Q_j \overline{D}_k + \lambda^{''}_{ijk}\overline{U}_i \overline{D}_j
\overline{D}_k,
\end{equation}
where indices $i,j,k$ denote generations. $L$,$Q$ denote lepton and quark
doublet superfields and $\overline{E}, \overline{U}, \overline{D}$ lepton and
up, down quark singlet superfields. Terms proportional to $\lambda$,
$\lambda^{'}$
violate lepton number, those proportional to $\lambda^{''}$ violate baryon
number. From proton decay limits it is clear that both types of terms cannot
be present at the same time in the superpotential. On the other hand, once the
$\lambda^{''}$ terms are assumed to be zero, the $\lambda$ and $\lambda^{'}$
terms are not limited in this way. $0\nu\beta\beta$ decay can occur within the
$R_p \hspace{-1em}/\;\:$ MSSM through Feynman graphs such as those of Fig. \ref{fig5}.
In lowest order
there are altogether six different graphs of this kind \cite{6,47,75}.
Thus $0\nu\beta\beta$ decay can be used to restrict R--parity violating
SUSY models \cite{6,hir96c,17,47,48}. From these graphs one derives \cite{6}
under some assumptions
\begin{equation}
[T^{0\nu}_{1/2}(0^+ \rightarrow 0^+)]^{-1} \sim G_{01}
(\frac{\lambda_{111}'^2}{m^4_{{\tilde q},{\tilde e}}m_{{\tilde g}\chi}}M)^2
\end{equation}
where $G_{01}$ is a phase space factor,
$m_{{\tilde q}{\tilde e}{\tilde g}\chi}$
are the masses of supersymmetric particles involved: squarks, selectrons,
gluinos, or neutralinos. $\lambda'_{111}$ is the strength of an R--parity
breaking interaction (eq. 11), and $M$ is a nuclear matrix element. For the
matrix elements and their calculation see \cite{hir96c}.
\begin{figure}
\epsfxsize=50mm
\epsfbox{graph2.ps}
\vspace*{-45mm}
\hspace*{60mm}
\epsfxsize=50mm
\epsfbox{graph4.ps}
\caption{ Examples of Feynman graphs for $0\nu\beta\beta$ decay within
R--parity violating supersymmetric models (from [Hir95a]).}
\label{fig5}
\end{figure}
\begin{figure}
\epsfxsize=50mm
\epsfbox{nexch1.eps}
\vspace*{-45mm}
\hspace*{60mm}
\epsfxsize=50mm
\epsfbox{nexch2.eps}
\caption{ (left) Feynman graph for the mixed SUSY-neutrino exchange mechanism
of 0$\nu\beta\beta$ decay. R-parity violation occurs through scalar
quark exchange. (right) As left figure,
but for scalar lepton exchange (from [Hir96]).}
\label{fig6}
\end{figure}
\begin{figure}
\parbox{14cm}{
\vspace*{6mm}
\epsfxsize=50mm
\epsfbox{e4boxa.ps}
\parbox{6cm}{
\vspace*{-45mm}
\hspace*{60mm}
\epsfxsize=50mm
\epsfbox{lnvd01.ps}
\vspace*{8mm}
}
\caption{ Examples of $R_P$ conserving SUSY contributions
to $0\nu\beta\beta$ decay
(from [Hir97a]).}}
\label{fig7}
\end{figure}
It is also worthwhile to note that $0\nu\beta\beta$ decay is not only
sensitive to $\lambda^{'}_{111}$. Taking into account the fact that the SUSY
partners of the left-- and right--handed quark states can mix with each other,
one can derive limits on different combinations of $\lambda^{'}$
\cite{hir96,7,bab95,Paes99b} (see Fig. \ref{fig6}).
The dominant diagram
of this type is the one where the exchanged scalar particles are the
$\tilde{b}-\tilde{b}^C$ pair. Under some assumptions (e.g. that the MSSM mass
parameters are approximately equal to the ``effective'' SUSY breaking scale
$\Lambda_{SUSY}$), one obtains \cite{hir96}
\begin{equation}
\lambda_{11i}^{'}\cdot \lambda_{1i1}^{'}\leq \epsilon_i^{'}
\Big( \frac{\Lambda_{SUSY}}{100 GeV} \Big)^3
\end{equation}
and
\begin{equation}
\Delta_n \lambda^{'}_{311} \lambda_{n13} \leq \epsilon \Big(\frac
{\Lambda_{SUSY}
}{100 GeV}\Big)^3
\end{equation}
Further constraints on R parity violating Supersymmetry may be obtained
directly from the neutrino mass bound \cite{Bha99}.
Products of trilinear couplings $\lambda$ and/or
$\lambda'$ may generate a complete neutrino mass matrix through
one-loop self-energy graphs \cite{trilinear,recent}.
Let us first consider the effects of the $\lambda'$ interactions. The
relevant part of the Lagrangian can be written as
\begin{equation}
- {\cal L}_{\lambda'} = \lambda'_{ijk} \left[\bar{d}_k P_L \nu_i
\tilde{d}_{jL} + \bar{\nu}^c_i P_L d_j \tilde{d}^*_{kR}\right]
+ ~{\rm h.c.}
\label{lagrangian}
\end{equation}
Majorana mass terms for the left-handed neutrinos, given by
\begin{equation}
{\cal
L}_M = -\frac{1}{2} m_{\nu_{ii'}} \bar{\nu}_{Li} \nu^c_{Ri'} +~{\rm
h.c.},
\end{equation}
are generated at one loop. Fig. \ref{rpvfiggen} shows the
corresponding diagrams. The induced masses are given by
\begin{equation}
m_{\nu_{ii'}} \simeq {{N_c \lambda'_{ijk} \lambda'_{i'kj}}
\over{16\pi^2}} m_{d_j} m_{d_k}
\left[\frac{f(m^2_{d_j}/m^2_{\tilde{d}_k})} {m_{\tilde{d}_k}} +
\frac{f(m^2_{d_k}/m^2_{\tilde{d}_j})} {m_{\tilde{d}_j}}\right],
\label{mass}
\end{equation}
where $f(x) = (x\ln x-x+1)/(x-1)^2$. Here, $m_{d_i}$ is the down quark
mass of the $i$th generation inside the loop, $m_{\tilde{d}_i}$ is an
average of $\tilde{d}_{Li}$ and $\tilde{d}_{Ri}$ squark masses, and
$N_c = 3$ is the colour factor. In deriving Eq.~(\ref{mass}), it was
assumed that the left-right squark mixing terms in the soft part of
the Lagrangian are diagonal in their physical basis and proportional
to the corresponding quark masses, {\em i.e.} $\Delta m^2_{\rm LR} (i)
= m_{d_i} m_{\tilde{d}_i}$. With $\lambda$-type interactions, one obtains
exactly analogous results: the quarks
and squarks in these equations are replaced by the leptons and
sleptons of the corresponding generations, and the colour factor $N_c = 3$
and $Q_d$ are replaced by $1$
and $Q_e$, respectively.
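For quick estimates with eq.~(\ref{mass}) it is useful to note the limiting
behaviour of the loop function (a one-line check of the definition above):
\begin{equation}
f(x) \rightarrow 1 \quad (x \rightarrow 0), \qquad f(1) = \frac{1}{2},
\end{equation}
so that for squarks (sleptons) much heavier than the fermions in the loop one
may simply set $f \simeq 1$.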
Among the different entries of the flavour space mass
matrix, only the $ee$-term has a {\em direct experimental} bound
obtained from neutrinoless double-beta decay.
\begin{figure}
\epsfxsize=100mm
\vspace*{-3cm}
\epsfbox{magmomnew.ps}
\vspace*{-3cm}
\caption{
The $\lambda'$-induced one loop diagrams
contributing to Majorana masses for the neutrinos.}
\label{rpvfiggen}
\end{figure}
For an overview of our knowledge
of $\lambda^{'}_{ijk}$
from other sources we refer to \cite{Kol97a,Bha97,Bha99}.
Also R--parity {\it conserving} softly broken supersymmetry can give
contributions to $0\nu\beta\beta$ decay, via the $B-L$--violating sneutrino
mass term, the latter being a generic ingredient of any weak--scale SUSY
model with a Majorana neutrino mass \cite{Hir97,Kolb1}.
These contributions are
realized at the level of box diagrams \cite{Kolb1} (Fig. 13).
The $0\nu\beta\beta$ half-life for contributions from sneutrino exchange
is found to be \cite{Kolb1}
\begin{equation}
[{T_{1/2}^{0\nu\beta\beta}}]^{-1}=G_{01}\frac{4 m_p^2}{G^4_F}
\Big|\frac{\eta^{SUSY}}{m^5_{SUSY}} M^{SUSY}\Big|,
\end{equation}
where the phase factor $G_{01}$ is tabulated in \cite{74}, $\eta^{SUSY}$
is the effective lepton number violating parameter, which contains the
$(B-L)$ violating sneutrino mass $\tilde{m}_M$ and $M^{SUSY}$ is the nuclear
matrix element \cite{11}.
\subsubsection*{ 1.3.1.3 Left--Right symmetric theories --
Heavy neutrinos and right--handed W Boson}
Heavy {\it right--handed } neutrinos appear quite naturally in left--right
symmetric GUT models. Since in such models the symmetry breaking
scale for the right--handed sector is not fixed by the theory, the
mass of the right--handed $W_R$ boson and the mixing angle between the mass
eigenstates $W_1$, $W_2$ are free parameters. $0\nu\beta\beta$ decay taking
into account contributions from both left-- and right--handed neutrinos
has been studied theoretically in \cite{11,49}. The former gives a more
general expression for the decay rate than that introduced earlier by \cite{50}.
The amplitude is proportional to (see Fig. \ref{fig8}) \cite{11}
\begin{equation}
\Big( \frac{m_{W_{L}}}{m_{W_R}} \Big)^4 \Big(\frac{1}{m_N}+\frac{m_N}
{m^2_{\Delta^{--}_R}}\Big)
\label{ncs3}
\end{equation}
\begin{figure}
\vspace*{6mm}
\epsfxsize=50mm
\epsfbox{basic.ps}
\vspace*{-45mm}
\hspace*{60mm}
\epsfxsize=50mm
\epsfbox{higgs.ps}
\vspace*{8mm}
\caption{ Left: Heavy neutrino exchange contribution to neutrinoless
double beta decay in left-right symmetric models, and Right: Feynman
graph for the virtual exchange of a double-charged Higgs boson
(from [Hir96d]).}
\label{fig8}
\end{figure}
Eq. \ref{ncs3} and the experimental lower limit on the $0\nu\beta\beta$ half--life lead
to a constraint within the 3--dimensional parameter space
($m_{W_R}-m_N-m_{\Delta^{--}_R}$).
\subsubsection*{1.3.1.4 Compositeness}
Although so far there are no experimental signals of a substructure of quarks
and leptons, there are speculations that at some higher energy ranges beyond 1
TeV or so there might exist an energy scale $\Lambda_C$ at which a
substructure of quarks and leptons (preons) might become visible
\cite{8,45,51,Pan99} (Fig. \ref{fig9}).
\begin{figure}
\epsfxsize=80mm
\epsfbox{panfig2.eps}
\caption{The idea of compositeness. At a (still unknown) energy scale
$\Lambda_C$ quarks and leptons might show an internal structure.}
\label{fig9}
\end{figure}
A possible low energy manifestation of compositeness could be neutrinoless
double beta decay, mediated by a composite heavy excited neutrino,
which then should be a Majorana particle (Fig. \ref{fig10}).
Recent theoretical work shows (see \cite{8,9,Pan97,Tak97,Pan99})
that the mass bounds for such an excited neutrino
which can be derived from double
beta decay are at
least of the same order of magnitude as those coming from the
direct search of excited states in high energy accelerators
(see also subsection 1.3.2).
\begin{figure}
\epsfxsize=80mm
\hspace*{1cm}
\epsfbox{panfig1.eps}
\caption{Neutrinoless double beta decay ($\Delta$L = +2 process)
mediated by a composite heavy Majorana neutrino.}
\label{fig10}
\end{figure}
\subsubsection*{1.3.1.5 Majorons}
The existence of new bosons, so--called Majorons, can play a significant
role in new physics beyond the standard model, in the history
of the early universe, in the evolution of stellar objects, in supernovae
astrophysics and the solar neutrino problem \cite{61,62,Kla92}.
In many theories of physics beyond the standard model neutrinoless
double beta decay can occur with the emission of Majorons
\begin{equation}
2n\rightarrow2p+2e^{-}+\phi
\end{equation}
\begin{equation}
2n\rightarrow2p+2e^{-}+2\phi.
\end{equation}
To avoid unnatural fine--tuning, several
new Majoron models were proposed in recent years \cite{68,69,70},
where the term
Majoron denotes, in a more general sense, light or massless bosons
with couplings to neutrinos.
The main novel features of these ``New Majorons'' are that they
can carry leptonic charge, that they need not be
Goldstone bosons and that emission of two Majorons
can occur.
The latter can be scalar--mediated
or fermion--mediated. For details we refer to
\cite{71,72}.
The half--lives are, according to \cite{73,74}, in some approximation given
by
\begin{equation}
[T_{1/2}]^{-1}=|<g_{\alpha}>|^{2}\cdot|M_{\alpha}|^{2}\cdot G_{BB_{\alpha}}
\end{equation}
for $\beta\beta\phi$-decays, or
\begin{equation}
[T_{1/2}]^{-1}=|<g_{\alpha}>|^{4}\cdot|M_{\alpha}|^{2}\cdot G_{BB_{\alpha}}
\end{equation}
for $\beta\beta\phi\phi$--decays. The index ${\alpha}$
indicates that effective neutrino--Majoron coupling constants $g$,
matrix elements $M$ and phase spaces $G$ differ for different models.
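These two formulas imply rather different sensitivities to the coupling: a
given half-life limit translates, schematically, into
\begin{equation}
\langle g_{\alpha} \rangle \propto (T_{1/2}\,|M_{\alpha}|^2 G_{BB_{\alpha}})^{-1/2}
\;\; (\beta\beta\phi), \qquad
\langle g_{\alpha} \rangle \propto (T_{1/2}\,|M_{\alpha}|^2 G_{BB_{\alpha}})^{-1/4}
\;\; (\beta\beta\phi\phi),
\end{equation}
so that an improvement of the half-life limit tightens the coupling bound for
the two-Majoron modes only half as fast, in orders of magnitude, as for the
single-Majoron modes.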
\subsubsection*{Nuclear matrix elements:}
There are five different nuclear matrix elements. Of
these, $M_{F}$ and $M_{GT}$ are the same as those occurring in $0\nu\beta\beta$ decay.
The others and the corresponding phase spaces have been calculated
for the first time
by \cite{71,75}. The calculations of the matrix elements show
that the new models predict,
as a consequence of the small matrix elements,
very large half--lives, and that unnaturally large
coupling constants would be needed to produce observable decay rates
(see \cite{71,75}).
\subsubsection*{1.3.1.6 Sterile neutrinos}
The introduction of sterile neutrinos has been claimed to solve simultaneously the
conflict between dark matter neutrinos, LSND and supernova nucleosynthesis
\cite{76}, and light sterile neutrinos are part of popular
neutrino mass scenarios
for understanding the various hints for neutrino
oscillations \cite{Moh96,Mohneu,Moh97a}.
Neutrinoless double beta decay can also
investigate several effects
of {\it heavy} sterile neutrinos \cite{77} (Fig. \ref{fig11}).
If one assumes a light neutrino with a mass $\ll$ 1 eV, its mixing with a much
heavier (m $\geq$ 1 GeV) sterile neutrino can, under certain conditions, yield
a detectable signal in current $\beta\beta$ experiments.
\begin{figure}[h!]
\epsfysize=8cm
\epsfbox{burgess_2.epsi}
\caption{Regions of the parameter space
($\epsilon$--$M_{N'_{+}}$ plane) yielding
an observable signal (shaded areas) (from \cite{77}). Darker area: `natural'
region; lighter shaded: fine-tuning needed to keep $m_{\nu_e}$ below 1 eV.
$M_{N'_+}$: mass eigenstate; $\epsilon$: strength of lepton number violation in
the mass matrix.}
\label{fig11}
\end{figure}
\subsubsection*{1.3.1.7 Leptoquarks}
Interest in leptoquarks (LQ) has been renewed during the last few years,
since ongoing collider experiments have good prospects for searching for
these particles \cite{Lagr1}. LQs are vector or scalar particles
carrying both lepton and baryon numbers and therefore have a
well distinguished experimental signature. Direct searches for LQs in
deep inelastic ep-scattering at HERA \cite{H196} placed lower limits
on their mass, $M_{LQ} \ge 225-275$ GeV, depending on the LQ type and
couplings.
\begin{figure}[h!]
\epsfxsize=50mm
\epsfysize=50mm
\hspace*{0.0cm}
\epsfbox{grlq1.ps}
\hspace*{1cm}
\epsfxsize=50mm
\epsfysize=50mm
\epsfbox{grlq2.ps}
\vskip5mm
\caption{ Examples of Feynman graphs for $0\nu\beta\beta$ decay
within LQ models. $S$ and $V^{\mu}$ stand symbolically for scalar
and vector LQs, respectively (from [Hir96a]).}
\label{feyn2}
\end{figure}
To consider LQ phenomenology in a model-independent fashion one
usually follows some general principles in constructing the Lagrangian
of the LQ interactions with the standard model fields. In order to
obey the stringent constraints from (c1) helicity-suppressed
$\pi \rightarrow e\nu$ decay, from (c2) FCNC processes and
from (c3) proton stability, the following assumptions are commonly adopted:
(a1) LQ couplings are chiral, (a2) LQ couplings are generation
diagonal, and (a3) there are no diquark couplings.
Recently, however, it has been pointed out \cite{hir96a} that possible
LQ-Higgs interactions spoil assumption (a1): Even if one assumes
LQs to be chiral at some high energy scale, LQ-Higgs interactions
introduce, after electro-weak symmetry breaking, mixing between
LQ states of different chirality. Since there is no fundamental
reason to forbid such LQ-Higgs interactions, it seems difficult
to get rid of the unwanted non-chiral interactions in LQ models.
In such LQ models there appear contributions to $0\nu\beta\beta$
decay via the Feynman graphs of Fig. \ref{feyn2}. Here, $S$ and $V^{\mu}$
stand
symbolically for scalar and vector LQs, respectively.
The half--life for $0\nu\beta\beta$ decay arising from leptoquark
exchange is given by \cite{hir96a}
\begin{equation}
T_{1/2}^{0\nu}=|M_{GT}|^2 \frac{2}{G_F^2}[\tilde{C}_1a^2+C_4 b_R^2
+2 C_5 b_L^2],
\end{equation}
with $a=\frac{\epsilon_S}{M_S^2}+\frac{\epsilon_{V}}{M_V^2}$,
$b_{L,R}=\frac{\alpha_{S}^{(L,R)}}{M_S^2}+\frac{\alpha_V^{(L,R)}}{M_V^2}$,
$\tilde{C}_1=C_1 \Big(\frac{{\cal M}_1^{(\nu)}/(m_e R)}{M_{GT}-
\alpha_2 M_F}
\Big)^2$.
For the definition of the $C_n$ see \cite{74} and for
the calculation
of the
matrix element ${\cal M}_{1}^{(\nu)}$ see \cite{hir96a}.
This allows one to deduce information on leptoquark masses and leptoquark--Higgs
couplings (see subsection 1.3.2).
\subsubsection*{1.3.1.8 Special Relativity and Equivalence Principle}
Special relativity
and the equivalence principle can be considered as the most
basic foundations of the theory of gravity.
Many experiments have already tested these principles to a very high
level of
accuracy \cite{rel} for ordinary matter, i.e. generally for
quarks and leptons of the first
generation. These precision tests of
local Lorentz invariance -- violation of the equivalence
principle should produce a similar effect \cite{will} -- probe for any
dependence of the (non--gravitational) laws of physics on a laboratory's
position, orientation or velocity relative to some preferred frame of
reference, such as the frame in which the cosmic microwave background is
isotropic.
A typical feature of the violation of local Lorentz invariance (VLI)
is that different species of matter have a characteristic
maximum attainable speed.
This can be tested in various sectors of the standard model
through vacuum Cerenkov radiation \cite{gasp}, photon decay \cite{cole},
neutrino oscillations \cite{glash,nu1,nu2,hal,nu3} and $K-$physics
\cite{hambye,vepk}. These arguments can be extended
to derive new constraints from neutrinoless double
beta decay \cite{KPS}.
The equivalence principle implies that spacetime is described by
a unique operational geometry and hence universality of the gravitational
coupling for all species of matter. In recent years there
have been attempts to constrain a possible amount of
violation of the equivalence principle (VEP) in the neutrino sector
from neutrino oscillation experiments \cite{nu1,nu2,hal,nu3}.
However, these bounds do not apply when the gravitational and the
weak eigenstates have small mixing. In a recent paper \cite{KPS}
a generalized formalism of the neutrino sector has been given to test the VEP
and it has been shown that neutrinoless double beta decay also constrains the
VEP. VEP implies that different neutrino species suffer from
different gravitational potentials while propagating through the
nucleus, and hence the effect of different eigenvalues does not cancel
for the same effective momentum.
The main result is that neutrinoless double beta decay can constrain
the amount of VEP even when the mixing angle is zero, {\it i.e.},
when only the weak equivalence principle is violated, for which
there does not exist any bound at present.
\subsubsection*{1.3.2 Double Beta Decay Experiments: Present Status and Results}
\subsubsection*{1.3.2.1 Present Experiments and Proposals}
Fig. \ref{mass_time} shows an overview over measured
$0\nu\beta\beta$ half--life limits and deduced mass limits. The largest
sensitivity for $0\nu\beta\beta$ decay is obtained at present by active source
experiments (source=detector), in particular $^{76}$Ge \cite{KK1,KK2,Bau99a}.
\begin{figure}
\parbox{10cm}{
\vspace*{-3cm}
\epsfxsize10cm
\epsffile{lim-engl.ps}
\vspace*{-6cm}
}
\parbox{10cm}{
\hspace*{0.65cm}
\epsfxsize10cm
\epsffile{mlimlog.ps}
}
\vspace*{-4cm}
\caption{
Present situation, 1999, and expectation for the near future
and beyond, of
the most promising $\beta\beta$-experiments concerning accessible half life
(upper) and neutrino mass limits (lower). The light-shaded parts of the bars
correspond
to the present status, the dark parts of the bars to
expectations for running experiments, dashed lines to
experiments under construction and dash-dotted lines to proposed
experiments.}
\label{mass_time}
\end{figure}
Only a few of the presently most sensitive experiments may probe the
neutrino mass
into
the sub--eV region in the next years, the
Heidelberg--Moscow experiment being by far the
most advanced and most sensitive one, see Fig. \ref{mass_time}.
Fig. \ref{mass_time} shows, in addition to the present status,
the future perspectives of the main existing
$\beta\beta$ decay experiments and includes some ideas for the future
which have been published.
The best presently existing limits besides the HEIDELBERG-MOSCOW
experiment (light-shaded bars in Fig. \ref{mass_time})
have been obtained with the isotopes:
$^{48}$Ca \cite{87},
$^{82}$Se \cite{88},
$^{100}$Mo \cite{89},
$^{116}$Cd \cite{90},
$^{130}$Te \cite{91},
$^{136}$Xe \cite{92} and
$^{150}$Nd \cite{93}.
These and other double beta decay setups presently under construction or
partly in operation,
such as NEMO \cite{94,Bar97},
the Gotthard $^{136}$Xe TPC experiment \cite{95},
the $^{130}$Te cryogenic experiment \cite{91},
a new ELEGANT $^{48}$Ca experiment using 30 g of $^{48}$Ca \cite{96},
a hypothetical experiment with an improved UCI TPC \cite{93} assumed to use 1.6 kg of $^{136}$Xe,
etc., will not reach or exceed the $^{76}$Ge limits.
The goal of 0.3 eV aimed at for the year 2004 by the NEMO experiment
(see \cite{98,Bar97}
and Fig. \ref{mass_time})
may even be very optimistic if claims about the effect of proton-neutron
pairing on the $0\nu\beta\beta$ nuclear matrix elements by
\cite{Pan96}
turn out to be true, and also if the energy resolution is not improved
considerably
(see Fig. 1 in \cite{83}).
Therefore, the conclusion given by \cite{Bed97c} concerning the
future SUSY potential of NEMO has no serious basis.
As pointed out by Raghavan \cite{97}, even the use of
about 200 kg of
enriched $^{136}$Xe or 2 tons of natural Xe, added to the scintillator of the
KAMIOKANDE detector
or in similar amounts to BOREXINO (both primarily devoted to solar neutrino
investigations),
would hardly lead to a sensitivity larger
than that of the present $^{76}$Ge experiment.
This idea is at present being realized by the KAMLAND
experiment \cite{Suz97}.
It is obvious from Fig. \ref{mass_time} that {\it none}
of the present experimental approaches, plans or even vague ideas has a
chance to push the limit on the neutrino mass below the border of 0.1 eV
(see also \cite{Nor97}).
At present there is only one visible way to reach the domain of lower
neutrino masses,
suggested by \cite{KK1} and meanwhile investigated
in some
detail concerning its experimental realization and physics potential in
\cite{Kla97d,Hel97,KK2,KK3,Bau99a,Kla99b}.
\subsubsection*{1.3.2.2
Present limits on beyond standard model parameters from double beta decay}
The sharpest limits from $0\nu\beta\beta$ decay are presently coming from
the Heidelberg--Moscow experiment \cite{84,KK2,Kla99a,Bau99a}.
They will be given in the following.
With five
enriched (86\% $^{76}$Ge) detectors of a total mass of 11.5 kg
taking data in the Gran Sasso underground laboratory, and with a background
of at present 0.06 counts/kg yr keV in the region of the Q--value,
the experiment has reached its final
setup and is now
exploring the sub--eV range for the mass of the electron neutrino.
Fig. \ref{pfa} shows the spectrum taken in a measuring time of 24 kg yr with pulse
shape analysis.
\subsubsection*{Half-life of neutrinoless double beta decay}
The half-life limit for $0\nu\beta\beta$ decay, deduced using the method
proposed in \cite{PDG98}, is
\begin{equation}
T^{0\nu}_{1/2} > 1.1 \cdot 10^{25} y \hspace{2mm}(90\% C.L.)
\end{equation}
\begin{equation}
\hskip8mm > 1.6 \cdot 10^{25} y \hspace{2mm}(68 \% C.L.).
\end{equation}
\noindent
from the full data set with 49 kg yr and:\\
\begin{equation}
T^{0\nu}_{1/2} > 1.8 \cdot 10^{25} y \hspace{2mm}(90\% C.L.)
\end{equation}
\begin{equation}
\hskip8mm > 3.0 \cdot 10^{25} y \hspace{2mm}(68 \% C.L.).
\end{equation}
\noindent
from the data with pulse shape analysis with a total exposure of
31 kg yr.
{\sl {Neutrino mass}}\\
{\it {Light neutrinos:}} The upper limit of an (effective) electron
neutrino Majorana mass, deduced from the data with pulse shape
analysis, is, with the matrix element from \cite{29}
\begin{equation}
\langle m_{\nu} \rangle < 0.36 eV \hspace{2mm}(90\% C.L.)
\end{equation}
\begin{equation}
\hskip10mm < 0.28 eV \hspace{2mm}(68 \% C.L.)
\end{equation}
\begin{figure}[h!]
\hspace*{15mm}
\epsfxsize=90mm
\epsfbox{0n_last.eps}
\caption{Integral spectrum in the region of interest after
subtraction of the first 200 days of measurement of each detector,
leaving 49 kg yr and 31 kg yr of measuring time without and with
pulse shape analysis, respectively.
The solid
curves correspond to the signals excluded
with 90\% C.L., i.e. ${\rm T}_{1/2}^{0\nu}
\geq 1.1 \times 10^{25} {\rm~ yr}$ (90\% C.L.) and
${\rm T}_{1/2}^{0\nu} \geq 1.8 \times 10^{25} {\rm~ yr}$ (90\% C.L.).}
\label{pfa}
\end{figure}
This is the sharpest limit for a Majorana mass of the electron neutrino so
far. With these values the Heidelberg--Moscow experiment starts to take
striking influence on presently discussed neutrino mass scenarios, which arose
in connection with the recent Superkamiokande results on solar and atmospheric
neutrinos. We mention a few examples:
The new $0\nu\beta\beta$ result excludes already now simultaneous 3$\nu$
solutions for hot dark matter, the atmospheric neutrino problem and the small
mixing angle MSW solution \cite{Adh98}. This means that Majorana neutrinos
are ruled out, if the small mixing angle solution of the solar neutrino
problem is borne out -- if we insist on neutrinos as hot dark matter
candidates. According to \cite{Min97} degenerate neutrino mass schemes for hot
dark matter, solar and atmospheric anomalies and CHOOZ are already now
excluded (with 68 \% C.L.) for the small {\it and}
large mixing angle MSW solutions
(without unnatural finetuning). If starting from recent dark matter models
\cite{Pri98} including in addition to cold and hot dark matter also a
cosmological constant $\Lambda \neq 0$, these conclusions remain also valid,
except for the large angle solution which would not yet be excluded by
$0\nu\beta\beta$ decay (see \cite{Kla99b}).
According to \cite{Bar98} simultaneous 3$\nu$ solutions of solar and
atmospheric neutrinos, LSND and CHOOZ (no hot dark matter!) predict
$\langle m_{\nu} \rangle\simeq 1.5 eV$ for the degenerate case
($m_i \simeq 1 eV$) and
$\langle m \rangle \simeq 0.14 eV$ for the hierarchical case.
This means that the first case is being tested already by the present
Heidelberg--Moscow result. A model producing the neutrino masses with a
heavy scalar triplet instead of the seesaw mechanism derives, from the solar
small angle MSW allowed range of mixing and accommodating the atmospheric
neutrino problem, $\langle m_{\nu} \rangle$ =0.17-0.31 eV \cite{Ma99}.
This model, too, is already close to being disfavored. Looking into 4-neutrino
scenarios, according to \cite{Giu99} there are only two schemes with
four neutrino mixing that can accommodate the results of {\it all}
neutrino oscillation experiments (including LSND). In the first of the schemes,
where $m_1 < m_2 \ll m_3 < m_4$, with
solar (atmospheric) neutrinos oscillating between $m_3$ and $m_4$ ($m_1$ and
$m_2$), and
$\Delta m^2_{LSND}= \Delta m_{41}^2$,
the HEIDELBERG--MOSCOW $0\nu\beta\beta$ bound excludes \cite{Giu99} the
small mixing angle MSW solution of the solar neutrino problem, for both
$\nu_e \rightarrow \nu_{\tau}$, and $\nu_e \rightarrow \nu_s$ transitions.
Including recent astrophysical data yielding $N_{\nu}^{BBN}\leq 3.2$
(95 \% C.L.) \cite{Bur99}, the oscillations of solar neutrinos occur mainly
in the $\nu_e \rightarrow \nu_s$ channel, and {\it only} the small angle
solution is allowed by the fit of the solar neutrino data \cite{Bah98,Fuk99}.
This means that $0\nu\beta\beta$ excludes the whole first scheme.
In the second scheme $m_1 < m_2 \ll m_3 < m_4$, with solar (atmospheric)
neutrinos oscillating between $m_1$ and $m_2$ ($m_3$ and $m_4$),
the present neutrino
oscillation experiments indicate an effective Majorana mass of
$7 \cdot 10^{-4} eV \leq |\langle m \rangle| \leq 2 \cdot 10^{-2} eV$. This
could eventually be measured by GENIUS (see below). For a similar recent
analysis see \cite{Bil99}. For further detailed analyses of neutrino mass
scenarios
in the light of present and future
neutrino experiments including double beta
decay we refer to \cite{Kla99b}.
\subsubsection*{Superheavy neutrinos:}
For a superheavy {\it left}--handed neutrino, exploiting the
mass dependence of the matrix
element (for the latter
see \cite{28}), we deduce \cite{79,14,Bel98} a lower limit (see also Fig. 36)
\begin{equation}
\langle m_{H} \rangle \ge 9 \cdot 10^7 GeV.
\end{equation}
Assuming the bound on the mixing matrix, $U^2_{ei}<5 \cdot 10^{-3}$
\cite{Bel98}, and
assuming no cancellation between the involved states, this limit implies a
bound on the mass eigenstate
\begin{equation}
M_i > 4.5 \cdot 10^5 GeV.
\end{equation}
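The two bounds are connected by simple arithmetic: since the heavy-neutrino
contribution enters (schematically) through $\langle m_H^{-1} \rangle =
\sum_i U^2_{ei}/M_i$, the absence of cancellations gives $M_i \gtrsim
U^2_{ei}\,\langle m_{H} \rangle = 5\cdot 10^{-3} \times 9\cdot 10^{7}\
{\rm GeV} \approx 4.5\cdot 10^{5}$ GeV, which is the number quoted above.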
\subsubsection*{Right--handed W boson}
For the right--handed W boson we obtain (see Fig. \ref{fig15})
a lower limit of
\begin{equation}
m_{W_R} \ge 1.4 TeV
\end{equation}
(see \cite{11,KKP}).
\begin{figure}[h!]
\hspace*{5mm}
\epsfxsize=80mm
\hspace*{4mm}
\epsfbox{wr1.ps}
\vspace*{-2cm}
\caption{ Limits on the mass of the right-handed W-boson from
neutrinoless double beta decay (full lines) and vacuum stability
(dashed line). The five full lines correspond to the following
masses of the doubly charged Higgs, $m_{\Delta^{--}}$: 0.3,
1.0, 2.0, 5.0 and $\infty$ [TeV] downward (from \cite{11}).}
\label{fig15}
\end{figure}
\subsubsection*{SUSY parameters -- R--parity breaking and sneutrino mass}
The constraints on the parameters of the minimal supersymmetric standard model
with explicit R--parity violation deduced \cite{6,hir96c,hir96}
from the $0\nu\beta\beta$
half--life limit are more stringent than those from other
low--energy processes and from the largest high energy
accelerators (Fig. \ref{fig16}). The limits are
\begin{equation}
\lambda^{'}_{111} \leq 4 \cdot 10^{-4} \Big(\frac {m_{\tilde{q}}}{100 GeV}
\Big)^2 \Big(\frac {m_{\tilde{g}}}{100 GeV} \Big)^{\frac{1}{2}}
\end{equation}
with $m_{\tilde{q}}$ and $m_{\tilde{g}}$ denoting squark and gluino masses,
respectively, and with the assumption $m_{\tilde{d_R}} \simeq m_{\tilde{u}_L}$.
This result is important for the discussion of new physics in the connection
with the high--$Q^2$ events seen at HERA. It excludes the possibility of
squarks of first generation (of R--parity violating SUSY) being produced in the
high--$Q^2$ events \cite{Cho97,Alt97,Hir97b}.
\begin{figure}[h!]
\vspace*{-4.0cm}
\epsfxsize=120mm
\epsfbox{figure12.ps}
\vspace*{-7cm}
\caption{Comparison of limits on the R--parity violating MSSM parameters
from different experiments in the $\lambda'_{111}$--$m_{\tilde{q}}$
plane. The dashed line is the limit from charged current universality
according to \cite{113}. The vertical line is the limit from the data
of Tevatron
\cite{114}. The thick full line is the region which might be explored by HERA
\cite{115}. The two dash--dotted lines to the right are the limits obtained
from the half--life limit for $0\nu\beta\beta$ decay of $^{76}$Ge, for
gluino masses of (from left to right) $m_{{\tilde{g}}}=$ 1 TeV and 100 GeV,
respectively. The regions to the upper left of the lines are forbidden.
(from [Hir95])}
\label{fig16}
\end{figure}
We find further \cite{Paes99b}
\begin{equation}
\lambda_{113}^{'}\lambda_{131}^{'}\leq 3 \cdot 10^{-8}
\end{equation}
\begin{equation}
\lambda_{112}^{'}\lambda_{121}^{'}\leq 1 \cdot 10^{-6}.
\end{equation}
The constraints on coupling products derived from the double beta decay
neutrino mass limit
\cite{Bha99} are presented in
Tab. \ref{tabrpv}. As is obvious from the table, the double beta decay
neutrino mass limits improve previous bounds on products of R--parity
violating couplings by 1-5 orders of magnitude.
\begin{table}
\begin{center}
\begin{tabular}{ccc}
\hline
\hline
$\lambda^{(')}_{ijk}\lambda^{(')}_{i^{'}kj}$
& Our & Previous \\
& Bounds & Bounds \\
\hline
\hline
$m_{ee}<0.36$ eV && \\ \hline
$\lambda^{'}_{133}\lambda^{'}_{133}$ & $5.0 \cdot 10^{-8}$ & $4.9
\cdot 10^{-7}$ \\
$\lambda^{'}_{132}\lambda^{'}_{123}$ & $1.0 \cdot 10^{-6}$ & $1.6
\cdot 10^{-2}$ \\
$\lambda^{'}_{122}\lambda^{'}_{122}$ & $3.0 \cdot 10^{-5}$ & $4.0
\cdot 10^{-4}$ \\
$\lambda_{133}\lambda_{133}$ & $9.0 \cdot 10^{-7}$ & $9.0 \cdot
10^{-6}$ \\
$\lambda_{132}\lambda_{123}$ & $2.0 \cdot 10^{-5}$ & $2.0 \cdot
10^{-3} $ \\
$\lambda_{122}\lambda_{122}$ & $2.0 \cdot 10^{-4}$ & $1.6 \cdot
10^{-3}$\\ \hline
\end{tabular}
\caption{Correlation among neutrino mass bounds from neutrinoless double
beta decay and upper limits
on RPV couplings. We have used $m_d$=9
MeV, $m_s$= 170 MeV, $m_b$=4.4 GeV \protect{\cite{PDG98}}.
For
$\lambda$-products, $m_{\tilde{d}}$ should be read as
$m_{\tilde{e}}$. The relevant scalars are always assumed to have a
common mass of 100 GeV.
\label{tabrpv}}
\end{center}
\end{table}
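As a rough consistency check of the first entry (our arithmetic, setting
$f \simeq 1$): inserting $\lambda'_{133}\lambda'_{133} = 5.0\cdot 10^{-8}$,
$m_b = 4.4$ GeV and $m_{\tilde{d}} = 100$ GeV into eq.~(\ref{mass}) gives
\begin{equation}
m_{\nu_{ee}} \simeq \frac{3\cdot 5.0\cdot 10^{-8}}{16\pi^2}\,
(4.4\ {\rm GeV})^2 \cdot \frac{2}{100\ {\rm GeV}}
\approx 3.7\cdot 10^{-10}\ {\rm GeV} \approx 0.4\ {\rm eV},
\end{equation}
which indeed reproduces the $m_{ee}<0.36$ eV input at the expected level.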
For the $(B-L)$ violating sneutrino mass $\tilde{m}_{M}$ the following limits
are obtained \cite{Hir97a}
\ba{rconv2}
\tilde{m}_M &\leq& 1.3 \Big(\frac{m_{SUSY}}{100 GeV}\Big)^{\frac{3}{2}}GeV,
\hskip5mm \chi \simeq \tilde{B}\\
\tilde{m}_M &\leq& 7 \Big(\frac{m_{SUSY}}{100 GeV}\Big)^{\frac{7}{2}}GeV,
\hskip5mm \chi \simeq \tilde{H}
\end{eqnarray}
for the limiting cases that the lightest neutralino is a pure Bino $\tilde{B}$,
as suggested by the SUSY solution of the dark matter problem \cite{jkg96},
or a pure Higgsino. Actual values for $\tilde{m}_M$ for other choices of the
neutralino composition should lie in between these two values.
Another way to deduce a limit on the `Majorana' sneutrino mass $\tilde{m}_M$
is to start from the experimental neutrino mass limit, since the sneutrino
contributes to the Majorana neutrino mass $m_M^{\nu}$ at the 1--loop level
proportional to $\tilde{m}^2_M$
\cite{Hir97a}.
Starting from the mass limit determined for the electron neutrino by
$0\nu\beta\beta$ decay this leads to
\begin{equation}
\tilde{m}_{M_{(e)}} \leq 14 MeV
\end{equation}
This result is somewhat dependent on neutralino masses and mixings.
A non--vanishing `Majorana' sneutrino mass would result in new processes
at future colliders, like sneutrino--antisneutrino oscillations.
Reactions at the Next Linear Collider (NLC), like the SUSY analog of inverse
neutrinoless double beta decay, $e^-e^-\rightarrow \chi^-\chi^-$ (where $\chi^-$
denotes charginos), or single sneutrino production, e.g. by
$e^-\gamma \rightarrow \tilde{\nu}_e \chi^-$, could also give information on the
Majorana sneutrino mass. This is discussed in \cite{Hir97,Hir97a,Kolb1}.
A conclusion is that future
accelerators can give information on second and third generation sneutrino
Majorana masses, but for first generation sneutrinos cannot compete with
$0\nu\beta\beta$--decay.
\subsubsection*{Compositeness}
Evaluation of the $0\nu\beta\beta$ half--life limit, assuming the
exchange of excited
Majorana neutrinos $\nu^*$, yields for the mass of the
excited neutrino a lower bound of \cite{Pan97,Tak97,Pan99}
\begin{equation}
m_{N} \geq 3.4 m_W
\end{equation}
for a coupling of order ${\cal O}(1)$ and $\Lambda_c \simeq m_N$. Here,
$m_W$ is the W--boson mass. Fig. \ref{fig17} shows that this result is
more stringent than the result obtained at LEP II.
\begin{figure}[h!]
\epsfxsize=80mm
\hspace*{1.5cm}
\epsfbox{panella.ps}
\caption{Comparison between the 0$\nu\beta\beta$
(Heidelberg-Moscow experiment) and the LEPII upper
bound on the quantity $|$f$|$($\sqrt{2}M_N$) as a function of the heavy
neutrino mass M$_N$, with the choice $\Lambda_C$ = M$_N$. Regions
above the curves are excluded (from [Pan99]).}
\label{fig17}
\end{figure}
\subsubsection*{Leptoquarks}
Assuming that either scalar or vector leptoquarks contribute
to $0\nu\beta\beta$ decay, the following constraints on the
effective LQ parameters (see subsection 1.3.1) can be derived \cite{hir96a}:
\ba{dbd_constraint}
\epsilon_I \leq 1.0 \times 10^{-9}
\left(\frac{M_I}{100\mbox{GeV}}\right)^2, \\
\alpha_I^{(L)} \leq 1.3 \times 10^{-10}
\left(\frac{M_I}{100\mbox{GeV}}\right)^2, \\
\alpha_I^{(R)} \leq 2.8 \times 10^{-8}
\left(\frac{M_I}{100\mbox{GeV}}\right)^2.
\end{eqnarray}
Since the LQ mass matrices appearing in $0\nu\beta\beta$
decay are ($4\times4$)
matrices \cite{hir96a}, it is difficult to diagonalize them
in full generality algebraically. However, if one assumes that only
one LQ-Higgs coupling is present at a time, the (mathematical) problem is
simplified greatly, and one can deduce from, for example,
eq. 1.41 that either
the LQ-Higgs coupling must be smaller than $\sim 10^{-(4-5)}$ or there cannot
be any LQ with, e.g., couplings of electromagnetic strength and masses below
$\sim 250$ GeV. These bounds from $\beta\beta$ decay are of interest in
connection with recently discussed evidence for new physics from HERA
\cite{Hew97,Bab97,Kal97,Cho97}. Assuming that leptoquarks have actually
been produced at HERA, double beta decay (the Heidelberg--Moscow experiment)
would allow one to fix the leptoquark--Higgs coupling to a few $10^{-6}$
\cite{Hir97b}. It may be noted that, after the first
consideration of leptoquark--Higgs couplings in \cite{hir96a}, recently
Babu et al. \cite{Bab97b} noted that taking into account the
leptoquark--Higgs coupling reduces the leptoquark mass lower bound deduced
by TEVATRON, making it more consistent with the value of 200 GeV
required by
HERA.
\vspace*{3mm}
{\sl {Special Relativity and Equivalence Principle}}\\
{\it Violation of Lorentz invariance (VLI):} The bound obtained from the
Heidelberg--Moscow experiment is
\begin{equation}
\delta v < 2 \times 10^{-16}~~~~ {\rm for}~~~ \theta_v=\theta_m =0
\end{equation}
where $\delta v=v_1-v_2$ is the measure of VLI in the neutrino sector.
$\theta_v$ and $\theta_m$ denote the velocity mixing angle and the weak
mixing angle, respectively.
In Fig. \ref{fig6a} (from \cite{KPS}) the bound implied by double beta decay is
presented for the entire
range of $sin^2(2 \theta_v)$, and compared with bounds obtained from
neutrino oscillation experiments (see \cite{hal}).
\begin{figure}[h!]
\epsfysize=80mm
\hspace*{15mm}
\epsfbox{vep_bild.eps}
\vspace*{5mm}
\caption{ Double beta decay bound (solid line)
on violation of Lorentz invariance
in the neutrino sector, excluding the region to the upper left.
Shown is a double logarithmic plot
in the $\delta v$--$\sin^2(2 \theta)$ parameter space.
The bound becomes most stringent for the
small mixing region, which has not been constrained from any
other experiments. For comparison the bounds obtained from neutrino oscillation
experiments (from \protect{\cite{hal}})
in the $\nu_{e} - \nu_{\tau}$ (dashed lines) and in the
$\nu_e - \nu_\mu$ (dashed-dotted lines) channel, excluding the region to the
right, are shown (from \protect{\cite{KPS}).}}
\label{fig6a}
\end{figure}
\nopagebreak
{\it Violation of equivalence principle (VEP):}
Assuming only violation of the weak equivalence principle, there does not
exist any bound on the amount of VEP. It is this region of the parameter space
which is most restrictively bounded by neutrinoless double beta decay.
In a linearized theory the gravitational part of the Lagrangian to first order
in a weak gravitational field $g_{\mu\nu}=\eta_{\mu\nu}+ h_{\mu\nu}$
($h_{\mu\nu}= 2\frac{\phi}{c^2}\, {\rm diag}(1,1,1,1)$)
can be written as ${\cal L} = -\frac{1}{2}(1+g_i)h_{\mu\nu}T^{\mu\nu}$,
where $T^{\mu\nu}$ is the stress-energy in the gravitational
eigenbasis. In the presence of VEP the $g_i$ may differ.
We obtain \cite{KPS} the following bound from the Heidelberg--Moscow
experiment, for $\theta_v=\theta_m=0$:
\ba{99}
\phi \delta g &<& 2 \times 10^{-16} ~ ({\rm for~} \bar{m}<13
{\rm eV})\nonumber \\
\phi \delta g &<& 1 \times 10^{-18} ~ ({\rm for~} \bar{m}<0.08
{\rm eV}).
\end{eqnarray}
Here $\bar{g}=\frac{g_1+g_2}{2}$ can be considered as the standard
gravitational coupling, for which the equivalence principle applies.
$\delta g=g_1 - g_2$.
The bound on the VEP thus, unlike the one for VLI, will depend on the choice
for the Newtonian potential $\phi$.
\subsubsection*{Half--life of $2\nu\beta\beta$ decay}
The Heidelberg--Moscow experiment
produced for the first time a high statistics $2\nu\beta\beta$
spectrum ($\gg$ 20000 counts, to be compared with the 40 counts on which the
first detector observation of $2\nu\beta\beta$ decay by \cite{Ell87}
(for the decay of $^{82}$Se) had to rely).
The deduced half--life is \cite{HM2000}
\begin{equation}
T^{2\nu}_{1/2} = (1.55 \pm 0.01(stat.)^{+0.03}_{-0.02}(norm.)^{+0.16}_{-0.13}(syst.))\cdot 10^{21} y
\end{equation}
This result brings $\beta\beta$ research for the first time into the region
of `normal' nuclear spectroscopy and allows for the first time statistically
reliable investigation of Majoron--accompanied decay modes.
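The quoted statistics are plausible from a simple rate estimate (ours;
detection efficiency and analysis cuts are ignored): 1 kg of 86\% enriched Ge
contains about $6.8\cdot 10^{24}$ atoms of $^{76}$Ge, so
\begin{equation}
N_{2\nu} \simeq N_{76}\,\frac{\ln 2}{T^{2\nu}_{1/2}}
\simeq 6.8\cdot 10^{24} \times \frac{0.693}{1.55\cdot 10^{21}\ {\rm yr}}
\approx 3\cdot 10^{3}\ \frac{\rm decays}{\rm kg\ yr},
\end{equation}
i.e. several $10^4$ decays in the accumulated exposure.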
\subsubsection*{Majoron--accompanied decay}
From simultaneous fits of
the $2\nu$ spectrum and one selected Majoron mode, experimental limits
for the half--lives of the decay modes of
the newly introduced Majoron models \cite{72} are given
for the first time \cite{71,HM96}.
The small matrix elements and phase spaces for these modes
\cite{71,75} already imply that these
modes cannot be seen
in experiments of the present sensitivity, if we assume typical values for the
neutrino--Majoron coupling constants around $\langle g \rangle = 10^{-4}$.
\subsubsection*{1.3.3 The GENIUS Potential for Double Beta Decay}
\subsubsection*{Neutrino mass matrix and neutrino oscillations}
GENIUS will allow a large step in sensitivity for probing the neutrino mass.
It will allow one to probe the effective neutrino Majorana mass down to
10$^{-(2-3)}$ eV, and thus
surpass the existing experiments in sensitivity to the mass eigenstate
by a factor of 50-500.
GENIUS will test the structure of the neutrino mass matrix and thereby also
neutrino oscillation parameters
\footnote{The double beta observable, the effective neutrino mass
(eq. 10), can be expressed
in terms of the usual neutrino oscillation parameters, once an assumption
on the ratio of $m_1/m_2$ is made. E.g., in the simplest two--generation case
\begin{equation}
\langle m_{\nu} \rangle=|c_{12}^2 m_1 + s_{12}^2 m_2 e^{2 i \beta}|,
\end{equation}
assuming CP conservation, i.e. $e^{2 i \beta}=\eta=\pm 1$, and
$c_{12}^2 m_1 \ll \eta s_{12}^2 m_2$,
\begin{equation}
\Delta m^2_{12}\simeq m_2^2=\frac{4 \langle m_{\nu} \rangle^2}{(1-\sqrt{1-
\sin^2 2 \theta})^2}
\end{equation}
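Here $s_{12}^2 = \frac{1}{2}\left(1-\sqrt{1-\sin^2 2 \theta}\right)$ has been
used to express the mixing through $\sin^2 2\theta$.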
More generally, keeping corrections of order $(m_1/m_2)$,
one obtains
\begin{equation}
m_2=\frac{ \langle m_{\nu} \rangle}{|(\frac{m_1}{m_2})+\frac{1}{2}
(1-\sqrt{1-\sin^2 2 \theta})(\pm 1 - (\frac{m_1}{m_2}))|}.
\end{equation}
For the general case see \cite{Kla97d}.}
superior in sensitivity to many present
dedicated terrestrial neutrino oscillation experiments, and will provide
information complementary to recent proposals for future experiments in this field.
Already in the
first stage GENIUS will test degenerate or inverted neutrino
mass scenarios, discussed in the literature as possible solutions of current
hints to finite neutrino masses (see \cite{Kla99b,Giu99,Cza99,Vis99}).
If the $10^{-3}$ eV
level is reached, GENIUS will make it possible to test the large angle and,
for degenerate models, even the small angle MSW
solution of the solar neutrino problem. It will also allow a test of the
hypothesis of a shadow world underlying the introduction of a sterile neutrino \cite{Moh97a}.
Figures 26, 27, 28 and 29 show some examples of this potential (for more
details see \cite{Kla97d,KK1,KK2,KK3,Kla99a}). Fig. \ref{fig20} compares the
potential of GENIUS with the sensitivity of CHORUS/NOMAD and with the
proposed future experiments NAUSIKAA-CERN and NAUSIKAA-FNAL
-- now renamed to TOSCA and COSMOS,
looking for
$\nu_e \leftrightarrow \nu_{\tau}$ oscillations, for different assumptions on
$m_1/m_2$.
\begin{figure}[h!]
\epsfxsize=10cm
\epsfbox{curetaunew.ps}
\vspace*{-5cm}
\caption{ Current limits and future experimental sensitivity
on $\nu_e - \nu_{\tau}$ oscillations. The shaded area is currently
excluded from reactor experiments. The thin line is the estimated
sensitivity of the CHORUS/NOMAD experiments. The dotted and dash-dotted
thin lines are sensitivity limits of proposed accelerator experiments,
NAUSICAA and E803-FNAL [Gon95].
The thick lines show the sensitivity of GENIUS (broken line:
1 t, full line: 10 t), for two examples of mass ratios. The straight lines are
for the strongly hierarchical case (R=0), while the lines bending to the left
assume R=0.01.
(from [Kla97c])}
\label{fig20}
\end{figure}
\begin{figure}[h!]
\vskip0mm
\hskip5mm
\epsfxsize=90mm
\epsfbox{louis3.ps}
\caption{LSND compared to the sensitivity of GENIUS 1t
for $\eta^{CP} = +1$ and three ratios $R_{12}$, from top to bottom
$R_{12}= 0, 0.01, 0.02$ (from [Kla97c])}
\label{fig21}
\end{figure}
Already in the worst case for double beta decay, $m_1/m_2=0$,
GENIUS 1 ton is more sensitive than CHORUS and NOMAD.
For quasi--degenerate models, e.g. already for $R=0.01$, GENIUS 1 ton would
be more sensitive than the planned future experiments TOSCA and COSMOS.
Fig. \ref{fig21} shows the potential of GENIUS for checking the LSND indication for
neutrino oscillations (original figure from \cite{Lou98}).
Under the assumption
$m_1/m_2 \geq 0.02$ and $\eta=1$, GENIUS 1 ton will be sufficient to find
$0\nu\beta\beta$ decay if the LSND result is to be explained in terms of $\nu_e
\leftrightarrow \nu_{\mu}$ oscillations. This sensitivity is comparable to --
and for small and large mixing angles better than -- that
of the dedicated project MINIBOONE and
might be of particular interest
also since the upgraded KARMEN will not completely cover \cite{Dre97} the full
allowed LSND range.
Fig. \ref{fig22} shows the situation for $\nu_e$ - $\nu_{\mu}$
oscillations in reactor and accelerator experiments
(assuming sin$^2 \theta_{13}$ = 0).
The original figure
is taken from [Gel95]. The GENIUS 10 ton sensitivity is
superior to the one obtained by CHOOZ
and could compete with the long baseline project MINOS,
even in the worst case of $m_{\nu_e} \ll m_{\nu_{\mu}}$.
In
the quasi-degenerate models GENIUS would be much more sensitive --
see Fig. \ref{fig21}.
\begin{figure}[h!]
\epsfxsize=80mm
{\epsfbox{curemunew.ps}
\caption{Current limits on $\nu_e - \nu_{\mu}$ oscillations.
Various existing experimental limits from reactor and accelerator
experiments are indicated, as summarized in ref. [Gel95]. In addition,
the figure shows the expected sensitivities for GENIUS with 1 ton
(thick broken line) and GENIUS with 10 tons (thick, full line)
(from [Kla97c])}}
\label{fig22}
\end{figure}
Fig. \ref{fig24} shows a summary of currently known constraints
on neutrino oscillation parameters (original taken from \cite{Hat94}), but
including the $0\nu\beta\beta$ decay sensitivities of GENIUS 1 ton and GENIUS
10 tons, for different assumptions on $m_1/m_2$ (for $\eta^{CP}=+1$,
for $\eta^{CP}=-1$ see \cite{Kla97d}).
It is seen that already GENIUS 1 ton tests all degenerate or quasi--degenerate
($m_1/m_2 \gtrsim 0.01$)
neutrino mass models in any range where neutrinos are
interesting for cosmology, and also the atmospheric neutrino problem, if it is
due to $\nu_e \leftrightarrow \nu_{\mu}$ oscillations. GENIUS in its 10 ton
version would directly test the large angle solution of the solar neutrino
problem and in case of almost degenerate neutrino masses, also the
small angle solution.
After this overview we discuss the potential of GENIUS for the various
neutrino mass scenarios in some detail, putting some emphasis on the relations
to the solar and atmospheric neutrino oscillation experiments and on the
complementarity of recent and future projects in these fields, including the
investigation of cosmological parameters, such as by
the future satellite experiments MAP and PLANCK.
In a three neutrino framework the
atmospheric neutrino data are assumed to be described by
$\nu_{\mu} - \nu_{\tau}$ oscillations \cite{Smi99}
with:
\begin{equation}
\Delta m^2_{atm} = (1 - 10)~10^{-3} {\rm eV}^2~,~~~
\sin^2 2\theta_{atm} = 0.8 - 1,
\end{equation}
as the leading mode. Also small contributions of
other modes are not excluded.
For solar neutrinos different
possibilities are considered \cite{Smi99}
which in general lead to different expectations
for the double beta decay:
1. Small mixing MSW solution with
\begin{equation}
\Delta m^2_{\odot} = (0.4 - 1) \cdot 10^{-5} {\rm eV}^2~,~~~
\sin^2 2\theta_{\odot} = (0.3 - 1.2) \cdot 10^{-2}
\label{small}
\end{equation}
2. Large mixing MSW solution with
\begin{equation}
\Delta m^2_{\odot} = (0.1 - 3)\cdot 10^{- 4} {\rm eV}^2~,~~~
\sin^2 2\theta_{\odot} = (0.7 - 1)
\label{large}
\end{equation}
3. Vacuum oscillation solutions
\begin{equation}
\Delta m^2_{\odot} = (0.6 - 6)\cdot 10^{- 10} {\rm eV}^2~,~~~
\sin^2 2\theta_{\odot} = (0.6 - 1)
\label{VO}
\end{equation}
The so-called MSW low solution gives a worse fit to the data and will not be
considered in the following (see however subsection 1.4).
Expressing eq. \ref{obs} in terms of oscillation parameters we get
\begin{equation}
\langle m \rangle = |U_{e1}|^2 m_0 +
e^{i\phi_{21}}|U_{e2}|^2 \sqrt{\Delta m^2_{21} + m_0^2}
+
e^{i\phi_{31}}|U_{e3}|^2 \sqrt{\Delta m^2_{31} + m_0^2}~,
\label{mee}
\end{equation}
where $\phi_{ij}$ are the relative phases of
masses $m_i$ and $m_j$.
Assuming $m_1$ to be the lightest state we
have absorbed $m_1^2$
in the definition of $m_0^2$, $m_0^2 \rightarrow m_0^2 + m_1^2$
so that now $m_0 \geq 0$.
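For orientation, the following short numerical sketch evaluates
eq. \ref{mee} for given mixings, phases and mass scale $m_0$ (the
parameter values in the example are purely illustrative and not fit results):
\begin{verbatim}
import numpy as np

# Hedged sketch: evaluate the effective Majorana mass of eq. (mee)
# for given |U_ei|, mass-squared differences (eV^2), offset m0 (eV)
# and relative phases. Illustrative values only.
def m_eff(Ue, dm2_21, dm2_31, m0, phi21=0.0, phi31=0.0):
    m1 = m0
    m2 = np.sqrt(dm2_21 + m0**2)
    m3 = np.sqrt(dm2_31 + m0**2)
    return abs(Ue[0]**2 * m1
               + np.exp(1j * phi21) * Ue[1]**2 * m2
               + np.exp(1j * phi31) * Ue[2]**2 * m3)

# hierarchical example: solar (LMA) and atmospheric splittings
print(m_eff((0.85, 0.52, 0.05), 1e-4, 3e-3, 0.0))  # ~3e-3 eV
\end{verbatim}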
The crucial assumption in order to link neutrino oscillations and the
double beta observable eq. \ref{mee}
concerns the degree of
degeneracy in the
\epsfysize=180mm
\epsfbox{genalldeg_r05.ps}
\pagebreak
\begin{figure}
\caption{~Summary of currently known constraints on neutrino
oscillation parameters. The (background) figure without the $0\nu\beta\beta${}
decay constraints can be obtained from
http://dept.physics.upenn.edu/\~\-www/neutrino/solar.html. Shown are
the vacuum and MSW solutions (for two generations of neutrinos)
for the solar neutrino problem,
the parameter range which would solve the atmospheric neutrino problem
and various reactor and accelerator limits on neutrino oscillations.
In addition, the mass range in which neutrinos are good hot dark matter
candidates is indicated,
as well as limits on neutrino oscillations into sterile states from
considerations of big bang nucleosynthesis. Finally the
thick lines indicate the sensitivity of GENIUS (full lines 1 ton,
broken lines 10 ton) to neutrino oscillation parameters for three values
of neutrino mass ratios $R = 0, 0.01$ and $0.1$ (from top to bottom).
For GENIUS 10 ton also the contour line for $R=0.5$ is shown.
The region beyond the lines would be excluded.
While already the 1 ton GENIUS would be sufficient to constrain degenerate
and quasi-degenerate neutrino mass models, the 10 ton version of
GENIUS could cover a significant new part of the parameter space,
including the large angle MSW solution to the solar neutrino problem,
even in the worst case of $R=0$. For $R\geq 0.5$ it would even probe the
small angle MSW solution (see \cite{klapneut,KKP}).}
\label{fig24}
\end{figure}
neutrino mass spectrum, which may be
described by the value
of $m_0$. Three possibilities are determined by
the relative values of $m_0^2$, $\Delta m^2_{21}$ and
$\Delta m^2_{31}$:
\begin{itemize}
\item
neutrino schemes with strong hierarchy:
$m_0^2 \ll \Delta m^2_{21} \ll \Delta m^2_{31}$,
\item
with partial degeneracy:
$\Delta m^2_{21} \ll m_0^2 \ll \Delta m^2_{31}$,
\item
and with complete degeneracy:
$\Delta m^2_{21} \ll \Delta m^2_{31} \ll m_0^2$.
\end{itemize}
\subsubsection*{1.3.4 Schemes with mass hierarchy \label{hs}}
In the hierarchical case,
\begin{equation}
m_0^2 \ll \Delta m^2_{21} \ll \Delta m^2_{31}~,
\end{equation}
the absolute values of two heavy neutrinos are completely
determined by the mass squared differences (that is,
by the oscillation parameters):
\begin{equation}
m_3^2 = \Delta m^2_{31} = \Delta m^2_{atm},~~
m_2^2 = \Delta m^2_{21} = \Delta m^2_{\odot},~~ m_1^2 = m_0^2,
\end{equation}
and the only freedom is the choice of the value of $m_1$.
In this scheme
there is no explanation of the LSND result, and
the contribution to the Hot Dark Matter component of the universe is
small: $\Omega_{\nu} < 0.01$.
Different solutions of the solar neutrino problem
lead to different implications for the effective neutrino mass.
It is useful to discuss the contributions of the mass eigenstates separately.
They are shown in figs. \ref{smix1}-\ref{smix3}.
In the {\it single maximal (large) mixing scheme} $\nu_{\mu}$ and $\nu_{\tau}$
are mixed strongly in $\nu_{2}$ and $\nu_{3}$.
The electron flavor is weakly mixed:
it is mainly
in $\nu_{1}$ with small admixtures in the heavy states.
The solar neutrino data are explained by
$\nu_e \rightarrow \nu_{2}$ resonance conversion inside the Sun.
For double beta decay searches this scheme is a kind of worst--case
scenario.
Due to the hierarchy of masses and the small admixture of $\nu_e$
in the heavy states,
$\langle m \rangle$
is dominated by the contribution of $m_3 \simeq \sqrt{\Delta m^2_{31}}$:
\begin{equation}
\langle m \rangle ^{(3)} \simeq U_{e3}^2 m_3,
\label{meffsmh}
\end{equation}
which is severely constrained by the CHOOZ experiment (see fig. \ref{smix1}).
In terms of oscillation parameters the effective neutrino mass can be
written as
\begin{equation}
\langle m \rangle \simeq \frac{1}{2}\sqrt{\Delta m_{atm}^2} \cdot
\left(1- \sqrt {1- \sin^2 2 \theta}\right).
\label{third}
\end{equation}
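Here $U_{e3}^2 = \sin^2\theta = \frac{1}{2}(1-\sqrt{1-\sin^2 2 \theta})$ and
$m_3 \simeq \sqrt{\Delta m^2_{atm}}$ have been inserted into eq. \ref{meffsmh}.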
The contribution from the second mass eigenstate
is $ < 10^{-5}$ eV.
As follows from fig. \ref{smix1} in the range of
$\Delta m^2$ relevant for the solution
of
the atmospheric
neutrino problem $\langle m \rangle$ can reach
$\langle m \rangle \approx \langle m \rangle^{(3)} = (3 - 4)\cdot 10^{-3}$ eV
and in the best fit range:
$\langle m \rangle \approx 2\cdot 10^{-3}$ eV.
Thus the 10 ton
GENIUS experiment could access
the unexcluded region of $\langle m \rangle$, while the observation of
neutrinoless double beta decay induced by the neutrino
mass mechanism with
$\langle m \rangle > 6 \cdot 10^{-3}$~eV, the final sensitivity of the 1 ton
version, would rule out the single maximal
scenario with maximal mass hierarchy.
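As a rough consistency check (using the CHOOZ constraint
$U_{e3}^2 < 5 \cdot 10^{-2}$ quoted below), eq. \ref{meffsmh} gives
\begin{equation}
\langle m \rangle^{(3)} \lesssim U_{e3}^2 \sqrt{\Delta m^2_{atm}}
\simeq 5 \cdot 10^{-2} \times 10^{-1}~{\rm eV} = 5 \cdot 10^{-3}~{\rm eV}
\end{equation}
for $\Delta m^2_{atm} \simeq 10^{-2}~{\rm eV}^2$, consistent with the numbers
quoted above.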
{\it Bi-large mixing:} The previous scheme can be modified in such a way that
solar neutrino data are explained by large angle MSW conversion.
The contribution from the third state is the same as in eq.
\ref{third}.
However, now the contribution from the second level can be
significant: both the mixing parameter and the mass are now larger.
The
contribution
from the second state equals
\begin{equation}
\langle m \rangle^{(2)} = m_2 |U_{e2}^2| = \sqrt{\Delta m_{\odot}^2} \sin^2
\theta_{\odot} \sim (0.8 - 6) \cdot 10^{-3} {\rm eV}.
\label{second}
\end{equation}
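For instance, with $\Delta m^2_{\odot} \simeq 10^{-4}~{\rm eV}^2$ and
$\sin^2 \theta_{\odot} \simeq 0.3$ one finds $\langle m \rangle^{(2)} \simeq
10^{-2} \times 0.3~{\rm eV} = 3 \cdot 10^{-3}$ eV, well inside the quoted range
(the parameter values of this estimate are for illustration only).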
Providing a sensitivity of $\langle m \rangle =0.001$ eV,
GENIUS could cover the main part of the
large
mixing angle MSW solution of the solar neutrino deficit and could
be complementary to the search for day-night effects (see fig. \ref{smix2}).
{\it Contributions of the first state:}
For a non-vanishing $m_0$ a further contribution for both schemes is implied
by an offset on top of the oscillation pattern. This contribution arising from
$m_1$ is shown in fig. \ref{smix3}. The total effective neutrino mass can
easily be determined from figs. \ref{smix1} - \ref{smix3} by adding the
single contributions.
This is true as
long as $m_1^2 \ll \Delta m_{12}^2 \simeq m_2^2$.
As can be seen from fig. \ref{smix3} this additional contribution
($< \langle m \rangle^{(1)} +2 m_0$) may easily
reach $10^{-2}$ eV without leaving the hierarchical pattern, shifting the
effective neutrino mass to observable values for the 1 ton version of GENIUS.
However, cancellations of
the contributions may also appear. In any case neutrino oscillations are not
sensitive to this quantity, putting GENIUS into a key position for testing
the mass of the first state.
\begin{figure}[htb]
\epsfxsize=10cm
\hspace*{0.8cm}
\epsfbox{neutnew1a.ps}
\caption{
Iso-mass ($\langle m \rangle$) lines
in the single maximal mixing
scheme with hierarchical mass pattern.
From the upper right downward $\langle m \rangle$ =
0.01, 0.009, 0.008, 0.007, 0.006,
0.005, 0.004, 0.003, 0.002, 0.001, 0.0009, 0.0008, 0.0007, 0.0006, 0.0005,
0.0004, 0.0003 eV. Also shown are the regions
favored by
the atmospheric neutrino data of Super--Kamiokande
with current bestfit and Kamiokande (lower and upper shaded areas,
respectively, according to \protect{\cite{Kaj99}})
and the borders of regions excluded by CHOOZ and
BUGEY (solid lines) as well as the expected final sensitivity of CHOOZ
(according to \protect{\cite{Dec99}})
and KAMLAND (dashed) as well as of MINOS and K2K (dash-dotted)
(according to \protect{\cite{Zub98}}). For less hierarchical scenarios
additional contributions from the first state arise (see fig. \ref{smix3}).
(from \protect{\cite{Kla99b}})}
\label{smix1}
\end{figure}
\begin{figure}[htb]
\epsfxsize=75mm
\hspace*{1.5cm}
\epsfbox{neutnew2.ps}
\caption{
The iso-mass $\langle m \rangle^{(2)}$ lines, determining the
contribution of the second
state in the $\Delta m_{12}^2 - \sin^2 2 \theta_{12}$ plane for the
hierarchical scheme with the LMA MSW solution.
From the upper right downward:
$\langle m \rangle^{(2)}$ =
0.01,
0.009, 0.008, 0.007, 0.006, 0.005, 0.004, 0.003, 0.002, 0.001 eV.
Also shown is the MSW LMA 90 \% C.L. allowed region from the combined rate
analysis of Homestake, Gallex, Sage and Super-K with the BP98 SSM and the
Super--Kamiokande Day-Night variation \protect{\cite{Fuk99}}
with the point showing the bestfit (rates only) according to
\protect{\cite{Bah98}}.
The solid, dashed and dash-dotted lines correspond to contours of constant
day-night asymmetry $A_{n-d}= (Q_n-Q_d)/(Q_n+Q_d)$ of average rates $Q$
in Super-Kamiokande, SNO and ICARUS respectively,
according to
\protect{\cite{bahkra}}.
KAMLAND should observe a
disappearance signal and the 10 ton version of GENIUS should see double beta
decay in this model.
(from \protect{\cite{Kla99b}})
}
\label{smix2}
\end{figure}
\subsubsection*{Schemes with partial degeneracy\label{pd}}
In the partially degenerate case,
\begin{equation}
\Delta m^2_{21} \ll m_0^2 \ll \Delta m^2_{31}~,
\end{equation}
the two light neutrinos have close masses determined by $m_0$
and the heaviest mass is determined by the oscillation
parameter:
\begin{equation}
m_1^2 \approx m_2^2 \approx m_0^2~, ~~~ m_3^2 \approx
\Delta m^2_{31} = \Delta m^2_{atm}~.
\end{equation}
The expression for the effective neutrino mass can be
written as
\begin{equation}
\langle m \rangle = m_0 (\sin^2 \theta_{\odot} +
e^{i \phi_{21}} \cos^2 \theta_{\odot}) +
e^{i \phi_{31}}\sqrt{\Delta m_{atm}^2} \sin^2 \theta_{atm},
\end{equation}
where $\theta_{atm}$ determines the admixture of the $\nu_e$
in $\nu_3$.
For the small mixing MSW solution of the solar neutrino problem we get
\begin{equation}
\langle m \rangle \approx m_0 +
e^{i \phi_{31}}\sqrt{\Delta m_{atm}^2} \sin^2 \theta_{atm}.
\end{equation}
For the {\it large mixing angle MSW solution}
cancellation of the contributions from the
lightest states may occur, so that even for relatively large
$m_0$ the third neutrino state
gives
the
main contribution, which is severely constrained by CHOOZ (see fig.
\ref{smix1}).
The partially
degenerate spectrum with
\begin{equation}
\Delta m^2_{21} \ll \Delta m^2_{31} = m_0^2
\end{equation}
leads to a scheme with inverse mass hierarchy:
\begin{equation}
m_1^2 \approx m_2^2 \approx m_0^2~ = \Delta m_{atm}^2, ~~~
m_3^2 \ll m_0^2.
\end{equation}
The effective Majorana mass can be written as
\begin{equation}
\langle m \rangle \approx \sqrt{\Delta m_{atm}^2}(\sin^2 \theta_{\odot} +
e^{i \phi_{21}} \cos^2 \theta_{\odot})~,
\label{mee3}
\end{equation}
where we have neglected the small contribution from the third state:
$m_3 U_{e3}^2$ ($U_{e3}^2 < 5 \cdot 10^{-2}$).
The two heavier eigenstates contribute to the hot dark matter (HDM):
\begin{equation}
\Omega_{\nu}=\frac{2 m_1}{91.5~{\rm eV}}\,h^{-2}.
\end{equation}
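For illustration, with $m_1 \simeq \sqrt{\Delta m_{atm}^2} \simeq 0.1$ eV and
$h=0.5$ this amounts to $\Omega_{\nu} \simeq 2 \times 0.1/91.5/0.25 \simeq
9 \cdot 10^{-3}$, i.e. a small but non-negligible HDM component (the numbers
serve as an example only).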
Assuming the {\it vacuum oscillation
solution}
(for inverse hierarchy no level--crossing in the sun and thus no MSW effect
appears), both addition of the contributions from the two
heavy states, yielding $\langle m \rangle \simeq 0.03-0.1$ eV,
as well as their compensation can occur. In the case
of the
bi-maximal scheme the compensation is complete. Again, additional
contributions from the lightest state, here $m_3$, may be possible.
In Fig. \ref{smix5} the sensitivity in the $m_0 - \sin^2 2 \theta$
plane is shown together with the favored regions of the
``Just-so'' vacuum solution.
Allowing for CP violation here just implies that the cancellation is less
effective: all values of $\phi_{12},\phi_{13}$ lead to a behavior of the mass
eigenstates settled between the extreme cases of 0 and $\pi$.
As can be seen, GENIUS may provide sensitive information about the mixing
and the degree of cancellation among the states, being complementary to
determinations of the sum of the neutrino mass eigenstates due to studies
of the power spectra of the cosmic microwave background or galaxies by
MAP and Planck or the SDSS (Sloan Digital Sky Survey) \cite{Eis98}.
\begin{figure}[!tt]
\epsfxsize=8cm
\hspace*{1.5cm}
\epsfbox{neutnew5.ps}
\caption{
Contribution of the first state in hierarchical
models with $m_1>0$.
Shown is $\langle m \rangle$ in the $m_0-\sin^2 2 \theta_{12}$ plane, together
with the favored regions for LMA MSW (solid)
and SMA MSW (dash-dot-dot), the SMA extends further to smaller mixing, where
$\langle m \rangle=m_0=const.$
The horizontal solid lines indicate the region above which the assumption
$m_1^2 \ll \Delta m^2 = m_2^2$
is not valid anymore (Super-K bestfit for MSW LMA and SMA).
Vacuum oscillations are not included here, since
for this case the model will be partially degenerate before any significant
contribution to $\langle m \rangle$ arises. It is easy to see that sizable
contributions from the first state could lead to observable double beta rates
even in hierarchical models.
(from \protect{\cite{Kla99b}})
}
\label{smix3}
\end{figure}
\begin{figure}[!h]
\epsfxsize=8cm
\hspace*{1.5cm}
\epsfbox{neutnew4.ps}
\caption{
Plotted are iso-mass lines in the $m_0-\sin^2 2 \theta$ plane for the case of
cancellation between degenerate states $m_1$ and $m_2$ in partially
degenerate scenarios
with two neutrinos contributing to the hot dark matter.
Mass splitting is neglected, since
$m_1-m_2 \ll m_0$ and $m_3-m_1$ changes the cosmological considerations
less than
10 \%.
Shown are the bestfits for CHDM (according to
\protect{\cite{eric98}}),
and $\Lambda$CHDM (according to
\protect{\cite{Pri98}})
for different values of the Hubble constant. Also shown is the sensitivity of
MAP/Planck combined with SDSS according to \protect{\cite{Eis98}}.
The regions of the MSW LMA and vac. osc. have been taken from
\protect{\cite{Bah98}}.
Also shown is the bestfit for atmospheric neutrinos, which gives a
lower limit for $m_0$ in inverse hierarchical models.
Combined with the neutrino oscillation results and the
precision determinations of cosmological parameters GENIUS will allow one to
obtain precise information about mixing and the absolute mass scale in
partially degenerate scenarios. Assuming, e.g., a worst case $m_0=0.06$ eV
just above the atmospheric neutrino bestfit, the MSW LMA or vacuum solution
would imply $\langle m \rangle$ = 0.03 eV
(from \protect{\cite{Kla99b}}). }
\label{smix5}
\end{figure}
\begin{figure}[!h]
\epsfxsize=8cm
\hspace*{1.5cm}
\epsfbox{neut3_3.6.ps}
\caption{
As figure \protect{\ref{smix5}}, but for totally degenerate scenarios, i.e.
three neutrinos contribute to the hot dark matter.
Combined with the neutrino oscillation results and the
precision determinations of cosmological parameters GENIUS will allow one to
obtain precise information about mixing and the absolute mass scale in
degenerate scenarios. E.g. assuming an overall scale of 0.3 eV corresponding
to the $\Lambda$CHDM models with $\Omega=0.04$ and $h=0.5$,
the bestfit of either the
MSW or the vacuum solution would imply $\langle m \rangle$ = 0.15 eV
(from \protect{\cite{Kla99b}}).}
\label{smix4}
\end{figure}
\subsubsection*{Schemes with complete degeneracy}
In degenerate schemes the
common mass is much larger than the mass splittings:
\begin{equation}
\Delta m^2_{21} \ll \Delta m^2_{31} \ll m_0^2.
\end{equation}
In this case the effective neutrino mass is
\begin{equation}
\langle m \rangle = (|U_{e1}|^2
+ |U_{e2}|^2 e^{i\phi_{21}}
+ |U_{e3}|^2 e^{i\phi_{31}}) m_0,
\end{equation}
which
is determined by mixing angles and relative phases of the mass
terms.
In the case of the small mixing MSW solution ($U_{e1}^2 \gg U_{e2}^2,
U_{e3}^2$) no substantial cancellation appears and
$\langle m \rangle \approx m_0$.
The same expression can be obtained for any solution of the solar neutrino
problem if the CP violating phases are zero: $\phi_{12}=\phi_{13}=0$ or
$\phi_{12}=0$, $U^2_{e3}\simeq 0$.
Double beta decay and neutrino oscillations decouple.
The effective neutrino
mass can be restricted by cosmological observations.
The contribution of neutrinos to the
HDM in the universe is
\begin{equation}
\Omega_{\nu}=\frac{3 m_0}{91.5~{\rm eV}}\,h^{-2}
\end{equation}
(see Fig. \ref{smix5}). In bimaximal schemes $\langle m \rangle$
is exactly vanishing.
However, comparing with the quark sector, this case seems to be rather
unnatural. For $U^2_{e3}\simeq 0$ and $\phi_{12}=\pi$ in eq. \ref{mee}
the double beta observable becomes
\begin{equation}
\langle m \rangle \simeq m_0 \sqrt{1 - \sin^2 2\theta}.
\end{equation}
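As a simple numerical illustration (the input values are taken from the
caption of fig. \ref{smix4} and serve as an example only): for an overall
scale $m_0 = 0.3$ eV and $h=0.5$ one has
$\Omega_{\nu} = 3 \times 0.3/91.5/0.25 \simeq 0.04$, and an observed
$\langle m \rangle = 0.15$ eV would correspond via the above relation to
$\sin^2 2\theta = 1 - (0.15/0.3)^2 = 0.75$.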
As in fig. \ref{smix5}, in fig. \ref{smix4} the sensitivity in the
$m_0 - \sin^2 2 \theta$
plane is shown together with the favored regions of the solar
MSW large mixing angle solution as well as the ``Just-so'' vacuum solution.
Again GENIUS provides a tool of unique sensitivity for determining mixings
and the degree of cancellation among the states.
\subsubsection*{LSND and four neutrino scenarios}
Additional neutrino states being singlets under the usual SU(2) have been
discussed \cite{Smi99} in order to account for the LSND anomaly with
\begin{equation}
\Delta m^2_{LSND}\simeq 1~{\rm eV}^2.
\end{equation}
The viable schemes contain two light states responsible for the solution of
the solar neutrino problem and two heavy states in the range relevant
for structure formation in the universe and for oscillations of atmospheric
neutrinos (see also the discussion in \cite{Giu99}).
$\nu_{\mu}$ and $\nu_{\tau}$ are strongly
mixed in two heavy mass eigenstates $\nu_2$ and $\nu_3$ with
\begin{equation}
\sqrt{m_3^2 - m_2^2} \equiv \sqrt{\Delta m_{ATM}^2} \ll
m_3 \approx m_{HDM},
\end{equation}
so that
$\nu_{\mu} \leftrightarrow \nu_{\tau}$ oscillations
solve the atmospheric neutrino problem.
$\nu_e$ and $\nu_s$ are weakly mixed in the two lightest
mass states. Resonance conversion $\nu_e \rightarrow \nu_s$
solves the solar neutrino problem.
As has been pointed out in \cite{Giu99} the inverse scheme requires strong
cancellation in the heavy states to fit the present bound from the
Heidelberg--Moscow experiment. This requires large mixing of $\nu_s$ and
$\nu_e$, which is excluded by BBN bounds on the number of neutrino species.
Since this issue is still rather controversial (see \cite{Giu99} and
references therein) it may be interesting to study the situation
of a weakened BBN bound. In this case the inverse hierarchical scheme is still
unexcluded in combination with the MSW LMA or vacuum solution and \cite{Bil99}
\begin{equation}
7 \cdot 10^{-2}~{\rm eV} < \langle m \rangle < 1.4~{\rm eV}.
\end{equation}
In fig. \ref{bilfig99} $\langle m \rangle$ is plotted as a function of
$\Delta m^2$ for the case of the LMA MSW solution.
\begin{figure}[htb]
\vspace*{2cm}
\hspace*{1cm}
\epsfxsize=9cm
{\epsfbox{bilfig.epsi}
\vspace*{-9cm}
\caption{
Four neutrinos in the scheme with direct mass hierarchy: The shaded area
shows the possible value of the effective Majorana mass $\langle m \rangle$
in the range of $\Delta m^2_{LSND}$ for the case of the MSW LMA solution of
the solar neutrino problem.
This case can be easily checked by the 1 ton
version of GENIUS (from \protect{\cite{Bil99}}).}
\label{bilfig99}}
\end{figure}
Turning to the hierarchical scheme
the contribution to $m_{ee}$ from the pair of heavy mass
states can be written as
\begin{equation}
m_{ee}^{(23)} = U_{e2}^2 m_2 + U_{e3}^2 m_3 \approx
(|U_{e2}|^2 + |U_{e3}|^2 e^{i\phi}) m_{3}~.
\end{equation}
The masses $m_{3} \approx m_{2}$ can be relevant for cosmology,
their value determines the
splitting between pairs of the heavy and
the light states and can induce the oscillations observed by LSND:
\begin{equation}
m_{3} = \sqrt{\Delta m_{LSND}^2} = \frac{1}{2} m_{HDM}~.
\end{equation}
Therefore,
\begin{equation}
\langle m \rangle^{(23)} =
(|U_{e2}|^2 + |U_{e3}|^2 e^{i \phi}) \sqrt{\Delta m_{LSND}^2}.
\end{equation}
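For $\Delta m^2_{LSND} \simeq 1~{\rm eV}^2$ this corresponds to
$m_3 \approx 1$ eV and hence $m_{HDM} \approx 2$ eV from the two heavy,
nearly degenerate states (illustrative numbers).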
Taking the bound from Bugey into account,
this leads to
\begin{equation}
7 \cdot 10^{-4}~{\rm eV} <\langle m \rangle^{(23)}< 2 \cdot 10^{-2}~{\rm eV},
\end{equation}
which may yield a positive signal in GENIUS \cite{Giu99}.
The contribution of the light states corresponds to the situation
in hierarchical schemes discussed above. It may become
significant in the case of strong cancellation in the heavy states.
\subsubsection*{Summary}
In summary, GENIUS can play an important role in reconstructing the neutrino
mass spectrum. In strongly hierarchical schemes the magnitude of the double beta
observable depends crucially on the assumed solution for the solar neutrino
problem. While assuming the MSW SMA or vacuum oscillations the observation
of $0\nu\beta\beta$ decay with $\langle m \rangle > 6 \cdot 10^{-3}$ eV would rule out
the scheme, in scenarios with MSW LMA GENIUS (10 tons) should observe a
positive signal for the main part of the MSW LMA solution, being complementary
to the search for day-night effects in present and future solar neutrino
experiments such as Superkamiokande, SNO or ICARUS.
In any case GENIUS may provide a unique possibility to determine
the mass of the lightest state. Even more stringent restrictions may be
obtained in partially or completely degenerate schemes, motivated by giving
sizable contributions to the hot dark matter in the universe.
In such scenarios already
the present half life limit of the Heidelberg--Moscow experiment requires
strong cancellation between the mass eigenstates. GENIUS could help to
determine the mixing in such schemes with extreme accuracy, providing
information complementary to precision tests of cosmological
parameters by MAP and Planck. In four neutrino scenarios GENIUS has good
perspectives for testing the LSND signal.
For further recent discussions of the potential of GENIUS for probing neutrino
masses we refer, e.g., to \cite{Kla99b,Giu99,Cza99,Vis99,Bil99}.
\subsubsection*{1.3.3.2 GENIUS and super--heavy left--handed neutrinos:}
Fig. \ref{fig25} (from \cite{Bel98}) compares the sensitivity of GENIUS for heavy
left-handed neutrinos (as function of $U_{ei}^2$, for which the present
LEP limit is $U_{ei}^2 \leq 5 \cdot 10^{-3}$ \cite{Nar95}) with the discovery
limit for $e^- e^- \rightarrow W^- W^-$ at Next Linear Colliders. The
observable in $0\nu\beta\beta$ decay is
\begin{equation}
\langle m^{-1}_{\nu} \rangle_H = {\sum_i}^{''}\, U^2_{ei} \frac{1}{M_i}.
\end{equation}
Also shown are the present limits from the Heidelberg--Moscow experiment
(denoted by $0\nu\beta\beta$) assuming different matrix elements. It is
obvious that $0\nu\beta\beta$ is more sensitive than any reasonable future
Linear Collider.
\begin{figure}[h!]
\epsfysize=85mm
\epsfbox{belanger_2.ps}
\caption{Discovery limit for $e^- e^- \rightarrow W^- W^-$ at a linear collider
as function of the mass $M_i$ of a heavy left--handed neutrino, and of
$U_{ei}^2$ for $\sqrt{s}$ between 500 GeV and 10 TeV. In all cases the
parameter space above the line corresponds to observable events.
Also shown are the limits set by the Heidelberg--Moscow $0\nu\beta\beta$
experiment as well as the prospective limits from GENIUS. The areas {\rm above}
the $0\nu\beta\beta$ contour lines are {\rm excluded}. The horizontal
line denotes the limit on neutrino mixing, $U_{ei}^2$, from LEP.
Here the parameter space above the line is excluded. (from \cite{Bel98}).
}
\label{fig25}
\end{figure}
\subsubsection*{1.3.3.3 GENIUS and left--right symmetry:}
If GENIUS is able to reach down to $\langle m_{\nu} \rangle \le 0.01$ eV, it would at
the same time be sensitive to right-handed $W$-boson masses up to
$m_{W_R} \ge 8$ TeV (for a heavy right-handed neutrino mass of
$1$ TeV) or $m_{W_R} \ge 5.3$ TeV (at $\langle m_N \rangle = m_{W_R}$)
\cite{Kla97d}.
Such a limit would be comparable to the one expected for LHC,
see for example \cite{Riz96}, which quotes a final sensitivity
of something like $5-6$ TeV. Note, however, that in order to
obtain such a limit the experiments at LHC need to accumulate
about $100~{\rm fb}^{-1}$ of statistics. A 10 ton version of
GENIUS
could even reach a sensitivity of $m_{W_R} \ge 18$ TeV (for a heavy
right-handed neutrino mass of
$1$ TeV) or
$m_{W_R} \ge 10.1$ TeV (at $\langle m_N \rangle = m_{W_R}$).
This means that already GENIUS 1 ton could be sufficient to definitely
test recent supersymmetric left--right symmetric models having the
nice features of solving the strong CP problem without the need for an axion
and having automatic R--parity conservation \cite{Kuc95,Moh96}.
\subsubsection*{1.3.3.4 GENIUS and $R_p$--violating SUSY:}
The improvement on the R--parity breaking Yukawa coupling $\lambda^{'}_{111}$
(see subsection 1.3.1) is shown in Fig. \ref{fig26}.
The full line to the right is the expected sensitivity of the
LHC -- in the
limit of large statistics. The three dashed--dotted lines denote (from top
to bottom) the current constraint from the Heidelberg--Moscow experiment
and the sensitivity of GENIUS 1 ton and GENIUS 10 tons, all
for the
conservative case of a gluino mass of 1 TeV. If squarks were heavier than
1 TeV, LHC could not compete with GENIUS; for typical squark masses
below 1 TeV, LHC could probe smaller couplings.
One should keep in
mind, however, that LHC can probe squark masses up to 1 TeV only with
several years of data taking.
The potential of GENIUS on R-parity breaking coupling products
derived from
the neutrino mass bounds
is shown
in tab. \ref{tabrpv2} \cite{Bha99}.
GENIUS in the 1(10) ton version would provide
a further improvement by 1(2) orders of magnitude compared to the
Heidelberg--Moscow experiment.
\begin{table}[!h]
\begin{center}
\begin{tabular}{ccc}
\hline
\hline
$\lambda^{(')}_{ijk}\lambda^{(')}_{i^{'}kj}$
& Our & Previous \\
& Bounds & Bounds \\
\hline
\hline
$m_{ee}<0.01 (0.001)$ eV & [GENIUS 1(10)t] & \\ \hline
$\lambda^{'}_{133}\lambda^{'}_{133}$ & $1.5 \cdot 10^{-9(-10)}$ & $4.9
\cdot 10^{-7}$ \\
$\lambda^{'}_{132}\lambda^{'}_{123}$ & $3.7 \cdot 10^{-8(-9)}$ & $1.6
\cdot 10^{-2}$ \\
$\lambda^{'}_{122}\lambda^{'}_{122}$ & $9.2 \cdot 10^{-7(-8)}$ & $4.0
\cdot 10^{-4}$\\
$\lambda_{133}\lambda_{133}$ & $2.6 \cdot 10^{-8(-9)}$ & $9.0 \cdot
10^{-6}$ \\
$\lambda_{132}\lambda_{123}$ & $4.3 \cdot 10^{-7(-8)}$ & $2.0 \cdot
10^{-3} $ \\
$\lambda_{122}\lambda_{122}$ & $7.1 \cdot 10^{-6(-7)}$ & $1.6 \cdot
10^{-3}$\\ \hline
\end{tabular}
\caption{Correlation among neutrino mass bounds from GENIUS and upper limits
on RPV couplings. We have used $m_d$=9
MeV, $m_s$= 170 MeV, $m_b$=4.4 GeV \protect{\cite{PDG98}}.
For
$\lambda$-products, $m_{\tilde{d}}$ should be read as
$m_{\tilde{e}}$. The relevant scalars are always assumed to have a
common mass of 100 GeV.
\label{tabrpv2}
}
\end{center}
\end{table}
\begin{figure}[h!]
\vskip-25mm
\hskip10mm
\epsfxsize=100mm
\epsfysize=120mm
\epsfbox{figph.ps}
\vskip-80mm
\noindent
$\lambda'_{111}$
\vskip45mm
\hskip90mm $m_{\tilde q}$ [GeV]
\bigskip
\caption{ Comparison of sensitivities of existing and future
experiments on $R_p \hspace{-1em}/\;\:$ SUSY models in the plane $\lambda'_{111}-m_{\tilde q}$.
Note the double logarithmic scale! Shown are the areas currently excluded
by the experiments at the TEVATRON, the limit from charged-current
universality, denoted by CCU, and the limit from absence of $0\nu\beta\beta${}
decay from the Heidelberg-Moscow collaboration ($0\nu\beta\beta${} HDMO).
In addition, the estimated sensitivity of HERA and the LHC is compared to the
one expected for GENIUS in the 1 ton and the 10 ton version (from [Kla97c]).}
\label{fig26}
\end{figure}
\subsubsection*{1.3.3.5 GENIUS and $R_p$--conserving SUSY:}
Since the limits on a `Majorana--like' sneutrino mass $\tilde{m}_M$ scale
with $(T_{1/2})^{1/4}$, GENIUS 1 ton (or 10 tons)
would test `Majorana' sneutrino masses lower
by factors of about 7(20), compared with present constraints
\cite{Hir97,Hir97a,Hir97b}.
\subsubsection*{1.3.3.6 GENIUS and Leptoquarks:}
Limits on the lepton--number violating parameters as defined previously
improve as $\sqrt{T_{1/2}}$. This means that for leptoquarks in the range
of 200 GeV LQ--Higgs couplings down to (a few) $10^{-8}$ could be explored.
In other words, if leptoquarks interact with the standard model Higgs boson
with a coupling of the order ${\cal O}(1)$, either $0\nu\beta\beta$ must be
found, or LQs must be heavier than (several) 10 TeV.
\subsubsection*{1.3.3.7 GENIUS and composite neutrinos}
GENIUS in the 1(10) ton version would improve the limit on the excited
Majorana neutrino mass deduced from the Heidelberg--Moscow experiment
(eq. 32) to
\begin{equation}
m_N\geq 1.1~(2.3)~{\rm TeV}.
\end{equation}
A recent detailed study \cite{Pan99} shows that while the HEIDELBERG--MOSCOW
experiment already exceeds the sensitivity of LEPII in probing compositeness,
GENIUS will reach the sensitivity of LHC. The $0\nu\beta\beta$ half life
for decay by exchange of a composite Majorana
neutrino is given by \cite{Pan99}
\begin{equation}
T_{1/2}^{-1}=\Big(\frac{f}{\Lambda_c}\Big)^4 \frac{m_A^8}{M_N^2}
|{\cal M}_{FI}|^2 \frac{G_{01}}{m_e^2},
\end{equation}
where $M_N$ is the composite neutrino Majorana mass and $f$ denotes the
coupling with the electron. Fig. \ref{fig27} shows the situation for GENIUS and LHC.
\begin{figure}[h!]
\epsfysize=10cm
\hspace*{1.5cm}
{\epsfbox{panneu.ps}
\caption{Sensitivity of LHC and GENIUS to compositeness parameters (assuming
$\Lambda_C=M_N$). Regions above the curves are excluded. The LHC bound is weaker
than the GENIUS bound for $M_N<550 (1000)$ GeV.
(from \cite{Pan99})
}}
\label{fig27}
\end{figure}
\subsubsection*{1.3.3.8 GENIUS, special relativity and equivalence principle
in the neutrino sector}
The presently strongest limits, given by the Heidelberg--Moscow experiment and
discussed in subsection 1.3.2, would be improved by 1--2 orders of magnitude.
It should be stressed again, that while neutrino oscillation bounds
constrain the region of large mixing of the weak and gravitational
eigenstates, these bounds from double beta decay apply even in the case
of no mixing and thus probe a totally unconstrained region in the parameter
space.
\subsection{The Solar Neutrino Potential of GENIUS}
\subsubsection{Introduction}
The study of neutrinos coming from the Sun is a very active area of research.
Results from five solar neutrino experiments are now available.
These experiments measure the solar neutrino flux with different energy
thresholds and using very different detection techniques.
All of them, the Chlorine experiment at Homestake \cite{chlor},
the radiochemical Gallium experiments, GALLEX \cite{gallex} and SAGE
\cite{sage}, the water Cerenkov detectors Kamiokande \cite{kamiok} and
Super-Kamiokande \cite{SuperK},
measure a deficit of the neutrino flux
compared to the predictions of the standard solar model (SSM) \cite{SSM}.
Recently it has been pointed out that it is impossible to construct
a solar model which would reconcile all the data \cite{hiroshi}.
Moreover, a global analysis of the data of all the experiments does not leave
any room for the $^7$Be neutrinos \cite{Bah98b}.
On the other hand the predictions of the SSM have
been confirmed by helioseismology \cite{basu} to a high precision.
An explanation of the results of solar neutrino experiments
seems to require new physics beyond the standard model of electroweak
interaction.
If neutrinos have non-zero masses and if they mix in analogy to the quark
sector, then conversions between different neutrino flavours become
possible. Flavour conversions can occur in different physical scenarios,
depending on certain parameters on neutrino masses and mixing angles.
One oscillation scenario makes use of the
MSW-mechanism \cite{msw85}, where the solar $\nu_e$ transform into other
neutrino flavours or into sterile neutrinos as they pass through a thin
resonance region near the solar core.
The other scenario assumes that the neutrinos
oscillate in the vacuum between the Sun and the Earth \cite{gla87}, which means
that the oscillation length ``just so'' matches the Earth-Sun distance.
\subsubsection{The solar neutrino spectrum}
The Sun acquires its energy by nuclear reactions taking place in the core,
mainly via the so-called pp-chain (see Fig. \ref{ppchain}).
\begin{figure}[h!]
\epsfxsize=12cm
\epsfbox{sreactions.eps}
\caption{Nuclear reactions in the pp-chain in the Sun.}
\label{ppchain}
\end{figure}
The neutrino spectrum predicted by the SSM for the pp-chain
is shown in Fig. \ref{nuspec}. The dominant part of the flux is
emitted at energies below 1 MeV.
\begin{figure}[h!]
\epsfxsize=12cm
\epsfbox{sspectrum.eps}
\caption{Predicted solar neutrino spectrum in the SSM (from \cite{bah91}).}
\label{nuspec}
\end{figure}
The pp neutrinos, emitted in the reaction p+p $\rightarrow$ D+e$^+$+$\nu_e$,
have a continuous energy spectrum with the endpoint at 420 keV.
Their flux is most accurately predicted in the SSM, since it is strongly
restricted by the solar luminosity and by helioseismological measurements.
The other main features of the solar neutrino spectrum are a strong
monoenergetic line at 861 keV, from the electron capture reaction
$^7$Be + e$^-$ $\rightarrow$ $^7$Li + $\nu_e$, the $^7$Be neutrinos, and a continuous
spectrum of neutrinos extending up to 15 MeV, due to the reaction
$^8$B$\rightarrow$2$\alpha$+e$^+$+$\nu_e$, the $^8$B neutrinos.
Table \ref{nufluxes} gives the solar neutrino fluxes in the SSM with
their respective uncertainties (from \cite{Bah98c}).
\subsubsection{Present status of the solar neutrino experiments}
The solar neutrino problem has been known for two decades,
since the Homestake experiment reported its first result.
At that time, however, it was not clear if the difference between
the chlorine measurement and the standard solar model prediction
was due to experimental systematics or the uncertainties in the
SSM or if it was a sign of new physics.
Meanwhile, the observed discrepancy was confirmed by four
other solar neutrino experiments (see Fig. \ref{theoexp}, from \cite{Bah96}).
Model-independent analyses performed by many authors (see \cite{hiroshi}
and references therein)
suggest that the solar neutrino problem can only be solved if
some additional assumptions are made in the standard electroweak
theory. The most generic assumption is to give neutrinos a mass,
which leads to neutrino oscillations in vacuum or matter.
Oscillations between two neutrino species are characterized by two
parameters: $\Delta$m$^2$, the difference of the squared mass eigenstates,
and $\theta$, the mixing angle between the mass eigenstates.
The Ga experiments, sensitive to the low-energy pp and $^7$Be neutrinos,
combined with the Homestake and Super-Kamiokande experiments, which
are sensitive to the high-energy $^8$B neutrinos, strongly restrict
the allowed range of $\Delta$m$^2$ and $\theta$ for all oscillation
scenarios.
There exist four parameter areas compatible with the results of
all existing solar neutrino experiments: the
large mixing angle solution (LMA), the small mixing angle solution (SMA),
the low mass solution (LOW) and the vacuum oscillation solution
with strong mixing (see Fig. \ref{solutions} for the MSW-solutions).
To date, there is no clear evidence for one of the above solutions.
To clarify the situation, there is great demand for additional solar
neutrino experiments, especially at energies below 1 MeV.
Borexino \cite{borexprop} is now being built specifically to measure
the flux of $^7$Be neutrinos in real time.
It will use 300 tons of organic scintillator
(100 tons of fiducial volume) to detect recoil electrons from
elastic neutrino-electron scattering. Since the scintillator has
no directional information and the signal is characterized only by the
scintillation light produced by the recoil electron, very stringent
constraints on the radiopurity of the scintillator and on the activity
of all detector materials are imposed.
\begin{table}
\hspace*{2.4cm}
\begin{tabular}{lc}
Source & Flux (10$^{10}$ cm$^{-2}$s$^{-1}$)\\
\hline
pp & 5.94 $\pm$ 0.01 \\
pep & 1.39$\times$10$^{-2}$$\pm$ 0.01 \\
$^7$Be & 4.80$\times$10$^{-1}$$\pm$ 0.09 \\
$^8$B & 5.15$\times$10$^{-4}$$\pm$ 0.19 \\
\end{tabular}
\caption{Solar Standard Model predictions of the neutrino fluxes, from
\cite{Bah98c}}
\label{nufluxes}
\end{table}
So far, there exist three proposals to measure the pp-flux in real time,
HERON \cite{heron}, HELLAZ \cite{hellaz} and LENS \cite{lens}.
The HERON project will use $^4$He in its superfluid state (at 20 mK) as the
target medium. The detection reaction is elastic neutrino-electron
scattering, the electron recoil energy is converted into low-energy
elementary excitations of the helium, rotons, which can be detected.
For a fiducial volume of seven tons, the total SSM predicted event rate
is 14 per day (8 events per day from the pp neutrinos). HERON would
measure only the energy distribution of recoiling electrons, without
a direct determination of the neutrino energy.
In the HELLAZ project a large TPC (2000 m$^3$) filled with gaseous helium
at high pressure (5 atm.) and low temperature (77 K) will serve as a target.
It is planned to measure both the kinetic energy and the scattering angle
of recoil electrons from elastic neutrino-electron scattering and thus
to determine the solar neutrino energy.
The kinetic energy of recoil electrons is measured by counting the individual
electrons in an ionisation cloud generated by the energy loss of the recoil
electron due to ionisation in the helium gas.
The expected event rate for 2$\times$10$^{30}$ target electrons is 7 per day
and 4 per day for pp neutrinos and $^7$Be neutrinos, respectively.
\begin{figure}[h!]
\epsfysize=10cm
\epsfbox{thvsexp.eps}
\caption{Comparison of the total rates predicted in the SSM and the
observed rates in the present solar neutrino experiments,
from \cite{Bah96}.}
\label{theoexp}
\end{figure}
LENS would be an approach complementary to the above detectors, which use
flavour independent elastic scattering from electrons.
The method of neutrino detection is neutrino capture in $^{82}$Se,
$^{160}$Gd or $^{176}$Yb.
The neutrino captures occur to excited states of the final nuclides,
providing a strong signature against radioactive background.
The thresholds for neutrino capture are 173 keV for $^{82}$Se, 244 keV
for $^{160}$Gd and 301 keV for $^{176}$Yb. Three different techniques
for implementation as a solar neutrino detector are explored at present
\cite{lenseloi}:
liquid scintillator loaded with Yb or Gd, scintillating crystals of silicates
of Gd (GSO) and time projection chambers with a gaseous compound of isotopic
$^{82}$Se.
All of these projects are still in a stage of research and development;
they have not yet shown full feasibility for implementation as a solar
neutrino detector.
\begin{figure}[h!]
\epsfysize=10cm
\epsfbox{mswrates.ps}
\caption{The allowed regions (99\% C.L.) in $\Delta m^2$ ---
$\sin^22\theta$ parameter space for the MSW solution, from \cite{Bah98}. }
\label{solutions}
\end{figure}
\subsubsection{ Time signatures of solar neutrinos}
Due to the eccentricity of the Earth orbit, seasonal variations
in the flux of solar neutrinos are expected.
The number of neutrinos of all flavours reaching the Earth is larger
when the Earth is closer to the Sun than when it is farther away and
should vary with 1/R$^2$, where R is the Earth-Sun distance,
R=R$_{0}$(1-$\epsilon$cos(2$\pi$t/year)), with R$_{0}$= 1 AU and $\epsilon$=0.017.
The neutrino flux thus shows a seasonal variation of about 7\% from
maximum to minimum.
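(Explicitly, since the flux scales as $1/R^2$, the ratio of maximal to minimal
flux is $(1+\epsilon)^2/(1-\epsilon)^2 \approx 1 + 4\epsilon \approx 1.07$,
i.e. the quoted $\sim$7\% effect.)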
This variation can in principle be used by a real time solar neutrino
experiment to extract the neutrino signal independently of background
(if the background is stable in time) and is limited only by
statistics.
Beyond the so-called `normal' seasonal variation, an anomalous
seasonal variation is predicted for the $^7$Be neutrino flux
in case of the vacuum oscillation solution,
since the oscillation length in this case is comparable to the seasonal
variation of the Earth-Sun distance due to the eccentricity of the
Earth orbit.
The flux variations in this case are much larger than for the normal
seasonal variation;
they could serve as a unique
signature of vacuum oscillations \cite{gla87}.
If neutrinos oscillate via the MSW-effect, then a regeneration of
electron-neutrinos while passing through the Earth is predicted
\cite{bahc89}.
The so-called day/night-effect is neutrino energy dependent; its detection
would be strong evidence for the MSW-effect. In Fig. \ref{daynight}
(from \cite{bahkra}) the $\nu_e$ survival probabilities for the MSW solutions
computed for the day-time and night-time are shown. At
low energies only the LOW solution shows significant differences between the
day- and night-time survival probability.
Therefore this solution could be tested by a real-time detector
of low energy solar neutrinos, in particular by measuring the pp and
$^7$Be neutrino flux.
\begin{figure}[h!]
\epsfysize=10cm
\epsfbox{survival2.ps}
\caption{Survival probabilities for an electron neutrino created in the Sun
for the three MSW solutions, from \cite{bahkra}. SMA, LMA, LOW stand
for the small mixing angle, the large mixing angle and the low
$\Delta$m$^2$ MSW-solutions. }
\label{daynight}
\end{figure}
\subsubsection{GENIUS as a solar neutrino detector}
The goal of the GENIUS project as a dark matter detector is to
achieve the background level of 10$^{-3}$ events/kg y keV in the energy
region below 100 keV. Such a low background in combination with
a target mass of at least 1 ton of natural (or enriched) Ge opens
the possibility to measure the solar pp- and $^7$Be-neutrino flux
in real time with a very low energy threshold.
\subsubsection{Signal Detection}
The detection reaction is the elastic scattering process
$\nu$ + e$^- \rightarrow$ $\nu$ + e$^-$.
The maximum electron recoil energy is 261 keV for the pp-neutrinos and 665
keV for the $^7$Be-neutrinos \cite{bahc89}.
The energy of the recoiling electrons is detected through ionisation
in high purity Ge detectors.
GENIUS in its 1 ton version would consist of an array of about 400 HPGe
detectors, 2.5 kg each. Thus, the sensitive volume would be naturally divided
into 400 cells, which helps in background discrimination, since a
neutrino interaction takes place in a single cell.
\subsubsection{Signal Rates}
The dominant part of the signal in GENIUS is produced by
pp-neutrinos (66 \%) and the $^7$Be-neutrinos (33\%).
A target mass of 1 ton (10 tons) of natural or enriched Ge corresponds
to about 3$\times$10$^{29}$ (3$\times$10$^{30}$) electrons.
With the cross section for elastic neutrino-electron scattering
\cite{bahc89}:\\
\noindent
$\sigma_{\nu_{e}}$ = 11.6 $\times$ 10$^{-46}$cm$^2$ \hspace*{3mm} pp\\
$\sigma_{\nu_{e}}$ = 59.3 $\times$ 10$^{-46}$cm$^2$ \hspace*{3mm} $^7$Be\\
\noindent
and the neutrino fluxes \cite{Bah98c}:\\
\noindent
$\phi_{pp}$ = 5.94 $\times$10$^{10}$ cm$^{-2}$s$^{-1}$\\
$\phi_{^{7}Be}$ = 0.48 $\times$10$^{10}$ cm$^{-2}$s$^{-1}$\\
\noindent
the expected number of events calculated
in the standard solar model (BP98 \cite{SSM}) can be estimated:\\
\noindent
R$_{pp}$ = 69 SNU = 1.8 events/day (18 events/day for 10 tons)\\
R$_{^7Be}$ = 28.5 SNU = 0.6 events/day (6 events/day for 10 tons).\\
\noindent
The event rates for full $\nu_e \rightarrow \nu_{\mu}$ conversion
are 0.48 events/day for pp-neutrinos and 0.14 events/day for
$^7$Be-neutrinos for 1 ton of Ge and ten times higher for 10 tons
(see also Table \ref{rates}).
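The pp rate quoted above can be reproduced by a simple rate estimate
$R = \sigma_{\nu_e} \phi_{pp} N_e$; the following short sketch (illustrative
only, neglecting threshold effects and the detector response) shows the
arithmetic:
\begin{verbatim}
# Rough cross-check of the quoted pp-neutrino rate (sketch only:
# threshold and detector response are neglected).
SIGMA_PP = 11.6e-46   # cm^2, elastic nu_e-e cross section (pp)
FLUX_PP  = 5.94e10    # cm^-2 s^-1, SSM pp flux
N_E      = 3e29       # electrons in 1 ton of Ge (approximate)

rate_per_day = SIGMA_PP * FLUX_PP * N_E * 86400.0
print(rate_per_day)   # ~1.8 events/day, as quoted above
\end{verbatim}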
\begin{table}
\begin{tabular}{lcc}
Case & Events/day & Events/day \\
& 11-665 keV& 11-665 keV\\
& (1 ton) & (10 tons)\\
\hline
SSM & 2.4 & 24 \\
Full $\nu_e \rightarrow \nu_{\mu}$ conversion & 0.62 & 6.2\\
\end{tabular}
\caption{Neutrino signal rates in GENIUS for 1 ton (10 tons) of
Germanium.}
\label{rates}
\end{table}
\subsubsection{Background requirements}
GENIUS is conceived such that the external background from the natural
radioactivity of the environment and from muon interactions is reduced
to a minimum, the main background contributions coming from the
liquid nitrogen shielding and the Ge detectors themselves.
To measure the low-energy solar neutrino flux, a nitrogen shielding
of 13 m in diameter is required.
Regarding the radiopurity of li\-quid nitrogen, the values reached at
present by the Borexino collaboration for their liquid scintillator
would be sufficient. Much attention has to be paid to the cosmogenic
activation of the Ge crystals at the Earth surface. In case of one day
exposure, five years of deactivation below ground are required.
The optimal solution would be to produce the detectors in an underground
facility.
Table \ref{sol_backgr} shows the expected background events in the
energy region 11-260 keV and 11-665 keV.
\begin{figure}[!ht]
\epsfxsize=100mm
\epsfbox{pp_7be.eps}
\caption{Simulated spectra of the low energy neutrino signal (in the
SSM) and the total background in GENIUS (1 ton of natural germanium).}
\label{simspektrum}
\end{figure}
Fig. \ref{simspektrum} shows the simulated spectrum of the low-energy neutrino
signal in GENIUS, together with the total expected background.
If the signal-to-background ratio S/B is greater than 1, then the
pp- and $^7$Be-neutrino flux can be measured by spectroscopic techniques
alone.
If S/B $<$ 1, one can make use of a solar signature in order to derive
the flux.
\begin{table}
\begin{tabular}{lc}
Energy region & Events/day \\
\hline
11 - 260 keV & 1.4 \\
11 - 665 keV & 1.8 \\
\end{tabular}
\caption{Expected background events in the GENIUS experiment (1 ton of Germanium).}
\label{sol_backgr}
\end{table}
The eccentricity of the Earth's orbit induces a seasonal variation
of about 7\% from maximum to minimum. Even if the number of background
events is not known, the background event rate and the signal event rate
can be extracted independently by fitting the event rate to the seasonal
variation. The only assumption is that the background is stable in time
and that enough statistics is available.
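A minimal sketch of such a fit (all rates are illustrative toy values,
with the pp signal and the background of Table \ref{sol_backgr} used as
inputs) could look as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

EPS = 0.017  # eccentricity of the Earth orbit

def rate(t, background, signal):
    # t in years; the signal scales as 1/R^2, the background is constant
    r = 1.0 - EPS * np.cos(2.0 * np.pi * t)
    return background + signal / r**2

t = np.linspace(0.0, 5.0, 5 * 365)        # five years of daily bins
rng = np.random.default_rng(0)
counts = rng.poisson(rate(t, 1.4, 1.8))   # toy data: B=1.4, S=1.8 per day

popt, pcov = curve_fit(rate, t, counts, p0=[1.0, 1.0])
print(popt)  # recovered (background, signal) rates per day
\end{verbatim}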
In case of a day/night variation of the solar neutrino flux,
GENIUS would be sensitive to the LOW MSW solution of the
solar neutrino problem (compare Fig. \ref{daynight}).
GENIUS could be the {\sl first detector to detect the solar pp neutrinos
in real time.}
Although this imposes
very strong purity restrictions for all the detector components, with a
liquid nitrogen shielding of 13 m in diameter and production of the Germanium
detectors below ground, it should be feasible to achieve such a low
background level.
The advantages are the well understood detection technique
(ionization in a HPGe detector), the excellent energy resolution (1 keV
at 300 keV), low energy threshold (about 11 keV) and the measurement
of the recoiling electrons in real time.
The good energy resolution for detecting the recoiling electrons
would allow one, for the first time, to measure the predicted 1.3 keV
shift of the average energy of the beryllium neutrino line.
This shift is a direct measure of the central temperature of the Sun
\cite{bah93}.
\section{The GENIUS experiment}
\subsection{Design, detection technique, threshold}
GENIUS will operate an array of 40 or 300 ``naked'' Ge crystals (natural
Ge for WIMP-detection and measurement of the pp-flux, enriched
$^{76}$Ge for double beta decay searches) in a cylindrical vessel
filled with liquid nitrogen.
The basic idea of the GENIUS setup relies on the fact that most of the
contributions of measured spectra in conventional low level detectors
result from the cryostat system and the shielding material. If these
can be eliminated reasonably, the sensitivity of the experiment
increases accordingly (linearly in the case of dark matter and solar
neutrino search). It is therefore essential to keep radioactive
materials and sources away from the detector as far as possible.
In case of the GENIUS project this will be accomplished by the use of
liquid nitrogen as a cooling medium and, at the same time, as the shielding
material against the natural radioactivity of the environment.
Liquid nitrogen has the advantage that it can be processed to a
very high purity through fractional distillation.
In this way practically all radioactive impurities in the material
near the detectors, which are known to produce the main part of the
radioactive background, are eliminated.
\subsubsection{Detector Size}
Due to its rather low density (0.8 g/cm$^3$), the nitrogen
shielding has to be several meters in diameter.
The dimensions of the vessel depend on the gamma- and
n-flux in the Gran Sasso Laboratory and on the intrinsic
radiopurity of the steel vessel.
The required background conditions imply a tank size of 12 m in
diameter and height (13 m for solar neutrino detection),
if no other shielding is used in addition to the nitrogen (see Chapter
3.1). The tank size could be reduced to some limited extent, as far as the
$\gamma$-radiation and neutron flux from outside the tank are concerned,
by replacing part of the outer
nitrogen by other shielding material, e.g.
lead (2 m of nitrogen could be replaced by a layer of 10.8 cm Pb).
The minimal diameter of the nitrogen tank
would physically be determined by the contamination of the vessel
and the shielding material and by the distance of the lead and the
tank wall from the crystals. It has been calculated that a nitrogen
tank of $\sim$ 8 m in diameter would be the minimum for this purpose.
This gives some flexibility to adapt the setup to the different sizes
of the halls in the Gran Sasso.
Of course the cost of the project would be increased in a
non-negligible way by such a lead layer
(10.8~cm correspond to $\sim$1000 tons).
We have also considered
other alternative setups (see, e.g. \cite{Kla98i}).
We see, however, no other reasonable way
to accomplish the goal of
reducing the background to the required level than to use a tank of
the above dimensions.
These considerations also show that an intermediate size test setup
(as discussed in chapter 3.1) as a first step towards the full setup seems
unreasonable.
Figure \ref{confA} shows the design of the experiment,
which could be located, e.g., in the Gran Sasso Underground
Laboratory, or in the WIPP laboratory.
\subsubsection{Detection Technique}
The proposed detection technique for GENIUS is ionization in a
Germanium detector.
The detectors would be coaxial HPGe crystals of p-type, weighing
about 2.5 kg each. For p-type crystals, the outer contact is n$^+$ and
the surface dead layer has a thickness of several hundred
micrometers. This prevents the detection of $\beta$-particles and
gamma rays of low energy from outside the crystals.
The optimal working temperature is 77 K.
Besides the energy signal, the pulse shape of the interactions can be
recorded for background discrimination.
The energy resolution of GENIUS would be about 0.3\%, the energy
threshold about 11 keV.
\subsection{Signals and signatures}
\subsubsection{Dark Matter}
The signal for a dark matter WIMP with mass between 20 GeV and 1 TeV is
expected in the energy region below 100 keV. The event rates for the
neutralino as the lightest supersymmetric particle
range in most SUSY models from 10$^{-2}$ to 10$^{2}$ events/kg y keV.
The low-energy spectrum in GENIUS is dominated by the 2$\nu \beta\beta$ signal
from the decay of $^{76}$Ge. For natural Germanium (7.8\% $^{76}$Ge)
an event rate of 3$\times$10$^{-2}$ events/kg y keV from 2$\nu \beta\beta$ decay is expected.
Therefore, the 2$\nu \beta\beta$ signal has to be subtracted.
Another possibility is to make use of the predicted seasonal
modulation of the WIMP flux. Due to the motion of the Sun in the
galactic halo and the Earth motion around the Sun, a flux variation of
7\% between two extremes is expected \cite{freese}.
\subsubsection{Neutrinoless double beta decay}
The expected signature for the 0$\nu \beta\beta$ decay of $^{76}$Ge
is a peak at the energy of 2038.56$\pm$0.32 keV \cite{hyka91}. The event rate for 1 ton
of enriched $^{76}$Ge and an effective Majorana neutrino mass of 0.01
eV is 0.3 events/yr. Due to the good energy resolution of Ge
detectors (typically better than 0.3 \% ), the 0$\nu \beta\beta$ signal
is not affected by the 2$\nu \beta\beta$ spectrum.
\subsubsection{Solar neutrinos}
The reaction used to detect solar neutrinos is the elastic neutrino
electron scattering: $\nu$ + e$^- \rightarrow$ $\nu$ + e$^-$.
The maximum electron recoil energy is 261 keV for the pp-neutrinos and 665
keV for the $^7$Be-neutrinos \cite{bahc89}.
The detection rates for the pp and $^7$Be-fluxes, calculated for the
SSM \cite{SSM}, are R$_{pp}\simeq$ 70 SNU and R$_{^7Be}\simeq$ 26 SNU (1 SNU =
10$^{-36}$/(s target atom)). For one ton of natural (or enriched)
Ge (corresponding
to 3$\times$ 10$^{29}$ electrons), the total rates are R$_{pp}\simeq$
1.8 events/day and R$_{^7Be}\simeq$ 0.65 events/day, assuming the
detection of all electrons. This is about ten times higher than the rates
in the present radiochemical gallium experiments (GALLEX and SAGE).
The event rates for full $\nu_e \rightarrow \nu_{\mu}$ conversion
are 0.48 events/day for pp-neutrinos and 0.14 events/day for
$^7$Be-neutrinos.
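For orientation, these event rates follow directly from the SNU values and
the number of target electrons (a simple consistency check of ours):
\begin{align*}
R_{pp}&\simeq 70\times 10^{-36}\,{\rm s^{-1}}\times 3\times 10^{29}\times
86400\,{\rm s/d}\simeq 1.8\;{\rm events/day},\\
R_{^7Be}&\simeq 26\times 10^{-36}\,{\rm s^{-1}}\times 3\times 10^{29}\times
86400\,{\rm s/d}\simeq 0.67\;{\rm events/day}.
\end{align*}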
GENIUS can measure only the energy distribution of the recoiling
electrons, whereas the energy of the incoming neutrinos is not directly
determined. However, due to the excellent energy resolution of the
detectors and the
difference in the elastic scattering cross section of electron and
muon neutrinos, a comparison of the energy spectrum of recoiling
electrons with the theoretical prediction of the SSM can be made.
Due to its relatively high counting rate, GENIUS would be able to test the
LOW MSW
flavour conversion solution \cite{bahkra} via the day-night modulation of the
neutrino flux and the vacuum-oscillation solution via the seasonal
flux variation.
\subsection{Technical study of detector operation}
To demonstrate the feasibility of operating Ge detectors in liquid nitrogen,
instead of in a vacuum--tight cryostat system \cite{kno89},
a first experiment has been successfully performed in the low level
laboratory in Heidelberg with one naked p--type Ge crystal immersed in a 50
l dewar \cite{KK3}.
Already in this attempt we could not see any deterioration in the
detector performance relative to our conventionally operated detectors.
\begin{figure}[!t]
\hspace*{1.1cm}
\epsfxsize9cm
\epsfbox{baudis.ps}
\caption{The three--crystal holder-system with germanium crystals mounted
shortly before cooling. Some crystal-to-FET cables can be seen.}
\label{canberra2}
\end{figure}
In a second phase the goal was to look for possible interferences between
two or more naked Ge crystals, to test different cable lengths between
FETs and crystals and to design and test a preliminary holder system
of high molecular polyethylene.
We performed a technical study operating three germanium detectors on a
common plastic holder system inside liquid nitrogen \cite{Bau98}.
All crystals were of p--type and weighed about 300 g each.
A picture of the three--crystal holder--system can be seen in
figure ~\ref{canberra2}.
Two thin polyethylene plates (1 cm thick) are used to fix the contacts
to the crystals. The FETs are placed close to the liquid nitrogen
surface but kept inside. Cables having three different lengths (2, 4 and
6 m) connect the three crystals to their FETs.
The main purpose of the experiment was to test the behaviour of the
crystals in the low energy region: energy resolution, energy
threshold, crosstalk between the detectors and possible signs of microphonic events caused by nitrogen boiling.
The general performance
of the crystals is as stable as already seen with a single detector inside
liquid nitrogen. We could not observe any cross talk using only p--type
detectors (same polarity for the HV bias), since cross talk signals have the wrong
polarity and are filtered by the amplifier.
Figure \ref{topfgs_back} shows a background spectrum and
figure \ref{baspec} a $^{133}$Ba calibration spectrum of one of the naked Ge
detectors in liquid nitrogen. The
cable length between detector and FET was 6 m (wound up in loops).
We achieved an energy resolution of 1.0 keV at 300 keV and a threshold of 2 keV.
No microphonic events due to nitrogen boiling beyond 2 keV could be detected.
We conclude that the performance of the Ge detectors is as good as (or
even better than) for conventionally operated crystals, even with 6 m of
cable between crystal and FET.
\begin{figure}[!tt]
\epsfxsize12cm
\epsfbox{back32.eps}
\caption{Background spectrum of a naked, unshielded Ge crystal in liquid nitrogen.
Note the low energy threshold of 2 keV of the detector.}
\label{topfgs_back}
\end{figure}
\begin{figure}[h]
\epsfxsize12cm
\epsfbox{barium_32.eps}
\caption{Calibration $^{133}$Ba spectrum of a naked Ge crystal in
liquid nitrogen.
The energy resolution is 1 keV at 300 keV.}
\label{baspec}
\end{figure}
\begin{figure}[h]
\epsfysize10cm
\hspace*{1.2cm}
\epsfbox{det_kevlar_2.ps}
\caption{A naked Ge crystal suspended on kevlar wires. Only 3 g of
material in total (kevlar and electrical contacts) were used.}
\label{kevlar}
\end{figure}
A third phase was dedicated to the optimization of the holder system
design (material minimization).
In figure \ref{kevlar} a Ge crystal suspended on kevlar wires can be
seen. The inner contact is fixed with a stainless steel spring, the
outer contact with a thin stainless steel wire. Only 3 g of material
in total (kevlar plus steel wires) were used. Figure \ref{back} and
\ref{ba} show a background and a $^{133}$Ba calibration spectrum of a
400 g crystal in liquid nitrogen. An energy resolution of 1.2 keV at 300 keV and a
threshold of 2.5 keV were achieved.
\begin{figure}[h]
\epsfxsize12cm
\epsfbox{back.eps}
\caption{Background spectrum of a naked, unshielded 400g Ge crystal in liquid nitrogen.
The energy threshold is 2.5 keV.}
\label{back}
\end{figure}
\begin{figure}[h]
\epsfxsize12cm
\epsfbox{baco.eps}
\caption{Calibration $^{133}$Ba spectrum of a 400g naked Ge crystal in liquid nitrogen.
The energy resolution is 1 keV at 300 keV.}
\label{ba}
\end{figure}
Currently we are measuring the radiopurity of kevlar and at the same time
we are testing other possible materials for the holder system.
\section{Background simulations}
To study the expected background in the GENIUS experiment,
we performed detailed Monte Carlo
simulations and calculations of all the relevant background sources
\cite{Bau98}.
The sources of background can be divided into external and internal ones.
External background is generated by events originating from outside
the liquid shielding, such as photons and neutrons from the Gran Sasso
rock, muon interactions and muon induced activities.
Internal background arises from residual impurities in the liquid
nitrogen, in the steel vessels, in the crystal holder system, in the Ge crystals
themselves and from activation of both liquid nitrogen and Ge crystals
at the Earth's surface.
For the simulation of muon showers, the external photon flux
and the radioactive decay chains we used the GEANT3.21 package
\cite{geant} extended for nuclear decays \cite{mueller}.
This version had already successfully been tested in establishing a
quantitative background model for the Heidelberg--Moscow experiment
\cite{HM97}.
We used the following detector geometry to perform the simulations.
The nitrogen shielding is given by a cylindrical geometry
of variable diameter and height,
with the crystals positioned in its center.
The vessel is surrounded by a 2 m thick polyethylene-foam isolation, which
is held by two 2 mm thick steel layers
(constructional data from Messer--Griesheim).
The simulated setup consists of natural Ge detectors
integrated into a holder system of high molecular polyethylene.
\subsection{Photon flux from the surroundings}
We simulated the influence of the photon flux with energies between
0 -- 3 MeV measured in hall C of the Gran Sasso laboratory
\cite{arpesella92}. This measurement is in good agreement with photon flux
calculations by the Borexino Collaboration \cite{borexprop}.
The main contributions are given in table \ref{photonGS}.
\begin{table}[htb]
\begin{center}
\setcounter{mpfootnote}{0}
\vskip0.3cm
\centering
\begin{tabular}{lcc}
\hline
Isotope & Energy [keV]&Flux [m$^{-2}$d$^{-1}$]\\
\hline
$^{40}$K & 1460 & 3.8$\times10^{7}$\\
\hline
$^{214}$Pb & 295.2 & 0.8$\times10^{7}$\\
$^{214}$Pb & 352 & 1.8$\times10^{7}$\\
$^{214}$Bi & 609.3 & 2.9$\times10^{7}$\\
$^{214}$Bi & 1120.3 & 1.4$\times10^{7}$\\
$^{214}$Bi & 1764.5 & 1.7$\times10^{7}$\\
\hline
$^{208}$Tl & 2614.5 & 1.35$\times10^{7}$\\
\hline
\end{tabular}
\caption{Simulated components of the gamma ray flux from natural
radioactivity in the Gran Sasso Laboratory (from \cite{arpesella92}).}
\label{photonGS}
\end{center}
\end{table}
\subsubsection{Intermediate size detector (4$\times$4 m)}
The dimensions of the cylindrical tank
are dictated by the photon flux measured in the Gran Sasso
laboratory and the radiopurity of the tank walls (made of stainless
steel).
With a diameter and a height of 4 m for the liquid shielding, the
contribution from the surrounding gammas is about 70 counts/(kg y keV)
in the energy region 11 -- 100 keV.
This is almost 4 orders of magnitude higher than the goal of 10$^{-2}$
events/(kg y keV) of the GENIUS experiment.
An alternative would be to use an additional outer water shielding.
In this case however, the
limitations of the tank dimensions are given by the radiopurity of the
steel walls. Assuming an U/Th contamination of
5$\times$10$^{-9}$ g/g for the steel, as measured by the BOREXINO
collaboration, a count rate of about 1.5 counts/(kg y keV) results from this
component.
This again is too high by more than 2 orders of magnitude.
For the assumed steel radiopurity of 5$\times$10$^{-9}$ g/g, the
minimal allowed tank
dimensions are $\sim$ 8$\times$8 m (the maximal allowed count rate due to
this component was assumed to be 0.3$\times10^{-3}$ counts/(kg y keV)
in the energy
region between 11~keV and 100~keV); conversely, for a tank
size of 4$\times$4 m, an unrealistic contamination level of the order of
10$^{-11}$ g/g for the steel is required.
It has been suggested to set up an intermediate size detector
in order to prove the predictions of the concept in a first step,
i.e. to prove that no unexpected background components appear, which
would dramatically decrease the sensitivity with respect to the
predictions. However, we see no sense in such an intermediate step,
since its cost would be only negligibly smaller than that of the full
setup, while increasing the total cost of the project by about a factor
of two. A test of the proposed setup with a smaller tank as a first
step therefore seems unreasonable.
\subsubsection{Full size detector}
For GENIUS as a dark matter and neutrinoless double beta decay
detector, a 12$\times$12 m tank is suggested.
The obtained count
rate from the external gammas in the energy region 11 -- 100 keV is
4$\times$10$^{-3}$ counts/(kg y keV).
However, to measure the solar pp- and $^7$Be neutrino flux, a tank
size of 13$\times$13 m is needed (with a count rate from external
gammas of
9$\times$10$^{-4}$ counts/(kg y keV) below 260 keV).
\subsection{Neutron flux from the surroundings}
We simulated the measured neutron flux \cite{arp} in the Gran Sasso
laboratory.
The 2 m polyethylene foam isolation ($\rho$ = 0.03 g cm$^{-3}$) around
the nitrogen tank reduces
the neutron flux for energies below 1 keV by more than 5 orders of
magnitude. Only about 3\% of neutrons with energies between 1 keV
and 2.5 MeV will pass the polyethylene isolation, whereas for energies
between 2.5 and 15 MeV the overall flux is reduced by about 40\%.
The neutron flux reaching the tank can be reduced by another
two orders of magnitude by doping the polyethylene foam isolation with
about 1.4 t of boron.
The flux of the $^7$Li deexcitation gamma rays from
the reaction n~+~$^{10}$B$\rightarrow~\alpha$~+~$^7$Li*, with an
energy of 0.48 MeV, would be too low
to reach the inner part of the liquid shielding.
After the first meter of liquid nitrogen the total neutron flux is reduced
by another 4--5 orders of magnitude, therefore we simulated the neutron capture
reactions randomly distributed in the first meter of the nitrogen shielding.
With the conservative assumption that all neutrons reaching the nitrogen are thermalized and
captured by the reactions $^{14}$N(n,p)$^{14}$C$^{*}$ and
$^{14}$N(n,$\gamma$)$^{15}$N$^{*}$,
a total of 4.4$\times$10$^{7}$
neutron capture reactions per year have to be taken into account.
The relevant contribution to the background comes from the
deexcitation of the $^{14}$C$^{*}$ and $^{15}$N$^{*}$ nuclei.
The contribution of the $\beta$--decay of $^{14}$C nuclei in the liquid nitrogen
is negligible, since only low energy electrons (E$_{\beta max}$ = 156
keV) are emitted and the decay probability is very low due to the long
half life (10$^{-4}$ per year).
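The quoted decay probability is simply $\ln 2/T_{1/2}$ (our explicit check):
\begin{align*}
\frac{\ln 2}{T_{1/2}}=\frac{0.693}{5.7\times 10^{3}\,{\rm y}}\approx
1.2\times 10^{-4}\,{\rm y^{-1}}.
\end{align*}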
Using the assumptions of a 12$\times$12m tank and a 2m isolation
around it,
the mean count rate in the low--energy region due to neutron capture
reactions would be about 4$\times10^{-4}$ counts/(kg y keV).
\subsection{Activities induced by muons}
The muon flux in the Gran Sasso laboratory was measured to be
$\phi$$_{\mu}$=2.3$\times$10$^{-4}$ m$^{-2}$s$^{-1}$ with a mean
energy of $\bar{E}_{\mu}$=200 GeV \cite{arpesella92}.
We simulated the effect of muon-induced showers in the liquid
nitrogen. With the aid of a muon veto in form of scintillators or gas counters
on top of the tank, the total induced background can be drastically reduced.
Here we assumed a veto efficiency of 96\% as measured in a more
shallow laboratory \cite{heusser91}. The count rate due to muon
induced showers in the low--energy
region is about 2$\times$10$^{-3}$ counts/(kg y keV).
This can be further improved using the anticoincidence power of the
Ge detectors among each other. For example, for 300 Ge detectors (1
ton), the count rate reduces to 7.2$\times$10$^{-6}$ counts/(kg y
keV) in the energy region below 260 keV.
Besides muon showers, we have to consider muon-induced nuclear
disintegration and interactions due to secondary neutrons generated in the above
reactions.
\subsubsection{Neutrons generated by cosmic muons}
The muon-induced production of neutrons can be approximated by
A$_{n} \sim$ 3.2$\times$10$^{-4}$ (g$^{-1}$ cm$^{2}$), due to the
$<E>^{0.75}$ dependence of the number of neutrons on the mean muon
energy \cite{myonneutrons}.
With the geometry of the tank h = 12 m, r = 6 m, the density of
nitrogen $\rho$ = 0.808 g/cm$^{3}$ and the cited flux, a mean
production rate of $\phi _{n\mu}$ = 2.5$\times$10$^{5}$ neutrons/year in
the whole vessel is obtained.
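This number can be reproduced from the quantities above (our check,
assuming for simplicity vertical incidence through the full tank height):
\begin{align*}
\phi_{n\mu}&\approx \phi_{\mu}\cdot \pi r^{2}\cdot \rho h\cdot A_{n}
=2.3\times 10^{-8}\,{\rm cm^{-2}s^{-1}}\times 1.1\times 10^{6}\,{\rm cm^{2}}
\times 970\,{\rm g\,cm^{-2}}\times 3.2\times 10^{-4}\,{\rm g^{-1}cm^{2}}\\
&\approx 8\times 10^{-3}\,{\rm s^{-1}}\approx 2.5\times 10^{5}\,{\rm y^{-1}}.
\end{align*}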
Table \ref{neutrons} gives the neutron-induced reactions
in the liquid nitrogen for neutron energies $<$~20~MeV (based on
all reactions found in \cite{McLane88}).
\begin{table}[htb]
\vskip0.3cm
\centering
\begin{tabular}{lcc}
\hline
Reaction & T$_{1/2}$ of the product& Decay energy\\
\hline
$^{14}$N(n,p)$^{14}$C & T$_{1/2}$=5.7$\times$10$^3$y& E$_{{\beta}^-}$=0.16 MeV\\
$^{14}$N(n,$\gamma$)$^{15}$N & stable& \\
$^{14}$N(n,2n)$^{13}$N & T$_{1/2}$= 9.96 m& E$_{{\beta}^+}$=1.2 MeV\\
$^{14}$N(n,$\alpha$)$^{11}$B & stable&\\
$^{14}$N(n,t)$^{12}$C & stable&\\
$^{14}$N(n,2$\alpha$)$^{7}$Li &stable&\\
\hline
\end{tabular}
\caption{Neutron interactions in the liquid nitrogen for neutron
energies $<$ 20 MeV.}
\label{neutrons}
\end{table}
All of the produced nuclides are stable or short-lived with the exception
of $^{14}$C and $^{13}$N. The contribution of gammas from the excited
$^{14}$C$^*$ nucleus corresponds to 10$^{-3}$ counts/(kg y keV) between
0 -- 100 keV. The contribution from the $\beta^{-}$-- particles
with E$_{max}$ = 156 keV is negligible due to the low decay probability
of $^{14}$C. The production rate of $^{13}$N is
1$\times$10$^6$ atoms per year in the whole tank.
From 10$^6$ simulated positrons with E$_{max}$ = 1.2 MeV ($\beta
^+$-decay), corresponding to an
exposure of about 1 year, only one event could be observed in the detectors.
Therefore, the contribution of $^{13}$N to the background will be negligible.
In the Germanium material, 2.3$\times$10$^2$ neutrons/(y ton) due to
muon interactions are produced. For the low energy region the
most significant reaction is the $^{70}$Ge(n,$\gamma$)$^{71}$Ge capture
reaction. $^{71}$Ge decays through EC (100\%) with T$_{1/2}$ = 11.43 d
and Q$_{\rm EC}$ = 229.4 keV \cite{firestone} and can not be discriminated by the
anticoincidence method. The simulation of this decay yields
5$\times$10$^{-4}$ counts/(kg y keV) in the energy region below 260 keV.
\subsubsection{Negative muon capture}
A negative muon stopped in the liquid shielding can be captured by a
nitrogen nucleus, leading to one of the reactions that are listed in
table ~\ref{myonspall}.
Estimations of the number of stopping muons in the nitrogen tank
\cite{bergamasco82,gaisser,lohman} lead to 86 stopped muons per day
(for a 12$\times$12 m tank).
The rates of decaying and captured muons are shown in table
\ref{mcapture}.
\begin{table}[htb]
\vskip0.3cm
\centering
\begin{tabular}{lccc}
\hline
Reaction & T$_{1/2}$ & Decay energy & Rate [y$^{-1}$]\\
\hline
$^{14}$N($\mu$,$\nu_{\mu}$)$^{14}$C & T$_{1/2}$=5.7$\times$10$^3$y& E$_{{\beta}^-}$=0.16 MeV&584\\
$^{14}$N($\mu$,$\nu_{\mu}\alpha$)$^{10}$Be & T$_{1/2}$=1.6$\times$10$^{10}$y& E$_{{\beta}^-}$=0.6 MeV&29\\
$^{14}$N($\mu$,$\nu_{\mu}$p)$^{13}$B & T$_{1/2}$=17.33ms& E$_{{\beta}^-}$=13.4 MeV&116\\
$^{14}$N($\mu$,$\nu_{\mu}$n)$^{13}$C & stable&&3798\\
$^{14}$N($\mu$,$\nu_{\mu}\alpha$n)$^{9}$Be & stable&&17\\
$^{14}$N($\mu$,$\nu_{\mu}\alpha$p)$^{9}$Li & T$_{1/2}$=178ms& E$_{{\beta}^-}$=13.6 MeV&0.6\\
$^{14}$N($\mu$,$\nu_{\mu}$2n)$^{12}$C & stable&&1168\\
$^{14}$N($\mu$,$\nu_{\mu}$3n)$^{11}$C & T$_{1/2}$=20.38m&
E$_{{\beta}^+}$=0.96 MeV&292\\
$^{14}$N($\mu$,$\nu_{\mu}$4n)$^{10}$C & T$_{1/2}$=19.3s& E$_{{\beta}^+}$=1.9 MeV&117\\
\hline
\end{tabular}
\caption{Spallation reactions from muon capture.}
\label{myonspall}
\end{table}
\begin{table}[htb]
\begin{center}
\setcounter{mpfootnote}{0}
\vskip0.3cm
\centering
\begin{tabular}{lcc}
\hline
Muon flux & 124 h$^{-1}$&\\
\hline
Stopped muons & 86 d$^{-1}$&\\
\hline
Decaying muons& $\mu ^+$ & $\mu ^-$\\
& 50 d$^{-1}$& 20 d$^{-1}$\\
\hline
Captured muons &$\mu ^+$ & $\mu ^-$\\
& 0 & 16 d$^{-1}$\\
\hline
\end{tabular}
\caption{Muon flux, stopped, captured and decaying muons in the
nitrogen shielding of the Genius detector (for a 12$\times$12 m tank).}
\label{mcapture}
\end{center}
\end{table}
The derived production rates \cite{charalambus} for the various isotopes
are listed in table \ref{myonspall}.
Only the isotopes
$^{14}$C, $^{10}$Be, $^{11}$C and $^{10}$C
cannot be discriminated by
muon anticoincidence, since their individual half-lives are too long.
$^{14}$C and $^{10}$Be will not be seen in our detector due to
their very low decay probabilities (10$^{-4}$ and 10$^{-10}$ per year)
and low production rates.
The contribution of $^{10}$C and $^{11}$C,
with a production rate of 117 atoms/year and 292 atoms/year,
respectively, in the whole nitrogen tank, will be negligible.
The gamma rays from the excited nuclei produced in all the reactions can
be discriminated by anticoincidence with the muon shielding on the top
of the tank.
\subsubsection{Inelastic muon scattering}
Another way of producing radioactive isotopes in the liquid
shielding is via electromagnetic nuclear reactions of muons through
inelastic scattering off nitrogen nuclei:
$\mu$~+~N~$\rightarrow$~$\mu'~$~+~X$^*$. The only resulting isotopes
with half-lives $>$ 1 s
are $^{14}$N($\gamma$,n)$^{13}$N, with T$_{1/2}$=9.96 m and
$^{14}$N($\gamma$,tn)$^{10}$C, with T$_{1/2}$=19.3 s.
The production rate per day for one isotope can be written as
\cite{OConnell88}:
\begin{align*}
R({\rm d^{-1}})=6\times 10^{-2}\,\phi_{\mu}({\rm d^{-1}m^{-2}})\,
N_{T}({\rm kt})\,\sigma_{\mu}(\mu{\rm b})/A,
\end{align*}
where $\phi _{\mu}$ is the flux of muons on the detector, N$_{T}$ is the
number of target nuclei, $\sigma_{\mu}$ the reaction cross section and
A the atomic weight of the target nucleus. For our detector this
yields R(y$^{-1}$)= 45$\times$$\sigma_{\mu}$($\mu$b).
With typical reaction cross sections of a few hundred $\mu$b \cite{OConnell88,Napoli73},
we obtain a production rate of (5--10)$\times$10$^3$ atoms per year.
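Explicitly, inserting typical values $\sigma_{\mu}\approx 110$--$220\;\mu$b
into this scaling gives $R({\rm y^{-1}})\approx 45\times(110$--$220)
\approx (5$--$10)\times 10^{3}$, i.e. the quoted production rate.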
A simulation of an activation time of ten years for both isotopes
yields negligible count rates in comparison to contributions
from other background components.
\subsection
[Intrinsic impurities]
{Intrinsic impurities in the nitrogen shielding, Ge crystals,
holder system and steel vessel}
The assumed impurity levels for the liquid nitrogen
are listed in table ~\ref{forderung}.
For the $^{238}$U and $^{232}$Th decay chains they have been
measured by the Borexino collaboration \cite{borex} for their liquid scintillator.
Due to the very high cleaning efficiency of fractional distillation,
it is conservative to assume
that these requirements will also be fulfilled for liquid nitrogen.
The $^{238}$U and $^{232}$Th decay chains were simulated under the assumption that
the chains are in secular equilibrium.
The count rate due to $^{238}$U, $^{232}$Th and $^{40}$K
contaminations of the liquid nitrogen is about 1.2$\times 10^{-3}$
counts/(kg y keV) in the energy region below 100 keV.
\begin{table}[htb]
\begin{center}
\begin{tabular}{lcc}
\hline
Source & Radionuclide & Purity \\
\hline
Nitrogen & $^{238}$U & 3.5$\times$10$^{-16}$g/g \\
& $^{232}$Th & 4.4$\times$10$^{-16}$g/g \\
& $^{40}$K & 1$\times$10$^{-15}$g/g \\
\hline
Steel vessel & U/Th & 5$\times$10$^{-9}$g/g \\
\hline
\end{tabular}
\caption{Assumed contamination levels for
the liquid nitrogen and steel vessel. }
\label{forderung}
\end{center}
\end{table}
New measurements of the $^{222}$Rn contamination of freshly
produced liquid nitrogen yield 325 $\mu$Bq/m$^{3}$ \cite{rau99}.
After about a month it is reduced to about 3~$\mu$Bq/m$^{3}$ (T$_{1/2}$ =
3.8 days).
Such a level could be maintained if the evaporated nitrogen is always
replaced by Rn--pure nitrogen, previously stored in an underground
facility or if a nitrogen cleaning system is used.
It is planned to install a nitrogen recycling device (through condensation)
inside the tank. This would reduce the Rn contamination to a
negligible level.
Surface emanations are reduced to a negligible level for cooled
surfaces in direct contact with the liquid nitrogen.
The mean count rate
from the contamination of $\;^{222}$Rn in the
region below 100 keV is 3$\times$10$^{-4}$ counts/(kg y keV) assuming an
activity of 3 $\mu$Bq/m$^{3}$ in the liquid nitrogen.
For the intrinsic impurity concentration in Ge crystals we can
give only upper limits from measurements with the detectors of the
Heidelberg--Moscow experiment.
We see a clear $\alpha$--peak in two of the enriched detectors at 5.305
MeV, and an indication for the same peak in two other detectors. It
originates from the decay of $^{210}$Po (which decays with 99\%
through an $\alpha$--decay to $^{206}$Pb) and is a sign for a $^{210}$Pb
contamination of the detectors. However, it is very unlikely that the
contamination is located inside the Ge-crystals, most probably it is located on
the crystals surface at the inner contact.
Using three Ge detectors, we
derive an upper limit at 90\% CL (after 19 kg y counting statistics)
of 1.8$\times$10$^{-15}$g/g for $^{238}$U and 5.7$\times$10$^{-15}$g/g
for $^{232}$Th.
Assuming these impurity concentrations throughout the whole Ge
detector volumes, our
simulations yield a count rate of about 10$^{-2}$ counts/(kg y keV)
for both $^{238}$U and $^{232}$Th decay chains.
It is however safe to assume that these upper
limits are very conservative and that the true contamination
of HPGe is much lower. Special attention will have to be paid in order
to avoid surface contaminations of the crystals.
An important factor in the background spectrum
is the effect of the holder-system.
For the simulation we assumed that polyethylene can be obtained
with an impurity concentration of 10$^{-13}$g/g for the U/Th decay
chains (this is a factor of 100 worse than the values reached at present for the
organic liquid-scintillator by the Borexino collaboration \cite{borex}).
Encouraging are the results already achieved by the SNO experiment
\cite{sno}, which developed an acrylic with current limits on
$^{232}$Th and $^{238}$U contamination of 10$^{-12}$g/g.
Since it is not yet sure that such a low contamination level will be
reached for polyethylene, we are currently testing also other materials.
Assuming the above impurity level with the simulated geometry (130 g
of material per detector) a count
rate of $\sim$
8$\times$10$^{-4}$ counts/(kg y keV) in the energy region below 100
keV from this component is reached.
This result will be further improved by using the newly
developed holder design with a minimized amount of material.
In case of using about 10 g of material per detector, a contamination
level of 10$^{-12}$g/g for the holder system material would suffice.
For the steel vessel an impurity concentration of 5$\times$10$^{-9}$
g/g for U/Th was assumed (as measured in \cite{borexprop}).
The contribution in the energy region 0 -- 100 keV is
1.5 $\times$ 10$^{-5}$ counts/(kg y keV). Assuming the same
contamination for the polyethylene foam isolation as for the steel,
the contribution of both materials to the background is negligible (for a
12$\times$12 m tank).
\subsection{Cosmic activation of the germanium crystals}
We have estimated the cosmogenic production rates of radioisotopes
in the germanium crystals with the $\Sigma$ programme \cite{JensB}.
The programme was developed to calculate cosmogenic activations of
natural germanium, enriched germanium and copper.
It was demonstrated that it can reproduce the measured cosmogenic
activity in the Heidelberg--Moscow experiment \cite{HM97}
within about a factor of two \cite{BerndM}.
Assuming a production plus transportation time of 10 days at sea level
for the natural Ge detectors (with the exception of $^{68}$Ge, where the
saturation activity is assumed), and a deactivation time of three years,
we obtain the radioisotope concentrations listed in table \ref{ge_cosmo}.
All other produced radionuclides have much smaller activities due to their shorter
half-lives. The required short production time has been guaranteed by detector production companies.
\begin{table}[htb]
\begin{center}
\setcounter{mpfootnote}{0}
\vskip0.3cm
\centering
\begin{tabular}{lccc}
\hline
Isotope & Decay mode, & Energy [keV]& A\\
& T$_{1/2}$ & & $\mu$Bq/ \\
& & &kg\\
\hline
$^{49}$V & EC, 330 d & no $\gamma$, E (K$_{\alpha}$ $^{49}$Ti)=4.5 & 0.17\\
$^{54}$Mn & EC, 312.2 d &E$_{\gamma}$=1377.1 E (K$_{\alpha}$ $^{54}$Cr)=5.99 & 0.20\\
$^{55}$Fe & EC, 2.73 a & no $\gamma$, E (K$_{\alpha}$ $^{55}$Mn)=5.9 & 0.31\\
$^{57}$Co & EC, 271.3 d & 136.5 (99.82\%)E (K$_{\alpha}$ $^{57}$Fe)=7.1 & 0.18\\
$^{60}$Co & $\beta ^-$, 5.27 a & 318 (99.88\%), E$_{\gamma 1,2}$=1173.24, 1332.5&0.18\\
$^{63}$Ni & $\beta^-$, 100.1 a & E$_{\beta^-}$= 66.95 no ${\gamma}$ &0.01 \\
$^{65}$Zn & EC, 244.3 d & E$_{\gamma}$=1115.55 (50.6\%),E (K$_{\alpha}$ $^{65}$Cu)=8.9 & 1.14\\
$^{68}$Ge & EC, 288 d & E (K$_{\alpha}$ $^{68}$Ga)=10.37,
Q$_{EC}$($^{68}$Ga)= 2921 &101.4\\
\hline
\end{tabular}
\caption{Cosmogenic produced isotopes in the Ge crystals for an
exposure time at sea level of 10 days and for 3 years deactivation
time (for $^{68}$Ge an initial saturation activity was assumed).}
\label{ge_cosmo}
\end{center}
\end{table}
The count rate below 11 keV is dominated by X--rays from
the decays of $^{68}$Ge, $^{49}$V,
$^{55}$Fe and $^{65}$Zn (see table \ref{ge_cosmo}).
Due to their strong contribution, the energy threshold of GENIUS would be
at 11 keV, which is still acceptable (as can be seen from figure \ref{limits}).
Between 11 keV and 70 keV the contribution from $^{63}$Ni dominates
due to the low Q--value (66.95 keV) of the $\beta^-$--decay.
$^{68}$Ge plays a special role. Since it cannot be
extracted by zone melting like all other, non--germanium isotopes,
the starting activity would be in equilibrium with the production
rate.
After 3 years of deactivation below ground, the activity is about
100 $\mu$Bq/kg.
With a half--life of 288 d it will dominate the other background
components (with about 4$\times$10$^{-2}$ events/kg y keV below 100
keV). However, with such a background level one already would be able to
test a significant part of the SUSY predicted parameter space for
neutralinos (see Fig. \ref{limits}).
After 5 years of deactivation the activity reduces to 14.7$\mu$Bq/kg
(5.3$\times$10$^{-3}$ events/kg y keV below 100 keV).
At this point one can in addition reduce the contribution of $^{68}$Ga by
applying a time analysis method. $^{68}$Ge decays through EC (100\%)
to the ground state of $^{68}$Ga. The resulting line at 10.37 keV
(88\% K-capture) would be observed in GENIUS with an energy
resolution of about 1 keV. Thus one could use an event with an energy
of 10.37 keV as a trigger for subsequent events in the same detector
during a few half-lives of $^{68}$Ga (T$_{1/2}$ = 68.1 m),
removing up to 88\% of the $^{68}$Ga--induced events.
With about 2.5 events per day and detector one would lose only
30\% of the measuring time.
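The quoted numbers can be checked with a short estimate. The following
sketch (our illustration; the trigger rate and the $^{68}$Ga half life are
taken from the text, while the three--half--life veto window is an
assumption) computes the resulting dead time per detector:
\begin{verbatim}
# Dead-time estimate for the 68Ge/68Ga time-analysis cut
# (illustrative sketch, not part of the simulation code).
T_HALF_GA68_MIN = 68.1   # 68Ga half life [min], from the text
TRIGGERS_PER_DAY = 2.5   # 10.37 keV triggers per detector and day
VETO_HALF_LIVES = 3      # assumed veto window, in 68Ga half lives

veto_window_days = VETO_HALF_LIVES * T_HALF_GA68_MIN / (60 * 24)
dead_time = TRIGGERS_PER_DAY * veto_window_days  # ignores window overlap
in_window = 1 - 0.5 ** VETO_HALF_LIVES  # subsequent decays inside window

print(f"veto window     : {veto_window_days * 24:.1f} h")
print(f"dead time       : {dead_time:.0%} of the measuring time")
print(f"decays in window: {in_window:.0%}")
\end{verbatim}
This simple estimate slightly overestimates the loss (overlapping veto
windows are counted twice) and reproduces the order of the quoted
$\sim$30\%.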
Another solution could be to process the germanium ore directly
in an underground facility or to use high purity germanium which
has already been stored for several years in an underground laboratory.
Figure \ref{cosmo} shows the sum and the single contributions from the
different isotopes.
\begin{figure}[h]
\epsfxsize=12cm
\epsfbox{cosmosum.eps}
\caption{Background originating from cosmic activation of the Ge
crystals at sea level with 10 days exposure and 3 years
deactivation. For
$^{68}$Ge an initial saturation activity was assumed.}
\label{cosmo}
\end{figure}
The sum of all contributions from the cosmogenic activation of the Ge
crystals is 5.2$\times$10$^{-2}$ counts/(kg y keV) between 11 -- 100
keV, for an activation time of 10 days at the Earth's surface and a
deactivation time of three years. Under the mentioned conditions, after 5 years of deactivation the background would be reduced to less than 1$\times$10$^{-2}$ counts/(kg y keV), allowing for the sensitivity for dark matter shown in Fig. \ref{limits}.
Since the cosmogenic activation of the Ge crystals will be the dominant background component in the
low-energy region, special attention to short crystal exposure times at
sea level is essential.
The best solution would be to produce the
detectors below ground and to apply strong shielding during the
transportation.
The two--neutrino accompanied double beta decay of $^{76}$Ge is not
negligible in spite of the low abundance (7.8\%) of this isotope in
natural germanium. The contribution to the background after three
years of measurement is shown in figure \ref{specall}. Due to the
already high statistics reached in the Heidelberg--Moscow experiment
\cite{HM97}, the half life and spectral form of the decay are well
known and a subtraction of this part raises no difficulties.
The statistical error of the subtraction is not shown in fig.~\ref{limits}.
The cosmogenic activation of the Ge crystals in the Gran Sasso laboratory
is negligible in comparison to the assumed activation scenario at sea level.
\subsection{Cosmic activation of the nitrogen at sea level}
An estimation of the production rates of long--lived
isotopes in the nitrogen at sea level reveals the importance
of $^7$Be, $^{10}$Be, $^{14}$C and $^3$H.
The neutron flux at sea level is 8.2$\times$10$^{-3}$cm$^{-2}$s$^{-1}$
for neutron energies between 80 MeV and 300 MeV \cite{allkofer}.
Since we did not find measurements of the cross sections of
neutron-induced spallation reactions in nitrogen,
we assumed that at high neutron energies
(10$^2$--10$^4$ MeV) the cross sections are similar to the proton-induced ones.
For the reaction $^{14}$N(n,t$\alpha$n)$^7$Be the cross section
is (9.0$\pm$2.1) mb at E$_p$ = 450 MeV, (9.3$\pm$2.1) mb at
E$_p$ = 3000 MeV by \cite{reyss} and (13.3$\pm$1.3) mb at E$_p$ = 1600 MeV
by \cite{michel}.
For the reaction $^{14}$N(n,$\alpha$p)$^{10}$Be
the cross sections are (1.5$\pm$0.4) mb at E$_p$ = 450 MeV,
(2.6$\pm$0.6) mb at E$_p$ = 3000 MeV \cite{reyss} and (1.75$\pm$0.11)
mb at E$_p$ = 1600 MeV \cite{michel}.
Taking 10 mb for the $^7$Be channel we obtain a production
rate of 3.3$\times$10$^9$d$^{-1}$ in the whole tank.
For a realistic sea-level exposure of 10 days after production by
fractional distillation, this corresponds to 4$\times$10$^8$ decays per day.
The simulation of this activity yields a count rate of about
10 events/(kg y keV) in the energy region between 0 -- 100 keV.
This is three orders of magnitude higher than the required level.
However, a large fraction of the $^7$Be will be removed
from the liquid nitrogen during the cleaning process for Rn; in
addition, through underground storage (T$_{1/2}$=53.3 d), the contribution of
$^7$Be will be reduced to less than 10$^{-2}$ events/(kg y keV).
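For perspective (our estimate): by decay alone, a reduction by three
orders of magnitude corresponds to $10^{3}\approx 2^{10}$, i.e. about
$10\,T_{1/2}\approx 1.5$ y of storage; in practice most of the reduction
is carried by the cleaning process.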
For $^{10}$Be, with $\sigma$= 2 mb, the production rate is
6.6$\times$10$^{8}$d$^{-1}$, which is negligible due to the long
half life of T$_{1/2}$=1.6$\times$10$^6$ y.
Tritium in nitrogen can be produced in the following reactions:
$^{14}$N(n,t)$^{12}$C, $^{14}$N(n,t2$\alpha$)$^{4}$He,
$^{14}$N(n,t$\alpha$n)$^{7}$Be and $^{14}$N(n,tn)$^{11}$C.
The cross section for the production by $^{14}$N(n,t)$^{12}$C was
measured to be 40 mb \cite{Kincaid}. For a rough estimation, we
assumed the cross sections of the other reactions to be 10 mb, as for
the production of $^{7}$Be. The total production rate of tritium
corresponds to 2.3$\times$10$^{10}$d$^{-1}$. With T$_{1/2}$=12.33 y, the
activity after 10 days exposure at sea level would be
3.5$\times$10$^{7}$d$^{-1}$. We simulated 10$^{10}$ decays randomly
distributed in the nitrogen tank. No events were
detected mainly due to the absorption in the dead layer of the p--type
Ge detectors.
The muon flux at sea level is 1.6$\times$10$^7$m$^{-2}$d$^{-1}$.
The only long--lived isotopes which are produced by inelastic muon
scattering are $^{13}$N and $^{10}$C, with a production rate of about
3.7$\times$10$^7$ atoms/day (taking $\sigma$ = 500 $\mu$b for both reactions).
However, $^{13}$N and $^{10}$C are of no relevance due to the
short half-lives of 9.96 m and 19.3 s, respectively.
The isotopes produced through negative muon capture with half-lives $>$
1 s are $^{14}$C, $^{10}$Be, $^{11}$C and $^{10}$C (see also table
\ref{myonspall}). Again, the numbers of $^{11}$C and $^{10}$C
isotopes are soon reduced to a negligible level due to their short
half-lives. The production rate for the whole tank for $^{14}$C is
8$\times$10$^{6}$d$^{-1}$ and 4$\times$10$^{5}$d$^{-1}$ for
$^{10}$Be, which have to be added to the production rates by neutron
capture or spallation reactions.
For the production of $^{14}$C due to the $^{14}$N(n,p)$^{14}$C capture reaction,
three neutron sources at sea level are relevant.
The flux of secondary cosmic ray neutrons with energies between a few
keV and 20 MeV is
about 2$\times$10$^{-2}$cm$^{-2}$s$^{-1}$\cite{allkofer}. These neutrons penetrate
the wall of the transportation tank and are captured in the liquid
nitrogen. For a tank surface of 678 m$^2$, about
1.3$\times$10$^5$ s$^{-1}$ neutrons are expected.
The second component are neutrons produced in fast neutron
spallation reactions in the liquid nitrogen. The production rate of these
neutrons is 2$\times$10$^{4}$ s$^{-1}$ in the nitrogen tank
\cite{lal}. The third component are neutrons produced in muon
reactions, which correspond to 0.85$\times$10$^{3}$ s$^{-1}$ \cite{lal}.
Thus the total neutron flux at sea level is about 1.5$\times$10$^{5}$ s$^{-1}$
(the sum of the three components above).
Assuming that every neutron is captured in the nitrogen, yielding
a $^{14}$C nucleus, the production rate of $^{14}$C is about
1.3$\times$10$^{10}$d$^{-1}$. For a production and transportation
time of ten days, the simulation yields
less than 10$^{-4}$ counts/(kg y keV) in the relevant energy
region. Through the purification of the nitrogen this
contribution will be further reduced.
\subsection{Sum spectrum from simulations}
In table \ref{backlist} the components discussed so far are listed
and summed up.
Not included in the table are the contributions from the intrinsic
impurities in the Ge crystals and from the $^{7}$Be activation of the
liquid nitrogen during its transportation at sea level.
For the Ge--crystals we have only very conservative upper values for
their true contamination, which is expected to be much lower (see
Subsection 3.2.1).
Regarding the $^{7}$Be contamination of liquid nitrogen,
the cleaning efficiency of the liquid nitrogen
should already be high enough in order to
reduce this contribution to a negligible level.
Furthermore it is planned to use a recycling device for the evaporated
nitrogen, thus the activation with $^7$Be will be reduced further by storage
(T$_{1/2}$ =53 d).
\begin{table}[htb]
\begin{center}
\begin{tabular}{lcc}
\hline
Source & Component & [counts/(kg y keV)] \\
&&11--100 keV\\
\hline
Nitrogen & $^{238}$U & 7$\times$10$^{-4}$ \\
intrinsic & $^{232}$Th & 4$\times$10$^{-4}$ \\
& $^{40}$K & 1$\times$10$^{-4}$ \\
& $^{222}$Rn & 3$\times$10$^{-4}$ \\
N activation & $^{14}$C & 1$\times$10$^{-4}$ \\
\hline
Steel vessel & U/Th & 1.5$\times$10$^{-5}$ \\
\hline
Holder system & U/Th & 8$\times$10$^{-4}$ \\
\hline
Surrounding & Gammas & 4$\times$10$^{-3}$\\
& Neutrons & 4$\times$10$^{-4}$\\
& Muon shower & 2$\times$10$^{-3}$ \\
& $\mu$ $\rightarrow$ n & 1$\times$10$^{-3}$ \\
& $\mu$ $\rightarrow$ capture & $<<$1$\times$10$^{-4}$ \\
\hline
Cosmogenic & $^{54}$Mn & 3$\times10^{-3}$\\
activities & $^{57}$Co & 1$\times10^{-4}$\\
in the crystals & $^{60}$Co & 4$\times10^{-3}$\\
& $^{63}$Ni & 6$\times10^{-3}$\\
& $^{65}$Zn & 1.5$\times10^{-3}$\\
& $^{68}$Ge & 3.7$\times10^{-2}$\\
\hline
Total & & 6.1$\times$10$^{-2}$ \\
\hline
\end{tabular}
\end{center}
\caption{Summation of background components in the region 11 keV--100 keV.}
\label{backlist}
\end{table}
Assuming a background as stated above, we will achieve
a mean count rate in the interesting
region for dark matter search of about 6.1$\times$10$^{-2}$ events/(kg y keV). This
means a further reduction of background in comparison to our
best measurement (about 20 counts/(kg y keV) below 100 keV \cite{HM98})
by more than two orders of magnitude.
This count rate will further improve after another 2 years of running time
(decay of $^{68}$Ge), to reach a level of 1$\times$10$^{-2}$ events/(kg y keV).
These results have been confirmed by an independent simulation for the GENIUS project by the Kiev group \cite{kiev98}.
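The total of table \ref{backlist} can be reproduced by summing the
individual components (a trivial cross-check of ours; the values are
copied from the table, and the negligible $\mu$-capture entry,
$<<$1$\times$10$^{-4}$, is omitted):
\begin{verbatim}
# Sum of the simulated background components, 11--100 keV,
# in counts/(kg y keV); values copied from the table.
components = {
    "nitrogen intrinsic (U, Th, K, Rn)": 7e-4 + 4e-4 + 1e-4 + 3e-4,
    "nitrogen activation (14C)":         1e-4,
    "steel vessel (U/Th)":               1.5e-5,
    "holder system (U/Th)":              8e-4,
    "surroundings (gammas, n, muons)":   4e-3 + 4e-4 + 2e-3 + 1e-3,
    "cosmogenic activities (crystals)":  3e-3 + 1e-4 + 4e-3
                                         + 6e-3 + 1.5e-3 + 3.7e-2,
}
print(f"total: {sum(components.values()):.1e} counts/(kg y keV)")  # 6.1e-02
\end{verbatim}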
In figure \ref{specall} the spectra of individual contributions
and the summed-up total background spectrum are shown.
As mentioned before, the low-energy spectrum is dominated by events
originating from the cosmogenic activation of the Ge crystals at the
Earth's surface. Production of the detectors underground would
significantly reduce this contribution.
\begin{figure}[!tt]
\epsfxsize12cm
\epsfbox{all_sim.eps}
\caption{Simulated spectra of the dominant background sources for
a nitrogen tank of 12 m diameter.
Shown are the contributions from the tank walls, the detector holder
system, from neutron capture in the nitrogen,
from natural
radioactivity and from the $^{222}$Rn contamination of the nitrogen.
The solid line represents the sum spectrum of all the simulated
components (note the different channel binning compared to figure \ref{cosmo}).}
\label{specall}
\end{figure}
\section{The GENIUS Facility: Technical Design}
\subsection{General description}
The GENIUS detector will consist of two concentric stainless steel
tanks with the Ge crystals positioned in the centre. The inner tank
will contain liquid nitrogen as working medium for the Ge crystals
and as shielding from the Gran Sasso rock backgrounds. The outer vessel
will contain the isolation material, which will be doped with Boron
as shielding against the neutron background. The Ge crystals will be
placed on a holder system made of teflon, which can house up to six
layers of 40 crystals each (see Fig. \ref{holders}).
On the top of the tank there will be a clean
room with a lock chamber, a room for the electronics and the data acquisition
system and a muon veto shield.
\begin{figure}[h]
\epsfxsize10cm
\epsfbox{Halter_2.ps}
\caption{Schematic view of possible configurations for the holder
system of the Ge crystals.}
\label{holders}
\end{figure}
\subsection{Technical Data}
Capacity of the inner vessel = 1400000 l liquid nitrogen $\simeq$
95\% of geom. vol.\\
{\bf Outer vessel:}\\
Dimension: $\oslash \times$ H $\simeq$ 14.4 m $\times$ 19 m\\
Roof type: round roof\\
Floor type: flat floor\\
Maximum allowed pressure: 10 mbar/ -2 mbar\\
Working pressure: 5 mbar\\
Design temperature: 77 K for the floor and for 60\% of the outer mantle, 293 K
for the rest of the cylinder and the roof \\
{\bf Inner vessel:}\\
Dimension: $\oslash \times$ H $\simeq$ 12 m $\times$ 13.6 m\\
Roof type: round roof\\
Floor type: flat floor\\
Maximum allowed pressure: 100 mbar/ -2 mbar\\
Working pressure: 50 mbar\\
Design temperature: 77 K \\
Maximum waste gas through heat flow in: 0.27\% / day of the max.
content\\
\subsection{Design}
\begin{figure}[h!]
\centering
\leavevmode\epsfxsize=280pt
\epsfbox{tank4_2.eps}
\caption{\label{confA}Design of the GENIUS tank. The tank
is lowered 4m into the earth.}
\end{figure}
The standing, flat-floor tank is made of an inner and an outer vessel in concentric geometry.
The inner vessel stores the liquid nitrogen, the outer vessel the isolating material.
The isolating material can be removed. The foundation is shielded
by polystyrene. The isolation will be flushed with gaseous nitrogen,
thus preventing the ingress of humidity. The vessels are welded
gas-tight.
Both the inner and the outer vessel are equipped with safety devices against
over- and underpressure.
Figure \ref{confA} displays a schematic view of the GENIUS tank
(the tank is lowered 4 m into the earth).
\subsection{Time schedule of the GENIUS experiment}
\begin{figure}[h!]
\centering
\leavevmode\epsfxsize=350pt
\epsfbox{time_2.eps}
\label{timescale}
\end{figure}
This schedule is valid under the assumptions that the requested space is
provided in the Gran Sasso Laboratory in the near future and that detector
production will start in the year 2000. It has been verified with potential
producers that, technically and logistically, the production of the required
super-low level natural and enriched Ge detectors can be performed within
the given time periods.
\section{Undercutting Mining Game}\label{theo}
In this section, we model the mining game in the presence of undercutting attacks. We proceed with (i) mempools with "sufficient" unconfirmed transactions and (ii) mempools with "limited" unconfirmed transactions. We also allow the undercutter to apply two different safe depths $D=1$ and $D=2$, where it gives up attacking after falling one or two blocks behind the main chain, respectively. For higher $D$, conditions for profitable undercutting become tighter and the attack's overall performance worsens. We offer detailed reasoning in Appendix~\ref{choiced}.
\subsection{Game Definition}\label{gamedesp}
We define the mining game $G=\langle M, A, R\rangle$ as follows:
\begin{asparaenum
\item Players $M=\{M_0, M_1, ..., M_{\eta-1}\}$: here $\eta$ is the total number of miners in the system. Without loss of generality, we label a subset of the miners that have a total of $\beta_H$ mining power as honest; we label a miner with $\beta_U$ mining power as the undercutter; we label the remaining miners as non-undercutter rational miners, and their total mining power is denoted as $\beta_R=1-\beta_H-\beta_U$. Honest miners are treated as one player because they follow the same mining rules and we assume they are informed in the same way. Rational miners are flexible and can shift among chains, with the exception of the undercutter in the course of an attack.
\item Actions $A=$\{undercut(), stay(), shift\_to\_chain()\}: we index chains during a game according to their timestamps after the branching point. Older chains have smaller indices, e.g., the original (main) chain has index $chain_0$. Honest miners always mine honestly and may choose to stay or shift depending on circumstances. Rational miners may choose to undercut an existing chain and start a new chain, or to shift among existing chains. When there are no forks in the system, all miners "stay" on the main chain or "undercut".
\item Utility functions $R=\{u_i\}_{M_i\in M}$: the utility function of a player depends on its type and the total transaction fees it can obtain from the game. To make it more comparable across time and among miners, we consider the fees a miner receives at the end to be its utility from playing the game.
\end{asparaenum}
We allow no miner to own more than 50\% mining power (i.e., $\beta_U<0.5$). We let miners publish their discovered blocks immediately to attract other miners to join.
We assume the mempool to be the same for miners on the same chain, considering that undercutting is not practical if miners have distinct mempools: wealthy transactions an attacker leaves unclaimed may not exist in other miners' mempools in the first place.
This assumption makes the attacker stronger, and we intend to uncover what the attacker can obtain under such advantageous environment settings.
We let miners have knowledge of other miners' types after sufficient observations. We assume miners can approximate the amount of mining power concentrated on a chain based on the block generation time on that chain.
\subsection{Probability of A Chain Winning}\label{winprob}
Before we dive into the model, we describe one key drive for system evolution, the probability of a chain winning. In undercutting, the attacker forks an existing chain by leaving out wealthy transactions.
In the following discussions, we refer to the fork chain as "fork" and the other chain as "main". The "main" might not end up on the final main chain. The relative height of a chain is the number of blocks it has accumulated after the forking point.
Overall, the process proceeds as follows. The undercutter sees a new block appended to the main chain by some other miner. It starts to work on a competing block that leaves out the wealthy transactions appearing in that block. With some probability, it discovers this block faster than the next block on the main chain appears. When the undercutter publishes its block, some rational miners consider shifting to the fork because it leaves more high fee rate transactions that they can benefit from. To model this procedure, we capture the state of the system as a tuple, which we denote as $\vec{S}=(m, n, \vec{F_m}, \vec{F_n}, O, \delta, \lambda_m, \lambda_n)$, where:
\begin{asparaitem}[-
\item $m$ is the relative height of the main chain and $n$ is the relative height of the fork after the forking point;
\item $\vec{F_m}$ is the list of transaction fee total in blocks on the main chain and $\vec{F_n}$ is a list of fee total inside blocks on the fork;
\item $O$ is the mining power currently working on the fork, which can increase or decrease based on new block appending events;
\item $\delta\in (-1,1)$ is the mining power shifting from the main chain to the fork, which is negative if miners are shifting to the main chain;
\item $\lambda_m$ and $\lambda_n$ are block generation rates for the main chain and fork respectively.
\end{asparaitem}
To obtain the winning probability measure for a chain from the state $\vec{S}$, we first present the necessary discussion of the distribution of the time a chain needs to complete $D$ confirmations and become solidly on-chain. We view block generation as a Poisson process and use a random variable to represent the waiting time between block occurrence events. We denote the waiting time for the main chain as $X$ and for the fork as $Y$. Both follow exponential distributions, but with different rates. The rate parameters depend on the mining power distribution.
Given a state $\vec{S}=(m, n, \vec{F_m}, \vec{F_n}, O,\delta, \lambda_m, \lambda_n)$, we obtain the block occurrence rate for this round as:
\begin{align*}
\lambda_m=\frac{1-O}{I};\; \lambda_n=\frac{O}{I}
\end{align*}
where $I$ is block generation interval.
This is derived based on the thinning theorem of the Poisson point process. The main idea is that independent sub-processes of a Poisson process are still Poisson processes with individual rates. With this property, we can determine the time interval for the next block to appear which follows an exponential distribution. More detailed proofs can be found in Appendix \ref{thinning}.
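The thinning property is easy to verify numerically. The following minimal Monte Carlo sketch (our illustration; the interval $I$ and fork power $O$ are arbitrary example values) labels each block of a rate-$1/I$ Poisson process as a fork block with probability $O$ and compares the measured mean waiting time on the fork with the predicted $I/O$:
\begin{verbatim}
import random

I = 600.0     # block interval [s], example value
O = 0.3       # mining power on the fork, example value
N = 200_000   # number of simulated blocks

fork_waits, t, t_last_fork = [], 0.0, 0.0
for _ in range(N):
    t += random.expovariate(1.0 / I)   # overall Poisson arrivals
    if random.random() < O:            # block lands on the fork
        fork_waits.append(t - t_last_fork)
        t_last_fork = t

print("measured mean fork wait:", sum(fork_waits) / len(fork_waits))
print("predicted I / O        :", I / O)
\end{verbatim}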
Next, we derive the probability of the fork winning. The winning probability of the main chain can be derived in the same way.
For $D=1$, there is only one state in which the rational miners $\beta_R$ need to make a decision: when the undercutter starts a forked chain before the main chain extends by one, the two competing chains are in a tie with $\tilde{D}=0$. The probability that the fork wins is simply
\begin{align*}
p = \Pr[Fork\; Wins]= \Pr[Y<X]=O+\delta
\end{align*}
For $D=2$, there is an infinite number of situations in which rational miners need to make decisions about shifting. The winning probabilities of the two chains play an important role in this decision-making process. We let $\tilde{D}=n-m<D$ denote the number of blocks by which the fork leads the main chain. For example, when $\tilde{D}=-1$, the fork is one block behind the main chain. Then the fork wins if it creates 3 blocks before the main chain extends by 1, or discovers 4 blocks before the main chain extends by 2, and so on. Thus we have
\begin{align*}
p =\sum_{i=0}^{\infty}\Pr[(D-\tilde{D}+i) Y<(i+1) X]
\end{align*}
When $\tilde{D}=-1$, the fork is behind the main chain. For the fork to win, we need $p=\sum_{i=0}^{\infty}\Pr[(3+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{3+i}(1-\beta_U-\delta)^{i}$.
When $\tilde{D}=0$, there is a tie between the fork and the main chain. In this case, $p=\sum_{i=0}^{\infty}\Pr[(2+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{2+i}(1-\beta_U-\delta)^{i}$.
If $\tilde{D}=1$, the fork is leading. We have $p=\sum_{i=0}^{\infty}\Pr[(1+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{1+i}(1-\beta_U-\delta)^{i}$.
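Each of these sums is a geometric series and can be evaluated directly. The helper below (our sketch; the value $0.4$ for $\beta_U+\delta$ is an arbitrary example) computes the fork's winning probability for $D=2$:
\begin{verbatim}
def fork_win_probability(power, D=2, lead=0, n_terms=10_000):
    """p = sum_i power^(D - lead + i) * (1 - power)^i, where
    power = beta_U + delta and lead = D~ (fork minus main height)."""
    q = 1.0 - power
    return sum(power ** (D - lead + i) * q ** i for i in range(n_terms))

for lead in (-1, 0, 1):
    print(lead, round(fork_win_probability(0.4, lead=lead), 4))
\end{verbatim}
For instance, with $\beta_U+\delta=0.4$ this gives $p\approx 0.08$, $0.21$ and $0.53$ for $\tilde{D}=-1,0,1$, respectively.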
\section{Acknowledgement}
This work has been partially supported by the National Science Foundation under grants CNS-1719196 and CNS-1846316.
\section{Proofs}
The following problems and discussions are in the context of blockchains. Some concepts may not apply to general contexts.
\subsection{Derive Block Generation Rates for Multiple Competing Chains}\label{thinning}
\textbf{Problem Statement}: Given that the overall system block generation is a Poisson process with rate $\lambda$, and that there are $c$ competing chains with mining power $m_i$ ($\sum_{i=1}^c m_i = 1$) on each chain, the block generation on an individual chain $i$ is a Poisson process with rate $m_i \cdot \lambda$.
\textit{Sketch.} This is essentially the thinning of a Poisson process. We may as well refer to it as the \textbf{thinning theorem}. For proof setup, we assume the system block generation rate $\lambda$ is constant and the puzzle complexity is adjusted in order to keep the average block generation rate constant. The high-level idea is to label each event with the corresponding chain ID. The probability of a block being appended to one chain is dependent upon the mining power concentrated on this chain.
\begin{proof}
We assume the system block generation rate $\lambda$ is constant and the puzzle complexity is adjusted in order to keep the average block generation rate constant.
We assume mining power distribution to be stable during the time period we are interested in.
When $i=1$, $m_1=1$. The block generation rate on the single chain is $\lambda$.
When $i=2$, we have two competing chains in the system with mining power $m_1$ and $m_2=1-m_1$ ($m_1\in (0,1)$). Since we assume mining power distribution $\{m_1,m_2\}$ to be stable and the consensus protocol for Bitcoin is PoW, the two counting processes of blocks appearing on each chain are independent. If we label each incoming block with "1" or "2", depending on which chain it is appended to, the probability of the next block being tagged "1" is $m_1$ and the probability of it being labelled "2" is $m_2$. We record the two processes as $\{N_1(t)\}$ and $\{N_2(t)\}$. With Theorem \ref{thin}, we say $\{N_1(t)\}_{t\geq 0}$ is a Poisson process with rate $m_1 \lambda$ and $\{N_2(t)\}_{t\geq 0}$ is a Poisson process with rate $m_2 \lambda$.
\medskip
\begin{theorem}\label{thin}
Suppose $\{N(t)\}_{t\geq 0}$ is a Poisson process with rate $\lambda$. If an event can be independently observed with probability $p$ and we record this process as $\{N_1(t)\}$, then $\{N_1(t)\}_{t\geq 0}$ is a Poisson process with rate $\lambda_1=p\lambda$.
\end{theorem}
When $i>2$, there are $i$ competing chains in the system. Similar to the previous case, we can observe the event of a block being appended to chain $i$ with probability $m_i$. We record this counting process as $\{N_i(t)\}$. With Theorem \ref{thin}, we say $\{N_i(t)\}_{t\geq 0}$ is a Poisson process with rate $m_i \lambda$.
We briefly give the proof for Theorem \ref{thin}.
\begin{enumerate}
\item[(1).] $N_1(0)=p\cdot N(0)=0$. The initial value for process $N_1(t)$ is 0.
\item[(2).] $N_1(t)\sim Poisson(p\lambda t)$.
\begin{align*}
P\{N_1(t)=k\}&=\sum_{n=k}^{\infty}P\{N_1(t)=k|N(t)=n\}P\{N(t)=n\}\\
&=\sum_{n=k}^{\infty} {n\choose k} p^k (1-p)^{n-k}\frac{(\lambda t)^n e^{-\lambda t}}{n!}\\
&=\frac{p^k (\lambda t)^k e^{-\lambda t}}{k!} \sum_{n=k}^{\infty} \frac{((1-p)\lambda t)^{n-k}}{(n-k)!}\\
&=\frac{p^k (\lambda t)^k e^{-p\lambda t}}{k!}
\end{align*}
\end{enumerate}
\end{proof}
\subsection{Choice of $D$}\label{choiced}
We have looked into safe depths $D=1$ and $D=2$ in the analysis. We do not explore $D>2$ because (i) first, from $D=1$ to $D=2$, we observe a change towards a smaller profitability space. Intuitively, there are more uncertainties as the safe depth increases, since undercutters rely on future bandwidth sets being less wealthy than the undercutting target block. Longer safe depths potentially reduce the premium from each attack, and profitability conditions become tighter, which diminishes the total number of attacks. (ii) Second, when we apply avoidance techniques as a defense against undercutting, the avoidance for $D=1$ is the strongest, as profitability conditions for higher depths $D$ are tighter. In other words, when we defend against a $D=1$ attacker, we defend against the others as well. (iii) Third, the probability of catching up after being more than 2 blocks behind is small. We give more details about this argument below.
\textbf{Problem Statement.} Let $m,n$ denote the relative heights of chain 1 and chain 2 after the forking point. Let $\alpha$ be the mining power on chain 2 and $\alpha\leq 0.5$. Let $\tilde{D}=n-m$ ($|\tilde{D}|<D$). Show that for $D>2$, the probability of chain 2 winning when $\tilde{D}\leq -2$ is negligible.
\begin{proof}
Let $X$ denote the waiting time for chain 1 until it extends by one block and $Y$ denote the waiting time for chain 2 until it creates a new block.
We calculate the probability of chain 2 winning as follows:
\begin{align*}
p = \Pr[\text{Chain 2 wins}]=\sum_{i=0}^{\infty}\Pr[(D-\tilde{D}+i)\cdot Y<(i+1)\cdot X]
\end{align*}
We know that $D-\tilde{D}\geq 5$, since $D\geq 3$ and $\tilde{D}\leq -2$.
\begin{align*}
p &=\sum_{i=0}^{\infty} \alpha^{D-\tilde{D}+i}(1-\alpha)^i<\sum_{i=0}^{\infty} \alpha^{5+i}(1-\alpha)^i
\end{align*}
Both $\alpha^5$ and $(\alpha (1-\alpha))^i$ attain their maxima at $\alpha=0.5$ for $\alpha\in (0,0.5]$. Letting $\alpha=0.5$, we have
\begin{align*}
p <\sum_{i=0}^{\infty} (\frac{1}{2})^{5+2i}=\frac{\sum_{i=0}^{\infty}4^{-i}}{32}=\frac{1}{32}\lim_{n\rightarrow \infty} \frac{1-4^{-n}}{1-1/4}=\frac{1}{24}
\end{align*}
We consider this probability to be relatively small.
\end{proof}
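The bound can be reproduced numerically. The short sketch below (our own illustration) evaluates the truncated series $\sum_{i}\alpha^{D-\tilde D+i}(1-\alpha)^i$; at $\alpha=0.5$ and $D-\tilde D=5$ it approaches the $1/24$ bound:
\begin{verbatim}
def catch_up_prob(alpha, gap, terms=200):
    """Truncated series bounding the probability that the lagging
    chain (mining power `alpha`) makes up a deficit of `gap` blocks."""
    return sum(alpha ** (gap + i) * (1 - alpha) ** i
               for i in range(terms))

print(catch_up_prob(0.5, 5))  # ~0.041667 = 1/24
print(catch_up_prob(0.3, 5))  # far smaller for a weaker chain
\end{verbatim}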
\section{Game Analysis}\label{theo2}
\subsection{Giving Up After One Block Behind}\label{sec:sd1}
Now we discuss $D=1$. We use the abbreviated state vector $S^*=(m,n)$ in the discussion. With $D=1$, rational miners only need to decide whether to shift at state $S^*=(1,1)$.
We continue to denote by $\delta$ the mining power shifting from one chain to another, by $\beta_H$ the honest mining power, by $\beta_R$ the responding rational mining power, and by $\beta_U$ the undercutter's mining power. The transaction fees inside blocks on the main chain are $F_{m_1}$ and $F_{m_2}$, and those inside blocks on the fork are $F_{n_1}$ and $F_{n_2}$. The expected returns for the responding rational miners $\beta_R$ and for the undercutter are $R_r$ and $R_u$ respectively; if there is no undercutting, we denote the respective expected returns as $R'_r$ and $R'_u$. With probability $\beta_U$, the undercutter creates a new chain and the game starts.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/safe-depth-1.pdf}
\caption{Safe depth $D=1$ state transition. Boxes with the "X" sign indicate terminal states. For non-terminal states, circles indicate ties. Every left branch indicates the main chain extending by one block and every right branch indicates the fork creating a new block. The quantity on each edge is the probability of the state transition. Here $\delta$ is the amount of rational mining power shifting to the fork.}
\label{fig:safedepth1}
\end{figure}
After the game has started, responding rational miners need to decide whether to stay on their current chain or shift to the undercutter's chain. Suppose they shift a fraction $x$ of their mining power $\beta_R$ to the fork. They can decide $x$ by solving
\begin{align*}
\max_{x\in [0,1]} E[R_r] = \max_{x\in [0,1]}\bigg( &\frac{\beta_R}{\beta_R+\beta_H}\cdot (1-p)\cdot F_{m_1}+\\ &\frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R}\cdot (1-p)\cdot F_{m_2}+\\
&\frac{x\beta_R}{x\beta_R+\beta_U}\cdot p\cdot F_{n_2}\bigg)
\end{align*}
where $p$ is the probability of the fork winning. Then we can calculate the shift as
\begin{align*}
\delta = x\beta_R
\end{align*}
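After substituting $p=\beta_U+x\beta_R$ this objective is linear in $x$ (this is made explicit for the sufficient-bandwidth case below), so it suffices to compare the two endpoints. A minimal evaluation sketch of ours (parameter values are illustrative):
\begin{verbatim}
def expected_return_rational(x, bU, bH, bR, F_m1, F_m2, F_n2):
    """E[R_r] for responding rational miners shifting fraction x of
    bR to the fork, with p = bU + x * bR (safe depth D = 1)."""
    p = bU + x * bR
    return (bR / (bR + bH) * (1 - p) * F_m1
            + (1 - x) * bR / (bH + (1 - x) * bR) * (1 - p) * F_m2
            + x * bR / (x * bR + bU) * p * F_n2)

params = dict(bU=0.2, bH=0.4, bR=0.4, F_m1=1.0, F_m2=0.4, F_n2=1.0)
stay = expected_return_rational(0.0, **params)   # 0.56
shift = expected_return_rational(1.0, **params)  # 0.60
print("shift all" if shift > stay else "stay")
\end{verbatim}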
We can observe that the optimization problem involves the fees inside the blocks succeeding the forking point. We represent the fees relatively so that the analysis applies more generally: we let $F_{m_1}=1$ and measure all other blocks' fee totals relative to it.
Now we continue the discussion in two different mempool scenarios.
\subsubsection{Mempools with limited bandwidth set} Here, by "limited" we mean the current bandwidth set on the main chain has a small enough transaction fee total ($< \frac{\beta_U}{1-\beta_U}$). We provide more details concerning this quantity as we proceed. Without loss of generality, we assume $F_{m_1}=1$, $F_{m_2}=\gamma$ (a number $< \frac{\beta_U}{1-\beta_U}$ to be exact), $F_{n_1}=\gamma$ and $F_{n_2}=1$ (where $\gamma\in [0,1]$). These are expected values, except for $F_{m_1}$, which is already known when the undercutter decides whether to attack. $F_{n_2}$ is not better than 1 because $F_{m_1}$ is already one of the current "best" bandwidth sets: the undercutter cannot improve on this set without new incoming transactions, and if it includes new transactions in $F_{n_2}$, then $F_{m_2}$ can be improved as well, neutralizing the effect. $F_{n_2}$ is not worse than 1 either, because otherwise other rational miners could undercut and provide a better $F_{n_2}=1$. Note that although we use the same notation $\gamma$ for $F_{m_2}$ and $F_{n_1}$, the undercutter only has control over the latter.
With probability $p=\beta_U+\delta$, the fork actually wins and with probability $\beta_H+\beta_R-\delta$, the main chain wins. Then the expected profit of the undercutter is
\begin{align*}
E[R_u] = (\gamma +\frac{\beta_U}{\beta_U+\delta})\cdot \beta_U(\beta_U+\delta)
\end{align*}
The return for the undercutter if it does not attack is
\begin{align*}
E[R'_u]=\beta_U\gamma
\end{align*}
The attacker will start the attack only if $E[R'_u]<E[R_u]$, which requires $\gamma < \frac{\beta_U}{1-\beta_U-\delta}$. With $\gamma < \frac{\beta_U}{1-\beta_U}$, we have $E[R'_u]<E[R_u]$ even if $\delta=0$. That is, even if no rational miner shifts to the fork, there are so few fees left in the mempool that the attacker is always better off undercutting the main chain rather than extending it. We discuss the scenario where there are more than "limited" fees in the mempool later.
Note that one special case is when there are no transactions left, or the bandwidth set has negligible fees and $F_{m_2}=0$.
The undercutter will start the attack because there is nothing left beyond the main chain head and $E[R'_u]=0$. Since the undercutter does not need to make positive payments to the system, we have $E[R'_u]\leq E[R_u]$.
One small detail is that the attacker needs to craft the block it generates so as to avoid being undercut in turn. A conservative approach simply splits the current chain head into two subsets with equal fees because, assuming a potential 50\% undercutter, the "limited" bandwidth set threshold is 1.
With probability $\beta_U$, the game starts. In general we would solve for the shifting proportion $x$ that maximizes the rational miners' expected returns and update the shift with $\delta=x\beta_R$. However, the attacker initiates undercutting regardless of the expected shift when there are only limited transaction fees unclaimed in the mempool bandwidth set, so we do not solve for $x$ here.
Since there are only "limited" fees to start with in this scenario, undercutting will follow undercutting until there is a "sufficient" bandwidth set in the mempool again. To avoid being undercut, a rational miner or an undercutter can flatten the fees inside the blocks it creates by making the fee total close to what is inside the subsequent bandwidth sets. Within the scope of the "limited" mempool condition, this "closeness" depends on $\frac{\beta_U}{1-\beta_U}$: by keeping the ratio of the fees in the remaining bandwidth set to the fees in its own block above this quantity, the miner can expect the block not to be undercut effortlessly. Note that here the miner has to estimate the mining power of a potential undercutter. We give a safer bound later.
In conclusion, for $D=1$, when the attacker is stronger ($\beta_U$ is larger), the requirement on the mempool bandwidth set fee total ($< \frac{\beta_U}{1-\beta_U}$) for undercutting to be profitable regardless of rational miners' actions is looser. For $\beta_U$ approaching 0.5, the limit is close to 1, which occurs with high frequency. For $\beta_U=0.2$, the upper bound is 0.25, i.e., the current bandwidth set carries 1/4 of the fees inside the main chain head. When this condition is met for an attacker, undercutting is always preferred.
\subsubsection{Mempools with sufficient bandwidth set} By "sufficient" we mean the current bandwidth set in the mempool has more than a "limited" transaction fee total ($\geq \frac{\beta_U}{1-\beta_U}$), in contrast to the previous case.
We continue to assume $F_{m_1}=1$, $F_{m_2}=\gamma$, $F_{n_1}=\gamma$, $F_{n_2}=1$ ($\gamma\in [0,1]$). With a sufficient current bandwidth set, the undercutter needs to attract some rational miners at state (1,1). To decide whether to shift, the rational miners solve for $x$ in
\begin{align*}\label{shift1}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \frac{\beta_R}{\beta_R+\beta_H} (1-p)+\\ &\frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R} (1-p) \gamma+\frac{x\beta_R}{x\beta_R+\beta_U} p\bigg) \\
=& \max_{x\in [0,1]}\bigg(\beta_R (1+\gamma+(\frac{\beta_H}{\beta_H+\beta_R}-\gamma)x)\bigg)\numberthis
\end{align*}
Here we let $p=O+\delta=\beta_U+x\beta_R$, where $O$ denotes the mining power already on the fork. Since the objective is linear in $x$, rational miners either move to the fork with all their mining power or with none. For the rational miners to join the fork, we need
\begin{align*}
\gamma<\frac{\beta_H}{\beta_R+\beta_H}=\frac{\beta_H}{1-\beta_U}
\end{align*}
Considering that $\gamma\geq\frac{\beta_U}{1-\beta_U}$, there is no solution $x$ that makes shifting more profitable if $\beta_H<\beta_U$. This means that it is not profitable for the undercutter to start the attack.
When $\beta_H\geq \beta_U$ and $\frac{\beta_U}{1-\beta_U}<\gamma<\frac{\beta_H}{1-\beta_U}$, the rational miners join the fork after it has been created.
In this case, for $E[R'_u]<E[R_u]$ to hold after rational miners join the fork, we need
\begin{align*}
E[R_u] = \Big(\gamma +\frac{\beta_U}{1-\beta_H}\Big)\cdot \beta_U(1-\beta_H)>\beta_U\gamma
\quad\Longrightarrow\quad
\gamma < \frac{\beta_U}{\beta_H}<1
\end{align*}
Then for $\beta_H\geq \beta_U$, we need
\begin{equation}
\frac{\beta_U}{1-\beta_U}<\gamma<\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}
\end{equation}
Combining the two mempool conditions, we need the following condition for undercutting to be profitable:
\begin{equation}\label{bandwidthcond1}
\gamma<\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}
\end{equation}
For $D=1$, we conclude that with a sufficient bandwidth set ($\gamma \geq \frac{\beta_U}{1-\beta_U}$) in the mempool, when the honest mining power is less than the undercutter's mining power, it is not profitable to undercut the main chain. When the honest mining power exceeds that limit and the bandwidth set fee total satisfies Equation \ref{bandwidthcond1}, rational miners join the fork, and the rational miners, including the undercutter, obtain higher expected returns than from extending the main chain.
With only a limited bandwidth set ($\gamma < \frac{\beta_U}{1-\beta_U}$), the undercutter is always better off starting the attack. For stronger attackers, the upper bound on $\gamma$, which characterizes how wealthy the block being undercut is compared with subsequent blocks, is higher and easier to satisfy. To avoid being undercut, or being undercut again after undercutting, miners or the attacker can keep the ratio of the fee total in the remaining bandwidth set to the fee total inside their blocks at or above $\frac{\beta_U}{1-\beta_U}$. After making the remaining bandwidth set "sufficient", the new upper bound $\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$ from Equation \ref{bandwidthcond1} applies to the ratio. Note that the miner has to estimate the mining power of honest miners and of the potential undercutter; a conservative view is to treat $\beta_U$ as 50\%.
We present algorithms ($D=1$) below for the undercutter to decide whether to attack, for other rational miners to decide whether to join a chain, and for miners to avoid being undercut.
\begin{tcolorbox}[breakable, enhanced]\label{sd1}
Part 1. \textbf{undercutter} decides whether to undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible\footnote{The "negligible" bound can be customized.}. If Yes, start undercutting with the first block on the fork taking half of the main chain head's fees; else, continue.
\item Check if $\gamma < \frac{\beta_U}{1-\beta_U}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set, leaving everything in the main chain head unclaimed; else, continue.
\item Check if $\gamma < \min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, stay on the main chain, exit.
\end{enumerate}
Part 2. \textbf{rational miners} decide whether to join a fork:
Check if $\gamma<\frac{\beta_H}{1-\beta_U}$. If Yes, join the fork; otherwise stay on the main chain.
Part 3. \textbf{miners} avoid being undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, wait; else, continue.
\item Check if $\gamma$ satisfies undercutting inequalities. If Yes, select a part of the bandwidth set to make the inequality no longer hold; else, construct a new block with the bandwidth set.
\end{enumerate}
\end{tcolorbox}
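For concreteness, Part 1 above can be phrased as a small decision routine. The sketch below is our own rendering of the checks (the "negligible" threshold and all names are ours), not the released simulation code:
\begin{verbatim}
def undercut_decision_d1(gamma, bU, bH, negligible=1e-3):
    """Part 1 of the D = 1 algorithm: should the undercutter fork,
    and how should it craft the first block on the fork?
    gamma: bandwidth-set fees relative to the chain head's fees."""
    if gamma < negligible:                   # step 1: near-empty mempool
        return "undercut; fork block takes half of the chain head"
    if gamma < bU / (1 - bU):                # step 2: limited bandwidth set
        return "undercut; fork block takes the current bandwidth set"
    if gamma < min(bH / (1 - bU), bU / bH):  # step 3: sufficient set
        return "undercut; needs rational miners to join"
    return "stay on the main chain"

print(undercut_decision_d1(gamma=0.2, bU=0.3, bH=0.4))
\end{verbatim}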
For undercutting avoidance, intuitively a miner can make its current block close or equivalent to the next possible block (the bandwidth set) in terms of fee totals; the spirit of undercutting is to take advantage of wealthy blocks.
Since we have identified the profitability conditions for undercutting, one can avoid having its block undercut by invalidating those conditions.
\paragraph{Treating rational miners as one miner} In the above analysis, rational miners make decisions from a collective perspective by maximizing $E[R_r]$ instead of the expected return of a specific rational miner $m_i$ ($i<n$). This can give rise to coordination problems. Fortunately, rational miners either move all their mining power or stay on their current chain, and there is only one state, $(1,1)$, where they need to make a decision, so most rational miners act similarly. Besides, miners become aware of other miners' types over time, so they know the amount of rational mining power. It then becomes practical to make decisions collectively even from an individual point of view.
\subsection{Giving Up After Two Blocks Behind}\label{sec:sd2}
Now we discuss $D=2$. Rational miners make decisions at states $S^*=\{(1,1),(1,2),(2,1),(2,2),...\}$. The probability $p$ now comprises an infinite series.
Similar to the previous case, we first let rational miners behave like honest miners to arrive at a simple condition on the bandwidth set for the attacker.
\begin{figure}
\centering
\includegraphics[scale=0.45]{figures/safe-depth-2.pdf}
\caption{Safe depth $D=2$ state transition. Notations are the same as in Figure \ref{fig:safedepth1}. We now have infinite state transitions. $\delta'$ and $\delta''$ are the amounts of rational mining power shifting from one chain to another. We do not label the probabilities of state transitions at the end of the graph to emphasize two facts: (i) the game can go on infinitely, and (ii) the corresponding probabilities differ when arriving via different paths.}
\label{fig:safedepth2}
\end{figure}
Without loss of generality, we assume $F_{m_1}=1$, $F_{m_2}=F_{m_3}=\gamma$, $F_{n_1}=F_{n_2}=\gamma$ and $F_{n_3}=1$ (where $\gamma\in [0,1]$). We bound $\gamma$ later. $F_{m_2}$ and $F_{m_3}$ can take different values in reality, but here we use the same value to highlight the wealthiness of $F_{m_1}$. Besides, since the undercutter's decision compares the returns from undercutting and from extending the current chain, it suffices to have $F_{m_2},F_{m_3}$ and $F_{n_1},F_{n_2}$ close.
When there is only one bandwidth set left and $F_{m_3}=0$, the undercutter needs to split the set to produce 2 blocks; otherwise other rational miners can undercut. When there is no bandwidth set left and $F_{m_2}=F_{m_3}=0$, the undercutter needs to split the current chain head into 3 blocks using the rules described earlier to avoid being undercut again. We give more details about these two special cases at the end. The reason we set $F_{n_3}=1$ is to attract rational miners and, probably more importantly, to avoid being undercut again.
We know that if there is no attack, the undercutter expects to receive
\begin{align*}
E[R'_u]=2\beta_U\gamma
\end{align*}
If it starts the attack, its expected return is
\begin{align*}
E[R_u]
&=(2\gamma+\beta_U)\sum_{i=0}^{\infty} \beta_U^{i+2}(1-\beta_U)^i\\
&=\beta_U^2(2\gamma+\beta_U) \cdot \lim\limits_{n\rightarrow \infty} \frac{1-(\beta_U (1-\beta_U))^n}{1-\beta_U(1-\beta_U)}\\
&=\frac{\beta_U^2(2\gamma+\beta_U)}{1-\beta_U(1-\beta_U)}
\end{align*}
When $\gamma <\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$ (a limited bandwidth set), the undercutter can start the attack without rational miners joining the fork in a tie. This bound is more demanding than the one for $D=1$: for $\beta_U=0.5$, the upper bound is now $0.5$ instead of 1, and for $\beta_U=0.2$, it is 0.03 instead of 0.25. Overall, for weak attackers the condition is far more demanding than before. Intuitively, the probability that a weak attacker leads by one block is much higher than the probability that it leads by two.
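Both the closed form and the resulting bound can be sanity-checked numerically; a small sketch of ours:
\begin{verbatim}
def ER_undercut_d2(gamma, bU, terms=500):
    """Truncated series for E[R_u] when D = 2 and no rational
    miner joins the fork in a tie."""
    return (2 * gamma + bU) * sum(bU ** (i + 2) * (1 - bU) ** i
                                  for i in range(terms))

bU = 0.2
bound = bU ** 2 / (2 - 4 * bU + 2 * bU ** 2)   # ~0.031
for gamma in (0.02, 0.05):                     # below / above the bound
    attack = ER_undercut_d2(gamma, bU)
    no_attack = 2 * bU * gamma
    print(gamma, attack > no_attack, round(bound, 3))
\end{verbatim}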
Next, we consider $\gamma \geq \frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$ (a sufficient bandwidth set), where the undercutter needs rational miners to join the fork in a tie. As before, rational miners allocate their mining power between the two chains to maximize their expected returns. We solve for $x$ in
\begin{align*}\label{shift2}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \frac{\beta_R}{\beta_R+\beta_H} p_m+ \frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R} 2\gamma p_m \\&+ \frac{x\beta_R}{x\beta_R+\beta_U} \gamma p_f +\frac{x\beta_R}{x\beta_R+\beta_U+\beta_H} p_f \bigg)\numberthis
\end{align*}
where $p_m=\beta_U(1-\beta_U-x\beta_R)^2$ is the probability of the main chain leading by 2 blocks first and $p_f=\beta_U(\beta_U+x\beta_R)(\beta_U+x\beta_R+\beta_H)$ is the probability of the fork leading by 2 blocks first. Here we only consider the leftmost and rightmost paths in Figure \ref{fig:safedepth2} because they are the two most significant paths. We can observe that the coefficient of the second-degree term in the quadratic objective is positive, so the expected return for rational miners attains its maximum at one of the two endpoints.
Again we let $E[R_{r|x=0}]<E[R_{r|x=1}]$ and obtain
\begin{align*}
\gamma < \frac{\frac{\beta_H^2}{\beta_H+\beta_R}+1-2\beta_H-\beta_R}{2\beta_H+2\beta_R-1}=\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}
\end{align*}
Here we implicitly assume $\beta_U<0.5$ to arrive at a clean inequality. When $\beta_U=0.5$, $E[R_{r|x=0}]<E[R_{r|x=1}]$ holds regardless of $\gamma$.
We can observe that $\frac{\beta_U-\beta_H}{1-2\beta_U}$ is the dominant term. With rational miners joining, the expected return for the undercutter on the rightmost branch is now
\begin{align*}
E[R_{u|r}] = (\gamma + \frac{\beta_U}{\beta_U+\beta_R}\gamma + \beta_U)\beta_U(\beta_U+\beta_R)
\end{align*}
We let $E[R_{u|r}] > E[R'_u]$ to arrive at a tighter bound and we have
\begin{align*}
\gamma < \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}
\end{align*}
Overall we need
\begin{align*}
\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}\leq \gamma < \min \{\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}, \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}\}
\end{align*}
The attack is not profitable if this inequality does not hold. Combining the two conditions together, we have
\begin{align*}\label{bandwidthcond2}
\gamma < \min \{\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}, \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}\}\numberthis
\end{align*}
\paragraph{Special cases} If there is only one bandwidth set remaining, or more generally if the remaining fees are negligible after constructing $F_{m_2}$, the undercutter needs to craft $F_{n_1}$ and $F_{n_2}$ in such a way that undercutting $F_{n_1}$ or $F_{n_2}$ is not profitable in expectation. We have given the profitability conditions in Equations \ref{bandwidthcond1} and \ref{bandwidthcond2}. The high-level idea is to make the attack profitable with respect to $F_{m_1}$ while making undercutting $F_{n_1}$ and $F_{n_2}$ unprofitable. Note that for the latter, the total mining power is no longer 1 but $1-\beta_U$ if the undercutter is determined and does not shift.
If there are only negligible fees remaining after $F_{m_1}$, the undercutter needs to divide the main chain head into three blocks. The necessary conditions for crafting these blocks are the same as before.
In conclusion, for $D=2$, the limited bandwidth set bound is now $\gamma < \frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$. This criterion can be hard to meet for weak miners with less than 30\% mining power ($\gamma <0.09$ for $\beta_U=0.3$), but for strong attackers with 40\%--50\% mining power (bounds 0.22--0.5), the condition is easier to satisfy. In the sufficient bandwidth set scenario, attackers also face tighter bounds on $\gamma$, especially weak attackers.
We present algorithms for $D=2$ below.
\begin{tcolorbox}[breakable, enhanced]\label{sd2}
Part 1. \textbf{undercutter} decides whether to undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, start undercutting with the first block on the fork being 1/3 of the main chain head; else, continue.
\item Check if there is only one non-negligible bandwidth set left. If Yes, start undercutting with the first block on the fork being 1/2 of the bandwidth set; else, continue.
\item Check if $\gamma<\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, continue.
\item Check if $\gamma$ satisfies Equation \ref{bandwidthcond2}. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, stay on the main chain, exit.
\end{enumerate}
Part 2. \textbf{rational miners} decide whether to join a chain:
Solve for $x$ (the proportion of mining power to shift to the chain) in Equation \ref{shift3}.
Part 3. \textbf{miners} avoid being undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, wait or undercut; else, continue.
\item Check if there is only one non-negligible bandwidth set left in the mempool. If Yes, divide the current bandwidth set into 2 sets with equal transaction fees and construct a new block with one set; else, continue.
\item Check if $\gamma$ satisfies undercutting conditions. If Yes, select a part of the bandwidth set to make the inequality no longer hold; else, construct a new block with the bandwidth set.
\end{enumerate}
\end{tcolorbox}
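As with $D=1$, Part 1 above reduces to a few threshold checks on $\gamma$. A hedged sketch of ours (it omits the special-case step 2 for a single remaining bandwidth set and assumes $\beta_U<0.5$, as in the derivation above):
\begin{verbatim}
def undercut_decision_d2(gamma, bU, bH, negligible=1e-3):
    """Part 1 of the D = 2 algorithm as threshold checks on gamma
    (assumes bU < 0.5; special cases omitted for brevity)."""
    if gamma < negligible:
        return "undercut; fork block takes 1/3 of the chain head"
    limited = bU ** 2 / (2 - 4 * bU + 2 * bU ** 2)
    if gamma < limited:                        # limited bandwidth set
        return "undercut; fork block takes the bandwidth set"
    joining = (bH ** 2 / (1 - bU) + bU - bH) / (1 - 2 * bU)
    beats_no_attack = bU * (1 - bH) / (1 + bH - bU)
    if gamma < min(joining, beats_no_attack):  # Equation (bandwidthcond2)
        return "undercut; needs rational miners to join"
    return "stay on the main chain"

print(undercut_decision_d2(gamma=0.1, bU=0.45, bH=0.3))
\end{verbatim}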
Suppose the fork extends by one block; rational miners decide the amount of mining power allocated on the main chain to shift to the fork by solving for $x$ in
\begin{align*}\label{shift3}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \text{(fees already own on the main)}p_m\\
&+\text{(fees already own on the fork)}p_f\\
&+\text{(claimable fees on main)}\frac{(1-x)\beta_R}{1-O-x\beta_R} p_m\\ &+\text{(claimable fees on fork)}\frac{x\beta_R}{O+x\beta_R} p_f\bigg)\numberthis
\end{align*}
where $p_f=(O+x\beta_{R|m}^*)^{D-\tilde{D}}$ and $p_m=(1-O-x\beta_{R|m}^*)^{D+\tilde{D}}$ ($\beta_{R|m}^*$ is the rational mining power on the main chain).
Claimable fees are the total fees that a miner can expect to obtain from the unconfirmed transaction set of each chain within the size limit $(D\mp\tilde{D})\cdot B$ ($B$ is the block size limit). When the main chain extends by one block, the processing is conceptually equivalent; the only difference is that in the calculation, $p_f=(O-x\beta_{R|f}^*)^{D-\tilde{D}}$ and $p_m=(1-O-x\beta_{R|f}^*)^{D+\tilde{D}}$ ($\beta_{R|f}^*$ is the rational mining power on the fork).
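The reallocation rule can be evaluated in the same style. The helper below is our own; the claimable-fee estimates are passed in as inputs rather than computed from a mempool. It evaluates the Equation \ref{shift3} objective after the fork extends by one block; in the cases analyzed above the maximum sits at an endpoint, so comparing $x=0$ and $x=1$ suffices, while in general one would scan over $x$:
\begin{verbatim}
def ER_rational_general(x, O, bR_main, owned_main, owned_fork,
                        claim_main, claim_fork, D, D_tilde):
    """Equation (shift3) after the fork extends by one block.
    O: mining power already on the fork; bR_main: rational power
    currently on the main chain; D_tilde = n - m."""
    p_f = (O + x * bR_main) ** (D - D_tilde)      # fork reaches D lead
    p_m = (1 - O - x * bR_main) ** (D + D_tilde)  # main reaches D lead
    return (owned_main * p_m + owned_fork * p_f
            + claim_main * (1 - x) * bR_main
              / (1 - O - x * bR_main) * p_m
            + claim_fork * x * bR_main / (O + x * bR_main) * p_f)

args = dict(O=0.3, bR_main=0.3, owned_main=0.6, owned_fork=0.0,
            claim_main=1.2, claim_fork=1.5, D=2, D_tilde=1)
print(ER_rational_general(0.0, **args), ER_rational_general(1.0, **args))
\end{verbatim}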
\subsection{Rearranging Past Blocks in the Extreme Case}
Let us consider an extreme case where only negligible fees remain unclaimed in the unconfirmed transaction set for a sufficiently long time (greater than multiple block generation intervals $I$). As we have noted, when there is only a limited amount of fees left in the mempool, a rational miner can avoid undercutting by claiming only part of the bandwidth set. But if the situation worsens and no new transactions enter the system, the remaining transaction fees become negligible at some point. In this extreme case, rational miners may look back at previous blocks, start undercutting at a certain block height, and rearrange the blocks afterward. Suppose an undercutter goes back $C$ blocks. As long as only negligible transaction fees flow into the system during $\frac{C\cdot I}{\beta_U}$, an attacker who earned less than its fair share in the past $C$ blocks finds it desirable to attack. The previous analysis does not apply to this extreme case. The good news is that Bitcoin, Monero, and possibly other important altcoins currently do not suffer from this problem and may continue to remain this way; if that is no longer the case, undercutting may not be the major concern.
\section{Background and Definitions}
\label{sec:Def}
\subsection{Mempool}\label{invariant}
The mempool is the set of unconfirmed transactions maintained locally by each miner. When a transaction is announced to the network, it enters miners' mempools. A miner can also set fee rate or size thresholds to filter transactions. Miners select transactions from their mempools to form new blocks; usually, a miner chooses the bandwidth set or a near bandwidth set (see \ref{definition}) with respect to its local mempool and the global block size limit. When a new block is published, miners verify the block and then update their local mempools to exclude the transactions included in the newly published block.
A miner can also intentionally select less wealthy transactions to form blocks to attract rational miners. Wealthy transactions are those with high fee rates, for which we have two measures. One is the fee per byte (F/B): as the name suggests, it is calculated by dividing a transaction's total fee by its total size. After SegWit was introduced to the Bitcoin system, a second measure, fee per weight unit (F/WU), appeared. In our analytical discussion, we refer to both simply as fee rates.
\subsection{Definitions}\label{definition}
Now we introduce definitions heavily used in, or important to, the later discussion.
\theoremstyle{definition}
\begin{definition}{\textbf{(Bandwidth Set.)}}
Given block size limit $B$ and an unconfirmed transaction set $A$ composed of $N$ transactions, $S$ is a bandwidth set of $A$ \textit{w.r.t.} $B$ if $S\in P(A)$, $S.size \leq B$, and for all $S_i\in P(A)$ with $S_i.size\leq B$ we have $S.fee\geq S_i.fee$, where $P(A)=\{S_i\}_{1\leq i \leq 2^N}$ is the power set of $A$.
\end{definition}
\begin{remark}
If the unconfirmed transaction set has size smaller than or equal to the block size limit $B$, then the bandwidth set is the memory pool itself. If its size is greater than $B$, then the bandwidth set only includes part of the transactions from the pool. The bandwidth set is not necessarily unique: when several different sets share the same fee total, they are all bandwidth sets. If we sort transactions by their fee rates (fee per byte), these bandwidth sets usually have similar components, with differences occurring at the lower end.
\end{remark}
\begin{definition}{\textbf{(Near Bandwidth Set.)}} Given block size limit $B$ and an unconfirmed transaction set $A$ composed of $N$ transactions, $\tilde S$ is a near bandwidth set of $A$ \textit{w.r.t.} $B$ if $\tilde S.size \leq B$ and $(\tilde S \cap S).fee \geq p\cdot S.fee$, where $S$ is a bandwidth set of $A$ \textit{w.r.t.} $B$ and $p$ is a proportion parameter.
\end{definition}
\begin{remark}
Due to propagation delay, miners' balancing of block size against bandwidth, and potentially other causes, near bandwidth sets appear more often than bandwidth sets in historical block statistics.
\end{remark}
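Computing an exact bandwidth set is a 0/1 knapsack problem over the mempool; sorting by fee rate and filling greedily, as in the remark above, yields a near bandwidth set. A minimal sketch (our own, not the released code):
\begin{verbatim}
def near_bandwidth_set(mempool, B):
    """Greedy near bandwidth set. `mempool` is a list of
    (size, fee) pairs; `B` is the block size limit."""
    chosen, used = [], 0
    for size, fee in sorted(mempool, key=lambda tx: tx[1] / tx[0],
                            reverse=True):    # highest fee rate first
        if used + size <= B:
            chosen.append((size, fee))
            used += size
    return chosen

pool = [(250, 5000), (400, 4000), (300, 9000), (500, 2500)]
block = near_bandwidth_set(pool, B=1000)
print(block, sum(fee for _, fee in block))    # total fee: 18000
\end{verbatim}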
\begin{definition}{\textbf{(Safe depth D.)}}
A block is considered safe from being undercut after $D$ confirmations.
\end{definition}
\begin{remark}
An undercutter gives up its attack after falling $D$ blocks behind the chain it targets. For $D=1$, the attacker initiates the attack anyway unless the main chain is 2 blocks ahead. For $D\geq 2$, it undercuts unless it is $D$ blocks behind. After the attack has started, the attacker gives up its chain when it falls $D$ blocks behind.
\end{remark}
\section{Experiments}
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/experiments/bitcoin_rewards_45_undercutter_0.5month_0.8avoidance_1inflation.pdf}
\caption{Tentative experiments with 2 weeks of Bitcoin transaction data starting from May 15th, 2020. A stricter avoidance technique is applied against the strong 45\% Bitcoin attacker: if the normal avoidance suggests a miner claim 80\% of the claimable fees in the bandwidth set, the strict avoidance suggests claiming $0.8\cdot 0.8=64\%$.}
\label{avoidance08}
\end{figure}
\section{Introduction}\label{sec:intro}
The Bitcoin network~\cite{nakamoto2008bitcoin} and several cryptocurrencies rely on nodes participating in transaction verification, ordering and execution, and mining new blocks for their security and performance.
Bitcoin incentivizes these nodes (or miners) with block mining rewards and transaction fees. Historically, the fixed block reward has been the dominating source of Bitcoin miners' revenues.
However, the block reward is a system parameter for Bitcoin and halves approximately every four years.\footnote{The currently fixed block reward is $6.25$ BTC. The next halving event to $3.125$ BTC is scheduled for May 2024. \cite{btchalving2}}
With this diminishing block reward, as the Bitcoin network grows further, the block reward's domination is expected to vanish to a negligible level, and transaction fees will become the major source of mining revenue. Today, with a stable reward value, a miner's expected revenue relies mostly on the probability of it finding a block, which is itself contingent on the miner's hashing power. However, once transaction fees start to dominate, the fair sharing of revenue based on hashing power may no longer be maintained: transaction fees are voluntary and arbitrary under the current fee scheme, so the total fees inside different blocks can differ, and miners' rewards are no longer relatively stable.
For example, consider two miners $A$ and $B$ in the system with 50\% mining power each, so that both are expected to mine an almost equal number of blocks. If miner $A$ mines blocks with total fees of around $1$~BTC each while $B$ always encounters wealthy transactions and mines blocks with total fees of around $2$~BTC each, $B$'s revenue is going to be twice $A$'s.
In general, under a transaction fee-based incentive setting, a miner's revenue mostly depends on users' fee offerings and on its own transaction selection. As both are time-variant and the timing of discovering new blocks matters, rational miners look for deviating mining strategies that offer better rewards.
There have already been rigorous discussions of attacks related to mining strategies. The most notable attacks are selfish mining~\cite{eyal2018majority, sapirshtein2016optimal, nayak2016stubborn}, block withholding~\cite{rosenfeld2011analysis, luu2015demystifying, courtois2014subversive, luu2015power, eyal2015miner}, and fork after withholding~\cite{kwon2017selfish}. Defenses against these game-theoretic attacks have also been well studied~\cite{heilman2014one,zhang2017publish, pass2017fruitchains,kwon2019eye,lavi2019redesigning}. Nevertheless, the transaction fee-based incentivization framework brings new challenges: the freedom to choose transactions when forming new blocks and the voluntary nature of fee rates nurture a possible deviating mining strategy called undercutting~\cite{carlsten2016instability}.
In undercutting, the attacker intentionally forks an existing chain by leaving wealthier transactions out in its new block to attract other (petty compliant) miners to join the fork.
Unlike honest miners, who follow the longest chain that appears first, petty compliant miners break ties by selecting the chain that leaves out the most transaction fees.
Carlsten et al. find that undercutting can become the equilibrium strategy for miners, thus making the system unstable as miners are undercutting each other.
Undercutting is not desired because it hurts the expected profits of honest miners. Moreover, successful undercutting harms fairness to users who attach high fee rates to transactions and may discourage users from paying higher rates.
However, in \cite{carlsten2016instability}, fees arrive at a constant, continuous rate and miners can claim all the accumulated fees. This can be unrealistic: for many prominent blockchains, including Bitcoin, scalability remains an important unresolved issue, and transaction fees are voluntary, so the fee accumulation rate deviates from a fixed rate. More importantly, the unconfirmed transaction set is often large \cite{unconfirmed}, and miners typically cannot insert all transactions into one block due to the block size limit. Moreover, petty compliant actions can be irrational in some cases, especially when the attacker's mining power is small or the amount of extra remaining fees on a chain is small. If following the undercutter is not always advantageous, the assumed petty compliant miners may not join the fork, which also makes the performance of undercutting questionable. Additionally, if wealthy transactions continue to arrive, the premiums deliberately left by the undercutter may not be as appealing; this is not captured when fees accumulate at a constant rate as in \cite{carlsten2016instability}.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.65]{figures/overview_example_all_withCoins.pdf}
\caption[a]{Example Undercutting Attacks with Petty Compliant Followers~\cite{carlsten2016instability} (top) and Rational Followers (bottom two). In all three scenarios, the main chain is on top and the forked chain on the bottom. The owner of each block is shown in the bottom left corner of each square, and the timestamp in the bottom right corner. The miners working on extending each chain are shown above each square after the block is published. The arrows between chains represent miners switching chains when a new block is published. The coins represent the remaining unclaimed fees.
The petty compliant miners act like honest miners but break ties by choosing the chain that has claimed the fewest transaction fees. Rational miners can choose to extend any chain based on the expected return from each chain. They may not join a weak attacker's fork when the bonus for joining is small; likewise, they may stay with a strong attacker if the extra fee amount is considerable. Aside from these two cases, rational miners can shift around depending on a chain's winning probability and the extra fees unclaimed on that chain.
}
\label{fig:attackexample}
\end{figure*}
Towards modeling the undercutting attack more realistically and analyzing its potential profitability, we construct a new model to capture its performance. Fees in our model arrive with transactions. By sorting the transactions in the unconfirmed transaction set and packing at most a block size limit of transactions, we obtain the maximum claimable fees at a given timestamp; miners can choose to claim no more than this maximum. Petty compliant miners are made "rational" in our model. Overall, a rational miner takes the actions that maximize its returns: it can selfishly pick transactions to form blocks, intentionally fork a chain by undercutting, or extend a specific existing chain. As illustrated in Fig.\ \ref{fig:attackexample}, petty compliant miners mostly mine honestly, as shown in trace 2, but select the chain with the most fees available when faced with a tie, as in traces 1 and 3. When faced with a weak undercutter who leaves only a limited amount of extra fees, a rational miner may not join the fork, as demonstrated in the middle case in Fig.\ \ref{fig:attackexample}. If the attacker is strong, with an attractive amount of unclaimed fees, the rational miner joins the fork early and may not shift to other chains, as shown in the bottom-most case.
When undercutting, the miner's goal is to earn more than it could gain by not undercutting. To realize this goal, the attacker needs (i) to attract other rational miners to join its fork if necessary, and (ii) to avoid being undercut by others.
If the undercutter leaves out too many fees, it may end up worse off; if it claims more than necessary, other rational miners may undercut its fork, annihilating its efforts. How many fees, then, should an undercutter take to achieve the two goals simultaneously? And is there a way to make achieving both goals impossible?
We seek to first locate such a feasible area for an undercutter to secure its premiums and, second, to uncover defenses against this attack.
\subsection{Contributions}
\textbf{Defining an analytical model.} Conceptually, the model has three major modules: chains, miners, and unconfirmed transaction sets. The chains module keeps track of the state of each chain, including its height and list of blocks, each of which comprises a size, an owner, and a fee total. The miners module records information about miners: the mining power, type, and available strategy set of every miner. The unconfirmed transaction sets module, also called the memory pool module, tracks the unconfirmed transactions of the different chains. All modules evolve via state transitions as new transactions are fed into the system and miners take actions. We allow fees to arrive with transactions instead of at a constant rate, and we measure the total fees inside a block relative to its parent block as the parameter $\gamma$. In the experiments with real-world data flows, we calculate the absolute fee totals and also express them as relative ratios. The evolution terminates when the memory pools become empty, and we calculate each miner's profits at the terminal states with absolute fees.
The analytical analysis focuses on a single round of the undercutting attack, which can be of finite length (if the undercutter gives up after falling one block behind once it has created a fork) or of infinite length otherwise. Specifically, we perform the analysis for "limited" and "sufficient" transaction fee settings. In the former, there are so few fees left to claim that the undercutter is better off undercutting the current chain even if no other miners join quickly. In the latter, there is still a decent amount of fees to claim, and the undercutter must put in more effort to attract other miners.
\textbf{Identifying mempool conditions for profitable undercutting attacks.}
As a key contribution, we offer closed-form conditions on the unconfirmed transaction set that make undercutting profitable. Assume the mining power fraction of the undercutter is $\beta_U$ and that of the honest miners is $\beta_H$.
(i) In the best case for the undercutter in our model, the undercutter forgoes the fork after falling one block behind. (ii) ($\gamma<\frac{\beta_U}{1-\beta_U}$) The undercutter earns more through undercutting than through extending the target chain when what can be stuffed into the next block on the target chain carries fewer fees than $\frac{\beta_U}{1-\beta_U}$ times the fees in the targeted chain head. (iii) ($\gamma<\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$) When there are more transaction fees in the unconfirmed transaction set, but still less than $\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$ times what is inside the target chain head, the attacker can expect to earn a premium by undercutting only if it can attract rational miners to join. It should carefully craft the first block on its fork so that rational miners are attracted to join the fork but not tempted to undercut it again. Moreover, the conditions for the case where the undercutter holds on for one more block are stricter, as noted in (i), and the overall returns are worse.
\textbf{Proposing undercutting avoidance techniques to make undercutting undesirable.} Our next contribution involves mitigating undercutting attacks. Miners are incentivized to defend against undercutting, as having their blocks undercut nullifies their returns. Once we have identified the effective conditions for undercutting to bring higher expected returns than fair shares, we work backward and proactively check the conditions before creating a new block. By making the conditions no longer satisfiable, potential undercutters are no longer motivated to undercut.
Undercutting behaviors motivate miners to apply undercutting avoidance techniques, which essentially means not incorporating a significantly larger amount of fees into one's block while leaving too few fees in the memory pool. In other words, when faced with a non-overloaded unconfirmed transaction set, the potential threat of being undercut forces a miner to include only a proper amount of fees in its block.
\textbf{Experimenting with real-world data.} We design simulations fed with real-world data from the Bitcoin and Monero blockchains. Bitcoin is representative of swamped blockchains, while Monero typically has a small unconfirmed transaction set. Memory pools are reconstructed from retrieved blocks and are not crowded. One part of the experiments proceeds with the undercutting avoidance feature disabled for all miners except the undercutter; this part examines the effectiveness of the conditions identified in the analytical study. The other part is carried out with undercutting avoidance techniques enabled for all miners.
We observe that undercutters obtain higher expected returns than their fair shares if they check the conditions before attacking and no undercutting avoidance is enabled. Although there are uncertainties, they can be managed, especially when attackers give up faster in less promising situations. On the Bitcoin blockchain, when we allow the undercutter to give up after falling one block behind after starting the attack, the average return for a 17.6\% undercutter is 18\%; for a 45\% mining power attacker, the expected profit share is 49.5\%.
These profits are further increased in Monero, where the 35\% attacker is able to gain 43\% of the profits on average.
When we equip miners with undercutting avoidance techniques, there are fewer attacks and the expected returns for the undercutter are around its fair share.
The expected returns decrease from 18\% to 17.5\% for the 17.6\% undercutter and from 49.5\% to 45.8\% for the 45\% attacker.
We observe the same trend in Monero, where the expected returns are decreased from 43\% to 35\%.
\paragraph{Organization of the paper} The rest of the paper is organized as follows:
In \Cref{sec:Related}, we visit the undercutting literature and related economic theories. \Cref{sec:Def} offers a brief overview of blockchain mempools and mining and defines the relevant concepts. In \Cref{theo}, we model the mining game with undercutting and present the analytical study, while in \cref{sec:Exp} we evaluate the profitability of undercutting and the effectiveness of the avoidance technique using real-world blockchain data. Finally, \Cref{sec:Conclusion} concludes the discussion.
\section{Conclusions}\label{sec:Conclusion}
We construct a system model for studying the profitability of the undercutting attack, in which a miner forks an existing chain and deliberately leaves more fees unclaimed. The undercutter's balancing act of undercutting others while avoiding having its own fork undercut demands specific conditions on the unconfirmed transaction set at the time of its decision. If the conditions are met, the undercutter can expect a positive premium from attacking compared with extending an existing chain. However, because such conditions are not easy to satisfy, are time-dependent (they can be invalidated when new transactions arrive), and can be manipulated, a door opens for mitigation: by applying undercutting avoidance techniques that invalidate the aforementioned conditions, miners can avoid being undercut. The avoidance feature requires miners to claim fewer fees if the current bandwidth set, or the potential new block, is sufficiently wealthier than the subsequent block. As a result, the competition around undercutting can involuntarily help promote the fair sharing of fees even in a time-variant fee system.
\section{Related Work}\label{sec:Related}
Carlsten et al.\ \cite{carlsten2016instability} introduce the undercutting mining strategy to show the instability of the Bitcoin incentivization system without fixed block rewards. They consider three mining strategies: default honest mining, undercutting, and {\em petty compliant}.
Carlsten et al.\ find that an equilibrium exists where all miners use the same undercutting strategy, which brings instability to the system. In their model, fees accumulate at a fixed rate, and miners can claim all accumulated fees at a given timestamp without considering size limits. In the current Bitcoin system, there are often large amounts of unconfirmed transactions~\cite{unconfirmed}. When there is only a small amount of fees left because the previous block's miner took all the accrued fees, it is desirable to undercut. This also follows from our model as the ratio $\gamma$ between the fees in the next possible block and the target block approaches 0 (becomes negligible). But due to the existence of candidate transactions, this rarely happens for Bitcoin and happens more often for Monero in our empirical analysis.
Their fee generation design also inflates the profitability of undercutting because the unconfirmed transaction set, which cushions the effects of undercutting, is not captured in the model.
Intuitively, following the undercutter is more profitable because more fees remain, but miners cannot claim all fees at once. A miner's expected return from a chain depends on the probability of that chain eventually becoming the main chain and on the proportion of blocks the miner can own, along with the fees inside those blocks. If the fees inside the blocks a miner expects to own are similar on all chains, due to the block size limit and the waiting unconfirmed transactions, extending the chain with the most mining power maximizes the expected return. A rational miner seeks to maximize its income; then following the undercutter is not always the best choice, and the probability of the attacker winning decreases.
\paragraph{Together with other non-compliant mining strategies} It is possible to combine undercutting with other mining strategies such as selfish mining~\cite{eyal2018majority, sapirshtein2016optimal, nayak2016stubborn} and block withholding~\cite{rosenfeld2011analysis, luu2015demystifying, courtois2014subversive, luu2015power, eyal2015miner}. For block withholding, because undercutters prefer larger mining power, the two attacks have opposite goals and can be awkward to blend together. Selfish mining purposely hides discovered blocks, while undercutting intends to publish a block and attract other miners; they do not share the same rationale, but one can schedule the two strategies and apply whichever has the higher expected return at a given time. In this work, we focus on the profitability and mitigation of undercutting, which affects the undercutting part of such a strategy scheduler.
\paragraph{Intuitions from Economics - Risk Premium}
In analogy to investment theory, in the mining process, miners obtain their risk-free rate of return by performing honest mining. When they intentionally undercut an existing chain, they are taking a risk of losing their share that can be acquired from subsequent main chain blocks. If the undercutter decides to take the risk, it is worthwhile if it is earning a risk premium\cite{bodie2009investments}.
The block being undercut needs to be much wealthier compared with blocks to come, to compensate for such risk.
\paragraph{Sunk Cost} In traditional microeconomics \cite{mankiw2014principles}, a rational agent makes decisions based on prospective costs and disregards sunk costs. Meanwhile, from a behavioral economics \cite{benartzi1995myopic,tom2007neural} point of view, decision-makers can have irrational biases toward the probability distribution of future events, loss aversion, and other illusions. When an undercutter decides whether to continue the fork it created or to shift to another chain, it can be influenced by sunk costs, including the time already spent and the blocks already mined. We capture this mindset by allowing the attacker to give up after falling $D\geq 1$ blocks behind. A larger $D$ indicates a greater influence of sunk costs.
\paragraph{Lemon Market} Another way to look at the problem at a higher level is through the market for "lemons"~\cite{akerlof1978market}: the brand new car that becomes defective the minute you buy it. In the Bitcoin block space market, users are bidders and miners are sellers. Users decide on the prices to pay based on their observation of the relationship between confirmation time and fee rates; they attach fee rates corresponding to their desired waiting times. If undercutting prevails, users who attach high fee rates but are ghosted are provided with "lemons" instead of "peaches" -- fast confirmation. This can result in a decrease in overall fee rates, diminishing the profitability of undercutting.
\section{System Evaluation}\label{sec:Exp}
In this section, we evaluate the profitability of undercutting using real-world data from Bitcoin and Monero. Bitcoin is a typical example of a congested blockchain, and Monero an example of a more available one. The simulation code and a sample data set have been made open source\footnote{https://github.com/haas256/UndercuttingProject}. In the analytical analysis (previous section), we let the undercutter be aware of future transaction flows into and out of the mempool; in reality, there is more uncertainty. We therefore explore the profitability of undercutting using real-world transaction flows that can be outside the undercutter's control. Another key difference is that, in the experiments, miners are treated as distinct players regardless of their types; it is possible in this discrete case that one rational miner stays on its current chain while other rational miners join a fork.
\subsection{Data Collection}\label{sec:data_collcetion}
\paragraph{Transactions}
We obtained the blocks from height $630,457$ (May 15th, 2020 after the Bitcoin's block reward halving)
to {$634,928$ (June 15th, 2020)} from the Bitcoin blockchain using the API provided by blockchain.com~\cite{blockchaindotcom}, comprising $9,167,040$ transactions.
Similarly, we obtained data for the Monero blockchain using a comparable API from \url{xmrchain.net}. In total, we obtained $1,482,296$ transactions from block height $2,100,000$ (May 17th, 2020) to $2,191,000$ (Sept 20th, 2020).
For each of these transactions, we extracted the size, fee, and timestamp.
The timestamp serves as a proxy for the time at which the transaction arrives at the miners' mempools. Note that transactions that appeared during this time frame but in none of the collected blocks are not included; therefore, the reconstructed memory pools are not the exact mempools miners were faced with.
\paragraph{Miners} There are three types of miners in the experiment.
The \textit{undercutting} miner undercuts.
The \textit{honest} miners follow the policies stated in the Bitcoin protocol, which is to extend the longest chain and break ties according to the block's broadcast time.
The \textit{rational} miners shift among chains to maximize their expected profits regardless of default rules. Conceptually the \textit{undercutting} miner is also a \textit{rational} miner. The largest rational miner is made to be the undercutter.
To mimic the Bitcoin network's current state, we follow the mining power distribution of miners published by blockchain.com~\cite{miningpowers} on July 30th, 2020.
In total, we have $16$ miners,
with mining powers ranging from $0.6$ to $17.6$ percent. To give the adversary an advantage, we select the strongest miner, with $17.6\%$ of the mining power, as the undercutting miner. The remaining 15 miners are split between honest and rational miners, as explained later.
Similar to Bitcoin, we took the same approach for the Monero network. Following the mining power distributions published by exodus~\cite{exodusmoneropools} and moneropool.com~\cite{moneropoolsdotcom}, we selected the strongest mining pool with 35\% mining power as the undercutting miner.
\paragraph{Hypothetical Bitcoin Scenario}
We also consider a hypothetical but potentially realizable scenario where the undercutting attacker owns 45\% of the entire mining power. This scenario is intended to show the profitability of undercutting for a strong attacker.
\makeatletter
\newcommand{\let\@latex@error\@gobble}{\let\@latex@error\@gobble}
\makeatother
\begin{figure}[t]
\colorbox[gray]{0.95}{
\begin{minipage}{0.95\columnwidth}
\SetAlgoLined
\SetNlSty{textbf}{}{:}
\begingroup
\let\@latex@error\@gobble
\begin{algorithm}[H]
\small
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetAlgoLined
\DontPrintSemicolon
\Input{\texttt{txSet}, \texttt{minerSet}, \texttt{chainsTime}}
\While{\texttt{txSet} not empty}{
extChain $\leftarrow$ nextChainToExtend(\texttt{chainsTime});
m $\leftarrow$ selectNextBlockMiner(extChain);
nextBlock $\leftarrow$ publishBlock(m);
\vspace{1mm}
updateChains(extChain, nextBlock);
\vspace{1mm}
updateMiners(extChain);
\vspace{1mm}
updateMempool(extChain);
}
\caption{Simulation Overview}\label{alg:simulation}
\end{algorithm}
\endgroup
\end{minipage}}
\colorbox[gray]{0.95}{
\begin{minipage}{0.95\columnwidth}
\SetAlgoLined
\SetNlSty{textbf}{}{:}
\begingroup
\let\@latex@error\@gobble
\begin{algorithm}[H]\small
\DontPrintSemicolon
\SetKwFunction{Fminers}{updateMiners}
\SetKwFunction{Fchains}{updateChains}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fchains{extChain, nextBlock}}{
extChain.append(nextBlock);
\ForEach{chain in \texttt{chainsTime}}
{remove from \texttt{chainsTime} if it is non-winning}
t $\leftarrow$ NextBlockCreationTime(extChain);
update \texttt{chainsTime} with tuple (extChain, t);
}
\vspace{2mm}
\SetKwProg{Pn}{Function}{:}{}
\Pn{\Fminers{extChain}}{
\ForEach{miner in minerSet}{
\uIf{miner = undercutter}{decide to fork or not and craft the new block as described in Part 1 of the $D=1$ algorithm in \ref{sd1}, the $D=2$ algorithm in \ref{sd2};}
\uIf{miner = honest}{\uIf{extChain longest chain}{switch to extChain;}}
\uIf{miner = rational} {decide to switch to extChain or stay on current chain as described in Part 2 of the $D=1$ algorithm in \ref{sd1}, the $D=2$ algorithm in \ref{sd2};}
}
}
\caption{Chain and Miner Updates \label{alg:chain_miner_update} }
\end{algorithm}
\endgroup
\end{minipage}}
\end{figure}
\subsection{Experiment Setup}
We model the blockchain system as event-based, where the events are new block creations. Parameters and states of the system are updated upon the creation of a new block, which we denote by $B_i$ for the remainder of this section.
We assume that all miners have the same view of the network and the same latency in propagating the blocks and transactions. So miners working on the same chain have the same mempool.
\paragraph{Initial setup}
We begin the simulation by first setting the system's time to the earliest timestamp ($t_0$) of the collected transactions. Next, we create the empty genesis block $B_0$ and the chain $C_0$ by appending $B_0$ to it. Then, we insert the tuple ($C_0$, $t_0$) into an empty list \texttt{chainsTime}. The tuples inside this list indicate when the next block of each chain will be generated.
Algo.~\ref{alg:simulation} provides an overview of the simulation after the initial setup.
The simulation takes the transactions (\texttt{txSet}), miners (\texttt{minerSet}) and
the tuple list (\texttt{chainsTime}) as inputs.
We consider these inputs as global for all functions in the simulation. Each iteration of the while loop indicates a new event.
\paragraph{Block creation (line 2-4 in Algo. \ref{alg:simulation})}
In the first step of each iteration, the chain to be extended, \texttt{extChain}, is selected using the \texttt{nextChainToExtend} function. It sorts all the tuples in \texttt{chainsTime} and picks the chain with the smallest next block creation time.
Next, the algorithm selects the new block's miner using the \texttt{selectNextBlockMiner} function. This function randomly selects miner \texttt{m}, from all the miners that are working on \texttt{extChain}, weighted by their mining power.
Finally, the selected miner \texttt{m} publishes the next block using the transactions in its mempool.
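For illustration, \texttt{selectNextBlockMiner} can be realized as a mining-power-weighted random draw among the miners currently on \texttt{extChain}. A sketch of how it might look (variable names are ours; the released code may differ):
\begin{verbatim}
import random

def select_next_block_miner(ext_chain, miners, rng=random):
    """Pick the owner of the next block on ext_chain, weighted
    by mining power, among miners working on that chain."""
    candidates = [m for m in miners if m["chain"] == ext_chain]
    weights = [m["power"] for m in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

miners = [{"name": "A", "power": 0.45, "chain": "main"},
          {"name": "B", "power": 0.30, "chain": "main"},
          {"name": "C", "power": 0.25, "chain": "fork"}]
print(select_next_block_miner("main", miners)["name"])
\end{verbatim}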
\paragraph{Chain updates (line 5 in Algo. \ref{alg:simulation})}
After the creation of the new block $B_i$, all the chains in the system are updated via the procedure depicted in Algo.~\ref{alg:chain_miner_update}.
In the first step, block $B_i$ is appended to the current extending chain \texttt{extChain}.
Next, all other chains are checked against \texttt{extChain} to see if they are in a non-winning situation (\texttt{extChain} leads them by at least $D$ blocks from the forking point).
If a chain is non-winning, it will be removed from the system, and \texttt{chainsTime} will be updated.
Finally, the \texttt{chainsTime} list is updated with the new time for the next block on \texttt{extChain}.
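A hedged Python sketch of this pruning step follows; \texttt{lead\_over} and \texttt{next\_time} are hypothetical helpers standing in for the corresponding bookkeeping in our simulator, and chains are assumed to be lists of blocks:
\begin{verbatim}
def update_chains(ext_chain, next_block, chains_time, D,
                  lead_over, next_time):
    # lead_over(a, b): blocks by which chain a leads chain b after the fork;
    # next_time(chain): draws the chain's next block creation time.
    ext_chain.append(next_block)
    # Remove non-winning chains (ext_chain leads by >= D blocks).
    chains_time[:] = [(c, t) for (c, t) in chains_time
                      if c is ext_chain or lead_over(ext_chain, c) < D]
    # Refresh the next block creation time for ext_chain.
    chains_time[:] = [(c, t) for (c, t) in chains_time if c is not ext_chain]
    chains_time.append((ext_chain, next_time(ext_chain)))
\end{verbatim}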
\paragraph{Miner updates (line 6 in Algo. \ref{alg:simulation})}
Following the chain updates, miners update their working chains.
Each miner, based on its type (undercutter, honest, rational), decides whether to change its working chain (shown in the \texttt{updateMiners} function in Algo.~\ref{alg:chain_miner_update}).
\begin{enumerate}[leftmargin=*]
\item If \texttt{extChain} is a competing chain of the undercutter, the undercutter checks whether \texttt{extChain} is $D$ blocks ahead of its own chain and switches to \texttt{extChain} if Yes.
\item If \texttt{extChain} is not a forked chain created by the undercutter, and miner \texttt{m} (the miner of block $B_i$) is not the undercutter itself, the undercutter begins its condition-checking routine. If the conditions are met, it forks block $B_i$; otherwise, it continues to extend \texttt{extChain}.
\item All honest miners check whether \texttt{extChain} is the longest chain in the system. If Yes, they switch to chain \texttt{extChain}.
\item Rational miners that are not on \texttt{extChain} compare the length of \texttt{extChain} with their current chain, calculate their expected returns as described in \cref{sec:sd1,sec:sd2}, and decide whether to switch to \texttt{extChain}.
\end{enumerate}
\paragraph{Mempool update (line 7 in Algo. \ref{alg:simulation})}
The last system update before moving to the creation of the next block ($B_{i+1}$) is the mempool update.
In this step, all transactions in \texttt{txSet} with a timestamp between the creation time of $B_{i-1}$ and $B_i$ are added to the mempool of the miners on \texttt{extChain}.
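A minimal sketch of this step, assuming transactions are (timestamp, fee) tuples and each miner's mempool is a set (both assumptions for illustration):
\begin{verbatim}
def update_mempool(tx_set, mempools_on_ext_chain, t_prev, t_now):
    # tx_set: iterable of (timestamp, fee) transaction tuples;
    # mempools_on_ext_chain: one set of transactions per miner on extChain.
    arrivals = [tx for tx in tx_set if t_prev < tx[0] <= t_now]
    for mempool in mempools_on_ext_chain:
        mempool.update(arrivals)
\end{verbatim}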
\paragraph{Simulation run}
In a normal run, we repeat the above steps until all transactions have been consumed (\texttt{txSet} is empty). To moderate fluctuations caused by the random selections (of block generation time and block owners), we repeat the experiments 50 times (for each parameter set) for Bitcoin, 10 times for Monero, and report the mean values of profit proportions along with the 95\% confidence intervals.
We maintain two parameters in the simulation: (i) safe depth $D$: the undercutter gives up on the forked chain as soon as the competing chain is $D$ ($=1$ or $2$) blocks ahead; (ii) honest mining power $\beta_H$: we consider six variations with $0\%, 10\%, 20\%, 30\%, 40\%, 50\%$ honest mining power.
We also simulate with undercutting avoidance enabled. The difference from a normal run is that when miners extend a chain, they implement "Part 3" described in the two summarized algorithms in \cref{sd1,sd2}.
Undercutting avoidance, implemented by other miners, makes the original attacking conditions no longer satisfiable; therefore, the undercutter will not undercut in this situation. For experimental purposes, we do not implement the exact avoidance technique, which potentially gives some advantage to the undercutter. For example, suppose there are 4 BTC in the current bandwidth set and 2 BTC in the next bandwidth set, both of size one block size limit. Let $\gamma=1$. For exact avoidance, we claim 3 BTC fitted into one block size limit. However, in the experiments, we claim 3 BTC from the current bandwidth set, which results in a remaining bandwidth set of around 2 BTC, not 3 BTC, as the remaining transactions do not fit into one block.
\begin{figure*}[ht]
\centering
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/experiments/bitcoin_rewards_45_undercutter__1__cutoff_1month_0.9avoidance_1inflation.pdf}
\caption{Returns for $D=1$. }
\label{fig:bitcoin1}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/experiments/bitcoin_rewards_45_undercutter__2__cutoff_1month_0.9avoidance_1inflation.pdf}
\caption{Returns for $D=2$. }
\label{fig:bitcoin2}
\end{subfigure}
\caption{Bitcoin Undercutting Returns: normal runs and runs with the avoidance feature enabled. The dashed line depicts the profits from undercutting when victim miners apply no defenses. The solid line is the undercutting gain when miners apply undercutting avoidance. The shaded band is the 95\% confidence interval.}
\label{fig:bitcoin}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/experiments/monero_rewards_35_undercutter__2__cutoff_avoidance.pdf}
\caption{Monero Undercutting Returns for 35\% attacker: normal runs and runs with avoidance feature enabled.}
\label{fig:monero}
\end{figure}
\subsection{Experiment Results}
\paragraph{Normal runs} Overall, in a normal run, an undercutter can expect to earn more than its fair share by conditional undercutting, as shown in Figures \ref{fig:bitcoin1}, \ref{fig:bitcoin2} and \ref{fig:monero}. In Bitcoin runs, the 17.6\% undercutter receives on average (for safe depth = 1, 2) a slight advantage of 17.85\% of the shares for 0--50\% honest mining power. The strong 45\% undercutter receives a greater profit of 49.32\% of the shares.
In Monero runs, the 35\% undercutter obtains 43\% of the profit on average (for safe depth = 1, 2) for different honest miner portions. Undercutting is more efficient in Monero because it has small mempools, providing a limited cushion effect from unconfirmed transaction sets.
Below, we discuss the results for different parameters.
\begin{enumerate}
\item \textbf{Safe depth $D$}. For $D=1$, undercutters earn more than fair shares. For $D=2$, the overall profits for both weak and strong attackers shrink, and weak attackers can suffer losses when undercutting. Strong attackers can expect a profit increase if honest mining power is weak. If no honest miner is present, strong attackers benefit from holding on longer. Moderately strong attackers perform slightly better by persisting when honest mining power is strong.
\item \textbf{Honest mining power $\beta_H$}. With more honest miners, the undercutter initiates slightly fewer attacks. If $D=1$, the profits from undercutting are stable across different honest mining power settings. This is a mixture of positive and negative rational effects for stronger attackers and positive and negative honest effects for weaker attackers. In the prolonged version of the game where $D=2$, more honest mining power worsens the strong undercutter's returns in Bitcoin. One reason is that, since the remaining mining power is only 55\%, the positive rational effect decreases significantly as honest mining power grows. In Monero, in the case of $D=2$, as honest mining power grows, the returns for the 35\% undercutter increase at first and then decrease. The reason is that rational miners can boost the undercutting winning probability but can also dilute the undercutter's mining power on the fork. The 35\% undercutter is moderately strong and is affected by both positive and negative honest effects.
\end{enumerate}
Here, \textbf{honest} and \textbf{rational effects} are the influence of honest and rational miners when they join a chain. On the one hand, they boost a chain's probability of eventually becoming the main chain, \textbf{positively} affecting the undercutter's returns. On the other hand, they dilute the undercutter's mining power on its chain, \textbf{negatively} decreasing the probability of the attacker mining the next block.
\paragraph{With undercutting avoidance}
We observe profit reduction for both weak and strong attackers after enabling the undercutting avoidance feature, as shown in~\cref{fig:bitcoin,fig:monero}. Monero runs provide more straightforward results because its mempool is often about the size of its block size limit, whereas Bitcoin's mempool size and total fees are more volatile. In the Bitcoin runs with avoidance enabled, both weak and strong undercutters attack less and earn around their fair shares. In the best case, the Bitcoin 45\% undercutter can still expect to do slightly better than fair shares in most of the honest miner proportion variations we have tested. The Monero 35\% undercutter experiences similar changes in profits.
\subsection{Undercutting and Avoidance in the wild}
In real-world implementations of undercutting, the attack conditions can be optimized based on mempool states, miner type composition, network latency, and possibly other factors.
Avoidance parameters should be adjusted accordingly based on observations or more sophisticated models.
To differentiate between normal forks and undercutting, one can examine the timestamps and the differences between the transactions embedded in competing chain heads, as well as whether this happens regularly. An undercutter typically starts attacking after the target block has been created and includes only part of the claimable transactions.
The implementation of avoidance techniques can vary and yield different results. We conveniently claim part of the transaction fees in the current bandwidth set when applying avoidance. This is not exact avoidance because, as we have discussed earlier, we ignore the fact that the transactions we left outside may not fit into the next bandwidth set, thus making the condition not invalidated. This can be accomplished more subtly by claiming a certain proportion of the fees while keeping the block sizes balanced. The performance and efficiency of such algorithms can also be of practical interest.
Network and propagation latency has a large impact on mempool states, thus affecting both the undercutting attack and its defense. If some transactions are not broadcast across the network, miners must be partitioned into subsets and the analysis performed within each subsystem. However, there can be numerous partitions depending on the transactions being examined. This can make undercutting attacks hard to implement and make it difficult to differentiate between normal forks and undercutting forks.
Although we have shown that after applying the avoidance feature, attackers earn around fair shares, strong attackers can be slightly harder to defend against. Strong attackers earn around fair shares when stricter avoidance is implemented as shown in Appendix Figure \ref{avoidance08}. Another note is that avoidance techniques claim no more than the fees in the bandwidth set. This can impact blockchain scalability negatively.
\section{Undercutting Mining Game}\label{sec:Theo}
In this section, we model the mining game in the presence of undercutting attacks. We proceed with (i) mempools with "sufficient" unconfirmed transactions and (ii) mempools with "limited" transactions. We also allow the undercutter to apply two different safe depths $D=1$ and $D=2$, where it gives up attacking after being 1 or 2 blocks behind the main chain. We do not consider higher $D$ because the probability that a chain catches up from 3 or more blocks behind is negligible.
\subsection{Game Definition}\label{gamedesp}
We define the mining game $G=\langle M, A, R\rangle$ as follows:
\begin{asparaenum}
\item Players $M=\{\beta_U, \beta_H, \beta_R\}$:
$\beta_U$ denotes mining power of the undercutter, $\beta_H$ is honest mining power, and $\beta_R$ refers to remaining rational miners.
Honest miners are treated as one player because they follow the same mining rules. Rational miners are flexible and can shift among chains, with the exception of the undercutter in the course of an attack.
\item Actions $A=$\{undercut, stay, shift\_to\_$chain_1$, shift\_to\_$chain_2$,... \}: we index chains during a game according to their timestamps after the branching point. Older chains have smaller indices, e.g., the original main chain has index $chain_1$. Honest miners always mine honestly and may choose to stay or shift depending on circumstances. Rational miners may choose to undercut the current common chain and start a new chain, or shift among the existing chains. When there are no forks in the system, all miners "stay" on the main chain.
\item Utility functions $R=\{u_i\}_{i\in M}$: the utility function of a player depends on its type $\theta_i$ and the total transaction fees it can obtain from the game. Suppose the result at the end of a game is $o$ (e.g., the main chain wins); then we can write the utility function for player $i$ as $u_i=u_i(o, \theta)+payment$. We focus on the more quantifiable part of the utility, the $payment$. Rational miners actively take actions to maximize the transaction fees they can obtain from the system in a game.
\end{asparaenum}
We allow each miner to own no more than 50\% mining power ($\beta_U<0.5$). We let miners publish their discovered blocks immediately to attract other miners to join.
We assume the mempool to be the same for miners on the same chain, considering that undercutting is not practical if miners have distinct mempools: wealthy transactions an attacker leaves unclaimed may not exist in other miners' mempools in the first place.
This assumption makes the attacker stronger and we intend to uncover what the attacker can obtain with more advantageous environment settings.
\subsection{Probability of A Chain Winning}\label{winprob}
Before we dive into the model, we describe one key driver of the system's evolution: the probability of a chain winning. In undercutting, the attacker forks an existing chain at time $t$ by leaving wealthier transactions unclaimed, compared with the block it is forking.
In the following discussions, we refer to the fork chain as "fork" and the other chain as "main". The "main" might not end up on the main chain eventually. The relative height of a chain is the number of blocks it has accumulated after the forking point.
Overall, the process proceeds as follows. The undercutter sees a new block being appended to the main chain by some other miner. It starts to work on a competing block that leaves out some of the wealthy transactions appearing in that block. With some probability, it discovers this block faster than the next block on the main chain. When the undercutter publishes its block, some rational miners will consider shifting to the fork because there are more high-fee-rate transactions that they can benefit from. To model this procedure, we snapshot the state of the system as a tuple, which we denote as $S=(m, n, F_f, F_m, O, \delta, \lambda_m, \lambda_n)$, where:
\begin{asparaitem}[-]
\item $m$ is the relative height of the main chain and $n$ is the relative height of the fork after the forking point;
\item $F_m$ is the list of transaction fee totals in blocks on the main chain and $F_f$ is the list of fee totals in blocks on the fork;
\item $O$ (Power on the Fork) is the mining power currently working on the fork, which can increase or decrease based on new block appending events;
\item $\delta\in (-1,1)$ is the mining power shifting from the main chain to the fork, which is negative if miners are shifting to the main chain;
\item $\lambda_m$ and $\lambda_n$ are block generation rates for the main chain and fork respectively.
\end{asparaitem}
To obtain the winning probability measure for a chain from the state $S$, we first present the necessary discussion of the distribution of the time needed for a chain to complete $D$ confirmations and become solidly on-chain. We view block generation as a Poisson process and use a random variable to represent the waiting time between block occurrence events. We denote the waiting time for the main chain as $X$ and for the fork as $Y$. Both follow exponential distributions but with different rates. The rate parameters depend on the mining power distribution.
Given a state $S=(m, n, F_f, F_m, O, \delta, \lambda_m, \lambda_n)$, we obtain the block occurrence rates for this round as:
\begin{align*}
\lambda_m=\frac{1-O}{I};\; \lambda_n=\frac{O}{I}
\end{align*}
where $I$ is the block generation interval.
This is derived based on the thinning theorem of the Poisson point process. The main idea is that independent sub-processes of a Poisson process are still Poisson processes with individual rates. With this property, we can determine the time interval for the next block to appear which follows an exponential distribution. More detailed proofs can be found in Appendix \ref{thinning}.
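The thinning argument is easy to check numerically. The sketch below, assuming a Bitcoin-like interval $I=600$ seconds (an illustrative choice), draws competing exponential waiting times and confirms that the fork produces the next block with probability $O$:
\begin{verbatim}
import random

def pr_fork_next(O, I=600.0, trials=100_000):
    # Competing exponential clocks: main at rate (1 - O)/I, fork at O/I.
    wins = 0
    for _ in range(trials):
        x = random.expovariate((1.0 - O) / I)  # main chain waiting time
        y = random.expovariate(O / I)          # fork waiting time
        wins += y < x
    return wins / trials

# pr_fork_next(0.3) is close to 0.3, matching
# Pr[Y < X] = lambda_n / (lambda_n + lambda_m) = O.
\end{verbatim}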
Next we derive the probability of a chain winning.
For $D=1$, there is only one state in which the rational miners $\beta_R$ need to make a decision: when the undercutter starts a fork before the main chain extends by one block, the two competing chains are in a tie with $\tilde{D}=0$. The probability that the fork wins is simply
\begin{align*}
p = \Pr[\text{Fork wins}]= \Pr[Y<X]=O+\delta
\end{align*}
For $D=2$, there are infinitely many situations where rational miners need to make decisions about shifting. The winning probabilities of the two chains play an important role in this decision-making process. We let $\tilde{D}=n-m<D$ denote the number of blocks by which the fork leads the main chain. For example, when $\tilde{D}=-1$, the fork is one block behind the main chain; the fork then wins if it creates 3 blocks before the main chain extends by 1, or discovers 4 blocks before the main chain extends by 2, and so on. Thus we have
\begin{align*}
p =\sum_{i=0}^{\infty}\Pr[(D-\tilde{D}+i) Y<(i+1) X]
\end{align*}
When $\tilde{D}=-1$, the fork is behind the main chain. For the fork to win, we need $p=\sum_{i=0}^{\infty}\Pr[(3+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{3+i}(1-\beta_U-\delta)^{i}$.
When $\tilde{D}=0$, there is a tie between the fork and the main chain. In this case, $p=\sum_{i=0}^{\infty}\Pr[(2+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{2+i}(1-\beta_U-\delta)^{i}$.
If $\tilde{D}=1$, the fork is leading. We have $p=\sum_{i=0}^{\infty}\Pr[(1+i)Y<(1+i)X]=\sum_{i=0}^{\infty}(\beta_U+\delta)^{1+i}(1-\beta_U-\delta)^{i}$.
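Since each term is geometric in $q = \beta_U + \delta$, the series collapses to $q^{D-\tilde{D}}/\bigl(1 - q(1-q)\bigr)$. A minimal sketch, under the simplifying assumption (as in the expressions above) that $\delta$ stays fixed across rounds:
\begin{verbatim}
def fork_win_probability(beta_U, delta, D, D_tilde):
    # p = sum_i q**(D - D_tilde + i) * (1 - q)**i with q = beta_U + delta,
    # which sums to q**(D - D_tilde) / (1 - q * (1 - q)).
    q = beta_U + delta
    return q ** (D - D_tilde) / (1.0 - q * (1.0 - q))

# Example: a tie (D_tilde = 0) under D = 2 with beta_U = 0.45, delta = 0
# gives fork_win_probability(0.45, 0.0, 2, 0) ~ 0.27.
\end{verbatim}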
\subsection{Giving Up After 1 Block Behind}
Now we discuss $D=1$. We use an abbreviated state vector $S^*=(m,n)$ in the discussion. With $D=1$, rational miners only need to decide whether to shift at state $S^*=(1,1)$.
We continue to denote the mining power shifting from one chain to another as $\delta$, the honest mining power as $\beta_H$, responding rational mining power as $\beta_R$, undercutter mining power as $\beta_U$, the transaction fees inside blocks on the main chain as $F_{m1}$ and $F_{m2}$, the transaction fees inside blocks on the fork chain as $F_{f1}$ and $F_{f2}$, the expected returns for responding rational miners $\beta_R$ as $R_r$ and the expected returns for the undercutter as $R_u$. If there is no undercutting, we denote their respective expected return as $R'_r$ and $R'_u$. With probability $\beta_U$, the undercutter creates a new chain and the game is started.
\begin{figure}
\centering
\includegraphics[scale=0.5]{ndss/figures/safe-depth-1.pdf}
\caption{Safe depth $D=1$ state transition. Boxes with an "X" sign indicate terminal states. For non-terminal states, circles indicate ties. Every left branch indicates that the main chain extends by one block, and every right branch indicates that the fork creates a new block. The quantity on each edge is the transition probability. Here $\delta$ is the amount of rational mining power shifting to the fork.}
\label{fig:safedepth1}
\end{figure}
After the game has started, responding rational miners need to decide whether to stay on their current chain or shift to the undercutter's chain. Suppose they shift a fraction $x$ of their mining power $\beta_R$ to the fork. They can decide $x$ by solving
\begin{align*}
\max_{x\in [0,1]} E[R_r] = \max_{x\in [0,1]}\bigg( &\frac{\beta_R}{\beta_R+\beta_H}\cdot (1-p)\cdot F_{m1}+\\ &\frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R}\cdot (1-p)\cdot F_{m2}+\\
&\frac{x\beta_R}{x\beta_R+\beta_U}\cdot p\cdot F_{f2}\bigg)
\end{align*}
where $p$ is the probability of the fork winning. Then we can calculate the shift as
\begin{align*}
\delta = x\beta_R
\end{align*}
We can observe that the optimization problem involves the fees inside succeeding blocks after the forking point. We represent the fees relative to one another so that the analysis applies more generally: we let $F_{m1}=1$, and all other blocks have fee totals relative to it.
Now we continue the discussion in two different mempool scenarios.
\subsubsection{Mempools with limited bandwidth set} Here by "limited" we mean that the current bandwidth set on the main chain has a small enough transaction fee total ($< \frac{\beta_U}{1-\beta_U}$). We provide more details concerning this quantity as we proceed. Without loss of generality, we assume $F_{m1}=1$, $F_{m2}=\gamma$ (a number $< \frac{\beta_U}{1-\beta_U}$ to be exact), $F_{f1}=\gamma$ and $F_{f2}=1$ (where $\gamma\in [0,1]$). $F_{f2}$ is not better than 1 because $F_{m1}$ is already one of the current "best" bandwidth sets; the undercutter cannot expect to improve this set without new incoming transactions. It does not make it worse than 1 either because otherwise other rational miners could undercut and provide a better $F_{f2}=1$. Note that although we use the same notation $\gamma$ for $F_{m2}$ and $F_{f1}$, the undercutter only has control over the latter.
With probability $p=\beta_U+\delta$, the fork actually wins and with probability $\beta_H+\beta_R-\delta$, the main chain wins. Then the expected profit of the undercutter is
\begin{align*}
E[R_u] = (\gamma +\frac{\beta_U}{\beta_U+\delta})\cdot \beta_U(\beta_U+\delta)
\end{align*}
The return for the undercutter if it has not started the attack is
\begin{align*}
E[R'_u]=\beta_U\gamma
\end{align*}
The attacker will start the attack only if $E[R'_u]<E[R_u]$. This indicates that $\gamma < \frac{\beta_U}{1-\beta_U-\delta}$. With $\gamma < \frac{\beta_U}{1-\beta_U}$, $E[R'_u]<E[R_u]$ even if $\delta=0$. That is, even if no rational miner shifts to the fork, there are so few fees left in the mempool that the attacker is always better off undercutting the main chain compared with extending it. We discuss the scenario where there are more than "limited" fees in the mempool later.
Note that one special case is when there are no transactions left, or the bandwidth set has negligible fees and $F_{m2}=0$.
The undercutter will try to start the attack because there is nothing left to claim by extending the main chain and $E[R'_u]=0$. Since the undercutter does not need to make positive payments to the system, we have $E[R'_u]\leq E[R_u]$.
One small detail is that the attacker needs to craft the block it generates to avoid being undercut again. A conservative approach simply cuts the current chain head into two subsets with equal fees because, assuming a potential 50\% undercutter, the "limited" bandwidth set threshold is 1.
With probability $\beta_U$, the game is started. We solve for the shifting mining power proportion $x$ to maximize the expected returns for the rational miners
and update the shifting with $\delta=x\beta_R$. However, the attacker initiates undercutting regardless of the expected shifting when there are only limited transaction fees unclaimed in the mempool bandwidth set. Therefore, we do not try to solve for $x$ for now.
Since in this scenario there are only "limited" fees to start with, undercutting will follow undercutting until there is a "sufficient" bandwidth set in the mempool again. To avoid being undercut again, a rational miner or an undercutter can flatten the fees inside the blocks they create by making the fee total close to what is inside the subsequent bandwidth sets. Within the scope of the "limited" mempool condition, this "closeness" depends on $\frac{\beta_U}{1-\beta_U}$. By keeping the fees in the remaining bandwidth set at least $\frac{\beta_U}{1-\beta_U}$ times the fees in the block it creates, the miner can expect the block not to be undercut effortlessly. Note that here the miner has to estimate the mining power of a potential undercutter. We give a safer bound later.
In conclusion, for $D=1$, when the attacker is stronger ($\beta_U$ is larger), the requirement on the mempool bandwidth set fee total ($< \frac{\beta_U}{1-\beta_U}$) for undercutting to be profitable regardless of rational miners' actions is looser. For $\beta_U$ approaching 0.5, the limit is close to 1, which occurs with high frequency. For $\beta_U=0.2$, the upper bound is 0.25, where the current bandwidth set holds 1/4 of the fees inside the main chain head. When this condition is met for a specific attacker, it is always preferable for the undercutter to attack.
\subsubsection{Mempools with sufficient bandwidth set} By "sufficient" we mean, in contrast to the previous case, that the current bandwidth set in the mempool has a transaction fee total of at least the "limited" threshold ($\geq \frac{\beta_U}{1-\beta_U}$).
We continue to assume $F_{m1}=1$, $F_{m2}=\gamma$, $F_{f1}=\gamma$, $F_{f2}=1$ ($\gamma\in [0,1]$). We know that with a sufficient current bandwidth set, the undercutter needs to attract some rational miners at state (1,1). To decide whether to shift, the rational miners solve for $x$ in
\begin{align*}\label{shift1}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \frac{\beta_R}{\beta_R+\beta_H} (1-p)+\\ &\frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R} (1-p) \gamma+\frac{x\beta_R}{x\beta_R+\beta_U} p\bigg) \\
=& \max_{x\in [0,1]}\bigg(\beta_R (1+\gamma+(\frac{\beta_H}{\beta_H+\beta_R}-\gamma)x)\bigg)\numberthis
\end{align*}
Here we let $p=O+\delta=\beta_U+x\beta_R$. One observation is that the objective is linear in $x$, so the rational miners either move to the fork with all their mining power or not at all. For the rational miners to join the fork, we need
\begin{align*}
\gamma<\frac{\beta_H}{\beta_R+\beta_H}=\frac{\beta_H}{1-\beta_U}
\end{align*}
Considering that $\gamma\geq\frac{\beta_U}{1-\beta_U}$, there is no solution $x$ that makes shifting more profitable if $\beta_H<\beta_U$; no rational miner joins the fork, and it is not profitable for the undercutter to start the attack.
When $\beta_H\geq \beta_U$ and $\frac{\beta_U}{1-\beta_U}<\gamma<\frac{\beta_H}{1-\beta_U}$, the rational miners join the fork after it has been created.
In this case, to have $E[R'_u]<E[R_u]$ after rational miners join the fork, we need
\begin{align*}
E[R_u] = (\gamma +\frac{\beta_U}{1-\beta_H})\cdot \beta_U(1-\beta_H)>\beta_U\gamma\\
\gamma < \frac{\beta_U}{\beta_H}<1
\end{align*}
Then for $\beta_H\geq \beta_U$, we need
\begin{equation}
\frac{\beta_U}{1-\beta_U}<\gamma<\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}
\end{equation}
Combining the two mempool conditions, we need the following to make undercutting profitable:
\begin{equation}\label{bandwidthcond1}
\gamma<\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}
\end{equation}
For $D=1$, we can conclude that with a sufficient bandwidth set ($\gamma \geq \frac{\beta_U}{1-\beta_U}$) in the mempool, when the honest mining power is less than the undercutter's mining power, it is not profitable to undercut the main chain. When honest mining power exceeds that limit and the bandwidth set fee total satisfies Equation \ref{bandwidthcond1}, rational miners join the fork, and the rational miners, including the undercutter, have higher expected returns than from extending the main chain.
With only a limited bandwidth set ($\gamma < \frac{\beta_U}{1-\beta_U}$), the undercutter is always better off starting the attack. For stronger attackers, the upper bound on $\gamma$, which characterizes how wealthy the block being undercut is compared with subsequent blocks, is higher and easier to satisfy. To avoid being undercut, or being undercut again after undercutting, miners or the attacker can keep the fee total of the remaining bandwidth set at least $\frac{\beta_U}{1-\beta_U}$ times the fee total of the blocks they create. After making the remaining bandwidth set "sufficient", we need to apply the new upper bound $\min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$ to the ratio, as depicted in Equation \ref{bandwidthcond1}. Note that the miner has to estimate the mining power of honest miners and the potential undercutter. A conservative view is to treat $\beta_U$ as 50\%.
Below we present algorithms for the undercutter to decide whether to attack, for other rational miners to decide whether to join a chain, and for rational miners to avoid being undercut.
\begin{tcolorbox}[breakable, enhanced]
Part 1. \textbf{undercutter} decides whether to undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible\footnote{The "negligible" bound can be customized}. If Yes, start undercutting with the first block on the fork being half of the main chain head; else, continue.
\item Check if $\gamma < \frac{\beta_U}{1-\beta_U}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set, leaving everything in main chain head unclaimed; else, continue.
\item Check if $\gamma < \min\{\frac{\beta_H}{1-\beta_U}, \frac{\beta_U}{\beta_H}\}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, stay on the main chain, exit.
\end{enumerate}
Part 2. \textbf{rational miners} decide whether to join a fork:
Check if $\gamma<\frac{\beta_H}{1-\beta_U}$. If Yes, join the fork; otherwise stay on the main chain.
Part 3. \textbf{rational miners} avoid being undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, divide the current bandwidth set into 2 sets with equal transaction fees and construct a new block with one set; else, continue.
\item Check if $\gamma<\min\{2\beta_H, \frac{1}{2\beta_H}\}$\footnote{Conservatively $\beta_U=0.5$}. If Yes, select part of the bandwidth set to make the inequality no longer hold; else, construct a new block with the bandwidth set.
\end{enumerate}
\end{tcolorbox}
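Parts 1 and 2 above translate directly into code. The sketch below is illustrative only: the \texttt{eps} bound for a "negligible" $\gamma$ is a free parameter, the returned strings merely label actions, and the mining powers are assumed known to the deciding miner.
\begin{verbatim}
def undercutter_decision_d1(gamma, beta_U, beta_H, eps=1e-3):
    # Part 1: the undercutter's decision for D = 1.
    if gamma < eps:
        return "undercut, fork block = half of the main chain head"
    if gamma < beta_U / (1.0 - beta_U):
        return "undercut, fork block = current bandwidth set"
    if beta_H > 0 and gamma < min(beta_H / (1.0 - beta_U),
                                  beta_U / beta_H):
        return "undercut, fork block = current bandwidth set"
    return "stay on the main chain"

def rational_joins_fork_d1(gamma, beta_U, beta_H):
    # Part 2: rational miners join iff gamma < beta_H / (1 - beta_U).
    return gamma < beta_H / (1.0 - beta_U)
\end{verbatim}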
For undercutting avoidance, intuitively a miner can make its current block equivalent to the next possible block (the bandwidth set) in terms of fee totals. The spirit of undercutting is to take advantage of wealthy blocks.
Since we have identified the profitability conditions for the undercutting attack, one can invalidate them to avoid one's block being undercut while leaving out less than 1/2 of the fees.
\subsection{Giving Up After 2 Blocks Behind}
Now we discuss $D=2$. Rational miners make decisions at states $S^*=\{(1,1),(1,2),(2,1),(2,2),...\}$. The probability $p$ now comprises an infinite series.
Similar to the previous case, we first let rational miners behave like honest miners to arrive at an easily checkable condition on the bandwidth set for the undercutter.
\begin{figure}
\centering
\includegraphics[scale=0.45]{ndss/figures/safe-depth-2.pdf}
\caption{Safe depth $D=2$ state transition. Notations are the same as in Figure \ref{fig:safedepth1}. Now there are infinitely many state transitions. $\delta'$ and $\delta''$ are the amounts of rational mining power shifting from one chain to another. We do not label the probabilities of the state transitions at the end of the graph to emphasize two facts: (i) the game can go on infinitely, and (ii) the corresponding probabilities differ when travelling along different paths.}
\label{fig:safedepth2}
\end{figure}
Without loss of generality, we assume $F_{m1}=1$, $F_{m2}=F_{m3}=\gamma$, $F_{f1}=F_{f2}=\gamma$ and $F_{f3}=1$ (where $\gamma\in [0,1]$). We bound $\gamma$ later. $F_{m2},F_{m3}$ can take different values in reality, but here we use the same value to highlight the wealthiness of $F_{m1}$. When there is only one bandwidth set left and $F_{m3}=0$, the undercutter needs to split the set to produce 2 blocks; otherwise other rational miners can undercut it. When there is no bandwidth set left and $F_{m2}=F_{m3}=0$, the undercutter needs to split the current chain head into 3 blocks using the rules we described earlier to avoid being undercut again. We give more details on the two special cases at the end. Besides, the reason we set $F_{f3}=1$ is to attract rational miners and, perhaps more importantly, to avoid being undercut again.
We know that if there is no attack, the undercutter expects to receive
\begin{align*}
E[R'_u]=2\beta_U\gamma
\end{align*}
If it starts the attack, its expected return is
\begin{align*}
E[R_u]
&=(2\gamma+\beta_U)\sum_{i=0}^{\infty} \beta_U^{i+2}(1-\beta_U)^i\\
&=\beta_U^2(2\gamma+\beta_U) \cdot \lim\limits_{n\rightarrow \infty} \frac{1-[\beta_U(1-\beta_U)]^n}{1-\beta_U(1-\beta_U)}\\
&=\frac{\beta_U^2(2\gamma+\beta_U)}{1-\beta_U(1-\beta_U)}
\end{align*}
When $\gamma <\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$ (with a limited bandwidth set), the undercutter can start the attack without rational miners joining the fork in a tie. This bound is more demanding than the one for $D=1$. For $\beta_U=0.5$, the upper bound is now $0.5$ instead of 1. For $\beta_U=0.2$, the bound is 0.03 instead of 0.25. Overall, for weak attackers, the condition is far more demanding than before. Intuitively, the probability that a weak attacker leads by one block is much higher than the probability that it leads by two.
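To see how much more demanding the $D=2$ bound is, the following sketch tabulates both thresholds (note that $2-4\beta_U+2\beta_U^2 = 2(1-\beta_U)^2$):
\begin{verbatim}
def limited_threshold(beta_U, D):
    # gamma upper bound below which undercutting pays unconditionally:
    # beta_U / (1 - beta_U) for D = 1,
    # beta_U**2 / (2 * (1 - beta_U)**2) for D = 2.
    if D == 1:
        return beta_U / (1.0 - beta_U)
    return beta_U ** 2 / (2.0 * (1.0 - beta_U) ** 2)

for b in (0.2, 0.3, 0.4, 0.5):
    print(b, round(limited_threshold(b, 1), 3),
             round(limited_threshold(b, 2), 3))
# 0.2: 0.25 vs 0.031; 0.3: 0.429 vs 0.092;
# 0.4: 0.667 vs 0.222; 0.5: 1.0 vs 0.5
\end{verbatim}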
Next we consider $\gamma \geq \frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$ (a sufficient bandwidth set), where the undercutter needs rational miners to join the fork in a tie. As before, rational miners allocate their mining power between the two chains to maximize their expected returns. We solve for $x$ in
\begin{align*}\label{shift2}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \frac{\beta_R}{\beta_R+\beta_H} p_m+ \frac{(1-x)\beta_R}{\beta_H+(1-x)\beta_R} 2\gamma p_m \\&+ \frac{x\beta_R}{x\beta_R+\beta_U} \gamma p_f +\frac{x\beta_R}{x\beta_R+\beta_U+\beta_H} p_f \bigg)\numberthis
\end{align*}
where $p_m=\beta_U(1-\beta_U-x\beta_R)^2$ is the probability of the main chain leading by 2 blocks first and $p_f=\beta_U(\beta_U+x\beta_R)(\beta_U+x\beta_R+\beta_H)$ is the probability of the fork leading by 2 blocks first. Here we only consider the leftmost and rightmost paths in Figure \ref{fig:safedepth2} because they are the two most significant paths. We can observe that the coefficient of the second-degree term in the quadratic objective is positive, so the expected returns for rational miners reach their maximum at one of the two endpoints.
Again we require $E[R_{r|x=0}]<E[R_{r|x=1}]$, which gives
\begin{align*}
\gamma < \frac{\frac{\beta_H^2}{\beta_H+\beta_R}+1-2\beta_H-\beta_R}{2\beta_H+2\beta_R-1}=\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}
\end{align*}
Here we implicitly assume $\beta_U<0.5$ to arrive at a neat inequality. When $\beta_U=0.5$, $E[R_{r|x=0}]<E[R_{r|x=1}]$ holds regardless of $\gamma$.
We can observe that $\frac{\beta_U-\beta_H}{1-2\beta_U}$ is the leading term. With rational miners joining, the expected return for the undercutter on the rightmost branch is now
\begin{align*}
E[R_{u|r}] = (\gamma + \frac{\beta_U}{\beta_U+\beta_R}\gamma + \beta_U)\beta_U(\beta_U+\beta_R)
\end{align*}
We let $E[R_{u|r}] > E[R'_u]$ to arrive at a tighter bound and we have
\begin{align*}
\gamma < \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}
\end{align*}
Overall we need
\begin{align*}
\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}\leq \gamma < \min \{\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}, \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}\}
\end{align*}
The attack is not profitable if this inequality does not hold. Combining the two conditions together, we have
\begin{align*}\label{bandwidthcond2}
\gamma < \min \{\frac{\frac{\beta_H^2}{1-\beta_U}+\beta_U-\beta_H}{1-2\beta_U}, \frac{\beta_U(1-\beta_H)}{1+\beta_H-\beta_U}\}\numberthis
\end{align*}
\paragraph{Special cases} If there is only one bandwidth set remaining or, more generally, the remaining fees are negligible after constructing $F_{m2}$, the undercutter needs to craft $F_{f1}$ and $F_{f2}$ in such a way that undercutting $F_{f1}$ or $F_{f2}$ is not profitable in expectation. We have given the profitability conditions in Equations \ref{bandwidthcond1} and \ref{bandwidthcond2}. The high-level idea is to make the attack profitable with respect to $F_{m1}$ while making undercutting $F_{f1}$ and $F_{f2}$ unprofitable. Note that for the latter part, the total mining power is no longer 1 but $1-\beta_U$ if the undercutter is determined and does not shift.
If there are only negligible fees remaining after $F_{m1}$, the undercutter needs to divide the main chain head into three blocks. The necessary conditions for crafting these blocks are the same as before.
In conclusion, for $D=2$, the limited bandwidth set bound is now $\gamma < \frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$. This criterion can be hard to meet for weak miners with less than 30\% mining power ($\gamma <0.09$ for $\beta_U=0.3$), but for strong attackers with 40\%--50\% mining power the bounds (0.22--0.5) are easier to satisfy. In sufficient bandwidth set scenarios, attackers also face tighter bounds on $\gamma$, especially weak attackers.
We present algorithms for $D=2$ below.
\begin{tcolorbox}[breakable, enhanced]
Part 1. \textbf{undercutter} decides whether to undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, start undercutting with the first block on the fork being 1/3 of the main chain head; else, continue.
\item Check if $\gamma<\frac{\beta_U^2}{2-4\beta_U+2\beta_U^2}$. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, continue.
\item Check if $\gamma$ satisfies Equation \ref{bandwidthcond2}. If Yes, start undercutting with the first block on the fork being the current bandwidth set; else, stay on the main chain, exit.
\end{enumerate}
Part 2. \textbf{rational miners} decide whether to join a chain:
Solve for $x$ (the proportion of mining power to shift to the chain) in Equation \ref{shift3}.
Part 3. \textbf{rational miners} avoid being undercut:
\begin{enumerate}
\item Check if $\gamma$ is negligible. If Yes, wait; else, continue.
\item Check if there is only one non-negligible bandwidth set left in the mempool. If Yes, divide the current bandwidth set into 3 sets with equal transaction fees and construct a new block with one set; else, continue.
\item Check if $\gamma$ satisfies Equation \ref{bandwidthcond2}. If Yes, select part of the bandwidth set to make the inequality no longer hold; else, construct a new block with the bandwidth set.
\end{enumerate}
\end{tcolorbox}
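Part 1 above can be sketched analogously to the $D=1$ case. The special case $\beta_U=0.5$, where the rational-shift inequality holds regardless of $\gamma$, is handled by treating the first bound as infinite; as before, the sketch is illustrative rather than the exact experiment code.
\begin{verbatim}
def undercutter_decision_d2(gamma, beta_U, beta_H, eps=1e-3):
    if gamma < eps:
        return "undercut, fork block = 1/3 of the main chain head"
    if gamma < beta_U ** 2 / (2.0 * (1.0 - beta_U) ** 2):
        return "undercut, fork block = current bandwidth set"
    if abs(1.0 - 2.0 * beta_U) < 1e-9:
        bound1 = float("inf")  # shift condition holds for any gamma
    else:
        bound1 = (beta_H ** 2 / (1.0 - beta_U) + beta_U - beta_H) \
                 / (1.0 - 2.0 * beta_U)
    bound2 = beta_U * (1.0 - beta_H) / (1.0 + beta_H - beta_U)
    if gamma < min(bound1, bound2):
        return "undercut, fork block = current bandwidth set"
    return "stay on the main chain"
\end{verbatim}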
Suppose the fork extends by one block. Rational miners then decide the amount of mining power allocated on the main chain to shift to the fork by solving for $x$ in
\begin{align*}\label{shift3}
\max_{x\in [0,1]} E[R_r] =& \max_{x\in [0,1]}\bigg( \text{(fees already own on the main)}p_m\\
&+\text{(fees already own on the fork)}p_f\\
&+\text{(claimable fees on main)}p_m\\ &+\text{(claimable fees on fork)}p_f\bigg)\numberthis
\end{align*}
where $p_f=(O+x\beta_{R|m}^*)^{D-\tilde{D}}$ and $p_m=(1-O-x\beta_{R|m}^*)^{D+\tilde{D}}$ ($\beta_{R|m}^*$ is the rational mining power on the main chain).
Claimable fees are the total fees that a miner can expect to obtain in the unconfirmed transaction sets of each chain within size limit $(D\mp\tilde{D})\cdot B$ ($B$ is the block size limit). When the main chain extends by one, the processing is similar to the previous scenario (conceptually equivalent). The only difference is that, in the calculation, $p_f=(O-x\beta_{R|f}^*)^{D-\tilde{D}}$ and $p_m=(1-O-x\beta_{R|f}^*)^{D+\tilde{D}}$ ($\beta_{R|f}^*$ is the rational mining power on the fork).
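Because the objective in Equation \ref{shift3} is a function of a single variable on $[0,1]$, a simple grid search suffices in practice. A minimal sketch, assuming the caller supplies the state-dependent expected-return function:
\begin{verbatim}
def best_shift(expected_return, grid=101):
    # expected_return: callable mapping x in [0, 1] to E[R_r]
    # for the current state (Equation shift3).
    xs = [i / (grid - 1) for i in range(grid)]
    return max(xs, key=expected_return)
\end{verbatim}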
\subsection{Rearranging Past Blocks in Extreme Case}
In this section, we consider the extreme case where there are only negligible fees unclaimed in the unconfirmed transaction set for a sufficiently long time ($>$ multiple block generation intervals $I$). As we have noted, when there is only a limited amount of fees left in the mempool, a rational miner can avoid undercutting by claiming only part of the bandwidth set. But if the situation continues to worsen and no new transactions enter the system, the remaining transaction fees become negligible at a certain point. In this extreme case, rational miners may look back at previous blocks, start undercutting at a certain block height, and rearrange the blocks after it. Suppose an undercutter goes back $C$ blocks. As long as only negligible transaction fees flow into the system during $C\cdot \frac{I}{\beta_U}$, it is more desirable for an undercutter who earned less than its fair share in the past $C$ blocks to attack.
\section{
Introduction
\label{sec:introduction}
}
Since its discovery in the early 19\textsuperscript{th} century,
the elemental metal \ch{Nb} has been the subject of intense study,
culminating in a detailed understanding of its chemical properties
(see e.g.,~\cite{1999-Nowak-CR-99-3603,2009-Schlewitz-KOECT-17-1,2017-Arblaster-JPED-38-707}).
Of equal (if not greater) interest are the element's electronic properties,
especially those relating to its superconductivity.
For example,
while \ch{Nb} is one of the many superconducting elements~\cite{2004-Buzea-SST-18-R1},
its transition temperature $T_{c} \approx \SI{9.25}{\kelvin}$
is the highest among them at ambient pressure.
Accompanying this accolade is a lower critical field $B_{c1} \approx \SI{170}{\milli\tesla}$
that exceeds all other (type-II) superconductors,
making the element particularly suited for devices that must remain flux-free
under modest magnetic fields.
These intrigues inspired comprehensive measurements~\cite{1964-Leupold-PR-134-A1322,1969-Webb-PR-181-1127,1978-Karim-JLTP-30-389,2000-Chainani-PRL-85-1966}
and calculations~\cite{1970-Mattheiss-PRB-1-373,1979-Crabtree-PRL-42-390,1981-Pinski-PRB-23-5080,1984-Blaschke-JPFMP-14-175,1987-Crabtree-PRB-35-1728,1991-Weber-PRB-44-7585}
relating to its electronic structure,
as well as how it is modified under pressure~\cite{1983-Neve-PRB-28-629}.
This in turn has led to a thorough understanding of its
superconductivity~\cite{1965-Maxfield-PR-139-A1515,1966-Finnemore-PR-149-231,1968-French-C-8-301},
including details
such as its ``strong''~\cite{1967-Nam-PR-156-470} electron-phonon coupling~\cite{1996-Savrasov-PRB-54-16487,1998-Bauer-PRB-57-11276,2020-Giri-MTP-12-100175},
non-local electrodynamics~\cite{1953-Pippard-PRSLA-216-547,1957-Bardeen-PR-108-1175} in ``clean'' samples~\cite{2005-Suter-PRB-72-024506},
and
vortex lattice structure~\cite{2013-Maisuradze-PRB-88-140509,2014-Yaouanc-PRB-89-184503,2015-Reimann-NC-6-8813}.
In fact,
this high degree of understanding has prompted \ch{Nb}'s frequent use in complex heterostructures where
a superconducting layer is required
(see e.g.,~\cite{2014-Flokstra-PRB-89-054510,2015-DiBernardo-PRX-5-041021,2016-Flokstra-NP-12-57,2018-Flokstra-PRL-120-247001,2019-Flokstra-APL-115-072602,2019-Stewart-PRB-100-020505,2020-Krieger-PRL-125-026802,2021-Rogers-CP-4-69,2021-Flokstra-RPB-104-L060506,2021-Alpern-PRM-5-114801}).
These physical traits,
in conjunction with the metal's mechanical properties~\cite{2007-Myneni-AIPCP-927-41,2015-Ciovati-MSEA-642-117},
have also made \ch{Nb} particularly suited for use in
\gls{srf} cavities~\cite{2008-Padamsee-RFSA-2,2009-Padamsee-RFSSTA,2017-Padamsee-SST-30-053003},
which use a large electric field, $E_{\mathrm{acc}}$, to accelerate charged
particles in accelerator beamlines.
Of particular importance for this application
is the maximum achievable value for $E_{\mathrm{acc}}$,
which is fundamentally limited by \ch{Nb}'s ability to expel magnetic flux
(i.e., to remain in the Meissner state).
This application, in particular,
has been a driving factor in the continued refinement
of our understanding of the element.
Central to \ch{Nb}'s performance in \gls{srf} cavities is the preparation of its surface,
with many empirical ``recipes'' developed
(e.g.,
low-temperature baking~\cite{2004-Ciovati-JAP-96-1591},
two-step baking~\cite{arXiv:1806.09824},
nitrogen doping~\cite{2013-Grassellino-SST-26-102001},
nitrogen infusion~\cite{2017-Grassellino-SST-30-094004},
etc.)
explicitly for boosting cavity performance
(i.e., maximizing its quality factor $Q$ for the largest possible range of $E_{\mathrm{acc}}$).
While it has long been known that surfaces play an important role
in determining the field of first-flux-entry
(see e.g.,~\cite{1964-Bean-PRL-12-14}),
some significant progress has been made recently.
For example,
it has been demonstrated that \ch{Nb} can expel flux up to its so-called
superheating field $B_{\mathrm{sh}}$
(see e.g.,~\cite{2011-Transtrum-PRB-83-094505})
by coating it with a thin superconducting film~\cite{2017-Junginger-SST-30-125012},
with sample geometry and surface preparations also playing an important role~\cite{2018-Junginger-PRAB-21-032002}.
For the latter,
the effect can be subtle for samples with identical geometries~\cite{2022-Turner-SR-12-5522};
however,
measurements of their Meissner screening profiles revealed significant differences
between select treatments~\cite{2014-Romanenko-APL-104-072601},
with low-temperature baking~\cite{2004-Ciovati-JAP-96-1591}
postulated at creating an ``effective''
superconductor-superconductor bilayer~\cite{2014-Kubo-APL-104-032603,2017-Kubo-SST-30-023001,2019-Kubo-JJAP-58-088001}
near the surface
(i.e., from a thin region of ``dirty'' \ch{Nb} near the surface on top of the ``clean'' bulk).
While this has stimulated renewed interest in using multilayers for \gls{srf} applications,
the results are controversial and warrant further investigation.
Currently, there are relatively few experimental techniques with the right
combination of electromagnetic and spatial sensitivity to achieve this goal
(e.g., ion-implanted \gls{bnmr}~\cite{2015-MacFarlane-SSNMR-68-1,2022-MacFarlane-ZPC-236-757}
and
\gls{le-musr}~\cite{2004-Bakule-CP-45-203,2004-Morenzoni-JPCM-16-S4583,2022-Hillier-NRMP-2-4}).
One possibility is to use \gls{le-musr}~\cite{2004-Bakule-CP-45-203,2022-Hillier-NRMP-2-4},
which uses muons implanted in the near-surface region
(depths less than \SI{\sim 150}{\nano\meter}) as local ``magnetometers''.
This technique has been used to study Meissner screening in \ch{Nb} with success,
revealing:
the importance of non-local electrodynamics~\cite{1953-Pippard-PRSLA-216-547,1957-Bardeen-PR-108-1175}
and strong-coupling corrections~\cite{1967-Nam-PR-156-470} in ``clean'' samples~\cite{2005-Suter-PRB-72-024506};
the impact of growth methods on the screening properties of \ch{Nb/Cu} films used in \gls{srf} cavities~\cite{2017-Junginger-SST-30-125013};
as well as the aforementioned ``anomalous'' modification in the character of the screening profile
by mild baking~\cite{2014-Romanenko-APL-104-072601}.
In order to clarify the origin of the latter,
here we used \gls{le-musr} to quantify the screening profile in \ch{Nb}
samples with surface treatments commonly employed in \gls{srf}
applications~\cite{2004-Ciovati-JAP-96-1591,arXiv:1806.09824,2017-Grassellino-SST-30-094004}.
Our results revealed no ``anomalous'' modifications to the screening profiles for \emph{any} of the surface treatments,
with all profiles being well-described by a London model~\cite{1935-London-PRSLA-149-71}.
The surface treatments were found to produce different magnetic penetrations depths,
which can be explained by dissimilar carrier mean-free-paths within the first \SI{\sim 150}{\nano\meter}
below the surface.
\section{
Experiment
\label{sec:experiment}
}
\Gls{le-musr} experiments were performed at the Paul Scherrer Institute's
Swiss Muon Source (located in Villigen, Switzerland).
Using the $\mu E4$ beamline~\cite{2008-Prokscha-NIMA-595-317},
``low-energy'' muons were generated by moderating the energy of a
\SI{\sim 4}{\mega\electronvolt} ``surface'' muon beam
using a film of condensed cryogenic gas~\cite{1994-Morenzoni-PRL-72-2793,2001-Prokscha-ASS-172-235}
and electrostatically re-accelerating the eluting epithermal (\SI{\sim 15}{\electronvolt}) muons
to energies on the order of \SI{\sim 15}{\kilo\electronvolt}.
The resulting beam,
with a typical intensity of \SI{\sim e4}{\per\second},
was delivered to a dedicated spectrometer~\cite{2000-Morenzoni-PB-289-653,2008-Prokscha-NIMA-595-317,2012-Salman-PP-30-55}
using electrostatic optics housed within an \gls{uhv} beamline.
The $\mu^{+}$ arrival times were triggered on a thin
(\SI{\sim 10}{\nano\meter}) carbon foil detector,
causing a slight reduction in their mean kinetic energy
(\SI{\sim 1}{\kilo\electronvolt})
and an (asymmetric) energy spread (\SI{\sim 450}{\electronvolt})
before reaching the sample.
Control over the $\mu^{+}$ implantation energy was achieved by biasing
an electrically isolated sample holder using a \gls{hv} power supply,
providing access to stopping depths between \SIrange{\sim 10}{\sim 150}{\nano\meter}
below the sample surface.
The stopping of $\mu^{+}$ in solids can be accurately computed~\cite{2002-Morenzoni-NIMB-192-245}
using Monte Carlo codes
(e.g., TRIM.SP~\cite{1984-Eckstein-NIMB-2-550,1991-Eckstein-SSMS-10,1994-Eckstein-REDS-1-239}),
which we used here to simulate $\mu^{+}$ stopping profiles in \ch{Nb}
(see \Cref{fig:implantation-profiles}).
To ensure the accuracy of these predictions,
we revised the parameterization of \ch{Nb}'s electronic stopping cross section
for proton-like projectiles~\cite{1977-Anderson-SRIM-3,1993-ICRU-49}
using a Varelas-Biersack fit~\cite{1970-Varelas-NIM-79-213}
to an up-to-date compilation~\cite{2017-Montanari-NIMB-408-50}
of experimental values~\cite{1984-Sirotinin-NIMB-4-337, 1986-Bauer-NIMB-13-201, 1988-Ogino-NIMB-33-155, 2020-Moro-PRA-102-022808}.
The revised fit suggests that earlier calculations
(see e.g.,~\cite{2005-Suter-PRB-72-024506,2014-Flokstra-PRB-89-054510,2014-Romanenko-APL-104-072601,2015-DiBernardo-PRX-5-041021,2016-Flokstra-NP-12-57,2017-Junginger-SST-30-125013,2018-Flokstra-PRL-120-247001,2019-Flokstra-APL-115-072602,2019-Stewart-PRB-100-020505,2020-Krieger-PRL-125-026802,2021-Rogers-CP-4-69,2021-Flokstra-RPB-104-L060506,2021-Alpern-PRM-5-114801})
likely underestimate the $\mu^{+}$ range in \ch{Nb}
(or \ch{Nb} layers).
Further details are given in \Cref{sec:trimsp}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{implantation-profiles.pdf}
\caption{
\label{fig:implantation-profiles}
Typical stopping profiles $\rho(z, E)$ for $\mu^{+}$ implanted in a \ch{Nb2O5}(\SI{5}{\nano\meter})/\ch{Nb} target
at different energies $E$ (indicated in the inset),
simulated using the Monte Carlo code
TRIM.SP~\cite{1984-Eckstein-NIMB-2-550,1991-Eckstein-SSMS-10,1994-Eckstein-REDS-1-239}.
The profiles, represented as histograms with \SI{1}{\nano\meter} bins,
were generated from \num{e5} projectiles.
The solid black lines denote fits to a model for $\rho(z, E)$
[\Cref{eq:stopping,eq:beta-pdf} --- see \Cref{sec:results}],
clearly capturing all features of the individual profiles.
Additional simulation details can be found in \Cref{sec:trimsp}.
}
\end{figure}
The basis of the \gls{le-musr} technique~\cite{2004-Bakule-CP-45-203,2004-Morenzoni-JPCM-16-S4583,2021-Prokscha-MSI-18-274,2022-Hillier-NRMP-2-4}
involves implanting a beam of (\SI{\sim 100}{\percent}) spin-polarized $\mu^{+}$
into a sample of interest and observing their spins, $\mathbf{S}$,
reorient in their local magnetic field, $\mathbf{B}$.
This process is monitored via the anisotropic $\beta$-emissions
from $\mu^{+}$ decay
(mean lifetime $\tau_{\mu} = \SI{2.1969811 \pm 0.0000022}{\micro\second}$~\cite{2020-Zyla-PTEP-2020-083C01}),
wherein the directions of the emitted $\beta$-rays are probabilistically correlated
with $\mathbf{S}$ at the moment of decay
(see e.g.,~\cite{2011-Yaouanc-MSR}).
When $\mathbf{B}$ is transverse to the spin direction,
$\langle S \rangle$ will precess at a rate equal to the probe's Larmor frequency:
\begin{equation}
\omega_{\mu} = \gamma_{\mu} B ,
\end{equation}
where $\gamma_{\mu} / (2 \pi) = \SI{135.538 809 4 \pm 0.000 003 0}{\mega\hertz\per\tesla}$
is the muon gyromagnetic ratio~\cite{2021-Tiesinga-RMP-93-025010}.
In the experiments performed here,
this so-called transverse-field geometry was used
(see e.g.,~\cite{2004-Bakule-CP-45-203,2011-Yaouanc-MSR}),
wherein an external field $B_\mathrm{applied} \approx \SI{25}{\milli\tesla}$
was applied perpendicular to the initial direction of $\mu^{+}$ spin-polarization
and parallel to the surface of our \ch{Nb} samples.
This configuration is highly sensitive to inhomogeneities in $B$
(as expected near the surface of a superconductor in the Meissner state),
allowing for the local field distribution, $p(B)$, to be measured,
which reflects the screening properties of the samples.
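For orientation, the applied field of \SI{\sim 25}{\milli\tesla} corresponds to a precession frequency of roughly \SI{3.4}{\mega\hertz} (a period of \SI{\sim 0.3}{\micro\second}), as the short sketch below computes (illustrative only):
\begin{verbatim}
GAMMA_MU_OVER_2PI = 135.5388094  # MHz / T

def precession_frequency_mhz(B_tesla):
    # nu = (gamma_mu / 2 pi) * B
    return GAMMA_MU_OVER_2PI * B_tesla

print(precession_frequency_mhz(0.025))  # ~3.39 MHz at 25 mT
\end{verbatim}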
In our \gls{le-musr} measurements,
the temporal evolution of the \emph{asymmetry}, $A(t)$,
in the $\beta$-rates recorded for two opposing detectors
(i.e., \SI{180}{\degree} opposite one another),
was monitored after $\mu^{+}$ implantation.
The counts in a single detector, $N_{\pm}$, are given by:
\begin{equation}
\label{eq:counts}
N_{\pm}(t) = N_{0, \pm} \exp \left ( - \frac{t}{\tau_{\mu}} \right ) \left [ 1 \pm A(t) \right ] + b_{\pm} ,
\end{equation}
where $N_{0, \pm}$ and $b_{\pm}$ denote the incoming rates of
``good'' and ``background'' decay events.
While \Cref{eq:counts} can be re-arranged for $A(t)$,
in a two-counter experiment it can be determined directly from the normalized difference
in $\beta$-rates from the two counters:
\begin{equation}
\label{eq:asymmetry}
A(t) \equiv \frac{ \left [ N_{+}(t) - b_{+} \right ] - \alpha \left [ N_{-}(t) - b_{-} \right ] }{ \left [ N_{+}(t) - b_{+} \right ] + \alpha \left [ N_{-}(t) - b_{-} \right ] } = A_{0} P_{\mu}(t) ,
\end{equation}
where $A_{0}$ is a constant whose precise value depends on both the geometry of the experiment
and the details of $\mu^{+}$ decay,
and the factor $\alpha \equiv N_{0, +} / N_{0, -}$
accounts for differences between the detector pair
(e.g., detection efficiencies, solid angle coverage, etc.~\cite{1994-Riseman-HI-87-1135,2011-Yaouanc-MSR}).
The time-dependence in \Cref{eq:asymmetry} is determined entirely
by the spin-polarization of the muon ensemble, $P_{\mu}(t)$,
which depends on $p(B)$ according to:
\begin{equation}
\label{eq:polarization}
P_{\mu}(t) = \int_{-\infty}^{+\infty} p(B) \cos ( \omega_{\mu} t + \phi ) \, \mathrm{d}B ,
\end{equation}
where $t$ is the time (in \si{\micro\second}) after implantation
and
$\phi$ is a phase factor that depends on the experimental setup
(approximately \SI{-40}{\degree} here).
Note that \Cref{eq:polarization} makes the simplifying assumption that $P_{\mu}(0) \approx 1$
(i.e., the $\mu^{+}$ are initially \SI{\sim 100}{\percent} spin-polarized).
Thus,
from the synergistic information encapsulated within $P_{\mu}(t)$
and the simulated $\mu^{+}$ stopping profiles
(see \Cref{fig:implantation-profiles}),
it is feasible to reconstruct how $B$ varies with depth, $z$,
below the sample surface
(see below).
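As a minimal numerical sketch of \Cref{eq:asymmetry,eq:polarization} (assuming binned count arrays and a discretized field distribution; this is illustrative, not the analysis code actually used):
\begin{verbatim}
import numpy as np

def asymmetry(N_plus, N_minus, b_plus, b_minus, alpha):
    # Eq. (asymmetry): normalized difference of background-corrected
    # beta rates in the two opposing detectors.
    up = N_plus - b_plus
    down = alpha * (N_minus - b_minus)
    return (up - down) / (up + down)

def polarization(t_us, B_mT, p_B, phi):
    # Eq. (polarization) evaluated on a discrete field grid B_mT
    # with weights p_B; t_us in microseconds.
    gamma = 2.0 * np.pi * 135.5388094e-3  # rad / (us mT)
    w = p_B / p_B.sum()                   # normalized field distribution
    phases = gamma * np.outer(t_us, B_mT) + phi
    return np.cos(phases) @ w
\end{verbatim}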
\subsection{
Samples
\label{sec:experiment:samples}
}
In accord with the standard practice used when fabricating \gls{srf} cavities~\cite{2008-Padamsee-RFSA-2,2009-Padamsee-RFSSTA,2017-Padamsee-SST-30-053003},
all samples were sourced from high \gls{rrr} \ch{Nb}
(i.e., $\mathrm{RRR} \gtrsim 300$).
Each sample consisted of a piece of the ``stock'' metal
machined into a flat plate
(\SI{\sim 25 x \sim 25 x \sim 1.5}{\milli\meter})
with a small circular aperture
(\SI{\sim 6.5}{\milli\meter} diameter)
in one corner.
The pieces were then hand polished to remove any sharp edges,
followed by \gls{bcp} to remove any damaged layers near the surface
(see e.g.,~\cite{2011-Ciovati-JAE-41-721}).
Subsequently, the samples were annealed at \SI{1400}{\celsius} for \SI{5}{\hour}
to remove any mechanical stresses remaining in the metal.
Afterwards, an additional round of \gls{bcp} was performed to
remove the topmost \SI{\sim 10}{\micro\meter} of material from the surface
(i.e., to remove any contaminants introduced from the oven during annealing).
In the remainder of this manuscript,
we denote this as the ``baseline'' surface treatment for \gls{srf} cavity grade \ch{Nb}.
This process has been shown to remove virtually all pinning~\cite{2018-Junginger-PRAB-21-032002}.
On top of the ``baseline'' preparation,
several samples underwent additional surface treatments.
One sample was baked at \SI{120}{\celsius} for \SI{48}{\hour} in \gls{uhv}~\cite{2004-Ciovati-JAP-96-1591},
which we call ``\SI{120}{\celsius} bake''.
Another sample underwent a two-step baking procedure,
wherein it was initially heated to \SI{75}{\celsius} for \SI{5}{\hour} and
then additionally to \SI{120}{\celsius} for \SI{48}{\hour}~\cite{arXiv:1806.09824},
which we denote as ``\SI{75}{\celsius}/\SI{120}{\celsius} bake''.
Lastly,
one sample was initially heated to \SI{800}{\celsius} for \SI{3}{\hour} under high vacuum
and subsequently baked at \SI{120}{\celsius} for \SI{48}{\hour} in a \SI{25}{\milli\torr} \ch{N_{2}} atmosphere~\cite{2017-Grassellino-SST-30-094004},
which we refer to as ``\ch{N2} infusion''.
A magnetometric characterization of \ch{Nb} samples with identical surface treatments
can be found elsewhere~\cite{2022-Turner-SR-12-5522}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{baseline-spectra.pdf}
\caption{
\label{fig:baseline-spectra}
Implantation energy dependence of transverse-field \gls{le-musr} data
in \ch{Nb} (``baseline''),
measured in both the normal (\SI{\sim 11}{\kelvin})
and Meissner (\SI{\sim 2.6}{\kelvin}) states
with an applied magnetic field of \SI{\sim 25}{\milli\tesla}.
The $\mu^{+}$ energy $E$ is indicated in the inset of each subplot.
In the normal state
(\textbf{A})
there is no significant energy dependence
to the temporal evolution of $A(t)$,
implying that all implanted muons sample the same local field
distribution $p(B)$ below the sample surface.
By contrast, in the Meissner state
(\textbf{B})
$A(t)$ depends strongly on the implantation energy.
As the implantation energy increases,
the $\mu^{+}$ spin-precession frequency decreases,
accompanied by increased damping of the signal,
consistent with screening of the magnetic field with increasing depths
below the sample surface.
The colored lines denote a fit to \emph{all} of the data
(i.e., a global fit) using
\Cref{eq:fit-function,eq:polarization-skewed-gaussian,eq:polarization-skewed-gaussian-parts,eq:polarization-gaussian},
where the phase $\phi$ was shared as a common parameter.
Clearly, the model captures all of the data's main features.
Note that the Gaussian term in \Cref{eq:fit-function} accounts for
the small (\SI{< 10}{\percent}) fraction of muons that do not stop in the sample
(e.g., due to backscattering).
}
\end{figure}
\section{
Results \& Analysis
\label{sec:results}
}
Typical time-differential \gls{le-musr} data for our surface-treated \ch{Nb} samples
are shown in \Cref{fig:baseline-spectra}.
In the normal state ($T > T_{c}$),
there is no significant energy dependence to the temporal evolution of $A(t)$,
indicating that all implanted muons sample the same local field
distribution below the sample surface.
This is evident from the identical precession frequencies
and damping envelopes,
the latter being (predominantly) a result of the host \ch{^{93}Nb} nuclei
(spin $I = 9/2$;
$\gamma/(2\pi) = \SI{10.4523 \pm 0.0005}{\mega\hertz\per\tesla}$;
\SI{100}{\percent} natural abundance)~\cite{2011-Baglin-NDS-112-1163}.
By contrast,
$A(t)$ depends strongly on implantation energy in the Meissner state.
As the implantation energy increases,
the $\mu^{+}$ spin-precession frequency decreases,
accompanied by substantial damping of the signal.
These features are expected for a broad $p(B)$
whose mean shifts to lower values at increasing depths below the surface,
consistent with the expected ``signature'' for screening of the applied magnetic field.
To quantify these details,
we now consider an analysis of the data,
which amounts to choosing an (analytic) approximation
for the field distribution in \Cref{eq:polarization}.
Often, $p(B)$ can be approximated by a Gaussian distribution
(see e.g.,~\cite{2011-Yaouanc-MSR}):
\begin{equation}
\label{eq:gaussian}
p_{\mathrm{G}}(B) = \frac{1}{\sqrt{2 \pi} } \left ( \frac{\gamma_{\mu}}{\sigma} \right ) \exp \left \{ -\frac{1}{2} \left [ \frac{B - B_{0}}{ \left ( \sigma / \gamma_{\mu} \right ) } \right ]^{2} \right \} ,
\end{equation}
where $B_{0}$ and $\sigma$ denote the distribution's location (i.e., mean) and width, respectively.
Upon substitution of \Cref{eq:gaussian} for $p(B)$ into \Cref{eq:polarization},
one gets:
\begin{equation}
\label{eq:polarization-gaussian}
P_{\mathrm{G}} = \exp \left ( -\frac{\sigma^{2} t^{2}}{2} \right ) \cos \left ( \gamma_{\mu} B_{0} t + \phi \right ) ,
\end{equation}
which says that the observed precession frequency is given by
the mean of the distribution and that the degree of damping is determined by its width.
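As a consistency check,
the analytic result in \Cref{eq:polarization-gaussian} can be compared against
a direct numerical transform of \Cref{eq:gaussian}.
A short sketch,
reusing \texttt{GAMMA\_MU} and \texttt{polarization()} from the earlier snippet
and adopting illustrative parameter values
(a mean field near the \SI{\sim 25}{\milli\tesla} applied here):
\begin{verbatim}
import numpy as np

def p_gaussian(B, B0=0.025, sigma=0.3):
    """Gaussian p(B), Eq. (eq:gaussian); B, B0 in T, sigma in 1/us."""
    w = sigma / GAMMA_MU  # width in the field domain (tesla)
    return np.exp(-0.5 * ((B - B0) / w) ** 2) / (np.sqrt(2.0 * np.pi) * w)

def P_gaussian(t, B0=0.025, sigma=0.3, phi=np.deg2rad(-40.0)):
    """Analytic transform, Eq. (eq:polarization-gaussian)."""
    return np.exp(-0.5 * (sigma * t) ** 2) * np.cos(GAMMA_MU * B0 * t + phi)

t = np.linspace(0.0, 8.0, 5)  # times in microseconds
numeric = [polarization(ti, p_gaussian, B_window=(0.023, 0.027))
           for ti in t]
assert np.allclose(numeric, P_gaussian(t), atol=1e-6)
\end{verbatim}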
While this symmetric distribution works well in many instances,
the field distribution below the surface of a material in the Meissner state
is expected to be intrinsically \emph{asymmetric}
(i.e., because the applied field decays to zero inside the material).
Therefore,
a better approximation for $p(B)$ in our samples
is given by a \emph{skewed} Gaussian~\cite{2008-Suter-M}:
\begin{widetext}
\begin{equation}
\label{eq:skewed-gaussian}
p_{\mathrm{SG}}(B) = \sqrt{\frac{2}{\pi}} \left ( \frac{ \gamma_{\mu} }{ \sigma_{-} + \sigma_{+} } \right ) \times \begin{cases}
\displaystyle \exp \left \{ - \frac{1}{2} \left [ \frac{B - B_{0} }{ \left ( \sigma_{-} / \gamma_{\mu} \right ) } \right ]^{2} \right \} , & \text{for } B < B_{0} , \\
1, & \text{for } B = B_{0}, \\
\displaystyle \exp \left \{ - \frac{1}{2} \left [ \frac{B - B_{0} }{ \left ( \sigma_{+} / \gamma_{\mu} \right ) } \right ]^{2} \right \} , & \text{for } B > B_{0} ,
\end{cases}
\end{equation}
\end{widetext}
where $B_{0}$ is the ``peak'' field
(i.e., \emph{not} the mean of the distribution)
and
$\sigma_{\pm}$ define the distribution's width
(i.e., on either side of $B_{0}$).
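In code,
\Cref{eq:skewed-gaussian} amounts to a few vectorized lines;
the sketch below
(illustrative only; it reuses the constant \texttt{GAMMA\_MU} defined earlier)
simply selects the width appropriate to each side of the peak:
\begin{verbatim}
import numpy as np

def p_skewed_gaussian(B, B0, sigma_m, sigma_p, gamma_mu=GAMMA_MU):
    """Piecewise skewed Gaussian, Eq. (eq:skewed-gaussian).

    B, B0 in tesla; sigma_m (sigma_p) in 1/us sets the width below
    (above) the peak. The prefactor normalizes the full distribution.
    """
    B = np.asarray(B, dtype=float)
    sigma = np.where(B < B0, sigma_m, sigma_p)  # pick side of the peak
    norm = np.sqrt(2.0 / np.pi) * gamma_mu / (sigma_m + sigma_p)
    return norm * np.exp(-0.5 * ((B - B0) * gamma_mu / sigma) ** 2)
\end{verbatim}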
Note that the definition in \Cref{eq:skewed-gaussian} is somewhat unusual for a
skewed Gaussian distribution;
it is more commonly defined as:
\begin{equation*}
p_{\mathrm{SG}}(B) = p_{\mathrm{G}}(B) \left ( 1 + \erf \left \{ \frac{\varsigma}{\sqrt{2}} \left [ \frac{ B - B_{0}} { \left ( \sigma / \gamma_{\mu} \right )} \right ] \right \} \right ),
\end{equation*}
where $\erf (z)$ is the error function and $\varsigma \in (-\infty, +\infty)$
is the ``skewness'' parameter~\footnote{In contrast to the ``unusual'' definition in \Cref{eq:skewed-gaussian}, the ``conventional'' expression for $p_{\mathrm{SG}}(B)$ relies on ``weighting'' $p_{\mathrm{G}}(B)$ via the term in parentheses $[1 + \erf(z)]$ through (positive or negative) values of the parameter $\varsigma$.}.
While this formulation is elegant,
the piecewise definition in \Cref{eq:skewed-gaussian} has the pragmatic advantage
of being amenable to fast computation during fitting.
Specifically,
upon substituting \Cref{eq:skewed-gaussian} for $p(B)$ into \Cref{eq:polarization},
the solution to the integral can be written as~\cite{2008-Suter-M}:
\begin{equation}
\label{eq:polarization-skewed-gaussian}
P_{\mathrm{SG}}(t) = P_{\mathrm{SG}}^{-}(t) + P_{\mathrm{SG}}^{+}(t),
\end{equation}
where
\begin{widetext}
\begin{equation}
\label{eq:polarization-skewed-gaussian-parts}
P_{\mathrm{SG}}^{\pm}(t) = \left ( \frac{ \sigma_{\pm} }{\sigma_{+} + \sigma_{-}} \right ) \exp \left ( -\frac{\sigma_{\pm}^{2}t^{2} }{2} \right ) \left [ \cos ( \gamma_{\mu} B_{0} t + \phi ) \mp \erfi \left ( \frac{\sigma_{\pm} t}{\sqrt{2}} \right ) \sin ( \gamma_{\mu} B_{0} t + \phi ) \right ] ,
\end{equation}
\end{widetext}
and $\erfi (z)$ is the complex error function~\footnote{$\erfi (z)$ is usually defined in terms of one of several closely related functions (see e.g.,~\cite{2010-Olver-NISTHMF}). For example, our implementation~\cite{2008-Suter-M,2012-Suter-PP-30-69} used the confluent hypergeometric function of the first kind, $_{1}F_{1}(a, b, z)$, which was made available through the GNU Scientific Library~\cite{gsl} and a ``wrapper'' within the ROOT framework~\cite{1997-Brun-NIMA-389-81}.}.
We find that \Cref{eq:polarization-skewed-gaussian,eq:polarization-skewed-gaussian-parts}
give the best agreement with the signal in our \ch{Nb} samples over the full time range of
the measurement,
without overparameterization~\footnote{A reasonable alternative to this could be to use a sum of $P_{\mathrm{G}}(t)$s; however, even with only two terms the sum's degrees of freedom would exceed that of \Cref{eq:polarization-skewed-gaussian,eq:polarization-skewed-gaussian-parts}.}.
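For reference,
\Cref{eq:polarization-skewed-gaussian,eq:polarization-skewed-gaussian-parts}
translate directly into code,
with $\erfi(z)$ available as \texttt{scipy.special.erfi}
(a sketch only; the actual fits used musrfit, as described below):
\begin{verbatim}
import numpy as np
from scipy.special import erfi

def P_skewed_gaussian(t, B0, sigma_m, sigma_p, phi=np.deg2rad(-40.0),
                      gamma_mu=GAMMA_MU):
    """P_SG(t) = P_SG^-(t) + P_SG^+(t) for the skewed Gaussian p(B)."""
    c = np.cos(gamma_mu * B0 * t + phi)
    s = np.sin(gamma_mu * B0 * t + phi)
    total = 0.0
    # the (-) term carries +erfi*sin, the (+) term carries -erfi*sin
    for sigma, sign in ((sigma_m, +1.0), (sigma_p, -1.0)):
        amp = (sigma / (sigma_m + sigma_p)
               * np.exp(-0.5 * (sigma * t) ** 2))
        total = total + amp * (c + sign * erfi(sigma * t / np.sqrt(2.0)) * s)
    return total
\end{verbatim}
As a sanity check,
note that for $\sigma_{-} = \sigma_{+}$ the two $\erfi$ terms cancel
and the expression reduces to \Cref{eq:polarization-gaussian},
as it must.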
Returning to our task of fitting the \gls{le-musr} data,
explicitly, we used the expression:
\begin{equation}
\label{eq:fit-function}
A(t) = A_{0} \left [ f P_{\mathrm{SG}}(t) + (1 - f) P_{\mathrm{G}}(t) \right ] ,
\end{equation}
where $A_{0}$ is an energy-dependent constant
(on the order of \num{\sim 0.2} here),
$f$ is the fraction of the signal originating from our sample
(typically \num{> 0.9}),
and the remaining terms $P_{\mathrm{SG}}(t)$ and $P_{\mathrm{G}}(t)$
were given by
\Cref{eq:polarization-skewed-gaussian,eq:polarization-skewed-gaussian-parts}
and
\Cref{eq:polarization-gaussian},
respectively.
Additionally,
all measurements for a given sample were fit simultaneously
(i.e., in a so-called ``global'' fit)
using a common $\phi$.
This constraint was necessary,
as the phase becomes ill-defined when $A(t)$ is strongly
damped and few full precession periods are resolved
(e.g., for measurements in the Meissner state at high implantation energies,
where the $\mu^{+}$ stopping depths are far below the surface)~\footnote{A detailed account of how using a shared phase $\phi$ systematically affects the results when measuring Meissner screening profiles with \gls{le-musr} can be found elsewhere~\cite{2012-Hossain-PhD}, showing that the effect is minimal in all cases.}.
All fitting was performed using musrfit~\cite{2012-Suter-PP-30-69},
which makes use of the MINUIT2 minimization routines~\cite{2005-Hatlo-IEEETNS-52-2818}
implemented within the ROOT framework~\cite{1997-Brun-NIMA-389-81}.
In all cases,
this fitting approach yielded excellent agreement with the data
(reduced $\chi^{2} \approx 1.06$)
and a subset of the results are shown in \Cref{fig:baseline-spectra}.
In order to reconstruct the field profile below the surface,
at each implantation energy we identified the
mean field sensed by the implanted $\mu^{+}$ using~\cite{2008-Suter-M}:
\begin{equation}
\label{eq:skewed-gaussian-mean}
\langle B \rangle \equiv \int_{-\infty}^{+\infty} B \, p_{\mathrm{SG}}(B) \, \mathrm{d}B = B_{0} + \sqrt{\frac{2}{\pi}} \left ( \frac{ \sigma_{+} - \sigma_{-} }{ \gamma_{\mu} }\right ) ,
\end{equation}
and the results for each surface treatment are shown in
\Cref{fig:field-profiles}~\footnote{One can also use integral reconstruction to deduce $B(z)$ (see e.g.,~\cite{2004-Morenzoni-JPCM-16-S4583,2004-Suter-PRL-92-087001,2005-Suter-PRB-72-024506}); however, the approach relies on a \gls{fft} of the ``raw'' \gls{le-musr} data, making it numerically ill-posed.}.
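In code,
\Cref{eq:skewed-gaussian-mean} is a one-liner,
which can be verified against a numerical first moment
of the $p_{\mathrm{SG}}(B)$ sketch given earlier
(the parameter values below are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def mean_field(B0, sigma_m, sigma_p, gamma_mu=GAMMA_MU):
    """First moment of p_SG(B), Eq. (eq:skewed-gaussian-mean), in T."""
    return B0 + np.sqrt(2.0 / np.pi) * (sigma_p - sigma_m) / gamma_mu

# cross-check against a numerical first moment (illustrative values)
B0, sm, sp = 0.025, 0.25, 0.35
num, _ = quad(lambda B: B * p_skewed_gaussian(B, B0, sm, sp), 0.02, 0.03)
assert abs(num - mean_field(B0, sm, sp)) < 1e-7
\end{verbatim}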
As expected,
the $\langle B \rangle$s measured in the normal state are independent of implantation energy,
whereas $\langle B \rangle$ decreases monotonically with increasing $E$ in the Meissner state.
It is evident that the screening properties of each surface treatment are different;
the applied field is attenuated most strongly for the
``baseline'' and ``\SI{75}{\celsius}/\SI{120}{\celsius} bake''~\cite{arXiv:1806.09824} samples,
whereas the screening is weaker for the ``\SI{120}{\celsius} bake''~\cite{2004-Ciovati-JAP-96-1591}
and even more so for the ``\ch{N2} infusion''~\cite{2017-Grassellino-SST-30-094004} treatments.
Interestingly,
measurements in some of the samples at the lowest $E$s
show that $\langle B \rangle$ plateaus at a value close to the nominal applied field,
suggesting the presence of a thin layer near the surface where the external field isn't screened
(i.e., a so-called ``dead layer'' at the superconductor's surface).
Such a region is fairly generic and observed in a wide range of superconductors
(see e.g.,~\cite{2000-Jackson-PRL-84-4958,2004-Suter-PRL-92-087001,2005-Suter-PRB-72-024506,2014-Romanenko-APL-104-072601,2017-Junginger-SST-30-125013,2010-Kiefl-PRB-81-180502,2012-Ofer-PRB-85-060506,2013-Kozhevnikov-PRB-87-104508,2015-Stilp-PRB-89-020510,2018-Howald-PRB-97-094514}),
though there is considerable variability between materials or even samples
(e.g., as a result of surface roughness~\cite{2012-Lindstrom-PP-30-249,2014-Lindstrom-JEM-85-149,2016-Lindstrom-JSNM-29-1499}).
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{mean-field-vs-energy.pdf}
\caption{
\label{fig:field-profiles}
Plot of the mean magnetic field $\langle B \rangle$
(normalized by the ``effective'' applied field $\tilde{B}_{0}$)
sensed by the implanted $\mu^{+}$ at different energies $E$
in \ch{Nb} samples that received different
surface treatments
(``baseline'',
``\SI{120}{\celsius} bake''~\cite{2004-Ciovati-JAP-96-1591},
``two-step bake''~\cite{arXiv:1806.09824},
and
``\ch{N2} infusion''~\cite{2017-Grassellino-SST-30-094004}
---
see \Cref{sec:experiment:samples}).
For increasing $E$,
the mean $\mu^{+}$ stopping depth $\langle z \rangle$ increases,
covering a length scale comparable to the magnetic penetration
depth $\lambda$.
In the normal state ($T > T_{c}$),
there is no depth dependence to $\langle B \rangle$ for any of the samples
and its value corresponds to the applied magnetic field $B_{\mathrm{applied}}$.
Conversely,
in the Meissner state (\SI{\sim 2.7}{\kelvin}),
$\langle B \rangle$ decays rapidly with increasing $E$
above a threshold value,
reflecting a small (non-superconducting) ``dead layer'' $d$ at the surface
and
the increased screening of $B_{\mathrm{applied}}$ at deeper depths.
The solid and dashed colored lines represent a (global) fit of
the data in both the normal and Meissner states using:
\Cref{eq:london,eq:effective-field} to describe $B(z)$;
\Cref{eq:stopping,eq:beta-pdf} to parameterize $\rho(z, E)$;
and \Cref{eq:average-field} to convolve the terms into
an expression for $\langle B \rangle (E)$
(see \Cref{sec:results}).
The fit quality is excellent,
with the model capturing all features of the data.
Values obtained for $\lambda$ are indicated in the inset
of each subplot,
while the full set of fit parameters are tabulated in \Cref{tab:results}.
}
\end{figure}
In order to evaluate the magnetic penetration depth $\lambda$,
it is necessary to construct a model capable of describing the data.
The model must account for two crucial details:
how the magnetic field is screened below the surface as a function of depth $z$;
and
the depth distribution $\rho(z, E)$ sampled by the implanted $\mu^{+}$.
We shall consider each of these below.
Note that while our approach differs somewhat from earlier work in \ch{Nb}
(see e.g.,~\cite{2005-Suter-PRB-72-024506,2014-Romanenko-APL-104-072601,2017-Junginger-SST-30-125013}),
it is capable of accurately reproducing all measured quantities
derived from our experiments.
First, we consider the magnetic field profile, $B(z)$,
below \ch{Nb}'s surface.
In the simplest case, $B(z)$ decreases exponentially with increasing depth, $z$,
in the Meissner state,
as predicted by the London model~\cite{1935-London-PRSLA-149-71}.
Recalling that our data suggests the presence of a ``dead layer'' at
the sample surface,
we incorporate this detail \emph{ad hoc} into the London result~\cite{1935-London-PRSLA-149-71}
with the expression (see e.g.,~\cite{2000-Jackson-PRL-84-4958,2010-Kiefl-PRB-81-180502}):
\begin{equation}
\label{eq:london}
B(z) = \tilde{B}_{0} \times \begin{cases}
1, & \text{for } z < d , \\
\displaystyle \exp \left \{ -\frac{ ( z - d ) }{ \lambda } \right \} , & \text{for } z \geq d ,
\end{cases}
\end{equation}
where
$\lambda$ is the magnetic penetration depth,
$d$ is the thickness of the ``dead layer''
(i.e., where $\tilde{B}_{0}$ isn't screened),
and $\tilde{B}_{0}$ is the (effective) applied magnetic field.
The latter quantity is given by:
\begin{equation}
\label{eq:effective-field}
\tilde{B}_{0} = B_{\mathrm{applied}} \times \begin{cases}
1, & \text{for } T > T_{c} , \\
\left ( 1 - \tilde{N} \right )^{-1} , & \text{for } T \ll T_{c} , \\
\end{cases}
\end{equation}
where $B_{\mathrm{applied}}$ is the applied magnetic field
and
$\tilde{N}$ is the sample's (effective) demagnetization factor~\cite{2018-Prozorov-PRA-10-014030}.
Note that the inclusion of the factor
$(1 - \tilde{N})^{-1}$ in \Cref{eq:effective-field}
accounts for any apparent ``enhancement'' of the applied field
due to the sample's geometry
(i.e., from flux expulsion in the Meissner state --- see e.g.,~\cite{2000-Brandt-PC-332-99}).
Though \Cref{eq:london} is rather simple
compared to other models for $B(z)$
(see e.g.,~\cite{1953-Pippard-PRSLA-216-547,1957-Bardeen-PR-108-1175}),
it sufficiently describes the behavior we observe (see below).
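To make the model concrete,
\Cref{eq:london,eq:effective-field} may be sketched as follows
(argument names and unit conventions are our own choices):
\begin{verbatim}
import numpy as np

def B_of_z(z, lam, d, B_applied, N_eff=0.0, meissner=True):
    """Screening profile, Eqs. (eq:london) and (eq:effective-field).

    z, lam, d in nm; B_applied in mT. In the normal state the field
    is unscreened; in the Meissner state it is enhanced by the factor
    1/(1 - N_eff) and decays beyond the dead layer of thickness d.
    """
    z = np.asarray(z, dtype=float)
    if not meissner:
        return np.full_like(z, B_applied)
    B0_eff = B_applied / (1.0 - N_eff)
    return B0_eff * np.where(z < d, 1.0, np.exp(-(z - d) / lam))
\end{verbatim}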
We now consider the $\mu^{+}$ implantation profiles.
As alluded to in \Cref{sec:experiment},
the slowing down of implanted $\mu^{+}$ is a stochastic process,
resulting in a \emph{distribution} of stopping depths
that can be reliably simulated~\cite{2002-Morenzoni-NIMB-192-245}
using Monte Carlo codes
such as TRIM.SP~\cite{1984-Eckstein-NIMB-2-550,1991-Eckstein-SSMS-10,1994-Eckstein-REDS-1-239}
(see \Cref{sec:trimsp} for specific details).
For our analysis,
it was convenient to be able to describe these profiles at
\emph{arbitrary} $E$; this was achieved by fitting the simulated
profiles
and interpolating their ``shape'' parameters.
We found that
the $\mu^{+}$ stopping probability, $\rho(z, E)$,
at a given $E$ can be described, in general, by:
\begin{equation}
\label{eq:stopping}
\rho(z, E) = \sum_{i}^{n} f_{i} p_{i} (z) ,
\end{equation}
where
$p_{i}(z)$ is a probability density function,
$f_{i} \in [0, 1]$ is the $i^{\mathrm{th}}$ stopping fraction,
constrained such that
\begin{equation*}
\sum_{i}^{n} f_{i} \equiv 1 ,
\end{equation*}
and $z$ is the depth below the surface.
For our target
(\ch{Nb2O5}(\SI{5}{\nano\meter})/\ch{Nb} --- see e.g.,~\cite{1987-Halbritter-APA-43-1}),
the stopping data are well-described using $n = 2$ and a $p(z)$ given by
a modified beta distribution~\cite{2004-Gupta-HBDA}.
Explicitly,
\begin{equation}
\label{eq:beta-pdf}
p (z) =
\begin{cases}
0, & \text{for } z < 0 , \\
\dfrac{ \left ( z / z_{0} \right )^{\alpha -1} \left (1 - z / z_{0} \right )^{\beta - 1} }{ z_{0} \, B ( \alpha, \beta ) } , & \text{for } 0 \leq z \leq z_{0} , \\
0, & \text{for } z > z_{0} ,
\end{cases}
\end{equation}
where $z \in [0, z_{0}]$ is the depth below the surface
and $B ( \alpha, \beta )$ is the beta function:
\begin{equation*}
B ( \alpha, \beta ) \equiv \frac{ \Gamma (\alpha ) \Gamma (\beta) }{ \Gamma ( \alpha + \beta ) } ,
\end{equation*}
with $\Gamma (s)$ denoting the gamma function:
\begin{equation*}
\Gamma (s) \equiv \int_{0}^{\infty} x^{s-1} \exp (-x) \, \mathrm{d}x .
\end{equation*}
Note that the ``extra'' $z_{0}$ in the denominator of \Cref{eq:beta-pdf}
ensures proper normalization of $p(z)$.
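A sketch of \Cref{eq:stopping,eq:beta-pdf} is given below;
equivalently,
\texttt{scipy.stats.beta(alpha, beta, scale=z0).pdf(z)} evaluates the same density.
Here we assume $\alpha, \beta > 1$,
so that $p(z)$ vanishes at the endpoints:
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn

def p_beta(z, alpha, beta, z0):
    """Modified beta distribution, Eq. (eq:beta-pdf); z, z0 in nm."""
    z = np.asarray(z, dtype=float)
    u = z / z0
    out = np.zeros_like(z)
    inside = (u > 0.0) & (u < 1.0)
    out[inside] = (u[inside] ** (alpha - 1.0)
                   * (1.0 - u[inside]) ** (beta - 1.0)
                   / (z0 * beta_fn(alpha, beta)))
    return out

def rho(z, fracs, shapes):
    """Stopping profile, Eq. (eq:stopping): sum_i f_i p_i(z)."""
    return sum(f * p_beta(z, *s) for f, s in zip(fracs, shapes))
\end{verbatim}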
In order to achieve good ``coverage'' across the range of $E$s
achievable by \gls{le-musr}
(\SIrange{\sim 0.5}{\sim 30}{\kilo\electronvolt}),
we simulated $\mu^{+}$ stopping profiles in small energy
increments (\SI{500}{\electronvolt}) spanning the entire $E$-range.
We then fit each of the simulated stopping profiles using \Cref{eq:stopping,eq:beta-pdf}
and interpolated the resulting ``shape'' (i.e., fit)
parameters to generate $\rho(z, E)$ at arbitrary $E$.
Results from this procedure are shown in \Cref{fig:implantation-profiles},
in excellent agreement with the Monte Carlo simulations.
Following the above discussion,
with our expressions for
$B(z)$ [\Cref{eq:london,eq:effective-field}]
and
$\rho(z, E)$ [\Cref{eq:stopping,eq:beta-pdf}]
in hand,
it is now straightforward to construct an expression for
$\langle B \rangle$ that depends on $E$:
\begin{equation}
\label{eq:average-field}
\langle B \rangle (E) = \int_{0}^{\infty} B(z) \rho(z, E) \, \mathrm{d}z ,
\end{equation}
where the dependence on $E$ is accounted for \emph{implicitly} by $\rho(z, E)$~\footnote{Formally, \Cref{eq:average-field} is the integral transform of $B(z)$ by the \emph{kernel} $\rho(z, E)$, wherein $B(z)$ is ``mapped'' from $z$-space to $\langle B \rangle (E)$ in $E$-space (see e.g.,~\cite{2005-Arfken-MMP-6}).}.
Note that, as described above,
$\rho(z, E)$'s ``shape'' parameters are all \emph{predetermined}
from fitting a series of implantation profiles and interpolating their values.
Consequently,
this approach uses the \emph{maximum} amount of
available information when fitting the data and does not,
for example,
assume that the average stopping depth, $\langle z \rangle$,
is an adequate proxy for the \emph{full} stopping distribution~\footnote{As noted elsewhere~\cite{2005-Suter-PRB-72-024506}, using $\langle z \rangle$ can influence the apparent ``curvature'' in the trend of $\langle B \rangle$, presumably because the mapping from $E$ to $\langle z \rangle$ is non-linear (see e.g., \Cref{fig:field-profiles}).}.
Therefore, \Cref{eq:average-field} depends on the main parameters that define the
shape of $B(z)$ [\Cref{eq:london,eq:effective-field}]:
$\lambda$, $d$, $B_{\mathrm{applied}}$, and $\tilde{N}$.
Before proceeding,
we point out that the integral \Cref{eq:average-field} must be evaluated
\emph{numerically};
however,
it was found that adaptive Gaussian quadrature routines
(see e.g.,~\cite{quadpack}),
which are widely available in free scientific software
(e.g., the Python package SciPy~\cite{2020-Virtanen-NM-17-261}),
are adequate for this task.
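A minimal sketch of this convolution,
reusing the \texttt{B\_of\_z} and \texttt{rho} snippets above,
is shown below.
The callable \texttt{profile\_params},
which returns the interpolated stopping-profile parameters at a given $E$,
is a hypothetical stand-in for the interpolation step described above:
\begin{verbatim}
from scipy.integrate import quad

def mean_B_of_E(E, lam, d, B_applied, N_eff, profile_params,
                meissner=True):
    """<B>(E) of Eq. (eq:average-field), via adaptive quadrature.

    profile_params(E) -> (fracs, shapes, z_max) is a hypothetical
    helper returning the interpolated rho(z, E) parameters; z_max
    bounds the (compact) support of the stopping profile.
    """
    fracs, shapes, z_max = profile_params(E)
    integrand = lambda z: (B_of_z(z, lam, d, B_applied, N_eff, meissner)
                           * rho(z, fracs, shapes))
    value, _ = quad(integrand, 0.0, z_max, limit=200)
    return value
\end{verbatim}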
Fit results for each sample are given in \Cref{fig:field-profiles},
showing excellent agreement with the data,
and a tabulation of the resulting fit parameters is given in \Cref{tab:results}.
\begin{table*}
\centering
\caption{
\label{tab:results}
Fit results for our \ch{Nb} samples with different surface treatments
commonly used to fabricate \gls{srf} cavities (see \Cref{sec:experiment:samples}),
obtained using the analysis approach described in \Cref{sec:results}
(see also \Cref{fig:field-profiles}).
Here,
$T$ is the absolute temperature,
$B_{\mathrm{applied}}$ is the strength of the magnetic field
applied parallel to the sample surface,
$\tilde{N}$ is the sample's (effective) demagnetization factor,
$d$ is the thickness of the (non-superconducting) ``dead layer'' at the
sample surface,
and $\lambda(T)$ is the magnetic penetration depth
(measured at temperature $T$).
Also included are quantities derived from \Cref{eq:two-fluid,eq:dirty,eq:effective-coherence-length}:
the magnetic penetration depth at \SI{0}{\kelvin}, $\lambda_{0}$,
the carrier mean-free-path, $\ell$,
and
the ``effective'' coherence length, $\xi_{0}^{\prime}$.
For comparison,
we have also included values for several \ch{Nb/Cu} films commonly used
in \gls{srf} cavities~\cite{2017-Junginger-SST-30-125013}
(obtained from a re-analysis of the data using the formalism described in \Cref{sec:results}),
and
results for a ``clean'' \ch{Nb/Al2O3} film~\cite{2005-Suter-PRB-72-024506}.
The abbreviations listed with these samples correspond to:
direct current magnetron sputtering (DCMS);
high-power impulse magnetron sputtering (HIPIMS);
and
high-intensity and energy isotope mass separator on-line (HIE-ISOLDE).
The dependence of $\lambda_{0}$ on $\ell$ is also shown in \Cref{fig:mean-free-path}.
}
\footnotesize
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} S S S S S S S S l}
\botrule
{Sample} & {$T$ (\si{\kelvin})} & {$B_{\mathrm{applied}}$ (\si{\milli\tesla})} & {$\tilde{N}$} & {$d$ (\si{\nano\meter})} & {$\lambda(T)$ (\si{\nano\meter})} & {$\lambda_{0}$ (\si{\nano\meter})} & {$\ell$ (\si{\nano\meter})} & {$\xi_{0}^{\prime}$ (\si{\nano\meter})} & Ref. \\
\hline
Nb (baseline) & 2.63 & 25.11 \pm 0.05 & 0.000 \pm 0.027 & 21.8 \pm 0.7 & 31.3 \pm 0.7 & 31.2 \pm 0.7 & 260 \pm 80 & 34.8 \pm 3.0 & This work \\
Nb ($120^{\circ}$C bake) & 2.72 & 25.179 \pm 0.034 & 0.006 \pm 0.011 & 25.4 \pm 1.3 & 42.6 \pm 1.3 & 42.4 \pm 1.3 & 35 \pm 5 & 18.8 \pm 1.6 & This work \\
Nb ($75^{\circ}$C / $120^{\circ}$C bake) & 2.69 & 25.162 \pm 0.032 & 0.000 \pm 0.028 & 18.7 \pm 0.6 & 32.3 \pm 0.5 & 32.2 \pm 0.5 & 175 \pm 34 & 32.8 \pm 2.6 & This work \\
Nb (N$_{2}$ infusion) & 2.83 & 25.11 \pm 0.06 & 0.009 \pm 0.011 & 24.1 \pm 1.6 & 70.2 \pm 2.6 & 69.9 \pm 2.5 & 8.4 \pm 1.0 & 6.9 \pm 0.7 & This work \\
\hline
Nb/Cu (DCMS) & 3.25 & 15.14 \pm 0.05 & 0 & 14.3 \pm 1.2 & 51.1 \pm 1.2 & 50.7 \pm 1.2 & 19.6 \pm 2.2 & 13.2 \pm 1.1 & \cite{2017-Junginger-SST-30-125013} \\
Nb/Cu (HIE-ISOLDE) & 2.65 & 15.010 \pm 0.024 & 0 & 13.1 \pm 0.5 & 37.8 \pm 0.7 & 37.7 \pm 0.7 & 59 \pm 7 & 23.9 \pm 1.7 & \cite{2017-Junginger-SST-30-125013} \\
Nb/Cu (HIPIMS) & 3.75 & 15.09 \pm 0.04 & 0 & 17.3 \pm 0.6 & 34.6 \pm 0.4 & 34.1 \pm 0.4 & 106 \pm 14 & 29.2 \pm 2.1 & \cite{2017-Junginger-SST-30-125013} \\
\hline
\ch{Nb/Al2O3} (DCMS) & & 8.82 & 0 & 2 \pm 2 & & 27 \pm 3 & 359 & 36 & \cite{2005-Suter-PRB-72-024506} \\
\botrule
\end{tabular*}
\end{table*}
\section{
Discussion
\label{sec:discussion}
}
From \Cref{fig:field-profiles},
it is clear that the different surface treatments affect the Meissner screening
profile in \ch{Nb} within the first \SI{\sim 150}{\nano\meter} below its surface.
As mentioned in \Cref{sec:results}, a hierarchy is evident;
the applied field is attenuated most strongly in the ``baseline'' sample,
yielding a $\lambda$ of \SI{31.3 \pm 0.7}{\nano\meter},
followed closely by the ``\SI{75}{\celsius}/\SI{120}{\celsius} bake'' treatment,
where $\lambda = \SI{32.3 \pm 0.5}{\nano\meter}$.
In the ``\SI{120}{\celsius} bake'' sample,
the screening was weakened significantly from the previous two treatments,
amounting to a magnetic penetration depth of $\SI{42.6 \pm 1.3}{\nano\meter}$.
This was diminished even further by the ``\ch{N2} infusion'' treatment,
whose $\lambda = \SI{70.2 \pm 2.6}{\nano\meter}$.
The results suggest that further preparation beyond our ``baseline'' treatment
serves to diminish \ch{Nb}'s capacity to prevent magnetic flux from ``leaking''
below its surface in the Meissner state.
For a closer quantitative comparison between our results,
it is first necessary to account for the (minor) temperature differences
between measurements (see \Cref{fig:field-profiles}).
For this,
we used the well-known ``two-fluid'' expression~\cite{1996-Tinkham-IS-2}:
\begin{equation}
\label{eq:two-fluid}
\lambda (T) = \frac{ \lambda_{0} }{ \sqrt{ 1 - \left ( T / T_{c} \right )^{4} } } ,
\end{equation}
where $\lambda_{0}$ is the magnetic penetration depth at \SI{0}{\kelvin},
to extrapolate the $\lambda$s down to absolute zero
(see \Cref{tab:results})~\footnote{Note that $T_{c}$ is essentially identical for all surface treatments used here (see e.g.,~\cite{2022-Turner-SR-12-5522}).}.
Extrapolating to this limit is convenient,
since at \SI{0}{\kelvin} we also have the simple relationship~\cite{1996-Tinkham-IS-2}:
\begin{equation}
\label{eq:dirty}
\lambda_{0} = \lambda_{L} \sqrt{ 1 + \frac{\xi_{0}}{\ell} } ,
\end{equation}
where $\lambda_{L}$ is the so-called London penetration depth,
$\xi_{0}$ is the Pippard~\cite{1953-Pippard-PRSLA-216-547} or \gls{bcs}~\cite{1957-Bardeen-PR-108-1175} coherence length,
and $\ell$ is the carrier mean-free-path
(i.e., the average distance traveled before being scattered).
As both $\lambda_{L}$ and $\xi_{0}$ can be regarded as material properties intrinsic to \ch{Nb},
differences in $\lambda_{0}$ can be understood in terms of different $\ell$s
for our samples.
By aid of \Cref{eq:dirty} and literature estimates~\footnote{For $\lambda_{L}$, we used a weighted average, correcting for temperature differences using \Cref{eq:two-fluid}. For $\xi_{0}$, we used a statistical average, as most studies do no quote uncertainties for their estimates.}
for both
$\lambda_{L} = \SI{29.01 \pm 0.10}{\nano\meter}$~\cite{1965-Maxfield-PR-139-A1515,1966-Finnemore-PR-149-231,1968-French-C-8-301,1973-Auer-PRB-7-136,1974-Varmazis-PRB-10-1885,1981-Epperlein-PBC-108-931,1984-Felcher-PRL-52-1539,1991-Weber-PRB-44-7585,1992-Korneev-PSPIE-1738-254,1994-Kim-JAP-75-8163,1995-Andreone-PRB-52-4473,1995-Zhang-PRB-52-10395,1998-Pronin-PRB-57-14416}
and
$\xi_{0} = \SI{40.3 \pm 3.5}{\nano\meter}$~\cite{1965-Maxfield-PR-139-A1515,1966-Finnemore-PR-149-231,1968-French-C-8-301,1973-Auer-PRB-7-136,1974-Varmazis-PRB-10-1885,1981-Donnelly-PVM-118,1981-Epperlein-PBC-108-931,1991-Weber-PRB-44-7585,1992-Wood-NIMA-314-86,1995-Andreone-PRB-52-4473,1998-Pronin-PRB-57-14416},
we calculate $\ell$ for our samples,
with the results tabulated in \Cref{tab:results}.
These values compare well with typical $\ell$s found in
\gls{srf} \ch{Nb}~\cite{2005-Casalbuoni-NIMA-538-45,2016-Martinello-APL-109-062601};
however,
to better understand their differences,
we must consider the material modifications introduced by these treatments.
We shall start with the ``baseline'',
which is the simplest case to consider.
As described in \Cref{sec:experiment:samples},
this treatment first removes mechanical stresses through annealing and
afterwards purges surface imperfections in the topmost material
to mitigate any contamination from the furnace.
The procedure is highly successful,
as evidenced by our measured $\lambda$'s close proximity to $\lambda_{L}$,
suggesting that the level of impurities is low,
corresponding to an $\ell = \SI{260 \pm 80}{\nano\meter}$.
This is somewhat lower than the $\ell \sim \SI{810}{\nano\meter}$
expected for $\mathrm{RRR} \approx 300$ \ch{Nb}
(see e.g.,~\cite{1968-Goodman-JPF-29-240,1972-Garwin-APL-20-154});
however,
we point out that our \emph{microscopic} method of determining $\ell$
only samples the spatial region probed by the $\mu^{+}$ beam,
making it more sensitive to the surface region where, for example,
interstitial impurities are likely more prevalent.
Similarly,
it was at first surprising to find that
non-local electrodynamics~\cite{1953-Pippard-PRSLA-216-547,1957-Bardeen-PR-108-1175}
were not necessary to describe the data;
however,
this is consistent with our $\ell$,
which equivalently yields a short ``effective'' coherence length $\xi_{0}^{\prime}$
(at \SI{0}{\kelvin})
according to~\cite{1996-Tinkham-IS-2}:
\begin{equation}
\label{eq:effective-coherence-length}
\frac{1}{ \xi_{0}^{\prime} } = \frac{1}{ \xi_{0} } + \frac{1}{\ell} .
\end{equation}
For the ``baseline'' sample,
we get $\xi_{0}^{\prime} = \SI{34.8 \pm 3.0}{\nano\meter}$,
which is very close to $\lambda_{0}$
and
equivalent to a ratio $\xi_{0}^{\prime} / \lambda_{0} = \num{1.11 \pm 0.10}$.
Thus,
we conclude that this sample is close to the ``boundary'' where the influence of
non-local electrodynamics becomes significant.
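The chain of inferences used here,
from $\lambda(T)$ to $\lambda_{0}$ [\Cref{eq:two-fluid}],
to $\ell$ [inverting \Cref{eq:dirty}],
to $\xi_{0}^{\prime}$ [\Cref{eq:effective-coherence-length}],
amounts to a few lines of arithmetic.
The sketch below reproduces the ``baseline'' values in \Cref{tab:results};
note that \ch{Nb}'s $T_{c} \approx \SI{9.25}{\kelvin}$ is our assumed input,
as its value is not quoted in the text:
\begin{verbatim}
import numpy as np

LAMBDA_L, XI_0 = 29.01, 40.3  # nm (literature values; see main text)
T_C = 9.25                    # K (assumed; not quoted in the text)

def lambda_0(lam_T, T):
    """Invert Eq. (eq:two-fluid): extrapolate lambda(T) to 0 K."""
    return lam_T * np.sqrt(1.0 - (T / T_C) ** 4)

def mean_free_path(lam0):
    """Invert Eq. (eq:dirty) for the carrier mean-free-path (nm)."""
    return XI_0 / ((lam0 / LAMBDA_L) ** 2 - 1.0)

def xi_eff(ell):
    """Eq. (eq:effective-coherence-length), in nm."""
    return 1.0 / (1.0 / XI_0 + 1.0 / ell)

lam0 = lambda_0(31.3, 2.63)  # "baseline": ~31.2 nm
ell = mean_free_path(lam0)   # ~260 nm
xi = xi_eff(ell)             # ~34.8 nm
\end{verbatim}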
We now consider the ``\SI{120}{\celsius} bake'' sample.
The main effect of the baking~\cite{2004-Ciovati-JAP-96-1591}
is to ``dissolve'' some of the
surface oxide into the bulk of \ch{Nb}.
This treatment instigated a refinement of the oxygen transport model in \ch{Nb}~\cite{2006-Ciovati-APL-89-022507}
which has received renewed attention as of late~\cite{2021-Lechner-APL-119-082601}.
Even before the invention of this ``recipe'',
oxygen diffusion profiles in \ch{Nb} were of interest for their
influence on the surface barrier associated with flux penetration~\cite{1978-vanderMey-PBC-95-369}.
Consistent with the empirical observation that this mild-baking
helps mitigate the so-called ``$Q$-slope'' observed in \gls{srf} cavities at high $E_{\mathrm{acc}}$,
we observe a $\lambda_{0}$ appreciably larger than $\lambda_{L}$
(equivalent to a reduced supercurrent density at the surface --- see e.g.,~\cite{2017-Kubo-SST-30-023001}),
accompanied by an
$\ell = \SI{35 \pm 5}{\nano\meter}$
and
$\xi_{0}^{\prime} = \SI{18.8 \pm 1.6}{\nano\meter}$.
These values are consistent with a sample whose surface region has been ``dirtied''
by the (intentional) addition of impurities.
Interestingly,
not only is this $\ell$ much larger than the values reported for this treatment
in another \gls{le-musr} study~\cite{2014-Romanenko-APL-104-072601},
the Meissner screening profile is also different.
While the bipartite behavior reported previously~\cite{2014-Romanenko-APL-104-072601}
has been suggested to originate from the baking~\cite{2004-Ciovati-JAP-96-1591}
producing an ``effective'' superconductor-superconductor bilayer~\cite{2014-Kubo-APL-104-032603,2017-Kubo-SST-30-023001,2019-Kubo-JJAP-58-088001}
(i.e., from a thin ``dirty'' surface on top of a ``clean'' bulk),
no evidence for such behavior is observed here.
In fact,
separate \gls{le-musr} measurements on \emph{real} bilayers~\cite{Asaduzzaman-tbp}
reveal screening profiles that are qualitatively distinct from those reported here
(see \Cref{fig:field-profiles}).
Thus,
we suggest that low-temperature baking~\cite{2004-Ciovati-JAP-96-1591}
does \emph{not} fundamentally alter the character of Meissner screening in \ch{Nb}
and that the earlier results~\cite{2014-Romanenko-APL-104-072601}
must find an alternative explanation.
Next, we consider the ``\SI{75}{\celsius}/\SI{120}{\celsius} bake'' sample.
Given what is known about mild baking~\cite{2004-Ciovati-JAP-96-1591},
the results for this two-step treatment~\cite{arXiv:1806.09824}
confound expectations.
The similarity of its $\lambda_{0}$ and derived quantities
to the ``baseline'' treatment
(see \Cref{tab:results})
suggests that the ``extra'' baking time
reduces the level of defects near the surface.
Explicit investigations into this matter are limited;
however,
one study using positron annihilation spectroscopy
proposed that the procedure~\cite{2020-Wenskat-SR-10-8300}:
1) initially causes an increase in the \ch{Nb} vacancy concentration
through the decomposition of hydride-vacancy complexes;
2) that subsequent annealing at \SI{120}{\celsius} gradually removes the
complexes by thermally activated release;
and
3) that the remaining vacancies are eliminated by diffusion to trapping sites
and gradually annealed out.
While this mechanism is plausible,
it does not consider the dissolution of oxygen from the surface
during the second step~\cite{2004-Ciovati-JAP-96-1591},
which should have the \emph{opposite} influence on $\lambda$.
Thus,
we suggest that further investigation into
the near-surface chemical composition
(e.g., using secondary ion mass spectrometry)
is needed to be conclusive.
Lastly,
we consider the remaining surface treatment ``\ch{N2} infusion''~\cite{2017-Grassellino-SST-30-094004},
which is quite different from the other surface treatments.
In this ``recipe'',
\ch{N2} gas is intentionally introduced during baking to dope \ch{Nb} with nitrogen.
The ``infusion'' is performed at the relatively low temperature of \SI{120}{\celsius}
which limits the diffusivity of nitrogen~\cite{2020-Dhakal-PO-5-100034},
but mitigates the requirement of surface removal after the treatment
(cf.\ the original doping ``recipe''~\cite{2013-Grassellino-SST-26-102001}).
Given this treatment's substantial dopant ``supply''~\cite{2017-Grassellino-SST-30-094004}
and
nitrogen's diffusivity in \ch{Nb}
(see e.g.,~\cite{2020-Dhakal-PO-5-100034}),
it isn't surprising that we obtain our longest $\lambda_{0}$
of all the surface treatments,
and,
correspondingly,
the shortest $\ell$ and $\xi_{0}^{\prime}$
---
\SI{8.4 \pm 1.0}{\nano\meter}
and
\SI{6.9 \pm 0.7}{\nano\meter},
respectively.
Thus,
following the above discussion,
we propose that the observed hierarchy in $\lambda_{0}$ for the studied surface treatments
is readily explained by their propensity to dope \ch{Nb}'s near-surface region.
This (relatively light) doping alters $\ell$ in the spatial region sampled by \gls{le-musr},
resulting in $\lambda_{0} > \lambda_{L}$.
This relationship is shown graphically in \Cref{fig:mean-free-path},
accompanied by results from related studies for comparison~\cite{2005-Suter-PRB-72-024506,2017-Junginger-SST-30-125013}.
The results imply that either
$\ell$ is sufficiently homogeneous over the range of $\mu^{+}$ stopping depths
(see \Cref{fig:implantation-profiles})
to be encapsulated by a single (average) value
or
that the effect of any inhomogeneity in $\ell$
is beyond the detection limit of the current measurements.
Alternatively,
the largest inhomogeneity may be localized very close to the surface,
comparable in extent to the non-superconducting region observed
in our samples (see \Cref{fig:field-profiles}),
considered below.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{mean-free-path.pdf}
\caption{
\label{fig:mean-free-path}
Dependence of \ch{Nb}'s magnetic penetration depth at \SI{0}{\kelvin}, $\lambda_{0}$,
on the carrier mean-free-path, $\ell$,
for common surface treatments used to fabricate \gls{srf} cavities.
The values were calculated according to \Cref{eq:dirty} (solid black line),
using representative values for
the London penetration depth
$\lambda_{L} = \SI{29.01 \pm 0.10}{\nano\meter}$~\cite{1965-Maxfield-PR-139-A1515,1966-Finnemore-PR-149-231,1968-French-C-8-301,1973-Auer-PRB-7-136,1974-Varmazis-PRB-10-1885,1981-Epperlein-PBC-108-931,1984-Felcher-PRL-52-1539,1991-Weber-PRB-44-7585,1992-Korneev-PSPIE-1738-254,1994-Kim-JAP-75-8163,1995-Andreone-PRB-52-4473,1995-Zhang-PRB-52-10395,1998-Pronin-PRB-57-14416}
and
the \gls{bcs} coherence length
$\xi_{0} = \SI{40.3 \pm 3.5}{\nano\meter}$~\cite{1965-Maxfield-PR-139-A1515,1966-Finnemore-PR-149-231,1968-French-C-8-301,1973-Auer-PRB-7-136,1974-Varmazis-PRB-10-1885,1981-Donnelly-PVM-118,1981-Epperlein-PBC-108-931,1991-Weber-PRB-44-7585,1992-Wood-NIMA-314-86,1995-Andreone-PRB-52-4473,1998-Pronin-PRB-57-14416}.
Also included for comparison are values for \ch{Nb/Cu} films~\cite{2017-Junginger-SST-30-125013}
prepared with different techniques
(re-evaluated using the approach described in \Cref{sec:results}),
and a very ``clean'' \ch{Nb/Al2O3} film~\cite{2005-Suter-PRB-72-024506}.
All plotted values derived from this work are tabulated in \Cref{tab:results}.
}
\end{figure}
It is not uncommon to find a thin layer at a superconductor's surface that does not
screen an external field, colloquially called a ``dead layer''.
One typically accounts for this ``feature'' by incorporating the \emph{ad hoc} parameter $d$
into models of the screening profile --- see \Cref{eq:london}.
A literature survey suggests that $d$ is a generic feature of superconductors
(see e.g.,~\cite{2000-Jackson-PRL-84-4958,2004-Suter-PRL-92-087001,2005-Suter-PRB-72-024506,2010-Kiefl-PRB-81-180502,2012-Ofer-PRB-85-060506,2014-Romanenko-APL-104-072601,2013-Kozhevnikov-PRB-87-104508,2015-Stilp-PRB-89-020510,2017-Junginger-SST-30-125013,2018-Howald-PRB-97-094514}),
indicating that the quantity is representative of a particular \emph{sample},
rather than being intrinsic to the material.
For example, while a ``dead layer'' on the order of \SI{\sim 20}{\nano\meter}
is common for \gls{srf} \ch{Nb}~\cite{2014-Romanenko-APL-104-072601,2017-Junginger-SST-30-125013}
(which we also obtain here --- see \Cref{tab:results}),
values comparable to the thickness of the (native) surface oxide layer
(\SI{\sim 5}{\nano\meter}~\cite{1987-Halbritter-APA-43-1})
are found in high-quality epitaxial thin films~\cite{2005-Suter-PRB-72-024506,arXiv:2212.01137}.
Some of this variance is likely attributable to differences in surface roughness,
which can reduce a sample's screening capacity at the surface~\cite{2012-Lindstrom-PP-30-249,2014-Lindstrom-JEM-85-149,2016-Lindstrom-JSNM-29-1499};
however,
it alone cannot account for the full extent of $d$ in certain materials,
leading us to consider other possibilities.
Recently,
several authors have considered the possibility of $\lambda$ being
spatially \emph{inhomogeneous},
resulting from a varying defect concentration profile close to \ch{Nb}'s
surface~\cite{2019-Ngampruetikorn-PRR-1-012015,2020-Checchin-APL-117-032601}.
For a sufficiently high concentration of surface-localized defects,
it is plausible that $\lambda(z)$ could become long enough to qualitatively mimic
the effect of a ``dead layer''.
Such a scenario has already been considered theoretically for
$\lim_{z \to 0} \lambda(z) = \infty$~\cite{2014-Barash-JPCM-26-045702},
producing a gradual (rather than sharp) transition
between non-superconducting and superconducting regions.
While such an idea is intriguing,
our data in \Cref{fig:field-profiles} are not refined enough
to resolve such features and
further measurements
(e.g., with fine $E$ steps below \SI{\sim10}{\kilo\electronvolt})
are required to be more conclusive.
Such measurements may require a condensed \ch{N2} overlayer,
which can be grown \emph{in situ} at these low-$T$
(see e.g.,~\cite{2017-Junginger-SST-30-125013}).
Finally,
it is worth noting that a similar analysis approach to that described in
\Cref{sec:results} has also been employed in
\ch{^{8}Li} \gls{bnmr}~\cite{2015-MacFarlane-SSNMR-68-1,2022-MacFarlane-ZPC-236-757}
measurements on a \ch{Nb} thin film~\cite{arXiv:2212.01137}.
While the \gls{bnmr} technique shares many similarities with
and is complementary to \gls{le-musr}~\cite{2000-Kiefl-PB-289-640},
it has the advantage of being able to operate in (surface-parallel)
magnetic fields up to \SI{200}{\milli\tesla}~\cite{arXiv:2211.15619},
covering the operating conditions of \gls{srf} cavities
and close to \ch{Nb}'s $B_{\mathrm{sh}}$~\cite{2017-Junginger-SST-30-125012,2018-Junginger-PRAB-21-032002}.
Though equivalent measurements using \gls{le-musr} are not currently possible,
the results presented here will provide a good point of comparison
for future investigations using \gls{bnmr}.
\section{
Conclusions
\label{sec:conclusions}
}
Using \gls{le-musr},
we determined the Meissner screening profile in \ch{Nb} samples
that received surface treatments commonly used to prepare \gls{srf}
cavities.
In contrast to an earlier report~\cite{2014-Romanenko-APL-104-072601},
we find no evidence for any ``anomalous'' modifications to the Meissner profiles,
ruling out that low-temperature baking~\cite{2004-Ciovati-JAP-96-1591},
two-step baking~\cite{arXiv:1806.09824},
or \ch{N2} infusion~\cite{2017-Grassellino-SST-30-094004}
produces an ``effective'' bilayer superconductor~\cite{2014-Kubo-APL-104-032603,2017-Kubo-SST-30-023001,2019-Kubo-JJAP-58-088001}.
Instead,
we find that the observed field screening is well-described
by a simple London model~\cite{1935-London-PRSLA-149-71},
with magnetic penetration depths
(extrapolated to \SI{0}{\kelvin}) of:
\SI{31.3 \pm 0.7}{\nano\meter} for the ``baseline'' sample;
\SI{42.6 \pm 1.3}{\nano\meter} for the ``\SI{120}{\celsius} bake'' treatment;
\SI{32.3 \pm 0.5}{\nano\meter} for the ``\SI{75}{\celsius}/\SI{120}{\celsius} bake'' recipe;
and
\SI{70.2 \pm 2.6}{\nano\meter} for ``\ch{N2} infusion''.
Differences in screening properties between surface treatments can be
explained by changes to the carrier mean-free-paths resulting from
dopant profiles near \ch{Nb}'s surface.
A relatively large (\SI{\sim 20}{\nano\meter}) non-superconducting ``dead layer''
was found in all samples, exceeding the thickness of the native oxide layer that forms
at \ch{Nb}'s surface~\cite{1987-Halbritter-APA-43-1}.
This observation may suggest a narrow region near the surface where $\lambda$
is \emph{depth-dependent}~\cite{2014-Barash-JPCM-26-045702,2019-Ngampruetikorn-PRR-1-012015,2021-Lechner-APL-119-082601}.
Further \gls{le-musr} experiments,
with finer energy steps at $E < \SI{10}{\kilo\electronvolt}$,
may illuminate the matter.
\begin{acknowledgments}
We thank:
P.~Kolb, R.~E.~Laxdal, W.~A.~MacFarlane, and E.~Thoeng for useful discussions;
TRIUMF's \gls{srf} group for providing several of the \ch{Nb} samples
(``baseline'', ``\SI{120}{\celsius} bake'', and ``\SI{75}{\celsius}/\SI{120}{\celsius} bake'');
and
M.~Martinello for preparing the ``\ch{N2} infusion'' sample.
This work is based on experiments performed at the Swiss Muon Source S$\mu$S,
Paul Scherrer Institute, Villigen, Switzerland.
Financial support was provided by an \gls{nserc} Award to T.~Junginger.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Gravitational lensing~\citep{1992grle.book.....S} has become a key probe of the content, structure, and physical laws of our Universe. While weak lensing teaches us about the distribution of matter on cosmic scales~\citep[e.g.][]{Asgari:2020wuj, Gatti:2019clj}, strong lensing lies amongst the best tools to measure today's cosmic expansion rate~$H_0$~\citep{1964MNRAS.128..307R, 2020MNRAS.498.1420W}, and encapsulates valuable information on the small-scale distribution and nature of the dark matter~\citep{2018MNRAS.475.5424D, Diego:2017drh, Harvey:2019bco, Blum:2020mgu}.
The quality of current imaging and data analysis requires an equally elaborate theoretical modelling of the strong lenses. That means complex models for the mass distribution of the main deflector, responsible for, e.g., the formation of multiple images of the same source, but also of the secondary deflectors which may perturb the effect of the main lens~\citep{1997ApJ...482..604K,2018MNRAS.477.5657T,Sengul:2020yya,Li:2020fpq}. Such perturbations are referred to as line-of-sight effects.
Two distinct frameworks have been developed to model line-of-sight perturbations in strong gravitational lensing. The first approach~\citep{1987ApJ...316...52K,1994A&A...287..349S,1996ApJ...468...17B,Schneider:1997bq,Birrer:2016xku} consists in treating secondary deflectors in the tidal regime, i.e., adding external convergence and shear to the main lens model. This technique is well suited to describe cosmological perturbations which would add to the effect of a lens; it has been successfully applied to measuring cosmic shear with Einstein rings by \citet{Kuhn:2020wpy}. On the contrary, the second approach consists in modelling all the secondary deflectors as thin lenses. This formalism, called multi-plane lensing~\citep{1986ApJ...310..568B}, is thus adapted to the description of isolated perturbers within an otherwise empty Universe, or within an ideal Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. The applicability of the original multi-plane formalism was then extended by the introduction of tidal planes~\citep{McCully:2013fga}, voids~\citep{McCully:2016yfe}, or cosmological perturbations~\citep{Schneider:2014vka}; however, it has never been considered in full generality.
The present article proposes to fill this gap with an ultimate multi-plane formalism, where one or several lenses with arbitrary velocities may be placed in any smooth space-time background. This formalism encapsulates all the key lensing observables into a single versatile language. Our results may be applied to a wide range of set-ups; three specific examples will be provided in this article: (1) one lens with cosmological perturbations; (2) one lens in anisotropic cosmology, which recently attracted renewed attention~\citep{Akrami:2019bkn,Migkas:2020fza, 2021ApJ...908L..51S, Fosalba:2020gls}; and (3) multiple lenses in an under-dense Universe. Furthermore, this work establishes the necessary tools to accurately describe line-of-sight corrections in strong lensing beyond the standard convergence and shear~\citep{Fleury:2018odh, FLU21}.
The article is organised as follows. We start in \cref{sec:preliminary_discussion} with a discussion on the nature of the gravitational fields encountered by light beams, thereby defining the dichotomy between reference space-time and lenses. In \cref{sec:one_lens} we consider the case where a single lens is placed within an arbitrary reference space-time, before moving to the general $N$-lens case in \cref{sec:N_lenses}. We conclude in \cref{sec:conclusion}.
We adopt units for which the speed of light is unity. Bold symbols ($\vect{\alpha}, \vect{\beta}, \ldots$) indicate two-dimensional Euclidean vectors, i.e., components of two-dimensional vectors over an orthonormal basis. Bold calligraphic symbols ($\mat{\mathcal{A}}, \mat{\mathcal{D}}, \ldots$) refer to matrices, and cursive letters~($\geo{P}, \geo{F}, \geo{L},\ldots$) to time-like or null geodesics.
\section{Preliminary discussion: reference space-time and lenses}
\label{sec:preliminary_discussion}
\subsection{Rays and beams of light}
Let a source of light and an observer be placed in an arbitrary space-time. We call \emph{light beam} the set of light rays that connect the source to the observer; a beam may have several components if the source is multiply imaged. We assume that the wavelength of light is much smaller than any relevant length scale of the problem (eikonal approximation) so that light rays are null geodesics.
\subsection{Smooth and rough fields}
As a beam of light propagates from the source to the observer, it may be deflected, distorted, and somehow split, by the gravitational field that it experiences. The formalism proposed in this article relies on the broad distinction between two categories of gravitational fields: \emph{smooth fields} on the one hand, and \emph{rough fields} on the other hand. In smooth-field regions, the light beam is continuously (and possibly strongly) distorted, but its integrity is preserved. In rough-field regions, the light beam experiences sudden deflections, which may give rise to multiple images. Thus, rough-field regions correspond to what is usually referred to as \emph{lenses}, while smooth-field regions constitute a \emph{reference space-time}, where light propagates from one lens to the next one. In the standard multi-plane lensing framework, the reference space-time is either Minkowski or FLRW. We shall not make this assumption here.\footnote{Rigorously speaking, the set of all smooth-field regions traversed by a light beam does not constitute a physically well-defined space-time. In particular, it does not necessarily satisfy Einstein's equation, whose right-hand side should also include the matter clumps of the rough-field regions.} These considerations are illustrated in \cref{fig:smooth_vs_rough}.
\begin{figure}[t]
\centering
\import{figures/}{smooth_vs_rough.pdf_tex}
\caption{Illustrating the dichotomy between smooth and rough gravity fields.}
\label{fig:smooth_vs_rough}
\end{figure}
Specifically, we shall say that a gravitational field is smooth if the light beam can be considered \emph{infinitesimal} within that field. In other words, a field is smooth if the corresponding space-time curvature is slowly varying and homogeneous within the light beam's cross section~\citep{Fleury:2017owg, Fleury:2018cro, Fleury:2018odh}. Otherwise, we shall say that the field is rough. Let us illustrate this terminology with the example of a point mass~$M$. The curvature that it produces at a distance $r$ reads $R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}=12(2GM/r^3)^2$; hence the typical scale over which it changes appreciably is $r$. As a light beam with cross-sectional area~$A$ passes next to the mass, it experiences a smooth field if $A\ll r^2$, and a rough field otherwise. The picture gets slightly more complicated if we recall that the beam's cross-sectional area~$A(r)$ depends on $r$, due to light focusing. Denoting $A_0$ the unlensed area of the beam, its lensed counterpart reads $A(r)=\mu(r) A_0$, where $\mu(r)\define (1-r\e{E}^4/r^4)^{-1}$ is the point-lens magnification, and $r\e{E}$ its Einstein radius. The smooth-field condition then becomes $A_0\ll r^2(1-r\e{E}^4/r^4)$.
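For the reader who wishes to experiment with this criterion,
it is trivial to evaluate numerically;
a minimal sketch (the function name and normalization convention are ours):
\begin{verbatim}
import numpy as np

def smoothness_ratio(A0, r, r_E):
    """Ratio A0 / [r^2 (1 - r_E^4 / r^4)] for a point lens.

    A0: unlensed beam cross-section; r: closest-approach radius;
    r_E: Einstein radius (consistent length units). The field is
    smooth for the beam when the returned ratio is << 1. Note that
    the lensed area is A = mu * A0, mu = (1 - r_E^4/r^4)^(-1).
    """
    mu_inv = 1.0 - (r_E / r) ** 4  # inverse point-lens magnification
    return A0 / (r ** 2 * mu_inv)
\end{verbatim}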
Another example is light propagating through a diffuse distribution of matter, such as a gas cloud or a dark-matter halo. In that situation, space-time curvature is effectively dominated by its Ricci component~\citep{Fleury:2017owg}, which is mostly controlled by the matter density field $\rho$. Consequently, the corresponding gravitational field is smooth if $\rho$ is almost homogeneous on the scale of the beam's cross-section, and rough otherwise.
Finally, we note that gravitational fields are not always directly generated by matter distributions in a Newtonian-like manner. The most immediate example is gravitational waves, which are nothing but propagating curvature. In our terminology, a gravitational wave is a smooth field if its wavelength is much larger than the beam's diameter, and rough otherwise. Perhaps even more relevant to lensing are the infinite-wavelength gravitational waves that characterise anisotropic cosmologies. For instance, the anisotropic expansion of Bianchi~I models produces a homogeneous Weyl curvature, i.e. a smooth tidal field, which continuously shears and rotates light beams as they propagate~\citep{Fleury:2014rea}.
\subsection{Embedding lenses in the reference space-time}
In the remainder of this article, we aim to concretely implement the distinction between smooth and rough fields into a generalised multi-plane lensing framework. But before doing so, let us briefly explain how one may technically treat the embedding of a lens within an arbitrary reference space-time.
\begin{figure}[t]
\centering
\import{figures/}{embedding.pdf_tex}
\caption{Embedding a lens (rough-field region) in a smooth reference space-time.}
\label{fig:embedding}
\end{figure}
Consider some rough-field region that is traversed by the light beam. In astrophysically relevant situations, the roughness of the field is due to a localised lumpy matter distribution (the lens), as depicted in \cref{fig:embedding}. Let $\geo{L}$ be the world-line of the lens's centre of mass. If we neglect the lens's self force, then $\geo{L}$ is a time-like geodesic of the space-time metric~$\bar{g}_{\mu\nu}$ generated by the rest of the universe, i.e., the reference space-time. Thus, we may introduce Fermi normal coordinates~$X^\alpha$ along $\geo{L}$, which materialise the \emph{rest frame of the lens}. With such coordinates, the reference metric is essentially Minkowskian across the rough-field region, $\bar{g}_{\alpha\beta}= \eta_{\alpha\beta}$, up to small corrections due to the local curvature of the reference space-time. These corrections are negligible in any astrophysically relevant situation, because the size of the region is on the order of the beam's cross section. Using the coordinates~$X^\alpha$, one may then compute the gravitational field generated by the lens as if it were isolated, modulo the aforementioned small curvature corrections.
In the remainder of this article, we shall assume that the lenses are non-compact matter distributions with non-relativistic velocity dispersion. This notably excludes black-hole or neutron-star systems, and relativistic gases, but it provides an accurate description of any other astronomically relevant lens. In that context, we can treat rough gravitational fields as linear Newtonian perturbations, and hence apply the standard description of thin lenses~\citep{1992grle.book.....S}. Relaxing this assumption would affect the lens modelling and complicate the computation of time delays, but it would not change the essence of the framework that we propose in \cref{sec:one_lens,sec:N_lenses}.
\section{One lens in an arbitrary reference space-time}
\label{sec:one_lens}
In this section, we tackle the case of a light beam that only propagates in smooth-field regions, except one localised rough-field region modelled as a thin lens. This problem has been studied by several authors in order to evaluate the impact of cosmological perturbations on the properties of a strong lens. \citet{1987ApJ...316...52K} refers to it as a \emph{thick lens in the telescope approximation}; \citet{1992grle.book.....S, 1994A&A...287..349S} calls it \emph{generalised quadrupole lens}; \citet{1996ApJ...468...17B} writes about \emph{lensing with large-scale structure}; and \citet{Schneider:1997bq}, whose presentation is the closest to ours, simply calls it the \emph{cosmological lens}. The corresponding formalism has been recently applied to the weak lensing of Einstein rings by \citet{Birrer:2016xku}, with the perspective of novel synergies between weak-lensing and strong-lensing observations~\citep{Kuhn:2020wpy}.
The results derived in this section include, or are formally equivalent to, the aforementioned works'. However, we insert them in a broader context and extend their range of application --- a novel example is proposed in \cref{subsec:one_lens_Bianchi}. Furthermore, to our knowledge we propose in \cref{subsec:time_delay_one_lens} the first rigorous proof of the expression of the time delay for lensing within an arbitrary reference space-time.
\subsection{Geometric set-up}
\label{subsec:set-up}
Let us describe in detail the geometry of the problem. We shall start with a four-dimensional picture, which forms the rigorous basis of the subsequent three-dimensional picture, which in turn is more adapted to practical calculations. The four-dimensional discussion may be skipped by any reader who is not particularly interested in fully accurate definitions.
\subsubsection{Four-dimensional picture (\cref{fig:one_lens_4d})}
Let $\geo{S}, \geo{L}, \geo{O}$ be the world-lines of the source, lens and observer respectively. Let $S\in\geo{S}$ and $O\in\geo{O}$ be the emission and observation events; we call \emph{physical ray}~$\geo{P}$ a null geodesic connecting $S$ to $O$. As we model the rough-field region as a thin lens, we will treat the physical ray as a set of two geodesics of the reference space-time, which undergoes a sudden deflection and a pause in the vicinity of the lens. The \emph{unlensed ray}~$\geo{U}$ is the null geodesic of the reference space-time that connects $S$ to $\geo{O}$. The intersection event $O'$ is earlier than $O$; their separation is the lensing time delay~$\Delta t$.
\begin{figure}[t]
\centering
\import{figures/}{one_lens_4d.pdf_tex}
\caption{Space-time diagram for the lensing by a thin lens within an arbitrary smooth reference space-time. See the main text for precise definitions of the various objects. The green vertical lines~$\geo{O}, \geo{L}, \geo{S}$ are respectively the world-lines of the observer, lens and source. The solid thick line indicates the physical ray~$\geo{P}$, the dashed line is the unlensed ray~$\geo{U}$, the dotted line is the fiducial ray~$\geo{F}$, and the thin solid line is the continuation~$\geo{C}$ of the physical ray without deflection beyond the lens plane. The two grey shaded regions represent the lens and source world-sheets~$\Sigma\e{d}, \Sigma\e{s}$; their sections (thick grey lines) orthogonal to $\geo{L}$ or $\geo{S}$ indicate the lens and source planes at different times.}
\label{fig:one_lens_4d}
\end{figure}
We introduce a \emph{fiducial ray} $\geo{F}$ defined as the geodesic of the reference space-time that connects $O$ to the lens's world-line~$\geo{L}$.\footnote{We make this choice for simplicity, but any reference-space-time geodesic that remains close to the physical ray could equally play the role of a fiducial ray.} We denote with $L\define\geo{F}\cap\geo{L}$ the intersection event. Importantly, we assume that the rays $\geo{P}, \geo{U}, \geo{F}$ are all very close to each other, so that any of their respective separations can be treated as infinitesimal in the reference space-time.
At $L$ we define the \emph{lens plane} as the two-dimensional space that is orthogonal to both $\geo{L}$ and $\geo{F}$. In other words, the lens plane is strictly spatial (made of simultaneous events) in the lens's rest frame, and it is orthogonal to the spatial direction of the fiducial ray in that frame. The lens plane is well defined in the immediate vicinity of $L$, i.e., up to distances that are much smaller than the curvature radii of the reference space-time. So far we have defined the lens plane at the time of the event $L$; we may then generalise it to other times through parallel transport along $\geo{L}$. Physically speaking, it means that the lens plane is non-rotating. The three-dimensional time-like space that is swept by the lens plane as time goes on will be referred to as the \emph{lens world-sheet}~$\Sigma\e{d}$.
As the physical ray passes near the lens, it is effectively slowed down and deflected by its gravitational potential. In the thin-lens model adopted here, the deflection and delay are instantaneous. In other words, everything happens as if the photon were pausing and suddenly changing direction as it intersects $\Sigma\e{d}$. This pause is of course an idealised modelling of an otherwise continuous process, but it is an integral part of the thin lens approximation. We call $P'$ and $P$ the beginning and end of the pause, respectively. Since the pause happens at a fixed position in the lens's frame, the separation between $P$ and $P'$ is parallel to $\geo{L}$. The duration of the pause represents the so-called potential time delay;\footnote{For the thin-lens approximation to be valid, the duration of the pause must be very short compared to the local evolution time scale of the reference space-time.} it generally differs from the final time delay~$\Delta t$ measured along~$\geo{O}$, not only because these intervals are expressed in different rest frames, but also because $\Delta t$ contains additional, geometrical contributions.
We call $\vect{x}$ the (common) position of $P$ and $P'$ in the lens plane, with respect to the origin set by $\geo{L}$.\footnote{Throughout the article, bold symbols like $\vect{x}$ refer to the set of \emph{components} of screen-space vectors over an orthonormal basis. This allows us to treat them as Euclidean vectors.} Similarly, we call $\vect{r}$ the position, in the lens plane, of the intersection $R\define\geo{U}\cap\Sigma\e{d}$ between the unlensed ray and the lens world-sheet.
Let us now describe what happens in the vicinity of the source. Just like we defined the lens world-sheet~$\Sigma\e{d}$ from $\geo{F}$ and $\geo{L}$, we define the \emph{source world-sheet}~$\Sigma\e{s}$ from $\geo{F}$ and $\geo{S}$.\footnote{A slight difference is that $\geo{F}$ does not intersect~$\geo{S}$ in general. The explicit definition of $\Sigma\e{s}$ is the three-dimensional space that locally contains $\geo{S}$ and that intersects $\geo{F}$ orthogonally to its spatial direction in the source's frame.} Let us call $F\define\geo{F}\cap\Sigma\e{s}$ the intersection of the fiducial ray and the source world-sheet. The line parallel to $\geo{S}$ and passing through $F$ will be taken as the origin of the source plane. We call $\vect{s}$ the position of $S$ in the source plane with respect to that fiducial origin.
Finally, let $\geo{C}$ be the continuation of the physical ray with neither deflection nor delay in the lens plane. We call $I\define\geo{C}\cap\Sigma\e{s}$ its intersection with the source world-sheet. This event may be thought of as the \emph{image event}, i.e. the event that would be observed at $O$ in the same direction as the physical ray in the absence of the lens. We denote with $\vect{i}$ the position of $I$ in the source plane.
The above discussion shows that, as long as the various rays can be considered infinitesimally close to each other in the reference space-time, we can univocally define the notions of lens plane and source plane, and the various position vectors~$\vect{x}, \vect{r}, \vect{s}, \vect{i}$ in those planes. We shall now safely proceed with a three-dimensional representation of the problem, which will allow us to represent angles more easily.
\subsubsection{Three-dimensional picture} \Cref{fig:one_lens_3d}, which is a spatial representation of the space-time diagram of \cref{fig:one_lens_4d}, corresponds to the more traditional way of picturing gravitational lensing by a thin deflector. The fiducial ray~$\geo{F}$, which is a null geodesic of the reference space-time, was chosen to go through the lens's position for simplicity, but any other nearby ray would be eligible. The lens plane (resp. source plane) is perpendicular to the fiducial ray in the lens's (resp. source's) rest frame. The position of the source~$S$ with respect to the fiducial origin $F$ of the source plane is $\vect{s}$.
\begin{figure}[t]
\centering
\import{figures/}{one_lens_3d.pdf_tex}
\caption{Three-dimensional representation of the lensing by a thin lens in an arbitrary smooth reference space-time. All the lines are portions of geodesics of the reference space-time. The unlensed ray~$\geo{U}$ (dashed line) connects the observer~$O$ to the source~$S$. Its angular separation with the arbitrary fiducial ray~$\geo{F}$ (dotted line) is denoted $\vect{\beta}$. The physical ray~$\geo{P}$ (thick solid line) is separated from the fiducial ray by $\vect{\theta}=\vect{\beta}+\vect{\alpha}$; it is deflected by $\hat{\vect{\alpha}}$ at $P$ in the lens plane. The continued ray~$\geo{C}$ (light solid line) is the continuation of the physical ray if it were not deflected. The lens plane and source plane (grey) are orthogonal to the fiducial ray. The lens at $L$ is not represented in the figure.}
\label{fig:one_lens_3d}
\end{figure}
The unlensed ray~$\geo{U}$ is the geodesic of the reference space-time connecting $O$ to $S$; we call $\vect{\beta}$ its angular separation with respect to the fiducial ray. Hence, $\vect{\beta}$ represents the direction in which the source would be observed in the reference space-time, without the lens. The unlensed ray intersects the lens plane at $R$, whose position with respect to $L$ is $\vect{r}$.
The physical ray~$\geo{P}$ is made of two portions of geodesics of the reference space-time, which intersect at $P$ in the lens plane. The position~$\vect{x}$ of $P$ with respect to $L$ is where the photon pauses and is deflected. We denote with $\hat{\vect{\alpha}}$ the \emph{deflection angle} of the physical ray at $P$. Importantly, $\hat{\vect{\alpha}}$ is defined in the rest frame of the lens; it is thus subject to aberration effects when evaluated in another frame.
We denote with $\vect{\theta}$ the separation between the physical ray and the fiducial ray at $O$, i.e. the observed direction of the image in the presence of the lens. The difference $\vect{\alpha}\define \vect{\theta}-\vect{\beta}$, not to be confused with $\hat{\vect{\alpha}}$, may be called \emph{displacement angle}. The analogue of $\vect{\alpha}$ in the source plane, i.e. the difference between the emission directions of the physical and unlensed rays, is denoted $\vect{\sigma}$.
Finally, the continued ray~$\geo{C}$ is the null geodesic of the reference space-time that coincides with the physical ray between $O$ and $P$. As such, it is not deflected at $P$ and intersects the source plane at a different point~$I$, called image position. It represents the position of a source that would be observed in the direction $\vect{\theta}$ in the absence of the lens. We call $\vect{i}$ the vector connecting $F$ to $I$ in the source plane.
All the angles~$\vect{\theta}, \vect{\beta}, \vect{\alpha}, \hat{\vect{\alpha}}, \vect{\sigma}$ are assumed to be very small.
\subsection{Lens equation for one lens}
\label{subsec:lens_equation_one_lens}
Now that the geometric set-up has been fully described, we are ready to derive the lens equation for one thin deflector embedded in the arbitrary reference space-time.
\subsubsection{Light propagation in the reference space-time} In the reference space-time, by definition, the rays~$\geo{F}, \geo{U}, \geo{P}, \geo{C}$ are considered infinitesimally close to each other. Thus, the relative behaviour of any two of these rays can be described by the Sachs framework \citep{1961RSPSA.264..309S}, which is based on geodesic deviation. In what follows, we shall use a number of results of this formalism without deriving them; we refer the interested reader to, e.g., \citet[][Chapt. 2]{Fleury:2015hgz} for further details.
Let $\lambda$ be a past-oriented affine parameter along the fiducial ray~$\geo{F}$, such that $\lambda=0$ at $O$. Just like we defined the lens plane and source plane, we may introduce a local screen space at any point~$\lambda$ of $\geo{F}$. Let us denote with $\vect{x}(\lambda)$ the screen-space separation between the physical and fiducial rays at $\lambda$. This vector field thus interpolates between $\vect{0}$ at $O$, $\vect{x}$ in the lens plane and $\vect{s}$ in the source plane.
In the reference space-time, $\vect{x}(\lambda)$ satisfies the differential equation
\begin{equation}
\label{eq:Sachs_equation}
\ddf[2]{\vect{x}}{\lambda} = \vect{\mathcal{R}}(\lambda) \, \vect{x}(\lambda) \ ,
\end{equation}
where $\vect{\mathcal{R}}$ is a particular screen-space projection of the Riemann curvature tensor of the reference space-time, called the optical tidal matrix. In fact, because \cref{eq:Sachs_equation} is linear, we immediately conclude that it would equally apply to the separation of \emph{any} two rays in the reference space-time.
\subsubsection{Jacobi matrices} \Cref{eq:Sachs_equation} is a linear second-order differential equation, so any of its solutions is linearly related to its initial conditions. If these initial conditions are set at $O$, where $\vect{x}(0)=\vect{0}$, then there exists a $2\times 2$ matrix~$\mat{\mathcal{D}}$, called \emph{Jacobi matrix}, such that $\vect{x}(\lambda)=\mat{\mathcal{D}}(\lambda)\dot{\vect{x}}(0)$, where a dot denotes a derivative with respect to $\lambda$. The affine-parameter derivative~$\dot{\vect{x}}(0)$ of $\vect{x}$ at $O$ is related to the angle~$\vect{\theta}$ measured in the observer's rest frame as $\dot{\vect{x}}(0)=\omega_0\vect{\theta}$, with $\omega_0$ the cyclic frequency of light in that frame.\footnote{This relation is due to the most natural normalisation of the affine parameter. Let us denote with $u^\mu=\mathrm{d} x^\mu/\mathrm{d}\tau$ the four-velocity of an observer, with $\tau$ its proper time, and $k^\mu=\mathrm{d} x^\mu/\mathrm{d}\lambda$ the wave four-vector of the fiducial ray. It is customary to identify the photon's cyclic frequency with $\omega=-u_\mu k^\mu=\mathrm{d}\tau/\mathrm{d}\lambda$. This normalisation condition implies that for a change $\mathrm{d}\lambda$ the photon has travelled a proper distance $\mathrm{d}\ell=\mathrm{d}\tau=\omega\mathrm{d}\lambda$ in the observer's frame.\label{footnote:distance_affine_parameter}}
More generally, if any two rays of the reference space-time emerge from, or converge to, a point (a) with angular separation $\vect{\theta}\e{a}$, then their transverse separation~$\vect{x}\e{b}$ at another point (b) reads
\begin{equation}
\label{eq:Jacobi_definition}
\vect{x}\e{b}
= \mat{\mathcal{D}}\e{a\langle b} \, \dot{\vect{x}}\e{a}
= \mat{\mathcal{D}}\e{a\langle b} \, \omega\e{a}\vect{\theta}\e{a} \ ,
\end{equation}
where $\omega\e{a}$ is the fiducial photon's cyclic frequency as measured in the same rest frame where $\vect{\theta}\e{a}$ was defined.
The Jacobi matrix $\mat{\mathcal{D}}\e{a\langle b}$ and the product $\omega\e{a} \vect{\theta}\e{a}$ are frame-independent. The presence of $\omega\e{a}$ is thus essential to account for aberration effects in $\vect{\theta}\e{a}$. We stress that a,b are not indices; they represent the positions where $\vect{x}, \vect{\theta}$ are evaluated.
The non-standard notation ``a$\langle$b'' in $\mat{\mathcal{D}}\e{a\langle b}$ is designed to clearly indicate that the two rays merge at (a). If the roles of (a) and (b) were swapped, i.e. if we considered two rays merging at (b) instead of (a), then their separation at (a) would read $\vect{x}\e{a} = \mat{\mathcal{D}}\e{a\rangle b} \, \omega\e{b} \vect{\theta}\e{b}$, with
\begin{equation}\label{eq:Etherington}
\mat{\mathcal{D}}\e{a\rangle b} = - \mat{\mathcal{D}}\h{T}\e{a\langle b} \ ,
\end{equation}
where the T superscript indicates the matrix-transpose operation. \Cref{eq:Etherington} is known as Etherington's reciprocity law~\citep{1933PMag...15..761E}; see \citet[][\S~2.2.3]{Fleury:2015hgz} for a derivation using the same conventions as this article.
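As a simple numerical illustration of these notions, the Jacobi matrix can be computed by integrating the matrix form of \cref{eq:Sachs_equation}, and Etherington's law~\eqref{eq:Etherington} can then be checked directly. The following sketch (in Python) does so for an arbitrary, made-up optical tidal matrix; all function names and numerical values are purely illustrative, not part of the formalism.
\begin{verbatim}
# Toy sketch: integrate the Sachs equation d^2 x/dlambda^2 = R(lambda) x in
# matrix form to obtain the Jacobi matrix D_{a<b}, then check Etherington's
# law. The optical tidal matrix R below is an arbitrary, illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

def R(lam):
    return np.exp(-lam) * np.array([[-0.10,  0.02],
                                    [ 0.02, -0.08]])

def rhs(lam, Y):
    # Y packs the 2x2 matrix D and its derivative dD/dlambda (8 numbers).
    D, dD = Y[:4].reshape(2, 2), Y[4:].reshape(2, 2)
    return np.concatenate([dD.ravel(), (R(lam) @ D).ravel()])

def jacobi(lam_a, lam_b):
    # Jacobi matrix D_{a<b}: D = 0 and dD/dlambda = identity at lambda_a.
    Y0 = np.concatenate([np.zeros(4), np.eye(2).ravel()])
    sol = solve_ivp(rhs, (lam_a, lam_b), Y0, rtol=1e-10, atol=1e-12)
    return sol.y[:4, -1].reshape(2, 2)

D_ab = jacobi(0.0, 1.0)   # rays merging at (a), separation read at (b)
D_ba = jacobi(1.0, 0.0)   # rays merging at (b), separation read at (a)
print(np.allclose(D_ba, -D_ab.T, atol=1e-8))   # Etherington: True
\end{verbatim}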
\subsubsection{Lens equation} From the definition of the Jacobi matrix, we can express the position of the image~$\vect{i}$ with respect to the source~$\vect{s}$ in two different ways,
\begin{equation}\label{eq:i-s_two_expressions}
\vect{i}-\vect{s}
= \omega\e{o}\mat{\mathcal{D}}\e{o\langle s} \, \vect{\alpha}
= \omega\e{d}\mat{\mathcal{D}}\e{d\langle s} \, \hat{\vect{\alpha}} \ ,
\end{equation}
where o, d, s respectively refer to the observer, deflector (or lens), and source positions. The deflection angle $\hat{\vect{\alpha}}(\vect{x})$ depends on the position $\vect{x}$ where the physical ray pierces the lens plane. For a thin lens made of non-relativistic and non-compact matter, $\hat{\vect{\alpha}}$ is dictated by the surface mass density~$\Sigma(\vect{x})$ in the lens plane~\citep{1992grle.book.....S},
\begin{equation}\label{eq:def_alpha}
\hat{\vect{\alpha}}(\vect{x})
= \int \mathrm{d}^2\vect{y} \; 4G\Sigma(\vect{y}) \,
\frac{\vect{x}-\vect{y}}{|\vect{x}-\vect{y}|^2} \ ,
\end{equation}
where $|\ldots|$ denotes the Euclidean norm. The deflection angle can also be expressed as the gradient of $\hat{\psi}$, which is twice the projected gravitational potential of the lens,
\begin{equation}\label{eq:pot_lens}
\hat{\vect{\alpha}}(\vect{x})
= \ddf{\hat{\psi}}{\vect{x}} \ ,
\qquad
\hat{\psi}(\vect{x}) \define \int \mathrm{d}^2\vect{y} \; 4G \Sigma(\vect{y}) \ln|\vect{x}-\vect{y}| \ .
\end{equation}
The latter indeed satisfies the projected Poisson equation $\Delta \hat{\psi}=8\pi G\Sigma(\vect{x})$, where $\Delta$ denotes the two-dimensional Laplacian and $G$ is Newton's constant.
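For a point mass~$M$, i.e. $\Sigma(\vect{y})=M\delta^2(\vect{y})$, \cref{eq:def_alpha,eq:pot_lens} reduce to $\hat{\vect{\alpha}}(\vect{x})=4GM\,\vect{x}/|\vect{x}|^2$ and $\hat{\psi}(\vect{x})=4GM\ln|\vect{x}|$. The minimal sketch below (our own illustration, in units where $G=1$) checks numerically that $\hat{\vect{\alpha}}=\mathrm{d}\hat{\psi}/\mathrm{d}\vect{x}$ in that case.
\begin{verbatim}
# Minimal check of alpha = d(psi)/dx for a point lens (units G = 1).
import numpy as np

M = 1.0
psi   = lambda x: 4.0 * M * np.log(np.linalg.norm(x))
alpha = lambda x: 4.0 * M * x / np.linalg.norm(x)**2

x, h = np.array([0.7, -0.3]), 1e-6
grad = np.array([(psi(x + h*e) - psi(x - h*e)) / (2*h) for e in np.eye(2)])
print(np.allclose(grad, alpha(x), atol=1e-6))   # True
\end{verbatim}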
Since $\vect{x}=\mat{\mathcal{D}}\e{o\langle d} \omega\e{o} \vect{\theta}$, we conclude from \cref{eq:i-s_two_expressions} that the lens equation reads
\begin{empheq}[box=\fbox]{equation}
\label{eq:lens_equation_one_lens}
\vect{\beta}(\vect{\theta})
= \vect{\theta}
- (1+z\e{d}) \mat{\mathcal{D}}\e{o\langle s}^{-1}
\mat{\mathcal{D}}\e{d\langle s}
\hat{\vect{\alpha}}
(\mat{\mathcal{D}}\e{o\langle d}\omega\e{o}\vect{\theta}) \ ,
\end{empheq}
with $z\e{d}$ the observed redshift of the lens, $1+z\e{d}=\omega\e{d}/\omega\e{o}$.
\subsubsection{Important remarks}
\Cref{eq:lens_equation_one_lens} is fully general, provided that light indeed encounters only one rough-field region, which can be modelled as a thin lens. No assumption is made on the reference space-time apart from the smoothness of its curvature. Hence, the deflector's redshift $z\e{d}$ must not be understood as a cosmological redshift in general.
Since $\hat{\vect{\alpha}}$ represents the deflection angle in the rest frame of the lens, it is by definition independent of the lens's motion. In that context, the redshift term $1+z\e{d}$ encodes aberration effects. For instance, if the lens recedes from the observer, then its redshift increases, and the net observed deflection $(1+z\e{d})\hat{\vect{\alpha}}$ increases as well.
Let us finally stress that $\vect{\beta}$ is fundamentally linked to the reference space-time. Indeed, $\vect{\beta}$ represents the direction in which one would observe the source without the lens, i.e., if light were only propagating in smooth-field regions. In particular, it does not represent the direction where the source would be seen in an empty Universe, because the smooth-field regions do affect light propagation. If one wishes to work with another reference space-time, which does not correspond to the geometry where light propagates from one lens to another (e.g. Minkowski or FLRW), then one must explicitly allow for the difference between $\vect{\beta}$ and the new notion of unlensed direction. This will be illustrated with two concrete examples in \cref{subsec:one_lens_perturbed_FLRW,subsec:one_lens_Bianchi}.
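On the practical side, \cref{eq:lens_equation_one_lens} can be solved numerically for the image positions~$\vect{\theta}$ at fixed $\vect{\beta}$. The sketch below does so for a point lens; the matrices \texttt{Ads} and \texttt{Bod}, standing for $(1+z\e{d})\mat{\mathcal{D}}\e{o\langle s}^{-1}\mat{\mathcal{D}}\e{d\langle s}$ and $\omega\e{o}\mat{\mathcal{D}}\e{o\langle d}$, as well as all numerical values, are illustrative stand-ins rather than the output of an actual space-time model.
\begin{verbatim}
# Sketch: image positions from the generalised lens equation, for a point
# lens alphahat(x) = 4 G M x/|x|^2 (units G = 1) and toy Jacobi products.
import numpy as np
from scipy.optimize import fsolve

M   = 1.0e-6                                          # toy lens mass
Ads = 0.7 * np.array([[1.00, 0.03], [0.03, 0.96]])    # (1+z_d) D_os^-1 D_ds
Bod = 1.0e3 * np.array([[1.02, 0.00], [0.00, 0.98]])  # omega_o D_od

alphahat = lambda x: 4.0 * M * x / np.dot(x, x)
beta_of  = lambda theta: theta - Ads @ alphahat(Bod @ theta)

beta = np.array([1.0e-5, 0.0])                        # unlensed direction
for guess in ([3e-5, 1e-6], [-2e-5, -1e-6]):
    theta = fsolve(lambda t: beta_of(t) - beta, guess)
    print(theta, beta_of(theta) - beta)               # two images, residual ~ 0
\end{verbatim}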
\subsection{Amplification matrix for one lens}
\label{subsec:amplification_one_lens}
From the lens equation~\eqref{eq:lens_equation_one_lens}, we immediately derive the amplification matrix of the system, i.e. the Jacobian matrix of the lens mapping~$\vect{\theta}\mapsto\vect{\beta}(\vect{\theta})$,
\begin{equation}
\mat{\mathcal{A}}(\vect{\theta})
\define \ddf{\vect{\beta}}{\vect{\theta}}
= \mat{1} - \omega\e{d} \mat{\mathcal{D}}\e{o\langle s}^{-1}
\mat{\mathcal{D}}\e{d\langle s} \,
\hat{\mat{H}}
(\mat{\mathcal{D}}\e{o\langle d}\omega\e{o}\vect{\theta})\,
\mat{\mathcal{D}}\e{o\langle d} \ ,
\end{equation}
where $\hat{H}_{ij}\define \partial^2\hat{\psi}/\partial x^i \partial x^j$ is the Hessian matrix of the deflection potential. Note that, contrary to the latter, $\mat{\mathcal{A}}$ is generally not symmetric, due to the coupling between the lens and the reference space-time.
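Continuing the toy point-lens illustration of \cref{subsec:lens_equation_one_lens}, the following sketch evaluates this amplification matrix and the associated signed magnification $\mu=1/\det\mat{\mathcal{A}}$; the matrices \texttt{Ads}, \texttt{Bod} and all numbers are again illustrative stand-ins.
\begin{verbatim}
# Sketch: amplification matrix 1 - Ads H Bod and magnification 1/det,
# for a point lens psi = 4 G M ln|x| (units G = 1), toy Jacobi products.
import numpy as np

M   = 1.0e-6
Ads = 0.7 * np.array([[1.00, 0.03], [0.03, 0.96]])    # (1+z_d) D_os^-1 D_ds
Bod = 1.0e3 * np.array([[1.02, 0.00], [0.00, 0.98]])  # omega_o D_od

def hessian_psi(x):
    r2 = np.dot(x, x)
    return 4.0 * M * (np.eye(2)/r2 - 2.0*np.outer(x, x)/r2**2)

def amplification(theta):
    return np.eye(2) - Ads @ hessian_psi(Bod @ theta) @ Bod

Amat = amplification(np.array([5.8e-5, 1.0e-6]))      # near an image
print(Amat)                                           # generally not symmetric
print(1.0 / np.linalg.det(Amat))                      # signed magnification
\end{verbatim}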
\subsection{Time delay for one lens}
\label{subsec:time_delay_one_lens}
The time delay between the images of strong-lensing systems is a key observable in astronomy and cosmology~\citep{2020MNRAS.498.1420W}. Its expression in the presence of cosmological perturbations can be found in \citet{1992grle.book.....S, 1996ApJ...468...17B, Schneider:1997bq}, but without a direct proof. Here we propose a rigorous derivation of the time-delay formula, inspired by the wave-front method introduced by \citet[\S~5.3]{1992grle.book.....S}.
Let $\Delta t$ denote the time delay between the physical signal and the unlensed signal. In other terms, if a signal emitted by the source reached the observer at $t_0$ in the absence of the lens, then it would reach it at $t=t_0+\Delta t$ in the presence of the lens. Note that $\Delta t$ is not directly observable, because it involves two signals that propagate in different space-times. It is, however, a convenient theoretical intermediate.
The time delay is conveniently parameterised as $\Delta t(\vect{\theta}, \vect{\beta})$, because it generally depends on the source position $\vect{s}=\omega\e{o}\mat{\mathcal{D}}\e{o\langle s}\vect{\beta}$, and on the point $\vect{x}=\omega\e{o}\mat{\mathcal{D}}\e{o\langle d}\vect{\theta}$ where the physical ray pierces the lens plane. In terms of that parameterisation, the observable time delay between two images A and B of the same source reads $\Delta t\e{AB}(\vect{\beta})=t\e{A}(\vect{\beta})-t\e{B}(\vect{\beta})=\Delta t(\vect{\theta}\e{A},\vect{\beta})-\Delta t(\vect{\theta}\e{B},\vect{\beta})$.
\begin{figure}[t]
\centering
\begin{minipage}{0.3\textwidth}
\import{figures/}{time_delay_differential.pdf_tex}
\end{minipage}
\hfill
\begin{minipage}{0.65\textwidth}
\caption{Two signals, emitted simultaneously with an angle $\vect{\sigma}$ from $\vect{s}+\mathrm{d}\vect{s}$, are equivalent to two signals emitted with a relative delay $\mathrm{d} t\e{s}=\vect{\sigma}\cdot\mathrm{d}\vect{s}$ from $\vect{s}$. Here, everything happens as if the physical signal (solid) was emitted slightly before the unlensed signal (dashed), so that $\mathrm{d} t\e{s}<0$ consistently with the opposite orientations of $\vect{\sigma}$ and $\mathrm{d}\vect{s}$.}
\label{fig:time_delay_differential}
\end{minipage}
\end{figure}
\subsubsection{Differential time delay} The time delay between two signals emitted simultaneously in different directions depends on the source's position $\vect{s}$. Indeed, as depicted in \cref{fig:time_delay_differential}, if the source lies at $\vect{s}+\mathrm{d}\vect{s}$, then everything happens as if the two signals were emitted from $\vect{s}$ but with a slight relative delay~$\mathrm{d} t\e{s}=\vect{\sigma}\cdot\mathrm{d}\vect{s}$. Thus, if two signals emitted from $\vect{s}$ are observed with a time delay $\Delta t$, then shifting the source by $\mathrm{d}\vect{s}$ results in an additional delay
\begin{equation}
\label{eq:differential_time_delay}
\mathrm{d} \Delta t = (1+z\e{s})\mathrm{d} t\e{s} = (1+z\e{s})\vect{\sigma}(\vect{s})\cdot\mathrm{d}\vect{s} \ ,
\end{equation}
where the redshift factor $1+z\e{s}=\mathrm{d} t\e{o}/\mathrm{d} t\e{s}$ allows for the fact that $\mathrm{d} t\e{s}$ and $\mathrm{d}\Delta t$ were defined in distinct frames.
\subsubsection{Time-delay formula} From \cref{eq:differential_time_delay}, we see that the expression of the time delay~$\Delta t$ may be obtained by a line integral of the vector field $\vect{\sigma}(\vect{s})$, between an arbitrary reference point and $\vect{s}$. In general, the line integral of a vector field depends on the path along which the integral is performed; except if the vector field is a gradient, which turns out to be the case. Namely, the emission angle $\vect{\sigma}$ between the lensed and unlensed rays depends on the source position $\vect{s}$ as
\begin{equation}
\label{eq:sigma_is_gradient}
(1+z\e{s})\vect{\sigma}(\vect{s}) = \ddf{T}{\vect{s}} \ ,
\end{equation}
where the scalar function $T$ reads
\begin{empheq}[box=\fbox]{align}
\label{eq:time_delay_one_lens}
T(\vect{\theta},\vect{\beta})
&\define \frac{1}{2} \, (\vect{\theta}-\vect{\beta})\cdot\mat{\mathcal{T}}(\vect{\theta}-\vect{\beta})
- (1+z\e{d})\hat{\psi}(\omega\e{o} \mat{\mathcal{D}}\e{o\langle d} \vect{\theta}) \ ,
\\
\label{eq:time_delay_matrix}
\text{with}\quad
\mat{\mathcal{T}} &\define \omega\e{o} \mat{\mathcal{D}}\e{o\langle d}\h{T}\mat{\mathcal{D}}\e{d\langle s}^{-1}\mat{\mathcal{D}}\e{o\langle s} \ .
\end{empheq}
In \cref{eq:sigma_is_gradient}, the derivative $\mathrm{d} T/\mathrm{d}\vect{s}$ must be understood as a total derivative, in the sense that it accounts for the variation of \emph{both} natural variables $\vect{\theta}, \vect{\beta}$ of $T$ under a variation of $\vect{s}$. A detailed proof of \cref{eq:sigma_is_gradient} is provided in \cref{app:sigma_is_gradient}. Combining it with \cref{eq:differential_time_delay}, we immediately conclude that $\mathrm{d} \Delta t = \mathrm{d} T$, that is\footnote{Having noticed that $\vect{\sigma}(\vect{s})$ is a gradient makes our derivation of the time-delay formula simpler and more general than the one originally proposed in \citet{1992grle.book.....S}. In particular, we do not need to introduce a reference source whose contribution would be later set to zero on a caustic.}
\begin{empheq}[box=\fbox]{equation}
\Delta t(\vect{\theta}, \vect{\beta}) = T(\vect{\theta}, \vect{\beta}) + \mathrm{cst} \ .
\end{empheq}
Therefore, the time delay between two different images A, B of the same source reads $\Delta t\e{AB}(\vect{\beta})=T(\vect{\theta}\e{A},\vect{\beta})-T(\vect{\theta}\e{B},\vect{\beta})$. Note that any function of $\vect{\beta}$ could be added to the expression of $T(\vect{\theta},\vect{\beta})$ without changing the observable $\Delta t\e{AB}$.
The \emph{time-delay matrix}~$\mat{\mathcal{T}}$ generalises the more common notion of time-delay distance to an arbitrary reference space-time. Indeed, if the latter is chosen as the FLRW space-time, then $\mat{\mathcal{T}}=\bar{\tau}\,\mat{1}$, where $\bar{\tau}\define (1+z\e{d})\bar{D}\e{o\langle d}\bar{D}\e{o\langle s}/\bar{D}\e{d\langle s}$ is usually called the time-delay distance. Contrary to the Jacobi matrices that compose it, the time-delay matrix is symmetric,
\begin{equation}
\mat{\mathcal{T}}\h{T} = \mat{\mathcal{T}} \ .
\end{equation}
The time-delay matrix is related to, but different from, the telescope matrix introduced by \citet{1987ApJ...316...52K}, and then used by \citet{1992grle.book.....S, 1994A&A...287..349S, Schneider:1997bq} within the generalised quadrupole lens model. \citet{1987ApJ...316...52K, 1994A&A...287..349S, Schneider:1997bq} proved its symmetry with purely algebraic arguments. In \cref{app:symmetry_time_delay_matrix}, instead, we propose a geometric proof relying on Etherington's reciprocity law~\eqref{eq:Etherington}.
\subsubsection{Fermat's principle} Just like time-like and space-like geodesics can be defined from a stationary-time and stationary-length principle, null geodesics may be defined from Fermat's principle; see e.g. \citet{1992grle.book.....S}. In the present context, Fermat's principle states that the arrival time of a physical ray is stationary with respect to small variations of the position~$\vect{x}$ where it pierces the lens plane. In other words, all things being fixed (notably $\vect{s}, \vect{r}$), physical rays must satisfy $\partial T/\partial\vect{x}=\vect{0}$.
This property can be checked explicitly as follows. We first rewrite the first term of $T$ as $\vect{\alpha}\cdot\mat{\mathcal{T}}\vect{\alpha}=(1+z\e{d})\hat{\vect{\alpha}}\cdot\hat{\mat{\mathcal{T}}}\hat{\vect{\alpha}}$, where $\hat{\mat{\mathcal{T}}}=\omega\e{d}\mat{\mathcal{D}}\e{o\langle d}\mat{\mathcal{D}}\e{o\langle s}^{-1}\mat{\mathcal{D}}\e{d\langle s}$ is also a symmetric matrix. Then, using the identity $\hat{\mat{\mathcal{T}}}\hat{\vect{\alpha}}=\vect{x}-\vect{r}$ we immediately find
\begin{equation}
\pd{T}{\vect{x}} = (1+z\e{d}) \pac{ \hat{\vect{\alpha}}(\vect{x}) - \ddf{\hat{\psi}}{\vect{x}} } ,
\end{equation}
so that physical light rays are indeed those whose deflection angle $\hat{\vect{\alpha}}$ is dictated by the physics of the lens plane.
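This stationarity can also be verified numerically. The sketch below does so in a deliberately simplified toy setup, with a symmetric \texttt{Ads} $=(1+z\e{d})\mat{\mathcal{D}}\e{o\langle s}^{-1}\mat{\mathcal{D}}\e{d\langle s}$ and a scalar \texttt{Bod} $=\omega\e{o}\mat{\mathcal{D}}\e{o\langle d}$, chosen so that the resulting toy time-delay matrix is symmetric, as required above; all values are illustrative.
\begin{verbatim}
# Sketch: Fermat check, dT/dtheta ~ 0 at an image. Toy setup with symmetric
# Ads and scalar Bod, so that Tmat = (1+z_d) Bod^T Ads^-1 is symmetric.
import numpy as np
from scipy.optimize import fsolve

M, zd = 1.0e-6, 0.5
Ads = 0.7 * np.array([[1.00, 0.03], [0.03, 0.96]])
Bod = 1.0e3 * np.eye(2)
Tmat = (1 + zd) * Bod.T @ np.linalg.inv(Ads)     # toy time-delay matrix

alphahat = lambda x: 4.0 * M * x / np.dot(x, x)
psihat   = lambda x: 4.0 * M * np.log(np.linalg.norm(x))

beta = np.array([1.0e-5, 0.0])
T = lambda th: 0.5*(th-beta) @ Tmat @ (th-beta) - (1+zd)*psihat(Bod @ th)

theta = fsolve(lambda t: t - Ads @ alphahat(Bod @ t) - beta, [3e-5, 1e-6])
h = 1e-10
grad = [(T(theta + h*e) - T(theta - h*e))/(2*h) for e in np.eye(2)]
print(grad)   # ~ 0 at the image (compare with ~ 0.1 away from it)
\end{verbatim}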
\subsection{Example: one lens in a perturbed cosmological background}
\label{subsec:one_lens_perturbed_FLRW}
Suppose that the reference space-time can be treated as a weakly perturbed homogeneous-isotropic FLRW model. The associated Jacobi matrix takes the form
\begin{equation}
\label{eq:Jacobi_perturbed_FLRW}
\omega\e{a}\mat{\mathcal{D}}\e{a\langle b}
= \bar{D}\e{a\langle b} \, \mat{\mathcal{A}}\e{a\langle b} \ .
\end{equation}
In \cref{eq:Jacobi_perturbed_FLRW}, $\bar{D}\e{a\langle b}=(1+z\e{b})^{-1}f_K(\chi\e{b}-\chi\e{a})$ denotes the angular diameter distance of (b) measured from (a) in the FLRW space-time, $\chi$ being the radial comoving distance, and $f_K(\chi)\define\sin(\sqrt{K}\chi)/\sqrt{K}$, with $K$ the spatial-curvature parameter.
The second quantity,
\begin{equation}
\mat{\mathcal{A}}\e{a\langle b}
= \begin{bmatrix}
1-\kappa\e{a\langle b} - \Re(\gamma\e{a\langle b}) & - \Im(\gamma\e{a\langle b}) \\
- \Im(\gamma\e{a\langle b}) & 1-\kappa\e{a\langle b} + \Re(\gamma\e{a\langle b})
\end{bmatrix} ,
\end{equation}
is the amplification matrix due to cosmological perturbations about FLRW, i.e. caused by the large-scale matter inhomogeneities in the Universe. Its key components are the convergence~$\kappa\e{a\langle b}$ and complex shear~$\gamma\e{a\langle b}$; we have neglected the anti-symmetric part of $\mat{\mathcal{A}}\e{a\langle b}$, which represents the rotation of light beams with respect to parallel transport, because it is of the order of $\gamma^2$ if $\gamma\ll 1$ \citep[][\S~2.3.2]{Fleury:2015hgz}. At linear order in the matter density contrast~$\delta(\eta,\chi,\vect{x})\define(\rho-\bar{\rho})/\bar{\rho}$, the convergence and shear read~\citep{Fleury:2018cro}
\begin{align}
\label{eq:convergence}
\kappa\e{a\langle b}(\vect{\theta})
&= \frac{3}{2} \Omega\e{m0} H_0^2
\int_{\chi\e{a}}^{\chi\e{b}} \mathrm{d}\chi \; (1+z) \, \frac{f_K(\chi\e{b}-\chi)f_K(\chi-\chi\e{a})}{f_K(\chi\e{b}-\chi\e{a})}\,
\delta[\eta_0-\chi, \chi, f_K(\chi)\vect{\theta}] \ , \\
\gamma\e{a\langle b}(\vect{\theta})
&= -\frac{3}{2} \Omega\e{m0} H_0^2
\int_{\chi\e{a}}^{\chi\e{b}} \mathrm{d}\chi \; (1+z) \, \frac{f_K(\chi\e{b}-\chi)f_K(\chi-\chi\e{a})}{f_K(\chi\e{b}-\chi\e{a})}
\nonumber\\&\hspace{5cm} \times
\int_{\mathbb{R}^2} \frac{\mathrm{d}^2\vect{x}}{\pi x^2} \; \ex{2\ii\ph}\,
\delta[\eta_0-\chi, \chi, f_K(\chi)\vect{\theta}+\vect{x}] \ ,
\end{align}
where $\eta_0$ denotes today's conformal time, and $\ph$ is the polar angle of $\vect{x}=x(\cos\ph, \sin\ph)$.
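To get a sense of the magnitudes involved, the sketch below evaluates the convergence~\eqref{eq:convergence} along the full line of sight, for a spatially flat Einstein-de Sitter background (where $f_K(\chi)=\chi$ and $1+z=(1-H_0\chi/2)^{-2}$) and a toy, constant density contrast; the values of $\delta$ and of the source redshift are arbitrary choices of ours.
\begin{verbatim}
# Sketch: kappa_{o<s} from the line-of-sight integral, flat EdS background
# (units H0 = 1, Omega_m0 = 1), toy constant density contrast delta0.
import numpy as np
from scipy.integrate import quad

delta0 = 0.01
chi_s  = 2.0 * (1.0 - 1.0/np.sqrt(2.0))    # comoving distance to z_s = 1

g = lambda chi: (1.0 - 0.5*chi)**(-2) * chi*(chi_s - chi)/chi_s * delta0
kappa_os = 1.5 * quad(g, 0.0, chi_s)[0]
print(kappa_os)                            # of order 1e-3 for these numbers
\end{verbatim}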
\subsubsection{Lens equation}
\label{subsubsec:lens_equation_perturbed_FLRW}
In the lens equation~\eqref{eq:lens_equation_one_lens}, $\vect{\beta}$ stands for the direction in which a source would be observed without the lens, but still in the presence of the weak cosmological perturbations. In the lensing literature, however, it is understandably more common to write the lens equation in terms of the completely unlensed source position. That direction, which we shall denote $\bar{\vect{\beta}}=\mat{\mathcal{A}}\e{o\langle s}\vect{\beta}$, would be the observed position of the source without \emph{both} the strong lens and the cosmological perturbations, i.e. in an ideal FLRW Universe. In terms of $\bar{\vect{\beta}}$, \cref{eq:lens_equation_one_lens} reads
\begin{equation}
\label{eq:lens_equation_one_lens_cosmological_perturbations}
\bar{\vect{\beta}}(\vect{\theta})
= \mat{\mathcal{A}}\e{o\langle s}\vect{\theta}
- \frac{f_K(\chi\e{s}-\chi\e{d})} {f_K(\chi\e{s})}\,
\mat{\mathcal{A}}\e{d\langle s} \,
\hat{\vect{\alpha}}(\bar{D}\e{o\langle d}\mat{\mathcal{A}}\e{o\langle d}\vect{\theta}) \ .
\end{equation}
\Cref{eq:lens_equation_one_lens_cosmological_perturbations} had already been obtained in the literature under various forms; for instance, it is equivalent to eq.~(6.7) of \citet{1987ApJ...316...52K}, eq.~(14) of \citet{1996ApJ...468...17B}, eq.~(35) of \citet{McCully:2013fga}; it also coincides with eq.~(3.3) of \citet{Birrer:2016xku} in the critical-mass-sheet approximation defined therein.
\subsubsection{Equivalent lens}
In the absence of cosmological perturbations, i.e. for $\mat{\mathcal{A}}\e{a\langle b}=\mat{1}$, the lens equation would take the form
\begin{equation}\label{eq:one_lens_without_perturbation}
\bar{\vect{\beta}}=\vect{\theta}-\bar{\vect{\alpha}}(\vect{\theta}) \ ,
\end{equation}
with $\bar{\vect{\alpha}}(\vect{\theta})=[f_K(\chi\e{s}-\chi\e{d})/f_K(\chi\e{s})]\hat{\vect{\alpha}}(\bar{D}\e{o\langle d}\vect{\theta})$. In the presence of perturbations, the lens equation~\eqref{eq:lens_equation_one_lens_cosmological_perturbations} is formally equivalent to \cref{eq:one_lens_without_perturbation}, if one replaces $\bar{\vect{\alpha}}(\vect{\theta})$ by
\begin{align}
\vect{\alpha}\e{eq}(\vect{\theta})
&= (\mat{1} - \mat{\mathcal{A}}\e{o\langle s})\vect{\theta}
+ \mat{\mathcal{A}}\e{d\langle s} \bar{\vect{\alpha}}
(\mat{\mathcal{A}}\e{o\langle d}\vect{\theta})
\\
&\approx
\pac{
(\mat{1} - \mat{\mathcal{A}}\e{o\langle s})
+ \ddf{\bar{\vect{\alpha}}}{\vect{\theta}}
(\mat{\mathcal{A}}\e{o\langle d}-\mat{1})
} \vect{\theta}
+ \mat{\mathcal{A}}\e{d\langle s} \bar{\vect{\alpha}}(\vect{\theta}) \ ,
\end{align}
which may be called the equivalent lens. This shows that, in full generality, a lens model~$\bar{\vect{\alpha}}$ must be supplemented with 9 real parameters (3 $\kappa\e{a\langle b}$ and 3 complex $\gamma\e{a\langle b}$) to properly account for cosmological perturbations. Degeneracies between these parameters might occur if $\bar{\vect{\alpha}}$ enjoys symmetries.
In general, $\vect{\alpha}\e{eq}(\vect{\theta})$ cannot be written as a gradient, which means that it does not derive from a potential. An alternative approach \citep{Schneider:1997bq} which circumvents this issue consists in first applying a transformation $\vect{\beta}\mapsto\tilde{\vect{\beta}}\define \mat{\mathcal{A}}\e{o\langle d}\mat{\mathcal{A}}\e{d\langle s}^{-1}\vect{\beta}$ to the source plane. The resulting equivalent lens then derives from a potential.
\subsubsection{Amplification matrix}
Following the discussion of \cref{subsubsec:lens_equation_perturbed_FLRW}, we shall consider amplifications with respect to the homogeneous Universe, rather than the amplification due to the sole lens. The corresponding ``total'' amplification matrix is defined as $\mat{\mathcal{A}}\e{tot}=\mathrm{d}\bar{\vect{\beta}}/\mathrm{d}\vect{\theta}=\mat{\mathcal{A}}\e{o\langle s}\mat{\mathcal{A}}$, and reads
\begin{align}
\mat{\mathcal{A}}\e{tot}(\vect{\theta})
&= \mat{\mathcal{A}}\e{o\langle s}
- \frac{f_K(\chi\e{s}-\chi\e{d})f_K(\chi\e{d})} {(1+z\e{d})f_K(\chi\e{s})} \,
\mat{\mathcal{A}}\e{d\langle s} \,
\hat{\mat{H}}(\bar{D}\e{o\langle d}\mat{\mathcal{A}}\e{o\langle d}\vect{\theta}) \,
\mat{\mathcal{A}}\e{o\langle d} \\
&= \mat{\mathcal{A}}\e{o\langle s}
- \mat{\mathcal{A}}\e{d\langle s} \pac{\mat{1}-\overline{\mat{\mathcal{A}}}(\mat{\mathcal{A}}\e{o\langle d}\vect{\theta})} \mat{\mathcal{A}}\e{o\langle d} \ ,
\label{eq:total_amplification_one_lens_cosmological_perturbations}
\end{align}
where $\overline{\mat{\mathcal{A}}}\define\mat{1}- \mathrm{d}\bar{\vect{\alpha}}/\mathrm{d}\vect{\theta}$ is the amplification matrix that would characterise the lens in the absence of cosmological perturbations, i.e. in an ideal FLRW reference space-time. \Cref{eq:total_amplification_one_lens_cosmological_perturbations} confirms that line-of-sight perturbations do not only add to the effect of a lens, but they also modify the effect of the lens itself.
\subsubsection{Time delays}
For a perturbed FLRW reference space-time, the general expression~\eqref{eq:time_delay_one_lens} of the time delay becomes
\begin{align}
T(\vect{\theta},\bar{\vect{\beta}})
&= \frac{\bar{\tau}}{2} \,
(\vect{\theta}-\mat{\mathcal{A}}\e{o\langle s}^{-1}\bar{\vect{\beta}})\cdot
\mat{\mathcal{A}}\e{o\langle d}
\mat{\mathcal{A}}\e{d\langle s}^{-1}
\mat{\mathcal{A}}\e{o\langle s}
(\vect{\theta}-\mat{\mathcal{A}}\e{o\langle s}^{-1}\bar{\vect{\beta}})
- (1+z\e{d})\hat{\psi}(\bar{D}\e{o\langle d}\mat{\mathcal{A}}\e{o\langle d}\vect{\theta})
\\
&=
\bar{T}(\vect{\theta},\bar{\vect{\beta}})
+ \delta T(\vect{\theta},\bar{\vect{\beta}})
\end{align}
at first order in the cosmological perturbations, where, on the one hand
\begin{equation}
\bar{T}(\vect{\theta}, \bar{\vect{\beta}})
\define
\bar{\tau}
\pac{\frac{1}{2}
|\vect{\theta}-\bar{\vect{\beta}}|^2
- \bar{\psi}(\vect{\theta})
}
\end{equation}
would be the time-delay function if the reference space-time were strictly homogeneous and isotropic, with $\bar{\tau}=(1+z\e{d}) \bar{D}\e{o\langle d}\bar{D}\e{o\langle s}/\bar{D}\e{d\langle s}$ the FLRW time-delay distance and $\bar{\psi}(\vect{\theta})= \bar{D}\e{d\langle s} \bar{D}\e{o\langle s}^{-1} \bar{D}\e{o\langle d}^{-1} \hat{\psi}(\bar{D}\e{o\langle d}\vect{\theta})$ the background lensing potential; on the other hand,\footnote{Note that we substituted the lens equation to obtain this expression of $\delta T$. A perhaps surprising consequence is that $\partial T/\partial\vect{\theta}\neq \vect{0}$ using that expression.}
\begin{equation}
\delta T(\vect{\theta},\bar{\vect{\beta}})
\define
\frac{1}{2} \, \bar{\tau}
(\vect{\theta}-\bar{\vect{\beta}}) \cdot
\pac{
(\delta\mat{\mathcal{A}}\e{o\langle s}-
\delta\mat{\mathcal{A}}\e{o\langle d})
(\vect{\theta}+\bar{\vect{\beta}})
-
\delta\mat{\mathcal{A}}\e{d\langle s}
(\vect{\theta}-\bar{\vect{\beta}})
} ,
\end{equation}
where we denoted $\delta\mat{\mathcal{A}}\e{a\langle b}\define \mat{\mathcal{A}}\e{a\langle b}-\mat{1}$ for short, gathers all the corrections due to cosmological perturbations.
Taken at face value, the correction~$\delta T$ thereby induced is quite complex. However, for practical analyses of time-delay observations, these may be reduced to a \emph{single external convergence and shear}. First, since the source position~$\bar{\vect{\beta}}$ is unknown and hence a free parameter in such analyses, it does not make any difference whether one considers $\vect{\beta}=\mat{\mathcal{A}}\e{o\langle s}^{-1}\bar{\vect{\beta}}$ instead. Second, if the lens model $\bar{\psi}(\vect{\theta})$ is general enough,\footnote{In particular, an elliptic model may not suffice, since $\gamma\e{o\langle d}$ is generally not aligned with the intrinsic ellipticity of the lens.} then it may effectively account for the corrections due to $\mat{\mathcal{A}}\e{o\langle d}$ in the argument of $\hat{\psi}$. In that context, the time-delay model that must be used reads
\begin{equation}
T\e{mod}(\vect{\theta}, \vect{\beta})
=
\bar{\tau}
\pac{\frac{1}{2}
(\vect{\theta}-\vect{\beta})\cdot
\mat{\mathcal{A}}\e{ext}
(\vect{\theta}-\vect{\beta})
- \psi\e{mod}(\vect{\theta})
} ,
\end{equation}
with an ``external'' amplification matrix
\begin{equation}
\mat{\mathcal{A}}\e{ext}
\define
\mat{\mathcal{A}}\e{o\langle d}
\mat{\mathcal{A}}\e{d\langle s}^{-1}
\mat{\mathcal{A}}\e{o\langle s}
\approx
\mat{1} -
\begin{bmatrix}
\kappa\e{ext}+\Re(\gamma\e{ext}) & \Im(\gamma\e{ext}) \\
\Im(\gamma\e{ext}) & \kappa\e{ext}-\Re(\gamma\e{ext})
\end{bmatrix} ,
\end{equation}
featuring an external convergence
$
\kappa\e{ext} = \kappa\e{o\langle d} + \kappa\e{o\langle s} - \kappa\e{d\langle s}
$
and shear
$
\gamma\e{ext} = \gamma\e{o\langle d} + \gamma\e{o \langle s} - \gamma\e{d\langle s}
$.
While the external convergence is routinely implemented in current time-delay analyses~\citep{2020A&A...642A.194G}, we are not aware of any practical implementation of the external shear, although its relevance was suggested by \citet{McCully:2016yfe}.
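For completeness, once the three pairs $(\kappa\e{a\langle b}, \gamma\e{a\langle b})$ have been estimated, assembling $\kappa\e{ext}$ and $\gamma\e{ext}$ is elementary; a minimal sketch with made-up numbers follows.
\begin{verbatim}
# Sketch: external convergence and shear from the three line-of-sight
# contributions (toy values; the shear is handled as a complex number).
kappa = {"od": 0.02, "os": 0.05, "ds": 0.03}
gamma = {"od": 0.01 + 0.02j, "os": 0.03 - 0.01j, "ds": 0.02 + 0.01j}

kappa_ext = kappa["od"] + kappa["os"] - kappa["ds"]
gamma_ext = gamma["od"] + gamma["os"] - gamma["ds"]
print(kappa_ext, gamma_ext)   # approx. 0.04 and 0.02+0j
\end{verbatim}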
\subsection{Example: lensing in an anisotropic Universe}
\label{subsec:one_lens_Bianchi}
As a second illustration, consider the case of a lens placed in a homogeneous but anisotropic Universe. If its homogeneity hyper-surfaces have no intrinsic curvature, then it may be described by the Bianchi~I geometry~\citep{1969CMaPh..12..108E}. In the Bianchi~I space-time, cosmic expansion is the same everywhere, but it is faster in some directions than in others. In comoving coordinates, its metric reads
\begin{equation}
\mathrm{d} s^2 = a^2(\eta)
\pac{ -\mathrm{d}\eta^2
+ \ex{2\beta_x(\eta)}\mathrm{d} x^2
+ \ex{2\beta_y(\eta)}\mathrm{d} y^2
+ \ex{2\beta_z(\eta)}\mathrm{d} z^2 } ,
\end{equation}
where $a(\eta)$ is the volume scale factor, and the three $\beta_i(\eta)$, which sum to zero, encode the anisotropy of expansion.
The propagation of light in Bianchi~I cosmologies has been thoroughly investigated in \citet{Fleury:2014rea, Fleury:2016htl}, thereby extending previous works on the same topic~\citep{saunders_observations_1969}. Let us restrict for simplicity to the weak-anisotropy limit ($\beta_i\ll 1$), where the optical properties of Bianchi~I reduce to those of a perturbed FLRW whose amplification matrix reads
\begin{align}
\mat{\mathcal{A}}\h{BI}\e{o\langle s}
= \mat{1}
+ \mat{\mathcal{B}}(\eta\e{s})
- \mat{\mathcal{B}}(\eta\e{o})
- \frac{2}{\eta\e{o}-\eta\e{s}} \int_{\eta\e{s}}^{\eta\e{o}} \mathrm{d}\eta
\pac{\mat{\mathcal{B}}(\eta)+\frac{1}{2}\mathrm{tr}\mat{\mathcal{B}}(\eta)}
\ .
\end{align}
We used the matrix $\mathcal{B}_{AB}(\eta)\define \vect{s}_A\h{o} \cdot \mathrm{diag}[\beta_x(\eta), \beta_y(\eta), \beta_z(\eta)] \vect{s}_B\h{o}$ where $\vect{s}_1\h{o}, \vect{s}_2\h{o}$ denote the Sachs basis at the observer. Note that in $\mat{\mathcal{A}}\h{BI}\e{o\langle s}$ the source (s) and observer (o) are defined from their conformal time; this does not account for the change in redshift due to the anisotropic expansion, which reads $1+z=(1+\bar{z})[1+\mathrm{tr}\mat{\mathcal{B}}(\eta\e{s})-\mathrm{tr}\mat{\mathcal{B}}(\eta\e{o})]$.
All the results of \cref{subsec:one_lens_perturbed_FLRW} can be used to describe lensing in a weakly anisotropic Universe, if one uses $\mat{\mathcal{A}}\h{BI}\e{o\langle s}$ as the amplification matrix throughout.
\section{$N$ lenses in an arbitrary reference space-time}
\label{sec:N_lenses}
Let us now turn to the more involved case where light travels through an arbitrary number $N$ of rough-field regions. A three-dimensional representation of the set-up is depicted in \cref{fig:N_lenses}, where the various quantities are defined in the same way as in the one-lens case. To our knowledge, such a situation had never been considered in full generality, although the hybrid framework proposed by \citet{McCully:2013fga} may allow one to treat the most relevant cases in practice.
\begin{figure}[t]
\centering
\import{figures/}{multi-plane_N_lenses.pdf_tex}
\caption{Same as \cref{fig:one_lens_3d}, but with $N$ lenses labelled by $l$. The transverse vector~$\vect{x}_l$ represents the position where the physical ray intersects the $l$th lens plane. The observer would correspond to $l=0$ ($\vect{x}_0=0$) and the source to $l=N+1$ ($\vect{x}_{N+1}=\vect{s}$).}
\label{fig:N_lenses}
\end{figure}
\subsection{Lens equation for $N$ lenses}
For $N$ lenses, the most direct derivation of the lens equation differs from the single-lens case in its structure. Let us introduce again a past-oriented affine parameter~$\lambda$ along $\geo{F}$, such that $\lambda=0$ at the observer and increases towards the source. We still denote with $\vect{x}(\lambda)$ the transverse separation between the physical and fiducial rays. Between two successive lens planes, $\vect{x}(\lambda)$ evolves smoothly according to the Sachs equation~\eqref{eq:Sachs_equation} of the reference space-time. When light reaches a lens plane, its sudden deflection implies a discontinuity of the derivative~$\dot{\vect{x}}\define\mathrm{d}\vect{x}/\mathrm{d}\lambda$.
\subsubsection{Deflection by a lens}
Let us denote by $\Delta\dot{\vect{x}}_l\define\dot{\vect{x}}(\lambda_l^+)-\dot{\vect{x}}(\lambda_l^-)$ the discontinuity of $\dot{\vect{x}}$ in the $l$th plane. This quantity is not exactly the deflection angle $\hat{\vect{\alpha}}_l$, because the latter represents the discontinuity in the way $\vect{x}$ changes with \emph{proper distance}~$\ell$ in the lens's rest frame. As shown in \cref{footnote:distance_affine_parameter}, proper distance and affine parameter are related by $\mathrm{d}\ell=\omega\mathrm{d}\lambda$, so that
\begin{equation}
\Delta\dot{\vect{x}}_l
= \pa{\ddf{\ell}{\lambda}}_l \Delta\pa{\ddf{\vect{x}}{\ell}}_l
= - \omega_l \hat{\vect{\alpha}}_l \ ,
\end{equation}
where the minus sign comes from the conventional orientation of $\hat{\vect{\alpha}}$.
\subsubsection{Between two lenses}
Contrary to \cref{sec:one_lens}, the propagation from one lens plane to the next one does not necessarily involve merging rays; as such, it cannot be expressed in terms of Jacobi matrices only. However, we may still exploit the fact that $\vect{x}(\lambda)$ is the solution of a linear second-order differential equation~\eqref{eq:Sachs_equation}. In full generality, the state of that solution at $\lambda$ is encoded in the ``phase space'' vector
\begin{equation}
\vect{X}(\lambda) \define
\begin{bmatrix}
\vect{x}(\lambda)\\
\dot{\vect{x}}(\lambda)
\end{bmatrix} \ .
\end{equation}
Cauchy's theorem then implies the existence of a $4\times 4$ \emph{Wronski matrix}~$\mat{\mathcal{W}}$ such that for any $\lambda_1, \lambda_2$,
\begin{equation}
\label{eq:definition_Wronski}
\vect{X}(\lambda_2) = \mat{\mathcal{W}}(\lambda_2\leftarrow\lambda_1) \vect{X}(\lambda_1) \ .
\end{equation}
Although the Wronski matrix will not appear in the final lens equation, it is a very convenient tool for its derivation. Note that the top-right $2\times 2$ block of $\mat{\mathcal{W}}(\lambda_2\leftarrow\lambda_1)$ is actually the Jacobi matrix~$\mat{\mathcal{D}}_{1\langle 2}$. Indeed, if two rays cross at $\lambda_1$, then $\vect{X}_1=(\vect{0},\dot{\vect{x}}_1)$, and $\vect{X}_2=(\mat{\mathcal{D}}_{1\langle 2}\dot{\vect{x}}_1,\dot{\vect{x}}_2)$. Another important property of $\mat{\mathcal{W}}$, which trivially follows from its definition~\eqref{eq:definition_Wronski}, is the product law
\begin{equation}
\mat{\mathcal{W}}(\lambda_3\leftarrow\lambda_1)
=
\mat{\mathcal{W}}(\lambda_3\leftarrow\lambda_2)
\mat{\mathcal{W}}(\lambda_2\leftarrow\lambda_1) \ .
\end{equation}
In what follows, we denote by $\mat{\mathcal{W}}(l+1\leftarrow l)$ the Wronski matrix that relates $\vect{X}_{l+1}^-\define\vect{X}(\lambda_{l+1}^-)$ before its deflection in the $(l+1)$th plane, to $\vect{X}_l^+\define\vect{X}(\lambda_l^+)$ after its deflection in the $l$th plane,
\begin{equation}
\label{eq:transfer_l_l+1}
\vect{X}_{l+1}^- = \mat{\mathcal{W}}(l+1\leftarrow l) \vect{X}_{l}^+ .
\end{equation}
\subsubsection{Recursion relation and lens equation} Since $\vect{x}(\lambda)$ is continuous at $\lambda_l$, the discontinuity of the phase-space vector $\vect{X}$ reads
\begin{equation}
\label{eq:discontinuity_X}
\Delta\vect{X}_l \define \vect{X}_l^+ - \vect{X}_l^-
=
\begin{bmatrix}
\Delta\vect{x}_l \\
\Delta\dot{\vect{x}}_l
\end{bmatrix}
=
\begin{bmatrix}
\vect{0}\\
-\omega_l\hat{\vect{\alpha}}_l
\end{bmatrix} \ .
\end{equation}
Denoting $\vect{X}_l\define\vect{X}_l^-$ (just before deflection at $\lambda_l$) for short, \cref{eq:transfer_l_l+1,eq:discontinuity_X} yield the recursion relation
$
\vect{X}_{l+1} = \mat{\mathcal{W}}(l+1\leftarrow l) (\vect{X}_l + \Delta\vect{X}_l)
$,
which is solved as
\begin{equation}
\vect{X}_l
= \mat{\mathcal{W}}(l\leftarrow\mathrm{o}) \vect{X}\e{o}
+ \sum_{m=1}^{l-1} \mat{\mathcal{W}}(l\leftarrow m) \Delta\vect{X}_m \ .
\end{equation}
Finally, isolating the first two components, $\vect{x}_l$, of $\vect{X}_l$ in the above, noting that $\vect{X}_0=(\vect{0},\omega\e{o}\vect{\theta})$, and expressing the result in terms of $\vect{\beta}_l\define (\omega\e{o}\mat{\mathcal{D}}_{\mathrm{o}\langle l})^{-1}\vect{x}_l$, we find
\begin{empheq}[box=\fbox]{equation}
\label{eq:recursion_beta_l}
\vect{\beta}_l = \vect{\theta}
- \sum_{m=1}^{l-1} (1+z_m)
\mat{\mathcal{D}}_{\mathrm{o}\langle l}^{-1} \mat{\mathcal{D}}_{m\langle l}
\hat{\vect{\alpha}}_m(\vect{x}_m)
\end{empheq}
for any $l$, which only involves Jacobi matrices. The angle $\vect{\beta}_l$ represents the direction in which a source at $\vect{x}_l$ in the $l$th plane would be observed in the absence of foreground lenses $m<l$. The case $l=N+1$, with $\vect{\beta}_{N+1}=\vect{\beta}$, yields the explicit lens equation
\begin{empheq}[box=]{equation}
\label{eq:lens_equation_N_lenses}
\vect{\beta}
= \vect{\theta}
- \sum_{l=1}^N (1+z_l) \mat{\mathcal{D}}\e{o\langle s}^{-1}\mat{\mathcal{D}}_{l\langle\mathrm{s}} \, \hat{\vect{\alpha}}_l(\vect{x}_l) \ .
\end{empheq}
Note that the single-lens equation~\eqref{eq:lens_equation_one_lens} is recovered for $N=1$. \Cref{eq:recursion_beta_l} matches eq.~(10) of \citet{Schneider:2014vka}, derived for a perturbed FLRW reference space-time.
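The recursion~\eqref{eq:recursion_beta_l} translates directly into a plane-by-plane ray-shooting algorithm. The sketch below implements it for $N=2$ point lenses, with scalar stand-ins for the products $\omega\e{o}\mat{\mathcal{D}}_{a\langle b}$ (a realistic application would use $2\times 2$ matrices instead); the redshifts, masses and distances are all illustrative.
\begin{verbatim}
# Sketch: multi-plane ray shooting via the recursion for beta_l.
# Planes: 0 = observer, 1..N = lenses, N+1 = source. Dw[(a,b)] stands for
# the scalar product omega_o * D_{a<b}; z, M, pos describe toy point lenses.
import numpy as np

N   = 2
z   = [0.3, 0.7]
M   = [1.0e-6, 5.0e-7]                       # lens masses (units G = 1)
pos = [np.zeros(2), np.array([0.01, 0.0])]   # lens centres in their planes
Dw  = {(0,1): 0.8e3, (0,2): 1.5e3, (0,3): 2.0e3,
       (1,2): 0.9e3, (1,3): 1.6e3, (2,3): 0.9e3}

def alphahat(m, x):                          # point lens in plane m (1-based)
    d = x - pos[m-1]
    return 4.0 * M[m-1] * d / np.dot(d, d)

def betas(theta):
    theta = np.asarray(theta, dtype=float)
    beta, x = {1: theta}, {}
    for p in range(2, N + 2):                # planes p = 2, ..., N+1
        x[p-1] = Dw[(0, p-1)] * beta[p-1]    # x_l = omega_o D_{o<l} beta_l
        beta[p] = theta - sum((1 + z[m-1]) * Dw[(m, p)] / Dw[(0, p)]
                              * alphahat(m, x[m]) for m in range(1, p))
    return beta

print(betas([4.0e-5, 1.0e-6])[N + 1])        # source position beta
\end{verbatim}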
\subsection{Amplification matrix for $N$ lenses}
Just like the lens equation is a recursion relation, the amplification matrix for $N$ lenses takes a recursive form. From $\vect{\beta}_l$ we shall define the intermediate amplification matrix $\mat{\mathcal{A}}_l\define \mathrm{d}\vect{\beta}_l/\mathrm{d}\vect{\theta}$, which characterises the distortions of an infinitesimal source in the $l$th plane due to the foreground lenses~$m<l$. By differentiating \cref{eq:recursion_beta_l}, we find the recursion relation
\begin{equation}
\mat{\mathcal{A}}_l
= \mat{1}
- \sum_{m=1}^{l-1} \omega_m
\mat{\mathcal{D}}_{\mathrm{o}\langle l}^{-1}
\mat{\mathcal{D}}_{m\langle l} \,
\hat{\mat{H}}_m(\vect{x}_m) \,
\mat{\mathcal{D}}_{\mathrm{o}\langle m}
\mat{\mathcal{A}}_m \ ,
\end{equation}
where $\hat{H}^m_{ij}\define \partial^2\hat{\psi}_m/\partial x^i \partial x^j$, and with initial condition $\mat{\mathcal{A}}_{1}=\mat{1}$ since $\vect{\beta}_1=\vect{\theta}$. The complete amplification matrix, accounting for all the lenses, is $\mat{\mathcal{A}}\define\mathrm{d}\vect{\beta}/\mathrm{d}\vect{\theta}= \mat{\mathcal{A}}_{N+1}$.
\subsection{Time delay for $N$ lenses}
\begin{figure}[t]
\centering
\import{figures/}{time_delay_N_lenses.pdf_tex}
\caption{Same as \cref{fig:N_lenses} with $N=3$ lenses. Dashed lines represent rays of the reference space-time that connect the positions $\vect{x}_l$ of the physical ray to the observer.}
\label{fig:time_delay_N_lenses}
\end{figure}
\subsubsection{Expression of the time delay} The $N$-lens case can be intuitively deduced from the single-lens case as follows. First connect each point $\vect{x}_l$ to the observer with a fictitious ray of the reference space-time, as depicted in \cref{fig:time_delay_N_lenses} for $N=3$ lenses. We may then identify $N$ triangles formed by the points $O$, $\vect{x}_l$, $\vect{x}_{l+1}$. Let us apply the single-lens time-delay formula~\eqref{eq:time_delay_one_lens} in each of these triangles. Precisely, in the $l$th triangle, two signals emitted simultaneously at $\vect{x}_{l+1}$, the first one being undeflected (observed in the direction $\vect{\beta}_{l+1}$) and the other one deflected in the $l$th plane (observed in the direction $\vect{\beta}_{l}$), would be received with a delay
\begin{equation}
\label{eq:TdelayGen}
T_l(\vect{\beta}_l, \vect{\beta}_{l+1})
= \frac{1}{2}
(\vect{\beta}_l-\vect{\beta}_{l+1})
\cdot\mat{\mathcal{T}}_{l(l+1)}
(\vect{\beta}_l-\vect{\beta}_{l+1})
- (1+z_l) \hat{\psi}_l(\vect{x}_l) \ ,
\end{equation}
up to a constant, where, for any $l,m$ such that $0<l<m$,
\begin{equation}
\mat{\mathcal{T}}_{lm}
\define
\omega\e{o} \mat{\mathcal{D}}\h{T}_{\mathrm{o}\langle l}
\mat{\mathcal{D}}^{-1}_{l\langle m}
\mat{\mathcal{D}}_{\mathrm{o}\langle m} \ .
\end{equation}
We note that the time-delay matrix satisfies the following unfolding relation:\footnote{\Cref{eq:unfolding_td_matrix} generalises eq.~(6.21) of \citet{2001stgl.book.....P}, which was also reported and exploited by \citet{McCully:2013fga}.} for any three planes $l,m,n$ such that $0<l<m<n$,
\begin{equation}\label{eq:unfolding_td_matrix}
\mat{\mathcal{T}}_{ln}^{-1} = \mat{\mathcal{T}}_{lm}^{-1} + \mat{\mathcal{T}}_{mn}^{-1} \ .
\end{equation}
See \cref{app:unfold_td_matrix} for a proof. Although we shall not use it here, \cref{eq:unfolding_td_matrix} is very useful to derive the expression of time delays in the context of multi-plane lensing with a dominant lens~\citep{FLU21}.
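The unfolding relation is also easy to check numerically. In the toy model below, the optical tidal matrix is constant and symmetric, so that the Jacobi matrix is known in closed form, $\mat{\mathcal{D}}\e{a\langle b}=\mat{V}\,\mathrm{diag}[\sin(k_i(\lambda\e{b}-\lambda\e{a}))/k_i]\,\mat{V}\h{T}$ if $\vect{\mathcal{R}}=\mat{V}\,\mathrm{diag}(-k_i^2)\,\mat{V}\h{T}$; the matrix $\vect{\mathcal{R}}$ and the plane positions are arbitrary choices (units $\omega\e{o}=1$).
\begin{verbatim}
# Sketch: numerical check of the unfolding relation for the time-delay
# matrix, in a toy space-time with constant symmetric optical tidal matrix.
import numpy as np

Rmat = -np.array([[0.50, 0.10], [0.10, 0.30]])   # arbitrary toy choice
r, V = np.linalg.eigh(Rmat)                      # eigenvalues r < 0 here
k = np.sqrt(-r)

def D(a, b):       # Jacobi matrix D_{a<b} for constant R (closed form)
    return V @ np.diag(np.sin(k*(b - a))/k) @ V.T

def T(a, b):       # time-delay matrix T_{ab} (units omega_o = 1)
    return D(0.0, a).T @ np.linalg.inv(D(a, b)) @ D(0.0, b)

lo, lm, ln = 0.4, 1.1, 1.9
lhs = np.linalg.inv(T(lo, ln))
rhs = np.linalg.inv(T(lo, lm)) + np.linalg.inv(T(lm, ln))
print(np.allclose(lhs, rhs))                     # True
\end{verbatim}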
The delay between the actual ray (observed in the direction $\vect{\theta}$) and the undeflected ray (observed in the direction $\vect{\beta}$) is then the sum of all these partial delays:
\begin{empheq}[box=\fbox]{align}
\label{eq:time_delay_N_lenses}
\Delta t &= T(\vect{\beta}_1,\ldots, \vect{\beta}_N) + \mathrm{cst} \ ,
\\
T(\vect{\beta}_1,\ldots, \vect{\beta}_N)
&\define \sum_{l=1}^N T_l(\vect{\beta}_l, \vect{\beta}_{l+1}) \ .
\end{empheq}
\subsubsection{Rigorous derivation} Although it yields the correct result, the above intuitive derivation of \cref{eq:time_delay_N_lenses} is actually incomplete. Indeed, we applied the single-lens time-delay formula~\eqref{eq:time_delay_one_lens} to two non-physical rays. In particular, the ``deflection angle'' of the $l$th triangle, which is formed by the rays observed in the directions $\vect{\beta}_{l}, \vect{\beta}_{l+1}$, and which we may denote $\tilde{\vect{\alpha}}_l$, is \emph{not} the physical deflection angle $\hat{\vect{\alpha}}_l=\partial\hat{\psi}_l/\partial\vect{x}_l$ (see \cref{fig:time_delay_N_lenses}). This is not a mere detail, because the equality between the deflection angle and the gradient of the lensing potential was a key step in the derivation of \cref{eq:time_delay_one_lens} proposed in \cref{app:sigma_is_gradient}. Therefore, nothing guarantees in principle that \cref{eq:time_delay_one_lens} can be applied to rays that do not exhibit the correct deflection angle.
Fortunately, despite this weakness in the way that \cref{eq:time_delay_N_lenses} was obtained, the formula itself turns out to be correct. Let us prove this point. First of all, we note that the differential expression~\eqref{eq:differential_time_delay}, i.e. $\mathrm{d}\Delta t=(1+z\e{s})\vect{\sigma}\cdot\mathrm{d}\vect{s}$, still applies here because it relies on strictly local arguments in the source plane. Therefore, if we can show that $(1+z\e{s})\vect{\sigma}=\mathrm{d} T/\mathrm{d}\vect{s}$, then $\mathrm{d}\Delta t = \mathrm{d} T$ follows just like in the single-lens case, and hence so does \cref{eq:time_delay_N_lenses}.
Under a small variation of the source position $\vect{s}$, all the intermediate positions~$\vect{x}_l$ (and thus $\vect{\beta}_l$) change accordingly, so that each contribution $T_l$ of $T$ varies as
\begin{equation}
\ddf{T_l}{\vect{s}}
= \pa{\ddf{\vect{\beta}_l}{\vect{s}}}\h{T} \pd{T_l}{\vect{\beta}_l}
+ \pa{\ddf{\vect{\beta}_{l+1}}{\vect{s}}}\h{T} \pd{T_l}{\vect{\beta}_{l+1}} \ .
\end{equation}
A similar calculation has already been performed in \cref{app:sigma_is_gradient}, except that (i) what was denoted $\vect{s}$ there is now $\vect{x}_{l+1}$; and (ii) the deflection angle must be replaced by the geometrical angle $\tilde{\vect{\alpha}}_l$. In other words, \cref{eq:result_appendix_gradient} applied to the configuration of the $l$th triangle reads
\begin{equation}
\ddf{T_l}{\vect{x}_{l+1}}
= (1+z_l) \pa{ \ddf{\vect{x}_l}{\vect{x}_{l+1}} }\h{T}
\pa{ \tilde{\vect{\alpha}}_l - \hat{\vect{\alpha}}_l }
+ (1+z_{l+1}) \vect{\sigma}_l \ ,
\end{equation}
from which it is easy to get the derivative with respect to $\vect{s}$ by multiplying both sides with $(\mathrm{d}\vect{x}_{l+1}/\mathrm{d}\vect{s})\h{T}$; the variable with respect to which one differentiates is just a matter of parameterisation here. The last step consists in noticing the identity
\begin{equation}
\tilde{\vect{\alpha}}_l + \vect{\sigma}_{l-1} = \hat{\vect{\alpha}}_l \ ,
\end{equation}
which clearly appears in \cref{fig:time_delay_N_lenses}, and from which we finally get
\begin{equation}
\ddf{T_l}{\vect{s}}
= (1+z_{l+1}) \pa{ \ddf{\vect{x}_{l+1}}{\vect{s}} }\h{T} \vect{\sigma}_l
- (1+z_l) \pa{ \ddf{\vect{x}_l}{\vect{s}} }\h{T} \vect{\sigma}_{l-1} \ .
\end{equation}
When summing the $\mathrm{d} T_l/\mathrm{d}\vect{s}$, all the terms cancel two by two, except the first one proportional to $\vect{\sigma}_0\define\vect{0}$, and the last one proportional to $\vect{\sigma}_N\define\vect{\sigma}$. Therefore,
\begin{equation}
\ddf{T}{\vect{s}} = \sum_{l=1}^N \ddf{T_l}{\vect{s}} = (1+z\e{s}) \vect{\sigma} \ ,
\end{equation}
which concludes our proof.
\subsubsection{Fermat's principle}
Like in the single-lens case, Fermat's principle states that a light ray is physical if and only if the time-delay function is stationary for this ray. Considering $T$ as a function of $\vect{x}_1, \ldots, \vect{x}_N$ instead of $\vect{\beta}_1, \ldots, \vect{\beta}_N$, one gets
\begin{equation}
\pd{T}{\vect{x}_l} = (1+z_l) \pac{ \hat{\vect{\alpha}}_l(\vect{x}_l) - \ddf{\hat{\psi}_l}{\vect{x}_l} }
\end{equation}
with similar calculations as in \cref{app:sigma_is_gradient}. Therefore, the function $T$ is stationary with respect to changes of $\vect{x}_1, \ldots, \vect{x}_N$ for, and only for, the physical ray.
\subsection{Example: lenses in an under-dense Universe}
As an illustration of the framework developed in this section, consider the situation where a fraction $f\in[0,1]$ of the matter in the Universe is homogeneously distributed, while the rest is in the form of lenses. Such a scenario is comparable to the Einstein-Straus Swiss-cheese model~\citep{1945RvMP...17..120E,1969ApJ...155...89K,Fleury:2013sna}, although in the latter the point-masses are surrounded by empty regions, while here we rather consider lenses placed within a homogeneous but under-dense cosmos. This under-dense Universe stands for our reference space-time, for which the Jacobi matrix is scalar,
\begin{equation}
\omega\e{a}\mat{\mathcal{D}}\e{a\langle b} = D\e{a\langle b} \mat{1} \ ,
\end{equation}
where $D\e{a\langle b}$ is given by the Kantowski-Dyer-Roeder distance~\citep{1969ApJ...155...89K, 1973ApJ...180L..31D, 1973PhDT........17D, 1974ApJ...189..167D, Fleury:2014gha} with smoothness parameter $f$. For $z<2$, it may be approximated to within a few percent by the standard FLRW distance corrected by a negative convergence \citep{Kainulainen:2009dw}, $D\e{a\langle b}\approx \bar{D}\e{a\langle b}(1+\kappa\e{a\langle b})$, with
\begin{equation}
\kappa\e{a\langle b}
= \frac{3}{2} \Omega\e{m0} H_0^2 (f-1)
\int_{\chi\e{a}}^{\chi\e{b}} \mathrm{d}\chi \; (1+z) \, \frac{f_K(\chi\e{b}-\chi)f_K(\chi-\chi\e{a})}{f_K(\chi\e{b}-\chi\e{a})} \ ,
\end{equation}
which was obtained from \cref{eq:convergence} with $\delta=f-1$.
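The sketch below evaluates this negative convergence, and hence the fractional correction to the distances, for a flat Einstein-de Sitter background; the value of the smoothness parameter and the redshifts are arbitrary illustrations.
\begin{verbatim}
# Sketch: smoothness-parameter correction D ~ Dbar (1 + kappa), flat EdS
# background (units H0 = 1, Omega_m0 = 1), with delta = f - 1 < 0.
import numpy as np
from scipy.integrate import quad

f = 0.7                                          # 30% of matter in lenses
chi_of_z = lambda z: 2.0*(1.0 - 1.0/np.sqrt(1.0 + z))

def kappa(chi_a, chi_b):
    g = lambda c: (1.0 - 0.5*c)**(-2) * (chi_b - c)*(c - chi_a)/(chi_b - chi_a)
    return 1.5 * (f - 1.0) * quad(g, chi_a, chi_b)[0]

chi_d, chi_s = chi_of_z(0.5), chi_of_z(1.8)
print(kappa(0.0, chi_d), kappa(0.0, chi_s), kappa(chi_d, chi_s))
# all negative: the under-dense reference space-time defocuses the beams
\end{verbatim}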
In that setup, the lens recursion~\eqref{eq:recursion_beta_l} becomes
\begin{equation}
\vect{\beta}_l
= \vect{\theta}
- \sum_{m=1}^{l-1} \frac{D_{m\langle l}}{D_{\mathrm{o}\langle l}} \,
\hat{\vect{\alpha}}_m(D_{\mathrm{o}\langle m}\vect{\beta}_m) \ ,
\end{equation}
which is identical to the original multi-plane recursion~\citep{1986ApJ...310..568B}, up to the expression of the distances. This approach thus offers an efficient alternative to the somewhat heavier formalism developed by \citet[][\S~2.2.2]{McCully:2016yfe} to describe lenses separated by cosmic voids.
Time delays are also affected by the under-density of the reference space-time; the associated function is identical to the one of the standard multi-plane framework,
\begin{align}
T(\vect{\beta}_1, \ldots, \vect{\beta}_N)
&= \sum_{l=1}^N T_l(\vect{\beta}_l, \vect{\beta}_{l+1}) \ ,
\\
\text{with} \qquad
T_l(\vect{\beta}_l, \vect{\beta}_{l+1})
&= \frac{1}{2} \, \tau_{l(l+1)} \, |\vect{\beta}_{l+1}-\vect{\beta}_l|^2
- (1+z_l)\hat{\psi}_l(D_{\mathrm{o}\langle l}\vect{\beta}_l) \ ,
\end{align}
except that the involved distances are changed, in particular
\begin{equation}
\tau_{l(l+1)}
= (1+z_l)
\frac{D_{\mathrm{o}\langle l}D_{\mathrm{o}\langle l+1}}
{D_{l\langle l+1}}
\approx \bar{\tau}_{l(l+1)}
(1 + \kappa_{\mathrm{o}\langle l}
+ \kappa_{\mathrm{o}\langle l+1}
- \kappa_{l\langle l+1}
) \ .
\end{equation}
\section{Conclusion}
\label{sec:conclusion}
In this article, we have proposed a comprehensive and efficient framework to model gravitational lensing by one or several deflectors placed in an arbitrary reference space-time. Our formalism relies on the dichotomy between \emph{smooth-field regions}, which form our \emph{reference space-time} where light beams can be considered infinitesimal, and \emph{rough-field regions} which can be described as isolated thin lenses. In that context, we have derived the lens equations, and the expressions of the amplification matrix and time delays. We illustrated our general results to: a single lens with cosmological perturbations; a single lens in an anisotropic Universe; and to $N$ lenses in an under-dense Universe.
In our derivations of the lens equation, amplification matrix and time delays, we have assumed that the lenses could be individually described by their Newtonian gravitational potential. This assumption is well motivated in most astrophysically relevant situations, but it does not apply to very hot systems, or in the vicinity of compact objects. Dealing with such relativistic lenses would require changing that specific part of the modelling, but it would not affect how lenses are embedded in the smooth reference space-time.
Lensing by multiple deflectors had already been actively studied in the literature. The specific additions of the present work can be summarised as follows. In \cref{sec:one_lens}, we extended the description of a single lens with cosmological perturbations~\citep[e.g.][]{Schneider:1997bq} to a lens within any reference space-time. In particular, we proposed in \cref{subsec:time_delay_one_lens} the first rigorous derivation of the expression of time delays in that general context. \Cref{sec:N_lenses} further extended the results of \cref{sec:one_lens} to an arbitrary number of lenses, thereby generalising the standard multi-plane lensing formalism, which was hitherto limited to the Minkowski or FLRW reference space-times.
Our work establishes a firm basis for the description of gravitational lensing by several deflectors; in particular, it shall be applied to the accurate treatment of line-of-sight effects in strong gravitational lensing beyond the tidal approximation \citep[][in prep.]{FLU21}.
\section*{Acknowledgements}
PF thanks Sherry Suyu for kindly drawing his attention to the high-quality works of McCully et al., during a workshop in Benasque in 2019. We thank the anonymous referees of Classical and Quantum Gravity for their relevant comments, which improved the quality of our manuscript. We also thank Théo Duboscq and Daniel Johnson whose very careful reading revealed a few residual typos. PF received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434). The fellowship code is LCF/BQ/PI19/11690018.
\section{Introduction}
Providing a systematic procedure for the design of stabilizing feedback control for a general nonlinear system will have a significant impact on a variety of application domains. The lack of proper structure for a general nonlinear system makes this design problem challenging. There have been several attempts to provide such a systematic approach, including convex optimization-based Sum-of-Squares (SoS) programming \cite{SOS_book, Parrilothesis} and differential geometric-based feedback linearization control \cite{sastry2013nonlinear,astolfi2015feedback}. The introduction of operator theoretic methods from the ergodic theory of dynamical systems provides another opportunity for the development of systematic methods for the design of feedback controllers \cite{Lasota}. The operator theoretic methods provide a linear representation for a nonlinear dynamical system. This linear representation of the nonlinear system is made possible by shifting the focus from state space to space of functions using two linear and dual operators, namely, the Perron-Frobenius (P-F) and Koopman operators. The work involving the third author \cite{VaidyaMehtaTAC, Vaidya_CLM,raghunathan2014optimal} provided a systematic linear programming-based approach involving transfer P-F operator for the optimal control of nonlinear systems. This contribution was made possible by exploiting the linearity and the positivity properties of the P-F operator.
More recently, there has been increased research activity on the use of the Koopman operator for the analysis and control of nonlinear systems \cite{Meic_model_reduction,mezic_koopmanism,susuki2011nonlinear,kaiser2017data,surana_observer,peitz2017koopman,mauroy2016global,surana2018koopman}. This recent work is mainly driven by the ability to approximate the spectrum (i.e., eigenvalues and eigenfunctions) of the Koopman operator from time-series data \cite{rowley2009spectral, DMD_schmitt, EDMD_williams, Umesh_NSDMD}. The data-driven approach for computing the spectrum of the Koopman operator is attractive as it opens up the possibility of employing operator theoretic methods for data-driven control. Research works in \cite{kaiser2017data,peitz2017koopman,korda2018linear,korda2018power,arbabi2018data,hanke2018koopman,sootla2018optimal} propose Koopman operator-based data-driven methods for the design of optimal control and model predictive control for nonlinear systems as well as partial differential equations. The existing approaches rely on the identification of linear predictors and the use of linear control design techniques for Koopman-based control. However, the tightness of these linear predictors cannot be theoretically guaranteed. In comparison, this book chapter proposes a data-driven identification and bilinear representation of nonlinear control systems in Koopman eigenfunction coordinates. The bilinear representation is tight and theoretically justified in the sense that, in the limit as the number of basis functions approaches infinity, the finite-dimensional bilinear representation approaches the true lifting of the control system in the function space. To address the control design problem for the more complex bilinear system, we propose a control Lyapunov function-based approach for feedback stabilization. Furthermore, sample complexity results from \cite{sample_complexity} are used to characterize the relationship between the amount of training data and the approximation error of our bilinear predictor. The work in this book chapter is an extended version of the work presented in \cite{bowen_koopmanstabilziationCDC}, where the data-driven identification for control is the new component.
The main contributions of the book chapter are as follows. We present a data-driven approach for feedback stabilization of a nonlinear system (refer to Fig. \ref{fig_schematic}). We first show that the nonlinear control system can be identified from the time-series data generated by the system for two different input signals, namely zero input and step input. For this identification, we make use of a linear operator theoretic framework involving the Fokker-Planck equation. Furthermore, sample complexity results developed in \cite{sample_complexity} are used to determine the data required to achieve the desired level of approximation accuracy.
This process of identification leads to a finite-dimensional bilinear representation of the nonlinear control system in Koopman eigenfunction coordinates. This finite-dimensional approximation of the bilinear system is used for the design of a stabilizing feedback controller. While the control design for a bilinear system is, in general, a challenging problem, we propose a systematic approach based on the theory of control Lyapunov function (CLF) and inverse optimality for feedback control design \cite{Khalil_book}. While the search for CLFs for a general nonlinear system is a difficult problem, we use a bilinear representation of the nonlinear control system in the Koopman eigenfunction space to search for a CLF for the bilinear system. By restricting the search of CLFs to a class of quadratic Lyapunov functions, we can provide a convex programming-based systematic approach for determining the CLF \cite{Boyd_book}. The principle of inverse optimality allows us to connect the CLF to an optimal cost function. The controller designed using CLF also optimizes an appropriate cost. Using this principle, we comment on the optimality of the controller designed using CLF.
In summary, we present a data-driven approach for the identification and representation of a nonlinear control system as a bilinear system. The bilinear structure of the control dynamical system is exploited to provide a systematic approach for the feedback stabilization of nonlinear systems. The proposed systematic approach relies on control Lyapunov functions (CLF) and quadratic stabilization in Koopman eigenfunction space. A convex optimization-based formulation is proposed for searching over quadratic CLFs. The CLF is used to propose different formulas for the stabilizing feedback control. One of them is the Sontag formula, which allows us to comment on the optimality of the designed stabilizing feedback controller using the principle of inverse optimality.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{block_diag}
\caption{Data-Driven Identification and Control of Nonlinear System}
\label{fig_schematic}
\end{figure}
This book chapter is organized as follows. In Section \ref{section_prelim}, we present some preliminaries on the Koopman operator, Fokker Planck equation, and control Lyapunov functions. In Section \ref{section_bilinear}, we present the identification scheme for the data-driven identification of a nonlinear control system as a bilinear system in Koopman eigenfunction coordinates. In Section \ref{section_main}, a convex optimization-based formulation is proposed to search for quadratic CLFs and for the design of stabilizing feedback controller. Simulation results are presented in Section \ref{section_simulation}, followed by conclusion in Section \ref{section_conclusion}.
\section{Preliminaries}\label{section_prelim}
In this section, we present some preliminaries on the Koopman operator, Fokker Planck equation, and control Lyapunov function-based approach on the design of stabilizing feedback controllers for nonlinear systems.
\subsection{Koopman Operator}
Consider a continuous-time dynamical system of the form
\begin{eqnarray}
\dot{\vec x}=\vec F(\vec x)\label{system}
\end{eqnarray}
where $\vec x\in X\subset \mathbb{R}^n$ and the vector field $\vec F$ is assumed to be continuously differentiable. Let $\vec S(t,\vec x_0)$ be the solution of the system (\ref{system}) at time $t$ starting from the initial condition $\vec x_0$. Let $\cal O$ be the space of all observables $f: X\to \mathbb{C}$.
\begin{definition}[Koopman operator] The Koopman semigroup of operators $U_t :{\cal O}\to {\cal O}$ associated with system (\ref{system}) is defined by
\begin{eqnarray}
[U_t f](\vec x)=f(\vec S(t,\vec x)).
\end{eqnarray}
\end{definition}
It is easy to observe that the Koopman operator is linear on the space of observables although the underlying dynamical system is nonlinear. In particular, we have
\[[U_t (\alpha f_1 +f_2)](\vec x)=\alpha [U_t f_1](\vec x)+[U_t f_2](\vec x).\]
Under the assumption that the function $f$ is continuously differentiable, the semigroup $[U_t f](\vec x)=\rho(\vec x,t)$ can be obtained as the solution of the following partial differential equation
\[\frac{\partial \rho}{\partial t}=\vec F\cdot \nabla \rho=: L \rho\]
with initial condition $\rho(\vec x,0)=f(\vec x)$. From the semigroup theory it is known \cite{Lasota} that the operator $L$ is the infinitesimal generator for the Koopman operator, i.e.,
\[L \rho=\lim_{t\to 0}\frac{U_t \rho-\rho}{t}.\]
The linear nature of Koopman operator allows us to define the eigenfunctions and eigenvalues of this operator as follows.
\begin{definition}[Koopman eigenfunctions] The eigenfunction of Koopman operator is a function $\phi_\lambda\in {\cal O}$ that satisfies
\begin{eqnarray}
[U_t \phi_\lambda](x)=e^{\lambda t} \phi_\lambda(x)
\end{eqnarray}
for some $\lambda\in \mathbb{C}$. The $\lambda$ is the associated eigenvalue of the Koopman eigenfunction and is assumed to belong to the point spectrum.
\end{definition}
The spectrum of the Koopman operator is far more complex than simple point spectrum and could include continuous spectrum \cite{Meic_model_reduction}. The eigenfunctions can also be expressed in terms of the infinitesimal generator of the Koopman operator $L$ as follows
\[L \phi_\lambda =\lambda \phi_\lambda.\]
The eigenfunctions of Koopman operator corresponding to the point spectrum are smooth functions and can be used as coordinates for linear representation of nonlinear systems.
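As a simple illustration (added here for concreteness), consider the scalar linear system $\dot x=\mu x$ with $\mu\in\mathbb{R}$, for which $L\phi=\mu x\,\phi^\prime(x)$. The monomials $\phi_n(x)=x^n$ satisfy
\[L \phi_n=\mu x\, n x^{n-1}=n\mu\,\phi_n,\qquad [U_t\phi_n](x)=(e^{\mu t}x)^n=e^{n\mu t}\phi_n(x),\]
so each $\phi_n$ is a Koopman eigenfunction with eigenvalue $n\mu$.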
\subsection{Fokker Planck Equation}
We need preliminaries on the Fokker-Planck equation for the purpose of data-driven identification of the nonlinear control system. Consider a nonlinear dynamical system perturbed by a white noise process.
\begin{eqnarray}
\dot {\vec x}={\vec F}({\vec x})+{\vec \omega}\label{sde}
\end{eqnarray}
where ${\vec \omega}$ is a white noise process. The following assumption is made on the vector function $\vec F$.
\begin{assumption}\label{assume_diff}
Let $\vec F =(\vec F_1,\ldots, \vec F_n)^\top$. We assume that the functions $\vec F_i$, $i=1,\ldots, n$, are $C^4$ functions.
\end{assumption}
We assume that the distribution of ${\vec x}(0)$ is absolutely continuous and has density $p_0({\vec x})$. Then we know that ${\vec x}(t)$ has a density $p({\vec x},t)$ which satisfies the following Fokker-Planck (F-P) equation, also known as the Kolmogorov forward equation.
\begin{eqnarray}
\frac{\partial p({\vec x},t)}{\partial t}=- \nabla \cdot\left( {\vec F}({\vec x})p({\vec x},t)\right) +\frac{1}{2}\nabla^2 p({\vec x},t)
\end{eqnarray}
Following Assumption \ref{assume_diff}, we know that the solution $p({\vec x},t)$ to the F-P equation exists and is differentiable (Theorem 11.6.1 \cite{Lasota}). Under some regularity assumptions on the coefficients of the F-P equation (Definition 11.7.6 \cite{Lasota}), it can be shown that the F-P equation admits a generalized solution. The generalized solution is used in defining the stochastic semi-group of operators $\{\mathbb{P}_t\}_{t\geq 0}$ such that
\begin{eqnarray}
[\mathbb{P}_t p_0]({\vec x})=p({\vec x},t).
\end{eqnarray}
Furthermore, the right hand side of the F-P equation is the infinitesimal generator for the stochastic semi-group of operators $\mathbb{P}_t$, i.e.,
\begin{eqnarray}
\mathbb{A} \varphi=\lim_{t\to 0}\frac{(\mathbb{P}_t-I)\varphi}{t}
\end{eqnarray}
where
\[\mathbb{A}\varphi :=- \nabla \cdot\left( {\vec F}({\vec x}) \varphi\right) +\frac{1}{2}\nabla^2 \varphi.\]
Let $\psi({\vec x})\in C^2(\mathbb{R}^n)$ be an observable. Then we have
\begin{eqnarray}
\frac{d}{dt}\int p({\vec x},t) \psi({\vec x}) d{\vec x}&=&\int \left[{\mathbb{A}}p({\vec x},t)\right] \psi({\vec x})d{\vec x}\nonumber\\
&=&\int p({\vec x},t)\left[\mathbb{A}^*\psi({\vec x})\right]d{\vec x}
\end{eqnarray}
where $\mathbb{A}^*$ is adjoint to $\mathbb{A}$ and is defined as \begin{equation}\label{eq:Koopmangenerator}
\mathbb{A}^*\psi={\vec F}\cdot \nabla \psi+\frac{1}{2}\nabla^2 \psi
\end{equation}
The operator $\mathbb{A}^*$ is the infinitesimal generator of a semi-group of operators $\mathbb{U}_t$, i.e.,
\begin{eqnarray}\mathbb{A}^*\psi =\lim_{t\to 0}\frac{(\mathbb{U}_t-I)\psi}{t}\label{generatorstoc_koopman}
\end{eqnarray}
where
\begin{equation}\label{eq:Koopmangroup}
[\mathbb{U}_t\psi]({\vec x})=\mathbb{E}[\psi({\vec x}(t))\mid {\vec x}(0)={\vec x}].
\end{equation}
For the deterministic dynamical system $\dot {\vec x}={\vec F}({\vec x})$, i.e., in the absence of the noise term, the above definitions of generators and semi-groups reduce to the Perron-Frobenius and Koopman operators. In particular, the propagation of the probability density function capturing uncertainty in the initial condition is given by the Perron-Frobenius (P-F) operator, which is defined as follows.
\begin{definition}
The P-F operator for deterministic dynamical system $\dot {\vec x}={\vec F}({\vec x})$ is defined as follows
\begin{eqnarray}
[P_t p_0]({\vec x})=p_0(\vec S(-t,\vec x))\left|\frac{\partial \vec S(-t,\vec x)}{\partial {\vec x}}\right|
\end{eqnarray}
where $\vec S(t,\vec x)$ is the solution of the system (\ref{system}) at time $t$ starting from the initial condition $\vec x$, and $\left|\cdot \right|$ stands for the determinant.
\end{definition}
The infinitesimal generator for the P-F operator is given by
\begin{eqnarray}
A \varphi := -\nabla \cdot ({\vec F}({\vec x}) \varphi)=\lim_{t\to 0}\frac{(P_t-I)\varphi}{t}
\end{eqnarray}
\subsection{Feedback Stabilization and Control Lyapunov Functions}
For simplicity of presentation, we will consider only the case of a single input in this chapter. All the results carry over to the multi-input case in a straightforward manner. Consider a single-input control-affine system of the form
\begin{align}
\label{non_lin_sys}
\dot{\vec x} = \vec F(\vec x)+\vec G(\vec x)u,
\end{align}
where $\vec x(t) \in \mathbb{R}^n$ denotes the state of the system, $u(t) \in \mathbb{R}$ denotes the single input of the system, and $\vec F, \vec G: \mathbb{R}^n \rightarrow \mathbb{R}^n$ are assumed to be continuously differentiable mappings. We assume that $\vec F(\vec0) = \vec0$ and the origin is an unstable equilibrium point of the uncontrolled system $\dot{\vec x}=\vec F(\vec x)$.
The \textit{state feedback stabilization} problem associated with system (\ref{non_lin_sys}) seeks a possible feedback control law of the form
\begin{align*}
u = k(\vec x)
\end{align*}
with $k: \mathbb{R}^n \rightarrow \mathbb{R}$ such that $\vec x = \vec 0$ is asymptotically stable within some domain $\mathcal{D} \subset \mathbb{R}^n$ for the closed-loop system
\begin{align}
\dot{\vec x} = \vec F(\vec x)+\vec G(\vec x)k(\vec x). \label{closed_loop}
\end{align}
One of the possible approaches for the design of stabilizing feedback controllers for the nonlinear system (\ref{non_lin_sys}) is via control Lyapunov functions that are defined as follows.
\begin{definition}
Let $\mathcal{D} \subset \mathbb{R}^n$ be a neighborhood that contains the equilibrium $\vec x = \vec0$. A \textit{control Lyapunov function} (CLF) is a continuously differentiable positive definite function $V: \mathcal{D} \rightarrow \mathbb{R}_+$ such that for all $\vec x \in \mathcal{D} \setminus \{\vec0\}$ we have
\begin{align*}
\infm_u \ \left[ \frac{\partial V}{\partial x} \cdot \vec F(\vec x) + \frac{\partial V}{\partial x} \cdot \vec G(\vec x)u \right] := \infm_u \ \Big[ V_x\vec F(\vec x) + V_x\vec G(\vec x) u \Big] <0.
\end{align*}
\end{definition}
It has been shown in \cite{artstein1983stabilization, sontag1989universal} that the existence of a CLF for system (\ref{non_lin_sys}) is equivalent to the existence of a stabilizing control law $u = k(\vec x)$ which is almost smooth everywhere except possibly at the origin $\vec x = \vec 0$.
\begin{theorem} [see \cite{astolfi2015feedback}, Theorem 2] \label{clf_thm}
There exists an almost smooth feedback $u = k(\vec x)$, i.e., $k$ is continuously differentiable for all $\vec x \in \mathbb{R}^n \setminus \{\vec0\}$ and continuous at $\vec x = \vec0$, which globally asymptotically stabilizes the equilibrium $\vec x = \vec0$ for system (\ref{non_lin_sys}) if and only if there exists a radially unbounded CLF $V(\vec x)$ such that
\begin{enumerate}
\item For all $\vec x \neq \vec0$, $V_x\vec G(\vec x) = 0$ implies $V_x\vec F(\vec x) < 0$;
\item For each $\varepsilon > 0$, there is a $\delta > 0$ such that $\|\vec x\| < \delta$ implies the existence of a $|u| < \varepsilon$ satisfying $V_x\vec F(\vec x) + V_x\vec G(\vec x) u < 0$.
\end{enumerate}
\end{theorem}
In the theorem above, condition 2) is known as the small control property, and it is necessary to guarantee continuity of the feedback at $\vec x = \vec0$. If both conditions 1) and 2) hold, an almost smooth feedback can be given by the so-called Sontag's formula
\begin{align}
\label{sontag} k(\vec x) := \begin{cases} -\frac{V_x\vec F + \sqrt{(V_x\vec F)^2 + (V_x\vec G)^4}}{V_x\vec G} & \text{if } V_x\vec G(\vec x) \neq 0 \\ 0 & \text{otherwise.} \end{cases}
\end{align}
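Formula (\ref{sontag}) is straightforward to implement once the two Lie derivatives $V_x\vec F(\vec x)$ and $V_x\vec G(\vec x)$ are available; the following minimal Python sketch (function and argument names are ours) evaluates it pointwise.
\begin{verbatim}
import numpy as np

def sontag_feedback(LfV, LgV):
    """Sontag's formula: u = k(x) given LfV = V_x.F(x) and LgV = V_x.G(x)."""
    if np.isclose(LgV, 0.0):
        return 0.0
    return -(LfV + np.sqrt(LfV**2 + LgV**4)) / LgV
\end{verbatim}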
Besides Sontag's formula, we also have several other possible choices to design a stabilizing feedback control law based on the CLF given in Theorem \ref{clf_thm}. For instance, if we are not constrained to any specifications on the continuity or amplitude of the feedback, we may simply choose
\begin{align}\label{control2}
k(\vec x) &:= -K \sgn\big[ V_x\vec G(\vec x) \big] \\
\label{control1}
k(\vec x) &:= -K V_x\vec G(\vec x)
\end{align}
with some constant gain $K > 0$. Then, differentiating the CLF with respect to time along trajectories of the closed-loop system (\ref{closed_loop}) yields, respectively,
\begin{align*}
\dot V & = V_x\vec F(\vec x) - K \big| V_x\vec G(\vec x) \big| \\
\dot V & = V_x\vec F(\vec x) - K V_x\vec G(\vec x)^2.
\end{align*}
Hence, by the stabilizability property of condition 1), there must exist some $K$ large enough such that $\dot V < 0$ for all $\vec x \neq \vec0$, because whenever $V_x\vec F(\vec x) \geq 0$ we have $V_x\vec G(\vec x) \neq 0$.
On the other hand, the CLFs also enjoy some optimality property using the principle of inverse optimal control. In particular, consider the following optimal control problem
\begin{align}
\minimize_u \quad & \int_0^\infty (q(x) + u^\top u)dt \label{cost} \\
\subjt \quad & \dot{\vec x} = \vec F(\vec x)+\vec G(\vec x)u \nonumber
\end{align}
for some continuous, positive semidefinite function $q: \mathbb{R}^n \rightarrow \mathbb{R}$. Then the modified Sontag's formula
\begin{align}
\label{mod_sontag} k(\vec x) := \begin{cases} -\frac{V_x\vec F + \sqrt{(V_x\vec F)^2 + q(x)(V_x\vec G)^2}}{V_x\vec G} & \text{if } V_x\vec G(\vec x) \neq 0 \\ 0 & \text{otherwise} \end{cases}
\end{align}
builds a strong connection with the optimal control. In particular, if the CLF has level curves that agree in shape with those of the value function associated with cost (\ref{cost}), then the modified Sontag's formula (\ref{mod_sontag}) will reduce to the optimal controller \cite{freeman1996control, primbs1999nonlinear}.
\section{Data-driven Identification of Nonlinear System }\label{section_bilinear}
In this section, we discuss the application of the linear operator theoretic framework for the identification of a nonlinear dynamical system in the Koopman eigenfunction space. Consider the control dynamical system perturbed by a stochastic noise process.
\begin{eqnarray}\dot {\vec x}={\vec F}({\vec x})+{\vec G}({\vec x})u+{\vec\omega}\label{control_sde}
\end{eqnarray}
where ${\vec \omega}\in \mathbb{R}^n$ is a white noise process. The presence of the noise term is essential to ensure persistency of excitation for the purpose of identification. The following assumption is made on the vector functions $\vec F$ and $\vec G$.
\begin{assumption}\label{assume1}
Let $\vec F =(\vec F_1,\ldots, \vec F_n)^\top$ and $\vec G=(\vec G_1,\ldots, \vec G_n)^\top$. We assume that the functions $\vec F_i$ and $\vec G_i$ for $i=1,\ldots, n$ are $C^4$ functions.
\end{assumption}
The objective is to identify the nonlinear vector fields $\vec F$ and $\vec G$ using the time-series data generated by the control dynamical system and arrive at a continuous-time dynamical system of the form
\begin{eqnarray}
\dot {\vec z}=\Lambda {\vec z}+u B{\vec z}\label{sys_bilinear_cont}
\end{eqnarray}
where ${\vec z}\in \mathbb{R}^N$ with $N\geq n$. We now make following assumption on the control dynamical system (\ref{control_sde}).
\begin{assumption}
We assume that all the trajectories of the control dynamical system (\ref{control_sde}) starting from different initial conditions remain bounded for both the zero control input $u=0$ and the step input.
\end{assumption}
\begin{remark}
This assumption is essential to ensure that the control dynamical system can be identified from the time-series data generated by the system for two different input signals.
\end{remark}
The goal is to arrive at a continuous-time bilinear representation of the nonlinear control system (\ref{control_sde}). Towards this goal, we assume that time-series data from the continuous-time dynamical system (\ref{control_sde}) are available for two different control inputs, namely zero input and step input. The discrete time-series data are generated from the continuous-time dynamical system with a sufficiently small discretization time step $\Delta t$ and are represented as
\begin{eqnarray}
({\vec x}^s_{k+1},{\vec x}^s_k)\label{data_pair}
\end{eqnarray}
The superscript $s$ signifies that the data are generated by a dynamical system of the form
\begin{eqnarray}\dot {\vec x}={\vec F}({\vec x})+{\vec G}({\vec x})s+{\vec\omega}\label{eq_s}
\end{eqnarray}
so that $s=0$ and $s=1$ correspond to the cases of zero input and step input, respectively.
Let
\[\Psi=[\psi_1,\ldots, \psi_N]\]
be the set of observables with $\psi_i:\mathbb{R}^n\to \mathbb{R}$. The time evolution of these observables under the continuous time control dynamical system with no noise can be written as
\begin{eqnarray}\frac{d \Psi}{dt}&=&{\vec F}({\vec x})\cdot \nabla \Psi +u {\vec G}({\vec x})\cdot \nabla \Psi\nonumber\\
&=&{\cal A}\Psi +u{\cal B}\Psi
\end{eqnarray}
where ${\cal A}$ and $\cal B$ are linear operators. The objective is to construct finite-dimensional approximations of the linear operators $\cal A$ and $\cal B$ from time-series data, thereby arriving at a finite-dimensional approximation of the control dynamical system as in Eq. (\ref{sys_bilinear_cont}).
With reference to Eq. (\ref{eq:Koopmangenerator}), let $\mathbb{A}_1^*$ and $\mathbb{A}_0^*$ be the generators corresponding to the control dynamical system in Eq. (\ref{eq_s}) with step input, i.e., $s=1$, and zero input, i.e., $s=0$, respectively. We have
\begin{eqnarray}(\mathbb{A}_1^*-\mathbb{A}_0^*)\psi={\vec G}({\vec x})\cdot \nabla \psi\label{g_approx}
\end{eqnarray}
Under the assumption that the sampling time $\Delta t$ between two consecutive time-series data points is sufficiently small, the generators $\mathbb{A}_s^*$ can be approximated as
\begin{eqnarray}\mathbb{A}_s^*\approx \frac{\mathbb{U}_{\Delta t}^s-I}{\Delta t}\label{f_approx}
\end{eqnarray}
Substituting for $s=1$ and $s=0$ in (\ref{f_approx}) and using (\ref{g_approx}), we obtain
\begin{eqnarray}
\frac{\mathbb{U}_{\Delta t}^1-\mathbb{U}_{\Delta t}^0}{\Delta t}\approx {\vec G}({\vec x})\cdot \nabla={\cal B}\label{B_approx}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\mathbb{U}_{\Delta t}^0-I}{\Delta t}\approx {\vec F}({\vec x})\cdot \nabla={\cal A}\label{A_approx}
\end{eqnarray}
Using the time-series data generated from dynamical system (\ref{eq_s}) for $s=0$ and $s=1$, it is possible to construct the finite dimensional approximation of the operators $\mathbb{U}_{\Delta t}^0$ and $\mathbb{U}_{\Delta t}^1$ respectively thereby approximating the operators $\cal A$ and $\cal B$ respectively. In the following we explain the extended dynamic mode decomposition-based procedure for the approximation of these operators from time-series data.
\subsection{Finite Dimensional Approximation}
We use the Extended Dynamic Mode Decomposition (EDMD) algorithm for the approximation of $\mathbb{U}_{\Delta t}^1$ and $\mathbb{U}_{\Delta t}^0$, thereby approximating $\cal A$ and $\cal B$ in Eqs. (\ref{A_approx}) and (\ref{B_approx}), respectively \cite{EDMD_williams}. For this purpose, let the time-series data generated by the dynamical system (\ref{eq_s}) be given by
\begin{eqnarray}
\overline{\vec X} = [\vec x^s_1,\vec x^s_2,\ldots,\vec x^s_M],&\overline{\vec Y} = [\vec y^s_1,\vec y^s_2,\ldots,\vec y^s_M] \label{data}
\end{eqnarray}
where $\vec y^s_k=\vec x^s_{k+1}$ with $s=0$ or $s=1$, i.e., zero input or step input. Furthermore, let $\mathcal{H}=
\{\psi_1,\psi_2,\ldots,\psi_N\}$ be the set of dictionary functions or observables and ${\cal G}_{\cal H}$ be the span of $\cal H$.
The choice of dictionary functions is crucial, and the set should be rich enough to approximate the leading eigenfunctions of the Koopman operator. Define the vector-valued function $\varPsi:X\to \mathbb{C}^{N}$
\begin{equation}
\varPsi(\vec x):=\begin{bmatrix}\psi_1(\vec{x}) & \psi_2(\vec{x}) & \cdots & \psi_N(\vec{x})\end{bmatrix}^\top.
\end{equation}
In this application, $\varPsi$ is the mapping from state space to function space. Any two functions $f$ and $\hat{f}\in \mathcal{G}_{\cal H}$ can be written as
\begin{eqnarray}
f = \sum_{k=1}^N a_k\psi_k=\varPsi^\top \vec{a},\quad \hat{f} = \sum_{k=1}^N \hat{a}_k\psi_k=\varPsi^\top \vec{\hat{a}}
\end{eqnarray}
for some coefficients $\vec{a}$ and $\vec{\hat{a}}\in \mathbb{C}^N$. Let \[ \hat{f}(\vec{x})=[U^s_{\Delta t}f](\vec{x})+r\]
where $r$ is a residual function that appears because $\mathcal{G}_{\cal H}$ is not necessarily invariant to the action of the Koopman operator. To find the optimal mapping which minimizes this residual, let ${\vec U}^s$ be the finite-dimensional approximation of the Koopman operator $U^s_{\Delta t}$. The matrix ${\vec U}^s$ is then obtained as the solution of the following least-squares problem
\begin{equation}\label{edmd_op}
\minimize_{{\vec U}^s} \quad \|{\vec G}^s{\vec U}^s-{\vec A}^s\|_F
\end{equation}
where
\begin{eqnarray}\label{edmd1}
{\vec G}^s=\frac{1}{M}\sum_{m=1}^M \varPsi(\vec{x}^s_m) \varPsi(\vec{x}^s_m)^\top,\;\;\;
{\vec A}^s=\frac{1}{M}\sum_{m=1}^M \varPsi(\vec{x}^s_m) \varPsi(\vec{y}^s_m)^\top
\end{eqnarray}
with ${\vec U}^s,{\vec G}^s,{\vec A}^s\in\mathbb{C}^{N\times N}$. The optimization problem (\ref{edmd_op}) can be solved explicitly with a solution in the following form
\begin{eqnarray}
{\vec U}^s=({\vec G}^s)^\dagger {\vec A}^s\label{EDMD_formula}
\end{eqnarray}
where $({\vec G}^s)^{\dagger}$ denotes the pseudoinverse of the matrix ${\vec G}^s$.
Under the assumption that the leading Koopman eigenfunctions are contained within $\mathcal{G}_{\mathcal{H}}$, the eigenvalues of ${\vec U}^s$ are approximations of the Koopman eigenvalues. The right eigenvectors of ${\vec U}^{0}$ can then be used to generate approximations of the Koopman eigenfunctions. In particular, the approximation of a Koopman eigenfunction is given by
\begin{equation}\label{EDMD_eigfunc_formula}
\phi_j=\varPsi ^\top v_j, \quad j = 1,\ldots,N
\end{equation}
where $v_j$ is the $j$-th right eigenvector of ${\vec U}^0$, and $\phi_j$ is the approximation of the eigenfunction of the Koopman operator corresponding to the $j$-th eigenvalue, $\lambda_j\in \mathbb{C}$.
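The computation in Eqs. (\ref{edmd1})-(\ref{EDMD_eigfunc_formula}) can be summarized in a few lines of Python; the sketch below assumes that the snapshot pairs are stored columnwise and that the dictionary map is supplied by the user.
\begin{verbatim}
import numpy as np

def edmd(X, Y, psi):
    """EDMD approximation of U^s from snapshot pairs.

    X, Y : (n, M) arrays with Y[:, m] the successor of X[:, m]
    psi  : callable mapping a state (n,) to a dictionary vector (N,)
    """
    PsiX = np.column_stack([psi(x) for x in X.T])   # N x M
    PsiY = np.column_stack([psi(y) for y in Y.T])
    M = X.shape[1]
    G = PsiX @ PsiX.T / M            # (1/M) sum Psi(x_m) Psi(x_m)^T
    A = PsiX @ PsiY.T / M            # (1/M) sum Psi(x_m) Psi(y_m)^T
    U = np.linalg.pinv(G) @ A        # U^s = (G^s)^+ A^s
    lam, V = np.linalg.eig(U)        # eigenvalues and right eigenvectors
    return U, lam, V                 # phi_j(x) = psi(x)^T V[:, j]
\end{verbatim}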
The bilinear representation of nonlinear control dynamical system can be constructed either in the space of basis function ${\varPsi}$ or the eigenfunctions of the Koopman operator ${\varPhi}$, where
\[\varPhi(\vec x):=[\phi_1(\vec x),\ldots, \phi_N(\vec x)]^\top.\]
In this work, we constructed the bilinear representation in the Koopman eigenfunctions coordinates. Towards this goal, we define
\[\hat{\varPhi}(\vec x):=[\hat \phi_1(\vec x),\ldots, \hat \phi_N(\vec x)]^\top\] where $\hat \phi_i:=\phi_i$ if $\phi_i$ is a real-valued eigenfunction, and $\hat \phi_i:=2 {\rm Re}(\phi_i)$, $\hat \phi_{i+1}:=-2{\rm Im}(\phi_i)$ if $\phi_i$ and $\phi_{i+1}$ are a complex conjugate eigenfunction pair. Consider now the transformation $\hat {\varPhi}: \mathbb{R}^n\to \mathbb{R}^N$,
\[\vec z=\hat{\varPhi}(\vec x).\]
Then in this new coordinates system Eq. (\ref{non_lin_sys}) takes the following form
\begin{eqnarray}
\dot{\vec z}=\Lambda \vec z+uB{\vec z} .\label{bilinear1}
\end{eqnarray}
where the matrix $\Lambda$ has a block-diagonal form, with $\Lambda_{(i,i)} =\hat\lambda_i$ if $\phi_i$ is real, and
\begin{align}\label{eig_conversion}
\begin{bmatrix}\Lambda_{(i,i)}&\Lambda_{(i,i+1)}\\ \Lambda_{(i+1,i)}&\Lambda_{(i+1,i+1)}\end{bmatrix} =\lvert\hat\lambda_i\rvert\begin{bmatrix}
\cos(\angle{\hat\lambda_i})&\sin(\angle{\hat\lambda_i})\\-\sin(\angle{\hat\lambda_i})&\cos(\angle{\hat\lambda_i})
\end{bmatrix}
\end{align}
if $\phi_i$ and $\phi_{i+1}$ are complex conjugate pairs.
The $\hat\lambda_i$ are the eigenvalues associated with the continuous-time system dynamics. The relationship between the discrete-time Koopman eigenvalues $\lambda_i$ and the continuous-time $\hat\lambda_i$ is $\hat\lambda_i = \log(\lambda_i)/\Delta t$.
Similarly, the data generated using the step input for the control dynamical system are used to form the time-series data $\{{\vec x}_k^1\}$ and to approximate ${\vec U}^1$. The approximations of the operator ${\cal B}$ in the basis function coordinates $\varPsi({\vec x})$, denoted by $\bar B$, and in the eigenfunction coordinates $\hat {\varPhi}({\vec x})$, denoted by $B$, can be obtained as follows:
\begin{eqnarray}
\bar B = \frac{\vec U^1-\vec U^0}{\Delta t},\;\;\;\;\;
B = V^\top{\bar B}(V^\top)^{-1}
\end{eqnarray}
where the $j$th column of $V$ is $v_j$, the $j$th eigenvector of $\vec U^0$.
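The full identification pipeline can be illustrated end-to-end on a toy problem; the self-contained Python sketch below uses the scalar linear system $\dot x=-x+u$ (our own illustrative example, not one of the systems studied in this chapter) with a small monomial dictionary, and in practice any complex conjugate pairs in $\Lambda$ and $B$ would still have to be converted to real block-diagonal form via (\ref{eig_conversion}).
\begin{verbatim}
import numpy as np

dt, M = 0.01, 5000
rng = np.random.default_rng(0)

def simulate(u):                 # Euler-Maruyama data for dx = (-x+u)dt + dW
    x = np.empty(M + 1); x[0] = 1.0
    for k in range(M):
        x[k+1] = x[k] + dt*(-x[k] + u) + np.sqrt(dt)*0.1*rng.standard_normal()
    return x

psi = lambda v: np.array([1.0, v, v**2])       # monomial dictionary, N = 3

def edmd_matrix(x):
    PsiX = np.column_stack([psi(v) for v in x[:-1]])
    PsiY = np.column_stack([psi(v) for v in x[1:]])
    G = PsiX @ PsiX.T / M
    A = PsiX @ PsiY.T / M
    return np.linalg.pinv(G) @ A

U0, U1 = edmd_matrix(simulate(0.0)), edmd_matrix(simulate(1.0))
Bbar = (U1 - U0) / dt                        # control operator, dictionary coords
lam, V = np.linalg.eig(U0)
Lam = np.diag(np.log(lam.astype(complex)) / dt)  # continuous-time eigenvalues
B = V.T @ Bbar @ np.linalg.inv(V.T)          # control operator, eigenfunction coords
\end{verbatim}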
There are two sources of error in the approximation of the Koopman operator and its spectrum, and these will be reflected in the bilinear representation of the nonlinear system, namely in the $\Lambda$ and $B$ matrices. The first source of error is due to the finite number of basis functions used in the approximation of the Koopman operator. Under the assumption that the choice of basis functions is sufficiently rich and $N$ is large, this approximation error is expected to be small. However, the selection of basis functions is an active research topic with no agreement on the best choice for a general nonlinear system. The second source of error, which is more relevant to this work, arises due to the finite length of data used in the approximation of the Koopman operator. Sample complexity results for control dynamical systems are developed in \cite{sample_complexity} to derive an analytical formula for the approximation of the Koopman operator as a function of data length. We proved that the approximation error between the true Koopman operator and its approximation decreases as $\frac{1}{\sqrt{T}}$, where $T$ is the time length of the data. These sample complexity results are used to determine the data required to achieve the desired level of accuracy of the approximation. In particular, the bilinear representation of the control dynamical system, with the approximation error due to the finite length of data explicitly accounted for, can be written as
\begin{eqnarray}
\dot {\vec z}=(\Lambda +\Delta \Lambda) {\vec z}+u(B+\Delta B){\vec z},\label{bilinear_uncertain}
\end{eqnarray}
where $\Delta \Lambda$ and $\Delta B$ are the approximation errors. Using the sample complexity results developed in \cite{sample_complexity}, we can determine the data length $M$ so that $\parallel \Delta \Lambda\parallel\leq \epsilon_\Lambda$ and $\parallel \Delta B\parallel\leq \epsilon_B$, with $\epsilon_{\Lambda}$ and $\epsilon_B$ being the predetermined acceptable bounds.
\section{Feedback Controller Design}\label{section_main}
The control Lyapunov function provides a powerful tool for the design of a stabilizing feedback controller, which also enjoys some optimality property via the principle of inverse optimality. However, one of the main challenges is providing a systematic procedure to find CLFs. For a general nonlinear system, finding a CLF remains a challenging problem. We exploit the bilinear structure of the nonlinear system in the Koopman eigenfunction space to provide a systematic procedure for computing a control Lyapunov function. We restrict the search for the control Lyapunov function to the class of quadratic Lyapunov functions of the form $V({\vec z})={\vec z}^\top P{\vec z}$. It is important to emphasize that although the Lyapunov function is restricted to be quadratic in the Koopman eigenfunction space $\vec z$, it contains higher order nonlinearities in the original state space $\vec x$. Theorem \ref{bilin_stb} can be stated for the quadratic stabilization of the following bilinear control system.
\begin{eqnarray}
\dot {\vec z}=\Lambda {\vec z}+u B{\vec z}\label{bilinear_sys}
\end{eqnarray}
In the sequel, if there exists a quadratic CLF for the bilinear system (\ref{bilinear_sys}), then we will say that the system (\ref{bilinear_sys}) is {\it quadratic stabilizable}.
\begin{thm}
\label{bilin_stb}
System (\ref{bilinear_sys}) is quadratic stabilizable if and only if there exists an $N \times N$ symmetric positive definite $P$ such that for all non-zero $\vec z \in \mathbb{R}^N$ with $\vec z^\top(P\Lambda+\Lambda^\top P)\vec z \geq 0$, we have $\vec z^\top(PB+B^\top P)\vec z \neq 0$.
\end{thm}
\begin{proof}
Sufficiency $(\Leftarrow)$: Suppose there is a symmetric, positive definite $P$ that satisfies the condition of Theorem \ref{bilin_stb}. We can use it to construct $V(\vec z) = \vec z^\top P\vec z$ as our Lyapunov candidate function, and the derivative of $V$ with respect to time along trajectories of (\ref{bilinear_sys}) is given by
\begin{align*}
\dot V & = \vec z^\top P\dot{\vec z} + \dot{\vec z}^\top P\vec z \nonumber \\
& = \vec z^\top(P\Lambda+\Lambda^\top P)\vec z + u\vec z^\top(PB+B^\top P)\vec z.
\end{align*}
Since for all $\vec z \neq 0$ we have $\vec z^\top(PB+B^\top P)\vec z \neq 0$ when $\vec z^\top(P\Lambda+\Lambda^\top P)\vec z \geq 0$, we can always find a control input $u(\vec z)$ such that
\begin{align*}
\dot V < 0, \quad \forall \vec z \in \mathbb{R}^N \setminus \{0\}.
\end{align*}
Therefore, $V(\vec z)$ is indeed a CLF for system (\ref{bilinear_sys}).
Necessity ($\Rightarrow$): We will prove this by contradiction. Suppose that system (\ref{bilinear_sys}) has a CLF in the form of $V(\vec z) = \vec z^\top P\vec z$, where $P$ does not satisfy the condition of Theorem \ref{bilin_stb}. That is, there exists some $\bar{\vec z} \neq 0$ such that ${\bar{\vec z}}^\top(P\Lambda+\Lambda^\top P)\bar{\vec z} \geq 0$ but $\bar{\vec z}^\top(PB+B^\top P)\bar{\vec z} = 0$. In this case, we have
\begin{align*}
\dot V(\bar{\vec z}) = \bar{\vec z}^\top(P\Lambda+\Lambda^\top P)\bar{\vec z} \geq 0
\end{align*}
for any input $u$, which contradicts the definition of a CLF. This completes the proof.
\end{proof}
The following convex optimization problem can be formulated to search for a quadratic Lyapunov function for the bilinear system without uncertainty in Eq. (\ref{bilinear_sys}).
\begin{eqnarray}
\label{opt}
\minimize_{t > 0, \ P = P^\top} & \quad t - \gamma{\rm Trace}(P B) \nonumber \\
\subjt & \quad tI - (P\Lambda+\Lambda^\top P) \succeq 0 \nonumber \\
& \quad c^\text{max} I \succeq P \succeq c^\text{min} I
\end{eqnarray}
where $c^\text{max} > c^\text{min} > 0$ are two given positive scalars bounding, respectively, the largest and the smallest eigenvalues of $P$. The variable $t$ here represents an epigraph form for the largest eigenvalue of $P\Lambda+\Lambda^\top P$.
Optimization (\ref{opt}) combines two objectives. On the one hand, we minimize the largest eigenvalue of $P\Lambda+\Lambda^\top P$. On the other hand, we try to maximize the least singular value of $PB+B^\top P$ at the same time. Since it may be difficult to maximize the least singular value of $PB+B^\top P$ directly, we maximize the trace of $PB$ instead and employ a parameter $\gamma > 0$ to balance these two objectives.
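Optimization (\ref{opt}) is a small semidefinite program and can be prototyped, for instance, with the cvxpy modeling package; the sketch below assumes real matrices $\Lambda$ and $B$, and the returned $P^\star$ must still be checked against the condition of Theorem \ref{bilin_stb}.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def find_clf(Lam, B, gamma=2.0, c_min=0.1, c_max=10.0):
    """Solve optimization (opt) for a quadratic CLF candidate V(z) = z' P z."""
    N = Lam.shape[0]
    P = cp.Variable((N, N), symmetric=True)
    t = cp.Variable(nonneg=True)
    I = np.eye(N)
    S = P @ Lam + Lam.T @ P
    constraints = [t * I - 0.5 * (S + S.T) >> 0,  # t I - (P Lam + Lam' P) >= 0
                   c_max * I - P >> 0,
                   P - c_min * I >> 0]
    cp.Problem(cp.Minimize(t - gamma * cp.trace(P @ B)), constraints).solve()
    return P.value
\end{verbatim}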
\begin{remark}
When an optimal $P^\star$ is solved from (\ref{opt}), we still need to check whether it satisfies the condition of Theorem \ref{bilin_stb} or not. If a $P^\star$ fails the condition check, we may tune the parameter $\gamma$ and solve the above optimization again until we obtain a valid $P^\star$. Nevertheless, we observe from simulations (see the multiple examples in our simulation section) that when we choose $\gamma = 2$, optimization (\ref{opt}) always yields an optimal $P^\star$ that satisfies the condition of Theorem \ref{bilin_stb}.
\end{remark}
\begin{remark}
We also need to point out that, compared to searching for a nonlinear CLF for the original nonlinear system (\ref{non_lin_sys}), the procedure for seeking a quadratic CLF for the bilinear system (\ref{bilinear_sys}) is considerably easier and more systematic. Furthermore, a quadratic CLF for the bilinear system is, in fact, non-quadratic (i.e., contains higher order nonlinear terms) for the system (\ref{non_lin_sys}).
\end{remark}
Once a quadratic control Lyapunov function $V(\vec z) = \vec z^\top P\vec z$ is found for bilinear system (\ref{bilinear_sys}), we have several choices for designing a stabilizing feedback control law. For instance, applying the control law (\ref{control2}) or (\ref{control1}) we can construct
\begin{align}
k(\vec z) &= -\beta_k \sgn\big[\vec z^\top(PB+B^\top P)\vec z \big] \\
k(\vec z) &= -\beta_k \vec z^\top(PB+B^\top P)\vec z.\label{control_formula}
\end{align}
Moreover, given a positive semidefinite cost $q(\vec z) \geq 0$, we may also apply the inverse optimality property to design an optimal control via Sontag's formula (\ref{mod_sontag}) to obtain
\begin{align}
k(\vec z) = \begin{cases} -\frac{\vec z^\top(P\Lambda+\Lambda^\top P)\vec z + \sqrt{(\vec z^\top(P\Lambda+\Lambda^\top P)\vec z)^2 + q(x)(\vec z^\top(PB+B^\top P)\vec z)^2}}{\vec z^\top(PB+B^\top P)\vec z} & \text{if } \vec z^\top(PB+B^\top P)\vec z \neq 0 \\ 0 & \text{otherwise}. \end{cases} \label{sontag_formula}
\end{align}
The following algorithm can be outlined for the design of a stabilizing feedback controller from time-series data.
\begin{algorithm2e}[h]\label{algorithm}
\SetKwBlock{Phaseone}{Phase I: Modeling}{end}
\SetKwBlock{Phasetwo}{Phase II: Optimization}{end}
\SetAlgoLined
\KwData{Given open-loop time-series data $\{\vec{x}_k^0\}=\{\vec{x}_0^0,\vec{x}_1^0,\ldots,\vec{x}_M^0\}$, and $\{\vec x_k^1\}$ with $s=1$ in (\ref{eq_s}) both with Gaussian process noise added}
\KwResult{Feedback control $u=k(\vec z)$}
\Phaseone{
Choose $N$ dictionary functions $\varPsi(\vec x):=\begin{bmatrix}\psi_1(\vec{x}) & \psi_2(\vec{x}) & \cdots & \psi_N(\vec{x})\end{bmatrix}^\top$.
\For{$\vec{x}_i, \;i = 0,1,2,\ldots,M$}{
$\varPsi(\vec x_i):=\begin{bmatrix}\psi_1(\vec{x}_i) & \psi_2(\vec{x}_i) & \cdots & \psi_N(\vec{x}_i)\end{bmatrix}^\top$
}
Obtain $\vec G^0$ and $\vec A^0$ matrices
${\vec G^0}=\frac{1}{M} \sum_{m=1}^M \varPsi(\vec{x}_m) \varPsi(\vec{x}_m)^\top$;
${\vec A^0}=\frac{1}{M} \sum_{m=0}^{M-1} \varPsi(\vec{x}_m)\varPsi(\vec{x}_{m+1})^\top$.
Compute $\vec U^0 = ({\vec G}^0)^\dagger {\vec A}^0$, and its eigenfunctions $\phi_j=\varPsi^\top v_j$, where $v_j$ is the $j$th eigenvector of $\vec U^0$ with respect to eigenvalue $\lambda_j$, $j=1,2,\ldots,N$.
Convert to continuous time eigenvalues $\hat\lambda_i = \log(\lambda_i)/\Delta t$
Get $\Lambda = \mathrm{diag}(\hat\lambda_1,\hat\lambda_2,\ldots,\hat\lambda_N)$ by block diagonalization of the eigenvalues $\hat\lambda_i$; use (\ref{eig_conversion}) if $\phi_i$, $\phi_{i+1}$ are complex conjugate.
Obtain the new eigenfunctions $\hat\varPhi(\vec x)$ similarly, where $\hat \phi_i:=\phi_i$ if $\phi_i$ is real-valued, and $\hat \phi_i:=2 {\rm Re}(\phi_i)$, $\hat \phi_{i+1}:=-2{\rm Im}(\phi_i)$ if $\phi_i$ and $\phi_{i+1}$ are complex conjugate.
Replace the dictionary function $\varPsi(\vec x)$ with $\vec z = \hat{\varPhi}(\vec x)$ and repeat Step $2$ to $7$ with the datasets $\{\vec{x}_k^0\}$ and $\{\vec{x}_k^1\}$ to get $\bar{\vec{U}}^0$ and $\bar{\vec{U}}^1$.
Get $B = (\bar{\vec{U}}^1-\bar{\vec{U}}^0)/\Delta t$ }
\Phasetwo{
Solve the following convex problem for the optimal $P^\star$ with $\Lambda$ and $B$,
\begin{eqnarray*}
\minimize_{t > 0, \ P = P^\top} & \quad t - \gamma{\rm Trace}(P B) \nonumber \\
\subjt & \quad tI - (P\Lambda+\Lambda^\top P) \succeq 0 \nonumber \\
& \quad c^\text{max} I \succeq P \succeq c^\text{min} I
\end{eqnarray*}
where $c^\text{max} > c^\text{min} > 0$, $\gamma>0$ are chosen properly.
}
Feedback control $u=k(\vec z)= -\beta_k \vec z^\top(PB+B^\top P)\vec z$ or modified Sontag's formula,
\begin{align*}
k(\vec z) = \begin{cases} -\frac{\vec z^\top(P\Lambda+\Lambda^\top P)\vec z + \sqrt{(\vec z^\top(P\Lambda+\Lambda^\top P)\vec z)^2 + q(x)(\vec z^\top(PB+B^\top P)\vec z)^2}}{\vec z^\top(PB+B^\top P)\vec z} & \text{if } \vec z^\top(PB+B^\top P)\vec z \neq 0 \\ 0 & \text{otherwise}. \end{cases}
\end{align*}
\caption{Data-driven Stabilization Controller design framework}
\end{algorithm2e}
\section{Simulation Results}\label{section_simulation}
\noindent{\bf Example 1: Duffing Oscillator}\\
\noindent The first example we present is the stabilization of the Duffing oscillator. The controlled Duffing oscillator equation is written as follows.
\begin{eqnarray}\label{duffing_sys}
\dot{x}_1 &=& x_2\\\nonumber
\dot{x}_2 &=& (x_1-x_1^3)-0.5x_2+u.
\end{eqnarray}
The uncontrolled Duffing oscillator has three equilibrium points: the two equilibrium points at $(\pm 1,0)$ are stable, and the equilibrium point at the origin is unstable. For identification of the control system dynamics, we excite the system with white noise with zero mean and $0.01$ variance. The continuous-time control equation is discretized with a sampling time of $\Delta t=0.25 s$. In Fig. \ref{fig:duffing}a, we show the sample complexity plot for the approximation error as a function of data length. As proved in \cite{sample_complexity}, the error in the approximation of the $\Lambda$ and $B$ matrices decreases as $\frac{1}{\sqrt{T}}$, where $T$ is the data length. The error plot in Fig. \ref{fig:duffing}a satisfies this rate of decay. The sample complexity results in Fig. \ref{fig:duffing}a are obtained using ten randomly chosen initial conditions and generating time-series data over different lengths of time, ranging from six time steps to 30 time steps. For each fixed time step we compute the $\Lambda$ and $B$ matrices. The errors $\parallel\Lambda-\bar \Lambda\parallel_2$ and $\parallel B-\bar B\parallel_2$ are computed at each fixed time step, where $\bar \Lambda$ and $\bar B$ are computed using data collected over $50$ time steps. The dictionary functions used in the approximation of the Koopman operator have a maximum degree of five, i.e., $21$ basis functions, $N=21$. In particular, the following choice of dictionary functions is made in the approximation.
\[{\varPsi}({\bf x}) = [1,\;x_1,\;x_2,\;x_1x_2,\;\ldots,\;x_1^5,\;x_1^4x_2,\;x_1^3x_2^2,\;x_1^2x_2^3,\;x_1x_2^4,\;x_2^5]\]
For control design, we use the approximations of the $\Lambda$ and $B$ matrices computed over $30$ time steps. The controller is designed using Algorithm \ref{algorithm}.
For this Duffing oscillator example, we use the control design formula in Eq. (\ref{control_formula}). To verify the effectiveness of the designed controller, we simulate the closed-loop system with the \texttt{ode15s} solver in \textbf{MATLAB} starting from $10$ randomly chosen initial conditions within the region $[-1.5,1.5]\times [-1,1]$. In Fig. \ref{fig:duffing}d, we show the closed-loop trajectories in red starting from different initial conditions, overlaid on the open-loop trajectories in blue. We notice that the controller forces the trajectories of the closed-loop system along the stable manifold of the open-loop system before the trajectories slide to the origin. The control plots from different initial conditions are shown in Fig. \ref{fig:duffing}c.
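For readers who wish to reproduce this experiment outside \textbf{MATLAB}, the closed-loop integration can be sketched in Python as follows; the eigenfunction map \texttt{phi\_hat}, the matrices $P$ and $B$, and the gain \texttt{beta\_k} are assumed to come from the identification and CLF steps described above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def closed_loop(t, x, phi_hat, P, B, beta_k):
    z = phi_hat(x)
    u = -beta_k * z @ (P @ B + B.T @ P) @ z          # feedback (control_formula)
    return [x[1], (x[0] - x[0]**3) - 0.5*x[1] + u]   # controlled Duffing field

# Example call, with x0 a (2,) initial condition:
# sol = solve_ivp(closed_loop, (0.0, 20.0), x0, args=(phi_hat, P, B, beta_k))
\end{verbatim}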
\begin{figure}[!htp]
\begin{framed}
\centering
\subfigure[]{\includegraphics[width=1.9in]{Error_A_B_duffing.pdf}}\qquad
\subfigure[]{\includegraphics[width=1.9in]{x_y_t_duffing.pdf}}
\subfigure[]{\includegraphics[width=1.9in]{u_t_duffing1.pdf}}\qquad
\subfigure[]{\includegraphics[width=1.9in]{x_y_duffing2.pdf}}
\caption{Data-driven stabilization of Duffing oscillator. a) Sample complexity error bounds for the approximation of $\Lambda$ and $B$ matrices as the function of data length; b) Closed-loop trajectories vs time from multiple initial conditions; c) Control value vs time from different initial conditions; d) Comparison of closed loop and open loop trajectories in state space.}
\label{fig:duffing}
\end{framed}
\end{figure}
\[\]
\noindent{\bf Example 2: Lorenz System}\\
\noindent The second example we pick is the Lorenz system. The controlled Lorenz system can be written as follows
\begin{eqnarray}\label{L}
\dot{x}_1 &=& \sigma(x_2-x_1)\\\nonumber
\dot{x}_2 &=& x_1(\rho-x_3)-x_2+u\\\nonumber
\dot{x}_3 &=& x_1x_2 - \beta x_3
\end{eqnarray}
where $\vec x\in\mathbb{R}^3$ and $u\in\mathbb{R}$ is the single input. With the parameter values $\rho=28$, $\sigma=10$, $\beta=\frac{8}{3}$, and control input $u=0$, the Lorenz system exhibits chaotic behavior. In this 3D example, we generated the time-series data from $1000$ randomly chosen initial conditions and propagated each of them for $T_{final}=10s$ with sampling time $\Delta t = 0.001s$. For the purpose of identification, the system is excited with a white noise input with zero mean and $0.01$ variance. The dictionary functions ${\varPsi}(\boldsymbol{\bf x})$ consist of 20 monomials of degree at most $D = 3$.
\[\boldsymbol{\varPsi}(\boldsymbol{\bf x}) = [1,\;x_1,\;x_2,\;x_3,\;\ldots,\;x_1^3,\;x_1^2x_2,\;x_1^2x_3,\;x_1x_2x_3,\;\ldots\;x_3^3]\]
The objective is to stabilize one of the critical points $(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1)$ of the Lorenz system. The system is stabilized using the control formula in Eq. (\ref{control_formula}).
To validate the closed-loop control designed using Algorithm \ref{algorithm}, we perform the closed-loop simulation with five randomly chosen initial conditions in the domain $[-5,5]\times[-5,5]\times[0,10]$ and solve the closed-loop system with the \texttt{ode15s} solver in \textbf{MATLAB}. In Fig.~\ref{fig:lorenz}a, we show the open-loop and closed-loop trajectories starting from five different initial conditions and converging to the critical point.
\[\]
\begin{figure}[!htp]
\begin{framed}
\centering
\subfigure[]{\label{noerror}\includegraphics[width=1.9in]{xyz_lorenz5.pdf}}\qquad
\subfigure[]{\label{synerror}\includegraphics[width=1.9in]{x_t_lorenz.pdf}}
\subfigure[]{\label{deadlock}\includegraphics[width=1.9in]{y_t_lorenz.pdf}}\qquad
\subfigure[]{\label{lacksync}\includegraphics[width=1.9in]{z_t_lorenz.pdf}}
\caption{Feedback Stabilization of the Lorenz system. a) Comparison of open loop and closed loop trajectories in state space; b) $x(t)\; vs $ time, open loop (blue) and closed loop (red); c) $y(t)\; vs $ time, open loop (blue) and closed loop (red); d) $z(t)\; vs $ time, open loop (blue) and closed loop (red).}
\label{fig:lorenz}
\end{framed}
\end{figure}
\noindent{\bf Example 3: IEEE 9 bus Power System}
\noindent In the last example, we consider the IEEE 9 bus system, the line diagram of which is shown in Fig.~\ref{fig:9bus}a. The model we are using is based on the modified 9 bus test system in~\cite{Sauer_pai_book}. The system consists of 3 synchronous machines (generators) with IEEE type-I exciters, loads and transmission lines.
The synthetic data are generated using the Power System Toolbox (PST) in MATLAB~\cite{207380}. The 9 bus power system network can be described by a set of differential algebraic equations (DAE); consider a power system model with $n_g$ generator buses and $n_l$ load buses. The closed-loop generator dynamics for the $i$th generator bus can be represented by a $2^{nd}$ order dynamical model with the control $u$:
\begin{eqnarray}\label{generator_dynamic_eq}
\begin{aligned}
&\frac{d\delta_i}{dt} = \omega_i - \omega_s \\
&\frac{d\omega_i}{dt} = \frac{1}{M_i}\left(P_{m_i}-\sum_{j\in {\cal N}_i}\frac{E_i E_j}{X_{ij}}\sin(\delta_i-\delta_j)- D_i (\omega_i-\omega_s)\right)+u_i \\
\end{aligned}
\end{eqnarray}
where $\delta_i$ and $\omega_i$ are the dynamic states of the generator, corresponding to the generator rotor angle and the angular velocity of the rotor, respectively. The values of the other parameters are chosen as follows: $\omega_s=1$, the generator masses $M_i = 23.64,\; 6.4,\;3.1$, the internal dampings $D_i = 0.05,\;0.95,\;0.05$, and the generator powers $P_{m_i}= 0.719,\;1.63,\;0.85$ for $i=1,2,3$. The values of $X_{ij}$ are taken from the PST in MATLAB.
For the approximation of the Koopman operator and eigenfunctions, the time-series data are generated from 100 initial conditions. Each initial condition is propagated for $T_{final}=10s$ with $\Delta t=0.01s$. The dictionary functions in this example are chosen as 84 monomials of degree at most $D=3$. The data-driven stabilizing control is designed using the modified Sontag's formula in Eq. (\ref{mod_sontag}), where $q(x) = 10x^\top x$. The simulation results for this example are shown in Fig. \ref{fig:9bus}. We notice that the open-loop system is marginally stable with sustained oscillations. The objective of the stabilizing controller is to stabilize the frequencies to $\omega_s=1$; the point for the stabilization of the $\delta$ dynamics is determined by $P_{m_i}$. Simulation results show that the data-driven stabilizing controller is successful in stabilizing the power system dynamics.
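A minimal implementation of the generator dynamics (\ref{generator_dynamic_eq}) used to produce such data is sketched below; for simplicity it assumes an all-to-all reduced network, whereas the actual neighbor sets ${\cal N}_i$ and reactances $X_{ij}$ come from PST.
\begin{verbatim}
import numpy as np

def swing_rhs(t, state, M, D, Pm, E, X, u, omega_s=1.0):
    """RHS of the generator dynamics; state = [delta_1..n, omega_1..n]."""
    n = len(M)
    delta, omega = state[:n], state[n:]
    ddelta = omega - omega_s
    domega = np.empty(n)
    for i in range(n):
        Pe = sum(E[i]*E[j]/X[i, j]*np.sin(delta[i] - delta[j])
                 for j in range(n) if j != i)
        domega[i] = (Pm[i] - Pe - D[i]*(omega[i] - omega_s))/M[i] + u[i]
    return np.concatenate([ddelta, domega])

# Parameters quoted in the text (E and X come from PST):
# M = [23.64, 6.4, 3.1]; D = [0.05, 0.95, 0.05]; Pm = [0.719, 1.63, 0.85]
\end{verbatim}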
\begin{figure}[!htp]
\begin{framed}
\centering
\subfigure[]{\label{noerror}\includegraphics[width=2.2in]{IEEE_9_bus_diag.pdf}}\qquad
\subfigure[]{\label{synerror}\includegraphics[width=1.9in]{u_t_9bus.pdf}}
\subfigure[]{\label{deadlock}\includegraphics[width=1.9in]{x1_t_9bus.pdf}}\qquad
\subfigure[]{\label{lacksync}\includegraphics[width=1.9in]{x4_t_9bus.pdf}}
\caption{Stabilization of IEEE nine bus system. a) Line diagram for IEEE nine bus system; b) Control value vs time; c) Comparison of open loop and closed loop trajectory for phase angle $\delta_1(t)$ of generator 1; d) Comparison of open loop and closed loop trajectory for frequency $\omega_1(t)$ of generator 1.
}\label{fig:9bus}
\end{framed}
\end{figure}
\section{Conclusion}\label{section_conclusion}
In this chapter, we provided a systematic approach for the data-driven feedback stabilization of nonlinear control systems. A data-driven approach is proposed for the identification of the nonlinear control system and the design of a control Lyapunov function-based stabilizing feedback controller. The bilinear structure of the control system in Koopman eigenfunction coordinates is exploited to provide a convex optimization-based approach for the search of a control Lyapunov function. Simulation results are presented to verify the applicability of the developed framework.
\bibliographystyle{spmpsci}
\section{Introduction}
Phase-field equations are used in the field of phase transitions. Recently, the relationships between the physical properties and the parameters of these equations have been determined, and as a result, phase-field equations have been established as a standard model for the simulation of phase transition behavior in the engineering field.\\
\qquad Phase-field equations are also applied to the double layer model of the interface between fluids, for example, the interfaces of liquid-gas and liquid-liquid contact. However, in the cases of gas-solid and liquid-solid contact, we should modify these equations so that the diffusion layer is not double and its width tends to zero.\\
\qquad The singular limit of the phase-field equations has been investigated theoretically and numerically. Caginalp and co-researchers [1-4] have studied the reduction of these equations in the singular limit to Stefan problems and the Hele-Shaw problem. Furthermore, H. Soner [11] showed that the solution of the phase-field equations converges to that of the Stefan problem. The asymptotic behavior of solutions of the Stefan problem was calculated by Carslaw and Jaeger [5] and Friedman [6]. In their studies, the moving interface distance in the process of solidification grows as the square root of time; this result coincides with the time dependence of the free moving interface length, with two points from the origin, of the phase-field equations. \\
\qquad It is sufficient to describe the phase transition of a single component by the phase-field equation. Therefore, we analyze the equation and obtain its asymptotic behavior in order to investigate whether the equation reduces to the Stefan problem.\\
\qquad This paper is organized as follows. Sec.2 is devoted to a review of studies by many researchers in order to clarify the relations among the theories concerning this problem for applied mathematicians, physicists and engineers. Sec.3 is a numerical example.\\
\section{Review on Mathematics of This Problem}
Caginalp and co-researchers reported that the phase-field equations converge, in the singular limit, to the equations of the Stefan problem. These equations, for the temperature $u$ and the phase field $\varphi$, consist of a heat equation:
$$
u_t+\ell\varphi_t=\Delta u,
$$
and a Ginzburg-Landau equation:
$$
\epsilon\varphi_t=\epsilon\Delta\varphi -\frac{1}{\epsilon}W^\prime (\varphi )+\ell (\varphi )u,
$$
where $\ell$ is a latent heat and $W$ is a double-well potential whose wells, of equal depth, correspond to the solid and liquid phases. \\
\qquad When $\epsilon\to 0$, the velocity $v$ of the moving boundary in one dimension, and that of the radius in the cylindrical or spherical case, are given by the following Stefan problem, where $\Gamma (t)$ is the interface separating the two regions $\Omega(t) = \{x: \varphi (t, x) = - 1\}$ and $\{\varphi =1\}$:\\
$$
\left\{
\begin{array}{l}
u_t-\Delta u =0,\quad \mbox{in}\,\,\Omega(t) = \{x: \varphi (t, x) = \pm 1\}\\\\
\displaystyle v=\frac{1}{2}\left[\frac{\partial u}{\partial n}\right]_\Gamma\\\\
\displaystyle u=-\frac{m}{2\ell}[\kappa -\alpha v]_\Gamma
\end{array}
\right.
$$
where $\alpha$ is a positive parameter, $[\frac{\partial u}{\partial n}]_\Gamma$ is the jump of the normal derivatives of $u$ (from solid to liquid), and $m=\int_{-1}^1\left(2W(\varphi)\right)^{1/2}d\varphi$. On the other hand, Iida [7] suggested that those equations converge to the equations of the two-phase Stefan problem in the same singular limit. The fundamental idea of these results is based on the work on the model of ``Caginalp type''. More precisely, these results can be derived by the method of matched asymptotic expansions between the inner expansion and the outer expansion. \\
\qquad Soner reported that the solution of the phase-field equations converges to that of the Stefan problem when $\epsilon\to 0$. Namely, when $\epsilon\to 0$, the solution $(u^\epsilon, \varphi^\epsilon)$ of
$$
\left\{
\begin{array}{l}
u^\epsilon_t+\ell\varphi^\epsilon_t=\Delta u^\epsilon,\\\\
\displaystyle\epsilon\varphi^\epsilon_t=\epsilon\Delta\varphi^\epsilon -\frac{1}{\epsilon}W^\prime (\varphi^\epsilon )+\ell (\varphi^\epsilon )u^\epsilon,
\end{array}
\right.
$$
converges to the solution $(u,\varphi)$ of
$$
\left\{
\begin{array}{l}
u_t-\Delta u= -(h(\varphi))_t\\\\
\displaystyle u=\pm\frac{m}{2\ell}[\kappa -\alpha v]_\Gamma
\end{array}
\right.
$$
where $+$ stands for solidification and $-$ melting, and $\ell (\varphi)=h^\prime (\varphi)=\sqrt{2W(\varphi)}$.\\
\qquad From these results, the velocity of the moving interface is obtained in the limit of the phase-field equations. The velocity of the interface without interface tension is
$$
v=\pm\frac{2\ell}{\alpha m}u,
$$
and when the temperature $u=0$ on $\Gamma$, the velocity is
$$
v=\pm\frac{1}{\alpha}\kappa
$$
where $\alpha$ is a positive constant.
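The constant $m$ above is easily evaluated numerically. The short Python check below assumes, purely as an example, the standard quartic double well $W(\varphi)=(1-\varphi^2)^2/4$ with wells of equal depth at $\varphi=\pm 1$; the review only requires the latter property.
\begin{verbatim}
# Numerical check of m = \int_{-1}^{1} sqrt(2 W(phi)) dphi for an assumed
# double-well potential W(phi) = (1 - phi^2)^2 / 4.
import numpy as np
from scipy.integrate import quad

W = lambda p: 0.25 * (1.0 - p**2)**2
m, _ = quad(lambda p: np.sqrt(2.0 * W(p)), -1.0, 1.0)
print(m, 2.0 * np.sqrt(2.0) / 3.0)   # both ~ 0.9428

# With u = 0 on Gamma the normal velocity reduces to v = kappa / alpha;
# otherwise v = 2 * ell * u / (alpha * m), up to the sign convention above.
\end{verbatim}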
\section{Numerical Example}
We consider the following equations,
\begin{equation}
\Phi_t(r,t)=\epsilon^2\Delta\Phi +f(\Phi).
\end{equation}
Suppose that the width $\epsilon$ of the interface between solid and liquid is sufficiently small in comparison with the radius $R(t)$ of the solid, as shown in Fig.~1. Let us consider the time variation of $R(t)$. In the case that $\epsilon/R\ll 1$, we can write
\begin{equation}
\Phi (r,t)=\Phi_0(r\pm R(t)).
\end{equation}
Substituting (3.2) into (3.1), we get
\begin{equation}
\pm\dot{R}\Phi_0^\prime =\epsilon^2\left(\Phi_0^{\prime\prime}+\frac{d-1}{r}\Phi_0^\prime\right)+f(\Phi_0).
\end{equation}
where $d$ is the spatial dimension, 2 or 3.\\
\qquad Since the equilibrium is assumed at the interface, the following equation is given, namely,
\begin{equation}
\epsilon^2 \frac{d^2}{dx^2}\Phi_0+f(\Phi_0)=0
\end{equation}
holds. Here $\Phi_0^\prime$ is large in the neighborhood of $r=R$ and rapidly decreases to zero outside that neighborhood. Hence we replace $\Phi_0^\prime/r$ by $\Phi_0^\prime/R$ and, as a result, from $(3.3)$ we get
\begin{equation}
\dot{R}\cong\epsilon^2\frac{d-1}{R},
\end{equation}
and
\begin{equation}
(R(t))^2=R_0^2+2\epsilon^2(d-1)t
\end{equation}
where $R_0$ is the initial radius. The relation of $R(t)^2$ vs. $t$ is in good agreement with the tendency of the exact solutions [5] of the Stefan problem.\\
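Relation $(3.6)$ can be verified directly by integrating $(3.5)$; the following Python sketch uses illustrative parameter values (all of them assumptions).
\begin{verbatim}
# Integrate dR/dt = eps^2 (d - 1) / R and compare with Eq. (3.6),
# R(t)^2 = R0^2 + 2 eps^2 (d - 1) t.  Parameter values are illustrative.
import numpy as np

eps, d, R0, dt, T = 0.05, 3, 1.0, 1e-3, 10.0
R = R0
for _ in range(int(T / dt)):
    R += dt * eps**2 * (d - 1) / R        # explicit Euler step of Eq. (3.5)
exact = np.sqrt(R0**2 + 2.0 * eps**2 * (d - 1) * T)
print(R, exact)                           # the two values agree closely
\end{verbatim}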
\begin{figure}[h]
\begin{center}
\includegraphics[width=7cm,clip]
{fig1.pdf}
\caption{$R(t)$, $R_0$ and $x,y$-coordinate}
\end{center}
\end{figure}
\qquad Next, we consider the velocity of the interface, which depends on the temperature. The control of the width of the diffusion layer by a parameter of the phase-field equations is investigated in order to realize the singular limit of the phase-field equations numerically. Karma and Rappel [8] in 1996 derived parameters for the so-called thin-interface limit of the phase-field equations, where the interface thickness can be controlled to be small, and the classical interface conditions are satisfied for a finite thickness. Their analysis allowed for the first time fully resolved computations to be made for three-dimensional dendrites with arbitrary interface kinetics, by Karma and Rappel [9] in 1998.\\
\qquad We used their model to simulate the Stefan problem by the phase-field equations. In this simulation, we took the double-well potential as
\begin{equation}
W(\varphi)=-\frac{\varphi^2}{2}+\frac{\varphi^4}{4}
\end{equation}
and
\begin{equation}
\frac{\partial u}{\partial t}=\Delta u+\frac{1}{2}\frac{\partial\varphi}{\partial t}
\end{equation}
\begin{equation}
\frac{\partial\varphi}{\partial t}=\Delta\varphi -W^\prime (\varphi)+\lambda u
\end{equation}
The parameter $\lambda$ is introduced in the right-hand side of the equations. The parameter $\lambda$ controls the thickness of the diffusion layer between solid and liquid, and is similar to the one used by Caginalp and others (e.g. see MacFadden, {\it et. al.} (1993) [10]).\\
\qquad Now, we consider the steady state of the one-dimensional phase-field equation in order to compare with the sharp-interface model. The interface and boundary conditions of the sharp-interface equations are
\begin{equation}
V=\frac{\partial u}{\partial x},
\end{equation}
\begin{equation}
u_i=-\beta V,
\end{equation}
\begin{equation}
u(+\infty)=-\delta.
\end{equation}
where, in equation $(3.11)$, the subscript $i$ stands for the interface. Using these equations, we obtain the solution of the steady-state equations as
\begin{equation}
V=\frac{\delta -1}{\beta}
\end{equation}
\begin{equation}
u=\exp [-Vx]-\delta
\end{equation}
in the liquid $(x\geqq 0)$ and $u=1-\delta$ in the solid $(x\leqq 0)$.\\
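These formulas are straightforward to evaluate. Note that for the exponential in $(3.14)$ to satisfy the far-field condition $(3.12)$, one needs $V>0$, i.e.\ $\delta >1$ from $(3.13)$; the Python sketch below therefore takes $\delta=1.2$ as an assumed value, while $\beta=0.2572$ is borrowed from the figure captions.
\begin{verbatim}
# Evaluation of the steady sharp-interface solution (3.13)-(3.14).
import numpy as np

beta, delta = 0.2572, 1.2
V = (delta - 1.0) / beta             # Eq. (3.13): V ~ 0.778
x = np.linspace(0.0, 10.0, 201)
u_liquid = np.exp(-V * x) - delta    # Eq. (3.14) in the liquid, x >= 0
u_solid = 1.0 - delta                # constant value in the solid, x <= 0
print(V, u_liquid[0], u_liquid[-1])  # u decays from 1 - delta to -delta
\end{verbatim}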
\qquad In one dimension, phase-field equations take the form
\begin{equation}
\frac{\partial\varphi}{\partial t}=\frac{\partial^2 \varphi}{\partial x^2}+\left(\varphi -\varphi^3\right)+\lambda u
\end{equation}
\begin{equation}
\frac{\partial u}{\partial t}=\frac{\partial^2 u}{\partial x^2}+\frac{1}{2}\frac{\partial\varphi}{\partial t}.
\end{equation}
The steady-state growth equations in the moving frame of the interface yield
\begin{equation}
V\frac{\partial\varphi}{\partial x}+\frac{\partial^2\varphi}{\partial x^2}+\varphi-\varphi^3+\lambda u=0,
\end{equation}
\begin{equation}
V\frac{\partial u}{\partial x}+\frac{\partial^2 u}{\partial x^2}-\frac{V}{2}\frac{\partial\varphi}{\partial x}=0.
\end{equation}
The solution of these equations, with $u$ subject to the far-field boundary condition, determines the planar interface velocity as a function of undercooling.\\
\qquad We numerically analyzed equations $(3.17)$ and $(3.18)$. Figures 2 and 3 show the profiles of $\varphi$ and $u$. The thickness of the interface depends on the parameter $\lambda$: the thickness of the $\varphi$ and $u$ profiles decreases as $\lambda$ decreases. This procedure of the thin-interface limit gives the solution of the Stefan problem as the singular limit of the phase-field equations.
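A minimal explicit finite-difference scheme for time-stepping $(3.15)$ and $(3.16)$ is sketched below in Python. The initial front, the grid, the time step and the far-field value are all assumptions (and which sign of $\varphi$ represents the solid depends on the sign conventions above), so the sketch only illustrates the discretization, not the precise runs behind Figs.~2 and 3.
\begin{verbatim}
# Explicit finite differences for Eqs. (3.15)-(3.16); parameters assumed.
import numpy as np

lam, delta = 0.3, 0.6
L, N, dt, steps = 200.0, 2001, 2e-3, 20000
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]                          # dt/dx^2 = 0.2, stable for Euler
phi = -np.tanh(x / np.sqrt(2.0))          # tanh front centered at x = 0
u = np.where(x < 0.0, 0.0, -delta)        # undercooled field ahead of it

def lap(f):                               # 1D Laplacian, zero-flux ends
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0], g[-1] = g[1], g[-2]
    return g

for _ in range(steps):
    dphi = lap(phi) + (phi - phi**3) + lam * u   # RHS of Eq. (3.15)
    u = u + dt * (lap(u) + 0.5 * dphi)           # Eq. (3.16)
    phi = phi + dt * dphi

print(x[np.argmin(np.abs(phi))])          # track the front (zero crossing)
\end{verbatim}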
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm, height=13cm, clip]
{phasefield63.pdf}
\caption{Profiles of $\phi$ and $u$: $\phi:-$, $u:\cdots$, under the conditions $\beta =0.2572$, $\delta=0.6$, $\lambda =0.3$.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm, height=13cm, clip]
{phasefield71.pdf}
\caption{Profiles of $\phi$ and $u$: $\phi:-$, $u:\cdots$, under the conditions $\beta =0.2572$, $\delta =0.7$, $\lambda =0.1$.}
\end{center}
\end{figure}
\section{Conclusion}
\qquad Rearranging the theorems on the phase-field equations in the singular limit of the Stefan problem, we reviewed the previous work reported by mathematicians. The velocity of the moving interface between solid and liquid was determined in a simple manner.
|
1,116,691,500,337 | arxiv | \section{Introduction}
Biolocomotion in fluids is in many cases influenced by the presence of a boundary. A well known observation is the case of bird flight near a surface, where the animal can glide with a fixed wing configuration for long distances without loss of altitude \cite{Withers:1977, Blake:1983}. This so-called \emph{ground effect}, which is also of importance in the aerodynamics of aircraft \cite{Staufenbiel:1988} and cars \cite{Katz:2006}, can account in some cases such as the gliding flight of pelicans for induced drag savings of up to $ 50\%$ \cite{Hainsworth:1988}. The physical mechanisms governing the dynamics of the ground effect in such cases where the lifting surface is steady have been extensively studied (see e.g. \cite{Cui:2010} for a short general review or \cite{Rayner:1991} for an in-depth discussion applied to animal flight). The most often cited mechanisms are related to the reduction of downwash in presence of a substrate. In particular, the fact that induced drag is reduced because wing-tip vortices are inhibited by the presence of the boundary, as well as the enhanced pressure between the lifting surface and the substrate. Moreover, it has been shown that the ground effect acts to increase not only the lift in steady flight but also the thrust and propulsive efficiency in oscillating modes \cite{Tanida:2001,Quinn:2014b}.
In the case of fish, some species such as batoids swim very close to the substrate, making ground effects an unavoidable element of their locomotor strategy \cite{Blevins:2013}. The main kinematic trait of the pectoral fin of batoids is the production of a backward-propagating wave \cite{Blevins:2012,Rosenberger:2001}, and the physics of the interaction of such an undulating flexible body with a close boundary are likely to be if not completely different, at least significantly modified with respect to their steady counterparts cited above. These issues have only very recently been started to be addressed, for instance using heaving flexible panels \cite{Quinn:2014c} where the ground effect was shown to provide notable hydrodynamic benefits in the form of enhanced thrust peaks during the heaving oscillation cycle. In the same manner as Quinn \emph{et al.} \cite{Quinn:2014c}, the experimental setup used in the present study joins the recent flourishing literature on robotic models using elasticity to mimic fish-like swimming kinematics through a passive mechanism \cite{Alben:2012,Ramananarivo:2013,Dewey:2013,Raspa:2014}.
{In the present manuscript we focus on the effect of swimming near a solid boundary, by studying the self-propulsion of a flexible foil along a rectilinear trajectory actuated by pitching oscillations at the leading-edge. The emphasis is given to the cases with large pitch amplitudes at the head of the foil, which end up developing large deformations in the foil.} Although we focus here on the cruising regimes of our artificial foil, the dynamics of this type of large amplitude undulation influenced by a boundary are certainly a crucial issue for natural or bio-inspired systems on a broader spectrum of swimming regimes, such as the fast-start of fish near a wall \cite{Eaton:1991,Mirjany:2011}. We show that the presence of the wall produces an enhancement of the swimming performance in the large amplitude undulation cases, mainly through a favourable redistribution of momentum in the wake. This effect in terms of cruising velocity can give an enhancement of up to 25\% and defines an optimal position of the foil trajectory parallel to the wall at around 0.4 times the characteristic size of the foil used in the present experiments.
{The main goal of this work is thus to study how the self-propulsion of a model flexible foil performing large amplitude oscillations is affected by the presence of a wall. Experimental measurements of cruise velocities, thrust forces and time-resolved velocity flow fields are analysed. The next section describes the experimental setup and methods and is followed by the presentation and discussion of the results. In addition to performance measurements, based on the trajectory tracking of the foil, Particle Image Velocimetry (PIV) measurements are presented, which permit us to relate the observed effects of swimming near a wall to changes in the wake vortex topology. At the end of the paper we discuss the use of a proper orthogonal decomposition technique to analyse the changes in the energy distribution among the different components of the experimental velocity fields associated to the effect of swimming near the wall.}
\section{Methods}
\subsection{Experimental setup}
The experiment was conducted in a water tank ($900 \times 800 \times 500$ mm$^3$), where a model of a self-propelled undulatory foil was allowed to move along the rectilinear direction imposed by an air bearing, installed outside the tank (see figure \ref{fig_setup}). The foil was made of a rectangular flexible Mylar foil of thickness 130 $\mu$m, chord $L=110$mm and span $W=100$mm, giving an aspect ratio $AR=W/L=0.9$. The foil was held at one of its edges by a cylindrical shaft of diameter 5mm, acting as the head of the foil. {Although three-dimensional structures are inherent to this type of flows because of edge effects, the quasi-two-dimensional hypothesis can be justified here because of the aspect ratio used for the foil, as other authors have previously suggested \cite{Buchholz:2008}.} The lowest natural frequency of the foil in water $f_0=0.42$ Hz was measured from the response of the foil to an impulse perturbation of the trailing edge as in \cite{Paraz:2014}. A pitching oscillation was imposed through this shaft by means of a stepper motor supported by the moving carriage of the air bearing (see also \cite{Raspa:2013}). A motor driver card was used to control in time the angular position of the shaft, with 0.5$^{\circ}$ of accuracy. A sinusoidal pitch motion was imposed to the shaft yielding to a smooth travelling wave along the foil, providing the desired undulatory kinematics. The self-propelled foil's speed was obtained from time series of the position ($x(t)$), measured using an ultrasonic proximity sensor with an accuracy of 3 mm (see figure \ref{fig_setup}). Additionally, the deformation of the foil was obtained from high-speed video recordings.
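The natural frequency quoted above can be read off an impulse-response recording with a simple spectral peak search. The Python sketch below uses a synthetic ringdown signal; the sampling rate, damping ratio and record length are assumptions.
\begin{verbatim}
# Estimating a lowest natural frequency (f0 ~ 0.42 Hz) from an
# impulse-response trace of the trailing edge (synthetic signal here).
import numpy as np

fs, T, f0, zeta = 100.0, 30.0, 0.42, 0.05      # assumed values
t = np.arange(0.0, T, 1.0 / fs)
y = np.exp(-zeta * 2*np.pi*f0 * t) * np.sin(2*np.pi*f0 * t)

spec = np.abs(np.fft.rfft(y * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(freqs[np.argmax(spec)])   # ~0.42 Hz within the 1/T resolution
\end{verbatim}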
The parameters controlled in the experiments were the swept angle ($\theta_0$), the frequency of the pitch motion ($f$) and the gap ($d$). The pitch motion imposed to the shaft or foil's head can be described by the harmonic expression $\theta=0.5\theta_0\sin{(2\pi f t)}$. {The pitching frequency was stepped with increments of 0.5Hz between each experimental case, except for the case with $\theta_0$=240 degrees, in which the maximum frequency that the stepper motor could achieve was 3.3Hz instead of 3.5Hz.} The third important experimental parameter was the distance to the wall ($D$), written in dimensionless form ($d=D/L$), using the chord of the foil ($L$). {Six distances to the wall were investigated, with dimensionless distances to the wall between 0.25 and 1.54. The strongest effect was observed for separations to the wall in the range 0.25-0.45. The wall effect was considerably weaker for d$>$0.45, with very small velocity and thrust variation. Other distances $d$ were investigated between 0.55 and 1.54 showing practically no differences. Only the largest of those is shown here, corresponding to $d$=1.54.} The Reynolds number based on the foil length ($Re=UL/\nu$), $\nu$ being the kinematic viscosity of the fluid, was between 2200 and 19000. {We recall that the experiment is conducted in a still water tank, so that $U$ is the self-propelled swimming speed and there is no externally imposed free stream, which would have brought the additional effect of the boundary layer near the wall. The latter has been addressed by other authors \cite{Quinn:2014b}, who have studied the effect of the boundary layer in a rigid panel with ground effect.} The parameter space explored for this work ended up in more than 150 experimental cases summarised in Table \ref{t1}.
\begin{figure}[t]
\centering
\includegraphics[width=0.62\linewidth]{fig1_exp.pdf} \caption{Experimental set up: (a) lateral and (b) top views.} \label{fig_setup}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|}
\hline
$d$& 0.25 & 0.3 &0.38 &0.45 & 0.55 & 1.54 \\
\hline
$\theta_0$&\multicolumn{6}{|c|}{$f$ (min : step : max)}\\
\hline
40$^{\circ}$ &\multicolumn{6}{c|}{ 1.5 : 0.5 : 5 (Hz) }\\
80$^{\circ}$ &\multicolumn{6}{c|}{ 1 : 0.5 : 4 (Hz) }\\
160$^{\circ}$ &\multicolumn{6}{c|}{ 0.5 : 0.5 : 4 (Hz) }\\
240$^{\circ}$ &\multicolumn{6}{c|}{ 0.5 : 0.5 : 3.3 (Hz) }\\
\hline
\end{tabular}
\end{center}
\caption{Parameters of the experiment.}\label{t1}
\end{table}
\subsection{Particle image velocimetry setup}
In order to investigate the flow around the foil, Digital Particle Image Velocimetry (DPIV) was done to obtain two-dimensional velocity fields. DPIV data were acquired using a system based on a 20mJ Nd-YLF double pulse green laser that produced a planar light sheet, and a high-speed camera at full 1632$\times$1200 pixel resolution, synchronised with the laser in order to capture the illuminated particle cloud images. The flow was seeded using 20~$\mu$m polyamide particles. A total of 2000 images were recorded for each experiment at a rate of 300, 350 or 400 images/second depending on the frequency of the foil oscillation. Before the velocity fields were calculated, the foil projection was removed from each image by applying a mask able to detect the outline of the foil at each instant in time. Two-dimensional velocity fields were computed by applying a Fast Fourier Transform (FFT) based multipass window-deformation technique \cite{Willert_EiF91}. The algorithm evaluated the images in two steps, first with an interrogation area of $64 \times 64$ pixels and after reducing the size of the window to $40 \times 40$ pixels, all with $50\%$ overlap. Two different types of experiment were measured with DPIV. In some cases, the foil was allowed to move freely along the direction imposed by the rail of the air bearing system (free swimming configuration). In the other type of experiments, the foil was kept at a fixed position by locking the rail of the air bearing system (stationary foil configuration). All DPIV interrogations were made at an horizontal plane located at the middle of the foil's height. {The laser was mounted in the back of the water tank, illuminating the foil from the trailing edge (see Fig.~\ref{fig_setup}b)}. The camera was placed below the tank looking upwards, covering a field of view of approximately 25 cm in the direction of motion and 12.6 cm transversely (see Fig.~\ref{fig_setup}a).
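The core of such an FFT-based evaluation can be illustrated on a single interrogation window: the location of the cross-correlation peak between two consecutive windows gives the mean particle displacement. The Python sketch below uses a synthetic window pair with a known shift and deliberately omits the multipass window-deformation refinement of the actual algorithm.
\begin{verbatim}
# FFT cross-correlation of one 64 x 64 interrogation window pair.
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((64, 64))                    # window from frame 1 (synthetic)
b = np.roll(a, shift=(3, -2), axis=(0, 1))  # frame 2: known 3 px / -2 px shift

corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# unwrap displacements larger than half the window size
dy, dx = [d - 64 if d > 32 else d for d in (dy, dx)]
print(dy, dx)                               # recovers the imposed shift (3, -2)
\end{verbatim}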
\section{Results and discussion}
\subsection{Foil kinematics}
The undulation, tail amplitude and wavelength are influenced by the distance to the wall and play an important role in the type of the wake and in the swimming performance. In figure \ref{kine} the undulation kinematics of the foil is shown for two experiments: a case near the wall in Fig. ~\ref{kine}(a), and a case with no influence of the wall in Fig.~ \ref{kine}(b). It can be readily seen that the peak-to-peak lateral excursion of the tip is markedly influenced by the presence of the wall.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\linewidth]{fig2_kine.pdf} \caption{Sequence of motion of the foil for $\theta_0=160^{\circ}$ and $f=1.5hz$ for two different distances to the wall (a) $d=0.3$ and (b) $1.54$. The foil swims from left to right. The dotted and black lines denote the swimming direction (the trace of the head of the foil) and the position of the wall, respectively.
}\label{kine}
\end{figure}
The envelope of the trailing-edge motion of the foil is obtained using the Hilbert transform of the time series of figure \ref{kine}. This is shown in figure \ref{hl}, where the top and bottom rows correspond to two different pitch amplitudes of 160$^{\circ}$ and 240$^{\circ}$ respectively, and each column corresponds to one of three values of $d$. The two cases with $\theta_0=160$$^{\circ}$ and 240$^{\circ}$ shown are the largest swept angles tested and they mimic the real motion of the backward-propagating wave along an animal, as in \cite{Blevins:2013}, \cite{Blevins:2012} and \cite{Rosenberger:2001}. Each graph includes two different pitch frequencies. For both pitch amplitudes, the envelopes show larger amplitudes when the foil is far away from the wall, as shown previously by Webb \cite{Webb:1993} and \cite{Webb:2002}. For $\theta_0=240$$^{\circ}$ in figs.~\ref{hl} (d), (e) and (f), the envelopes practically do not vary with pitch frequency. On the other hand, for $\theta_0=160$$^{\circ}$, pitching at a higher frequency produces a smaller envelope amplitude compared to the low frequency, see Figs.~\ref{hl} (b) and (c). However, this does not occur close to the wall ---Fig.~\ref{hl} (a)---; here the high frequency generates a larger amplitude than the low frequency. The ground effect can be noticed especially at the first peak of the envelope, where the amplitude is always higher than at the rest of the peaks of the cycles, as seen in graphs (a), (b), (d), and (e).
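The envelope extraction itself is a one-liner once the analytic signal is available. The sketch below applies \texttt{scipy.signal.hilbert} to a synthetic amplitude-modulated trailing-edge trace; the sampling rate and modulation law are assumptions made for illustration.
\begin{verbatim}
# Envelope of a trailing-edge time series via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs, f = 300.0, 1.5                      # assumed sampling and pitch frequency
t = np.arange(0.0, 10.0, 1.0 / fs)
y = (1.0 + 0.3 * np.exp(-t)) * np.sin(2*np.pi*f*t)   # modulated trace

envelope = np.abs(hilbert(y))           # |analytic signal| = local amplitude
\end{verbatim}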
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig3_hilb.pdf} \caption{Envelopes of the trailing edge of the foil motion at different distances to the wall. The parameters for each case are included as a legend in each frame. The top and bottom rows correspond, respectively, to $\theta_0=160^{\circ}$ and $240^{\circ}$. In the left column $d=0.3$, in the centre column $d=0.38$ and in the right column $d=1.54$. Two different frequencies are plotted: $f=1.5$ Hz (dashed line) and 3.3 or 3.5 Hz (solid line).}\label{hl}
\end{figure}
\subsection{Propulsive force and cruise velocity}
\label{S:Forces}
\begin{figure}[t]
\centering
\includegraphics[height=0.41\linewidth]{fig4a_F.pdf}
\includegraphics[height=0.41\linewidth]{fig4b_U.pdf}
\caption{(a) Propulsive force (thrust) and (b) limit velocity (cruise velocity), versus frequency for different swept angles (40, 80, 160, 240 degrees) and distance to the wall (0.25, 0.3, 0.38, 0.45, 0.55 and 1.54). Dotted lines link the data points for each series corresponding to the 240 degrees forcing to guide the eye. } \label{Fpf}
\end{figure}
The propulsive force $F$ and the cruise velocity $U$ are governed by the kinematics of the foil and the distance to the wall. The thrust force $F$ produced by the foil was calculated from the displacement measurements $x(t)$ as in Raspa \emph{et al.} \cite{Raspa:2013,Raspa:2014}. The measured displacement is fitted by the equation $x(t)=\frac{m}{ \gamma}\log \left [ \cosh \frac{\sqrt{\gamma F}}{m} t \right ]$, which is the solution of $m\ddot{x} + \gamma \dot{x}^2=F$. The latter equation represents a simplified dynamical model of the system in which $m\ddot{x}$ is the inertial term (with a total moving mass $m=2.85$ kg including the body of the foil and its supporting system) and $\gamma \dot{x}^2$ is the hydrodynamic drag term. An iterative optimization process is applied to the analytical solution for $x(t)$, with $\gamma$ and $F$ as unknowns, until estimated and measured values of $x(t)$ converge.
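The fit can be reproduced with any standard nonlinear least-squares routine. The following Python sketch generates synthetic displacement data from the model itself (the true values $F=0.05$ N and $\gamma=2.0$ kg/m are assumptions, and the 3 mm noise mimics the sensor accuracy) and recovers $F$ and $\gamma$.
\begin{verbatim}
# Fit x(t) = (m/gamma) log cosh(sqrt(gamma F)/m t) to displacement data.
import numpy as np
from scipy.optimize import curve_fit

m = 2.85                                       # total moving mass [kg]
def model(t, F, g):
    return (m / g) * np.log(np.cosh(np.sqrt(g * F) / m * t))

t = np.linspace(0.0, 8.0, 400)
rng = np.random.default_rng(2)
x_meas = model(t, 0.05, 2.0) + 0.003 * rng.normal(size=t.size)

(F, g), _ = curve_fit(model, t, x_meas, p0=(0.01, 1.0))
print(F, g, np.sqrt(F / g))                    # thrust, drag coeff., U_limit
\end{verbatim}
Note that the same force balance gives the limit (cruise) velocity directly as $U=\sqrt{F/\gamma}$, since the inertial term vanishes once the foil reaches a steady speed.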
Performance is first analysed by studying how $F$ and $U$ behave as a function of the swept angle $\theta_0$, the imposed pitch frequency $f$ and the dimensionless distance to the wall $d$, see figure \ref{Fpf}. In the figure, different symbols are used to identify distance to the wall, while colours denote the different amplitudes of the pitching oscillation imposed to the head of the foil. The first observation is that the four different sets of pitch amplitudes imposed to the foil define four distinct branches of performance with respect to frequency. The higher the pitch amplitudes, the higher the swimming speed and the thrust produced. In the two branches corresponding to the smaller pitch amplitudes ($\theta_0=40$$^{\circ}$ and 80$^{\circ}$), the effect of increasing pitch frequency on thrust and cruise velocity is relatively mild, and one recognises the shape of the curves reported in previous studies, with a slight peak that corresponds to a resonant behaviour with one of the deformation modes of the foil \cite{Raspa:2014,Quinn:2014,Paraz:2014}. But when the imposed pitch is large ($\theta_0=160$$^{\circ}$ and especially 240$^{\circ}$), the effect of the forcing frequency is crucial: increasing frequency not only determines more rapid increases in thrust and cruising speed, but also determines that the effect of the proximity to the wall, which was undetectable for the lower amplitudes, appears now as an important element for swimming performance.
Considering that the hydrodynamic thrust force at these large Reynolds numbers is expected to scale as the dynamic pressure acting on the propulsive element, the $U$ and $F$ data can be plotted together as $F\propto U^2$ ---see Fig.~\ref{DataAdim}(a)---, where the surface of the foil $S=WL$ and the fluid density $\rho$ have been used in order to obtain a dimensionless thrust coefficient \begin{equation}
C_T=\frac{2F}{\rho U^2 S} \;.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[height=0.41\linewidth]{fig5a_CT.pdf}
\includegraphics[height=0.41\linewidth]{fig5b_Ubar.pdf}
\caption{(a) $F$ vs. $U^2$ and (b) reduced velocity $\bar U=U/fA$ as a function of the dimensionless excitation frequency $f/f_0$ for the same data as figure \ref{Fpf} .} \label{DataAdim}
\end{figure}
The dashed line whose slope is an estimate of the average thrust coefficient was obtained as a linear fit of the data corresponding to $\theta_0=80$$^{\circ}$. It can be seen that while the case of smaller pitching amplitudes ($\theta_0=40$$^{\circ}$) is well described also by this fit, the series corresponding to $\theta_0=160$$^{\circ}$ and $\theta_0=240$$^{\circ}$ deviate notably from the fit roughly for the upper half of the propulsive force range explored in the present experiments. The previous observation is not surprising, since the large amplitude pitching excitation at $\theta_0=160$$^{\circ}$ and 240$^{\circ}$ produces large deformations of the foil, most likely modifying significantly the coefficient of the quadratic drag model used here. Moreover, it is clear from this figure that the proximity to the wall plays thus an important role in the balance of thrust and drag, producing non-trivial behaviours at the large amplitude cases. Figure \ref{DataAdim}(b) presents another usual way of analysing the self-propelled swimming velocity by means of the reduced velocity $\bar U=U/fA$, a dimensionless parameter measuring the ratio of swimming speed to a flapping characteristic speed $f\times A$. Here $A=L\sin(\theta_0/2)$ is the amplitude of the imposed flapping motion, $\theta_0/2$ being the peak pitch angle of the motion $\theta=0.5\theta_0\sin{(2\pi f t)}$. We note that $\bar U$ is the inverse of the Strouhal number $St_A$ and is related to a ``mechanical efficiency'' of the flapping motion. This representation, however, brings no direct clarification to the role of the proximity to the wall in the scatter of the different data series.
\subsection{Wall effect on swimming velocity}
\label{S:WallEffect}
The effect of the distance to the wall can of course be examined directly by comparing the different force or velocity curves in figure \ref{Fpf} as a function of $d$, for each pitching frequency. When the imposed pitch is small ($\theta_0=40$$^{\circ}$ and 80$^{\circ}$), the ground effect is negligible, and all curves collapse over a common curve for each amplitude. But if the pitch amplitude is increased, swimming near or far away from the wall has a dramatic effect on the thrust and on the cruising velocity. The zoomed region in figure \ref{Fpf}(a) permits to examine, as an example, the thrust for a foil forced at $\theta_0=240$$^{\circ}$ and $f=1$ Hz. The maximum value is produced for a distance to the wall $d=0.38$ followed by $d=0.45$, indicating that the ground effect is positive. If the distance is too large ($d\geq0.55$) the wall effect starts to be of less importance, becoming negligible at a distance of $d=1.54$, with thrust points collapsing onto the same values. This behaviour is in agreement with the observations of \cite{Blevins:2013}. On the other hand, for distances to the wall $d\leq0.3$ the ground effect is negative for thrust. The other notable feature at the largest imposed pitch, $\theta_0=240$$^{\circ}$, is the sudden drop in velocity and thrust when the pitch frequency is set to values larger than 1.5 Hz and the foil is at distances to the wall larger than $d=0.45$. The analysis of velocity fields in the next section will be useful to understand this observation.
Figure \ref{vdis} shows an alternative way of looking at the results, by plotting the cruising speed normalised by its value $U_{\mathrm{bulk}}$ away from the wall (i.e. swimming in the bulk). We focus now on the cases where the effect of the wall is significant, which are those corresponding to $\theta_0=160$$^{\circ}$ and $\theta_0=240$$^{\circ}$. The values of $U/U_{\mathrm{bulk}}$ are plotted against $f/f_0$ for all cases in the top panels of Fig. \ref{vdis}, the different markers corresponding to different distances to the wall. The two bottom panels of the figure show $U/U_{\mathrm{bulk}}$ as a function of the normalised distance to the wall $d$, only for a few selected frequencies for clarity. Different behaviours are observed for the two different amplitudes analysed and the main features can be summarised as follows: (1) aside from a few exceptions, the wall has an overall positive effect on swimming speed; (2) the optimal position with respect to the wall evolves as a function of the frequency, and the two different amplitudes tested present different behaviours. For instance, for $\theta_0=160$$^{\circ}$ at the lowest frequency tested, the cases swimming closest to the wall $d=0.25$ -- 0.3 were the best performers, while for $\theta_0=240$$^{\circ}$ the best case was at $d=0.45$; (3) the optimal distance for the $\theta_0=240$$^{\circ}$ case presents a sharp change for frequencies higher than $f/f_0\approx 5$, going from $d\approx 0.45$ down to $d\approx 0.3$.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig6a_U_Ubulk.pdf}
\includegraphics[width=\linewidth]{fig6b_U_Ubulk_vs_d.pdf}\caption{Cruising swimming velocity rendered dimensionless by normalising it with the cruising speed away from the wall $U_{\mathrm{bulk}}$ as a function of frequency and distance to the wall $d$ (see legends).}\label{vdis}
\end{figure}
In what follows we examine the velocity field around the swimming foils in order to pinpoint the fluid dynamical mechanisms responsible for the previous observations.
\subsection{DPIV analysis}
DPIV measurements were performed for two different foil configurations: Stationary swimming configuration (air bearing blocked), and self-propelled free swimming configuration (free to swim along the direction prescribed by the supporting air-bearing rail). DPIV was performed for the reference case without wall effect, and for selected cases with wall effect in which there was an enhancement of propulsion, as seen in section \ref{S:Forces}, that is for cases with large pitch motions and moderate distances to the wall. DPIV measurements of the stationary swimming configuration are used to obtain a global overview of the mean velocity fields, whilst in the free swimming configuration, the analysis is focused on the local instantaneous vorticity fields and the different wake topologies found behind the foil. DPIV data in all figures appear in dimensionless form, with velocities given by $(V_{x},V_y)=(v_x,v_y)/fL$ and vorticities computed as $\omega_{z}L/U$.
\subsubsection{Stationary foil}
\label{S:Stationary}
Contours of the mean velocity field are presented in figures \ref{avgC1} and \ref{avgC3} for the stationary foil. The stream-wise component ($V_{x}$) appears in all these figures on the left column and the transverse velocity ($V_{y}$) on the right one. The figures give a good indication of the momentum distribution in the wake.
Figure \ref{avgC1} is for an experiment with enhanced propulsion due to the wall effect (plots (a) and (b)), as seen in section \ref{S:Forces}, and without wall (plots (c) and (d)) for the $\theta_0=240^{\circ}$ case. The same arrangement of plots appears in figure \ref{avgC3} but for $\theta_0=160^{\circ}$. There are obvious differences introduced by the wall when comparing the plots in both figures row by row. Whilst in the cases without wall effect in the lower rows the mean flow fields are typical of symmetric wakes \cite{Dewey:2011}, the momentum distribution changes considerably due to the effect of the wall, as seen in the upper row of both plots. Regions of high momentum directed along the propulsion direction appear near the wall in both figures, showing clearly one of the causes for propulsion enhancement.
\subsubsection{Self-propelled free foil}
\label{S:Self-propelled}
In addition to the previous mean-flow analysis with the stationary foil, further insight on the mechanisms that govern the ground effect on swimming performance can be obtained by examining the cases with self-propulsion. In this section the foil is free to move along the rail of the air bearing and DPIV has been used to analyse the instantaneous flow patterns in the wake, depending on the main parameters governing the experiments ($d$, $f$ and $\theta_0$).
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig7_avg1.pdf} \caption{Average of the velocity fields, stream-wise $\bar V_x$ in the left column and cross-stream $\bar V_y$ on the right column for: (a) and (b) $d=0.3$, $\theta_0=240^{\circ}$ and $f=2.5$ Hz; (c) and (d) $d=1.54$, $\theta_0=240^{\circ}$ and $f=2.5$ Hz. Dashed black lines denote the position of the trailing edge of the foil and black thick lines represent the wall. The foil swims from left to right.} \label{avgC1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{fig8_avg3r.pdf} \caption{Average of the velocity fields, stream-wise $\bar V_x$ in the left column and cross-stream $\bar V_y$ on the right column for: (a) and (b) $d=0.3$, $\theta_0=160^{\circ}$ and $f=1.5$ Hz; (c) and (d) $d=1.54$, $\theta_0=160^{\circ}$ and $f=1.5$ Hz. Dashed black lines denote the position of the trailing edge of the foil and black thick lines represent the wall. The foil swims from left to right.} \label{avgC3}
\end{figure}
\begin{figure}[t]
\centering
(a)\\ \includegraphics[width=0.8\linewidth]{fig9a_wake1.pdf}\\
(b)\\ \includegraphics[width=0.8\linewidth]{fig9b_wake2.pdf}\\
(c)\\ \includegraphics[width=0.8\linewidth]{fig9c_wakefinAL.pdf}
\caption{Instantaneous vorticity fields and velocity vectors for: (a) $d=1.54$, $\theta_0=240$ degrees and $f= 1.5$Hz; (b) $d=1.54$, $\theta_0=240$ degrees and $f= 3.3$Hz; and (c) $d=0.38$, $\theta_0=240$ degrees and $f= 1.5$Hz. Snapshots at 0$\%$ and 50$\%$ of the cycle are shown on the left and right plots of each row, respectively. The foil swims from left to right. The thick black lines at the bottom in (c) denote the wall. Vorticity colour maps are overlaid on top of the vector velocity field generated by the foil. Blue is used for clockwise vorticity and red is for counter-clockwise. (See text for the description of the vortex labelings in this figure and the following).}
\label{wk1}
\end{figure}
{A nomenclature based on that proposed by Williamson and Roshko \cite{Williamson:1988} to describe the flow structures in the wake of cylinders, is used here to describe the topology of the wake downstream the foil. According to this way of describing wakes, an $S$ is used to denote a single vortex at one side of the wake per shedding cycle. If a $P$ is used, the wake consists of a pair of counter-rotating vortices at one side per shedding cycle. If the same arrangement of vortices is observed at each side of the wake each cycle, a $2$ is placed in front of the $S$ or the $P$. Therefore a $2S$ wake is a wake consisting of a single vortex shed at each side of the wake \footnote{It should be noted that here circulations are reversed with respect to the case of a cylinder wake, the 2S wake being thus the well known reverse B\'enard-von K\'arm\'an (BvK) pattern associated to flapping-based propulsion \cite{Koochesfahani:1987,Anderson:1998,Triantafyllou:2000,GodoyDiana:2008}.} and a $2P$ is a wake made of a pair of counter-rotating vortices at each side. When the observed pattern is different at both sides of the wake, a combination is needed and the symbol $+$ is used. For instance a $P+S$ wake consists of a single vortex in one side and a pair of vortices in the other. In our experiment if there is a combination, the first character before the $+$ symbol denotes the structure observed at the side of the wake without wall, and the second one, after the $+$ indicates the structure at the side of the wall. If the pair of vortices in the $P$ structure is co-rotating, $P^*$ is used.}
The patterns observed in the wake of the foil far away from the wall ($d=1.54$, where the wall effect is negligible), are summarised in table \ref{T:bulkVortesModes} for pitch motions of $\theta_0=160^{\circ}$ and $240^{\circ}$ and 3 pitch frequencies. For the case with $\theta_0=240^{\circ}$, the dominant structure in the wake is the $2P$, a pair of counter-rotating vortices at each side of the wake, as observed in figures \ref{wk1}(a) and \ref{wk1}(b) with pitch frequencies of 1.5 and 3.3 Hz respectively. The two vortices in the $2P$ mode are denoted using capital letters and a subscript to indicate each side of the wake, hence $A_1$ and $B_1$ are the vortices at one side of the wake and $A_2$ and $B_2$ are the two vortices at the other side. The figure shows two different instants in time separated half a cycle. For $\theta_0=160^{\circ}$, when the foil is far away from the wall ($d=1.54$), the $2P$ only appears at the lowest frequency ---vorticity field not shown here but similar to figure ~\ref{wk1}(a)---. For higher frequencies $2S$ and $2P^*$ wakes are developed, as shown in figure \ref{bvk_wake}: (a) $2S$ wake with vortices $A_1$ and $A_2$ at each side of the wake, and (b) $2P^*$ wake with two co-rotating vortices at each side of the wake, $A_1$ and $B_1$ at the upper half and $A_2$ and $B_2$ in the lower part.
\begin{table}[t]
\begin{center}
\begin{tabular}{||l | c | r||}
\hline
\hline
d=1.54 & $\theta_0=160$$^{\circ}$ & $\theta_0=240$$^{\circ}$ \\
\hline
\hline
1.5 Hz & $2P$ & $2P$\\
\hline
2.5 Hz & $2S$ & $2P$\\
\hline
3.5 Hz & $2P^*$ & $2P$ \\
\hline
\hline
\end{tabular}
\caption{Summary of vortex modes found in the experiments in which the wall effect was not important.}
\label{T:bulkVortesModes}
\end{center}
\end{table}
\begin{figure}[t]
\centering
(a)\\ \includegraphics[width=0.8\linewidth]{fig10a_wake4.pdf}\\
(b)\\ \includegraphics[width=0.8\linewidth]{fig10b_wake5.pdf}
\caption{Instantaneous vorticity fields and velocity vectors for: (a) $d=1.54$, $\theta_0=160$ degrees and $f= 2.5$Hz; and (b) $d=1.54$, $\theta_0=160$ degrees and $f= 3.5$Hz. Snapshots at 0$\%$ and 50$\%$ of the cycle are shown on the left and right plots of each row, respectively. Other data as in Fig.~\ref{wk1}.}
\label{bvk_wake}
\end{figure}
We now describe the vortex wakes observed when the foil is closer to the wall, which are summarised in table \ref{T:WallVortesModes}, focusing on the cases where propulsion was improved: first the case of $\theta_0=240^{\circ}$ and $d=0.38$ and then $\theta_0=160^{\circ}$ and $d=0.3$. In both cases the same pitch frequencies are reported for comparison with the cases presented in table \ref{T:bulkVortesModes} without wall. The patterns are hybrid modes and show complex structures because of the effect of the wall. With the largest pitch amplitude, a $P+S$ structure was observed independently of the pitch frequency. A case showing this $P+S$ structure for $\theta_0=240^{\circ}$, $f=3.5$ Hz and a dimensionless distance to the wall of $d=0.38$ appears in figure \ref{wk1}(c), with vortices $A_1$ and $B_1$ in the upper part of the plot and a single vortex $C_w$ at the side of the wake near the wall. The $2P$ structure observed without wall has now changed to a $P+S$ structure if the wall is near the foil. That is, the counter-rotating vortex pair that was observed at the lower part of the measurement window for the case without wall changes to a single vortex $C_w$ that is pushed vigorously downstream due to the existence of a high momentum jet-like region near the wall. This is readily seen in figure \ref{wk1}(c) by observing the distance at which $C_w$ is located with respect to the trailing edge of the foil, compared to the distance of the vortex pair $A_2$ and $B_2$ in figure \ref{wk1}(a).
\begin{table}
\begin{center}
\begin{tabular}{||l | c | r||}
\hline
\hline
Freq & $\theta_0=160$$^{\circ}$ & $\theta_0=240$$^{\circ}$ \\
& d=0.3 & d=0.38 \\
\hline
\hline
1.5 Hz & $P+S$ & $P+S$\\
\hline
2.5 Hz & $S+P$ & $P+S$\\
\hline
3.5 Hz & $P^*+S$ & $P+S$ \\
\hline
\hline
\end{tabular}
\caption{Summary of vortex modes found in the experiments in which the wall effect was important.}
\label{T:WallVortesModes}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig11_evolution3.pdf} \caption{Sequence of instantaneous vorticity fields and velocity vectors. Every 20 frames is presented ($\Delta$t=50 ms) for $d=0.3$, $\theta_0=160$ degrees and $f= 2.5$Hz. The foil swims from left to right and the black thick lines represent the wall at $y/L=0$.} \label{wk6}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig12_visu.pdf} \caption{Flow visualization with fluorescein dye injection and a laser sheet of the vortex structures near the wall effects for $d=0.3$, $\theta_0=160$ degrees and $f= 2$Hz. The time lapse between frames is $\Delta t=250$ ms and the foil moves from left to right.} \label{visu1}
\end{figure}
With the lower amplitude $\theta_0=160^{\circ}$, the structures are clearly dependent on the distance to the wall. At the lowest frequency the $P+S$ is the dominant structure, and at a frequency of 2.5 Hz the $S+P$ structure is seen. Figure \ref{wk6} presents a sequence of 8 DPIV snapshots covering a full pitching cycle for the latter case, with the foil at a dimensionless distance to the wall of $d=0.3$. At the wall side, a single vortex ($A_w$) is shed from the foil, which eventually splits, forming another structure $B_w$ because of the proximity to the wall. On the other side of the wake a single vortex $C_1$ forms, completing the $S+P$ mode in the wake. The flow visualisation with fluorescein dye presented in figure \ref{visu1} confirms this latter observation and the existence of this counter-rotating vortex pair ($A_w$ and $B_w$) at the side of the wall.
The enhancement in propulsion observed in the thrust and velocity measurements presented above can thus be related to clear changes in the vortex dynamics in the wake of the foil. Whilst at the largest pitch amplitude the main structure was a $2P$, with ground effect the dominant structure becomes a $P+S$. If the pitch swept angle is 160$^\circ$, the structures are modified to combinations of single vortices and vortex pairs.
One of the important features observed in the thrust and velocity figures of section \ref{S:Forces} is the dramatic drop in thrust and foil velocity that takes place at the largest pitch angle as the frequency of pitch is increased. The explanation for that phenomenon is clear from figures \ref{wk1}(a) and \ref{wk1}(b), where it can be seen how, without the wall, the increase in frequency yields a large change in the angle ($\alpha$ in the figures) at which the shedding of vortices occurs, showing that the momentum distribution in the wake becomes less beneficial to the direction of swimming. If the foil is near a wall, the result is a change in this momentum distribution that enhances propulsion: this can be seen comparing \ref{wk1}(a) and \ref{wk1}(c), where without wall, vortices $A_1$ and $B_1$ remain unchanged, but with the wall the disappearance of $A_2$ and $B_2$ to form $C_w$ indicates that less energy is dissipated in the wake and a higher-momentum jet-like structure is produced near the wall. This more beneficial momentum distribution was also pointed out in the analysis of the averaged flow fields presented for the stationary configuration in section \ref{S:Stationary}.
\subsection{SPOD analysis}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig13_pod.pdf} \caption{POD kinetic energy of $V_{x}$ and $V_{y}$ for the first four modes (ordered from high to low energy, from left to right on each plot) versus frequency (1.5 and 2.5 Hz). First row: $d=0.3$; second row: $d=1.54$. First and second columns: stream-wise velocity; third and fourth columns: cross-stream velocity.}
\label{PODt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.63\linewidth]{fig14_podr.pdf} \caption{ Comparison first POD mode (left column) and average of velocity fields (right column) for the stream-wise direction. a) $d=0.3$, $\theta_0=160$ degrees and $f= 2$Hz b) $d=0.38$, $\theta_0=160$ degrees and $f= 3$Hz c) $d=0.3$, $\theta_0=240$ degrees and $f= 1.5$Hz d) $d=0.3$, $\theta_0=240$ degrees and $f= 2.5$Hz e) $d=0.38$, $\theta_0=240$ degrees and $f= 3$Hz f) $d=1.54$, $\theta_0=240$ degrees and $f= 1.5$Hz. Dashed white lines denote the position of the trailing edge of the foil and black thick lines represent the wall. The foil swims from left to right.} \label{PD}
\end{figure}
Snapshot Proper Orthogonal Decomposition (SPOD) \cite{Sirovich:1987} has been applied to the velocity DPIV data, following the technique described by Huera-Huarte et al. \cite{HueraHuarte:2009} and recalled in Appendix I. Assuming that the fluctuating part of the flow can be represented by linear combinations of POD modes $\phi_i(x,y)$ and time varying modal coefficients $a_i(t)$,
\begin{equation}
V(x,y,t)=\bar{V}(x,y)+\sum_{i=1}^M a_i(t) \phi_i(x,y)
\label{E:BasisPOD}
\end{equation}
\noindent the SPOD technique permits to study the kinetic energy ($\varepsilon$) distribution of the flow into the most important modes. The $\varepsilon$ associated to the first four most energetic POD modes is shown in figure \ref{PODt}, for both stream-wise and cross-flow components of the velocity. Two different dimensionless distances to the wall (0.3 and 1.54) appear in the figure, for two pitch amplitudes (160 and 240 degrees) and two pitch frequencies (1.5 and 2.5 Hz). The figure shows how the $\varepsilon$ is mostly concentrated in the first POD mode of the stream-wise direction in all cases, with more than 70\%. In the cross-flow component, the energy is shared more uniformly, mainly between the first three POD modes.
The decrease in thrust and propulsive velocity observed (Fig. \ref{Fpf}) for the $\theta_0=240^{\circ}$ case at frequencies higher than 2 Hz when the foil is away from the wall can also be explained through the POD analysis. Indeed, for frequencies higher than 2 Hz, since the momentum structure in the wake is then directed mostly perpendicularly to the swimming direction, the first POD mode at 2.5 Hz has dropped considerably if compared to the 1.5 Hz case (see second row and column in Fig. \ref{PODt}). Another point that can be seen is that, while for $\theta_0=160^{\circ}$ the energy of the different modes does not change noticeably when increasing the driving frequency, for $\theta_0=240^{\circ}$ on the contrary, the energy in the first POD mode does increase with frequency when the wall is present. The latter reflects our previous observation that at these large angles the foil is diverting the momentum in a direction perpendicular to the propulsion direction, and the presence of the wall reorients momentum favourably.
Figure \ref{PD} compares the first stream-wise POD mode (left column) and the average stream-wise velocity fields (right column) of the same cases. The trailing edge of the foil in its rest position is shown in the plots with a dashed white line for the sake of clarity. The POD and the averaged velocity fields appear normalised by the maximum value in the cases shown, for comparison. The plots certify again how the first stream-wise mode is enough to represent the momentum in the wake.
\section{Conclusions}
The experimental data presented in this work show that swimming with large-amplitude undulatory motions at a moderate distance to a wall can have clear advantages in terms of velocity and thrust production. A positive ground (or wall) effect has been observed for the system presented here, when swimming with pitch motions of large amplitude ($\theta_0=160^{\circ}$ and $240^{\circ}$) and for distances to the wall between 0.25 and 0.55 times the chord ($L$) of the foil. Maximum improvements in velocity and thrust have been observed of about 25\% and 45\%, respectively. The results also suggest that for distances of more than 1.5 chord lengths, the ground effect can be neglected, a fact also found by Blevins et al. \cite{Blevins:2013}. The fluid dynamical mechanisms behind this enhancement have been explored by investigating the flow field in the wake of the foil, showing how the wall constrains the distribution of momentum in a direction favourable to propulsion. In addition to the analysis of the mean flow, which exhibits the constrained jet structure in the wake of the foil (Figs~\ref{avgC1} and \ref{avgC3}), the time-resolved vorticity fields show the changes in the wake vortex topology associated to the enhancement of propulsion ---e.g. Fig.~\ref{wk1} (a) and (c)---.
As a point of perspective we can comment on the three-dimensional structure of the wake. Although the hypothesis of quasi-two-dimensionality underlying our analysis (as well as that of most of the literature on simplified model foils) can be partially justified alluding to the aspect ratio of the propulsive appendage, it is clear that the inherent 3D nature of this type of flows needs to be further analysed and included in realistic models. With respect to the present results, in addition to the vortex structures in the $xy$-plane analysed here, the wall will also affect the stream-wise structures in the $yz$-plane which have been recently established as important players in the drag-thrust balance \cite{Raspa:2014,Ehrenstein:2014}. These issues will be the subject of future work.
The results with the present flexible foil excited by a pitching oscillation at its head are in agreement with what has been reported for a foil with heaving excitation \cite{Quinn:2014c}. This is an interesting observation from the point of view of bio-inspired design, where pitching motions associated to the elastic response of an appendage could sometimes be an optimal solution to actuate a robotic setup.
\section*{Appendix I}
A linear eigenvalue problem can be derived using the POD method. Let $\mathbf{V}$ be an ensemble of DPIV data, with $N$ being the total number of available flow fields or snapshots, arranged in column form in such a way that each snapshot is a column whose first half contains the stream-wise velocities and whose second half the cross-flow velocities,
\begin{equation}\label{3a}
\mathbf{V}=\left [ \mathbf{v}^{1} \mathbf{v}^{2}...\mathbf{v}^{N} \right ]
\end{equation}
and the fluctuating part of the flow is,
\begin{equation}
\mathbf{V} = \mathbf{\tilde{V}}-{\bf \bar {v}} =\mathbf{\tilde{V}}-\frac{1}{N}\sum_{n=1}^N{\bf v}^n \qquad n=1,2, ... N
\label{E:fluctV}
\end{equation}
the eigenvalue formulation results in,
\begin{equation}
\mathbf{C}\mathbf{H}^i=\lambda^i \mathbf{H}^i
\label{E:EigSnpPOD}
\end{equation}
where the matrix $\mathbf{C}$ is,
\begin{equation}
\mathbf {C}=\mathbf{V}^T\mathbf{V}
\label{E:EigC}
\end{equation}
The solution of equation \ref{E:EigSnpPOD} consists of $N$ eigenvalues ($\lambda^i$) and the $N\times N$ modal matrix ($\mathbf{H}$), made of column eigenvectors ($\mathbf{H}^n$). The eigenvectors provide a basis to produce the POD modes,
\begin{equation}
\phi^i=\frac{\sum_{n=1}^{N}H^i_n \mathbf{v}^n}{||\sum_{n=1}^{N}H^i_n \mathbf{v}^n||}, \qquad i=1,2,...N
\label{E:EigModes}
\end{equation}
where $||\cdot||$ denotes the 2-norm, calculated as the square root of the sum of the squares of the components of its argument. The result of equation \ref{E:EigModes} is a set of $N$ POD modes. As introduced in equation \ref{E:BasisPOD}, the flow can be expressed as a linear combination of POD modes and POD coefficients,
\begin{equation}
\mathbf{v}^n=\sum_{i=1}^N a_i^n \mathbf{\phi}^i=\mathbf{\Phi}\mathbf{a}^n
\label{E:FluctPartRec}
\end{equation}
hence, once the POD modes are available, the POD coefficients ($\mathbf{a}^n$) can be obtained,
\begin{equation}
\mathbf{a}^n=\mathbf{\Phi}^T\mathbf{v}^n
\label{E:PODcoef}
\end{equation}
These coefficients indicate how important each POD mode is in each time snapshot. The eigenvalues ($\lambda^i$) are proportional to the $\varepsilon$ of the fluctuating part of the flow, and by sorting them in a decreasing fashion, $\lambda^i>\lambda^{i+1}$ for $i=1,\ldots,N-1$, the most energetically important POD modes in the flow can be identified. The relative $\varepsilon$ associated to each POD mode can be calculated as
\begin{equation}
\varepsilon_i=\frac{\lambda^i}{\sum_{n=1}^N \lambda^n}
\label{E:RelEner}
\end{equation}
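The whole snapshot-POD procedure above amounts to a few lines of linear algebra. The Python sketch below follows the equations of this appendix on synthetic data; the array sizes and the random ensemble are assumptions.
\begin{verbatim}
# Snapshot POD of a fluctuating DPIV ensemble V (columns = snapshots).
import numpy as np

rng = np.random.default_rng(3)
V = rng.normal(size=(5000, 200))          # (2 x n_points) x N, synthetic
V = V - V.mean(axis=1, keepdims=True)     # fluctuating part of the flow

C = V.T @ V                               # N x N matrix, C = V^T V
lam, H = np.linalg.eigh(C)                # eigenvalues in ascending order
lam, H = lam[::-1], H[:, ::-1]            # sort so lam[0] is most energetic

Phi = V @ H                               # unnormalized POD modes
Phi = Phi / np.linalg.norm(Phi, axis=0)   # 2-norm normalization of phi^i
a = Phi.T @ V                             # modal coefficients a^n
energy = lam / lam.sum()                  # relative energy per mode
\end{verbatim}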
\section*{References}
\bibliographystyle{unsrt}
|
1,116,691,500,338 | arxiv |
\section{Put together: the algorithm}\label{sec:Approach}
In Algorithm~\ref{alg:Grouping}, we present our algorithm for grouping fuzzed crashes. The algorithm takes as input $C_I$, a collection of inputs that can lead to the crashes in a fuzzer, and generates the fault groups, each of which represents a bug and has the corresponding fault signature(s). The crashes can be generated from different runs of the same fuzzer or even from different fuzzers.
\begin{algorithm}
\caption{Grouping fuzzed crashed using fault signatures}\label{alg:Grouping}
\begin{algorithmic}[1]
\STATE{\textbf{INPUT}: $C_I$ (Fuzzed crashes)}
\STATE{\textbf{OUTPUT}: $F_g$ (Fault groups), $F_s$ (Fault signatures)}
\STATE{Initialize $F_s$} \COMMENT{Fault signatures}\label{AlgLine:initFS}
\STATE{Initialize $F_g$} \COMMENT{Fault groups}\label{AlgLine:initFG}
\FORALL{$c \in C_I$}\label{AlgLine:forFS}
\IF{$\exists s \in F_s \text{ and } s \rightarrow c$}\label{AlgLine:ifExistFS} \STATE{Add $c$ to $s.Crashes$}\label{AlgLine:updateS}
\ELSE{}\label{AlgLine:elseExistFS}
\STATE{$t \leftarrow$ Dynamic Trace ($c$)}\label{AlgLine:dynTrace}
\STATE{$s_{new} \leftarrow$ Generate Signature ($t$)}\label{AlgLine:genSig}
\STATE{Add $s_{new}$ to $F_s$}\label{AlgLine:addNewtoFS}
\ENDIF{}\label{AlgLine:endExistFS}
\ENDFOR{}\label{AlgLine:endforFS}
\STATE{$worklist \leftarrow F_s$}\label{AlgLine:worklistInit}
\WHILE[Merge Fault Signatures]{$worklist \neq \emptyset$}\label{AlgLine:whileWork}
\STATE{Remove $s_c$ from $worklist$}\label{AlgLine:getfromWL}
\STATE{Initialize $G_{sc}$} \COMMENT{Fault group for $s_c$}\label{AlgLine:initGsc}
\FORALL{$s_{wl} \in worklist$}\label{AlgLine:forSwlInWL}
\STATE{$score \leftarrow$ Compute Similarity ($s_c,s_{wl}$)}\label{AlgLine:compSim}
\IF{$score \geq$ threshold}\label{AlgLine:checkThres}
\STATE{Add $s_{wl}$ to $G_{sc}$}\label{AlgLine:addToGsc}
\ENDIF{}\label{AlgLine:endCheckTres}
\ENDFOR{}\label{AlgLine:endForSWL}
\ENDWHILE{}\label{AlgLine:endWhileWork}
\RETURN{$F_g$}\label{AlgLine:retuFg}
\end{algorithmic}
\end{algorithm}
Lines~3--13 implement the components \textit{Generate fault signatures} and \textit{Classify fault signatures} specified in Figure~\ref{fig:Overview}. Lines 14--25 implement \textit{Merge fault signatures}. Specifically, at lines~\ref{AlgLine:initFS} and~\ref{AlgLine:initFG}, we initialize fault signatures ($F_s$) and fault groups ($F_g$), respectively. The initialization can either set them as empty sets or use existing fault signatures and fault groups from previous fuzzing campaigns or a different fuzzer. Lines~\ref{AlgLine:forFS}--\ref{AlgLine:endforFS} loop through all the fuzzed crashes ($C_I$) given as the inputs to create and test with fault signatures. At line~\ref{AlgLine:ifExistFS}, the current fuzzed crash ($c$) is checked against all the existing fault signatures to see if there exists a fault signature ($s$) that crashes on the input $c$. In case such an existing fault signature is found, at line~\ref{AlgLine:updateS}, we add the crash into the group represented by the fault signature. If we are unable to find such a fault signature, we create a new one at lines~\ref{AlgLine:elseExistFS}--\ref{AlgLine:endExistFS}. To create a fault signature, we run the program with the crashing input to collect its dynamic trace ($t$) at line~\ref{AlgLine:dynTrace}. This trace is used to create a new fault signature at line~\ref{AlgLine:genSig}, by removing statements from the trace as long as the failure can still be reproduced. The newly created signature ($s_{new}$) is added to the other fault signatures at line~\ref{AlgLine:addNewtoFS}.
Once we have generated the fault signatures that can classify all the fuzzed crashes, we further merge the fault signatures into fault groups at lines~\ref{AlgLine:whileWork}--\ref{AlgLine:endWhileWork}. For this, we first create a work list ($worklist$) consisting of all the fault signatures at line~\ref{AlgLine:worklistInit}. We initialize a fault group ($G_{sc}$) for $s_c$ at line~\ref{AlgLine:initGsc}. This group can either be empty or be an existing fault group given at line~\ref{AlgLine:initFG}. We then traverse the work list at lines~\ref{AlgLine:forSwlInWL}--\ref{AlgLine:endForSWL}. For each fault signature ($s_{wl}$) in the work list, we compute a similarity score ($score$) between $s_c$ and $s_{wl}$ at line~\ref{AlgLine:compSim}. The $score$ is the average of (1) the normalized Levenshtein edit distance and (2) the normalized percentage of matching functions in the crashing call stacks of the two fault signatures. If the similarity score is above a set threshold, we add $s_{wl}$ to the fault group ($G_{sc}$) at line~\ref{AlgLine:addToGsc}. After all the signatures in the work list have been compared, $G_{sc}$ is added to $F_g$ at line~\ref{AlgLine:addGscToFG}.
Once the $worklist$ is empty, we have finished grouping all the fault signatures into fault groups. We return this set of fault groups, which represents the ``unique'' crashes among the fuzzed crashes, as the output of our algorithm at line~\ref{AlgLine:retuFg}.
\section{Conclusions and Future Work}\label{sec:Conclusion}
This paper presents a heuristics-based approach for deduplicating fuzzed crashes. As opposed to approaches based on call stacks, code coverage, and failure symptoms, our approach uses \textit{fault signatures} to group fuzzed crashes. A fault signature captures the statements necessary to reproduce a bug. Crashes grouped based on a fault signature thus likely share the root cause and fix. We developed an algorithm and a tool that consist of three components, \textit{generating fault signatures}, \textit{classifying with fault signatures} and \textit{merging fault signatures}. We evaluated our approach on 3020 fuzzed crashes against the ground truth we set up from 15 real-world bugs and their patches from 4 different open-source projects. Our results show that our approach correctly grouped 99.1\% of the 3020 fuzzed crashes and generated 17 groups for the 15 bugs. Our approach significantly outperformed the deduplication methods offered by 3 SOTA fuzzers, namely AFL, BFF and Honggfuzz, which reported 40--1276 groups. Considering that diagnosing a crash can be challenging and time-consuming, we believe our tool can significantly improve debugging productivity for fuzzing. In the future, we will explore further uses of fault signatures for fault localization and automated patch generation/verification. We will also experiment with our approach for grouping fuzzed crashes from different program versions and from different fuzzers.
\section{Evaluation}\label{sec:Eval}
Our evaluation aims to answer two research questions:
\begin{itemize}
\item \textbf{RQ1:} Can we correctly group crashes generated by the fuzzers?
\item \textbf{RQ2:} How effective is our technique compared to the SOTA methods?
\end{itemize}
\subsection{Experimental Setup}\label{subsec:ExpSetup}
\subsubsection{Implementation}
We implemented \helium{} for C programs using Clang~\cite{Lattner_Adve_2004}, srcML~\cite{Collard_Decker_Maletic_2013}, SQLite~\cite{sqlite3}, PIN~\cite{Luk_Cohn_Muth_Patil_Klauser_Lowney_Wallace_Reddi_Hazelwood_2005}, C-Reduce~\cite{Regehr_Chen_Cuoq_Eide_Ellison_Yang_2012} and Rust~\cite{Matsakis_Klock_2014}. Specifically, we used Clang, srcML and SQLite to create a database containing the function names and their line numbers at a file-level granularity for each benchmark. We then used this database with PIN to collect statement-level dynamic traces for crash-inducing inputs, from which we generated executable programs. We used C-Reduce to minimize the executable programs into fault signatures. We used Rust to implement the similarity comparison between traces and between fault signatures. We used a cost of 1 for all the edits when computing the Levenshtein edit distance. The threshold for grouping two \hSig{s} (line~\ref{AlgLine:checkThres} in Algorithm~\ref{alg:Grouping}) was set to 0.7.
\subsubsection{Subject Selection} To answer the research questions and demonstrate that our techniques are applicable in practice, we aim to use benchmarks that (1) are real-world open-source C programs, (2) preferably contain multiple real-world bugs in each program, so that we can evaluate whether our approach can separate the crashes from different bugs, (3) have known bugs and patches, so we have a ground truth to compare against, (4) have bugs that can be triggered by the fuzzers, so we can generate the crash corpus, and (5) can be handled by our baseline methods, so we can compare with them.
We searched for readily available benchmarks based on the above 5 criteria in the fuzzing literature~\cite{Pham_Khurana_Roy_Roychoudhury_2017,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Bohme_Pham_Nguyen_Roychoudhury_2017,Klees_Ruef_Cooper_Wei_Hicks_2018,Boehme_Cadar_Roychoudhury_2021,Liang_Pei_Jia_Shen_Zhang_2018,van_Tonder_Kotheimer_Le_Goues_2018}. We also went through the list of programs at AFL's website~\cite{afl-trophy}. As a result, we collected all the C projects (3 out of 6 total projects) provided by~\cite{van_Tonder_Kotheimer_Le_Goues_2018}, namely \texttt{w3m}, \texttt{sqlite} and \texttt{libmad}, and \texttt{libarchive} from AFL's website. We were not able to use the other 3 benchmarks from~\cite{van_Tonder_Kotheimer_Le_Goues_2018}, namely \texttt{PHP}, \texttt{R} and \texttt{Conntrackd}, as they were either implemented using multiple languages (PHP, R) or had the bug/patch in a non-C file. Similarly, for C-only benchmarks with multiple bugs on AFL's website, such as \texttt{audiofile} and \texttt{libxml}, we were either not able to get the crashing input and the minimal patch, or they had the bug/patch in non-C files.
Through the above process, we collected a total of 15 known bugs from 4 large real-world C projects. Specifically, we used 4 bugs from \texttt{w3m}, a text-based web browser, 8 bugs from \texttt{sqlite}, a database software, 1 bug from \texttt{libmad}, an MPEG audio decoding library, and 2 bugs from \texttt{libarchive}, a multi-format archive and compression library. Four \texttt{sqlite} bugs from~\cite{van_Tonder_Kotheimer_Le_Goues_2018} were not included due to problems reproducing them with PIN\@. The projects, their sizes\footnote{LOC calculated using \texttt{tokei}, https://github.com/XAMPPRocky/tokei}, and the bugs are listed in the first three columns of Table~\ref{tab:RQ1}.
\subsubsection{Fuzzer selection}
To demonstrate the effectiveness of our techniques and compare with meaningful baselines, we looked for fuzzers that (1) are open source, (2) are well documented, (3) have been widely used in research or industrial settings, (4) apply different methodologies for deduplicating fuzzed crashes (so we can compare with different approaches of deduplication used in practice), and (5) work with our benchmarks. For fuzzers with similar deduplication methodologies, we picked the one that reported more bugs, was cited more often, and was easier to run on our benchmarks. In the end, we used AFL~\cite{zalewski2017american} to generate crashes, and used the deduplication methods implemented in AFL, CERT-BFF~\cite{certBFF} and Honggfuzz~\cite{swiecki2017honggfuzz} as our baselines. Specifically, AFL uses branch (edge) coverage and a coarse-grained branch-taken hit counter to determine unique fuzzed crashes. Only fuzzed crashes associated with execution paths that involve new edges or do not visit the common edges are kept after deduplication. CERT-BFF uses hashes generated from the last $N$ calls (frames) in the call stack to determine uniqueness. Any fuzzed crashes sharing the same call stack hash are discarded during the deduplication process. Honggfuzz, on the other hand, uses the information at the crash location (fault address, last known PC instruction and last 7 frames in the call stack) to deduplicate the fuzzed crashes.
\begin{table*}[ht]
\begin{tabular}{@{}lccccccccc@{}}
\cmidrule(r){1-9}
\textbf{Benchmark} &
\multicolumn{1}{l}{\textbf{Size (KLOC)}} &
\multicolumn{1}{l}{\textbf{Bug ID}} &
\multicolumn{1}{l}{\textbf{Crashes}} &
\multicolumn{1}{l}{\textbf{Fault Sig}} &
\multicolumn{1}{l}{\textbf{Group}} &
\multicolumn{1}{l}{\textbf{Correct}} &
\multicolumn{1}{l}{\textbf{Incorrect}} &
\multicolumn{1}{l}{\textbf{Missed}} \\ \cmidrule(r){1-9}
\textbf{w3m} & \textbf{80.4} & 1 & 250 & 1 & 1 & 250 & 0 & 0 \\
v0.5.3 & & 2 & 352 & 4 & 1 & 351 & 0 & 1 \\
& & 3 & 250 & 4 & 1 & 250 & 0 & 0 \\
& & 4 & 139 & 2 & 1 & 115 & 0 & 24 \\ \cmidrule(r){1-9}
Sub Total & & \textbf{4} & \textbf{991} & \textbf{11} & \textbf{4} & \textbf{966} & \textbf{0} & \textbf{25} \\ \cmidrule(r){1-9}
\textbf{SQLite} & \textbf{313.3} & 5 & 285 & 5 & 1 & 285 & 0 & 0 \\
v3.8.5 & & 6 & 191 & 14 & 1 & 191 & 0 & 0 \\
& & 7 & 240 & 5 & 1 & 240 & 0 & 0 \\
& & 8 & 113 & 2 & 1 & 113 & 0 & 0 \\
& & 9 & 226 & 2 & 1 & 226 & 0 & 0 \\
& & 10 & 237 & 5 & {\color[HTML]{CC0000} 2} & 237 & 0 & 0 \\
& & 11 & 250 & 4 & 1 & 250 & 0 & 0 \\
& & 12 & 236 & 3 & {\color[HTML]{CC0000} 2} & 236 & 0 & 0 \\ \cmidrule(r){1-9}
Sub Total & & \textbf{8} & \textbf{1778} & \textbf{40} & \textbf{10} & \textbf{1778} & \textbf{0} & \textbf{0} \\ \cmidrule(r){1-9}
\textbf{libmad} v0.15.1b & \textbf{18.0} & 13 & 99 & 2 & 1 & 99 & 0 & 0 \\ \cmidrule(r){1-9}
Sub Total & & \textbf{1} & \textbf{99} & \textbf{2} & \textbf{1} & \textbf{99} & \textbf{0} & \textbf{0} \\ \cmidrule(r){1-9}
\textbf{libarchive} & \textbf{207.2} & 14 & 67 & 1 & 1 & 67 & 0 & 0 \\
v3.1.0 & & 15 & 85 & 1 & 1 & 85 & 0 & 0 \\ \cmidrule(r){1-9}
Sub Total & & \textbf{2} & \textbf{152} & \textbf{2} & \textbf{2} & \textbf{152} & \textbf{0} & \textbf{0} \\ \cmidrule(r){1-9}
\textbf{Total} & \textbf{618.9} & \textbf{15} & \textbf{3020} & \textbf{55} & \textbf{17} & \textbf{2995} & \textbf{0} & \textbf{25} \\ \cmidrule(r){1-9}
\end{tabular}
\caption{Result of RQ1: Evaluating Grouping Correctness}\label{tab:RQ1}
\end{table*}
\subsubsection{Experimental design for RQ1}\label{subsubsec:GroupCorrect}
In RQ1, our goal is to evaluate the correctness of the grouping made by \helium{}. Specifically, we aim to discover (1) if we can correctly group all the fuzzed crashes from the same bug, (2) if we would incorrectly mix crashes from different bugs and put them in one group, and (3) if we would fail to group any fuzzed crashes.
To establish the ground truth, we propose to generate crashes for multiple known bugs and see if we can group the crashes caused by the same bug and separate the crashes generated from different bugs. The challenge we face is that, given an arbitrary seed, the fuzzers may not trigger the known bugs, or may not trigger them within a reasonable time window. To set up this experiment, we used a special configuration of the fuzzers together with the bug patches from the developers. Specifically, our approach is to run AFL in the ``Crash Exploration Mode''~\cite{afl-crash-mode}. It takes a known crash-inducing input and uses the traditional feedback and genetic algorithms to quickly generate a corpus of crashes that explore different paths that can lead to a similar crash state. We found that this approach is likely to generate crashes that trigger the given bug. In our experiments, we found only a total of 28 crashes among thousands that belong to some unknown bugs.
Our setup is as follows. First, we generate a crash corpus for each individual known bug using the above approach, running AFL for 2 hours per bug. Within 2 hours, some bugs generated more than 2K fuzzed crashes, while others generated fewer than 100. To balance the crashes from different bugs, we used at most 250 crashes from each bug (randomly selecting 250 if there were more) and mixed them into a mixed crash corpus. Since this approach is heuristic, we also used the developers' patches to further validate whether the generated crashes are indeed from a known bug and which known bug each belongs to. Using developers' patches to determine the ground truth~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018,van_Tonder_Kotheimer_Le_Goues_2018} is based on the assumption that if an input $I$ crashes a program $P$, but no longer crashes it after applying patch $p$, we can associate $I$ with the bug for $p$~\cite{Chen_Groce_Zhang_Wong_Fern_Eide_Regehr_2013}. Further, if two inputs $I_1$ and $I_2$ both crash $P$, but the crashes disappear with patch $p$, then both $I_1$ and $I_2$ are caused by the same bug (given that the patches are ``minimal''~\cite{WhatIsBug_2015}). As a result of the validation, each crash is labeled with a bug ID, and the ones that do not match any existing bug are labeled as unknown. We then apply \helium{} to group these crashes for which we know the ground truth.
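To make the labeling rule concrete, the following C sketch assigns a crash-inducing input to the first known bug whose patched binary no longer crashes on it. This is only an illustration of the rule above: the binary paths are placeholders, and a crash is approximated as a signal-terminated child process.
\begin{lstlisting}[language=C, numbers=none, breaklines=true, basicstyle=\footnotesize\ttfamily, showstringspaces=false]
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

/* Returns 1 iff running "bin input" terminates with a crash. */
static int crashes(const char *bin, const char *input) {
    char cmd[512];
    snprintf(cmd, sizeof cmd, "%s %s", bin, input);
    int status = system(cmd);
    /* Signal-terminated, or reported by the shell as 128+signo. */
    return status != -1 &&
           (WIFSIGNALED(status) ||
            (WIFEXITED(status) && WEXITSTATUS(status) > 128));
}

/* Label a crash input with the first bug whose patched binary no
 * longer crashes on it; -1 means the input matches no known bug. */
int label(const char *input, const char *patched_bins[], int nbugs) {
    for (int b = 0; b < nbugs; b++)
        if (!crashes(patched_bins[b], input))
            return b;  /* bug IDs are array indices here, for brevity */
    return -1;
}
\end{lstlisting}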
The validation with developers' patches is done on the mixed crash corpus consisting of up to 250 randomly selected fuzzed crashes from each known bug. Due to the nature of fuzzing, it is possible for a fuzzing campaign to expose a different bug than the seed bug. For example, the fuzzing campaign for \textit{Bug 4} from \texttt{w3m} also generated crashes for \textit{Bug 2}, which were picked up during the random selection. After the validation, these fuzzed crashes are correctly labeled with the bug ID for \textit{Bug 2}. Hence, some bugs (\textit{Bug 2} and \textit{Bug 5} in Table~\ref{tab:RQ1} and Table~\ref{tab:RQ2}) have more than 250 fuzzed crashes (under \textit{Crashes}) associated with them.
To evaluate the correctness, we used as metrics the number of fuzzed crashes that were (1) correctly grouped into a fault group for the bug, (2) incorrectly grouped into a fault group for a different bug, or (3) not grouped into any fault group. We also recorded the number of fault signatures and fault groups created for each bug to measure the usefulness of the grouping.
\subsubsection{Experimental design for RQ2}\label{subsubsec:EvalFuzzer}
In RQ2, we compared \helium{} with three SOTA real-world fuzzers regarding their capabilities of deduplicating crashes. In the following, we present the setups of the fuzzers used in the comparison:
For AFL, we ran \textit{afl-cmin} on all the fuzzed crashes generated for each benchmark. It finds the smallest subset of fuzzed crashes that still exercises the same range of instrumented data points as the original fuzzed crash corpus. The remaining fuzzed crashes are reported as the deduplicated fuzzed crashes for \textit{AFL}.
We ran \textit{CERT-BFF} on all the fuzzed crashes for each benchmark, with \texttt{backtracelevels} set to 5 (\textit{BFF-5}). This gives us deduplicated fuzzed crashes based on the uniqueness of the last 5 frames (function calls) on the stack. Similarly, we also performed deduplication using the last frame (crashing function) of the call stack by setting \texttt{backtracelevels} to 1 (\textit{BFF-1}). We chose these two configurations because \textit{BFF-5} represents the default deduplication used by \textit{CERT-BFF}, and \textit{BFF-1} is used as a baseline in the related work~\cite{van_Tonder_Kotheimer_Le_Goues_2018}.
For \textit{Honggfuzz}, we ran the fuzzed crashes for each benchmark with the \texttt{instrument} option enabled. This gave us deduplicated fuzzed crashes determined using a combination of code coverage, call stack, and crash site information~\cite{honggfuzz-usage}. Then we used the \texttt{noinst} mode (\textit{Honggfuzz-S}) to obtain deduplicated fuzzed crashes determined using only call stack and crash site information. We chose these two configurations because \textit{Honggfuzz} represents the default deduplication of the fuzzer and \textit{Honggfuzz-S} is also used as a baseline in the related work~\cite{van_Tonder_Kotheimer_Le_Goues_2018}.
In the experiment, we first collect a set of crashes for a project, e.g., 991 for \texttt{w3m}, as shown in Table~\ref{tab:RQ2}. We then run a baseline tool, e.g., AFL, to deduplicate the crashes. The number of groups reported by the tool is listed in the baseline's column of the \textit{Sub Total} row, e.g., 490 for AFL in Table~\ref{tab:RQ2}. We then use the developers' patches to determine how many groups were reported for each bug, e.g., 109 for Bug 1 for AFL\@.
\subsubsection{Running the experiments} The initial crash corpus generation and the crash deduplication for the baseline fuzzers were run on a VM with 64-bit 32-core Intel Haswell processors. The \helium{} experiments were conducted on a VM with 64-bit 12-core Intel Haswell processors. Both VMs had 32 GB of memory and were running CentOS 8.
\section{Introduction}\label{sec:Intro}
In recent years, we have seen an increasing number of vulnerabilities in programs being exploited~\cite{ProjectZero-2022,ZeroDayInitiative-2022}. Google's \textit{Project Zero}, a team of security researchers that study zero-day vulnerabilities, recorded its highest-ever number of detected and disclosed vulnerabilities in 2021~\cite{ProjectZero-2022}. 67\% of the actively exploited zero-day vulnerabilities detected by \textit{Project Zero} in 2021 stemmed from memory-related bugs. Thankfully, modern state-of-the-art (SOTA) fuzzers are adept at finding these types of bugs~\cite{Finding-bugs-in-SQLite,Ding_Goues_2021}. Thus, companies such as Microsoft and Google invest heavily to develop and deploy effective fuzzers for daily use. Open-source platforms such as GitHub~\cite{ClusterFuzzLite} and GitLab~\cite{Coverage-guided-fuzz-gitlab} also provide friendly integration to run fuzzers. However, even with large-scale fuzzing services, like Google's OSS-Fuzz~\cite{Serebryany_2017} and Microsoft's OneFuzz~\cite{OneFuzz_2020}, using fuzzers to find and fix bugs still involves considerable manual effort~\cite{Chen_Groce_Zhang_Wong_Fern_Eide_Regehr_2013}. The fault diagnosis may hardly catch up to the speed at which new crashes are generated. As a result, critical and exploitable bugs can be left undiagnosed in the large number of crashes reported by the fuzzers.
In this paper, our goal is to develop approaches and tools that can help group fuzzed crashes and provide support for diagnosing them. Grouping fuzzed crashes is also called \textit{crash deduplication} or reporting \textit{unique crashes}~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018}. In the past, the common approach for crash deduplication has been to apply heuristic metrics~\cite{certBFF,swiecki2017honggfuzz,Cha_Woo_Brumley_2015,Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017,zalewski2017american,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Bohme_Pham_Nguyen_Roychoudhury_2017,van_Tonder_Kotheimer_Le_Goues_2018,Holmes_Groce_2018,Pham_Khurana_Roy_Roychoudhury_2017}, such as call stack hashing~\cite{certBFF,swiecki2017honggfuzz,Cha_Woo_Brumley_2015,Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017}, instrumented coverage information~\cite{zalewski2017american,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Bohme_Pham_Nguyen_Roychoudhury_2017} or dynamic symptoms~\cite{van_Tonder_Kotheimer_Le_Goues_2018,Holmes_Groce_2018,Pham_Khurana_Roy_Roychoudhury_2017} to compare the similarities of the crashes. For example, AFL~\cite{zalewski2017american} uses instrumented branch coverage, while CERT-BFF~\cite{certBFF} and Honggfuzz~\cite{swiecki2017honggfuzz} use call stacks. However, the coverage-based metrics tend to inflate the number of ``unique'' fuzzed crashes~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018,Hazimeh_Herrera_Payer_2020}: crashes that traverse different paths are typically separated, independent of whether they trigger the same bug. Meanwhile, call stack based metrics risk misclassification; crashes generated from the same bug are separated into different groups when their call stacks differ~\cite{Blazytko_AURORA_2020}, or different bugs are grouped together because they share the same call stack~\cite{Pham_Khurana_Roy_Roychoudhury_2017}. The dynamic symptom based approaches, like the ones based on symbolic analysis~\cite{Pham_Khurana_Roy_Roychoudhury_2017} and automatically generated patches~\cite{van_Tonder_Kotheimer_Le_Goues_2018,Holmes_Groce_2018}, are more precise, but they often have a limited scope, e.g., they are applicable only to certain types of crashes.
In this paper, we propose to use \textit{fault signatures} to group the fuzzed crashes. A fault signature is an executable program that consists of ``indispensable'' statements that can reproduce the bug. As opposed to call stacks, failure symptoms, and coverage based metrics, the fault signature captures the root cause of a failure. Crashes grouped based on the same fault signature thus likely share the root cause and fix, and should be diagnosed together. We say these crashes are induced by the {\it same bug}.
Our approach consists of three components, namely \textit{generating fault signatures}, \textit{classifying with fault signatures}, and \textit{merging fault signatures}. Given a collection of fuzzed crashes (where each crash is associated with an input), we first run a crash-inducing input to collect its dynamic trace. We then create an executable program from the trace and perform program reduction to generate an as-small-as-possible program, namely a \textit{fault signature}, that can reproduce the bug (meaning the removal of any further statement can result in the bug no longer triggering). Since the fault signature only contains a subset of statements from the complete crashing trace, two crashes that exercise different paths in the original program can be grouped under the same signature as long as they share the subset of root-cause statements. To classify the next fuzzed crash, we take its crash-inducing input, run it with the generated fault signatures, and check whether the failure is reproduced by any of them; if so, we classify the fuzzed crash into the group labeled with that fault signature. These two steps are implemented in the components \textit{generating fault signatures} and \textit{classifying with fault signatures} respectively.
After all the crashes have been bucketed into groups, each of which is labeled with a fault signature, we perform post-processing, namely {\it merging fault signatures}, to examine whether any two fault signatures can be further grouped. Two fault signatures may share a set of statements that are root causes, but differ in the paths that lead to the root cause from the entry of the program. Such fault signatures can be grouped. We apply a heuristic-based matching to group very similar fault signatures. Specifically, we measure how many common statements the two signatures share and whether the call stacks are similar when crashing the two fault signatures with their respective crash-inducing inputs.
We implemented our approach in a tool called \helium{} for C programs. We used 15 real-world bugs from 4 large open-source projects to evaluate it. Furthermore, we generated a total of 3020 fuzzed crashes and compared \helium{} with 3 SOTA fuzzers, namely \textit{AFL}, \textit{CERT-BFF} and \textit{Honggfuzz}, across a total of 5 settings. We used developers' patches to establish the ground truth, similar to the approaches in~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018,van_Tonder_Kotheimer_Le_Goues_2018}. Our results show that we correctly grouped 2995 (99.1\%) of the 3020 fuzzed crashes and didn't misclassify any crash into a wrong fault group. We grouped the fuzzed crashes from 15 different bugs into 17 (+2) groups, while the next best baseline reported 40 groups, and other baselines reported 100+ or even 1000+ groups. Considering that it is very time-consuming to diagnose a bug, our approach has great potential to improve the productivity of bug diagnosis and fixing associated with fuzzing tools.
In summary, we make the following contributions in the paper:
\begin{enumerate}
\item we proposed to use \textit{fault signatures} to group fuzzed crashes, and our intuition is that fault signatures capture the root causes and thus can more accurately classify the crashes;
\item we designed an algorithm, consisting of \textit{generating fault signatures}, \textit{classifying with fault signatures}, and \textit{merging fault signatures}, to automatically group fuzzed crashes, and
\item we implemented our tool \helium{} for C projects, and evaluated it with real-world bugs and large open source projects. Our results show that our approach can correctly group the crashes and significantly outperformed the SOTA widely deployed fuzzers.
\end{enumerate}
\section{Motivation}\label{sec:Motivation}
In this section, we provide a few simple examples to explain the challenges of grouping fuzzed crashes and why existing fuzzing deduplication methods~\cite{certBFF,swiecki2017honggfuzz,Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017,zalewski2017american,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Bohme_Pham_Nguyen_Roychoudhury_2017} are not sufficient.
In Figure~\ref{subfig:CovDup}, we show a code snippet, adapted from~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018}, that contains a null-pointer dereference at line~3. Here, the pointer \texttt{p} at line 2 is not initialized, and the dereference at the next line can lead to a crash. This bug can be triggered independent of the condition outcome at line 8, as both paths $\left<6,7,8,1,2,3\right>$ and $\left<6,7,9,1,2,3\right>$ call into the \texttt{bug} function. Due to the presence of two distinct paths, a coverage-based heuristic, e.g., the one used in AFL, will classify the fuzzed crashes for this bug into two separate groups. However, in this case, the branch at line~8 is not important for triggering the bug.
\begin{figure}[ht]
\begin{subfigure}[b]{0.9\columnwidth}
\centering
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\footnotesize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
}
\lstinputlisting{Example/SameBugDiffPath.txt}
\caption{A null pointer dereference with different paths}\label{subfig:CovDup}
\end{subfigure}
\begin{subfigure}[b]{0.9\columnwidth}
\centering
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\footnotesize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
}
\lstinputlisting{Example/SameBugSig.txt}
\caption{\helium{}: \hSig{} for the bug in Figure~\ref{subfig:CovDup}}\label{subfig:CovDupSig}
\end{subfigure}
\caption{Deduplication based on code coverage can fail}\label{fig:SameBug}
\Description{Examples to show deduplication based on code coverage can fail}
\end{figure}
In our approach, we take a crash-inducing input and collect its dynamic trace. We then reduce the trace such that it can still reproduce the bug. We call such a program, which contains only the statements that contribute to the failure, the {\it fault signature}. For Figure~\ref{subfig:CovDup}, we generate the fault signature shown in Figure~\ref{subfig:CovDupSig}. Here, lines 7--9 in the original program are replaced with a single call to the function {\tt bug}. To group the fuzzed crashes, we run the crash-inducing inputs with this fault signature. If the failure is reproduced with the fault signature, we group the fuzzed crash under it; if not, we generate a new fault signature that can represent the crash. In this example, we can group any crashes triggered along the branch at line 8 (inputs that start with {\tt a}) or along the one at line 9 (inputs that do not start with {\tt a}) under the same fault signature.
\begin{figure}
\begin{subfigure}{.9\columnwidth}
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\scriptsize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
}
\lstinputlisting{Example/CallStackDiff.txt}
\caption{\texttt{null pointer dereference} with different call stacks}\label{subfig:CallDup}
\end{subfigure}
\begin{subfigure}[b]{.7\columnwidth}
\centering
\lstset{
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\footnotesize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
morekeywords={Bug1,Bug2},
escapechar=\!,
}
\lstinputlisting{Example/CallInfo.txt}
\caption{call stacks for the bugs in Figure~\ref{subfig:CallDup}}\label{subfig:CallInfo}
\end{subfigure}
\begin{subfigure}[b]{.9\columnwidth}
\centering
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\scriptsize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
morekeywords={Sig1,Sig2,Sig3},
escapechar=\!,
}
\lstinputlisting{Example/CallSig.txt}
\caption{\helium{}: \hSig{s} for the bugs in Figure~\ref{subfig:CallDup}}\label{subfig:CallDupSig}
\end{subfigure}
\caption{Deduplication based on call stacks can fail}\label{fig:DiffBug}
\Description{Example to show deduplication based on call stacks can fail}
\end{figure}
In the second example, shown in Figure~\ref{subfig:CallDup}, we provide two different null pointer dereference bugs.
\texttt{Bug1}, marked at line~5, will trigger an incorrect pointer dereference at line~3 after the pointer is freed at line~5. Meanwhile, \texttt{Bug2}, marked at line~17, will trigger a null pointer dereference at line~3 along the path $\left<17,18,8,4,6,3\right>$: the uninitialized pointer at line~17 will be dereferenced.
The crashes for {\tt Bug1} can traverse different paths and lead to different call stacks at the crash site, as shown in Figure~\ref{subfig:CallInfo}. The first two lines indicate that \texttt{Bug1} can be triggered by calling \texttt{foo} (at line~14), \texttt{bug} (at line~8), and \texttt{trigger} (at line~6); or by calling \texttt{bar} (at line~15), \texttt{bug} (at line~9), and \texttt{trigger} (at line~6). Since the two crashes have two different call stacks, the call stack based approach can fail to group them and will consider them as different bugs.
Similarly, the crash for {\tt Bug2} can be triggered by calling \texttt{foo} (at line~18), \texttt{bug} (at line~8), and \texttt{trigger} (at line~6). As shown in Figure~\ref{subfig:CallInfo}, the call stacks for Bug1-Crash1 and Bug2-Crash1 are the same. Therefore, the call stack based deduplication methods can mistakenly group the two different bugs (which have different causes and require different fixes) into one group.
Using our approach for this example, we first generate fault signatures for the crashes, one at a time, as shown in Figure~\ref{subfig:CallDupSig}: {\tt Sig1} at lines~2--11 for {\tt Bug1-Crash1}, {\tt Sig2} at lines~14--24 for {\tt Bug1-Crash2}, and {\tt Sig3} at lines~27--38 for {\tt Bug2-Crash1}. We can see that in the fault signatures, the statements that did not contribute to the bug's manifestation have been removed.
Each fault signature can represent one path or a set of paths that lead to the crashes. Since two sets of paths may contain the same root cause, we further merge fault signatures into the same fault group. In this example, we observe that {\tt Sig1} and {\tt Sig2} differ only at the branch {\tt b==`a'} (highlighted in yellow; also see line~11 and lines 23--24), and we can merge them into the same fault group. The merged fault signatures can then classify all fuzzed crashes that manifest \texttt{Bug1} into a single group. Similarly, our approach will determine that {\tt Sig3} has no close relation to {\tt Sig1} and {\tt Sig2} in terms of branches and shared statements. We thus classify it as a separate fault group.
The above examples show that our fault signature based grouping is potentially more accurate than the coverage-based and call stack based approaches popularly used in current fuzzers. In Sections~\ref{sec:Overview} and~\ref{sec:Approach}, we provide the details on how we generate fault signatures, how we group the fuzzed crashes based on the fault signatures, and how we further merge fault signatures into fault groups.
\section{Our approach}\label{sec:Overview}
\begin{figure*}
\centering
\def0.9\textwidth{0.9\textwidth}
\import{Figures/}{Overview.pdf_tex}
\caption{Overview of \helium{}}\label{fig:Overview}
\Description{Overview of \helium{} as a flow chart}
\end{figure*}
In this section, we first give an overview of \helium{}. Next, we provide a detailed explanation of what a \textit{fault signature} is. We then present the three main components of \helium{}.
\subsection{An Overview}
Figure~\ref{fig:Overview} presents an overview of our workflow. \helium{} takes as input a collection of crashes from the fuzzers, where each crash is associated with an input. Each input is run against the set of fault signatures created so far. If the failure is triggered with some fault signature, the fuzzed crash is put in a bucket labeled with the corresponding fault signature. This step is implemented in the component named \textit{Classify with Fault Signatures}.
If the fuzzed crash input does not trigger a failure matching any existing fault signature, we run the input on the original program using PIN~\cite{Luk_Cohn_Muth_Patil_Klauser_Lowney_Wallace_Reddi_Hazelwood_2005} and collect its dynamic trace. We then patch the dynamic trace to generate an executable program. In the next step, we use a program reduction tool, C-Reduce, to reduce the executable program into the \textit{fault signature}. C-Reduce ensures that the fault signature can still reproduce the failure at the same location with the same failure symptoms, while statements not relevant to the failure reproduction are removed. This step is implemented in the component named \textit{Generate Fault Signatures}.
Once all the crashes and their inputs are labeled with the fault signatures, we perform the step of \textit{Merge Fault Signatures} and apply heuristics and similarity metrics to group fault signatures that likely originated from the same root cause. The resulting fault groups, representing grouped fuzzed crashes, and their corresponding fault signatures will be presented as the output of \helium{}.
\subsection{Fault Signatures}\label{subsec:FaultSig}
\begin{figure}
\centering
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=10pt,
language=C,
showstringspaces=false,
basicstyle=\footnotesize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
escapechar=\!,
}
\lstinputlisting{Example/w3mSig1.txt}
\caption{Fault Signature for a \texttt{Null Pointer Dereference} in \texttt{w3m}}\label{fig:w3mSig1}
\Description{Fault Signature for a \texttt{Null Pointer Dereference} in \texttt{w3m}}
\end{figure}
A fault signature can be viewed as a minimized version of the original program consisting of only the statements necessary for triggering a particular bug. Ideally, a fault signature can reproduce the same bug for all the inputs that can trigger the bug in the original program. The statements in a fault signature include two parts: (1) those that trigger the bug and (2) those that set up the necessary conditions, e.g., parsing the input, for reaching the buggy location.
As an example, consider the \texttt{null pointer dereference} bug\footnote{https://github.com/tats/w3m/issues/18} in \texttt{w3m} v0.5.3. Even though the entire program consists of 80K lines of code, we generated a fault signature consisting of fewer than 100 lines. It can trigger the bug using the crash-inducing inputs from the original program. A simplified version of the fault signature is shown in Figure~\ref{fig:w3mSig1}. The \texttt{null pointer dereference} occurs at line~6, while trying to access the variable \texttt{tbl\_mode}. This variable was previously initialized as \texttt{NULL} at line 3 and never updated afterwards. Hence, lines 2--8 in Figure~\ref{fig:w3mSig1} can be considered as the statements that trigger the bug, and lines 9--26 are necessary to set up the conditions to reach the bug.
While the lines actually triggering the bug (lines 2--8 in Figure~\ref{fig:w3mSig1}) mostly remain the same between different crash-inducing inputs, the statements leading to it (lines 9--26) can differ. In other words, a bug can be triggered when executing different paths (e.g., in the region of lines 9--26), but these paths can share a root cause (lines 2--8). As a concrete example, consider the two execution paths (\texttt{Bug1-Crash1} and \texttt{Bug1-Crash2} from Figure~\ref{subfig:CallInfo}) for the bug shown in Figure~\ref{subfig:CallDup}. The statements triggering this bug are in lines 3--7, while lines 8--15 provide the necessary conditions to reach the bug location.
It is difficult to enumerate all the different ways to reach a program point. Hence, producing an ideal fault signature that can reproduce the bug for all crash-inducing inputs is hard. However, since we have access to some inputs responsible for triggering the crashes, it is easy to create a fault signature based on one input with respect to its path, and then determine whether other inputs can crash along the same path. Therefore, we chose to generate such fault signatures in \textit{Generate Fault Signatures} to group the fuzzed crashes.
\subsection{Generate Fault Signatures}\label{subsec:GenFaultSig}
In order to generate fault signatures, we need to identify the statements that are necessary for triggering a bug. Any statement that is not executed during a bug's manifestation is not necessary. So as a first step, we run the crash-inducing input on the original program to collect its dynamic trace. We used PIN, a dynamic binary instrumentation framework, to achieve this. The dynamic trace information collected using PIN is more resilient to call stack corruption than traditional methods~\cite{cert-bff-crash-recyler}.
The statements collected via the dynamic trace typically don't include lines representing static information such as variable definitions, structure initializations and switch-case headers. In other words, it is not possible to generate an executable fault signature directly from just the dynamic trace. Therefore, as the second step, we extracted all the functions that had a statement present in the dynamic trace. This takes care of missing local variable definitions and switch-case headers. To get the global variable definitions and structure initializations, we extracted all global variables, macros, header file includes, and structure initializations using tools like srcML~\cite{Collard_Decker_Maletic_2013}. We built an executable program from this extracted information using the compilation and linker flags obtained via tools like Bear~\cite{bear-url}. This program, even though not minimal, is a reduced version of the original program capable of triggering the original bug.
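As an illustration of how the database supports this step, the following C sketch looks up the function enclosing a traced \texttt{file:line} location using the SQLite C API. The table and column names are assumptions for illustration, not the exact schema of our tool.
\begin{lstlisting}[language=C, numbers=none, breaklines=true, basicstyle=\footnotesize\ttfamily, showstringspaces=false]
#include <sqlite3.h>
#include <stdio.h>

/* Find the function that encloses file:line in the code database;
 * returns 0 on success and copies the function name into out.
 * Schema (table "functions", columns name/file/start_line/end_line)
 * is illustrative. */
int enclosing_function(sqlite3 *db, const char *file, int line,
                       char *out, size_t outlen) {
    static const char *q =
        "SELECT name FROM functions "
        "WHERE file = ?1 AND start_line <= ?2 AND end_line >= ?2;";
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, q, -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, file, -1, SQLITE_STATIC);
    sqlite3_bind_int(stmt, 2, line);
    int found = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        snprintf(out, outlen, "%s",
                 (const char *)sqlite3_column_text(stmt, 0));
        found = 0;
    }
    sqlite3_finalize(stmt);
    return found;
}
\end{lstlisting}
All functions whose names are returned by such lookups are then extracted in full to form the executable program.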
\begin{figure}
\centering
\lstset{
numbers=left,
numbersep=5pt,
belowcaptionskip=1\baselineskip,
breaklines=true,
xleftmargin=\parindent,
language=C,
showstringspaces=false,
basicstyle=\footnotesize\ttfamily,
keywordstyle={\bfseries\color{green!40!black}},
commentstyle={\itshape\color{purple!40!black}},
identifierstyle=\color{blue},
stringstyle=\color{orange},
escapechar=\!,
}
\lstinputlisting{Example/w3mOrig1.txt}
\caption{Example of the statements removed for creating the Fault Signature in Figure~\ref{fig:w3mSig1}}\label{fig:credRem}
\Description{Example of the statements removed for creating the Fault Signature in Figure~\ref{fig:w3mSig1}}
\end{figure}
As the final step, we used C-Reduce~\cite{Regehr_Chen_Cuoq_Eide_Ellison_Yang_2012} to remove the statements not required for triggering the bug and thus generate the fault signature. C-Reduce, by default, uses a set of 135 passes to minimize the program, which also includes transformations such as renaming variables and function names and merging control branches. We developed a custom configuration of C-Reduce by removing 45 passes to suit our needs. Figure~\ref{fig:credRem} shows an example of the reduction (highlighted in red) when using our configuration to produce the fault signature shown in Figure~\ref{fig:w3mSig1}. Only the statements involving the variables \texttt{obuf} and \texttt{tbl\_mode} are necessary for reproducing the \texttt{null pointer dereference} bug at line 17. Hence, all the statements not related to the two variables up to the fault's manifestation (at line 17) are removed (lines~2, 3, 5--7, 9--12, 14, 15) by C-Reduce. Since the statements after triggering the bug are also unnecessary, they are removed as well (lines~18--21). We also remove entire functions that were only used in the removed statements (lines~23 and 24), like \texttt{HTMLlineproc1} (used at line~10) and \texttt{StrNew} (used at line~12).
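C-Reduce decides which candidate reductions to keep via a user-supplied interestingness test, which must succeed exactly when the candidate still reproduces the failure. A minimal sketch of such a test is shown below; the file names and compiler invocation are placeholders, and our actual test additionally matches the crash location and symptom.
\begin{lstlisting}[language=C, numbers=none, breaklines=true, basicstyle=\footnotesize\ttfamily, showstringspaces=false]
#include <stdlib.h>
#include <sys/wait.h>

int main(void) {
    /* 1. The candidate must still compile. */
    if (system("cc -o candidate candidate.c") != 0)
        return 1;
    /* 2. The candidate must still crash on the crash-inducing
     *    input: the child is signal-terminated, or the shell
     *    reports the signal as an exit code of 128+signo. */
    int status = system("./candidate crash_input");
    if (status == -1)
        return 1;
    int crashed = WIFSIGNALED(status) ||
                  (WIFEXITED(status) && WEXITSTATUS(status) > 128);
    return crashed ? 0 : 1;  /* 0 = still interesting */
}
\end{lstlisting}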
\subsection{Classify with Fault Signatures}\label{subsec:classifyWith}
Although the fault signature generation starts with one crash-inducing input, after trace minimization and the removal of unnecessary statements, the fault signature can reproduce the crash for a set of failure-inducing inputs, namely those that exercise the same path or paths that only differ in the removed statements. It thus can be used to group a set of crash-inducing inputs.
We run the crash-inducing inputs with the existing fault signatures. If the same failure occurs (meaning it triggers the same bug type at the same source code location with the same call stack as the original input used to produce the fault signature), we classify the fuzzed crash into the group labeled with this fault signature. Otherwise, we take the input that cannot yet be categorized and generate another fault signature.
When running a fuzzed crash with a fault signature, we may encounter behaviors that do not match the original crash, e.g., entering an infinite loop or hanging indefinitely. Therefore, we set a configurable timeout (1 minute in our evaluation) when running fault signatures to determine whether a valid failure is triggered. We also encountered some flakiness, caused by nondeterminism in the software execution, when running fault signatures. Hence, we re-ran any fuzzed crash that failed to trigger a bug a fixed number of times (10 times in our evaluation).
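The classification step can be sketched in C as follows: run a fault signature executable on a crash-inducing input, treat abnormal termination before the timeout as a reproduction, and retry a fixed number of times to tolerate flaky executions. The paths are placeholders, and the real harness also compares the bug type, crash location, and call stack with the original failure.
\begin{lstlisting}[language=C, numbers=none, breaklines=true, basicstyle=\footnotesize\ttfamily, showstringspaces=false]
#include <signal.h>
#include <stdbool.h>
#include <sys/wait.h>
#include <unistd.h>

/* True iff sig_bin crashes on input before the timeout elapses. */
static bool reproduces(const char *sig_bin, const char *input,
                       int timeout_sec) {
    pid_t pid = fork();
    if (pid < 0)
        return false;
    if (pid == 0) {                /* child: run the fault signature */
        execl(sig_bin, sig_bin, input, (char *)NULL);
        _exit(127);                /* exec failed */
    }
    for (int polls = 0; polls < timeout_sec * 10; polls++) {
        int status;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return WIFSIGNALED(status);   /* crash = killed by signal */
        usleep(100 * 1000);               /* poll every 100 ms */
    }
    kill(pid, SIGKILL);            /* hang: not a valid reproduction */
    waitpid(pid, NULL, 0);
    return false;
}

/* Evaluation settings: 1-minute timeout, up to 10 attempts. */
bool classify_with(const char *sig_bin, const char *input) {
    for (int attempt = 0; attempt < 10; attempt++)
        if (reproduces(sig_bin, input, 60))
            return true;
    return false;
}
\end{lstlisting}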
\subsection{Merge Fault Signatures}\label{subsection:GenFaultGroup}
Our fault signature is created from the dynamic trace generated using a single crash-inducing input. As discussed in Section~\ref{subsec:GenFaultSig}, these fault signatures don't necessarily cover all the statements that can lead to the program state from which the bug can be triggered. Thus, during the generation and classification of fault signatures, it is possible to create multiple fault signatures for the same bug, each of which represents one scenario of reaching the bug location. For example, see \texttt{Sig1} and \texttt{Sig2} for \texttt{Bug1} in Figure~\ref{fig:DiffBug}. In order to further group all the fuzzed crashes from \texttt{Bug1} into one group, we developed a technique to cluster the fuzzed crashes associated with these fault signatures.
Our considerations are twofold. First, we want to group fault signatures of the same root cause (while a root cause can cover a segment/set of statements), and thus we should consider the similarity/overlap between the fault signatures. Second, we also consider the paths that lead to the crash site when merging the fault signatures. We observed that different bugs may fail at the same location, but two crashes that traverse very different paths before reaching the same location likely have different root causes. For example, in Figure~\ref{fig:DiffBug}, \texttt{Sig3} from \texttt{Bug2} shares the bug manifestation statements (line 3 in Figure~\ref{subfig:CallDup}) with \texttt{Sig1} and \texttt{Sig2} from \texttt{Bug1}. However, the actual root causes (line 17 for \texttt{Bug2} and line 5 for \texttt{Bug1} in Figure~\ref{subfig:CallDup}) along the paths leading to the buggy state (lines 18,4--7 for \texttt{Bug2} and lines 8--16 for \texttt{Bug1} respectively) are very different.
Specifically, to measure the similarity between two fault signatures, we used the Levenshtein edit distance between them. See Equation~\ref{eq:SimFault}, where $Sim_{Sig}$ is the similarity score, $MAXSize$ returns the maximum size, in lines of code, of the two fault signatures $S_1$ and $S_2$, and $LDistance$ is the Levenshtein edit distance between the two signatures.
\begin{equation}\label{eq:SimFault}
Sim_{Sig} = \frac{MAXSize(S_1, S_2) - LDistance(S_1, S_2)}{MAXSize(S_1, S_2)}
\end{equation}
We also measured the similarity of the paths leading to the failure location using the call stacks generated by running the crash-inducing inputs with the fault signatures. Specifically, we used Equation~\ref{eq:SimCall}, where $Sim_{Call}$ is the similarity score, $COMMON$ is the number of functions shared between the two call stacks $CS_1$ and $CS_2$, and $MAXSize$ is the larger of the two call stack sizes (in number of frames).
\begin{equation}\label{eq:SimCall}
Sim_{Call} = \frac{COMMON(CS_1, CS_2)}{MAXSize(CS_1, CS_2)}
\end{equation}
The final similarity score used to decide whether two fault signatures should be merged or not is shown in Equation~\ref{eq:SimScore}, which is the average of the two similarity scores.
\begin{equation}\label{eq:SimScore}
Sim_{Score} = \frac{Sim_{Sig} + Sim_{Call}}{2}
\end{equation}
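To make the scoring concrete, the following C sketch mirrors Equations~\ref{eq:SimFault}--\ref{eq:SimScore}, treating a fault signature as an array of source lines and a call stack as an array of function names. Our actual implementation is written in Rust (Section~\ref{subsec:ExpSetup}); the identifiers here are illustrative, and the sketch assumes non-empty inputs.
\begin{lstlisting}[language=C, numbers=none, breaklines=true, basicstyle=\footnotesize\ttfamily, showstringspaces=false]
#include <stdlib.h>
#include <string.h>

/* Levenshtein edit distance over source lines, unit cost per edit. */
static size_t ldistance(char **a, size_t n, char **b, size_t m) {
    size_t *prev = malloc((m + 1) * sizeof *prev);
    size_t *curr = malloc((m + 1) * sizeof *curr);
    for (size_t j = 0; j <= m; j++) prev[j] = j;
    for (size_t i = 1; i <= n; i++) {
        curr[0] = i;
        for (size_t j = 1; j <= m; j++) {
            size_t cost = strcmp(a[i-1], b[j-1]) ? 1 : 0;
            size_t del = prev[j] + 1, ins = curr[j-1] + 1;
            size_t sub = prev[j-1] + cost;
            curr[j] = del < ins ? del : ins;
            if (sub < curr[j]) curr[j] = sub;
        }
        size_t *tmp = prev; prev = curr; curr = tmp;
    }
    size_t d = prev[m];
    free(prev); free(curr);
    return d;
}

/* Equation (1): normalized edit distance between two signatures. */
double sim_sig(char **s1, size_t n1, char **s2, size_t n2) {
    double max = n1 > n2 ? n1 : n2;
    return (max - ldistance(s1, n1, s2, n2)) / max;
}

/* Equation (2): fraction of shared functions in the call stacks. */
double sim_call(char **cs1, size_t n1, char **cs2, size_t n2) {
    size_t common = 0;
    for (size_t i = 0; i < n1; i++)
        for (size_t j = 0; j < n2; j++)
            if (strcmp(cs1[i], cs2[j]) == 0) { common++; break; }
    return (double)common / (double)(n1 > n2 ? n1 : n2);
}

/* Equation (3): merge when the average score reaches the threshold. */
double sim_score(double sig, double call) {
    return (sig + call) / 2.0;
}
\end{lstlisting}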
\section{Related Work}\label{sec:Related}
The SOTA fuzzers~\cite{zalewski2017american,Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017,Bohme_Pham_Nguyen_Roychoudhury_2017,certBFF,swiecki2017honggfuzz,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Cha_Woo_Brumley_2015,slicing-2015} use either coverage-based~\cite{zalewski2017american,Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018,Bohme_Pham_Nguyen_Roychoudhury_2017} or call stack based~\cite{certBFF,swiecki2017honggfuzz,Cha_Woo_Brumley_2015,Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017,slicing-2015} heuristics to determine the uniqueness of the fuzzed crashes and report the deduplicated fuzzed crashes. Boehme et al.~\cite{Bohme_Pham_Nguyen_Roychoudhury_2017} extended AFL to direct the fuzzing towards a specific target, while Gan et al.~\cite{Gan_Zhang_Qin_Tu_Li_Pei_Chen_2018} improved AFL to more uniquely determine the branch coverage when fuzzing. Even though both carried out additional (manual) verification when reporting ``unique'' bugs, they didn't change AFL's underlying deduplication method of using branch coverage. Similarly, most of the call stack based approaches used the same hashing method proposed by Molnar et al.~\cite{Molnar_Li_Wagner_2009} with varying numbers of calls (frames) used for the hashing. Of particular interest are \texttt{SYMFUZZ}~\cite{Cha_Woo_Brumley_2015}, which uses a ``safe stack hash'' that only considers non-corrupted call stacks, and \texttt{VUzzer}~\cite{Rawat_Jain_Kumar_Cojocar_Giuffrida_Bos_2017}, which uses the last 10 basic blocks along with the call stack to generate hashes to prevent classifying fuzzed crashes from different bugs into the same group. These deduplication methods are tightly coupled to their respective fuzzers and hard to use independently. In contrast, our method is agnostic of the method used for generating the fuzzed crashes. Since we capture the root cause of the bug in the fault signatures, we are also less prone to the over-counting and misclassification of ``unique'' bugs found in the coverage and stack hash based methods~\cite{Klees_Ruef_Cooper_Wei_Hicks_2018}.
There are also grouping methods developed independently of the fuzzers~\cite{Chen_Groce_Zhang_Wong_Fern_Eide_Regehr_2013,Holmes_Groce_2018,van_Tonder_Kotheimer_Le_Goues_2018,Pham_Khurana_Roy_Roychoudhury_2017}. The closest related work is by Chen et al.~\cite{Chen_Groce_Zhang_Wong_Fern_Eide_Regehr_2013}, who calculated a ``distance'' between fuzzed crashes to capture their static and dynamic properties and used machine learning to rank them. Holmes et al.~\cite{Holmes_Groce_2018} and van Tonder et al.~\cite{van_Tonder_Kotheimer_Le_Goues_2018} grouped fuzzed crashes based on their responses to mutations of the program. They hypothesize that if the behaviors of two fuzzed crashes change similarly (from crashing to not crashing) due to the same mutation, then they are more likely to be caused by the same bug. Pham et al.~\cite{Pham_Khurana_Roy_Roychoudhury_2017} proposed using symbolic constraints on input paths to group fuzzed crashes; this has limited applicability due to the reliance on symbolic execution to generate the constraints. Cui et al.~\cite{Cui_Peinado_Cha_Fratantonio_Kemerlis_2016} and Molnar et al.~\cite{Molnar_Li_Wagner_2009} proposed call stack similarity to group fuzzed crashes, which is more prone to misclassification as discussed above. We considered these related works as baselines; however, they are either tailored for specific applications~\cite{Chen_Groce_Zhang_Wong_Fern_Eide_Regehr_2013, Holmes_Groce_2018} or limited in the types of bugs~\cite{van_Tonder_Kotheimer_Le_Goues_2018} or benchmarks~\cite{Pham_Khurana_Roy_Roychoudhury_2017} they can handle, while others~\cite{Cui_Peinado_Cha_Fratantonio_Kemerlis_2016,Molnar_Li_Wagner_2009} are very similar to our current baselines.
There has also been work on performing fault localization for fuzzers. Blazytko et al.~\cite{Blazytko_AURORA_2020} is a representative work in this area. Similar to our approach of generating a crash corpus, they used a known crashing input from a fuzzed crash to generate similar inputs to observe the dynamic state of the program. These are used to generate predicates, similar to the input path constraints generated by symbolic execution, to isolate the root cause. Variations of delta debugging~\cite{Groce_Alipour_Zhang_Chen_Regehr_2014,Christi_Olson_Alipour_Groce_2018,Vince_Hodovan_Kiss_2021,Xuan_Monperrus_2014} have also been used for fault localization. Both Christi et al.~\cite{Christi_Olson_Alipour_Groce_2018} and Vince et al.~\cite{Vince_Hodovan_Kiss_2021} proposed reducing the fuzzed crashes, similar to our approach of generating fault signatures, before trying to localize bugs in order to improve accuracy. The main difference from our work is that they only identify or rank root causes for a crash, but do not generate executable fault signatures and use them to group fuzzed crashes.
\subsection{Results for RQ1: Grouping Correctness}\label{subsec:ResultCorrect}
\begin{table*}[ht]
\begin{tabular}{@{}lccccccccc@{}}
\cmidrule(r){1-9}
\textbf{Benchmark} &
\multicolumn{1}{l}{\textbf{Bug ID}} &
\multicolumn{1}{l}{\textbf{Crashes}} &
\multicolumn{1}{l}{\textbf{FuzzerAid}} &
\multicolumn{1}{l}{\textbf{AFL}} &
\multicolumn{1}{l}{\textbf{BFF-5}} &
\multicolumn{1}{l}{\textbf{BFF-1}} &
\multicolumn{1}{l}{\textbf{Honggfuzz}} &
\multicolumn{1}{l}{\textbf{Honggfuzz-S}} \\ \cmidrule(r){1-9}
\textbf{w3m} & 1 & 250 & 1 & 109 & 2 & 1 & 49 & 49 \\
& 2 & 352 & 1 & 208 & 2 & 1 & 9 & 8 \\
& 3 & 250 & 1 & 113 & 89 & 7 & 14 & 17 \\
& 4 & 139 & 1 & 60 & 4 & 3 & 8 & 7 \\ \cmidrule(r){1-9}
Sub Total & \textbf{4} & \textbf{991} & \textbf{4} & \textbf{490} & \textbf{97} & \textbf{12} & \textbf{80} & \textbf{81} \\ \cmidrule(r){1-9}
\textbf{SQLite} & 5 & 285 & 1 & 179 & 14 & 5 & 2 & 3 \\
& 6 & 191 & 1 & 43 & 22 & 8 & 11 & 12 \\
& 7 & 240 & 1 & 81 & 7 & 4 & 1 & 1 \\
& 8 & 113 & 1 & 43 & 2 & 3 & 1 & 1 \\
& 9 & 226 & 1 & 60 & 3 & 1 & 1 & 1 \\
& 10 & 237 & 2 & 58 & 4 & 1 & 2 & 2 \\
& 11 & 250 & 1 & 134 & 1 & 1 & 2 & 2 \\
& 12 & 236 & 2 & 62 & 4 & 2 & 2 & 2 \\ \cmidrule(r){1-9}
Sub Total & \textbf{8} & \textbf{1778} & \textbf{10} & \textbf{660} & \textbf{57} & \textbf{25} & \textbf{22} & \textbf{24} \\ \cmidrule(r){1-9}
\textbf{libmad} & 13 & 99 & 1 & 58 & 4 & 2 & 5 & 6 \\ \cmidrule(r){1-9}
Sub Total & \textbf{1} & \textbf{99} & \textbf{1} & \textbf{58} & \textbf{4} & \textbf{2} & \textbf{5} & \textbf{6} \\ \cmidrule(r){1-9}
\textbf{libarchive} & 14 & 67 & 1 & 24 & {\color[HTML]{CC0000} 0} & {\color[HTML]{CC0000} 0} & 1 & 1 \\
& 15 & 85 & 1 & 44 & {\color[HTML]{CC0000} 1} & {\color[HTML]{CC0000} 1} & 1 & 1 \\ \cmidrule(r){1-9}
Sub Total & \textbf{2} & \textbf{152} & \textbf{2} & \textbf{68} & \textbf{1} & \textbf{1} & \textbf{2} & \textbf{2} \\ \cmidrule(r){1-9}
\textbf{Total} & \textbf{15} & \textbf{3020} & \textbf{17} & \textbf{1276} & \textbf{159} & \textbf{40} & \textbf{109} & \textbf{113} \\ \cmidrule(r){1-9}
\end{tabular}
\caption{Results of RQ2: Comparing \helium{} against SOTA fuzzer deduplication}\label{tab:RQ2}
\end{table*}
Table~\ref{tab:RQ1} shows the results for RQ1. Each row corresponds to a known bug labeled with a \textit{Bug ID}. The column \textit{Crashes} lists the number of fuzzed crashes generated for each bug using the approach presented in Section~\ref{subsubsec:GroupCorrect}. The crashes reported in this column are post-processed using the developers' patches. The \textit{Fault Sig} and \textit{Group} columns provide the number of fault signatures and the number of fault groups generated for the known bug using \helium{}. Under the \textit{Correct} column, we list the number of fuzzed crashes that were correctly grouped into one of the fault groups generated for the bug. Similarly, the \textit{Incorrect} column lists the number of fuzzed crashes from unrelated bugs that were incorrectly grouped under one of the fault groups generated for the bug. Any fuzzed crash that \helium{} failed to group under any fault group is reported under \textit{Missed}.
Our results indicate that among the total 3020 fuzzed crashes for which we know the ground truth, \helium{} correctly grouped 2995 (99.1\%) fuzzed crashes. For 3 benchmarks (\texttt{sqlite}, \texttt{libmad} and \texttt{libarchive}), we correctly classified 100\% (2029) of their fuzzed crashes. While we were unable to classify 25 (0.8\%) fuzzed crashes, we didn't misclassify any fuzzed crash into an unrelated fault group. The 25 fuzzed crashes we missed for \texttt{w3m} (1 from \textit{Bug 2} and 24 from \textit{Bug 4}) were due to fault signature generation failures caused by the project-specific hard-coded dynamic functions. This implementation issue in \helium{} could be addressed in the future.
\helium{} generated a total of 17 fault groups for the 15 known bugs. For 3 benchmarks (\texttt{w3m}, \texttt{libmad} and \texttt{libarchive}), we reported the same number of groups as the ground truth. The 2 extra fault groups (highlighted in red) generated for \texttt{sqlite} (one for \textit{Bug 10} and one for \textit{Bug 12}) missed the clustering threshold by a very small margin (differences of 0.08\% and 0.4\% respectively).
We reported a total of 55 fault signatures for the 15 bugs, sized between 52 and 340 LOC\@. The number of fault signatures can indicate the number of different important paths (or scenarios) in which a particular bug can manifest. Of particular interest is \textit{Bug 6} from \texttt{sqlite}, which produced 14 fault signatures from \textit{just} 191 fuzzed crashes. The relatively high number of fault signatures may be an indicator that this bug can be triggered in a variety of scenarios and is thus likely more important.
Using AFL's ``Crash Exploration Mode'', our crash corpus also included 28 fuzzed crashes that do not belong to any known bug, which we discovered using the developers' patches (see Section~\ref{subsubsec:GroupCorrect}). \helium{} successfully separated them from the known bugs into different groups.
\subsection{Results for RQ2: Comparing Against SOTA}\label{subsec:ResultCompare}
Table~\ref{tab:RQ2} shows the results for RQ2. Similar to Table~\ref{tab:RQ1}, each row corresponds to a known bug, labeled with the same \textit{Bug ID} as in Table~\ref{tab:RQ1}. For all the crashes listed under \textit{Crashes}, the grouping results from \helium{} and our baselines are listed under \textit{FuzzerAid}, \textit{AFL}, \textit{BFF-5}, \textit{BFF-1}, \textit{Honggfuzz} and \textit{Honggfuzz-S} respectively.
Our results show that \helium{}'s grouping matches the ground truth, except that we generated two additional groups for the \textit{SQLite} bugs. We generated a total of 17 groups for 15 bugs, compared to 40 from \textit{BFF-1}, 109 from \textit{Honggfuzz}, 113 from \textit{Honggfuzz-S}, 159 from \textit{BFF-5}, and 1276 from \textit{AFL}. Considering the challenges and cost of diagnosing a bug, our precise grouping technique and its improvement over the baselines indeed have practical value.
AFL used a conservative approach and applied branch-coverage information to group the crashes of the same paths; thus it generated the most groups. On the other hand, the call-stack-hash-based approaches of CERT-BFF and Honggfuzz were able to greatly reduce the number of groups, but at the risk of misclassification. For example, when we inspected the correctness of the grouping, we found that both \textit{BFF-1} and \textit{BFF-5} incorrectly classified all the fuzzed crashes from the two bugs of \texttt{libarchive} into one single group. See the numbers in the row of \textit{libarchive} highlighted in red.
\subsection{Summary}
In our evaluation, \helium{} is able to correctly group 2995 out of 3020 (99.1\%) fuzzed crashes without any incorrect classification. We were also the closest to the ground truth in terms of grouping, with 17 fault groups reported instead of the ground truth's 15. The next closest baseline (BFF-1) reported 40 groups (2.35 times more) while still merging the fuzzed crashes of the two \texttt{libarchive} bugs into one group.
The traces generated by PIN for creating the fault signatures ranged between 9.5 K and 2.03 M lines. Using the program-reduction techniques, the fault signatures used to group crashes range between 52 and 340 LOC\@. Such fault signatures provide fault-localization information and can potentially help developers focus on a small set of statements for bug understanding and diagnosis.
\subsection{Threats To Validity}\label{subsec:Threats}
\noindent{\bf Internal Threat to Validity}: One of the important challenges in evaluating the deduplication of fuzzing results is that we need ground truth for the grouping. To simulate this setting, our approach is to take known bugs and configure the fuzzers to generate crashes only for the known bugs. This approach can detect whether we are able to group crashes of the same bug together. Meanwhile, to evaluate that we do not mistakenly assign crashes from one bug to other groups, we mixed all the crashes from the known bugs and checked whether the grouping is correct. We also consider the fact that using the AFL ``Crash Exploration Mode'' may generate additional crashes from unknown bugs. We used the developers' patches to fix each bug and observed whether the crashes disappeared. We also used a similar approach to validate whether there were any misclassifications in the groups generated by \helium{} and the other baselines.
\noindent{\bf External Threat to Validity}: To evaluate whether our techniques are generally applicable in practice, we used 15 different bugs from 4 real-world large open-source projects. These projects range from 18 KLOC to 313 KLOC and cover a variety of software, e.g., a text-based web browser and an audio library. We also selected 3 widely used SOTA fuzzers with 5 different settings in total as baselines to understand whether our approach indeed advances the SOTA\@. Although more crashes, bugs, software, and baselines could further confirm the generality of our approach, our current results do provide confidence that our approach is promising and can be useful.
\section{Introduction}
Just as the experimental evidence for ``magic" proton and neutron numbers was instrumental for laying a basic foundation of nuclear theory \cite{PhysRev.75.1969,PhysRev.75.1766.2}, the observation of their demise in exotic nuclear systems \cite{PhysRevC.12.644} was pivotal for the establishment of the modern understanding of nuclear structure and the mechanisms driving its evolution far from $\beta$-stability \cite{PhysRevLett.87.082502,Sorlin2008602,Smirnova2010109}.
The magic numbers found their origin in systematic studies of mass differences \cite{elsasser}. The disappearance of the magic $N=20$ shell closure was likewise evidenced through mass measurements of exotic sodium ($Z=11$) isotopes \cite{PhysRevC.12.644}, for which the binding energy normally reduced beyond a shell closure was in fact found to increase due to deformation. This was attributed to intruder configurations forming what is now known as the ``island of inversion” \cite{ioi}.
The question of the persistence of the next magic number -- $N = 28$ -- below the doubly magic (stable) $^{48}$Ca isotope has been subjected to detailed experimental scrutiny over the past two decades. The demise of the $N$ = 28 spherical gap in the silicon ($Z$ = 14) isotopic chain has been established through various spectroscopic studies \cite{Grevy2003,Bastin2007,Campbell2006,Takeuchi2012,Stroberg2014,PhysRevLett.122.222501} while the sulfur chain ($Z$ = 16)
shows signatures of shape-coexistence in the vicinity of $^{44}$S \cite{Glasmacher1997,Gade2009,Gaudefroy2009,Force2010,Santiago-Gonzalez2011,PhysRevC.100.044312}, a phenomenon often encountered at the border of an island of inversion \cite{PhysRevLett.117.272501}.
The argon ($Z$ = 18) chain is however less clear cut.
A relatively healthy $N$ = 28 gap is attested by the high-lying E(2$_{1}^{+}$) excitation energy \cite{Gade2009,Bhattacharyya2008}, which is one of the major indicators of a closed shell. The level scheme proposed for $^{45}$Ar in \cite{Grevy2003} was also found to be well described in a single-particle picture with little collectivity. Likewise, investigations of neutron-rich argon isotopes via neutron knockout reactions \cite{Gade2005} portray $^{46}$Ar as a seemingly ``good" semi-magic nuclide with a low observed cross section to the $3 / 2^{-}$ state in $^{45}$Ar. However, later results from $(d,p)$ transfer reactions performed at GANIL \cite{Gaudefroy2006,Gaudefroy2008} hinted at the erosion of the $N$ = 28 shell gap already at $Z$ = 18. B(E2: 2$_{1}^{+} \rightarrow$0$_{1}^{+}$) values, good indicators of the onset of collective nuclear behavior, also give conflicting results. Several independent measurements yield a rather low B(E2) value \cite{Scheit1996,Gade2003,Calinescu2014,Winkler2012}, compatible with the persistence of the $N$ = 28 gap in this chain, while the B(E2) extracted from a lifetime measurement \cite{Mengoni2010}, albeit a low-statistics one, suggests the opposite.
Ground-state properties provide complementary and model-independent probes of nuclear phenomena. Laser-spectroscopy measurements of mean-square charge radii along the argon isotopic chain show a pronounced shell effect at $N$ = 28 \cite{KLEIN19961,BLAUM200830}. Likewise, mass measurements performed using the S800 spectrometer at the NSCL suggest the presence of a strong $N$ = 28 shell in the argon chain \cite{Meisel2015}, but the large uncertainties of these masses prevent definitive statements from being made.
Mass measurements of the $N=28$ gap below calcium
\cite{PhysRevLett.84.5062,Jurado2007,Ringle2009} hint at its possible erosion
for chlorine ($Z=17$) and sulfur ($Z=16$) but again, no firm conclusions can be drawn due to the experimental uncertainties beyond $N=28$.
Neutron-rich nuclei in this region of deformation below $Z$ = 20 are also of great theoretical interest. Firstly, they are fully tractable via state-of-the-art shell-model calculations. Specifically, the \emph{SDPF-U} interaction \cite{PhysRevC.79.014310} was designed to describe the physics inside the $N$ = 28 island of inversion and has succeeded in reproducing excitation spectra in the high-$Z$ part of this region \cite{PhysRevC.81.064329}. The merging of the $N$ = 28 and $N$ = 20 islands of inversion is well described by the \emph{SDPF-U} Mix interaction \cite{Caurier2014}, even though the predictions of the two interactions significantly differ in lighter isotopes \cite{PhysRevLett.122.052501,PhysRevLett.122.222501}.
Open-shell medium-mass nuclei also provide important benchmarks for rapidly developing nuclear \emph{ab initio} methods and modern theories of nuclear interactions based on chiral effective field theory.
In this context argon isotopes offer a complementary picture to the calcium chain that constitutes a traditional testing ground.
One such approach, the valence-space formulation of the In-Medium Similarity Renormalization Group (VS-IMSRG) \cite{Tsuk12SM,Bogn14SM,Stro16TNO,Stro17ENO,Stro19ARNPS}, opened \emph{ab initio} theories to essentially all nuclei accessible to the nuclear shell model, including fully open-shell exotic systems. The
VS-IMSRG provides an adequate description of the emergence of the $N$ = 32 and $N$ = 34 sub-shell closures around the calcium chain \cite{Michimasa2018,PhysRevC.99.064303,PhysRevLett.120.062503,Moug18Cr}, but its ability to simultaneously describe the collapse of the $N$ = 28 closure has not yet been tested. Another approach, the self-consistent Green's function formalism in its Gorkov formulation (SCGF) \cite{PhysRevC.84.064317}, can now target open-shell nuclei and thus allows the testing of various Hamiltonians along complete isotopic chains \cite{Lapoux16, som2019}.
In this article we report on the high-precision measurement of the neutron rich $^{46-48}$Ar isotopes. The question of the persistence of the $N$ = 28 gap is revisited in light of the new high-precision data. The new binding energy trends are first compared to predictions from the \emph{SDPF-U} shell-model interaction, which is believed to well describe physics in this region of deformation. We then extend our theoretical investigations to VS-IMSRG calculations, to provide a first test with respect to the evolution of the $N=28$ shell closure below calcium.
Finally, we present results from SCGF calculations of open-shell isotopes around the calcium chain using the recently derived NN+3N(lnl) chiral Hamiltonian~\cite{som2019}.
\section{Experiment}
\begin{figure*}
\centering
\includegraphics[scale=1]{ISOLTRAP_drawings.png}
\caption{Schematic representation of the ISOLTRAP on-line mass spectrometer. The typical kinetic energy of the ions at various stages of the ISOLTRAP apparatus is shown in green. For details see \cite{Mukherjee-EurPhysJA,Kreim-NuclInstrumMethodsB.317.492}.}
\label{isoltrap_sketch}
\end{figure*}
The measurements reported in this article were performed at the radioactive ion-beam facility ISOLDE at CERN \cite{ISOLDE_2017} in July 2015 and August 2017. In both experiments, the radioisotopes of interest were produced using a thick UC$_{x}$ target which was bombarded with a primary beam of 1.4-GeV protons delivered by the PS-Booster. A VADIS VD7 plasma-ion source was used for ionization. This source was equipped with a water-cooled tantalum transfer line which inhibits the effusion of the less volatile species towards the active volume of the source \cite{doi:10.1063/1.3271245}. The obtained flux of ions was accelerated to a kinetic energy of 30/50 keV in 2015/2017, respectively. Prior to its delivery to the ISOLTRAP on-line mass spectrometer, the isobars of interest were selected using the ISOLDE High-Resolution (magnetic-dipole) Separator (HRS).
A schematic representation of the ISOLTRAP mass-spectrometer \cite{Mukherjee-EurPhysJA,Kreim-NuclInstrumMethodsB.317.492} is shown in Fig. \ref{isoltrap_sketch}. The radioactive ions were first accumulated in a linear radio-frequency cooler-buncher trap (RFQ-CB) \cite{Herfurth2001254},
where the emittance of the incoming beam was reduced in a few milliseconds through collisions with the helium buffer gas (see Table \ref{ArTable} for details).
The ions were extracted from the RFQ-CB in short bunches, decelerated by a pulsed drift cavity to a kinetic energy of $\approx$ 3.2 keV and then injected into a Multi-Reflection Time-of-Flight Mass Separator (MR-ToF MS) \cite{WOLF201282,WOLF2013123}. There, the bunch of ions was reflected back and forth repeatedly between two electrostatic mirrors. As a result, the various isobaric species constituting the ISOLDE beam were separated in flight time. The beam composition was studied by measuring the time of arrival of the different beam constituents at a secondary electron multiplier placed behind the MR-ToF MS. The experimental duty cycle was adapted according to the nature and abundance of the contamination (see Table \ref{ArTable} for details). Typically, the beam was kept for 1000 revolutions inside the MR-ToF MS, corresponding to a trapping time of $\approx$16 ms. In all cases, the radioactive species were unambiguously identified by observing the count rates with and without the proton beam. After separation, the selection of the species of interest was achieved by optimising the timing and length of the extraction pulse from the MR-ToF MS \cite{WIENHOLTZ2017285}.
Being a noble gas, argon is characterized by a large first ionization potential and is thus prone to charge-exchange reactions with neutral impurities contained in the helium gas of the RFQ-CB \cite{DELAHAYE2004604}. The charge-exchange half-life inside the RFQ-CB was determined by monitoring the evolution of the number of stable argon ions behind the MR-ToF MS as a function of the RFQ-CB cooling time. During the 2017 experiment, stable $^{38}$Ar$^{+}$ was used and the charge-exchange half-life was initially determined to be 23(2) ms. In order to purify the buffer gas, the He injection line was immersed in a bath of liquid nitrogen. Six hours after the installation of this cold trap, the charge-exchange half-life had improved to 50(13) ms. In 2015, the RFQ-CB charge-exchange half-life with the cold trap was estimated to be 33(5) ms for $^{36}$Ar$^{+}$. In both runs, the charge-exchange phenomenon was exploited to distinguish the argon isotopes from the contaminants by monitoring the count-rate loss in the argon time-of-flight window as a function of the RFQ-CB trapping time.
\begin{table*}
\centering
\caption{Summary of the production, preparation and measurement conditions for the isotopes $^{46-48}$Ar. For the ToF-ICR data, the exact quadrupole-excitation time applied in the measurement Penning trap is given. For the Ramsey-type ToF-ICR resonances, the total quadrupole excitation time is presented as $\tau^{RF}_{on}$-$\tau^{RF}_{off}$-$\tau^{RF}_{on}$.}
\label{ArTable}
\begin{ruledtabular}
\begin{center}
\begin{tabular}{c c c c c | c c c c c c}
\multicolumn{5}{c|}{\textbf{Production}} & \multicolumn{5}{c}{\textbf{Preparation/Measurement}} \\ \cline{1-11}
Date & Target/Line & Source & Sep. & Energy & Ion & RFQ-CB & MR-ToF MS & Prep. Trap & Meas. Trap & Method \\ \cline{1-11}
\multirow{4}{*}{July 2015} & \multirow{4}{*}{UC$_{x}$/Ta} & \multirow{4}{*}{VD7} & \multirow{4}{*}{HRS} & \multirow{4}{*}{30~kV} & $^{46}$Ar$^{+}$ & 10~ms & 16.3~ms & 104~ms & 200~ms & 2 $\times$ ToF-ICR\\ \cline{6-11}
& & & & & \multirow{3}{*}{$^{47}$Ar$^{+}$} & \multirow{3}{*}{15~ms} & \multirow{3}{*}{19.8~ms} & \multirow{3}{*}{104~ms} & 100~ms & 2 $\times$ ToF-ICR \\
& & & & & & & & & 200~ms & 1 $\times$ ToF-ICR \\
& & & & & & & & & 10-80-10~ms & 2 $\times$ Ramsey ToF-ICR \\ \cline{1-11}
Aug. 2017 & UC$_{x}$/Ta & VD7 & HRS & 50~kV & $^{48}$Ar$^{+}$ & 5~ms & 16.7~ms & & & 95 $\times$ 1000revs MR-ToF \\
\end{tabular}
\end{center}
\end{ruledtabular}
\end{table*}
After a 90-degree bend, the purified ion beam entered ISOLTRAP's vertical transport section and was captured in the preparation Penning trap \cite{RAIMBAULTHARTMANN1997378}. In this He-filled device, further beam purification was achieved using the so-called mass-selective resonant buffer-gas cooling technique \cite{SAVARD1991247}. Once again, a cold trap was used to purify the He-gas injection line. After installation of the cold traps, the charge exchange half-life in the preparation Penning trap was 223(38) ms. Consequently, a rather short processing time (see Table \ref{ArTable} for details) was used. Finally, the ion bunch was transported to the precision Penning trap, where the free cyclotron frequency of the ion of interest was measured using the Time-of-Flight Ion-Cyclotron-Resonance (ToF-ICR) technique \cite{Koenig-IntJMassSpectrom.142.95}.
The ion mass $m_{\textit{ion,x}}$ is connected to its cyclotron frequency by the relation:
\begin{equation}
\nu_{c,x} = \frac{q_{x} B}{2 \pi m_{\textit{ion,x}}},
\end{equation}
where $q_{x}$ is the ion's charge (in the following we consider $q_{x}$ = \emph{e} for all species) and $B$ is the strength of the confining magnetic field. The calibration of the magnetic field is performed by measuring the cyclotron frequency $\nu_{\textit{c,ref}}$ of a reference species of well-known mass $m_{\textit{ion,ref}}$ shortly before and after the measurement of the species of interest. The cyclotron frequency of the reference species is then linearly interpolated to the time at which the measurement of the ion of interest was performed. From the experimentally measured cyclotron-frequency ratio:
\begin{equation}
r_{\textit{ref},x} = \frac{\nu_{c,\textit{ref}}}{\nu_{c,x}} = \frac{m_{ion,x}}{m_{ion,\textit{ref}}},
\end{equation}
the atomic mass of the species of interest is calculated according to the relation:
\begin{equation}
m_{\textit{atom,x}} = r_{\textit{ref},x }(m_{\textit{atom,ref}} - m_{e}) + m_{e},
\end{equation}
where $m_{e}$ is the electron mass \cite{Sturm2014}.
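As an illustration of this chain of relations, the short Python sketch below reproduces the $^{46}$Ar mass excess from the frequency ratio and $^{39}$K reference mass reported in Table~\ref{Ar_results}; the electron mass and the keV/u conversion factor are standard CODATA/AME constants.
\begin{verbatim}
# Penning-trap mass determination: frequency ratio -> mass excess.
U_TO_KEV = 931494.10242          # atomic mass unit in keV/c^2
M_E      = 548.579909e-6         # electron mass in u

r     = 1.1797680972             # nu_c(39K+) / nu_c(46Ar+), this work
m_ref = 38.963706487             # atomic mass of 39K in u (AME2016)
A     = 46                       # mass number of the ion of interest

m_atom = r * (m_ref - M_E) + M_E            # atomic mass of 46Ar in u
print(f"ME(46Ar) = {(m_atom - A)*U_TO_KEV:.1f} keV")  # ~ -29771 keV
\end{verbatim}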
Sometimes the low yield and/or short half-life of an ion species make a Penning-trap measurement impossible. In this case, the MR-ToF MS can be used as a mass spectrometer in its own right. The relationship between an ion's mass-over-charge ratio $\frac{m_{\textit{ion,x}}}{q_{x}}$ and its time of flight $t_{x}$ is given by \cite{Guilhaus1995}:
\begin{equation}
t_{x} = a \sqrt{\frac{m_{ion,x}}{q_{x}}} + b,
\end{equation}
where $a$ and $b$ are calibration parameters which can be determined by measuring the flight times $t_{1,2}$ of two reference ions with well-known masses $m_{1,2}$ and charges $q_{1,2}$. The mass of an ion is calculated from the relation \cite{Wienholtz-Nature.498.346}:
\begin{align}
\sqrt{\frac{m_{ion,x}}{q_{x}}} & = C_{\textit{TOF}} \left( \sqrt{\frac{m_{ion,1}}{q_{1}}}-\sqrt{\frac{m_{ion,2}}{q_{2}}} \right) \nonumber \\
& + \frac{1}{2} \left( \sqrt{\frac{m_{ion,1}}{q_{1}}}+\sqrt{\frac{m_{ion,2}}{q_{2}}} \right),
\end{align}
with:
\begin{equation}
C_{\textit{TOF}} = \frac{2t_{x}-t_{1}-t_{2}}{2(t_{1}-t_{2})}.
\end{equation}
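For concreteness, the following Python sketch applies these relations to the $^{48}$Ar numbers of Table~\ref{Ar_results} (ionic masses are approximated as atomic masses minus one electron mass; electron and molecular binding energies are neglected, which is safe at the present precision).
\begin{verbatim}
import math

# MR-ToF MS mass determination of 48Ar+ from the C_TOF parameter,
# with 32S16O+ and 85Rb+ as references.
U_TO_KEV = 931494.10242
M_E      = 548.579909e-6                  # electron mass in u

c_tof = 0.499715668                       # this work
m1 = 47.966985794 - M_E                   # 32S16O+ ionic mass in u
m2 = 84.911789738 - M_E                   # 85Rb+  ionic mass in u

sqrt_mx = (c_tof * (math.sqrt(m1) - math.sqrt(m2))
           + 0.5 * (math.sqrt(m1) + math.sqrt(m2)))
m_atom = sqrt_mx**2 + M_E                 # atomic mass of 48Ar in u
print(f"ME(48Ar) = {(m_atom - 48)*U_TO_KEV:.0f} keV")  # ~ -22355 keV
\end{verbatim}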
\subsection{The $^{46}$Ar mass}
During the 2015 experiment, although significant amounts of $^{92}$Kr$^{2+}$ were present in the beam, the most detrimental $A$ = 46 contaminant was the stable $^{34}$S$^{12}$C$^{+}$ molecular ion. A mass resolving power of $R =\frac{m}{\Delta m} =$ 2 $\times$ 10$^{5}$ is needed to separate $^{46}$Ar$^{+}$ from this contaminant. As a result, a mixture of the two species was transported to the measurement Penning trap, where a ratio of 3:1 in favor of the contaminant species was initially observed. Fortunately, after a few days the outgassing of the $^{34}$S$^{12}$C$^{+}$ molecular ion from the target unit subsided, reversing this ratio.
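The quoted resolving power can be checked with a two-line estimate (atomic masses in u; the $^{34}$S mass is an AME2016 input and molecular binding is neglected), which yields a value of the same order as the one given above.
\begin{verbatim}
# Resolving power needed to separate 46Ar+ from 34S12C+ at A = 46.
m_ar46 = 45.968039                 # from ME(46Ar) ~ -29771 keV
m_sc34 = 33.967867 + 12.0          # 34S + 12C (AME2016)
print(f"R = m/dm ~ {m_ar46/(m_ar46 - m_sc34):.1e}")   # ~2.7e5
\end{verbatim}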
To enhance the collection of argon ions even further, the ISOLTRAP cycle was synchronized to the proton impact on the ISOLDE target and delayed by 50~ms, to accumulate the argon ions at the maximum of their release from the target. The RFQ-CB cooling time was also reduced from
25~ms to 10~ms to minimise charge-exchange losses. With these modifications, two quasi-pure ToF-ICR resonances of $^{46}$Ar$^{+}$ were recorded, using a quadrupole-excitation time of 200~ms in both cases (see Table \ref{ArTable} for details).
\begin{figure}
\centering
\includegraphics[scale=0.35]{Argontoficr46.pdf}
\caption{A typical one-pulse ToF-ICR resonance \cite{Koenig-IntJMassSpectrom.142.95} of $^{46}$Ar$^{+}$ ($\tau_{on}^{RF}$ = 200 ms). The color-map represents the ion events recorded in each (frequency;tof) bin. The mean and standard deviation of the time-of-flight distribution recorded for each frequency is shown as open circles while the red line shows the result of the least-squares adjustment of the theoretical line shape to these data points. The vertical dashed line indicates the expected cyclotron frequency of the contaminant species $^{34}$S$^{12}$C$^{+}$.}
\label{46Ar_toficr}
\end{figure}
Because of the presence of $^{34}$S$^{12}$C$^{+}$, extra care was taken during the analysis procedure. In the present case, the vast majority of ejections out of the measurement Penning trap resulted in no ions detected (average count rate of 0.2 ions/ejection), while 250 events were recorded with exactly one ion detected. This number drops by a factor of 5 for two ions detected per ejection and even more significantly for three ions or more. As a result, the so-called z-class analysis, a procedure described in \cite{kellerbauer2003} to estimate the effect of contaminants in ToF-ICR resonances, could not be performed here. To limit the impact of residual contamination, the analysis was performed using only the events where a single ion was detected after the measurement trap.
\begin{figure}
\centering
\includegraphics[scale=0.35]{46Ar_comp.pdf}
\caption{Comparison between the value for the $^{46}$Ar mass excess obtained in this work (red diamond) and the ones obtained in previous works \cite{PhysRevC.9.2067,PhysRevC.22.2449,Matos_thesis}. The black dashed line marks the AME2012 value while the grey band represents the AME2012 one standard deviation \cite{AME2012}. For the red diamond point, the uncertainty is smaller than the size of the point.}
\label{46arcomp}
\end{figure}
A typical resonance is shown in Fig. \ref{46Ar_toficr}. The purity of the resonance is attested by two factors. First, around the free cyclotron frequency of $^{34}$S$^{12}$C$^{+}$ (indicated by the vertical dashed line
in Fig. \ref{46Ar_toficr}) very few ion counts are present between 220 and 240~\textmu s, meaning that very few excited contaminant ions were recorded. Second, close to zero frequency detuning, the time-of-flight distribution does not exhibit a significant number of ion events at times of flight around 330~\textmu s, which would indicate the presence of unexcited contaminant ions.
In the present case, $^{39}$K$^{+}$ (atomic mass $m_{^{\text{39}}\text{K}}$ = 38963706.487(5)~\textmu u \cite{AME2016}) was used as a reference for the magnetic-field calibration. Taking into account the various sources of systematic uncertainties described in \cite{kellerbauer2003}, one obtains the mean frequency ratio in Table \ref{Ar_results}. This translates to an atomic mass excess of ME($^{46}$Ar) = -29771.3(23)~keV. Fig. \ref{46arcomp} shows a comparison between the value from this work and that obtained from previous measurements. When compared to the AME2012 value \cite{AME2012}, our new measurement deviates by 41.3~keV but is 20 times more precise. The AME2012 value was primarily determined through two \emph{Q}-value measurements: one in 1974 using the $^{48}$Ca($^{6}$Li,$^{8}$B)$^{46}$Ar reaction \cite{PhysRevC.9.2067} and another in 1980 using the $^{48}$Ca($^{14}$C,$^{16}$O)$^{46}$Ar reaction \cite{PhysRevC.22.2449}. These results agree with the new mass and were complemented by a measurement performed using the Isochronous Mass Spectrometry technique at the FSR-ESR storage ring (GSI, Germany) \cite{Matos_thesis} in 2004, which also agrees but had no weight in the evaluation due to the larger uncertainty.
\begin{table*}[!ht]
\centering
\caption{Final frequency ratios ($r_{ref,x} = \nu_{c,ref} / \nu_{c,x}$), time-of-flight ratios ($C_{ToF}$) and mass excesses of the argon isotopes measured in this work. Values of the mass excesses from the Atomic-Mass Evaluation 2016 (AME2016) \cite{AME2016} are given for comparison. Values from AME2012 are also given \cite{AME2012} (\# designates an AME2012 extrapolated value). The masses of the reference ions were also taken from AME2016. Experimental half-lives are taken from the NUBASE2016 evaluation \cite{Nubase2016}.}
\begin{ruledtabular}
\begin{center}
\begin{tabular}{c c c c c c c}
& & & & \multicolumn{3}{c}{\textbf{Mass Excess (keV)}} \\ \cline{5-7}
\textbf{Species} & \textbf{Half-life} & \textbf{Reference} & \textbf{ ratio \emph{r} or $C_{ToF}$} & \textbf{This work} & \textbf{AME2016} & \textbf{AME2012} \\ \cline{1-7}
$^{46}$Ar & 8.4(8) s & $^{39}$K & $r_{ref,x}$ = 1.1797680972(640) & -29771.3(23) & -29772.9(11) & -29730(40)\\ \cline{1-7}
$^{47}$Ar & 1.23(3) s & $^{39}$K & $r_{ref,x}$ = 1.2055547092(340) & -25367.3(12) & -25366.3(11) & -25210(90)\\ \cline{1-7}
$^{48}$Ar & 415(15) ms & $^{32}$S$^{16}$O/$^{85}$Rb & $C_{ToF}$ = 0.499715668(560) & -22355(17) & -22280(310) & -22440\# (300)\# \\
\end{tabular}
\end{center}
\end{ruledtabular}
\label{Ar_results}
\end{table*}
\subsection{The $^{47}$Ar mass}
The $^{47}$Ar$^{+}$ ions were well separated from the other contaminants, so that a pure beam was transported to the measurement Penning trap. The details of the ISOLTRAP measurement cycle are summarized in Table \ref{ArTable}. In total, three ToF-ICR resonances were recorded, using quadrupole-excitation times of 100 ms and 200 ms. In addition, two ToF-ICR resonances in the Ramsey-type excitation scheme \cite{George-IntJMassSpectrom.264.110,PhysRevLett.98.162501} were recorded. This excitation scheme is characterized by the application of two short radio-frequency pulses of duration $\tau_{on}^{RF}$ which are coherent in phase and separated by a waiting time $\tau_{off}^{RF}$. For the same total excitation time, this method offers a three-fold precision improvement in the determination of the free cyclotron frequency of an ion when compared to the single-pulse ToF-ICR method. In the present case, a $\tau_{on}^{RF} - \tau_{off}^{RF} - \tau_{on}^{RF}$ = 10~ms - 80~ms - 10~ms excitation scheme was used. A typical example of such a Ramsey resonance is shown in Fig. \ref{47Ar_ramsey}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{Argon47_ramsey.pdf}
\caption{A typical ToF-ICR resonance of $^{47}$Ar$^{+}$ using the Ramsey-type excitation scheme ($\tau_{on}^{RF}-\tau_{off}^{RF}-\tau_{on}^{RF}$ = 10~ms - 80~ms - 10~ms) \cite{George-IntJMassSpectrom.264.110,PhysRevLett.98.162501}. The color map represents the ion events recorded in each (frequency; time-of-flight) bin. The mean and standard deviation of the time-of-flight distribution recorded for each frequency value are shown as open circles, while the red line shows the result of the least-squares adjustment of the theoretical line shape to these data points.}
\label{47Ar_ramsey}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35]{47Ar_comp.pdf}
\caption{Comparison between the value for the $^{47}$
Ar mass excess obtained in this work (red diamond) and the ones obtained in previous works \cite{BENENSON198587,Tu1990,PhysRevLett.84.5062,Gaudefroy2006,Gaudefroy2012}. The black dashed line marks the AME2012 value while the grey band represents the AME2012 one standard deviation \cite{AME2012}. For the red diamond point, the uncertainty is smaller than the size of the point.}
\label{47arcomp}
\end{figure}
Here, $^{39}$K$^{+}$ ions were also used for the calibration of the magnetic field. The mean frequency ratio of Table \ref{Ar_results} can be used to derive an atomic mass excess ME($^{47}$Ar) = -25367.3(12)~keV. Figure \ref{47arcomp} shows the comparison between the new value from this work and previous measurements. Compared to the AME2012 value, our measurement provides a $\sim$ 90-fold improvement in precision and is 157~keV more bound. The AME2012 \cite{AME2012} value is mainly influenced by a measurement of the \emph{Q}-value of the reaction $^{46}$Ar(d,p)$^{47}$Ar \cite{Gaudefroy2006}. In this study the authors reported a 700-keV deviation from a previous measurement obtained from the reaction $^{48}$Ca($^{14}$C,$^{15}$O)$^{47}$Ar \cite{BENENSON198587}. In addition, the AME2012 also includes two time-of-flight measurements of $^{47}$Ar \cite{Tu1990,PhysRevLett.84.5062} which, due to their large uncertainties, bore no significant weight in the evaluation. The close proximity between the mass excesses of $^{46-47}$Ar reported in this work and those tabulated in the AME2016 \cite{AME2016} is due to the fact that a very preliminary version of the results presented in this work was communicated to the AME evaluators. Apart from this preliminary value, the AME2016 \cite{AME2016} also includes a time-of-flight measurement performed at GANIL \cite{Gaudefroy2012}. As shown in Table \ref{Ar_results}, our results dominate the weight in the final AME2016 adjustment.
\subsection{The $^{48}$Ar mass}
\begin{figure*}
\centering
\includegraphics[scale=0.75]{48Ar_bg_sup.pdf}
\caption{\emph{A} = 48 time-of-flight spectrum after 1000 revolutions inside the MR-ToF MS. The spectrum recorded with protons on target results from the sum of 13 consecutive files and is represented in dark-grey. The light-grey spectrum represents a background measurement performed while the protons were turned off and is the sum of 21 consecutive files.}
\label{48Ar_MRToF}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.75]{argon48_fits.pdf}
\caption{The model PDF used to extract the time of flight of $^{48}$Ar$^{+}$. The analysis is performed in a restricted 1.1~\textmu s window. The full PDF is represented as a solid green line, while the dashed blue and dash-dotted red lines represent the contaminant (two Gaussians) and signal (one Gaussian) components, respectively.}
\label{48Ar_models}
\end{figure*}
The previous measurement campaign was followed in 2017 by an experiment targeting the measurement of $^{48}$Ar. In order to establish the presence of the radioactive $^{48}$Ar$^{+}$ isotope in the ISOLDE beam, a reference time-of-flight histogram was built from 21 consecutive files recorded with the MR-ToF MS without protons on target. This histogram was compared with a histogram resulting from the sum of 13 consecutive files recorded with protons on target. To allow the comparison between the two spectra, both were normalized to their total number of recorded events and superimposed. As seen in Fig. \ref{48Ar_MRToF}, the $A$ = 48 ISOLDE beam was found to be dominated by the presence of the stable $^{32}$S$^{16}$O$^{+}$ molecular ion, which was unambiguously identified by measuring its cyclotron frequency in ISOLTRAP's measurement Penning trap. At later times of flight, a double-peak structure corresponding to stable contamination is also visible. The yields of these species were too low to allow for the determination of their cyclotron frequencies using the measurement Penning trap. Their times of flight were compared to a wide variety of singly- and doubly-charged atomic and simple molecular species, but none matched.
With protons on target, a $^{96}$Kr$^{2+}$ peak became clearly visible. When the start of the experimental cycle was synchronized with the proton impact on target, an excess of counts also appeared between the two stable unidentified species, within the expected time-of-flight window for $^{48}$Ar$^{+}$. Varying the RFQ-CB cooling time from 20 to 150 ms, the absolute strength of this signal was extracted using the binned, extended maximum-likelihood estimation method within a restricted time-of-flight window of 1.1~\textmu s \cite{doi:10.1002/9783527653416.ch2}. The probability-density function (PDF) of the fit was composed of the sum of two Gaussian PDFs (describing the two stable contaminants) and a uniform component (to capture the rather high level of baseline background), while the signal component was also described by a Gaussian PDF. In addition, the three Gaussian PDFs were assumed to share the same width parameter. In total, eight parameters were left free during the estimation. We found that the strength of the studied signal decreases when the RFQ-CB cooling time is increased, at a rate consistent with the observed charge-exchange half-life.
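To make the fit model explicit, the following Python sketch implements a binned, extended maximum-likelihood fit of the same composition (three Gaussians sharing one width plus a uniform baseline, eight free parameters) on synthetic data; the peak positions and yields are invented for illustration only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
lo, hi = 0.0, 1.1                      # restricted ToF window (us)

# Synthetic spectrum: two contaminant peaks, one signal peak, baseline.
tof = np.concatenate([rng.normal(0.25, 0.02, 400),
                      rng.normal(0.85, 0.02, 300),
                      rng.normal(0.55, 0.02, 80),
                      rng.uniform(lo, hi, 120)])
counts, edges = np.histogram(tof, bins=110, range=(lo, hi))
centers, w = 0.5*(edges[:-1] + edges[1:]), edges[1] - edges[0]

def expected(p):                       # expected counts per bin
    mu1, mu2, mus, sig, n1, n2, ns, nb = p
    sig = abs(sig)
    dens = (n1*norm.pdf(centers, mu1, sig) + n2*norm.pdf(centers, mu2, sig)
            + ns*norm.pdf(centers, mus, sig) + nb/(hi - lo))
    return dens * w

def nll(p):                            # binned extended negative log-L
    mu = np.clip(expected(p), 1e-9, None)
    return np.sum(mu - counts*np.log(mu))

p0 = [0.24, 0.86, 0.54, 0.03, 350, 350, 100, 100]
fit = minimize(nll, p0, method="Nelder-Mead", options={"maxiter": 20000})
print("fitted signal centroid:", fit.x[2])
\end{verbatim}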
In total, eight MR-ToF MS spectra were used to perform the mass determination of $^{48}$Ar$^{+}$. Each of these spectra results from the sum of 8 to 20 individual files recorded consecutively. Within this set of 8 spectra, as few as 30 and as many as 170 ion counts per spectrum, for a total of 700 ion counts attributed to $^{48}$Ar$^{+}$, were recorded. The same analysis method and parameters as used for estimating the signal strength were kept for the determination of the mean time of flight of $^{48}$Ar$^{+}$.
Figure \ref{48Ar_models} shows a typical example of the adjusted PDF (solid green line). The contaminant component (dashed blue lines) and the signal component (dash-dotted red line) are also represented. For the $A$ = 48 mass determination, the molecular contaminant $^{32}$S$^{16}$O$^{+}$ present in the $A$ = 48 spectrum (atomic mass $m_{^{\text{32}}\text{S}^{\text{16}}\text{O}}$ = 47966985.794(1)~\textmu u \cite{AME2016}) and $^{85}$Rb$^{+}$ (atomic mass $m_{^{\text{85}}\text{Rb}}$ = 84911789.738(5)~\textmu u \cite{AME2016}) provided by ISOLTRAP's offline ion source (see Fig. \ref{isoltrap_sketch}) were used as references. The obtained mean $C_{ToF}$ parameter and its associated uncertainty can be found in Table \ref{Ar_results}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{48Ar_comp.pdf}
\caption{Comparison between the value for the $^{48}$Ar mass excess obtained in this work (red diamond) and those obtained in previous works \cite{Meisel2015,Michimasa2018}. The black dashed line marks the AME2016 value while the grey band represents the AME2016 one standard deviation \cite{AME2016}.}
\label{48arcomp}
\end{figure}
When one of the reference species is part of the same time-of-flight spectrum as the ion of interest, the accuracy of the MR-ToF MS mass measurement is sensitive to any phenomenon affecting the extracted time-of-flight difference between the two species. In this respect, the main source of systematic uncertainty was found to be the shape of the time-of-flight distributions. As seen in Fig. \ref{48Ar_MRToF}, when sufficient statistics are collected, the time-of-flight peaks exhibit clear tailing towards later flight times. For the sake of consistency, the analysis was performed assuming that all peaks are Gaussian distributed.
To quantify the dependence of the estimated time of flight on the presence of these tails, the time-of-flight estimation was performed a second time for the reference species using the asymmetric peak profile described in \cite{LAN20011}. For each reference species ($i=$ 1, 2) the time-of-flight differences $\Delta t_{i}$ with respect to the results from the Gaussian PDF were averaged over the 8 spectra, yielding the average time-of-flight deviations $\overline{\Delta t_{i}}$. These systematic fit deviations $\overline{\Delta t_{i}}$ were then translated into the individual systematic $C_{ToF}$ uncertainty contributions $\Delta C_{ToF}^{fit,i} = \left| \frac{\partial C_{ToF}}{\partial t_{i}} \overline{\Delta t_{i}} \right|$. Finally, all the $\Delta C_{ToF}^{fit,i}$ were added in quadrature to the statistical uncertainty to yield the total $C_{ToF}$ uncertainty. Since the statistics were too low to assess this effect for the $^{48}$Ar$^{+}$ peak, this peak was attributed the same additional uncertainty contribution as the isobaric $^{32}$S$^{16}$O$^{+}$ reference, the rest being purely statistical. This effect contributes 35\% of the final $C_{ToF}$ uncertainty given in Table \ref{Ar_results}. Another source of systematic uncertainty is the so-called peak-coalescence phenomenon \cite{doi:10.1063/1.4796061}, whereby the separation between isobaric species is reduced due to their Coulomb interaction. To mitigate this effect the count rate was always kept below $\approx$8 ions/cycle during the measurement, which has been shown by many cross-check measurements to be a safe limit.
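The analytic partial derivatives entering this propagation follow directly from the definition of $C_{ToF}$; a short symbolic check (a sketch, not part of the analysis code) is:
\begin{verbatim}
import sympy as sp

# Partials of C_TOF = (2 t_x - t1 - t2) / (2 (t1 - t2)) with respect
# to the reference flight times, as used in |dC/dt_i * <dt_i>|.
t_x, t1, t2 = sp.symbols("t_x t_1 t_2", positive=True)
C = (2*t_x - t1 - t2) / (2*(t1 - t2))
print(sp.simplify(sp.diff(C, t1)))   # = (t_2 - t_x)/(t_1 - t_2)**2
print(sp.simplify(sp.diff(C, t2)))   # = (t_x - t_1)/(t_1 - t_2)**2
\end{verbatim}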
Figure \ref{48arcomp} provides a direct comparison between the new value from the present work and previous measurements. Time-of-flight measurements published in 2015 with the S800 spectrometer at the NSCL \cite{Meisel2015} provided the first mass-excess value for $^{48}$Ar. Very recently, another such measurement was reported using the SHARAQ spectrometer at RIKEN \cite{Michimasa2018}. This measurement, in agreement with that of the NSCL, brought a factor of 2.5 improvement in accuracy. Our measurement of the $^{48}$Ar mass excess (see Table \ref{Ar_results}) shows a factor of $\approx$19 improvement in accuracy over the NSCL value, while deviating by 74.8~keV. When compared to the RIKEN measurement, the present value provides a factor of $\approx$7 improvement in accuracy and deviates by $\approx$25~keV. Both deviations are well within one standard deviation of the respective previous values.
\section{Discussion}
The mass values obtained in this work were used to assess the strength of the empirical \emph{N}~=~28 shell gap for argon. To extract nuclear-structure effects from binding energies, one typically investigates the variation with $N$ or $Z$ of finite binding-energy differences, also called mass filters. One such quantity, the two-neutron separation energy $S_{2n}(N,Z)$, is presented in Fig. \ref{s2n_exp} as a function of $N$ for the isotopic chains with $Z=16-20$. $S_{2n}$ is defined as $ME(N-2,Z)-ME(N,Z)+2M_{n}$
where $ME(N,Z)$ represents the mass excess of an isotope with \textit{N} neutrons and \textit{Z} protons and $M_{n}$ is the neutron mass excess. Along an isotopic chain, the $S_{2n}$ values usually follow a steadily decreasing trend,
while at a shell closure, the magnitude of this decrease is markedly larger. Figure \ref{s2n_exp} confirms that the trend of $S_{2n}$ obtained in this work for $Z$~=~18 is not significantly different from the one obtained using the results from \cite{Meisel2015}, from which a strong \emph{N}~=~28 shell gap in the argon chain was inferred.
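As an explicit example, the $S_{2n}$ value of $^{48}$Ar follows from the mass excesses of Table~\ref{Ar_results} and the neutron mass excess (AME2016):
\begin{verbatim}
# Two-neutron separation energy of 48Ar from this work (values in keV).
M_N = 8071.32                        # neutron mass excess (AME2016)
ME = {46: -29771.3, 48: -22355.0}    # this work
s2n_48 = ME[46] - ME[48] + 2*M_N
print(f"S2n(48Ar) = {s2n_48/1000:.2f} MeV")   # ~8.73 MeV
\end{verbatim}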
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.35]{s2n_exp.pdf}
\caption{Experimental trends of S$_{2n}$ in the $N$=28 region for isotopic chains ranging from sulfur to calcium. For the argon isotopic chain the trend obtained from the AME2012 \cite{AME2012} is represented as open diamonds, the trend extracted from the 2015 NSCL time-of-flight measurements is represented as orange diamonds \cite{Meisel2015} and the trend from this work is shown as blue circles. The values for all the other chains are extracted from the AME2016 mass evaluation \cite{AME2016}. The black circle was obtained using values from \cite{Jurado2007}, which are not included in the AME.}
\label{s2n_exp}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35]{d3n_exp.pdf}
\caption{Three-point estimator of the pairing gap for the calcium, argon and sulfur ($Z$~=~20, 18, 16, respectively) isotopic chains. The calcium and sulfur values are extracted from the AME2016 \cite{AME2016} and are represented as open squares and triangles, respectively. For the argon isotopic chain, values extracted from the AME2012 \cite{AME2012} are represented as open circles, while the orange diamonds represent the trend obtained from the NSCL 2015 measurements \cite{Meisel2015}. The trend obtained from this work is represented as blue circles.}
\label{d3n_exp}
\end{figure}
In order to examine the strength of the empirical shell gap at \emph{N} = 28 more directly, Fig.~\ref{d3n_exp} shows another mass filter, namely the three-point estimator of the pairing gap, defined as $\Delta_{3n}(N,Z) = \frac{(-1)^{N}}{2} \left[ ME(Z,N+1) - 2 ME(Z,N) + ME(Z,N-1) \right]$. This quantity is usually discussed in the context of the study of the odd-even staggering of binding energies, but at the crossing of a neutron-shell closure $N_{0}$ this staggering is enhanced and
$\Delta_{3n}(N_{0},Z)$ is then directly related to the one-neutron empirical shell gap according to: $\Delta_{1n}(N_{0},Z) = S_{1n}(N_{0},Z)-S_{1n}(N_{0}+1,Z) = 2\times \Delta_{3n}(N_{0},Z)$. The strength of the empirical one-neutron shell gap in $^{46}$Ar estimated from this work is $\Delta_{1n}(28,18) =$ 4.405(4)~MeV. This value is in agreement with that obtained from the study of the $^{46}$Ar(d,p)$^{47}$Ar reaction \cite{Gaudefroy2006}. As a result, even if all the masses measured in this work are found to be more bound than in \cite{AME2012,Meisel2015}, they reveal a net reduction of the $N$ = 28 one-neutron empirical shell gap in the argon chain by 73~keV. In addition, compared to $^{48}$Ca, $^{46}$Ar exhibits an $N$ = 28 shell gap which is 402(4)~keV smaller (see Fig.~\ref{d3n_exp}). Given the doubly magic character of $^{48}$Ca, investigating only the systematics of the mass surface, one would conclude that the $N$ = 28 shell is a quite robust shell closure down to $Z$ = 18, thus confirming the findings of \cite{Meisel2015}. On the contrary, the demise of the $N$ = 28 shell closure in the sulfur chain is suggested by the strong reduction of the one-neutron shell gap between $Z$ = 18 and $Z$ = 16, although the large uncertainty calls for precision mass measurements.
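The quoted value of $\Delta_{1n}(28,18)$ can be reproduced from the measured masses; note that the $^{45}$Ar mass excess entering the three-point formula is taken here from AME2016 ($\approx -29770.6$~keV), an external input rather than a result of this work.
\begin{verbatim}
# One-neutron empirical shell gap at N = 28 in the argon chain.
ME = {45: -29770.6,                # AME2016 input (assumption)
      46: -29771.3, 47: -25367.3}  # this work (keV)
d3n = 0.5*(ME[47] - 2*ME[46] + ME[45])   # (-1)^N = +1 for N = 28
print(f"Delta_1n(28,18) = {2*d3n/1000:.3f} MeV")  # ~4.405 MeV
\end{verbatim}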
In order to gain further insight into the physics at play within this region of the nuclear chart, the binding-energy trends obtained in this work were confronted with predictions from various theoretical approaches. To this end, mean-field calculations of even-even and odd-even argon isotopes were performed using the UNEDF0 energy-density functional \cite{PhysRevC.82.024313}. For these calculations a surface-volume-type pairing interaction was chosen. Its strength was kept fixed, since it is adjusted simultaneously with the other UNEDF0 functional parameters. The HFBTHO code, which solves the HFB equations enforcing axial symmetry \cite{STOITSOV20131592}, was used. The odd-\emph{N} isotopes were computed by performing quasi-particle blocking within the so-called equal-filling approximation \cite{PhysRevC.78.014304}. The Lipkin-Nogami prescription was used for approximate particle-number restoration. The obtained trend of $\Delta_{3n}(N,Z)$ is presented in Fig.~\ref{d3n_theo_ar}. A first observation is that none of the characteristic features indicative of a shell closure at $N$ = 28 are reproduced. Furthermore, the overall scale of the predicted $\Delta_{3n}(N,Z)$ trend is greatly underestimated. This indicates that the adjusted UNEDF0 pairing strength is too weak to correctly describe this region of lighter nuclides.
\begin{figure}
\centering
\includegraphics[scale=0.35]{d3n_theo.pdf}
\caption{Comparison between the three-point estimator of the pairing gap for the argon chain obtained from this work and the ones predicted by the UNEDF0 density functional, the \emph{SDPF-U} shell model and the {\it ab initio} VS-IMSRG.}
\label{d3n_theo_ar}
\end{figure}
The spectroscopic results in this region are believed to be well understood within the framework of the phenomenological shell model \cite{Gaudefroy2006,PhysRevC.81.064329,Bhattacharyya2008}. Thus, calculations were performed using the \emph{m-scheme} shell-model code ANTOINE \cite{antoine1,antoine2} using the \emph{SDPF-U} shell-model interaction \cite{PhysRevC.79.014310}. In the calculation, the neutron valence space spans the entire \emph{sd-pf} shell, while protons are restricted to the \emph{sd} shell. An additional constraint is that particle excitations between the \emph{sd} and \emph{pf} shells are forbidden (i.e., a so-called $0\hbar \omega$ calculation).
The trend of $\Delta_{3n}(N,Z)$ obtained from the calculated argon ground states is shown in Fig.~\ref{d3n_theo_ar}. A 250-keV offset notwithstanding, the agreement between theory and experiment is excellent, highlighting the ability of the \emph{SDPF-U} interaction to not only reproduce spectroscopy along the argon isotopic chain \cite{PhysRevC.81.064329}, but also binding-energy systematics.
\begin{figure}
\centering
\includegraphics[scale=0.35]{cor_energy.pdf}
\caption{Evolution of the ground-state correlation energy calculated using the \emph{SDPF-U} shell-model interaction \cite{PhysRevC.79.014310} as a function of the proton number $Z$ for the \textit{N} = 27 (dashed blue line), 28 (dash-dotted green line), 29 (solid red line) isotones.}
\label{cor_energy}
\end{figure}
The presence of a strong shell closure at $N$ = 28 should be characterized by the predominance of the $\nu(1f_{7/2})^{8}$ \emph{natural} configuration in the ground-state wavefunction of even Ar isotopes. A so-called \emph{intruder} configuration would be characterized by the promotion of at least one such $1f_{7/2}$ neutron to higher-energy orbitals.
Hence, in agreement with \cite{Bhattacharyya2008}, our calculations show that the ground state of the doubly magic $^{48}$Ca isotope is built at $\approx$90 \% on the \emph{natural} configuration, while the ground states of $^{46,48}$Ar are built at only $\approx$50 \% on this same configuration. In addition, the \emph{monopole} and \emph{multipole} energy contributions to the calculated ground-state energies were extracted. While the \emph{monopole} energy represents single-particle contributions, of spherical Hartree-Fock type, the \emph{multipole} energy was shown to represent the contribution of correlations to the total energy of a calculated shell-model state \cite{Dufour1996}. The evolution of the calculated ground-state correlation energy for $Z$ = 14-20 and $N$ = 27-29 is shown in Fig. \ref{cor_energy}. In agreement with \cite{PhysRevC.81.064329}, we find a rapid increase of the correlation energy south of $^{48}$Ca. In $^{48,49}$Ca, correlations account for $\approx$2~MeV of the total energy of the ground state. On the contrary, for $^{46,47}$Ar the correlation energy already reaches $\approx$11~MeV, with only two protons removed from the closed calcium proton core. In comparison, the measured strength of the one-neutron empirical shell gap is close to $\approx$4.8~MeV and $\approx$4.4~MeV for $^{48}$Ca and $^{46}$Ar, respectively. As a result, in agreement with previous shell-model studies performed with the phenomenological \emph{SDPF-U} interaction, we find that the ground states of the studied argon isotopes do not exhibit the expected characteristics of a typical closed-shell nucleus, but rather suggest that collectivity is already emerging only two protons below $^{48}$Ca.
This observation establishes the argon chain as the transitional point from the closed-shell region around calcium towards a region of collectivity below $Z$ = 18. This conclusion is also supported by other experimental evidence \cite{Gade2005,Bhattacharyya2008}, the most compelling of which is the spectroscopic factor from the $^{46}$Ar(d,p)$^{47}$Ar transfer reaction \cite{Gaudefroy2006}. Indeed, this reaction populates a $7/2^{-}$ state in $^{47}$Ar for which the (model-dependent) vacancy was determined to be 1.36(16). Again, this contradicts the expectations of a naive shell-model picture of a closed-shell $^{46}$Ar. As a result, the conclusion of a strong shell closure in $^{46}$Ar drawn from the mass systematics alone \cite{Meisel2015} must be nuanced in light of the wealth of experimental and theoretical data.
\begin{figure}
\centering
\includegraphics[scale=0.35]{d3n_reg.pdf}
\caption{Comparison between the empirically determined pairing-gap trend and the one obtained in VS-IMSRG calculations for the calcium (red), argon (blue) and sulfur chains (green). For the calcium and sulfur isotopic chains the experimental values from AME2016 \cite{AME2016} are represented as open squares and triangles, respectively. For the argon isotopic chain, values extracted from AME2012 \cite{AME2012} are represented as open circles while the plain blue circles are values from this work. The VS-IMSRG predictions are represented as dashed, dotted and solid lines for the calcium, sulfur and argon chains, respectively.}
\label{d3n_reg}
\end{figure}
The ground states of the measured argon isotopes were also examined using the {\it ab initio} VS-IMSRG approach \cite{Tsuk12SM,Bogn14SM,Stro16TNO,Stro17ENO,Stro19ARNPS}. The spectroscopic quality of this approach has recently been studied in light of the first measurement of the $2_{1}^{+}$ state in $^{52}$Ar \cite{Liu2019}. While the \emph{SDPF-U} phenomenological interaction provided the best overall description of the evolution of the $2_{1}^{+}$ states along the argon chain, the VS-IMSRG approach nonetheless reproduced this trend reasonably well up to $^{52}$Ar. In this work we start from the 1.8/2.0 (EM) NN+3N interactions developed in \cite{Hebe11fits,Simo17SatFinNuc}, which reproduce the ground-state energy systematics, including the location of the proton and neutron driplines, of nuclei throughout the light to medium-mass regions \cite{Simo16unc,Hag16,Rui16,Simo17SatFinNuc,Holt2019}. Details of the calculations are the same as those given in \cite{Simo17SatFinNuc}, unless explicitly stated otherwise. In particular, we use the Magnus formulation of the IMSRG \cite{Morr15Magnus,Herg16PR} to construct an approximate unitary transformation to first decouple the $^{28}$O core energy, then a proton $sd$ and neutron $pf$ valence-space Hamiltonian from the full $A$-body problem. In addition, with the ensemble normal-ordering procedure of Ref.~\cite{Stro17ENO}, we approximately include the effects of 3N forces between valence nucleons, such that a specific valence-space Hamiltonian is constructed for each nucleus to be studied. The final diagonalization is performed using the NuShellX@MSU shell-model code~\cite{BROWN2014115}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{d2n_theo.pdf}
\caption{$N$ = 28 two-neutron empirical shell gap for elements ranging from sulfur to chromium. Experimental values are represented as black circles \cite{AME2016}. The open circle represents the value from this work while the open diamond represents the value extracted from \cite{Jurado2007} (not included in the AME evaluation).}
\label{d2n}
\end{figure}
The trend of $\Delta_{3n}(N,Z)$ obtained from these calculations is also shown in Fig.~\ref{d3n_theo_ar}, revealing that the experimental $\Delta_{3n}(N,Z)$ trend is also very well reproduced along the entire argon chain, particularly the magnitude of the one-neutron empirical shell gap. Figure~\ref{d2n} shows the $N$ = 28 two-neutron shell gap, defined as $\Delta_{2n} = S_{2n}(N,Z) -S_{2n}(N+2,Z)$, obtained from various theoretical approaches as a function of $Z$. The VS-IMSRG prediction for the two-neutron shell gap at $Z$~=~18 is also in good agreement with the one obtained from the masses measured in this work, despite modestly overestimating it by $\approx$500~keV. The $N$~=~28 two-neutron gap for the calcium chain is however overestimated by more than 1~MeV. Nonetheless, we see that the \emph{ab initio} approach of the VS-IMSRG offers a consistent framework for predicting the systematics of ground- and excited-state energies simultaneously throughout the argon chain.
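For reference, the empirical $Z$~=~18 point of Fig.~\ref{d2n} follows from the measured masses; the $^{44}$Ar mass excess is taken from AME2016 ($\approx -32673$~keV) and is an external input, not a result of this work.
\begin{verbatim}
# N = 28 two-neutron empirical shell gap for argon (values in keV).
M_N = 8071.32
ME = {44: -32673.4,                      # AME2016 input (assumption)
      46: -29771.3, 48: -22355.0}        # this work
s2n = lambda n: ME[n - 2] - ME[n] + 2*M_N
print(f"D2n(28,18) = {(s2n(46) - s2n(48))/1000:.2f} MeV")  # ~4.5 MeV
\end{verbatim}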
We also examined the composition of the wavefunctions obtained within the VS-IMSRG approach. In complete analogy with the conclusions drawn from the phenomenological \emph{SDPF-U} interaction, we find that the ground state of $^{46}$Ar is built at only $\approx$40 \% on the \emph{natural} $\nu(1f_{7/2})^{8}$ configuration, while the ground state of the benchmark doubly closed-shell $^{48}$Ca nucleus is built at $\approx$90 \% on that same configuration. In addition, to assess the quality of the VS-IMSRG predictions in this transitional region, we also performed calculations in the sulfur isotopic chain. The trend of $\Delta_{3n}(N,Z)$ obtained from these calculations is shown in Fig.~\ref{d3n_reg}. Here we see that not only are the magnitudes of the empirical one-neutron shell gaps in both $^{48}$Ca and $^{46}$Ar well reproduced, but also that the erosion of the $N$ = 28 shell closure, as extracted from the mass systematics in the sulfur chain \cite{PhysRevLett.84.5062,Jurado2007,Ringle2009}, emerges \emph{ab initio}. The marked reduction of the predicted $N$ = 28 two-neutron shell gap from $Z$~=~18 to $Z$~=~16 is also apparent in Fig.~\ref{d2n}. Therefore a precise determination of the $^{45,46}$S masses is highly desirable in order to firmly assess the agreement between theory and experiment. While a systematic study of the entire region is beyond the scope of the present article, the VS-IMSRG offers a promising and consistent framework to guide future experimental efforts in the region of deformation below $^{48}$Ca.
To complete our \emph{ab initio} analysis, many-body calculations within the Gorkov-SCGF approach \cite{PhysRevC.84.064317,Soma14a} were performed for closed- and open-shell isotopes around $N$~=~28 and with $Z = 16-24$.
Medium-mass nuclei around $Z$ = 20 had been previously investigated within this framework \cite{PhysRevC.89.061301,Rosenbusch2015} using the NN+3N(400) chiral Hamiltonian of Refs.~\cite{PhysRevLett.109.052501,PhysRevC.68.041001,Navratil2007}.
That study had revealed a satisfying reproduction of the binding-energy trends (namely two-neutron separation energies) for the Ca, K and Ar chains, although the agreement with experiment worsened when going south of the Ca chain.
The calculations are extended here using two more recent Hamiltonians.
The first such interaction is the NNLO$_{\text{sat}}$~\cite{Ekstrom2015}, which
departs from the traditional strategy of fitting only few-body systems and also includes observables up to $A$ = 25.
This procedure corrects for the poor saturation properties of the original NN+3N(400) Hamiltonian and leads to a reasonable reproduction of binding energies and charge radii up to the nickel chain~\cite{som2019}.
Another Hamiltonian labelled NN+3N(lnl) has been proposed to remedy some of the fundamental shortcomings of the NN+3N(400).
Contrary to NNLO$_{\text{sat}}$, NN+3N(lnl) is adjusted solely on systems with $A$ = 2, 3 and 4.
First benchmark calculations on the O, Ca and Ni chains~\cite{som2019}, as well as applications to K and Ca isotopes~\cite{PhysRevLett.120.062503, Chen19, Sun20}, indicate that it constitutes a valuable addition to the existing chiral Hamiltonians.
The results obtained for elements with $Z = 16-19$ and $Z = 21-24$ with these new Hamiltonians are presented here for the first time. Calculations were performed in a spherical harmonic-oscillator basis including up to 14 major shells (e$_{max}$ = 13) while the three-body matrix elements were restricted to e$_{3max}$ = 16 $<$ 3e$_{max}$. A fixed oscillator frequency $\hbar \Omega $ = 20 MeV was used for the NNLO$_{\text{sat}}$ Hamiltonian, while $\hbar \Omega $ = 18 MeV was chosen for NN+3N(lnl). These correspond to the optimal values for total binding energies in this mass region~\cite{som2019}.
SCGF results with these two Hamiltonians for the $N$~=~28 two-neutron shell-gap as a function of the proton number $Z$ are displayed in Fig.~\ref{d2n}.
First, we observe that both interactions predict the emergence of the $N$~=~28 shell closure in $^{48}$Ca and its progressive demise in $^{46}$Ar and $^{44}$S.
Nonetheless, a marked difference between the SCGF-NN+3N(lnl) and SCGF-NNLO$_{\text{sat}}$ values is seen. The latter generally overestimates the strength of the two-neutron gap by several MeV, while the former offers a level of agreement with the experimental data comparable to that of the VS-IMSRG.
It is noteworthy that both the VS-IMSRG and SCGF-NN+3N(lnl) approaches predict a two-neutron gap in $^{44}$S of similar magnitude.
Above $Z$ = 20, the SCGF calculations first follow the experimental trend, displaying a decrease of the gap for scandium and titanium, and then depart from the experimental data for vanadium and chromium.
This disagreement signals the deterioration of the accuracy for doubly open-shell systems that display significant deformation.
Indeed, at present the Gorkov-SCGF framework achieves an efficient treatment of pairing correlations by breaking the U(1) symmetry associated with particle number, but enforces the conservation of rotational symmetry, which leads to an inefficient account of deformation.
While this approach allows one to tackle a large number of open-shell systems that do not depart exceedingly from sphericity, it loses accuracy when quadrupole correlations play a major role, which is presumably the case for nuclei like $^{49}$V and $^{50}$Cr.
The fact that this effect is not seen for the sulfur and chlorine isotopes does not contradict the findings of the shell-model calculations, but rather points to a milder impact of collectivity in those nuclei, at least for the description of their ground states.
\section{Conclusion}
In summary, we performed high-precision measurements of the atomic masses of $^{46-48}$Ar using the ISOLTRAP mass spectrometer at ISOLDE/CERN. Despite severe stable molecular contamination, the masses of $^{46-47}$Ar were successfully measured using the ToF-ICR method in a Penning trap, while the mass of $^{48}$Ar was determined by means of MR-ToF mass spectrometry. No statistically significant deviations from the literature values were found, but the uncertainties were reduced by up to a factor of 90. The trends of nuclear binding energies obtained from the measured masses were used to probe the $N$ = 28 shell closure in neutron-rich argon isotopes. The systematics of the one- and two-neutron shell gaps indicate the presence of a persistent, yet reduced, empirical shell gap in $^{46}$Ar compared to the doubly magic $^{48}$Ca, in accordance with the results of previous mass measurements. More specifically, the one-neutron empirical shell gap is found to be reduced by only 402(4)~keV between $^{48}$Ca and $^{46}$Ar. However, taking into account the wealth of spectroscopic data available and using shell-model calculations performed with the \emph{SDPF-U} interaction, this conclusion must be nuanced. Indeed, $^{46}$Ar is found to form a transition point between the doubly closed-shell $^{48}$Ca and the collective $^{44}$S ground state.
A theoretical investigation of the measured isotopes was also performed using state-of-the-art \emph{ab initio} approaches. The VS-IMSRG calculations reproduce the ground-state energy behavior in the argon chain as well as the phenomenological \emph{SDPF-U} interaction does, thus providing an \emph{ab initio} description of the underlying physics in this region.
SCGF calculations were also performed using two different Hamiltonians, NNLO$_{\text{sat}}$ and the recently derived NN+3N(lnl).
Also in this case a progressive reduction of the empirical two-neutron shell-gap was observed from $Z$ = 20 to $Z$ = 16.
While the SCGF-NNLO$_{\text{sat}}$ results overestimate the strength of the two-neutron shell gap at ($Z,N$)=(18, 28), the SCGF-NN+3N(lnl) results closely follow those obtained from the VS-IMSRG, confirming the very good performance of the NN+3N(lnl) interaction in this mass region.
Accurate mass measurements extending the present study to more neutron-rich argon isotopes approaching $N$ = 32, 34 and to the sulfur isotopes beyond $N$ = 28 are highly desirable to put the predictions from the presented \emph{ab initio} approaches to the test. To this end, the present mass values constitute ideal anchor points for future experimental campaigns reaching further away from stability.
\begin{acknowledgments}
M.M. and D.L. thank L. Gaudefroy for fruitful discussions which helped improve this article. We thank the ISOLDE technical group and the ISOLDE Collaboration for their assistance. We acknowledge support from the Max Planck Society, the German Federal Ministry of Education and Research (BMBF) (Contracts No.~05P12HGCI1, 05P15ODCIA, 05P15HGCIA and 05P18RDFN1), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 279384907 -- SFB 1245, the French IN2P3, the United Kingdom Science and Technology Facilities Council (STFC) (Grants No. ST/P005314/1 and No. ST/L005816/1) and the European Union’s Horizon 2020 research and innovation programme (Grant No. 654002). J.K. acknowledges support from the Wolfgang Gentner Ph.D. scholarship (Grant No. 05E12CHA). Computations were performed at the J\"ulich Supercomputing Center (JURECA). SCGF calculations were performed by using HPC resources from GENCI-TGCC, France (Contract No. A007057392) and the DiRAC DiAL system at the University of Leicester, UK (BIS National E-infrastructure Capital Grant No. ST/K000373/1 and STFC Grant No. ST/K0003259/1).
\end{acknowledgments}
\section{Introduction}
Recently Ho\v{r}ava has proposed a renormalizable theory of
gravity at a Lifshitz point\cite{ho1}, which may be regarded as
a UV complete candidate for general relativity. Very recently,
the Ho\v{r}ava-Lifshitz gravity theory has been intensively
investigated
in~\cite{ho2,ho3,VW,klu,Nik,Nas,Iza,Vol,CH,CHZ,Nis,KS,OR,Kon,CNPS,SVW},
its cosmological applications in
~\cite{cos1,cos2}, and its black hole solutions in
~\cite{bh1,bh2}.
We would like to mention that the IR vacuum of this theory is an
anti-de Sitter (AdS) spacetime. Hence, it is interesting to take a
limit of the theory that may lead to a Minkowski vacuum in the
IR sector. To this end, one may modify the theory by introducing
``$\mu^4R$'' and then take the $\Lambda_W \to 0$ limit. This does
not alter the UV properties of the theory, but it changes the IR
properties. That is, there exists a Minkowski vacuum, instead of
an AdS vacuum.
A relevant issue of (deformed) Ho\v{r}ava-Lifshitz gravity is to
answer the question of whether it can
accommodate a scalar mode $\psi \propto H$, the trace of $h_{ij}$,
in addition to two degrees of freedom for a massless graviton.
Known results were sensitive to the choice of gauge-fixing. If one chooses a
gauge of $n_i=0$ together with a Lagrange multiplier $A$, then
there remains a term of $\dot{H}^2$ in the quadratic action, which
may imply that $H$ is physical, but nonpropagating on the
Minkowski background~\cite{ho1,KS}. On the other hand, choosing a
gauge of $A=E=0$ with a non-dynamical field $B$ leads to two
terms of $c_1\dot{\psi}^2+c_2(\partial_k\psi)^2$, which implies
that a gauge-invariant scalar $\psi$ is a dynamical scalar
degree of freedom~\cite{CHZ}. If the trace $\psi$ is really
propagating on the Minkowski background, the deformed
Ho\v{r}ava-Lifshitz gravity amounts to a scalar-tensor theory.
However, it was known that a choice of gauge-fixing cannot be
done, in general, by substituting the gauge condition into the
action directly\footnote{In order to find propagators, one first
substitutes the gauge condition into the gauge-invariant bilinear
action with parameter $b^2$, then inverts, and finally takes the
limit of $b^2\to \infty$. See Ref.\cite{FPp} for the
gauge-propagator in the Yang-Mills theory, Ref.\cite{HV} for
graviton-propagator in general relativity, Ref.\cite{Stelle} for
graviton-propagator in higher-derivative quantum gravity, and
Ref.\cite{RSS} for graviton-propagator in the Kaluza-Klein
theory.}~\cite{FPp,HV,Stelle,RSS}. Hence, we need to introduce
another approach to confirm the propagation of scalar mode around
Minkowski spacetimes.
In this work, we will not choose any gauge to identify physical
scalar degrees of freedom. One way to identify the physical
degrees of freedom is to treat non-dynamical fields in the
quadratic Lagrangian without fixing a gauge~\cite{FJ,RT}. In this
work, we consider the Lagrangian formalism~\cite{RT} only, because
the Hamiltonian formalism did not work well for Ho\v{r}ava-Lifshitz
gravity and has produced unwanted results for scalar
degrees of freedom~\cite{LPa}. We would like to mention that there
are two kinds of non-dynamical fields: at the level of quadratic
action, a non-dynamical field may enter the action either linearly
or quadratically. As is shown in Eq.(\ref{giaction}), for
$\lambda\not=1$, examples of the latter are two gauge-invariant
modes $w_i$ and $\Pi$. These modes can be integrated out: their
equations of motion can be used to express them in terms of the
dynamical field $\psi$ (which enters the action with a time
derivative), and one then gets rid of them by plugging the resulting
expressions back into the action. Therefore, the number of dynamical
fields is not reduced in this way.
In the other case, a non-dynamical field enters the action only
linearly, without a quadratic term. This is the case for $A$ in
Ho\v{r}ava-Lifshitz gravity and for $\Phi$ in general relativity.
Unlike in the quadratic case,
the corresponding equation is a constraint imposed on
dynamical fields, and thus $A$ is a
{\it Lagrange multiplier}. An important feature is that the
constraint reduces the number of dynamical fields.
This implies that Lagrange multipliers play an important role in
finding physical degrees of freedom.
From the viewpoint of the Faddeev-Jackiw constraint
analysis~\cite{FJ,Jac}, quadratic non-dynamical fields are
superficial constraints and a linear non-dynamical field is a
true constraint. Hence we wish to distinguish the former with
notation (=) from the latter with ($\approx$).
In order to compare the foliation-preserving diffeomorphism
(FDiff) of the Ho\v{r}ava-Lifshitz gravity with others, we
introduce transverse diffeomorphism (TDiff), full diffeomorphism
(Diff), and Weyl-transverse diffeomorphism (WTDiff) for general
relativity in the Appendix.
\section{Deformed Ho\v{r}ava-Lifshitz gravity}
First of all, we introduce the ADM formalism where the metric is
parameterized as
\begin{equation} ds_{ADM}^2= - N^2 dt^2 + g_{ij} \Big(dx^i - N^i dt\Big)
\Big(dx^j - N^j dt\Big)\,. \end{equation}
Then, the Einstein-Hilbert action can be expressed as
\begin{equation} \label{Eins} S^{EH} = \fft{1}{16\pi G} \int d^4x \sqrt{g} N
\Big(K_{ij} K^{ij} - K^2 + R - 2\Lambda\Big)\,, \end{equation}
where $G$ is Newton's constant and extrinsic curvature $K_{ij}$
takes the form
\begin{equation} K_{ij} = \fft{1}{2N} \Big(\dot g_{ij} - \nabla_i N_j -
\nabla_j N_i\Big)\,. \end{equation}
Here, a dot denotes a derivative with respect to $t$
($\,\dot{}\,=\partial/\partial t$).
On the other hand, a deformed action of the non-relativistic
renormalizable gravitational theory is given by~\cite{KS}
\setlength\arraycolsep{2pt} \begin{eqnarray}%
\label{act}S^{dHL}&=&\int dtd^3{\bf x}\, \Big({\cal L}_0 +\mu^4R + {\cal L}_1\Big)\,,\\
{\cal L}_0 &=& \sqrt{g}N\left\{\frac{2}{\kappa^2}(K_{ij}K^{ij}
\label{action1}-\lambda K^2)+\frac{\kappa^2\mu^4(\Lambda_W R
-3\Lambda_W^2)}{8(1-3\lambda)}\right\}\,,\\ {\cal L}_1&=&
\sqrt{g}N\left\{\frac{\kappa^2\mu^4
(1-4\lambda)}{32(1-3\lambda)}R^2 -\frac{\kappa^2}{2w^4}
\left(C_{ij} -\frac{\mu w^2}{2}R_{ij}\right) \left(C^{ij}
-\frac{\mu w^2}{2}R^{ij}\right) \right\}\,.\label{action2}
\end{eqnarray}%
where $C_{ij}$ is the Cotton tensor
\begin{equation} C^{ij}=\epsilon^{ik\ell}\nabla_k\left(R^j{}_\ell
-\frac14R\delta_\ell^j\right).\label{def.K.C} \end{equation}
Comparing ${\cal L}_0$ with Eq.(\ref{Eins}) of general relativity,
the speed of light, Newton's constant and the cosmological
constant are given by
\begin{equation} c=\fft{\kappa^2\mu}{4}
\sqrt{\fft{\Lambda_W}{1-3\lambda}}\,,\qquad
G=\fft{\kappa^2}{32\pi\,c}\,,\qquad \Lambda=\ft32
\Lambda_W\,.\label{cg} \end{equation}
The equations of motion were derived in \cite{cos1} and
\cite{bh1}, but we do not reproduce them here due to their length.
In the limit of $\Lambda_W \to 0$, we obtain the
$\lambda$-Einstein action from ${\cal L}_0+\mu^4 R$ as
\begin{equation}
S^{EH\lambda}=\int dt d^3x \sqrt{g}
N\Bigg[\frac{2}{\kappa^2}\Big(K_{ij}K^{ij}-\lambda K^2\Big)+\mu^4
R\Bigg]\ . \label{SM2} \end{equation} In this case, we have Minkowski
background with ~\cite{KS} \begin{equation} c^2=\fft{\kappa^2\mu^4}{2}\,,\qquad
G=\fft{\kappa^2}{32\pi\,c}\,,\qquad \Lambda=0\,.\label{mcg} \end{equation}
Considering the $z=3$ Ho\v{r}ava-Lifshitz gravity, we have scaling
dimensions of $[t]=-3,[x]=-1, [\kappa]=0,$ and $[\mu]=1$. We wish
to consider perturbations of the metric around the Minkowski
spacetime, which is a solution of the full theory (\ref{act})
\begin{equation} \label{decom1}
g_{ij}= \delta_{ij}+w h_{ij},~N= 1+ wn,~N_i= w n_i. \end{equation} At quadratic
order the action (\ref{SM2}) turns out to be \setlength\arraycolsep{2pt} \begin{eqnarray} \label{EHlambda}
S^{EH\lambda}_2 &=& w^2\int dt d^3x \Bigg\{{1 \over \kappa^2}
\left[{1\over 2} \dot h_{ij}^2 -{\lambda\over 2} \dot h^2 +
(\partial_i n_j)^2 +(1-2\lambda) (\partial \cdot n)^2 - 2
\partial_i n_j(\dot h_{ij} -\lambda \dot h \delta_{ij})\right]
\nonumber\\
&&\phantom{x} + {\mu^4\over 2} \left[ -\frac{1}{2}(\partial_k
h_{ij})^2+\frac{1}{2}(\partial_i h)^2 +(\partial_i
h_{ij})^2-\partial_i h_{ij}\partial_j h + 2 n (\partial_i
\partial_j h_{ij}-\partial^2 h) \right]\Bigg\} . \end{eqnarray}
In order to analyze the physical degrees of freedom completely, it
is convenient to use the cosmological decomposition in terms of
scalar, vector, and tensor modes under spatial rotations
$SO(3)$~\cite{MFB}
\setlength\arraycolsep{2pt} \begin{eqnarray} \label{pert}
n &=&-\frac{1}{2}A,\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
n_i&=&\Big(\partial_iB+V_i\Big),\label{decom2} \\
h_{ij}&=&\Big(\psi\delta_{ij}+\partial_i\partial_j E+2\partial_{(i}F_{j)}+t_{ij}\Big), \nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \end{eqnarray}
where $\partial^iF_i=\partial^iV_i=\partial^it_{ij}=t^i~_i=0$.
The
last two conditions mean that $t_{ij}$ is a transverse and
traceless tensor in three dimensions. Using this decomposition,
the scalar modes ($A,B,\psi,E$), the vector modes ($V_i,F_i$), and
the tensor modes ($t_{ij}$) decouple from each other. These all
amount to 10 degrees of freedom for a symmetric tensor in four
dimensions.
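Explicitly, the count proceeds as follows: each transverse vector carries two independent components, while the transverse-traceless tensor carries $6-3-1=2$,
\begin{equation}
\underbrace{(A,B,\psi,E)}_{4}\;+\;\underbrace{(V_i,F_i)}_{2\times 2}\;+\;\underbrace{t_{ij}}_{2}\;=\;10\,.
\end{equation}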
Before proceeding, let us check dimensions. We observe
$[n]=0,[n_i]=2,$ and $[h_{ij}]=0$, which imply
$[A]=0,[B]=1,[V_i]=2,[\psi]=0,[E]=-2,[F_i]=-1,$ and $[t_{ij}]=0$.
The Lagrangian is obtained by substituting (\ref{decom2}) into the
quadratic action (\ref{EHlambda}) as
\setlength\arraycolsep{2pt} \begin{eqnarray} \label{fehl} S^{EH\lambda}_2 &=&
\int dtd^3x \left\{
\frac{w^2}{2\kappa^2} \left[ 3(1-3\lambda)\dot{\psi}^2
+2\partial_i\omega_j\partial^i\omega^j
-4\left((1-3\lambda)\dot{\psi}+(1-\lambda)\partial^2\dot{E}\right)\partial^2B\right.
\right. \nonumber\\
&& \left.~~~~~~~~~~~~~
+4(1-\lambda)(\partial^2B)^2+2(1-3\lambda)\dot{\psi}\partial^2\dot{E}
+(1-\lambda)(\partial^2\dot{E})^2+ \dot{t}_{ij}\dot{t^{ij}}\right]
\nonumber\\
&&\left.~~~~~~~~~~~~~
+\frac{\mu^4w^2}{4}\left[2\partial_k\psi\partial^k\psi
+4A\partial^2\psi -\partial_k t_{ij}\partial^k
t^{ij}\right]\right\} \end{eqnarray} with $w_i=V_i-\dot{F}_i$.
On the other hand, the higher order action obtained from ${\cal
L}_1$ takes the form \setlength\arraycolsep{2pt} \begin{eqnarray} S^1_2=\int dt d^3x
\frac{\kappa^2\mu^2w^2}{8}\Bigg\{
&-&\frac{1+\lambda}{2(1-3\lambda)} \psi
\partial^4 \psi -\frac{1}{4}t_{ij}\partial^4 t^{ij} \nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
&+&\frac{1}{\mu w^2} \epsilon^{ijk} t_{il} \partial^4
\partial_j t^l~_k+\frac{1}{\mu^2w^4}
t_{ij} \partial^6 t^{ij}
\Bigg\}. \label{1action} \end{eqnarray}
We observe that only the two modes $\psi$ and $t_{ij}$ appear in the
higher-order action.
Now we are in a
position to discuss the diffeomorphism in the $z=3$
Ho\v{r}ava-Lifshitz gravity. Since the anisotropic scaling of
temporal and spatial coordinates ($t\to b^z t, x^i \to b x^i$),
the time coordinate $t$ plays a privileged role. Hence, the
spacetime symmetry is smaller than the full diffeomorphism (Diff)
in the standard general relativity (Einstein gravity). The
Ho\v{r}ava-Lifshitz gravity of $S^{EH\lambda}_2+S^1_2$ should be
invariant under the ``foliation-preserving" diffeomorphism (FDiff)
whose form is given by \begin{equation} t \to \tilde{t}=t+\epsilon^0(t),~~x^i
\to \tilde{x}^i=x^i+\epsilon^i(t,\bf{x}). \end{equation} Using the notation
of $\epsilon^\mu=(\epsilon^0,\epsilon^i)$ and
$\epsilon_\nu=\eta_{\nu\mu}\epsilon^\mu$, the perturbation of
metric transforms as \begin{equation} \delta g_{\mu\nu} \to
\delta\tilde{g}_{\mu\nu}=\delta g_{\mu\nu}+\partial_\mu
\epsilon_\nu+\partial_\nu \epsilon_\mu. \end{equation} Further, making a
decomposition $\epsilon^i$ into a scalar $\xi$ and a pure vector
$\zeta^i$ as $\epsilon^i=\partial^i\xi+ \zeta^i$ with $\partial_i
\zeta^i=0$, one finds the transformation for the scalars \begin{equation}
\label{trans1} A \to \tilde{A}=A-2\dot{\epsilon^0},~\psi \to
\tilde{\psi}=\psi,~ B \to \tilde{B}=B+\dot{\xi},~E \to
\tilde{E}=E+2\xi.\end{equation} On the other hand, the vector and the tensor
take the forms \begin{equation} \label{trans2} V_i \to \tilde{V}_i=V_i
+\dot{\zeta}_i,~F_i\to \tilde{F}_i=F_i+\zeta_i,~t_{ij} \to
\tilde{t}_{ij}=t_{ij}. \end{equation} Considering scaling dimensions of
$[\epsilon^0]=-3$ and $[\epsilon^i]=-1$, we have $[\xi]=-2$ and
$[\zeta^i]=-1$. For the FDiff transformations, gauge-invariant
combinations are \begin{equation} t_{ij},~~w_i=V_i-\dot{F}_i, \end{equation} for tensor
and vector, respectively and \begin{equation} \Big(\psi,~~\Pi=2B-\dot{E} \Big)
\end{equation} for two scalar modes. Finally, we note scaling dimensions:
$[w_i]=2$ and $[\Pi]=1$. We emphasize that $A$ alone remains a
gauge-dependent quantity. For other gauge-invariant scalars
in general relativity, see the Appendix.
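The invariance of $\Pi$ is easily checked from (\ref{trans1}):
\begin{equation}
\tilde{\Pi}=2\tilde{B}-\dot{\tilde{E}}=2(B+\dot{\xi})-(\dot{E}+2\dot{\xi})=2B-\dot{E}=\Pi\,,
\end{equation}
whereas $\tilde{A}=A-2\dot{\epsilon}^0$ cannot be compensated, since $\epsilon^0$ appears in no other scalar transformation.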
\section{$n_i=0$ gauge-fixing}
Firstly, we may consider a gauge of $n_i=0$~\cite{ho1,KS}. It
amounts to the gauge-fixing: \begin{equation} B=0,~V_i=0.\end{equation} Then, the
bilinear action takes the form
\setlength\arraycolsep{2pt} \begin{eqnarray} \label{nizact} S^{EH\lambda}_2 =
\int dtd^3x \Bigg\{
\frac{\omega^2}{2\kappa^2} \Big[ 3(1-3\lambda)\dot{\psi}^2
&+&2\partial_i\dot{F}_j\partial^i\dot{F}^j
+2(1-3\lambda)\dot{\psi}\partial^2\dot{E}
+(1-\lambda)(\partial^2\dot{E})^2+ \dot{t}_{ij}\dot{t^{ij}}\Big]
\nonumber\\
&+&\frac{\mu^4\omega^2}{4}\left[2\partial_k\psi\partial^k\psi
+4A\partial^2\psi -\partial_k t_{ij}\partial^k
t^{ij}\right]\Bigg\}. \end{eqnarray} It is obvious that $A$ is a Lagrange
multiplier and thus it provides a constraint \begin{equation}
\label{aconstraint}
\partial^2\psi\approx0.\end{equation}
It is emphasized that the notation ``$\approx$" is used to denote
the constraint obtained by varying the Lagrange multiplier only.
We may consider $\dot{E}$ and $\dot{F}_i$ as non-dynamical fields
even though they have time derivatives. Since gauge-invariant
quantities are given by $\Pi=2B-\dot{E}$ and $w_i=V_i-\dot{F}_i$,
it seems that canonical variables are not $E$ and $F_i$ but
$\dot{E}$ and $\dot{F}_i$.
Hence,
in order to eliminate these fields, we use their variations \begin{equation}
\partial^2 \dot{E}=-\Big(\frac{1-3\lambda}{1-\lambda}\Big)\dot{\psi},~~\dot{F}_i=0. \end{equation}
Substituting these into the quadratic action, we have the relevant
one
\begin{equation} \label{npert} S^{EH\lambda}_2=\int dt d^3x\Bigg\{
\frac{w^2}{2\kappa^2}\Bigg(
\frac{2(1-3\lambda)}{1-\lambda}\dot{\psi}^2+\dot{t}_{ij}\dot{t}^{ij}
\Bigg) -\frac{\mu^4w^2}{4}
\partial_k t_{ij} \partial^k t^{ij}
\Bigg\}. \end{equation}
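The coefficient of $\dot{\psi}^2$ can be verified directly: up to the overall prefactor, substituting $\partial^2 \dot{E}=-\frac{(1-3\lambda)}{(1-\lambda)}\dot{\psi}$ into the scalar kinetic terms of (\ref{nizact}) gives
\begin{equation}
3(1-3\lambda)\dot{\psi}^2+2(1-3\lambda)\dot{\psi}\,\partial^2\dot{E}+(1-\lambda)\big(\partial^2\dot{E}\big)^2
=(1-3\lambda)\Big[3-\frac{1-3\lambda}{1-\lambda}\Big]\dot{\psi}^2
=\frac{2(1-3\lambda)}{1-\lambda}\,\dot{\psi}^2\,.
\end{equation}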
It is clear that for $\lambda\not=1,1/3$, the scalar ``$\psi$'' is
not a propagating mode on the Minkowski background because of the
constraint (\ref{aconstraint}), while $t_{ij}$ represents a
propagating massless graviton. On the other hand, the bilinear
action to ${\cal L}_1$ leads to
\begin{equation} S^1_2=\int dt d^3x
\frac{\kappa^2\mu^2w^2}{8}\Bigg\{ -\frac{1}{4}t_{ij}\partial^4
t^{ij} +\frac{1}{\mu w^2} \epsilon^{ijk} t_{il} \partial^4
\partial_j t^l~_k+\frac{1}{\mu^2w^4}
t_{ij} \partial^6 t^{ij}
\Bigg\}. \label{1actn} \end{equation}
Plugging \begin{equation} \psi \to \frac{1-\lambda}{2(1-3\lambda)}H,~~t_{ij}\to
\tilde{H}_{ij}\end{equation} into Eqs. (\ref{npert}) and (\ref{1actn}) with
$x^0=ct ~([x_0]=-1,[c]=2)$, one arrives exactly at the quadratic
action of Ref.~\cite{KS}
\setlength\arraycolsep{2pt} \begin{eqnarray} &&S^{HL}_2=\int dx^0 d^3x
\Bigg\{\frac{w^2c}{2\kappa^2}\left[\Big(\partial_0\tilde{H}_{ij}\Big)^2-\frac{\mu^4\kappa^2}{2c^2
} \Big(\partial_k \tilde{H}_{ij}\Big)^2\right]
+\frac{w^2c(1-\lambda)}{4\kappa^2(1-3\lambda)}\Big(\partial_0{H}\Big)^2
\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
&&+ \frac{\kappa^2 \mu^2 w^2}{8c}\Bigg[
-\frac{1}{4}\tilde{H}_{ij}\partial^4 \tilde{H}^{ij} +\frac{1}{\mu
w^2} \epsilon^{ijk} \tilde{H}_{il}
\partial^4
\partial_j \tilde{H}^l~_k+\frac{1}{\mu^2w^4}
\tilde{H}_{ij} \partial^6 \tilde{H}^{ij} \Bigg]\Bigg\}.
\label{senn} \end{eqnarray} Note that for $1/3<\lambda<1$, the kinetic term
of $H$ becomes negative, indicating a ghost instability. Thus, one
may argue that either $\lambda$ runs to $1$ from above in the IR
or $H$ does not couple to matter at all. However, this may not be
a promising way to resolve the ghost problem. A correct answer is
that the scalar mode of $H \propto \psi$ is a nonpropagating mode.
We also see from (\ref{senn}) that the speed of gravitational
interaction is \begin{equation} c_g^2=\frac{\mu^4\kappa^2}{2c^2 }c_0^2, \end{equation}
where $c_0$ is the speed of light. We know that the speed of the
gravitational interaction equals the speed of light to better than
one part in a thousand. Hence, we get that \begin{equation} c^2=\frac{\mu^4\kappa^2}{2} \end{equation}
with the above accuracy, independent of the value of the
couplings.
Finally, we have the quadratic action \setlength\arraycolsep{2pt} \begin{eqnarray} &&S^{HL}_2=\int dx^0
d^3x
\Bigg\{\frac{w^2c}{2\kappa^2}\Big(\partial_\mu\tilde{H}_{ij}\Big)^2
+\frac{w^2c(1-\lambda)}{4\kappa^2(1-3\lambda)}\Big(\partial_0{H}\Big)^2
\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
&&+ \frac{\kappa^2 \mu^2 w^2}{8c}\Bigg[
-\frac{1}{4}\tilde{H}_{ij}\partial^4 \tilde{H}^{ij} +\frac{1}{\mu
w^2} \epsilon^{ijk} \tilde{H}_{il}
\partial^4
\partial_j \tilde{H}^l~_k+\frac{1}{\mu^2w^4}
\tilde{H}_{ij} \partial^6 \tilde{H}^{ij} \Bigg]\Bigg\}.
\label{senf} \end{eqnarray}
\section{$A=0$ and $E=0$ gauge-fixing }
In the perturbation, the lapse function $n$ is a function of $t$
only, and thus, $A$ is a function of $t$. This may allow $A$ to be
a gauge degree of freedom by choosing an initial time $t_0$. Also,
we may choose $E$ as a gauge degree of freedom. In this section,
we start with a gauge-fixing ~\cite{CHZ}: \begin{equation} A=0,~E=0.\end{equation} Then,
the bilinear action takes the form
\setlength\arraycolsep{2pt} \begin{eqnarray} S^{EH\lambda}_2=\int dt d^3x\Bigg\{
\frac{w^2}{2\kappa^2}&\Bigg(&
3(1-3\lambda)\dot{\psi}^2-4(1-3\lambda)\dot{\psi}\partial^2
B+4(1-\lambda)B \partial^4 B \label{qir1} \\
+2\partial_k w_i \partial^k w^i&+&\dot{t}_{ij}\dot{t}^{ij} \Bigg)
+\frac{\mu^4w^2}{4}\Bigg( 2\partial_k\psi
\partial^k \psi -
\partial_k t_{ij} \partial^k t^{ij}\Bigg)
\Bigg\}.
\label{qir2}
\end{eqnarray} For $\psi \to -2 \Psi$, the first line (\ref{qir1}) recovers
that of Ref.\cite{CHZ} with $a=1$ and $\Lambda=0$. We observe
that $B$ and $w_i$ are non-dynamical fields because they do not
have time derivatives. Hence, in order to eliminate these, we use
their variations \begin{equation}
\partial^2 B=\frac{(1-3\lambda)}{2(1-\lambda)}\dot{\psi},~~w_i=0. \end{equation}
Substituting these relations into the quadratic action, we have
the relevant one
\begin{equation} S^{EH\lambda}_2=\int dt d^3x\Bigg\{
\frac{w^2}{2\kappa^2}\Bigg(
\frac{2(1-3\lambda)}{1-\lambda}\dot{\psi}^2+\dot{t}_{ij}\dot{t}^{ij}
\Bigg) +\frac{\mu^4w^2}{4}\Bigg( 2(\partial_k\psi)^2 -
\partial_k t_{ij} \partial^k t^{ij}\Bigg)
\Bigg\}. \end{equation}
\label{physact}
It seems that for $\lambda\not=1,1/3$, the scalar ``$\psi$'' is
propagating on the Minkowski background, in addition to the
massless graviton $t_{ij}$. The reason is that the kinetic
term $(\partial_k\psi)^2$ survives, since the gauge condition
$A=0$ does not impose any constraint. However, the sign of
$(\partial_k\psi)^2$ is opposite to that of $\partial_k t_{ij}
\partial^k t^{ij}$ and thus, it may not lead to a proper
scalar propagation on the Minkowski background.
On the other hand, the bilinear action to ${\cal L}_1$ leads to
\setlength\arraycolsep{2pt} \begin{eqnarray} S^1_2=\int dt d^3x
\frac{\kappa^2\mu^2w^2}{8}\Bigg\{
&-&\frac{1+\lambda}{2(1-3\lambda)} \psi
\partial^4 \psi -\frac{1}{4}t_{ij}\partial^4 t^{ij} \nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
&+&\frac{1}{\mu w^2} \epsilon^{ijk} t_{il} \partial^4
\partial_j t^l~_k+\frac{1}{\mu^2w^4}
t_{ij} \partial^6 t^{ij}
\Bigg\}, \label{1act} \end{eqnarray}
where the first term is a fourth-order derivative term for the scalar
$\psi$. This term survives because the gauge condition $A=0$ was
chosen.
\section{Without gauge-fixing}
One may identify physical degrees of freedom, without fixing any
gauge, by treating the non-dynamical fields in (\ref{fehl}) properly.
First of all, we express the quadratic action (\ref{fehl}) in
terms of gauge-invariant quantities of the scalar, vector, and
tensor modes as \setlength\arraycolsep{2pt} \begin{eqnarray} S^{EH\lambda}_2 =
\int dtd^3x &\Bigg\{&
\frac{w^2}{2\kappa^2} \Big[ 3(1-3\lambda)\dot{\psi}^2
-2w_i\bigtriangleup\omega^i
-2(1-3\lambda)\dot{\psi}\bigtriangleup \Pi+(1-\lambda)(\bigtriangleup\Pi)^2 \nonumber \\
\label{giaction}&+& \dot{t}_{ij}\dot{t}^{ij}\Big]
+\frac{\mu^4w^2}{4}\left[-2\psi\bigtriangleup\psi
+4A\bigtriangleup\psi + t_{ij} \bigtriangleup t^{ij}\right]\Bigg\}
\end{eqnarray} with $\bigtriangleup=\partial_i\partial^i=\partial^2$. We
note that $S_2^1$ in (\ref{1action}) contains only $\psi$ and
$t_{ij}$, which are also gauge-invariant. It is emphasized again
that ``$A$" is not a gauge-invariant quantity and thus, it should
be eliminated in the consistent quadratic action. Fortunately,
this is possible because it belongs to a Lagrange multiplier,
irrespective of any value $\lambda$.
Before proceeding, we mention two special cases: $\lambda=1/3$ and
$\lambda=1$. Plugging $\lambda=1/3$ into the above action, we have
a term like $\dot{\psi}^2$. In addition, we have two non-dynamical
fields ($w_i,~\Pi$) and one Lagrange multiplier ($A$) which
provide two relations and one constraint as, respectively\begin{equation}
\bigtriangleup w_i=0,~~\bigtriangleup
\Pi=0,~~\bigtriangleup\psi\approx 0. \end{equation} This implies that the
$t_{ij}$ are the only propagating modes. Similarly, for
$\lambda=1$, one definitely has
no scalar mode $\psi$ because of one relation and two constraints from
one non-dynamical field ($w_i$) and two Lagrange multipliers
($\Pi,~A$):
\begin{equation}
\bigtriangleup w_i=0,~\dot{\psi}\approx 0,~ \bigtriangleup
\psi\approx0.\end{equation}
Note here that for $\lambda\not=1/3,1$, $\Pi$ and $w_i$ are two
non-dynamical fields to be solved to have two relations \begin{equation}
\bigtriangleup
\Pi=\frac{(1-3\lambda)}{(1-\lambda)}\dot{\psi},~~w_i=0. \end{equation}
Substituting these into the quadratic action, we have \setlength\arraycolsep{2pt} \begin{eqnarray}
S^{EH\lambda}_2 &=&
\int dtd^3x \left\{
\frac{\omega^2}{2\kappa^2} \left[
\frac{2(1-3\lambda)}{(1-\lambda)}\dot{\psi}^2
+\dot{t}_{ij}\dot{t^{ij}}\right]\right.\nonumber\\
&&~~~~~~~~~~\left.+\frac{\mu^4\omega^2}{4}\left[-2\psi\bigtriangleup\psi
+4A\bigtriangleup \psi + t_{ij}\bigtriangleup t^{ij}\right]
\right\}.\end{eqnarray} Here we observe that for $1/3<\lambda<1$, a ghost
appears because there is a negative kinetic term for $\psi$. Also,
comparing $-2\psi\bigtriangleup\psi$ with $t_{ij}\bigtriangleup
t^{ij}$, we find a negative spatial derivative term for scalar
$\psi$. Hence it should not be a propagating mode on the
Minkowski background. Since $A$ is a Lagrange multiplier, its
variation provides a constraint \begin{equation} \bigtriangleup\psi\approx 0.
\end{equation} Then, we have the bilinear action without $A$
\begin{equation} S^{EH\lambda}_2 = \frac{\omega^2c}{2\kappa^2}
\int d^4x
\left[
\frac{2(1-3\lambda)}{(1-\lambda)}(\partial_0\psi)^2
+\partial_0{t}_{ij}\partial_0{t^{ij}} -\frac{\mu^4\kappa^2}{2c^2}
\partial_k t_{ij}
\partial^k t^{ij}\right]\end{equation}
with $x^0=ct$. Using $c^2=\mu^4 \kappa^2/2$, we may have the
relativistic action for the graviton \begin{equation} S^{EH\lambda}_2 =
\frac{\omega^2c}{2\kappa^2}
\int d^4x
\left[
\frac{2(1-3\lambda)}{(1-\lambda)}(\partial_0\psi)^2 +{t}_{ij}
\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}} t^{ij} \right],\end{equation}
which implies that the scalar mode is not
propagating even for $\lambda>1$ because it contains
$(\partial_0\psi)^2$ only, while the tensor mode (graviton) is
propagating on the Minkowski background. Here
$\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}=\eta^{\mu\nu}\partial_\mu\partial_\nu$ with
$\eta_{\mu\nu}={\rm diag}(-,+,+,+)$. Finally, the higher-order
action $S^1_2$ is given by (\ref{1actn}). However, this action
does not determine whether a mode is propagating or not.
We would like to mention that $\psi$ is a non-propagating mode
under the $n_i=0$ gauge with a Lagrange multiplier $A$ in Section
3, but it is a propagating mode under the $A=E=0$ gauge with two
non-dynamical fields $B$ and $w_i$ in Section 4. It seems that this
discrepancy originates from the different gauge-fixings.
However, it was known that a choice of gauge-fixing cannot be
done, in general, by substituting the gauge condition into the
action directly~\cite{FPp,HV,Stelle,RSS}. Hence, our approach is a
consistent mathematical formalism for checking the absence of new
degrees of freedom around the Minkowski background.
\section{Discussions}
A hot issue of Ho\v{r}ava-Lifshitz gravity is to clarify whether
it can
accommodate a scalar mode as the trace of $h_{ij}$, in addition to
two degrees of freedom for a massless graviton.
Actually, known results were sensitive to a gauge-fixing. If one
chooses a gauge of $n_i=0 ~(B=V_i=0)$ together with a Lagrange
multiplier $A$ (equivalently, $\partial^2\psi\approx0$) and two
non-dynamical fields ($\dot{E},\dot{F}_i$) there remains a term of
$\dot{H}^2$ in the action, which implies that $H$ is
nonpropagating~\cite{ho1,KS}. On the other hand, choosing a gauge
of $A=E=0$ together with two non-dynamical fields ($B,w_i$) leads
to two terms of $c_1\dot{\psi}^2+c_2(\partial_k\psi)^2$, which may
imply that a gauge-invariant scalar $\psi$ is a propagating scalar
degree of freedom~\cite{CHZ}.
In this work, we did not choose any gauge to identify physical
scalar degrees of freedom. Without fixing a gauge, one could
identify physical degrees of freedom by treating two
non-dynamical fields ($w_i,\Pi,$) and one Lagrange multiplier $
A$ appropriately. This means that Lagrange multiplier plays the
important role in finding physical degrees of freedom. In the
foliation-preserving diffeomorphism (FDiff), gauge-invariant
scalars are $\psi$ and $\Pi$, while the lapse
perturbation``$A\propto n$" is a gauge-dependent scalar. Thus, the
latter should be eliminated from the quadratic action. It is
either a function $A(t)$ when imposing the projectability
condition or a function $A(t,{\bf x})$ without the projectability
condition. Because $A$ is a Lagrange multiplier, we could always
use it to obtain a constraint $\bigtriangleup\psi\approx 0$ and
thus, $\psi$ is not a propagating scalar mode. A gauge-invariant
scalar $\Phi=A-\dot{\Pi}$ emerging in general relativity is split
into a gauge-dependent scalar $A$ and a gauge-invariant scalar
$\Pi$, due to the FDiff. We note that $\Phi$ is a Lagrange
multiplier in TDiff and Diff, just as $A$ is a Lagrange
multiplier in the deformed Ho\v{r}ava-Lifshitz gravity.
We compare FDiff with different diffeomorphisms in general
relativity in Table 1. As the general analysis in the Appendix
shows, it is not easy to have a scalar mode in
four-dimensional general relativity. The TDiff case has less
symmetry than the Diff and WTDiff cases. One has to realize that the
TDiff case has three gauge-invariant scalars, thanks to the
additional condition $\partial_\mu \epsilon^\mu=0$. This case
really provides a scalar mode which is propagating on the
Minkowski background. The two cases of Diff and WTDiff correspond to
enhanced diffeomorphisms. As a result, there are two
gauge-invariant scalars and thus, no propagating scalar mode. The
FDiff of the Ho\v{r}ava-Lifshitz gravity is similar to Diff and
WTDiff cases, which have enhanced diffeomorphisms, compared with
the TDiff. Hence, we expect to have no propagating scalar mode in
the deformed Ho\v{r}ava-Lifshitz gravity.
We would like to mention a couple of recent works. The
authors~\cite{CNPS} have shown that $\psi$ is a scalar degree of
freedom that appears in the massless limit of a massive graviton
(the vDVZ discontinuity~\cite{vDVZ}). Using the Hamiltonian
constraints, the authors~\cite{SVW} have argued that a scalar
mode $\psi$ is propagating around the Minkowski space but
has a negative kinetic term, providing a ghost mode.
Hence, it was strongly suggested that it is desirable to eliminate
this scalar mode if at all possible.
Consequently, we have shown that the deformed Ho\v{r}ava-Lifshitz
gravity has no scalar mode which is propagating on the Minkowski
background.
\begin{table}
\caption{Summary for scalar modes. GR (HL) means general relativity (deformed Ho\v{r}ava-Lifshitz gravity).
GIS denotes gauge-invariant scalars. SDoF and TDoF mean number of scalar and tensor
degrees of freedom, respectively. Here $\Phi=A-2\dot{B}+\ddot{E}=A-\dot{\Pi}$, $\Theta=A-\bigtriangleup E$, and
$\Pi=2B-\dot{E}$. }
\begin{tabular}{|c|c|c|c|c|}
\hline
diffeomorphism & TDiff & Diff & WTDiff &FDiff \\
\hline
Theory & GR & GR & GR & HL \\ \hline
parameters & $a\not=1,b\not=1$ & $a=b=1$ & $a=1/2,b=3/8$& $\lambda\not=1,1/3$ \\ \hline
GIS& $\psi,\Phi,\Theta$ & $\psi,\Phi$ & $\Xi=\psi+\Phi,\Upsilon=\psi+\Theta$& $\psi,\Pi$ \\ \hline
SDoF & 1($\psi$) & 0 &0 & 0 \\ \hline
TDoF & 2($t_{ij}$) & 2($t_{ij}$) & 2($t_{ij}$)& 2($t_{ij}$) \\
\hline
\end{tabular}
\end{table}
{\it Note added}--After the present work was released, relevant
works on the extra scalar mode have appeared on the arXiv. The
authors~\cite{GWBR} have shown that on a cosmological
background, the extra scalar is non-dynamical. One of the present
authors has found that $\psi$ is a scalar degree of freedom related
to the massless limit of the case with Fierz-Pauli mass
terms~\cite{myungm}; however, using Lorentz-violating mass
terms, no such scalar appears in the massless limit.
Also, the authors of \cite{BPS} have found that for a general
background, the extra mode is propagating. The extra mode
satisfies an equation of motion which is first order in time
derivatives; thus, at the linear level, the mode is manifest only
around spatially inhomogeneous and time-dependent backgrounds, with
two serious problems. However, the Minkowski spacetime is a
singular point. Furthermore, the authors of \cite{BS} have shown that
the extra mode is not allowed because of its ghost-like
instability around the Minkowski background.
\section*{Acknowledgement}
Y. Kim was supported by the Korea Research Foundation Grant funded
by Korea Government (MOEHRD): KRF-2007-359-C00007. H. Lee was
supported by KOSEF, Astrophysical Research Center for Structure
and Evolution of the Cosmos at Sejong University. Y. S. Myung was
supported by the Korea Research Foundation (KRF-2006-311-C00249)
funded by the Korea Government (MOEHRD).
\section*{Appendix: General relativity with different diffeomorphisms}
The most general relativistic Lagrangian for a massless symmetric
tensor field $h_{\mu\nu}$ is given by~\cite{ABGV,blas} \begin{equation} {\cal
L}_{GR}= {\cal L}^I+\beta {\cal L}^{II}+ a {\cal L}^{III}+ b{\cal
L}^{IV},\end{equation} where \setlength\arraycolsep{2pt} \begin{eqnarray} {\cal L}^I&=&\frac{1}{4}\partial_\mu
h^{\nu\rho}\partial^\mu h_{\nu\rho},~{\cal
L}^{II}=-\frac{1}{2}\partial_\mu h^{\mu\rho}\partial_\nu
h^\nu~_\rho,\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
{\cal L}^{III} &=& \frac{1}{2} \partial^\mu h \partial^\rho
h_{\mu\rho},~~{\cal L}^{IV}=-\frac{1}{4} \partial_\mu h
\partial^\mu h.
\end{eqnarray} Under a general transformation of the fields $h_{\mu\nu} \to
h_{\mu\nu}+\delta h_{\mu\nu}$, we have up to total derivatives
\setlength\arraycolsep{2pt} \begin{eqnarray} \delta{\cal L}^I&=-&\frac{1}{2}\delta h_{\mu\nu} \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}
h^{\mu\nu},~\delta{\cal L}^{II}=\delta h_{\mu\nu}\partial^\rho
\partial^{(\mu}
h^{\nu)}~_\rho,\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
\delta{\cal L}^{III} &=& -\frac{1}{2} \Big(\delta h
\partial^\mu\partial^\nu h_{\mu\nu}+\delta h_{\mu\nu}\partial^\mu\partial^\nu h\Big),~~\delta{\cal L}^{IV}=\frac{1}{2}
\delta h \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}} h. \end{eqnarray} We note that the vector Lagrangian is
problematic unless $\beta=1$ because it induces a ghost
problem~\cite{blas}. Hence, we choose the $\beta=1$ case. It follows
that the combination \begin{equation} \label{tdiff} {\cal L}_{\rm TDiff}={\cal
L}^I+ {\cal L}^{II}+ a {\cal L}^{III}+ b{\cal L}^{IV},\end{equation} with
arbitrary $a$ and $b$ is invariant under restricted gauge
transformations \begin{equation} \label{gaug1}\delta h_{\mu\nu}=\partial_\mu
\epsilon_\nu+
\partial_\nu \epsilon_\mu \end{equation} with \begin{equation} \label{gaug2}
\partial_\mu\epsilon^\mu=0. \end{equation}
Note that here both $\epsilon^0(t,{\bf x})$ and $\epsilon^i(t,{\bf
x})$ depend on space and time. We call the transformations (\ref{gaug1}) and (\ref{gaug2})
transverse diffeomorphisms (TDiff)~\cite{SV,BVN}. We can obtain
two enhanced gauge symmetries by adjusting parameters $a$ and
$b$: Firstly, $a=b=1$ leads to the Fierz-Pauli Lagrangian which is
invariant under the full diffeomorphisms (Diff), where the
condition (\ref{gaug2}) is dropped~\cite{FP}. This corresponds to
the standard general relativity (Einstein gravity). Secondly,
$a=1/2,b=3/8$ provides Weyl symmetry of $h_{\mu\nu} \to
h_{\mu\nu}+\frac{\phi}{2} \eta_{\mu\nu}$, in addition to TDiff. We
call this enhanced symmetry the Weyl-transverse diffeomorphisms
(WTDiff)~\cite{ABGV}.
Now let us investigate mode propagations when using the TDiff.
Considering the decomposition (\ref{decom1}) with (\ref{decom2}),
we have the same transformations in Eqs.(\ref{trans1}) and
(\ref{trans2}) except replacing $B\to
\tilde{B}=B+\dot{\xi} $ by
\begin{equation} B\to
\tilde{B}=B-\epsilon^0+\dot{\xi} \end{equation} in general relativity. In
this case, using the residual gauge condition of Eq.(\ref{gaug2})
which implies $\dot{\epsilon}_0=\partial^2\xi$, we have three
gauge-invariant scalars, \begin{equation}
\Big(\psi,~\Phi=A-2\dot{B}+\ddot{E},~\Theta=A-\partial^2E\Big).
\end{equation} Substituting (\ref{decom1}) and (\ref{decom2}) into
(\ref{tdiff}) leads to \begin{equation} {\cal L}_{\rm TDiff}= {\cal L}^t_{\rm
TDiff}+{\cal L}^v_{\rm TDiff}+{\cal L}^s_{\rm TDiff}, \end{equation} where
\begin{equation} {\cal L}^t_{\rm TDiff}=\frac{1}{4} t_{ij} \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}
t^{ij},~~{\cal L}^v_{\rm TDiff}=-\frac{1}{2} w_i \bigtriangleup
w^i, \end{equation} for tensor and vector modes and \setlength\arraycolsep{2pt} \begin{eqnarray} {\cal L}^s_{\rm
TDiff}&=& \frac{1}{4} \Big(3 \dot{\psi}^2+\psi \bigtriangleup \psi
-\dot{\Theta}^2-\Theta \bigtriangleup(\Theta-2\Phi)-2
\bigtriangleup \psi (\Phi-\Theta) \Big)
\nonumber} \def\bd{\begin{document}} \def\ed{\end{document} \\
&+& \frac{a}{2}\Big( (\Theta -3
\psi)(\bigtriangleup(\Theta-\psi-\Phi)-\ddot{\Theta}) \Big) \\
&-& \frac{b}{4}\Big( (\dot{\Theta} -3 \dot{\psi})^2+ (\Theta
-3\psi)\bigtriangleup(\Theta-3\psi) \Big) \end{eqnarray} for all scalar
modes. From this decomposition, we realize that $\Phi$ is always a
Lagrange multiplier whose variation yields the constraint \begin{equation}
\bigtriangleup\Big[(1-3a)\psi-(1-a)\Theta \Big]\approx 0. \end{equation} In
this case, the Lagrangian reduces to \begin{equation} {\cal L}^s_{\rm
TDiff}=\frac{Z}{(a-1)^2} \psi \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}} \psi,~~{\rm with}~
Z=\frac{3}{2}\Big(a-\frac{1}{3}\Big)^2-\Big(b-\frac{1}{3}\Big)\end{equation}
which implies that for $b<1/3$, $\psi$ is really a propagating
scalar mode on the Minkowski background. For two cases of $a=b=1$
and $a=1/2,b=3/8$, we have $Z=0$, which implies that these should
be treated separately.
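Indeed, both enhanced-symmetry points can be checked directly:
\begin{equation}
a=b=1:\quad Z=\frac{3}{2}\Big(\frac{2}{3}\Big)^2-\frac{2}{3}=0\,,\qquad
a=\frac{1}{2},\ b=\frac{3}{8}:\quad Z=\frac{3}{2}\Big(\frac{1}{6}\Big)^2-\frac{1}{24}=0\,.
\end{equation}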
In the Diff case of $a=b=1$, only two scalar combinations are
gauge invariant, namely \begin{equation} \Big(\Phi,~\psi\Big). \end{equation} Then, its
Lagrangian takes the form \begin{equation} {\cal L}^s_{\rm Diff}=-\frac{1}{2}
\Big(-2\Phi \bigtriangleup \psi +3\dot{\psi}^2 +\psi
\bigtriangleup \psi\Big). \end{equation} However, since $\Phi$ is a Lagrange
multiplier, its variation leads to $\bigtriangleup \psi\approx0$.
Plugging this into the above, we have
\begin{equation} {\cal L}^s_{\rm Diff}=-\frac{3}{2}
\dot{\psi}^2, \end{equation} which means that $\psi$ is not propagating on
the Minkowski background.
Finally, for the Weyl case of $a=1/2$ and $b=3/8$, we have
two scalar invariants, which are also invariant under TDiff, \begin{equation}
\Big(\Xi=\Phi+\psi,~~\Upsilon=\Theta+\psi\Big). \end{equation} Then, its
Lagrangian is given by \begin{equation} {\cal L}^s_{\rm
WTDiff}=-\frac{1}{96}\Big(2(8\Xi-3~\Upsilon)\bigtriangleup
\Upsilon-6\dot{\Upsilon}^2\Big). \end{equation} However, since $\Xi$ is a
Lagrange multiplier, its variation leads to $\bigtriangleup
\Upsilon\approx 0$. Plugging this into the above, we have
\begin{equation} {\cal L}^s_{\rm WTDiff}=-\frac{1}{16}
\dot{\Upsilon}^2, \end{equation} which means that $\Upsilon$ is not
propagating on the Minkowski background.
\section{Introduction} \label{intro}
The PLANET (Probing Lensing Anomalies NETwork) collaboration
uses a longitudinally-distributed network of telescopes
in the southern hemisphere to perform densely-sampled, precise
photometric monitoring of Galactic microlensing events
with the express goal of searching for anomalies,
especially those that may betray the presence of extrasolar
planets (\cite{planetpilotpaper}) orbiting the lenses.
Multiple lenses, such as lensing binary stars or planetary systems,
generate measurably anomalous light curves if the background
source comes close to a caustic generated by the lens;
direct transits of a caustic can cause sudden and dramatic
changes (amplification $\mathrel{\raise.3ex\hbox{$>$}\mkern-14mu\lower0.6ex\hbox{$\sim$}}$ 10 on time scales of hours)
in the apparent brightness of the source.
Three such microlensing
events were monitored by PLANET in its 1997 Galactic bulge
observing season; here we describe our results for one of these,
{MACHO~97-BLG-28}.
Although typical subgiants and giants in the Galactic bulge
have radii that subtend only $\sim$1$ - $10~microarcseconds ($\mu$as),
the dramatic change in magnification in the vicinity of caustics can
selectively magnify the spatial structure of the source star by huge factors,
momentarily acting as a very high-resolution, large-aperture telescope
trained on the background source. These relatively rare but spectacular
lensing events are particularly important because they can yield otherwise
unobtainable information about the lens and source.
Generally speaking, the only physical parameter
that can be deduced from non-anomalous microlensing light curves
is the characteristic time scale $t_{\rm E}$ of the event,
defined as the time required for the lens to move an
angular distance across the observer's sight line to the source equal
to the angular Einstein radius $\theta_{\rm E}$
\begin{equation}
\theta_{\rm E} = \sqrt{\frac{4GM}{c^2}\,\frac{D_{\rm LS}}{D_{\rm L}\,D_{\rm{S}}}}\,,
\end{equation}
where $M$ is the total mass of the lens and $D_{\rm L}$, $D_{\rm S}$,
and $D_{\rm LS}$ denote the observer-lens, observer-source and
lens-source linear distances, respectively.
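As an order-of-magnitude illustration (the lens mass, distances, and transverse speed below are hypothetical round numbers for a typical bulge event, not measured values for this event), $\theta_{\rm E}$ and $t_{\rm E}$ can be evaluated as follows:
\begin{verbatim}
import math

G, c = 6.674e-11, 2.998e8            # SI units
M_sun, kpc = 1.989e30, 3.086e19      # kg, m
mas = math.radians(1.0/3.6e6)        # one milliarcsecond in radians

# Hypothetical, typical bulge-lensing values:
M, D_L, D_S = 0.3*M_sun, 6.0*kpc, 8.0*kpc
D_LS = D_S - D_L
v_perp = 200e3                       # assumed transverse speed, m/s

theta_E = math.sqrt(4*G*M/c**2 * D_LS/(D_L*D_S))  # Eq. (1)
t_E = theta_E*D_L/v_perp                          # Einstein time

print("theta_E = %.2f mas" % (theta_E/mas))       # ~0.3 mas
print("t_E = %.0f days" % (t_E/86400.0))          # ~17 days
\end{verbatim}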
In caustic crossing events, photometric monitoring can allow
more information to be obtained, including the relative proper motion $\mu$
of the lens-source system and --- if the coverage is
complete over the caustic --- the surface brightness profile of the
source star.
During a caustic transit, the radial surface brightness profile, or limb-darkening, of the source influences the shape of the
resulting light curve (\cite{witt95}).
Photometric data of sufficient precision and temporal resolution
can thus be inverted to deduce the limb-darkening profile
of source stars as distant as the Galactic bulge (\cite{bc96}; \cite{sassiap}), including possible chromatic effects
induced by differential limb-darkening at different
wavelengths
(\cite{bc95a}, 1995b;
\cite{vallsgabaudchromo95}, 1998;
\cite{gouldwelch96}).
Time-resolved spectroscopic monitoring with very large aperture telescopes
may reveal even more information, including detailed information
about stellar atmospheric physics
(\cite{loebsasse95}; \cite{gaudi99}; \cite{heyrovsky99})
and metallicity (\cite{lennon96}) of the sources.
One of the most important observational constraints on
models of stellar atmospheres has come from limb-darkening measurements
of the Sun, a relatively cool dwarf.
Constraints on the atmospheres of other types of stars are more
difficult to obtain. Measurements of radial profiles, or more specifically,
limb-darkening parameters, have been attempted for only a handful of stars
(see \cite{scholz97} and references therein).
Eclipsing binaries, lunar occultations and interferometry
have been used to resolve and measure the surface structure of nearby stars.
The classical method of using the light curves of eclipsing binaries
has provided accurate limb-darkening measurements or
consistency checks in only a few cases (Popper 1984; Andersen 1991;
Ribas {et al.\thinspace}\ 1987), but at least one case
suggests that a non-linear limb-darkening law is indicated
(Milone, Stagg \& Kurucz 1992).
Direct HST imaging of the bright, nearby M2 supergiant Betelgeuse has
resulted in estimates for its limb-darkening and other surface irregularities
(\cite{gildupresstars96}).
Interferometric techniques also require very large, bright stars
(Hanbury Brown {et al.\thinspace}\ 1974; Dyck {et al.\thinspace}\ 1996; Mozurkewich {et al.\thinspace}\ 1991;
Mourard {et al.\thinspace}\ 1997),
where surface structure is often due to spotting or dust envelopes rather than
limb-darkening (Roddier \& Roddier 1985; Wilson {et al.\thinspace}\ 1992).
Although the interferometric signal is
often incompatible with a uniform stellar disk,
with the exception of extreme supergiants like Betelgeuse
(\cite{cheng86}; \cite{burns97}),
theoretical limb-darkening laws are adopted in order to arrive at
better stellar radius determinations
(Quirrenbach {et al.\thinspace}\ 1996; Hajian {et al.\thinspace}\ 1998).
Nevertheless, the technique holds promise for limb-darkening
determinations as it begins to probe into the normal K giant regime
(\cite{dyck98}). Lunar occultations have been used to
resolve stellar surface structure (Bogdanov \& Cherepashchuk
1984, 1990, 1991; Richichi \& Lisi 1990; Di~Giacomo {et al.\thinspace}\ 1991),
but the dust associated with these supergiants complicates
limb-darkening estimates here as well
(Richichi {et al.\thinspace}\ 1991, 1995) and surface profiles from theory are
generally assumed so as to derive better diameters
(White \& Feierman 1987; Richichi {et al.\thinspace}\ 1998).
The precision of microlensing in mapping stellar surface brightness
profiles can thus make an important contribution
to stellar atmospheric theory by providing limb-darkening and spectral
information on stars too faint or too small to be studied by
traditional methods.
We report here our precise and temporally dense photometric
observations of the MACHO-detected microlensing event 97-BLG-28,
which when combined with our spectral typing and modeling
have produced the first limb-darkening measurement
via microlensing, and the first resolution of
surface structure in a star so distant ($\sim$8~kpc).
\section{PLANET Observations of {MACHO~97-BLG-28}}\label{data}
The Galactic bulge microlensing event {MACHO~97-BLG-28}\ was alerted by the
MACHO Collaboration on 29 May 1997\footnote{MACHO alerts are posted
at http://darkstar.astro.washington.edu}
as a normal event expected to reach peak brightness near 10 June 1997.
PLANET began to observe the event directly after the initial alert
as part of its normal monitoring program.
Within hours of the anomalously rapid increase in its brightness
on 14 June 1997, PLANET\footnote{PLANET anomaly alerts are posted
at http://www.astro.rug.nl/$\sim$planet}
and MACHO/GMAN independently issued electronic
alerts announcing {97-BLG-28}\ as a likely caustic-crossing binary event.
Dense photometric PLANET monitoring continued over the peak
of the light curve and for about 6 weeks thereafter, with additional
data taken near the baseline magnitude in October 1997 and at the
beginning of the 1998 season.
A pre-caustic spectrum was taken on 31 May 1997.
\subsection{Photometric Monitoring}
Three PLANET telescopes participated in the monitoring:
the Dutch/ESO 0.92~m at La~Silla, Chile,
the SAAO~1m at Sutherland, South Africa,
and the Canopus~1m near Hobart, Tasmania.
The full PLANET dataset consists of 686 data points taken in 1997
(508 I-Band: 247 La~Silla, 128 SAAO, 133 Tasmania;
178 V-Band: 98 La~Silla, 38 SAAO, 42 Tasmania) and another
10 data points (5 I-Band; 5 V-band) collected at the
SAAO~1m in March and April of 1998.
For details concerning the telescopes, detectors, and field sizes,
refer to Albrow {et al.\thinspace}\ (1998).
The data were reduced
with D{\sc o}PHOT\ (\cite{dophot})
using fixed position catalogs and four stars in the
field chosen to be stable, moderately bright and relatively uncrowded
as a relative flux standard (\cite{planetpilotpaper}).
Instrumental magnitudes were calibrated against contemporaneous
observations of Johnson-Cousins, UBV(RI)c E-region
standards (Menzies {et al.\thinspace}\ 1989) made at SAAO at the beginning of the
1998 season.
As we describe in \S3.1, individual
zero points for photometry at different PLANET sites allow us
to refer all our multi-site data to the SAAO natural system, and thus
to the standard system. Measurements taken in 1998 and
calibrated in the manner described above produced
baseline magnitudes of the source star of Johnson
$V = 17.91$ and Cousins $I = 15.66$, accurate to 0.05~mag.
Whenever the event was
fainter than $I \approx 14.7$, measurements derived from frames
with poor image quality (FWHM $> 2.2\arcsec$) were clearly and systematically
fainter than those from moderate to high quality images,
and had a scatter that was substantially larger than the
D{\sc o}PHOT-reported error bars.
In order to avoid systematically biasing our modeling,
measurements with $I \mathrel{\raise.3ex\hbox{$>$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 14.7$ (corresponding to
$V \mathrel{\raise.3ex\hbox{$>$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 17$) and FWHM$ > 2.2\arcsec$ were excluded from the fitting
procedure to extract lens and source parameters.
In addition, one particularly poor image that produced
measurements that deviated by 0.75~mag for constant stars
as bright as $I = 14$ was also excluded. This high-quality subset of our
data\footnote{This PLANET dataset for {MACHO~97-BLG-28}\ is publicly available
at http://www.astro.rug.nl/$\sim$planet} consists of 586 data points
(431 I-Band: 247 La~Silla, 130 SAAO, 54 Tasmania;
155 V-Band: 98 La~Silla, 41 SAAO, 16 Tasmania) and is displayed in Fig.~1.
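In practice this cut is a simple selection on per-frame quality; a minimal sketch follows (the record field names are illustrative, not those of our reduction pipeline):
\begin{verbatim}
# Exclude a measurement only when it is both faint (I > 14.7,
# i.e. V > ~17) and poorly resolved (FWHM > 2.2 arcsec).
def passes_cut(point):
    faint = point["I_mag"] > 14.7
    blurry = point["fwhm_arcsec"] > 2.2
    return not (faint and blurry)

all_points = [{"I_mag": 15.1, "fwhm_arcsec": 2.5},   # excluded
              {"I_mag": 15.1, "fwhm_arcsec": 1.8}]   # kept
good_points = [p for p in all_points if passes_cut(p)]
\end{verbatim}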
\subsection{Spectroscopy}
On 31 May 1997, prior to the anomalous peak of {MACHO~97-BLG-28},
red and blue spectra were taken of the source star by Dr. M.~Sahu
using the high-throughput EFOSC (ESO Faint Object
Spectrographic Camera) on the ESO~3.6m at La Silla, Chile.
The camera has both imaging and spectroscopic capabilities, and can hold
5 gratings simultaneously so that multiple spectral resolutions and
bandpasses can be easily configured.
A direct image of the field was first obtained with an exposure of 10~sec.
The 512 $\times$ 512 pixel Tektronix CCD provided a
5.2$\arcmin \times$ 5.2$\arcmin$ field of view.
No filter was used in order to minimize any possible offset between the
slit and the direct image.
A slit of 1\arcsecpoint5 width was used for all the observations,
and was placed at the parallactic angle.
Observations were conducted through thin cirrus clouds
in seeing of $\sim$1\arcsecpoint0.
The blue spectrum was obtained at 04:40 UT using the B150 grating,
which covers 3900-5300 \AA, with a resolution of about 5\AA.
The red spectrum was obtained at 05:25 UT
using the O150 grating, which covers the wavelength range 5200 - 6800 \AA,
with the same resolution.
The exposure time in both cases was 2400 sec.
The image processing software packages MIDAS and IRAF were used to
reduce the spectral data. After bias
subtraction, the frames were flat-field corrected with an average
of 5 flat-field images of the dome illuminated with a tungsten lamp.
To ensure good sky subtraction, sky levels were determined from
both sides of the spectrum. The resulting one-dimensional spectrum
was then wavelength calibrated with a He-Ar spectral lamp
and flux calibrated against the standard star LTT~4364.
Shown in Fig.~2 is the resulting combined geocentric (lab-frame)
spectrum for the source star of {97-BLG-28}.
Obvious cosmic rays have been removed.
The spectral resolution is $\sim 5$\AA,
corresponding to a velocity resolution of
about 260 {\rm km~s$^{\hbox{\rm --1}}$}\ at the strong NaI absorption feature near 5890\AA.
The heliocentric correction for the velocities is 9.5 {\rm km~s$^{\hbox{\rm --1}}$}, which
is negligible compared to the resolution of the spectrum.
\subsection{Radius and Spectral Type of the Source}
Comparison of the flux-calibrated spectrum shown in Fig.~2 with
spectral synthesis codes ({\sc FITGRID} and {\sc SPECFIT} tasks of IRAF)
yields the best match to a K2III star
of solar metallicity, effective temperature $T=4350 \,$K and
surface gravity ${\rm log} ~ g=1.9$. The strong NaI line
is not typical of a star of this type, but the equivalent
width of the line is entirely consistent with interstellar absorption along
this long sight line to the bulge source star at celestial (J2000)
coordinates
$\alpha = \, $18$^{\rm h}$00$^{\rm m}$33.8$^{\rm s}$ and
$\delta = \, -$28\deg01$\arcmin$10$\arcsec$,
corresponding to Galactic $l = 2.46$$^{\circ}$\ and $b = - 2.36$$^{\circ}$.
Some ambiguity remains in the spectral solution depending on the
extinction to the source, but the best fit for the reddening of the
spectrum and the radius of the source are
consistent with that derived from an independent analysis
of the color-magnitude diagram of the field, described below.
The radius of the source star was determined by considering its position in
the $(V-I)_{0} - I_{0}$ color-magnitude diagram (CMD) relative to
red clump (RC) stars in the Galactic bulge. Our CMD was derived
from observations taken at the SAAO~1m, calibrated by reference to
E-region standard star baseline data in 1998.
The absolute $I$ magnitude of the peak of the RC luminosity function in
the bulge has been determined to be $I_{RC} = -0.23 \pm$~0.03
(Stanek \& Garnavich 1998; Paczy\'nski \& Stanek 1998).
We have fitted the $I$-magnitude distribution of the RC stars in our sample
using the same function as Stanek \& Garnavich (1998).
Based on the distribution of stars in the CMD,
we find that there is an equal probability that the
{97-BLG-28}\ source star is in
the RC or on the red giant branch (RGB).
Possible values for the average field reddening were
determined by fitting the observed RGB to the
Bertelli {et al.\thinspace}\ (1994) isochrones for ages of 3.1 and 10 Gyr and
metallicities [Fe/H] = $-$0.4, 0, +0.4, resulting in the range of
reddening values $0.8 < E(V-I) < 1.2$.
Stanek's (1998) reddening map of the Galactic plane
indicates a gradient of $\sim$ 0.2 mag across the $3 \times 3$~arcmin
SAAO field; interpolation at the position of the microlensing event
gives $E(B-V) = 0.88$, corresponding to $E(V-I) = 1.10$.
For each plausible value of the reddening, we used the known RC
absolute magnitude to rescale our CMD,
treating the distance modulus as a free parameter.
For each isochrone fit to the RGB for a given age and metallicity, other
isochrones were also plotted to allow for the possibility that the
source star may differ in age or metallicity
from the RGB mean. In all cases for which a reasonable fit
($\Delta(V-I)_{0} < 0.1$~mag) was found
for the source being on the RGB or in the RC,
the source radius was calculated from the appropriate isochrone.
Reasonable consistency emerged among the 25 resulting radius
values, with all in the range $13 < R_*/R_\odot < 20$,
and a mean value of $R_*/R_\odot = 15 \pm 2$ (1-$\sigma$ uncertainty).
This result will
be used together with the modeling in \S\ref{params} to estimate the
relative proper motion of the lens with respect to the source.
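A rough consistency check (a back-of-the-envelope sketch, not the isochrone method used above) combines the spectroscopic temperature with an assumed luminosity via the Stefan-Boltzmann relation $R/R_\odot=(L/L_\odot)^{1/2}(T_\odot/T_{\rm eff})^2$; the luminosity below is an illustrative value for a bulge K giant, not a measured one:
\begin{verbatim}
T_sun, T_eff = 5772.0, 4350.0   # K; T_eff from the spectral fit
L = 75.0                        # assumed luminosity in L_sun
R = (L**0.5)*(T_sun/T_eff)**2   # Stefan-Boltzmann, in R_sun
print("R = %.1f R_sun" % R)     # ~15 R_sun for these inputs
\end{verbatim}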
In Fig.~3, we show our CMD for the field of {97-BLG-28}, dereddened
with our fitted value of $E(V-I) = 1.095$ ($A_I = 1.56$)
over the whole of the field.
The position of the observed (blended) source star at baseline is shown as the
filled circle in Fig.~3, where it is assumed that the
source suffers the same reddening as the field mean.
Our derived dereddened color of $(V-I)_0 = 1.16$
from CMD considerations is in excellent agreement with the
$(V-I)_0 = 1.14$ (Bessell 1990) expected for our K2 giant spectral typing
of this source.
\section{An Extended-source, Binary-lens Model for {MACHO~97-BLG-28}}\label{model}
The abrupt change of slope in the light curve near
HJD - 2449719 = 896.4 evident in Fig.~4 suggests that the source star
has crossed the caustic structure of a binary lens.
The single nearly-symmetric finite peak near HJD - 2449719 = 895.6
further suggests that the extent of the source is resolved
by the observations and that the source passed behind an isolated cusp
of the caustic geometry on a nearly perpendicular trajectory.
This lensing geometry is indeed borne out by our modeling of the light curve
for {97-BLG-28}, as can be seen in Fig.~5. To reproduce the features
in the observed light curve it was necessary to include
lens binarity, source blending, extended source size, and
source limb-darkening in this model. Together with
allowances for possible photometric offsets between the
3 observing sites in both bands, this requires a total of 19 fit parameters,
as we now explain.
We label each subset of the data by observing site and filter.
Let $k$ denote the number of such data subsets.
A model with a point-source and point-lens will require
$3+k$ parameters:
$t_{\rm E} = \theta_{\rm E}/\mu$, the (Einstein) time required for
the lens to move an angular distance $\theta_{\rm E}$ (Eq.~1);
$u_0$, the smallest lens-source angular separation in units
of $\theta_{\rm E}$;
$t_0$, the time at which this smallest separation occurs; and the set of $k$
$m_0^i$, the multi-site, multi-band baseline magnitudes.
Individual baselines are required for each site because although
all data of a given band are referred to the same secondary field
standards, we often find that small discrepancies remain in our
multi-site photometry that appear to be related to differences
in detector resolution and average seeing conditions (Albrow {et al.\thinspace} 1998).
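For reference, the point-lens light curve behind these $3+k$ parameters is the standard Paczy\'nski magnification $A(u)=(u^2+2)/[u\sqrt{u^2+4}]$ with $u(t)=[u_0^2+((t-t_0)/t_{\rm E})^2]^{1/2}$; a minimal sketch follows (the parameter values are arbitrary illustrations, not our fit):
\begin{verbatim}
import math

def magnification(t, t0, tE, u0):
    # Point-source point-lens (Paczynski) magnification.
    u = math.hypot(u0, (t - t0)/tE)
    return (u*u + 2.0)/(u*math.sqrt(u*u + 4.0))

t0, tE, u0 = 895.0, 25.0, 0.1    # illustrative values only
for t in (870.0, 890.0, 895.0, 900.0, 920.0):
    print(t, round(magnification(t, t0, tE, u0), 2))
\end{verbatim}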
Parametrization of a binary lens composed of two objects with masses
$M_1$ and $M_2$ requires an additional 3 parameters:
$d$, the angular binary separation in units of $\theta_{\rm E}$;
$q = M_2/M_1$, the binary mass ratio; and
$\alpha$, the angle between the line from $M_2$ to $M_1$ and
the direction of source proper motion
(relative to the lens).
These parameters and the direction of proper motion are chosen so
that $u_0 \geq 0$, $0 \leq \alpha < 2\pi$, $0 < q \leq 1$,
and the midpoint of the lens system is on the right hand side
of the moving source as viewed by the observer.
For a binary system, the smallest angular separation $u_0$
between lens and source now refers to the midpoint of the lens system, and
the angular Einstein radius $\theta_{\rm E}$ refers to the total mass
$M = M_1 + M_2$.
The baseline magnitudes include all light, including any contribution
from unresolved, non-lensed stars that may be confused with
source light in the observations. To account for the possibility of
such photometric confusion, blending parameters $f_j$ ($j =1\ldots{}n$)
must be added for each of the $n$ observed wave bands
to characterize the fraction of the total light contributed
by the source alone at baseline.
No blending in wave band $j$ corresponds to $f_j = 1$.
The extended size of the source is characterized by an additional
parameter $\rho_\ast \equiv \theta_\ast/\theta_{\rm E}$,
the angular radius of the source in units of the Einstein radius.
Finally, the light intensity profile of the source may be limb-darkened
rather than uniform over the stellar disk.
We adopt a limb-darkening law of the form
\begin{equation}
I_\lambda(\vartheta) = I_\lambda(0) \left[1 - c_\lambda \, (1-\cos \vartheta) - d_\lambda \, (1-\sqrt{\cos \vartheta}\,)\right]\,,
\end{equation}
where $\vartheta$
is the angle between the normal to the stellar surface and the
line of sight.
The coefficients $c_\lambda$ and $d_\lambda$ are wavelength dependent,
requiring an additional $2n$ parameters.
The inclusion of all these effects thus requires $7+k+3n$ fit parameters.
For the 3 PLANET sites, each observing {MACHO~97-BLG-28}\ in $V$ and $I$,
$k = 6$ and $n = 2$, thus necessitating 19 parameters in the full model.
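As an illustrative aside, the parameter bookkeeping above and the law of
Eq.~2 can be sketched in a few lines of Python; the limb-darkening
coefficient values below are placeholders, not the fitted ones of
Table~\ref{fpartab}.
\begin{verbatim}
import numpy as np

def surface_brightness(cos_theta, I0, c, d):
    # Square-root limb-darkening law of Eq. 2; cos_theta is the cosine
    # of the angle between the surface normal and the line of sight
    # (1 at disk center, 0 at the limb).
    return I0 * (1 - c * (1 - cos_theta) - d * (1 - np.sqrt(cos_theta)))

print(surface_brightness(0.5, 1.0, c=0.5, d=0.3))  # placeholder values

# Parameter count of the full model: (3 + k) point-source point-lens
# parameters, +3 for lens binarity, +n blending fractions, +1 source
# size, +2n limb-darkening coefficients = 7 + k + 3n.
k, n = 6, 2                           # 3 sites x 2 bands; V and I
print((3 + k) + 3 + n + 1 + 2 * n)    # -> 19
\end{verbatim}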
\subsection{Fitting the Model to the Data}
We fit three different extended-source binary-lens models, with either a
uniform source or a 1- or 2-parameter limb-darkened source,
to the combined, high-quality dataset of 586 CCD frames (\S2.1)
using the formal uncertainties reported by D{\sc o}PHOT. Details of calculating
the light curves for such models can be found in the work of Dominik (1998).
The multi-site, multi-band baselines were allowed to
vary independently in the fitting process;
this photometric alignment resulted in relative multi-site offsets of
$0.007 - 0.065 \, $mag in our best model.
In general, it is a difficult task to find a minimum in high-dimensional
parameter space. However, in this case, nearly optimal values for
some parameters could be found by first searching in lower-dimensional
subspaces. The six baselines could be estimated from the latest data points
and then fitted with a point-lens model together with the parameters
$u_0$, $t_0$, $t_{\rm E}$ and the two blending parameters, using the
data points outside the peak region. We thus began
fitting the binary-lens extended-source models
only after we had good guesses for 11 of the parameters.
Parameters from the uniform source fit then provided us with
good initial estimates for a total of 15 parameters, requiring only
2 or 4 additional parameters when including limb-darkening.
Our best fit was achieved for a 2-parameter limb-darkening model (LD2),
and yielded a $\chi^2_{\rm min}$ of 1913 for the
567 degrees of freedom (d.o.f.).
This model is displayed in the left panels of Fig.~4 and its
fit parameters are listed in Table~\ref{fpartab}.
If the LD2 model is indeed the best
representation of the data, then the reduced
$\chi^2_{\rm min} / {\rm d.o.f.}\, = 3.374$
is an indication that the D{\sc o}PHOT\ uncertainties are on average
underestimated by
a factor $\sim$1.8, consistent with our previous experience
(\cite{planetpilotpaper}).
The corresponding source trajectory and caustic structure for the
LD2 model shown in
Fig.~5 illustrate that the cusp caustic of the binary lens just
sweeps over the limb of the extended source, the first clear observation of
such a cusp crossing. Such a crossing differentially magnifies
different portions of the source as a function of time (top panel, Fig.~6).
Because the central ring crosses
almost directly over the cusp, it experiences the greatest magnification.
The fraction of light contributed by concentric rings (of equal area)
on the stellar disk therefore also varies during the anomaly
(bottom panel, Fig.~6), allowing
the surface profile, and thus limb-darkening, of the source to be
deduced. For the specific geometry of {97-BLG-28}, the contrast
between the fraction of light contributed by the inner and
outermost of 10 equal-area rings is a factor $\sim$3 during the cusp crossing.
The PLANET dataset is well-sampled everywhere during the cusp
crossing except in the 12-hour period near ${\rm HJD - 2449719 = 895}$.
\section{Limb-Darkening of the Bulge K Giant Source Star}\label{limbdark}
In addition to our best fitting model with square root limb-darkening
coefficients (LD2),
Table~\ref{fpartab} also lists the parameters for two other models:
a fit with a linear, 1-parameter, limb-darkening law (LD1),
which yields $\chi^2_{\rm min}$ of 1930 for 569 d.o.f., and a
fit with a uniformly bright source (UB), which yields
$\chi^2_{\rm min}$ of 3255 for 571 d.o.f.
Under the assumption that our 2-parameter limb-darkening model is
indeed the best representation of the data, the large $\chi^2$ compared to the number of degrees of freedom must be attributed to a misestimation
of the experimental uncertainties. This is not unexpected since
our previous experience in crowded fields indicates that D{\sc o}PHOT\
formal uncertainties are smaller than the true scatter in the photometry
of constant stars by an amount that depends on the crowding of
the star (Albrow {et al.\thinspace}\ 1998). In order to assess conservatively
the differences in $\chi^2_{\rm min}$ between models, therefore, we first
rescale the formal D{\sc o}PHOT\ uncertainties by
$\sqrt{1913/567} = \sqrt{3.374} \approx 1.8$,
which is tantamount to the assumption that the LD2 model is perfect
and would yield a true $\chi^2_{\rm min} = 567$ if the ``true'' photometric
uncertainties were known. The rescaling is global, preserving the
relative uncertainty between points, and conservative, allowing for
the possibility that part of the $\Delta\chi^2_{\rm min}$ between models
can be due to the underestimation of photometric uncertainties by D{\sc o}PHOT.
The rescaled $\chi^2_{\rm min}$
of the LD1 and UB models are thus 572 and 965, respectively.
The significance of the limb-darkened LD1 and LD2 models over the
uniform brightness UB model is enormous: using the rescaled
uncertainties, $\Delta \chi^2_{\rm min} = 393$ between the LD1 and UB models, and $\Delta \chi^2_{\rm min} = 398$ between LD2 and UB.
This precipitous drop in $\chi^2_{\rm min}$ with
the addition of the limb-darkening parameters leaves
no doubt that limb-darkening has been detected to an exceedingly high
degree of confidence in the PLANET photometry of this event.
The enlargements of the light curve in Fig.~4 show how precisely the
2-parameter limb-darkening model (LD2) reproduces the detail in the
cusp and limb regions in both the $I$ and $V$ bands.
On the right of the same figure, these sections of the light curve are
overplotted with the uniform source model (UB).
The multi-site photometry is aligned automatically
by the fitting procedure; this photometric alignment
is somewhat different for the two models.
Even given the freedom to photometrically realign the data from
different sites relative to one another, the UB model fails to fit the
shape of the convexity at peak and the gentle slope of the curve
as the limb of the star egresses. The failure of the uniformly
bright model to reproduce the structure in the light curve during
the limb crossing can be seen most dramatically in Fig.~7, where
the residuals of the LD2 and UB models are shown.
If the smaller normalized $\chi^2_{\rm min}$ of the LD2 model
compared to the LD1 model is not physically significant, but instead
only due to the freedom of adjusting the additional $d_\lambda$ parameters,
we would expect
$\Delta \chi^2_{\rm min}$ to be distributed as $\chi^2$
with a number of degrees of freedom equal to the number of additional
parameters (Press {et al.\thinspace}\ 1986, Chapter 14).
The significance of the LD2 model over the LD1
model is thus determined by the 8.2\% probability of obtaining
by chance a $\Delta \chi^2_{\rm min} > 5$ improvement
with two additional degrees of freedom, corresponding to a
marginal 1.7$\sigma$ detection of a surface profile that deviates
from the 1-parameter limb-darkening law in both bands.
This marginal improvement indicates that the inclusion of additional
profile fitting parameters beyond those included in the LD2 model
is unlikely to result in a significantly better fit;
the LD2 model contains all the information about the source profile
that we are able to extract from this dataset.
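The rescaling and significance arithmetic quoted above can be
reproduced with the short sketch below, which uses only the $\chi^2$
values stated in the text.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, norm

scale = 1913.0 / 567.0        # assumes the LD2 model is "true"
print(np.sqrt(scale))         # ~1.84: DoPHOT error-bar inflation
chi2_ld1 = 1930.0 / scale     # ~572 (rescaled LD1)
chi2_ub  = 3255.0 / scale     # ~965 (rescaled UB)
dchi2 = chi2_ld1 - 567.0      # ~5 for the 2 extra LD2 parameters
p = chi2.sf(dchi2, df=2)      # chance probability of the improvement
print(p)                      # ~0.082, i.e. 8.2%
print(norm.isf(p / 2))        # ~1.7 sigma equivalent
\end{verbatim}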
The large difference in $\Delta \chi^2_{\rm min}$ between the
limb-darkened and uniformly bright models is not due solely to the
cusp-crossing portions of the light curve.
Since the model parameters are correlated, even those parameters
unrelated to the source profile differ between the uniform source model
and the limb-darkened models, as inspection of Table~\ref{fpartab}
indicates.
Interestingly, the limb-darkened models always required a smaller
photometric offset between sites in both bands. The uniform
brightness model apparently attempted to match the observed shape
of the light curve at the limb by adjusting the photometric offsets slightly,
hindering the $\chi^2$ performance of the UB model elsewhere
in the light curve.
The clear signature of limb-darkening is revealed only
with high precision data during the few hours when the limb of the
star is grazing the caustic; high-quality data over the whole of the
light curve is required, however, to obtain a full and accurate
microlensing solution.
\subsection{Robustness of the Measurement}
Another test of the reliability of our limb-darkening parameters
is provided by fits we performed on the full dataset, including the poorer
quality frames, but now estimating the photometric uncertainties empirically
so that the size of the error bar scales with the overall frame quality.
Specifically, we set the uncertainty of the event magnitude on a given frame
equal to the average deviation of all similarly-bright constant stars
from their average magnitude.
We also eliminated the baseline points taken in 1998 in order to test
whether the availability of a long temporal baseline was important to
our conclusions about limb-darkening. The resulting $\chi^2_{\min}$ and
fit parameters are given in the last three columns of Table~\ref{fpartab}
for the three source models. The value of $\chi^2_{\min}$
now appears to be too small primarily due to the fact that
{97-BLG-28}\ is less crowded than a typical star of its brightness,
so that the frame quality uncertainties are overestimates of its
true photometric scatter.
What is important to note is that the uniformly bright source is
again strongly ruled out and that all model parameters
are left almost entirely unaffected by this alternate selection and
treatment of the data, demonstrating the model's robustness.
Both of the simpler models are special cases of the LD2 family of models:
the LD1 model requires the limb-darkening parameter $d_\lambda = 0$ in both
$I$ and $V$, while the UB model sets
$c_\lambda = d_\lambda = 0$ in both bands. This means that our LD1 and UB
models, which correspond to points on the $\chi^2$ hypersurface
over restricted portions of the LD2 parameter space, also correspond
to points on the full LD2 $\chi^2$ hypersurface.
The probability of obtaining a renormalized
$\chi^2$ that is larger than that of
the LD1 solution is $> 10\%$ for both the
D{\sc o}PHOT\ and frame quality methods of estimating relative photometric
uncertainties; the probability of obtaining a
renormalized $\chi^2$ larger than that of the UB model is negligibly small.
We conclude that the 2-parameter limb-darkening model is clearly superior
to the uniform source model and marginally superior to the 1-parameter
limb-darkening model. We now adopt it as our best model and use it
to derive the physical parameters for the lens and the
surface brightness profile of the source, which we now
compare to expectations from stellar atmosphere models.
\subsection{Comparison to Stellar Atmospheric Models}
As we discussed in \S\ref{data}, both spectroscopic and photometric
considerations indicate that the source star of {97-BLG-28}\ is
a K giant in the Galactic bulge, with K2III being the most probable typing.
The fits to our photometric data discussed in \S\ref{model}
yield the limb-darkening coefficients $c_\lambda$ and $d_\lambda$ for
the $V$ and $I$ bands separately.
The corresponding surface brightness profiles for the source star,
normalized so as to give a total flux equal to unity, are shown
in Fig.~8 for our LD2
model of the high-quality photometric dataset using
D{\sc o}PHOT\ estimates for the relative photometric uncertainties.
Modeling the full dataset with empirical ``frame quality'' error bars
produces nearly identical profiles
(see coefficients in Table~1). Our results clearly indicate that, as expected,
this giant is more limb-darkened in $V$ than in $I$.
Also shown in Fig.~8 are predictions from stellar atmospheric models
for K0 and K5 giants from two different groups
(\cite{vanhammelimbdark}; \cite{diazlimbdark}; \cite{claretlimbdark})
both using the square-root limb-darkening law of Eq.~2.
We have interpolated between the values given by
these authors, labeling as K0 a star with effective temperature
$T = 4750 \,$K and surface gravity ${\rm log} \, g = 2.15$, and as K5
a giant with $T = 4000 \,$K and surface gravity ${\rm log} \, g = 1.75$, in
good agreement with standard convention (\cite{lang}).
The cooler, later type K5
giants are more limb-darkened than their warmer K0 cousins.
Although the theoretical $I$ profiles of both groups are nearly identical,
the $V$ profiles of Diaz-Cordoves, Claret \& Gimenez (1995)
are slightly more limb-darkened than those of van~Hamme (1993).
The surface brightness profiles derived from our photometric data
modeled with the limb-darkening law of Eq.~2
are in excellent agreement with these predictions from atmospheric
models. Although Fig.~8 indicates that the source star of {97-BLG-28}\
may be slightly more limb-darkened in the $I$ band than current models
predict for its spectral type, this conclusion should be treated with
considerable caution given the uncertainties in the light curve modeling,
spectral typing, and the rather ad hoc nature of the fitting
function for the surface brightness profile.
Our results for {97-BLG-28}\ represent the first time that
limb-darkening coefficients have been measured for a microlensing
source.
Limb-darkening was indicated but not measured by MACHO observations of
the high-amplitude single-lens event MACHO~95-BLG-30; fixing
limb-darkening coefficients to agree with stellar atmosphere models did
produce a fit to the data that was marginally better
(normalized $\Delta \chi^2 = 3.4$) than uniform disk models
(\cite{alcockmb9530}).
Our limb-darkening measurement for {97-BLG-28}\ is among only a very few
determinations for normal giants, and represents the first
surface brightness profile for any star as distant as the Galactic bulge.
The ability of PLANET
photometry to yield measurements of limb-darkening coefficients
in {MACHO~97-BLG-28}\ is due to the overall characterization of the complete
light curve combined with excellent temporal coverage
(sampling times $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 30$ minutes) over peak and limb caustic egress
regions.
\section{Physical Parameters of the Lens}\label{params}
In caustic crossing events, dense photometric monitoring allows
not only the determination of $t_{\rm E}$, the time required for the
lens to travel an angular distance $\theta_{\rm E}$, but also
a second time scale: the time $t_\ast$ required
to travel the angular source radius $\theta_\ast$.
For uniform rectilinear motion, the ratio $t_\ast/t_{\rm E}$ directly yields
the angular size of the source in units of the Einstein ring,
$\rho_\ast \equiv \theta_\ast/\theta_{\rm E}$.
If $\theta_\ast$ can be estimated by other means
({e.g.,}\ photometric or spectroscopic typing), both
$\theta_{\rm E}$ and the relative lens-source proper motion
$\mu \equiv \theta_{\rm E}/t_{\rm E}$ can thus be determined,
yielding important information on lens kinematics
(\cite{gouldfinsource94}; \cite{nemwick94};
\cite{wittmao94}; \cite{peng97}).
To date, proper motions derived from this technique
have been published for only two binary events in the Magellanic Clouds,
MACHO~LMC-09 (\cite{alcock97lmcevents}; \cite{bennettonlmc9})
and MACHO~98-SMC-01 (\cite{erossmc9801paper}; \cite{planetsmc9801paper}; \cite{machosmc9801paper}; \cite{oglesmc9801paper};
\cite{mpssmc9801paper}), and one
single-lens (point-caustic) event MACHO~95-BLG-30 in the Galactic bulge
(\cite{alcockmb9530}).
The relative proper motion can be written in terms of the model
parameters $t_{\rm E}$ and $\rho_{\ast}$, and the physical source size $R_\ast$
and distance $D_S$ as:
\begin{equation}
\mu = \frac{R_{\ast}}{D_{\rm S}\,\rho_{\ast}\,t_{\rm E}}\,,
\end{equation}
which for $D_{\rm S} = 8.0~\mbox{kpc}$ and $R_{\ast} = 15~R_{\sun}$, as
indicated by the spectral typing of \S2.3,
yields a relative proper motion
\begin{equation}
\mu = 19.4\,
\left(\frac{R_{\ast}}{15~R_{\sun}}\right)\,\mbox{km}\,\mbox{s}^{-1}\,
\mbox{kpc}^{-1} =
11.2\,\left(\frac{R_{\ast}}{15~R_{\sun}}\right)\,\mu\mbox{as}\,
\mbox{day}^{-1}\,.
\end{equation}
This corresponds to a perpendicular velocity in the lens plane
\begin{equation}
v = x \, D_{\rm S}\,\mu = 155 \, x \,
\left(\frac{R_{\ast}}{15~R_{\sun}}\right)\,\mbox{km}\,\mbox{s}^{-1}\,,
\end{equation}
where $ x \equiv D_{\rm L}/D_{\rm S}$.
The (total) mass of the lens is given by
\begin{equation}
M = \frac{c^2}{4G D_{\rm S}}\,\frac{R_{\ast}^2}{\rho_{\ast}^2}\,
\frac{x}{1-x} = 0.09\,\left(\frac{R_{\ast}}{15~R_{\sun}}\right)^2\,M_{\sun} \frac{x}{1-x}\,,
\end{equation}
so that for a disk lens halfway to the Galactic Center ($x=0.5$),
one obtains $v = 78~\mbox{km}\,\mbox{s}^{-1}$ and total lens mass
$M = 0.09~M_{\sun}$, whereas a lens embedded in the Bulge with $x=0.9$
yields $v = 140~\mbox{km}\,\mbox{s}^{-1}$ and
$M = 0.81~M_{\sun}$. Either value is consistent with Galactic kinematics.
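The scalings in the three relations above are easy to evaluate; in the
sketch below the numerical coefficients 19.4, 155 and 0.09 already
absorb the fitted $t_{\rm E}$ and $\rho_\ast$, so only quantities quoted
in the text are used.
\begin{verbatim}
def lens_parameters(x, R_star=15.0):
    # x = D_L/D_S; R_star in solar radii; D_S = 8 kpc assumed.
    r = R_star / 15.0
    mu = 19.4 * r                    # relative proper motion, km/s/kpc
    v  = 155.0 * x * r               # transverse velocity, km/s
    M  = 0.09 * r**2 * x / (1 - x)   # total lens mass, M_sun
    return mu, v, M

print(lens_parameters(0.5))  # disk lens:  v ~ 78 km/s,  M ~ 0.09 M_sun
print(lens_parameters(0.9))  # bulge lens: v ~ 140 km/s, M ~ 0.81 M_sun
\end{verbatim}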
Assuming a source distance $D_S = 8 \,$kpc and source radius
in the range $13 - 17~R_{\odot}$ (\S2.3), the top panel of Fig.~9 gives the
mass of the individual lensing components as a function of $x$.
Since the binary mass ratio is $q = 0.234$, it is quite likely that
the lighter of the lens objects has the mass of an M dwarf or less;
if the lens lies in the Galactic disk with $x < 0.8$, at least the smaller
of the binary components is a brown dwarf.
If, on the other hand, the lensing binary resides in the bulge
($x > 0.8$), the masses
of its components would be consistent with that of a typical
lower main-sequence
binary with a projected separation $a_p = R_{\ast} x d / \rho_{\ast}$
between 1 and 2~AU (bottom panel, Fig.~9).
Disk lenses would have smaller separations.
\subsection{Blended Light: Light from the Lens Itself?}
Our model fits to {97-BLG-28}\ yield a fraction $1 - f$ of blended
(non-lensed) light that is larger in the $V$ than in the $I$ band,
indicating that the blend star is very much bluer
than the background source star (Table~1). If the blend is due to a
single star, it lies in an underpopulated portion of the CMD (Fig.~3),
either because it is not at the mean field distance and has been
improperly dereddened, or because it is in a short-lived phase of its
evolution. A main sequence star with $(V-I)_0 \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 0.6$
has $I_0 < 3.7$, and thus is so intrinsically bright that it cannot appear
as the blend anywhere along the reddening vector of Fig.~3
unless it lies substantially
{\it behind\/} the bulge, where it is likely to have more reddening
than the {97-BLG-28}\ field, not less.
If, on the other hand, the blend star is somewhat
in front of the bulge and experiences slightly less extinction,
it could lie on the portion of the isochrone at $(V-I)_0 = - 0.4$
that corresponds to the transition
zone between planetary nebulae and white dwarfs.
Such an interpretation would be consistent, furthermore, with the blended
light coming from the lens itself: a white dwarf mass of
$M_{\rm WD} \approx 0.55 \, M_\odot$ is consistent with the
mass that the larger lens component ($M_1$) would have at a distance
$x \approx 0.88$, slightly in front of the bulge. This dense portion of
the Milky Way is a likely region for microlenses and should
have an extinction similar to (but slightly less than)
that of the field mean (see Fig.~3). If this explanation is correct,
then the smaller lensing component would have a mass just above
the hydrogen-burning limit at $M_2 \approx 0.13 \, M_\odot$.
These conclusions are valid only if the blended light
sensed by the light curve fitting is due to a single star,
but any explanation requiring two stars to produce the color
of the blend would be even more ad hoc.
\section{Conclusions} \label{conclude}
The temporal coverage of the bulge microlensing event {MACHO~97-BLG-28}\ from
three PLANET observing sites longitudinally distributed in the southern hemisphere has resulted in an excellent characterization of the
light curve in the $I$ and $V$ bands.
The full data set consists of 513 photometric measurements in the $I$-band
and 183 measurements in $V$, with 586 of these being of very high quality.
During a 30-hour period over the peak of the light curve,
sampling times between $\sim 3$ and 30 minutes were continuously
maintained from the three sites. Less dense photometric monitoring
over other portions of the light curve, combined with baseline
measurements taken nearly a year later have resulted in the
best-characterized microlensing light curve to date.
Our spectrum of {97-BLG-28}, taken at moderate magnification, indicates
that the source star is likely to be a highly reddened K2 giant of
solar metallicity. This conclusion is supported by a comparison of
the color-magnitude diagram of the field to theoretical isochrones.
Modeling of the PLANET photometric dataset for {97-BLG-28}\
clearly indicates that the disk of the
resolved source is transited by the cusp of the central caustic
generated by a binary lens with mass ratio $q = 0.23$ and
instantaneous separation, in units of the Einstein ring radius,
of $d = 0.69$.
A source of uniform surface brightness is strongly ruled out by
our data; the derived square root limb-darkening coefficients in the
$I$ and $V$ bands derived from the photometric data alone are in
excellent agreement with expectations from stellar atmospheric models
for K giants. Under the assumption that the source star is a K2III,
the models appear slightly more limb-darkened in $I$ than our
measurements, but the difference may not be significant.
All of the conclusions above are independent of the (unknown)
lens-source distance ratio $x$ and (rather well-constrained) source
distance $D_S$ and radius $R_{\ast}$. Assuming a source in the
Galactic bulge at $8 \,$kpc of radius $R_{\ast} = 15 \pm 2 \, R_\odot$,
as indicated by our spectral typing, the total mass of the binary
and its projected separation can be determined.
If the lens resides in the bulge, it is likely to be a lower main sequence
binary with both components above the hydrogen burning limit
and a projected separation between 1 and 2~AU.
If the lens resides in the disk with the
lens-source distance ratio $x < 0.8$, the projected separation would
be smaller and one or both of the components would have a mass
in the brown dwarf regime.
Assuming the same source distance and radius, the relative
lens-source proper motion derived from our modeling is
$\mu = 19.4\, \pm 2.6 \, \mbox{km}\,\mbox{s}^{-1}\,\mbox{kpc}^{-1}$ or
$\mu = 11.2 \pm 1.5 \, \mu$as/day, consistent with the lens having
disk or bulge kinematics. The uncertainty in the proper motion is
dominated by the uncertainty in the source radius, not
by the characterization of the microlensing light curve.
If the unlensed, blended light indicated by our models is due to a
single star, its color and magnitude suggest that it is a
young white dwarf in the bulge. This conclusion is consistent with
the light coming from the lens itself: the larger lens component
would have a typical white dwarf mass at a bulge distance
of $x \approx 0.88$, in which case its companion would
be a very faint late M dwarf.
These results mark the first time that the surface structure of
a source star has been measured via microlensing and the first
limb-darkening determination of a star as distant as the Galactic Bulge
by any technique. As the number of caustic-crossing events alerted
in real time continues to rise, and the quality and frequency of
photometric monitoring of microlensing light curves continues to
improve, microlensing holds the promise of making a substantial
contribution to the field of stellar atmospheres by determining the
surface profiles of normal stars too faint or small to be easily
measured by other techniques.
\acknowledgments
PLANET thanks the MACHO collaboration for providing the
original real-time electronic alert of this event; such alerts are
crucial to the success of our intensive microlensing monitoring.
We are grateful to observers from the Astronomical Society of Tasmania,
especially Bob Coghlan, who made many of the key observations
reported here. We thank Andy Gould for reading a penultimate
version of this manuscript.
Financial support from Nederlands Wetenschapelijk Onderzoek, through award
ASTRON 781.76.018, is gratefully acknowledged.
PLANET members also wish to thank The Leids Sterrewacht Foundation,
the South African Astronomical Observatory, Canopus Observatory,
the European Southern Observatory, and Perth Observatory for the
generous allocations of time that make our work possible.
The work of M.D. has been financed by a research grant from the Deutsche Forschungsgemeinschaft while at STScI, Baltimore and by a
Marie Curie Fellowship at Kapteyn Institute, Groningen.
\section{Introduction}
\subsection{SatComs: State-of-the-Art}
Nowadays, the main application of fixed satellite services involves broadcasting information to a large number of user terminals, distributed over a wide coverage. In spite of the market-driven, broadcasting nature of current satellite communications (SatComs), the road towards interactive broadband services seems inevitable. Multibeam satellite systems that reuse the available spectrum have already managed to provide broadband services, facilitated by the second generation of satellite standards \cite{DVB_S2_standard}. For instance, Viasat1 \cite{viasat1} can deliver up to 110~Gbps of total throughput over the coverage. A new generation of multibeam systems that still reuse frequency in a conventional manner is expected by 2016 \cite{viasat1}. Nevertheless, as the most recent extensions of SatCom standards have shown \cite{DVB_S2X}, there is only so much one can achieve with fractional frequency reuse and conventional payloads. Therefore, satellite manufacturers are exploring novel system architectures \cite{vidal2013arch} that can match the expected demand.
\subsection{Multibeam SatComs: Beyond the SoA}
Advanced satellite system architectures, able to meet the highly increasing demand for throughput and close the digital divide, are of high relevance nowadays. In this direction, the investigation of aggressive frequency reuse methods comes into play. The term aggressive frequency reuse refers to operating a fractional reuse system, such as a multibeam satellite system, with very low frequency reuse factors. In the extreme case of full frequency reuse, all available bandwidth is allocated to all beams, leading to a reuse factor of one and a high level of co-channel interferences. Such configurations are enabled by the spatial degrees of freedom offered by the multibeam antenna. To fully exploit this spatial separation, advanced signal processing techniques, namely precoding, constitute a substantial interference mitigation resource \cite{Christopoulos2013AIAA, Zheng2012, Christopoulos2012_EURASIP}. As a result, the scarce user link bandwidth can be efficiently utilized by higher frequency reuse schemes.
The most recent results on aggressive frequency reuse multibeam satellites can be found in \cite{Christopoulos2014_TWCOM} and the references therein.
Taking the above concept a step further, aggressive frequency reuse can come into play between physically separated satellites \cite{Christopoulos2012_ICC}. The term dual multibeam satellites will hereafter refer to satellites bearing multibeam communications payloads compatible with aggressive frequency reuse configurations that share one orbital position. Under the assumption of information exchange between the different gateways (GWs) that serve each satellite, advanced interference mitigation techniques can come into play. The techniques considered in the present work can be classified into the general class of cooperative and cognitive methods. Prior to introducing these techniques, the motivation behind dual multibeam satellite scenarios is given.
\subsection{Satellite Co-location}
The herein examined dual satellite paradigm constitutes an instance of a constellation of multiple satellites, co-located in a single orbital slot. Satellite co-location was pioneered by SES with the Astra 19.2$^\circ$E system for the delivery of broadcasting services \cite{HighAbove}.
The reasoning behind this proven concept and its extension to multibeam satellite architectures is summarized by the following points:
\begin{itemize}
\item \textit{Orbital Slot Congestion:} In the evolution of {geostationary (GEO)} satellite systems, orbital slots are becoming a scarce resource. To address this uprising problem, the deployment of more than one multibeam satellite in one orbital position becomes relevant.
\item \textit{Traffic Demand:} The operational lifetime of a multibeam satellite spans over a period of more than fifteen years. It is therefore probable that unpredictable changes in the traffic demand might dictate the launch of secondary satellites to support existing ones. The opportunity to place such satellites in the same orbital slot is considered a great asset for the satellite operator. What is more, even if traffic demand is well predicted, the gradual deployment offered by a multi-satellite system reduces the upfront investment and the operational cost, thus providing higher flexibility to the operator.
\item \textit{Payload Complexity: } Aggressive frequency reuse increases the communication payload size since a single high power amplifier (HPA) cannot be shared by multiple beams \cite{Christopoulos2014_TWCOM}. Hence, the payload required to drive a large number of beams that cover large regions (e.g. pan-European coverage) can be carried by multiple co-existing satellites.
\item \textit{Redundancy:} Hardware redundancy to guarantee uninterrupted service delivery in case of malfunctions is of high importance for SatComs. The co-location of separate platforms in a single orbital slot reduces the individual payload requirements, thus allowing for redundant equipment to be carried for the cases of failure. More importantly, redundancy can be offered between the co-located satellites.
\item \textit{A priori co-location:} Last but not least, long periods of coexisting satellites appear by default during the satellite replacement phase. This \textit{a priori} co-location can be exploited towards increasing the system capacity.
\end{itemize}
Based on the above arguments, this work attempts to determine the most promising, with respect to specific performance metrics, method to enable the cooperation of multibeam satellites, co-located in a unique orbital slot.
In the following section, the dual satellite paradigm, an instance of a multibeam satellite constellation, is formulated.
\section{Dual Satellite Paradigm}
\begin{figure}[h] \centering
\includegraphics[width=0.8\columnwidth]{system2.eps}\\
\caption{Different architectures to realize constellations of co-existing satellites. Different colors represent different frequency bands.}\label{fig: system model}
\end{figure}
In Fig. \ref{fig: system model}, four possible ways to deploy multibeam satellites in one orbital position are presented. More details for each architecture are given hereafter.
\subsection{Conventional frequency splitting}
The simplest way to facilitate satellite co-location requires no added complexity and is illustrated in Fig. \ref{fig: system model} (a). In this scenario, the total available bandwidth of the forward link is divided into two equal segments. Further on, this bandwidth is divided into $N_{\mathrm{c}}$ segments, where the parameter $N_{\mathrm{c}}$ is the frequency reuse factor. The total gain in terms of frequency reuse obtained by using a multibeam satellite depends on the frequency reuse factor. As the value of $N_{\mathrm{c}}$ decreases, the available bandwidth per beam increases, at the expense, however, of increased co-channel interference. Recalling the Shannon formula, capacity scales linearly with bandwidth but only logarithmically with the signal-to-interference ratio. However, due to the antenna pattern, multiple tiers of interference are introduced when frequency is aggressively reused. Thus, the value of $N_{\mathrm{c}}$ should be chosen in such a way that the maximum system capacity is achieved. Herein, a frequency reuse factor of three is considered.
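This trade-off can be made concrete with a toy per-beam Shannon-rate
calculation; the assumed $N_{\mathrm{c}}^{-3}$ interference roll-off
below is purely illustrative and is not the beam-pattern simulation
used in Sec.~\ref{sec: results}.
\begin{verbatim}
import numpy as np

def per_beam_rate(Nc, B_total=500e6, cn_dB=15.0, in_full_dB=12.0):
    # Bandwidth per beam shrinks as 1/Nc, while the aggregate
    # co-channel interference (relative to noise) is assumed to
    # fall off steeply with Nc (illustrative scaling only).
    B = B_total / Nc
    cn = 10 ** (cn_dB / 10)
    inr = 10 ** (in_full_dB / 10) / Nc ** 3
    sinr = cn / (1 + inr)
    return B * np.log2(1 + sinr)

for Nc in (1, 2, 3, 4):
    print(Nc, round(per_beam_rate(Nc) / 1e6), "Mbps per beam")
\end{verbatim}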
\subsection{Cooperation}
A cooperative dual satellite system refers to two satellites bearing multibeam communications payloads compatible with aggressive frequency reuse, fed by fully interconnected GWs that are synchronized on the symbol level. Under these assumptions, advanced signal processing techniques, namely linear precoding, can be applied and the two transmitters will ideally behave as one large satellite that bears the equivalent payload of the two platforms, as depicted in Fig. \ref{fig: system model} (b). This fact greatly increases the available degrees of freedom, thus maximizing the potential gains of such systems.
However, the stringent demand of synchronization between two physically separated satellites renders such a scenario highly challenging.
Despite this, it is herein considered as an upper bound of the presented techniques.
To avoid a highly complex architecture, partial cooperation is proposed in the following.
\subsection{Coordination }
A simpler approach is partial cooperation, hereafter referred to as coordination, between the two coexisting transmitters, as shown in Fig. \ref{fig: system model} (c). In this manner, the total system performance can be increased while maintaining system complexity at moderate levels. The term coordination implies a relaxation in the synchronization and the data exchange requirements. More specifically, coordination involves the exchange of a smaller amount of data, namely channel state information (CSI), and does not require the joint processing of signals between the two GWs. Therefore, it trades off the high gains of inter-system cooperation for a reduced implementation complexity. After the exchange of CSI, each satellite serves only a set of users. Hence, the signals transmitted by each satellite do not need to be synchronized on the symbol level. More details on the algorithm that determines the user allocation to each satellite will be provided in the respective sections.
\subsection{Cognition}
Cognitive communications are considered a promising tool to address the spectrum scarcity problem caused by spectrum segmentation and current static frequency allocation policies \cite{Gridlock}. Several Cognitive Radio (CR) techniques have been proposed in the literature in order to allow the coexistence of cognitive systems with the licensed primary systems. The most common cognitive techniques can be categorized into interweave or Spectrum Sensing (SS), underlay, overlay and Database (DB) related techniques. In SS only techniques, Secondary Users (SUs) are allowed to transmit whenever Primary Users (PUs) do not use a specific band, whereas in underlay techniques, SUs are allowed to transmit as long as they meet the interference constraint of the PUs. Overlay networks are characterized by the mitigation of interference with the help of advanced coding and transmission strategies at the cognitive transmitters while in the DB scenarios, cognitive terminals query the predefined DB in order to find the unoccupied frequency bands and utilize them.
The potential of CR in terrestrial systems has inspired the concept of cognitive SatComs. In the field of cognitive SatComs, the main related literature can be found in \cite{Sks:jsat14} and the references therein. Despite the fact that cognition has also been assessed for the coexistence of satellite and terrestrial systems, the present work will only focus on the cognition between satellite systems. Consequently, possible conflicts of interest between satellite and terrestrial providers over the scarce spectrum are avoided.
In this article, the technical aspects of coordinated dual satellite systems are overviewed in Sec. \ref{sec: coordinated}. Next, Sec. \ref{sec: cognitive} presents the simple cognitive technique that will be considered herein. Following this, the two approaches are numerically compared in Sec. \ref{sec: results}. Finally, the challenges for the application of such architectures are described in Sec. \ref{sec: challenges}.
\begin{table}
\caption{Link Budget and Simulation Parameters}
\centering
\begin{tabular}{l|c}
\textbf{Parameter} & \textbf{Value} \\\hline
Frequency Band & Ka (20~GHz)\\
Total user link Bandwidth & 500~MHz \\
Multibeam Antenna Gain & Bessel Approx. \cite{Christopoulos2012_EURASIP} \\
Total on-board Power $P_{\mathrm{tot}}$ & 29~dBW \\
\hline
\textbf{Per beam Link Budget}\\
Saturated power per beam & 55~W \\
OBO& 5~dB \\
Transmit power per beam & 17.38~W\\
3-dB Beam Gain & 54~dBi\\
EIRP & 66~dBW \\
Boresight distance & 37569~km \\
Path Loss & 210~dB\\
UT antenna gain & 41~dBi\\
Carrier Power $C$ & $-103$~dBW\\
Clear Sky Temperature & 235.34~K\\
Noise Power $N$ & $-118$~dBW\\
$C/N$ & 15~dB
\\\hline
\end{tabular}
\label{tab: LinkBudgParas}
\end{table}
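As a cross-check, the per-beam budget of Tab.~\ref{tab: LinkBudgParas}
can be reproduced line by line from the tabulated values (a sketch,
with all quantities handled in dB-space):
\begin{verbatim}
import numpy as np

P_sat = 10 * np.log10(55.0)       # 55 W saturated: ~17.4 dBW
P_tx  = P_sat - 5.0               # 5 dB OBO -> ~12.4 dBW (17.38 W)
EIRP  = P_tx + 54.0               # + 3-dB beam gain -> ~66 dBW
C     = EIRP - 210.0 + 41.0       # - path loss + UT gain -> ~-103 dBW

kB = 10 * np.log10(1.380649e-23)  # Boltzmann: ~-228.6 dBW/K/Hz
N  = kB + 10 * np.log10(235.34) + 10 * np.log10(500e6)   # ~-118 dBW
print(round(C), round(N), round(C - N))   # -> -103 -118 15 (C/N, dB)
\end{verbatim}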
\section{Coordinated Constellations}\label{sec: coordinated}
The focus of this work will be limited to coordinated dual multibeam satellites that employ linear precoding and user scheduling to enable dual multibeam co-location. In the multiuser multiple input single output (MU MISO) literature, precoding is an interference precancelation technique that exploits the spatial degrees of freedom offered by the multiple transmit antennas
to serve multiple single-antenna users. Multiuser interferences are canceled by multiplying the transmit signals by precoding vectors. Thus, equivalent interference-free channels are created by the transmitter. However, full knowledge of the channel is necessary at the transmitter. The focus is on the MU MISO broadcast channel (BC).
In the full frequency reuse scenario of Fig. \ref{fig: system model} (c), interferences from the adjacent satellite limit the system, while intra-satellite multiuser interferences are completely mitigated by linearly precoding the transmitted signals in each satellite. Simple zero forcing (ZF) precoding methods with per-antenna power constraints are considered herein \cite{Christopoulos2012_ICC}.
The coordinated dual satellite concept assumes that the two satellites can serve a joint pool of available users. The CSI of each user and its data is readily available to both GWs serving each satellite. However, based on the CSI, in each satellite a different set of users is allocated. Therefore, at each transmission instance, each user is served by only one multibeam satellite. This assumption relaxes the necessity to jointly process signals in both GWs. More importantly, it relaxes the constraint of a symbol based synchronization between the two satellites.
\subsection{User Selection and Allocation}\label{sec: SIUA}
As proven in \cite{Yoo2006b}, user selection can significantly improve the performance of ZF in an individual system. Therefore, the performance of each satellite separately is optimized by constructing a semi-orthogonal user group from a pool of users according to the Semi-orthogonal User Selection (SUS) algorithm of \cite{Yoo2006b}.
However, considering the coexistence of two separate transmitters, as is the case in a dual satellite system, the problem of high intersatellite interferences arises.
The main constraint is that the exact calculation of the level of interferences in each iteration is not possible, since the exact user set is still undetermined. However, based on a basic advantage of ZF beamforming, namely the decoupled nature of the precoder design and the power allocation optimization problems, an approximation of the interferences can be made \cite{Christopoulos2012_ICC}. To manage the inter-satellite interferences, a novel algorithm that selects users and allocates them to each satellite has been proposed in \cite{Christopoulos2012_ICC}. This algorithm accounts for the effects of the interferences between the two satellites, and is hereafter described.
The Semi-orthogonal Interference aware User Allocation (SIUA) algorithm of \cite{Christopoulos2012_ICC}, improves the performance of each satellite and of the overall system, simultaneously. The first is achieved by maximizing the orthogonality between users allocated in the same set, hence optimizing the ZF performance, whilst the second by minimizing the level of interferences between the two sets. Consequently, the SIUA algorithm finds the most orthogonal users that at the same time receive and induce the least possible interferences. This algorithm, requires knowledge of the CSI of all users. Therefore, each GW handles only the data of the users allocated to the corresponding satellite and thus the amount of data that needs to be exchanged is reduced.
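To convey the flavor of such selection procedures, a minimal sketch of
the semi-orthogonal (SUS) step on which the SIUA builds is given below;
the inter-satellite interference bookkeeping of the actual SIUA
\cite{Christopoulos2012_ICC} is deliberately omitted.
\begin{verbatim}
import numpy as np

def sus_select(H, K, alpha=0.4):
    # Greedy semi-orthogonal user selection (after Yoo & Goldsmith).
    # H: (n_users, n_tx) complex channels; K: group size;
    # alpha: semi-orthogonality threshold.
    pool = list(range(H.shape[0]))
    selected, basis = [], []
    while pool and len(selected) < K:
        proj = {}
        for i in pool:      # component orthogonal to chosen users
            g = H[i].astype(complex)
            for b in basis:
                g = g - (b.conj() @ g) * b
            proj[i] = g
        best = max(pool, key=lambda i: np.linalg.norm(proj[i]))
        b = proj[best] / np.linalg.norm(proj[best])
        selected.append(best)
        basis.append(b)
        pool = [i for i in pool if i != best and
                abs(b.conj() @ H[i]) / np.linalg.norm(H[i]) < alpha]
    return selected

H = (np.random.randn(30, 7) + 1j * np.random.randn(30, 7)) / np.sqrt(2)
print(sus_select(H, K=7))   # e.g. 7 near-orthogonal users out of 30
\end{verbatim}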
\section{Cognitive Constellations}\label{sec: cognitive}
Although the cognitive radio literature is quite mature in the terrestrial context, the application of cognition in SatCom systems is still in its infancy. Satellite systems operating in the same or different orbits can be employed to provide different satellite services over the overlapping coverage area using the same frequency resources. One satellite system can be assumed to be primary and to have priority over the shared spectrum, while another satellite system operates in a secondary way by providing sufficient protection to the existing licensed users. Several dual satellite co-location scenarios may exist in future SatCom networks and can be categorized on the basis of (i) operating frequency, (ii) operating mode, (iii) operators' ownership, (iv) coverage type, and (v) satellite orbit. Depending on the scenario, several techniques such as cognitive interference alignment, spectrum sensing, cognitive beamhopping, power control, exclusion zones, etc.\ have been identified in the SatCom related literature \cite{Sharma2013VTC}. Herein, the focus will be limited to cognitive beamhopping. For the cognitive architecture of Fig. 1 (d),
the primary satellite generates large beams over the coverage while the
secondary deploys smaller ones over the same coverage area.
\subsection{Cognitive Beamhopping System}
The term beamhopping refers to a system in which a portion of the total beams are simultaneously active with a regular repetition pattern\cite{Fso:12}. This technique applies a regular time window periodically and thus allows for the entire available bandwidth to be allocated to each illuminated beam. The duration for each illuminated beam should be selected to satisfy the user transmission delay requirement.
In more detail, the cognitive beamhopping system originally proposed in \cite{Sks:jsat14} is herein considered. Based on a priori knowledge of the primary's beamhopping pattern, the secondary satellite's beamhopping pattern is designed so that it does not degrade the primary's operation. The primary system employs a slot reuse factor of three and the secondary satellite adjusts its beamhopping pattern to avoid the primary active beams. The primary and secondary transmissions can be synchronized with the help of the timing information the primary satellite provides.
Moreover, the cognitive beamhopping with the power allocation method proposed in \cite{Sks:jsat14} is also considered.
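A toy version of this pattern design is sketched below: in every slot
the secondary lights only those sub-beams whose overlapping (parent)
primary beam is dark; the geometry of real beam overlaps is abstracted
into a simple lookup table.
\begin{verbatim}
def secondary_pattern(primary_active, parent_beam):
    # primary_active[s]: set of primary beams lit in slot s
    # (slot reuse factor 3 -> each primary beam is lit 1 slot in 3).
    # parent_beam[j]: the primary beam overlapping secondary beam j.
    plan = []
    for s in range(len(primary_active)):
        allowed = [j for j, p in parent_beam.items()
                   if p not in primary_active[s]]
        plan.append(allowed)
    return plan

primary = [{0}, {1}, {2}]                # 3 beams, one lit per slot
parent = {j: j // 2 for j in range(6)}   # 2 sub-beams per primary beam
print(secondary_pattern(primary, parent))
# -> [[2, 3, 4, 5], [0, 1, 4, 5], [0, 1, 2, 3]]
\end{verbatim}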
A comparison of these techniques follows, in order to determine which one is optimal in terms of system throughput, fairness and energy efficiency.
\section{Performance Comparison} \label{sec: results}
A baseband block fading model, as described in \cite{Christopoulos2012_EURASIP}, models the satellite antenna radiation pattern, the path loss, the receive antenna gain and the noise power.
Clear sky conditions are assumed.
The users, each equipped with a single antenna, are uniformly distributed over the coverage area. Only one user per beam is served in each transmission. Despite the fact that in each satellite separately, a MU MISO BC is realized, the total system operates over an interference channel.
The link budget considerations are included in Tab. \ref{tab: LinkBudgParas}, along with an instance of a nominal link budget. This corresponds to the on-board available power of current operational multibeam satellites and is included to provide a point of reference. However, to investigate the trends of the proposed methods with respect to the transmit power, results are plotted over a range of on-board available power.
\subsection{Spectral Efficiency}
In the present section, the performance evaluation and comparison of cooperative and cognitive dual satellite systems is performed in terms of spectral efficiency (bits/sec/Hz). For a proper comparison, the same total power budget is employed in the different architectures.
Figure \ref{fig: spf} presents the Spectral Efficiency (SE) versus the total power budget $P_{\mathrm{tot}} = [-5: 50]$~dBW. At the nominal operating point, i.e. 29 dBW, the gain of the coordinated system over the other approaches is notable. Also, as the available power increases, further gains can be gleaned by the coordinated systems. These gains stem from the saturation of the SE performance of conventional designs due to interbeam interferences. Despite the fact that the herein considered cognitive beamhopping techniques are greatly affected by interferences, their value is noted in the low-power, noise-limited regime. Cognitive beamhopping with power control also becomes relevant for very low power budgets, while in the mid ranges power control offers no gains. Further, as illustrated in Fig. \ref{fig: spf}, there exists a crossing point between the performance curves of the coordinated dual satellite and cognitive beamhopping (without power control) systems at the value of $P_{\mathrm{tot}}=22.5$ dBW. Also, at this value, all considered methods perform no worse than the conventional systems. Consequently, this value will serve herein as a threshold for choosing the most beneficial technique from an SE perspective, as well as a point at which the techniques can be compared with respect to other criteria. Finally, the MU MIMO channel capacity curve is also plotted in Fig. \ref{fig: spf}. Clearly, in the lower power region, the performance gap of the cognitive methods from the channel capacity is reduced. However, coordination maintains a smaller gap to the theoretical upper bound than the other techniques in the high power region.
\begin{figure}[h] \centering
\includegraphics[width=0.9\columnwidth]{Co2SAT_wp6_fig1.eps}
\caption{Spectral efficiency of the proposed schemes}\label{fig: spf}
\end{figure}
\subsection{Instantaneous Fairness}
\begin{table}
\caption{Instantaneous Fairness Comparison}
\centering
\begin{tabular}{l|c|c}
\textbf{Technique} &\textbf{Fig. 1}& \textbf{Jain's Index} \cite{Hua:14} \\\hline
Conventional 3 Color& (a)& $0.766$\\
Coordinated &(c)& $0.127$\\
Cognitive (w/o power control)&(d)&$0.254$\\
Cognitive (w/ power control)&(d)&0.201\\\hline
\end{tabular}
\end{table}
Increased skepticism over spectrally efficient multibeam satellites stems from the effects of such configurations on the fairness of the system. The goal of the methods considered herein is to increase the total throughput via aggressive frequency reuse. The present section aims at capturing and quantifying the effects of the proposed methods on the instantaneous fairness. It should be stressed, however, that the long-term, average fairness can be guaranteed by proper user scheduling, which will remain out of the scope of the present work.
Amongst the various methods to quantify whether the rates are equally distributed over the users in wireless networks \cite{Hua:14}, herein we apply Jain's fairness index \cite{Sks:jsat14}. When this index is equal to one, all users are treated equally. The Jain's index for all approaches and for a total power budget of $P_{\mathrm{tot}} =22.5$ dBW is given in Tab. II. This is the point where the proposed methods perform equally well or better than the conventional one in terms of SE (cf. Fig. \ref{fig: spf}). Intuitively, the fairness reduction of the proposed systems is expected, based on the well known fairness versus sum rate tradeoff in multiuser systems. Clearly, the proposed methods greatly reduce the system fairness, as seen in Tab. II. This is the price paid for more than 30\% of SE gains over conventional approaches. Amongst the proposed methods, however, the highest fairness is achieved by cognitive beamhopping without power control, since for the same system sum rate, a fairness index of more than 0.25 is attained. By introducing power control, a 5\% reduction in the fairness index is noted.
Moreover, to provide a more concrete illustration of the fairness criterion, Fig. \ref{fig: CDF} presents the Cumulative Distribution Function (CDF) curves of the per user SEs provided by each of the considered schemes again at the value of $P_{\mathrm{tot}} =22.5$ dBW. In this figure, the very low rate variance of conventional systems is again clear. Also, cognitive beamhopping without power control provides better user fairness than the coordinated methods.
Nevertheless, the coordinated dual satellite system achieves more than double the rates, at the expense of driving almost 65\% of the users to the unavailability region. At this point, it should be stressed that the proposed coordinated system is focused on delivering high throughput. Linear precoding methods that instead optimize a fairness criterion remain out of the scope of this work.
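For reference, Jain's index used in Tab. II is
$J(\mathbf{x}) = \left(\sum_i x_i\right)^2 / \left(n \sum_i x_i^2\right)$,
computed over the per-user rates; a minimal sketch:
\begin{verbatim}
import numpy as np

def jain_index(rates):
    # 1 when all users receive equal rates; 1/n when a single
    # user receives everything.
    x = np.asarray(rates, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

print(jain_index([1, 1, 1, 1]))   # 1.0 : perfectly fair
print(jain_index([4, 0, 0, 0]))   # 0.25: maximally unfair (1/n)
\end{verbatim}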
\begin{figure}[h] \centering
\includegraphics[width=0.9\columnwidth]{Co2SAT_wp6_fig3p.eps}
\caption{Spectral Efficiency (SE) distribution over the coverage for the proposed methods}\label{fig: CDF}
\end{figure}
\subsection{Power Efficiency}
Finally, in Fig. \ref{fig: PE}, the Power Efficiency (PE) versus the total power budget for the considered schemes is plotted. The power efficiency for each scheme is obtained by dividing the SE (cf. Fig. \ref{fig: spf}) by the total amount of consumed power. From this figure, it is clear that the power efficiency of the coordinated dual satellite system is better than that of all other schemes at higher values of $P_{\mathrm{tot}}$, i.e. above $P_{\mathrm{tot}}=17$ dBW. In the lower power regime, cognitive beamhopping with power control manages to outperform all other realizable schemes. Actually, it is almost as efficient as the optimal cooperative system that serves as an upper bound. This result renders the cognitive beamhopping scheme with power control the most promising approach for low-rate, power-efficient designs. This intuitive concept manages to approach the upper bound of the optimal system in terms of efficient utilization of the power resources at very low implementation cost. If both high efficiency and high throughput are required, however, one has to adhere to the more complex coordinated architectures and sufficient power budgets.
\begin{figure}[h] \centering
\includegraphics[width=0.9\columnwidth]{Co2SAT_wp6_fig2.eps}
\caption{Power efficiency of the proposed schemes}\label{fig: PE}
\end{figure}
\section{Challenges and Way Forward}\label{sec: challenges}
\subsection{Performance vs Complexity Tradeoffs }
Despite the fact that the technology readiness level (TRL) of aggressive frequency reuse configurations is considered high, especially with the introduction of low mass HPAs, the key issue in enabling the methods considered herein is the CSI that needs to be readily available at all GWs. Channel acquisition and synchronization methods, based on the recommendations of the latest DVB guidelines (see Annex E of \cite{DVB_S2X}), are part of future work. In terms of synchronization, coordination relaxes the constraint of symbol level synchronization between the two physically separated satellites during transmission. However, user scheduling needs to be performed in a joint and synchronized manner. The proposed SIUA can be executed in a centralized location or run in parallel at the GWs that share CSI.
In the intuitive, cognitive beamhopping scenario, the payload complexity remains low and no signal processing is required at each satellite either. Since only a subset of the total beams is simultaneously active, the payload requirements in terms of HPAs are lower. Thus, such a technique helps to reduce the number of amplifiers on board as well as the power demands on the payloads. However, advanced switching multiplexers are required to support the beamhopping operations. Also, the multibeam pattern of the secondary satellite is much denser, which implies an added payload complexity. The size of each smaller beam is also limited by wave diffraction rules. In terms of GW interconnection, the cognition is achieved by sharing the beamhopping pattern and the timing information of the primary satellite with the secondary system. This is achieved by a signalling link from the primary GW towards the secondary. Hence, the connectivity requirements between the GWs are lower than in the coordinated case, since only a unidirectional link needs to be implemented (cf. Figs. 1 (c) and (d)).
Based on this complexity discussion, the increased performance of the more complex coordinated techniques is justified. This complexity versus performance tradeoff proves that cooperative and cognitive techniques are complementary to each other and constitute a substantial tool for the design of future SatComs.
\subsection{Main challenges in realizing dual satellite systems}
A helpful step towards the realization of the innovative satellite system architectures is their acceptance by standardization bodies. Despite the fact that coordinated and cognitive satellite constellations are not mature in terms of TRL, the road to their standardization in the next generation of DVB satellite standards needs to be predefined. Following the example of advanced interference mitigation techniques for SatComs \cite{Christopoulos2014_TWCOM}, which have been included in \cite{DVB_S2X}, several practical constraints first need to be incorporated. By establishing practical ways to solve the framing constraints, the channel acquisition problem, as well as other inherent satellite channel impairments (e.g. non-linearities of the on-board amplifiers), the acceptance by the standardization bodies will be facilitated. In the same manner, although cognitive satellite related standards are also being realized, e.g. \cite{ETSI_COG}, the satellite co-location scenario has yet to be considered. Still, accounting for the potentially high gains at considerably low costs, especially when multibeam satellite constellations will be readily available, this road appears to be worth following.
\section{Conclusions}
In the present article, a simple paradigm that substantiates the deployment of cooperative and cognitive architectures in the next generation of satellite communications is given. On the basis of the dual satellite scenario, when the design aims to enhance the overall system throughput, the coordinated and cognitive schemes can be alternated, based on the available on-board power. As a rule of thumb, if the power budget is sufficient, then interference limits the system and thus coordinated systems are the way forward. On the other hand, in power limited systems, the cognitive approaches should be preferred. In terms of power efficiency, similar results are noted: coordinated systems are better in the interference limited regime, while cognitive ones are the choice for noise limited systems. Finally, the reported gains come at the cost of reduced instantaneous fairness. Based on the included arguments, the potential of multibeam satellite co-location by means of cooperation and cognition motivates further examination and research on this topic.
\section*{Acknowledgments \& Related Activities}
This work was partially supported by the National Research Fund, Luxembourg, under the projects ``$CO^{2}SAT$'' and $\mathrm{SEMIGOD}$. A related activity is ``$\mathrm{CoRaSat}$: Cognitive Radio for Satellite Communications'', funded by the European framework programme. The views expressed herein can in no way be taken to reflect the official opinion of SES.
\bibliographystyle{IEEEtran}
\section{Introduction}
In his paper from 1982 \cite{Kazh}, David Kazhdan proves the following theorem:
\begin{thm} \label{thm:Kazh}
Let $0<\epsilon<\frac{1}{200}$, let $G$ be an amenable group, and let $\mathcal{H}$ be a Hilbert space.
Let $\varphi: G \to \mathcal{U}(\mathcal{H})$ be a map satisfying
\begin{equation*} \label{eq:epsilon_rep}
\forall g,h \in G, \; \Vert \varphi(g) \varphi(h) - \varphi(gh) \Vert_{op} < \epsilon
\end{equation*}
Then there exists a representation $\pi: G \to \mathcal{U}(\mathcal{H})$ such that $\Vert \varphi(g) - \pi(g) \Vert_{op} < 2\epsilon$ for all $g \in G$.
\end{thm}
This theorem is a specific answer to a general question that was proposed by Ulam in \cite{Ulam}, which roughly asks:
Given a map between groups that is close to being a homomorphism, can it be approximated by a genuine homomorphism? Formally, one
can interpret this as follows (see \cite{DOT}):
\begin{defin}
Given a group $G$, a metric group $(H,d)$ and $\epsilon>0$, a map $\varphi: G \to H$ is said to be an $\epsilon$-homomorphism if
\begin{equation*}
\forall g,h \in G, \; d(\varphi(g) \varphi(h), \varphi(gh)) < \epsilon
\end{equation*}
\end{defin}
The question is then:
\begin{defin} \label{defin:unif_stab}
Let $\mathscr{G}$ be a class of groups and $\mathscr{H}$ a class of metric groups. We say $\mathscr{G}$ is uniformly stable with respect to $\mathscr{H}$ if for any $\delta>0$ there exists $\epsilon>0$ such that for any $G \in \mathscr G$, any $(H,d) \in \mathscr{H}$ and any $\epsilon$-homomorphism $\varphi : G\to H$,
there is a genuine homomorphism $\pi: G\to H$ such that $d(\varphi(g),\pi(g)) < \delta$ for all $g \in G$. When $\mathscr{G} = \{\Gamma\}$ consists of a single group, we say $\Gamma$ is uniformly stable w.r.t. $\mathscr{H}$.
\end{defin}
Kazhdan answered this question positively for the family $\mathscr{G}$ of all amenable groups, and $\mathscr{H}$ of all unitary groups arising from
arbitrary Hilbert spaces, equipped with the distance induced by the \emph{operator norm}. This was studied further by Burger, Ozawa and Thom in \cite{BOT},
where one of the main results also shows that $SL(n, \mathcal{O})$, where $n \geq 3$ and $\mathcal{O}$ is the ring of integers of a number field, is uniformly stable w.r.t. \emph{finite dimensional} unitary groups, equipped with the operator norm.
In this work we are interested in the same question, but where we replace arbitrary unitary groups with the operator norm
by finite dimensional unitary groups $U(n)$ with the (normalized) Hilbert Schmidt norm:
The norm $\Vert \cdot \Vert_{HS,n}$ on $M_{n \times n}(\mathbb{C})$ is defined by $\Vert A \Vert_{HS,n} = (\frac{1}{n} Tr(AA^*))^{\frac{1}{2}} =(\frac{1}{n} \sum_{i,j} \vert a_{i,j} \vert^2)^{\frac{1}{2}}$ and induces a bi-invariant metric on $U(n)$ by $d_{HS,n} (U,V) = \Vert U - V \Vert_{HS,n}$.
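For concreteness, the following small Python sketch (ours, purely illustrative and not part of the formal development) computes the normalized Hilbert Schmidt norm and checks its unitary invariance numerically; the QR-based sampler for random unitaries is an ingredient of this illustration only.
\begin{verbatim}
# Illustration: the normalized Hilbert-Schmidt norm and its unitary invariance.
import numpy as np

def hs_norm(A):
    # ||A||_{HS,n} = sqrt(Tr(A A^*) / n)
    return np.sqrt(np.trace(A @ A.conj().T).real / A.shape[0])

def random_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix yields a random unitary.
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, V = random_unitary(n, rng), random_unitary(n, rng)
print(abs(hs_norm(U @ A @ V) - hs_norm(A)) < 1e-12)  # True: unitary invariance
\end{verbatim}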
There is no hope, however, of proving stability for the class of all amenable (or even finite) groups with respect to the Hilbert Schmidt norms, as was mentioned in \cite{GH} and \cite{DOT}. The following theorem is the main result of our work; it completely characterizes \emph{uniformly HS-stable groups} (that is, groups that are uniformly stable w.r.t. $\mathscr{H} = \{(U(n), d_{HS,n})\}$) among finitely generated residually finite groups. This shows that HS-stability occurs extremely rarely, in contrast with the case of the operator norm:
\begin{thm} \label{thm:main}
A finitely generated residually finite group is uniformly HS stable if and only if it is virtually abelian.
\end{thm}
The "only if" direction uses the fact that finitely generated residually finite non virtually abelian groups have irreducible representations of arbitrarily large dimensions.
As a result, we prove they are not HS stable. This is done by replacing a true irreducible representation of dimension $n$ by its projection to an $(n-1)$-dimensional subspace. In this way we obtain "bad" $\epsilon-$representations which are bounded away from all true representations, while $\epsilon$ tends to $0$. This construction generalises observations made already in \cite{DOT}
The construction just mentioned shows instability. However, it suggests that a relaxed notion of stability, often called \emph{flexible} uniform stability, might still hold for many groups. Indeed, a recent result of De Chiffre, Ozawa and Thom \cite{DOT} shows that it does hold for all amenable groups:
\begin{thm} \label{thm:DOT_main}
Let $\Gamma$ be a countable amenable group. For any $\epsilon > 0, n \in \mathbb{N}$ and an $\epsilon$-homomorphism $\varphi:\Gamma \to U(n)$ with respect to $d_{HS,n}$, there exists a unitary representation $\pi:G\to U(m)$ for some $n \leq m < n + 2500\epsilon^{2}n$ and an isometry $U: \mathbb{C}^n \hookrightarrow \mathbb{C}^m$ such that
$$ \forall g \in \Gamma, \; \Vert \varphi(g) - U^* \pi(g) U \Vert_{HS,n} < 161\epsilon $$
\end{thm}
This enables us to prove the other direction of Theorem \ref{thm:main}, by showing that for a virtually abelian group we can deform the $\epsilon$-representation into a genuine representation without paying the price of enlarging the dimension.
Along the way we prove that the relation between $\delta$ and $\epsilon$ does not depend on the choice of the group if we restrict ourselves to the case when the finite index does not exceed some fixed $d \in \mathbb{N}$:
\begin{thm} \label{thm:vir_abelian_case}
For any $d \in \mathbb{N}$, the class $\mathscr{G}$ of countable virtually abelian groups with an abelian subgroup of index $\leq d$ is uniformly stable
with respect to the family $\{(U(n), d_{HS,n})\}$ of unitary groups with the corresponding Hilbert Schmidt norms.
\end{thm}
To the best of the authors' knowledge, the last result seems to be new even for the infinite cyclic group $\mathbb{Z}$, and for the collection of all finite abelian groups. It is also interesting to compare with the situation of approximate actions, that is, considering the family of symmetric groups with normalised Hamming metrics $\mathscr{H}:=\{(S_n, d_{Hamm})\}$. Becker and Chapman \cite{BC} showed that $\mathbb{Z}$ is \emph{not} uniformly stable w.r.t. $\mathscr{H}$, whereas we show it \emph{is} stable w.r.t. $\{(U(n), d_{HS,n})\}$. This seems to be the first result for which $\mathscr{H}$ and $\{(U(n), d_{HS,n})\}$ behave differently.
Becker and Chapman \cite{BC} also prove an analog of Theorem \ref{thm:DOT_main} in the realm of symmetric groups with the Hamming distances.
In their manuscript they explain how these results are relevant for the fields of property testing and quantum information theory. It seems that our results might be interpreted in a similar way, the interested reader can find more about connections between group stability and quantum information theory in \cite{VidickBlog}.
\section*{Acknowledgements}
We would like to express deep gratitude to our advisor Alex Lubotzky and to Michael Chapman for the very useful conversations, for their help and for initially pointing us towards the questions addressed here. This work is part of the MSc theses of both authors. It was supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (Grant No. 692854).
\section{Large irreducible representations and instability}
Throughout this article, all groups are supposed to be discrete and countable. The goal in this section is to assemble the tools needed to prove the ``only if'' direction of Theorem \ref{thm:main}. We will prove an \emph{instability} result, which generalizes remarks made in \cite{DOT} and \cite{GH}. But first, we need two known lemmas. The following can be found, for example, as Lemma 6.1 in \cite{GH}.
\begin{lem} \label{lem:HS_prop}
The (normalised) Hilbert Schmidt norm satisfies:
\emph{(1)} (Unitary Invariance) For any $A \in M_n(\mathbb{C}),\; U,V \in U(n)$, we have
$\Vert UAV \Vert_{HS,n} = \Vert A \Vert_{HS,n}$
\emph{(2)} For any $n,m \in \mathbb{N}$, $A \in M_{n \times m}(\mathbb{C}), B \in M_{m \times m} (\mathbb{C})$, $C \in M_{m \times n}(\mathbb{C})$ we have $\Vert ABC \Vert_{HS,n} \leq \sqrt{\frac{m}{n}}\Vert A \Vert_{op} \Vert B \Vert_{HS,m} \Vert C \Vert_{op}$
\end{lem}
The next lemma can be interpreted as a stability result for the relation defining unitary matrices. A very similar result can be found for example as Lemma 2.8 in \cite{SV}. There is a slight difference in formulation, so we include a proof for completeness.
\begin{lem} \label{lem:unitary_stab}
Let $M \in M_n(\mathbb{C})$. Then there exists a unitary $R \in U(n)$ such that $\Vert M - R \Vert_{HS,n} \leq \Vert M^* M - I_n \Vert_{HS,n}$
\end{lem}
\begin{proof}
Write the Singular Value Decomposition
$M = S \Sigma V^*$ where $V,S \in U(n)$ and $\Sigma = \diag(\lambda_1, \dots, \lambda_n)$
is a diagonal matrix with the singular values $\lambda_i \geq 0$.
We have:
$$\Vert M^* M - I_n \Vert_{HS,n} = \Vert V \Sigma S^* S \Sigma V^* - I_n\Vert_{HS,n} = \Vert V \Sigma^2 V^* - I_n \Vert_{HS,n} = \Vert \Sigma^2 - I_n \Vert_{HS,n}$$
where we used unitary invariance (\ref{lem:HS_prop}, (1)) in the last equality. Now: $\Vert \Sigma^2 - I_n \Vert_{HS,n}^2 = \frac{1}{n} \sum_{i=1}^n (\lambda_i^2 - 1)^2$.
We always have for $\lambda \geq 0$, $\vert \lambda ^2 - 1 \vert = \vert(\lambda - 1)\vert \vert(\lambda + 1)\vert \geq \vert(\lambda - 1)\vert$
So we also know that
$$\Vert \Sigma - I_n \Vert_{HS,n}^2 = \frac{1}{n} \sum_{i=1}^n (\lambda_i - 1)^2 \leq \frac{1}{n} \sum_{i=1}^n (\lambda_i^2 - 1)^2 = \Vert M^* M - I_n\Vert_{HS,n}^2 $$
Now, define the unitary $R = SV^* \in U(n)$. Observe:
$$ \Vert M - R \Vert_{HS,n} = \Vert S \Sigma V^* - SV^* \Vert_{HS,n} = \Vert S (\Sigma - I_n) V^* \Vert_{HS,n} = \Vert \Sigma - I_n \Vert_{HS,n} \leq \Vert M^* M - I_n\Vert_{HS,n}$$
where in the last equality we again used the fact that the Hilbert Schmidt norm is unitarily invariant (\ref{lem:HS_prop}, (1)).
\end{proof}
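The proof above is constructive and easy to test. The following Python sketch (ours; it plays no role in the arguments) rounds a matrix to the unitary $R=SV^*$ obtained from its singular value decomposition and checks the inequality of the lemma on randomly perturbed unitaries.
\begin{verbatim}
# Numerical check of the SVD rounding bound of the lemma (our sketch).
import numpy as np

def hs_norm(A):
    return np.sqrt(np.trace(A @ A.conj().T).real / A.shape[0])

rng = np.random.default_rng(1)
n = 8
for _ in range(1000):
    # a random matrix near a unitary: unitary plus a small perturbation
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    M = Q + 0.05 * (rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))
    S, sigma, Vh = np.linalg.svd(M)   # M = S diag(sigma) V^*
    R = S @ Vh                        # the unitary built in the proof
    assert hs_norm(M - R) <= hs_norm(M.conj().T @ M - np.eye(n)) + 1e-12
print("bound holds on all samples")
\end{verbatim}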
\begin{prop} \label{prop:instab}
If $\Gamma$ is a group with irreducible finite dimensional unitary representations of arbitrarily large dimensions, then $\Gamma$ is \emph{not} uniformly HS stable.
\end{prop}
\begin{proof}
Let $36 \leq n \in \mathbb{N}$ be such that there exists an irreducible unitary representation $\pi : \Gamma \to U(n)$. Denote by $\mathcal{M} = M_n(\mathbb{C})$ the algebra of $n \times n$ matrices.
Let $P$ be the orthogonal projection operator onto the first $n-1$ coordinates, so that $P$ has rank $n-1$. We will work with the ``corner'' $P\mathcal{M}P$, which consists of the matrices in $\mathcal{M}$ whose last column and row consist of zeros.
As such, we have a $*$ algebra isomorphism $F: M_{n-1}(\mathbb{C}) \xrightarrow{\sim} P\mathcal{M}P \subset \mathcal{M}$ defined by $F(A) = QAQ^*$ where $Q:\mathbb{C}^{n-1} \hookrightarrow \mathbb{C}^n$ is defined by $Q(e_i) = e_i$ for $i \leq n-1$, $\{e_i\}_{i=1}^n$ the standard basis. Visually:
$$
A \in M_{n-1}(\mathbb{C}) \mapsto
\begin{pmatrix}
A & 0 \\
0 & 0
\end{pmatrix}
\in M_n(\mathbb{C})
$$
Further, we have: $\Vert A \Vert_{HS,n-1} = c \Vert F(A) \Vert_{HS,n}$ for $A \in M_{n-1}(\mathbb{C})$ where $c=\sqrt{\frac{n}{n-1}}$.
Similarly, we can also define $F: Hom_{\mathbb{C}}(\mathbb{C}^n, \mathbb{C}^{n-1}) \xrightarrow{\sim} P\mathcal{M}$ by $F(B) = QB$. We then get the following ``functoriality'': for $A \in M_{n-1}(\mathbb{C}), \; B \in Hom_{\mathbb{C}}(\mathbb{C}^n, \mathbb{C}^{n-1}), C \in \mathcal{M}$, $F(AB)=F(A)F(B),\;F(BC)=F(B)C$.
Define the map $\varphi : \Gamma \to \mathcal{M}$ by $\varphi(g) = P \pi(g) P$, that is, we simply look at the restriction of $\pi(g)$ to the left upper $(n-1) \times (n-1)$ corner, and surround it by zeros. By definition, $\varphi$ maps to $P \mathcal{M} P$. Let $\varphi' = F^{-1} \circ \varphi: \Gamma \to M_{n-1}(\mathbb{C})$.
We claim $\varphi'$ is a $\frac{c}{\sqrt{n}}$-homomorphism (although technically $\varphi'$ is not valued in $U(n-1)$ yet). Indeed, we have $\Vert I_n - P \Vert_{HS,n}^2 = \frac{1}{n}$, so by (\ref{lem:HS_prop}, (1)) and (\ref{lem:HS_prop}, (2)) and the fact that $\Vert P \Vert_{op} = 1$:
\begin{align*}
\Vert \varphi'(g)\varphi'(h) - \varphi'(gh) \Vert_{HS,n-1} & = c \Vert \varphi(g)\varphi(h) - \varphi(gh) \Vert_{HS,n} = c\Vert P\pi(g) P \pi(h) P - P\pi(gh)P \Vert_{HS,n} \\
& \leq c \Vert \pi(g) P \pi(h) - \pi(g)\pi(h) \Vert_{HS,n} = c\Vert P - I_n \Vert_{HS,n} \leq \frac{c}{\sqrt{n}}
\end{align*}
Since $F(I_{n-1}) = P$, by (\ref{lem:HS_prop}, (1)) and (\ref{lem:HS_prop}, (2)):
\begin{align*}
\Vert I_{n-1} - (\varphi'(g))^* \varphi'(g) \Vert_{HS,n-1} & = c\Vert P - P \pi(g)^* P \pi(g) P \Vert_{HS,n} = c\Vert PI_nP- P \pi(g)^* P \pi(g) P \Vert_{HS,n}\\
& \leq c\Vert I_n - \pi(g)^* P \pi(g) \Vert_{HS,n} = c\Vert I_n - P \Vert_{HS,n} \leq \frac{c}{\sqrt{n}}
\end{align*}
Using Lemma \ref{lem:unitary_stab} we can find unitaries $\psi'(g)$ in $U(n-1)$ such that $\Vert \varphi'(g) - \psi'(g) \Vert_{HS,n-1} \leq \frac{c}{\sqrt{n}}$. Define $\psi = F \circ \psi': \Gamma \to \mathcal{U}(P\mathcal{M}P)$.
Since $\Vert \varphi'(g) \Vert_{op} \leq \Vert Q \Vert_{op} \Vert P \Vert_{op} \Vert \pi(g) \Vert_{op} \Vert P \Vert_{op} \Vert Q^* \Vert_{op} \leq 1,\; \Vert \psi'(g) \Vert_{op} = 1$, we get:
\begin{align*}
\Vert \psi'(g)\psi'(h) - \psi'(gh) \Vert_{HS,n-1} & \leq \Vert \psi'(g)\psi'(h) - \varphi'(gh) \Vert_{HS,n-1} + \frac{c}{\sqrt{n}} \\
& \leq \Vert \psi'(g)\psi'(h) - \varphi'(g)\varphi'(h) \Vert_{HS,n-1} + \frac{2c}{\sqrt{n}} \\
& \leq \Vert (\psi'(g) - \varphi'(g))\psi'(h) \Vert_{HS,n-1} + \Vert \varphi'(g)(\psi'(h) - \varphi'(h)) \Vert_{HS,n-1} +\frac{2c}{\sqrt{n}} \\
& \leq \frac{4c}{\sqrt{n}}
\end{align*}
Thus, $\psi': \Gamma \to U(n-1)$ is a $\frac{4}{\sqrt{n-1}}$-homomorphism.
We now claim that it is bounded away from all true unitary representations of dimension $n-1$ by $\frac{1}{2}$. Assume $\rho': \Gamma \to U(n-1)$ is a representation such that $\Vert \rho'(g) - \psi'(g) \Vert_{HS,n-1} < \frac{1}{2}$ for all $g\in\Gamma$. Define $\rho = F\circ\rho': \Gamma \to \mathcal{U}(P\mathcal{M}P)$.
We consider the bounded family of matrices $\{\rho(g) \pi(g)^*\}_{g\in\Gamma} \subset P\mathcal{M}$, and take its closed convex hull to be $\mathcal{C} \subset P\mathcal{M}$. Recall we had $\Vert \varphi(g) - \psi(g) \Vert_{HS,n} \leq \frac{1}{\sqrt{n}}, \; \Vert \rho(g) - \psi(g) \Vert_{HS,n} < \frac{1}{2c}$. By Lemma \ref{lem:HS_prop} again, we have for all $g$:
\begin{align*}
\Vert \pi(g) - P \pi(g) P \Vert_{HS,n} &\leq \Vert \pi(g) - \pi(g) P \Vert_{HS,n} + \Vert (I_n - P) \pi(g) P \Vert_{HS,n} \leq 2 \Vert I_n - P \Vert_{HS,n} = \frac{2}{\sqrt{n}} \\
\Vert I_n - \rho(g) \pi(g)^* \Vert_{HS,n} & \leq \Vert I_n - \varphi(g) \pi(g)^* \Vert_{HS,n} + \Vert (\varphi(g) - \rho(g))\pi(g)^* \Vert_{HS,n} \\
& \leq \Vert I_n - \varphi(g) \pi(g)^* \Vert_{HS,n} + \Vert \varphi(g) - \psi(g) \Vert_{HS,n} + \Vert \rho(g) - \psi(g) \Vert_{HS,n} \\
& < \Vert I_n - P \pi(g) P \pi(g)^* \Vert_{HS,n} + \frac{1}{\sqrt{n}} + \frac{1}{2c} \leq \frac{3}{\sqrt{n}} + \frac{1}{2}
\end{align*}
Notice we used $c>1$. Hence $\mathcal{C}$ is contained in the ball $B_{\frac{1}{2} + \frac{3}{\sqrt{n}}}(I_n)$, as we just showed that $\{\rho(g) \pi(g)^*\}_{g\in\Gamma}$ is contained in it. Since $\frac{1}{2} + \frac{3}{\sqrt{n}} \leq 1$ (recall that $n \geq 36$), $\mathcal{C}$ does not contain $0$.
Notice that $\Gamma$ acts isometrically on the normed space $P \mathcal{M}$ (equipped with the Hilbert Schmidt norm) by $g \cdot A = \rho(g) A \pi(g)^*$.
This is indeed an action since $\rho = F \circ \rho'$ is a homomorphism, and it preserves the $\Vert \cdot \Vert_{HS,n}$ norm by unitary invariance (\ref{lem:HS_prop}, (1)):
For $A \in P\mathcal{M}$ we have $g \cdot A = (\rho(g) + (I_n - P)) A \pi(g)^*$ where $\rho(g) + (I_n - P)$ is unitary since $\rho(g) \in \mathcal{U}(P\mathcal{M}P)$.
Notice $\mathcal{C}$ is invariant under the action of $\Gamma$, as $\{\rho(g)\pi(g)^*\}_{g \in \Gamma}$ is. Since $\mathcal{C}$ is closed and convex in a Hilbert space, it contains a unique vector of minimum norm $A$.
Since the action is isometric, this operator $A$ is invariant under the $\Gamma$ action, so it satisfies $\rho(g)A=A\pi(g)$ for all $g$ in $\Gamma$. By ``functoriality'': $\rho'(g)F^{-1}(A) = F^{-1}(A) \pi(g)$, so $F^{-1}(A):\mathbb{C}^n \to \mathbb{C}^{n-1}$ intertwines $\pi$ and $\rho'$. But since it is non-zero and $\pi$ is irreducible, by Schur's Lemma it has to be injective, which is impossible as the target has smaller dimension. This contradicts the existence of such $\rho'$.
Thus, we obtained a family $\psi_n$ of $\frac{4}{\sqrt{n-1}}$-homomorphisms for arbitrarily large $n$, each bounded away by $\frac{1}{2}$ from any true $(n-1)$-dimensional representation. So, $\Gamma$ is not uniformly HS stable.
\end{proof}
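To make the construction concrete, the following Python sketch (ours, purely illustrative) carries it out for the clock and shift matrices of size $n$, which generate an irreducible unitary representation of a finite Heisenberg group; this particular choice of representation is an assumption of the illustration only, and any irreducible representation would serve. The measured multiplicative defect of the rounded corners is then compared with the bound $\frac{4}{\sqrt{n-1}}$ from the proof.
\begin{verbatim}
# Illustration of the corner construction for clock-and-shift matrices.
import numpy as np

def hs_norm(A):
    return np.sqrt(np.trace(A @ A.conj().T).real / A.shape[0])

def nearest_unitary(M):
    S, _, Vh = np.linalg.svd(M)
    return S @ Vh

n = 100
w = np.exp(2j * np.pi / n)
Zc = np.diag(w ** np.arange(n))        # clock matrix
Xs = np.roll(np.eye(n), 1, axis=0)     # shift matrix; Zc Xs = w Xs Zc
elems = [(w ** c) * np.linalg.matrix_power(Xs, a)
         @ np.linalg.matrix_power(Zc, b)
         for a in range(3) for b in range(3) for c in range(3)]

def psi(U):                            # psi'(g): rounded (n-1) x (n-1) corner
    return nearest_unitary(U[:n-1, :n-1])

defect = max(hs_norm(psi(g) @ psi(h) - psi(g @ h))
             for g in elems for h in elems)
print(defect, "<=", 4 / np.sqrt(n - 1))  # defect stays below the proven bound
\end{verbatim}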
The following is where the residual finiteness and finite generation assumptions come in: we prove that in this situation being virtually abelian is equivalent to the dimensions of the finite dimensional irreducible unitary representations being bounded.
\begin{lem} \label{lem:irrep_dims}
\emph{(1)} If $\Gamma$ is virtually abelian with an abelian subgroup of index $\leq d$, then each finite dimensional irreducible complex representation of $\Gamma$ has dimension $\leq d$.
\emph{(2)} If $\Gamma$ is residually finite, finitely generated but not virtually abelian, then it has irreducible finite dimensional unitary representations of arbitrarily large dimensions.
\end{lem}
\begin{rem}
Irreducible unitary representations of virtually abelian groups are automatically finite dimensional, see \cite{Moore}, but we will not be using this fact.
\end{rem}
\begin{proof}
(1) Let $A \leq \Gamma$ be an abelian subgroup of index $k \leq d$. Let $\rho: \Gamma \to GL(V)$ be a finite dimensional irreducible $\Gamma$-representation. Consider $\rho\vert_{A}$ as an $A$-representation; it has an irreducible $A$-sub-representation (since $V$ is finite dimensional, it has an $A$-invariant subspace of minimal positive dimension).
Since $A$ is abelian, this irreducible subspace is $1$-dimensional, spanned by some vector $v_0$, and $A$ acts on it by some character $\chi$. Let $g_1, \dots, g_k$ denote left coset representatives of $A$ in $\Gamma$.
Since $V$ is irreducible, we have $\spam(\Gamma \cdot v_0) = V$. Since $A$ preserves $\spam(v_0)$, we obtain:
$$\spam(\Gamma \cdot v_0) = \spam\big(\bigcup_{i=1}^{k} g_i A \cdot v_0 \big) = \spam\{g_i\cdot v_0\}_{i=1}^{k}$$
Thus, $\dim(V) = \dim(\spam\{g_i\cdot v_0\}_{i=1}^{k}) \leq d$.
(2) Assume all irreducible unitary representations of $\Gamma$ (f.g. residually finite) have dimension less than $d$, we shall show it is virtually abelian.
For each $e \neq x \in \Gamma$, there exists a finite index normal subgroup $N_x \triangleleft \Gamma$ with $x\notin N_x\triangleleft\Gamma$.
Since irreducible finite dimensional representations of a finite group separate points, there exists a (unitary) irrep $\rho_{x}:\Gamma/N_x\to U(n)$
for some $n\in\mathbb{N}$ with $\rho_{x}(x\cdot N_x) \neq I_n$. If we pre-compose $\rho_{x}$ with the quotient map $\Gamma \twoheadrightarrow \Gamma / N_x$, we get an irreducible representation of $\Gamma$, which we will still denote by $\rho_{x}$.
Thus by assumption: $n \leq d$, and $\rho_x(x) \neq I_n$. Since each $\rho_x(\Gamma)$ is finite, by the Jordan-Schur Theorem (see for example \cite{TerryBlog}) there is an integer $C$ depending only on $d$ such that for all $x \neq e$ there is an abelian subgroup $A'_{x} \leq \rho_{x}(\Gamma)$ of index bounded by $C$.
Pulling back, i.e. setting $A_{x}=\rho_{x}^{-1}(A'_{x})$, we have $[\Gamma:A_{x}]\leq C$ by the correspondence theorem. Let $A=\bigcap_{x} A_{x} \leq \Gamma$.
This intersection is actually of finite index: it is known that $\Gamma$, being finitely generated, has finitely many subgroups of index less than $C$. And so, $A$ is a finite intersection of finite index subgroups and therefore is of finite index. Lastly, we show $A$ is abelian.
Indeed, suppose $g,h\in A$ do not commute, so that the commutator $[g,h] \ne e$. Then $\rho_{x}([g,h])=1$ for all $e \neq x \in \Gamma$, because $\rho_x(g)$ and $\rho_x(h)$ belong to the abelian subgroup $A'_x$. In particular, $\rho_{[g,h]}([g,h])=1$. On the other hand, $\rho_{[g,h]}([g,h]) \ne 1$ by construction, a contradiction. Thus, $A$ is abelian.
\end{proof}
\section{Stability of virtually abelian groups and beyond}
In this section we set out to prove Theorem \ref{thm:vir_abelian_case}, which will give the ``if'' direction of our main result, Theorem \ref{thm:main}. We do so by proving a generalization of Theorem \ref{thm:vir_abelian_case} that holds for amenable groups with irreducible finite dimensional representations of bounded dimensions, giving a converse to Proposition \ref{prop:instab} in the amenable setting. Recall the following simple fact:
\begin{lem} \label{lem: unireps}
Let $\Gamma$ be any group and let $\pi: \Gamma \to U(n)$ be a unitary representation. Then it is completely reducible. That is, there exist $W \in U(n)$ and irreducible representations $\pi_1: \Gamma \to U(n_1), \dots, \pi_k: \Gamma \to U(n_k)$ with $n_1+ \dots +n_k=n$ such that $\pi=W^* \diag(\pi_1, \dots, \pi_k)W$.
\end{lem}
The following contains the main technical novelty of the article:
\begin{thm}\label{thm:amenable_case}
\emph{(1)} Given $d \in \mathbb{N}$, the class $\mathscr{H}$ of amenable groups for which all irreducible finite dimensional unitary representations are of dimension $\leq d$ is uniformly stable with respect to the family $\{(U(n), d_{HS,n})\}$ of unitary groups with the corresponding Hilbert Schmidt norms.
\emph{(2)} Let $\Gamma$ be an amenable group. Then $\Gamma$ is uniformly HS-stable if and only if there exists $d \in \mathbb{N}$, such that all finite dimensional irreducible unitary representations of $\Gamma$ are of dimension $\leq d$.
\end{thm}
\begin{proof}
The "only if" part of (2) follows directly from Proposition \ref{prop:instab}. The converse direction of (2) clearly follows from (1).
For showing (1), we will prove that for any $\Gamma$ in the class $\mathscr{H}$, any $\epsilon$-homomorphism $\varphi:\Gamma\to U(n)$ into the unitary group $U(n)$ with the Hilbert Schmidt norm $\Vert\cdot\Vert_{HS,n}$ and for
$$\delta=161\epsilon + 100\big( (\sqrt{2}+1)\sqrt{1+2500\epsilon^2}+\sqrt{d-1} \big)\epsilon$$ there exists a true representation $\rho: \Gamma \to U(n)$ with $\Vert \varphi(g) - \rho(g) \Vert_{HS,n} < \delta$ for all $g \in \Gamma$.
Firstly, by De Chiffre, Ozawa and Thom's result (Theorem \ref{thm:DOT_main}), there exists $n \leq m < n+2500\epsilon^{2}n$, an isometry $U:\mathbb{C}^{n}\hookrightarrow\mathbb{C}^{m}$ and a unitary representation $\pi:\Gamma\to U(m)$ such that:
\begin{equation} \label{eq:DOT}
\forall g\in\Gamma\text{, }\Vert\varphi(g)-U^{*}\pi(g)U\Vert_{HS}<161\epsilon
\end{equation}
If $\epsilon<\frac{1}{50\sqrt{n}}$, i.e. if $n+2500\epsilon^{2}n<n+1$, then $m=n$ automatically and we have nothing to prove ($\rho(g)= U^* \pi(g) U$ is as required). So, from now on we assume that $ \frac{1}{n} \leq 2500\epsilon^2$.
Applying Lemma \ref{lem: unireps} to $\pi$, we deduce that there exist $W \in U(m)$, $k \leq m$ and irreducible representations $\pi_1, ..., \pi_k$ such that $\pi(g)=W^{*}\diag(\pi_{1}(g),\dots,\pi_{k}(g))W$ for every $g \in \Gamma$. Note that $d_i := \dim(\pi_i) \le d$ for each $i=1, \dots, k$ by assumption.
Replace $U$ by $WU$ and $\pi(g)$ by $\diag(\pi_{1}(g),\dots,\pi_{k}(g))$. Note that the new $U$ is still an isometry with the same domain and codomain and we still have $\Vert\varphi(g)-U^{*}\pi(g)U\Vert_{HS}<161\epsilon$. Denote the standard basis by $\{e_1,...,e_m\}$. Notice $\pi(g)$ is now block-diagonal in this basis.
Let $P = UU^*$ be the projection onto the image of $U$. Denote by $[P]_n$ the $m \times n$ matrix of the first $n$ columns of $P$ in the basis $\{e_1,...,e_m\}$. That is, $[P]_n: \mathbb{C}^n \to \mathbb{C}^m$ is given by $[P]_ne_i=Pe_i$ for $i=1 \dots n$. Define $M:= U^* [P]_n \in M_n(\mathbb{C})$. Notice $U^*P = U^* UU^* = U^*$, so this is just the square matrix consisting of the columns $\{U^* e_1, U^* e_2, \dots, U^* e_n\}$.
\begin{claim} \label{claim:PM}
\emph{(1)} $\Vert I_m - P \Vert_{HS,m}^2 =\Vert I_m - P^* P \Vert_{HS,m}^2 \leq 2500 \epsilon^2$
\emph{(2)} $\Vert [P]_n^* [P]_n - I_n \Vert_{HS,n} \leq \sqrt{1+2500\epsilon^2} 50 \epsilon$
\emph{(3)} $\Vert M^* M - I_n\Vert_{HS,n} \leq \sqrt{1+2500\epsilon^2} 50 \epsilon$
\end{claim}
\begin{proof}
(1) Since $P$ is a rank $n$ projection, $I_m - P$ is a rank $m - n$ projection, we have that
$$\Vert I_m - P \Vert_{HS,m}^2 = \frac{1}{m} \Tr((I_m - P)^* (I_m - P)) = \frac{1}{m} \Tr((I_m - P)) = \frac{m - n}{m} \leq 2500 \epsilon^2$$
(2) Notice that $[P]_n^* [P]_n$ appears as a corner of $P^* P$ as follows:
$$
P^*P =
\begin{pmatrix}
[P]_n^* [P]_n & * \\
* & *
\end{pmatrix}, \;
P^*P - I_m =
\begin{pmatrix}
[P]_n^* [P]_n - I_n & * \\
* & *
\end{pmatrix}
$$
Thus, since $n\Vert [P]_n^* [P]_n - I_n \Vert_{HS,n}^2$ is the sum of square entries of $[P]_n^* [P]_n - I_n$, we have that:
$$\frac{n}{m}\Vert [P]_n^* [P]_n - I_n \Vert_{HS,n}^2 \leq \Vert P^* P - I_m \Vert_{HS,m}^2 \leq 2500 \epsilon^2$$
So:
$$\Vert [P]_n^* [P]_n - I_n \Vert_{HS,n} \leq \sqrt{m/n} 50\epsilon \leq \sqrt{1+2500\epsilon^2} 50 \epsilon$$
(3) We just pull (2) back to $\mathbb{C}^n$. Since $P = UU^*$ is a projection:
$$\Vert M^* M - I_n\Vert_{HS,n} = \Vert [P]_n^* U U^* [P]_n - I_n\Vert_{HS,n} = \Vert [P]_n^* [P]_n - I_n\Vert_{HS,n} \leq
\sqrt{1+2500\epsilon^2} 50\epsilon $$
\end{proof}
Hence, we can apply Lemma \ref{lem:unitary_stab} to (3) in claim \ref{claim:PM} and obtain a unitary $R \in U(n)$ which is $\sqrt{1+2500\epsilon^2} 50 \epsilon$-close to $M$ in the $HS$-norm.
Define the block-diagonal matrix $\pi'(g):=\diag(\pi_1(g), \dots, \pi_r(g), 1, \dots, 1)$ for the maximal $r \leq k$ such that $d_1+ d_2 +\dots+ d_r \leq n$, filling in the rest of the diagonal with $1$'s so that $\pi'(g)$ is an $n \times n$ matrix. Note that maximality guarantees $d_1+ d_2 +\dots+ d_r \geq n-d+1$, since all blocks have size $\leq d$. Finally, define the representation $\rho : \Gamma \to U(n)$ as $\rho(g)=R \pi'(g) R^*$. This is a unitary representation since $\pi'$ is a unitary representation and $R$ is unitary.
{\it Remark:} If $d=1$, then each $d_i$ equals $1$ as well, and therefore each $\pi_i$ is a character, so let us denote it by $\chi_i$, which is more natural in this case. Let $R_i$ be the $i$-th column of $R$. Using that $R$ is unitary, i.e. $R^*R=I_n$, we obtain $\rho(g)R_i=R \pi'(g) (R^*R_i)=R \pi'(g) e_i=R \chi_i(g) e_i=\chi_i(g) (Re_i)=\chi_i(g) R_i$, so there is a nice short formula for $\rho$ in the orthonormal basis given by $\{R_1,...,R_n\}$.
\begin{claim} \label{claim:inequalities}
\emph{(1)} $\Vert M\pi'(g)M^*-\rho(g) \Vert_{HS,n} \leq 100\epsilon\sqrt{1+2500\epsilon^2}$
\emph{(2)} $\Vert U^*\pi(g)U-M\pi'(g)M^* \Vert_{HS,n} \leq 100(\sqrt{2 + 5000\epsilon^2} + \sqrt{d-1})\epsilon$
\end{claim}
\begin{proof}
(1) Note that $\Vert \pi'(g) \Vert_{op}=\Vert R \Vert_{op}=1$ because these two operators are unitary, $\Vert M \Vert_{op}= \Vert U^* [P]_n \Vert_{op} \leq \Vert U^* \Vert_{op} \Vert [P]_n \Vert_{op} \leq \Vert U^* \Vert_{op} \Vert P \Vert_{op}=1$. Using this, (\ref{lem:HS_prop}, (2)) and Claim \ref{claim:PM} we obtain:
\begin{align*}
\Vert M\pi'(g)M^*-\rho(g) \Vert_{HS,n} & = \Vert M\pi'(g)M^*-R \pi'(g)R^* \Vert_{HS,n} &\\
& \leq \Vert (M-R)\pi'(g)M^* \Vert_{HS,n}+\Vert R\pi'(g)(M-R)^* \Vert_{HS,n} \\
& \leq \Vert (M-R)\Vert_{HS,n} \Vert \pi'(g)M^* \Vert_{op}+\Vert R\pi'(g)\Vert_{op} \Vert(M-R)^* \Vert_{HS,n} &(\ref{lem:HS_prop}, (2))\\
& \leq 2\Vert (M-R)\Vert_{HS,n} \leq 100\epsilon\sqrt{1+2500\epsilon^2}
\end{align*}
(2) Let $Q: \spam(e_1,...,e_n) \hookrightarrow \spam(e_1,...,e_m)$ be given by $Q e_i := e_i$ for $i=1 \dots n$. It follows that $Q^*(e_i) = e_i$ for $i=1 \dots n$ and $Q^*(e_i) = 0$ for $i>n$. Denote $n \geq n':=d_1+\dots+d_r \geq n-d+1$.
Recall the bound $\Vert I_m - P \Vert_{HS,m}^2 \leq 2500\epsilon^2$ in \ref{claim:PM}. We deduce:
$$\Vert [P]_nQ^*-I_m \Vert_{HS,m}^2=\frac{1}{m} \sum_{i=1}^{n} \Vert Pe_i - e_i \Vert^2+\frac{m-n}{m} \leq \frac{1}{m} \sum_{i=1}^{m} \Vert Pe_i - e_i \Vert^2+\frac{m-n}{n} \leq 2500\epsilon^2 +2500\epsilon^2 $$
So $\Vert [P]_nQ^*-I_m \Vert_{HS,m} \leq 50\sqrt{2}\epsilon$. Note that $\pi'(g)e_i=Q^*\pi(g)Qe_i$ for $i \leq n'$ and $\Vert \pi'(g)e_i-Q^*\pi(g)Qe_i \Vert = \Vert e_i-Q^*\pi(g)e_i \Vert \leq 2$ for $n \geq i>n'$, since $\pi'$ was defined to act trivially on these vectors. This implies:
$$\Vert Q^*\pi(g)Q-\pi'(g) \Vert_{HS,n}^2=\frac{1}{n} \sum_{i=1}^n \Vert Q^*\pi(g)Qe_i-\pi'(g)e_i \Vert^2=\frac{1}{n} \sum_{i=n'+1}^n \Vert Q^*\pi(g)Qe_i-\pi'(g)e_i \Vert^2 \leq \frac{4(n-n')}{n}$$
Now recall we assumed $\frac{1}{n} \leq 2500\epsilon^2$. Since $n - n' \leq d-1$, we deduce $\Vert Q^*\pi(g)Q-\pi'(g) \Vert_{HS,n} \leq 100\sqrt{d-1}\epsilon$.
Using (\ref{lem:HS_prop}, (2)) again and also using $\Vert U \Vert_{op}=\Vert U^* \Vert_{op}=1$, $\Vert [P]_n \Vert_{op}=\Vert [P]_n^* \Vert_{op} \le \Vert P \Vert_{op}= 1$ we obtain:
\begin{align*}
\Vert U^*\pi(g)U & -M\pi'(g)M^* \Vert_{HS,n} = \Vert U^*\pi(g)U-U^*[P]_n\pi'(g)[P]_n^*U \Vert_{HS,n} &\\
& \leq \sqrt{\frac{m}{n}}\Vert \pi(g)-[P]_n\pi'(g)[P]_n^* \Vert_{HS,m} &(\ref{lem:HS_prop}, (2)) \\
& \leq \sqrt{\frac{m}{n}}\Vert \pi(g)-[P]_nQ^*\pi(g)Q[P]_n^* \Vert_{HS,m} \\
& + \sqrt{\frac{m}{n}}\Vert [P]_nQ^*\pi(g)Q[P]_n^* - [P]_n\pi'(g)[P]_n^* \Vert_{HS,m} \\
& \leq \sqrt{\frac{m}{n}}\Vert (I_m-[P]_nQ^*)\pi(g) \Vert_{HS,m} + \sqrt{\frac{m}{n}}\Vert [P]_nQ^*\pi(g)(I_m-Q[P]_n^*) \Vert_{HS,m} \\
& +\Vert Q^*\pi(g)Q-\pi'(g) \Vert_{HS,n} &(\ref{lem:HS_prop}, (2))\\
& \leq 2\sqrt{\frac{m}{n}}\Vert (I_m-[P]_nQ^*) \Vert_{HS,m}+100\sqrt{d-1}\epsilon \\
& \leq \sqrt{1 + 2500\epsilon^2}100\sqrt{2}\epsilon + 100\sqrt{d-1}\epsilon &(\ref{lem:HS_prop}, (2))
\end{align*}
Notice that in the second to last line we used the fact that $\Vert (I_m - [P]_nQ^*) \Vert_{HS,m} = \Vert (I_m - Q[P]_n^*) \Vert_{HS,m}$, which holds since $\Vert A \Vert_{HS,m}= \Vert A^* \Vert_{HS,m}$ for any $A \in M_m(\mathbb{C})$.
\end{proof}
By combining both inequalities of Claim \ref{claim:inequalities}, we obtain:
\begin{align*}
\Vert U^* \pi(g) U - \rho(g) \Vert_{HS,n} & \leq \Vert U^* \pi(g) U - M \pi'(g) M^* \Vert_{HS,n}+\Vert \rho(g)- M \pi'(g) M^*\Vert_{HS,n} \\
& \leq 100(\sqrt{2 + 5000\epsilon^2} + \sqrt{d-1})\epsilon+100\epsilon\sqrt{1+2500\epsilon^2} \\
& = 100((\sqrt{2}+1)\sqrt{1+2500\epsilon^2}+\sqrt{d-1})\epsilon
\end{align*}
Hence, using the inequality guaranteed by De Chiffre, Ozawa, Thom (\ref{eq:DOT}) and the triangular inequality:
\begin{align*}
\Vert \varphi(g) - \rho(g) \Vert_{HS,n} & \leq \Vert U^* \pi(g) U - \varphi(g) \Vert_{HS,n} + \Vert U^* \pi(g) U - \rho(g) \Vert_{HS,n} \\
& < 161\epsilon + 100\big( (\sqrt{2}+1)\sqrt{1+2500\epsilon^2}+\sqrt{d-1} \big)\epsilon
\end{align*}
\end{proof}
As a special case of Theorem \ref{thm:amenable_case}, we obtain stability in the virtually abelian setting:
\begin{proof} [Proof of Theorem \ref{thm:vir_abelian_case}]
Let $G$ be from the class $\mathscr{G}$ of virtually abelian groups with an abelian subgroup of index $\leq d$. Lemma \ref{lem:irrep_dims}, (1) tells us that the dimensions of finite dimensional irreducible unitary representations of $G$ are bounded by $d$ as well.
Since any virtually abelian group is amenable, we conclude that $\mathscr{G} \subseteq \mathscr{H}$ and therefore Theorem \ref{thm:amenable_case} applies directly.
\end{proof}
We now deduce Theorem \ref{thm:main} as a direct consequence of the last section and the present one:
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let $\Gamma$ be finitely generated and residually finite. Assume it is virtually abelian. Then in particular, Theorem \ref{thm:vir_abelian_case} shows it is HS-stable.
Conversely, assume $\Gamma$ is not virtually abelian. By Lemma \ref{lem:irrep_dims}, (2) we know $\Gamma$ must have irreducible unitary representations of arbitrarily large dimensions. As a result, Proposition \ref{prop:instab} shows $\Gamma$ is not HS-stable.
\end{proof}
\begin{rem}
Theorem \ref{thm:main} is not valid without the assumption of $\Gamma$ being residually finite. For example, by \cite{JushenkoMonod} there exists an infinite finitely generated simple amenable group $D$. Such a group is not virtually abelian. However, every finite dimensional representation of $D$ is trivial (by Malcev's theorem \cite{Malcev}) and so Theorem \ref{thm:amenable_case} implies that $D$ is uniformly HS stable.
\end{rem}
{\it Question.} This paper shows that within the classes of (a) residually finite groups and (b) amenable groups, the property of uniform HS stability is equivalent to the property that all finite dimensional irreducible unitary representations are of bounded dimension. A common generalization of (a) and (b) is the family of sofic groups. It is therefore natural to ask if this characterisation is still valid for sofic or even hyperlinear groups.
\begin{bibdiv}
\begin{biblist}
\bib{BC}{article}{
title={Stability of approximate group actions: uniform and probabilistic},
author={Becker, O.},
author={Chapman, M.},
journal={arXiv: Group Theory},
note={Available at \url{https://arxiv.org/abs/2005.06652}},
year={2020}
}
\bib{BOT}{article}{
author={Burger, M.},
author={Ozawa, N.},
author={Thom, A.},
title={On Ulam stability},
journal={Israel J. Math.},
volume={193},
date={2013},
number={1},
pages={109--129},
}
\bib{DOT}{article}{
title={Operator algebraic approach to inverse and stability theorems for amenable groups},
author={De Chiffre, M.},
author={Ozawa, N.},
author={Thom, A.},
journal={Mathematika},
year={2019},
volume={65},
pages={98-118},
note = {Available at \url{https://arxiv.org/abs/1706.04544}}
}
\bib{GH}{article}{
title={Inverse and stability theorems for approximate representations of finite groups},
author={Gowers, W. T.},
author={Hatami, O.},
date={2017},
journal={Matematicheskii Sbornik},
volume={208},
number={12},
pages={70-106},
note={Available at arXiv:1510.04085v2}
}
\bib{JushenkoMonod}{article}{
author={Juschenko, K.},
author={Monod, N.},
title={Cantor systems, piecewise translations and simple amenable groups},
date={2013},
journal={Annals of Mathematics},
volume={178},
pages={775-787},
note={Available at \url{https://arxiv.org/pdf/1204.2132.pdf}},
}
\bib{Kazh}{article}{
author={Kazhdan, D.},
title={On $\varepsilon$-representations},
journal={Israel J. Math.},
volume={43},
date={1982},
number={4},
pages={315--323},
}
\bib{Malcev}{article}{
author={Malcev, A. I.},
title={On isomorphic representations of infinite groups of matrices},
journal={Mat. Sb.},
date={1940},
volume={9},
pages={405-422},
note={(also in Amer. Math. Soc. Transl. 45 (1965), 1-18)}
}
\bib{Moore}{article}{
author={Moore, C.},
title={Groups with finite dimensional irreducible representations},
journal={Trans. Amer. Math. Soc.},
year={1972},
volume={166},
pages={401-410}
}
\bib{SV}{article}{
author={Slofstra, W.},
author={Vidick, T.},
title={Entanglement in Non-local Games and the Hyperlinear Profile of Groups},
journal={Annales Henri Poincar{\'e}},
year={2018},
volume={19},
pages={2979-3005}
}
\bib{TerryBlog}{article}{
author={Tao, T.},
title={The Jordan-Schur theorem},
date={2011},
eprint={{https://terrytao.wordpress.com/2011/10/05/the-jordan-schur-theorem/}}
}
\bib{Ulam}{book}{
author={Ulam, S. M.},
title={A collection of mathematical problems},
series={Interscience Tracts in Pure and Applied Mathematics, no. 8},
publisher={Interscience Publishers, New York-London},
date={1960},
pages={xiii+150}
}
\bib{VidickBlog}{article}{
author={Vidick, T.},
title={Pauli braiding},
date={2017},
eprint={{https://mycqstate.wordpress.com/2017/06/28/pauli-braiding/}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section*{Introduction}
We study the Bernoulli Boolean discrete percolation model on the $d$-dimensional lattice $\Z^d$. This is a discrete percolation model which can be informally described
as follows. Consider a Bernoulli point process $\cX$ with retention parameters $0<p_x<1$, $x\in\Z^d$, on the $d$-dimensional lattice $\Z^d$. This means that each site
$x\in\Z^d$ is {\it{present}} or {\it{absent}} in $\cX$ with probability $p_x$ or $1-p_x$, respectively, and independently of anything else. Each point of $\cX$ is the
center of a ball of random radius in the metric induced by the $L_1$ norm. The random radii $R_x$, $x\in\Z^d$, are independent and independent of $\cX$. We consider the
\emph{occupied region} which is defined as the subset of $\Z^d$ obtained by taking the union of all random balls centred at the points of $\cX$.
This model is the discrete counterpart of the Poisson Boolean model of continuum percolation. In the Poisson Boolean model a ball of random radius is centred at each point of a
homogeneous Poisson point process with density $\lambda$ on $\R^d$. The corresponding radii form an independent and identically distributed collection of non-negative
random variables which are also independent of the point process. Denote by $\cB$ the union of these balls and by $\cC$ the connected component of $\cB$ containing the origin. Let $R$ be one of the random radii and denote by $\bf{P}$ the law governing
the continuous boolean model. Also, denote by $\bf{E}$ the corresponding expectation operator. In \cite{Hall}, Hall proved that
for values of $\lambda$ small enough, $\cC$ is almost surely bounded provided that $\E[R^{2d-1}]$ is finite. In \cite{Meester_and_Roy}, Meester and Roy proved that, for $d \geq 2$ and $\E[R^{d}]$ finite, the expected number of balls in the occupied component containing the origin is finite for all small enough $\lambda$ if, and only if, $\E[R^{2d}]$ is finite. They also proved that if $\E[R^{2d-1}]$ is finite then $\P(\mbox{number of balls in any occupied component is finite})=1$ provided that $\lambda$ is small
enough. In \cite{Gouere}, Gouere showed that the set $\mathcal{C}$ is almost surely bounded for small enough $\lambda$ if and only if $\E[R^d]$ is finite.
In this paper we prove that if $p_x=p \in (0,1)$ for all $x$ and the random radii $(R_x, x \in \mathbb{Z}^d)$ are i.i.d. random variables with finite $d$-moment, then the connected
components arising in the discrete Boolean model are almost surely finite for sufficiently small values of $p$. We also prove that such behavior does not occur if the
random radii have infinite $d$-moment. Then, using a coupling argument, we extend the result about subcriticality to the case where the values of $p_x$ are not constant and
the random radii are independent but not necessarily identically distributed. We then use this subcriticality result to provide a graphical construction method for
interacting particle systems with interactions of infinite range. In order to prove this result we assume that the generator of the particle system admits a Kalikow-type decomposition.
Recently, this type of decomposition has been explored by Galves et al. in the
context of perfect simulation of interacting particle systems with interactions of infinite range. More precisely, in \cite{Galves_Garcia_Locherbach} the authors exhibit a
sufficient condition under which a Kalikow-type decomposition holds for the transition rates of interacting particle systems with interactions of infinite range. Namely, if the
transition rates satisfy a continuity condition then the referred decomposition holds. For further details on Kalikow-type decompositions see \cite{kalikow} and \cite{bramson-kalikow}.
This paper is organized as follows. In Section \ref{booleanpercolation} we describe the discrete Boolean percolation model and state the main result of this work, which concerns the absence of percolation in the model described above. This result is proved in Section \ref{mainp}, following ideas developed for the continuous Boolean percolation model studied in \cite{Gouere}.
In Section \ref{PoTP} we extend the result in \cite{harris} on the graphical construction of interacting particle systems with finite-range interactions to the case of interactions of infinite range, using the results of Section \ref{booleanpercolation} under a mild assumption on the decay of the range of interaction.
\section{Definitions, notation and main results}
\label{booleanpercolation}
Throughout this paper $\N_0$ will denote the set of non-negative integer numbers. We write $\| \;\|$ for the $L_1$ norm on $\Z^d$ and $|A|$ for the cardinality of any set $A\subset\Z^d$. Also, $B(x,r)=\{y\in\Z^d:\, \|y-x\|\leq r\}$ denotes the (closed) ball of radius $r$ centred at $x$ and $S_r=\{x\in\Z^d:\,\|x\|=r\}$ denotes the sphere of radius $r$. For any set $A\subset\Z^d$, $A^c$ stands for the complement of $A$.
If $F$ denotes a cumulative distribution function, let $F^{-1}$ be the generalized inverse of $F$ defined by $F^{-1}(u)=\inf\{r\in\R: F(r)\geq u\}$ where $u\in[0,1]$.
If $X$ and $Y$ are two stochastic elements equally distributed, we write $X \stackrel{D}{=}Y$.
A \emph{Bernoulli point process on} $\Z^d$ with retention parameters $\np=(p_x:x\in\Z^d)$, where $0<p_x<1$ for all $x\in\Z^d$, is a family of independent $\{0,1\}$-valued random variables
$\cX=(X_x:\,x\in\Z^d)$ such that $p_x$ is the probability of the event $\{X_x=1\}$. Identify the family of random variables $\cX$ with the random subset $\cP$ of $\Z^d$ defined by
$\cP=\{x\in\Z^d:\, X_x = 1\}$, whose distribution is the product measure whose marginal at each site $x$ is a Bernoulli distribution with parameter $p_x$.
By a \emph{Bernoulli marked point process} on $\Z^d$ we mean a pair $(\cX,\cR)$ formed by a Bernoulli point process $\cX$ on $\Z^d$ and a family of independent $\mathbb{N}_0$-valued random variables
$\cR=(R_x: x\in\Z^d)$ called marks. We assume that these marks are independent of the point process $\cX$.
Let $(\cX,\cR)$ be a Bernoulli marked point process on $\Z^d$. Let $p_x$ be the retention parameter of the random variable $X_x$ and let $\nu_x$ be the probability function of the random variable $R_x$. If there exists a value $p\in(0,1)$ and a probability function $\nu$ on $\N_0$ such that $p_x=p$ and $\nu_x=\nu$ for every $x\in\Z^d$ we say that the marked point process $(\cX,\cR)$ is \emph{spatially homogeneous with retention parameter $p$ and marks distributed according to $\nu$}.
Let $(\cX,\cR)$ be a Bernoulli marked point process on $\Z^d$ with retention parameters $\np=(p_x:x\in\Z^d)$ and marks distributed according to a family of probability functions
$\mathbf{n}=(\nu_x:x\in\Z^d)$. We denote by $\P_{\np,\,\nn}$ and $\E_{\np,\,\nn}$ respectively the probability measure and the expectation operator induced by $(\cX,\cR)$.
If $(\cX,\cR)$ is spatially homogeneous with retention parameter $p$ and marks distributed according to the probability function $\nu$, we denote by $\P_{p,\,\nu}$ and $\E_{p,\,\nu}$
respectively the probability measure and the expectation induced by $(\cX,\cR)$.
Let $(\cX,\cR)$ and $(\cX',\cR')$ be two marked point process on $\Z^d$ defined on the same probability space. If
\[X_x\leq X'_x\qquad \mbox{ and }\qquad R_x\leq R'_x,\qquad x\in\Z^d,\]
we say that $(\cX,\cR)$ is \emph{dominated} by $(\cX',\cR')$ and we denote this by $(\cX,\cR)\preceq (\cX',\cR')$.
\subsection*{Random Graphs and Percolation.} Let $(\cX,\cR)$ be a Bernoulli marked point process on $\Z^d$. Then we define an associated random graph $\cG(\cX,\cR)=(\Z^d,\cE)$ as the
undirected random graph with vertex set $\Z^d$ and edge set $\cE$ defined by the condition $\{x,y\}\in\cE$ if, and only if, $X_x=1$ and $y\in B(x,R_x)$ or $X_{y}=1$ and $x\in B(y,R_y)$.
A \emph{path} on $\cG(\cX,\cR)$ is a sequence of distinct vertices $x_0, x_1,\dots, x_n$ such that $\{x_{i-1},x_i\}\in\cE$, $i=1,\dots, n$.
A set of vertices $C\subset\Z^d$ is connected if, for every pair of distinct vertices $x$ and $y$ in $C$, there exists a path on $\cG(\cX,\cR)$ using vertices only from $C$, starting at $x$ and ending at $y$. The connected components of the graph $\cG(\cX,\cR)=(\Z^d,\cE)$ are its maximal connected subgraphs.
The cluster $C(x)$ of vertex $x$ is the connected component of the graph $\cG(\cX,\cR)$ containing $x$. Define the percolation event as follows:
\begin{eqnarray}
[\mbox{Percolation}]:=\bigcup_{x\in\Z^d}\left\{|C(x)|=\infty\right\}.
\end{eqnarray}
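Before discussing the phase transition, we illustrate the model with a small Monte Carlo experiment. The Python sketch below (ours; the box size, the retention parameter and the geometric radius law are arbitrary choices) samples the model on a finite box of $\Z^2$ and reports the largest cluster sizes among present points. It uses the observation that two present points lie in the same cluster precisely when their balls overlap, possibly through an absent site, i.e. when $\|x-y\|\leq R_x+R_y$.
\begin{verbatim}
# Monte Carlo sketch of the Bernoulli Boolean model on a box of Z^2.
import numpy as np
from collections import deque

def largest_clusters(L=50, p=0.05, q=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pts = [(x, y) for x in range(-L, L + 1) for y in range(-L, L + 1)
           if rng.random() < p]
    R = {z: int(rng.geometric(q)) - 1 for z in pts}  # P(R = r) = q(1-q)^r
    seen, sizes = set(), []
    for s in pts:
        if s in seen:
            continue
        seen.add(s)
        comp, queue = 0, deque([s])
        while queue:                                 # BFS over one cluster
            z = queue.popleft()
            comp += 1
            for w in pts:                            # do the balls overlap?
                if w not in seen and \
                   abs(w[0] - z[0]) + abs(w[1] - z[1]) <= R[z] + R[w]:
                    seen.add(w)
                    queue.append(w)
        sizes.append(comp)
    return sorted(sizes)[-3:]

print(largest_clusters(p=0.01))  # small p: only small clusters
print(largest_clusters(p=0.20))  # larger p: one cluster dominates the box
\end{verbatim}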
\noindent {\bf{Phase transition}}. Consider the Bernoulli Boolean discrete percolation model introduced above. Then replace the random radii in this model by the deterministic radius $0$. What
we get is the independent site percolation model. It is well known (see Grimmett \cite{Grimmett}, page 25) that the critical parameter for this last model is a positive number
$p_c^{\mbox{site}}(\Z^d)<1$. Then, a coupling argument shows that for any $p>p_c^{\mbox{site}}(\Z^d)$ there is percolation for the discrete Boolean model. Thus we focus our attention
on the subcritical regime.
\medskip
Now we state the main result of this work.
\bteo
Let $(\cX,\cR)$ be a spatially homogeneous marked point process on $\Z^d$ with retention parameter $p$ and marks distributed according to a probability function $\nu$. If $\sum_{r\geq 1}r^d\nu(r)<\infty$, then there exists $p_0>0$ such that $\P_{p,\,\nu}($Percolation$)=0$ for all $p\leq p_0$.
\label{coupI}
\eteo
Indeed, a similar result holds if we only assume that the values of $p_x$ are uniformly bounded and the random radii are independent, but not necessarily identically distributed.
\bteo
\label{Rind}
Let $(\cX,\cR)$ be a marked point process on $\Z^d$ with retention parameters $\np=(p_x:x\in\Z^d)$, and random marks $R_x$, $x\in\Z^d$ distributed according to a family of probability functions $\mathbf{n}=(\nu_x:x\in\Z^d)$. If
\begin{equation}
\label{StD}
\lim_{r\to\infty}\inf_{x\in\Z^d}\P_{\np,\,\nn}(R_x \leq r)=1
\end{equation}
and
\begin{equation}
\label{coup}
\sum_{r}r^d\left(\inf_{x\in\Z^d}\P_{\np,\,\nn}(R_x\leq r)-\inf_{x\in\Z^d}\P_{\np,\,\nn}(R_x\leq r-1)\right)<\infty,
\end{equation}
then there exists $p_0>0$ such that $\P_{\np,\,\nn}($Percolation$)=0$ for any family of retention parameters $\np=(p_x:x\in\Z^d)$ such that $\sup_{x\in\Z^d}p_x\leq p_0$.
\eteo
\brem
We claim that hypothesis (\ref{StD}) in Theorem \ref{Rind} above is equivalent to assuming the existence of a random variable $R$ such that each random variable in $\mathcal{R}$ is stochastically dominated by $R$. Indeed, let $U$ be a uniform random variable on $[0,1]$. Then define $R$ as follows:
\begin{equation}
\label{VDM}
R=\sum_{r\geq 1}r\cdot\one\left\{\inf_{x\in\Z^d}\P_{\np,\,\nn}\left(R_{x}\leq r-1\right) <U \leq \inf_{x\in\Z^d} \P_{\np,\,\nn}\left(R_{x} \leq r \right) \right\}.
\end{equation}
We readily verify that $R$ is a random variable and that each random variable in $\mathcal{R}$ is stochastically dominated by $R$. By hypothesis (\ref{coup}) we have $\E[R^d]<\infty$. Using this stochastic domination we may construct, on a common probability space, two marked point processes. For that purpose, let $(U_x:\,x\in\Z^d)$ be a family of i.i.d. random variables, each one uniformly distributed on $[0,1]$. Then set $\hat{R}_x=F_{R_x}^{-1}(U_x)$ and $\hat{R}^x=F_{R}^{-1}(U_x)$, where $F_{R_x}$ and $F_{R}$ are the cumulative distribution functions of $R_x$ and $R$, respectively. It follows that $\hat{R}_x\stackrel{D}{=}R_x$, $\hat{R}^x\stackrel{D}{=}R$ and $\hat{R}_x\leq \hat{R}^x$. Since $\hat{R}_x\leq \hat{R}^x$, we get $B(x,\hat{R}_x)\subset B(x,\hat{R}^x)$. Then $\left(\mathcal{X},\mathcal{R}_1\right)\preceq \left(\mathcal{X},\mathcal{R}_2\right)$, where $\cR_1=(\hat{R}_x:\, x\in\Z^d)$ and $\cR_2=(\hat{R}^x:\, x\in\Z^d)$. Since $(\cX,\cR)\stackrel{D}{=}(\cX,\cR_1)$ and $\E[R^d]<\infty$, Theorem \ref{Rind} is a simple consequence of Theorem \ref{coupI}.
\erem
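The coupling in the remark above is elementary to implement. The following Python sketch (ours; the geometric family of laws is just a placeholder) samples each $R_x$ and the dominating variable $R$ of (\ref{VDM}) from one and the same uniform variable via generalized inverse distribution functions, exhibiting the coordinatewise domination.
\begin{verbatim}
# Sketch of the monotone coupling: one uniform, several quantile functions.
import numpy as np

def quantile(cdf, u):
    # generalized inverse: the smallest r >= 0 with F(r) >= u
    return int(np.searchsorted(cdf, u, side="left"))

rng = np.random.default_rng(3)
rs = np.arange(200)
params = [0.3, 0.4, 0.5]                             # assumed family of laws
cdfs = [1 - (1 - q) ** (rs + 1) for q in params]     # geometric CDFs on N_0
F = np.minimum.reduce(cdfs)                          # F(r) = inf_x F_x(r)

for _ in range(5):
    u = rng.random()
    print([quantile(c, u) for c in cdfs], "<=", quantile(F, u))
\end{verbatim}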
\beje
Observe that condition (\ref{coup}) in Theorem \ref{Rind} turns out to be slightly stronger than requiring $\sup_{x\in\Z^d}\E_{\np,\,\nn}[R_x^d]<\infty$. Note that if the
random radii $R_x, x\in\Z^d$ are i.i.d random variables, then condition (\ref{coup}) in Theorem \ref{Rind} becomes $\E_{\np,\,\nn}[R^d]<\infty$, where $R$ is a random
variable distributed as $R_x$ for some $x\in\Z^d$. The example below shows that it is possible to construct a sequence of random variables $(R_n)_{n\in\N}$ in a common
probability space such that
\begin{eqnarray}
\sup_{n\in\N}\E[R_n]<\infty \mbox{ with } \E[R]=\infty,
\end{eqnarray}
\noindent where $R$ is a random variable such that $\P(R\leq r)=\inf_{n\in\N}\P(R_n\leq r)$ and $\E$ is the corresponding expectation operator.
Let $(R_n)_{n\geq 2}$ be a sequence of random variables with distribution function
\begin{eqnarray*}
F_{n}(x)
&=&
\left(1-\frac{3}{4n}\right)\one\{0\leq x<1\}\\
&+&
\left(\frac{1}{4n(n-1)}(x-1)+1-\frac{3}{4n}\right)\one\{1\leq x<n\}\\
&+&
\left(1-\frac{1}{2n}\right)\one\{n\leq x<n+1\}\\
&+&
\one\{n+1\leq x\}.
\end{eqnarray*}
We readily check that
\begin{eqnarray*}
\sup_{n\geq 2}\E[R_n]<\infty.
\end{eqnarray*}
The distribution function $F(x)=\inf_{n\in\N}F_n(x)$ is given by
\begin{eqnarray*}
F(x)=\sum_{n\geq 2}\left(1-\frac{1}{2n}\right)\one\{n\leq x<n+1\}.
\end{eqnarray*}
Finally, note that if $R$ is a random variable with distribution function $F$ as above, then
\[
\E[R]\geq \sum_{n \geq 2}\frac{1}{2n}=+\infty.
\]
This example shows that, with our techniques, the hypothesis in Theorem \ref{Rind} cannot be weakened.
\eeje
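These computations are easy to check numerically. The Python sketch below (ours, illustrative only) evaluates $\E[R_n]=\int_0^\infty(1-F_n(x))\,dx$ by a Riemann sum for several values of $n$, and prints a partial sum of $\sum_{n\geq 2}\frac{1}{2n}$, which diverges logarithmically.
\begin{verbatim}
# Numerical check: E[R_n] stays bounded while the series for E[R] diverges.
import numpy as np

def tail_n(x, n):            # 1 - F_n(x), vectorized over the grid x
    t = np.zeros_like(x)
    t[x < n + 1] = 1 / (2 * n)
    m = x < n
    t[m] = 3 / (4 * n) - (x[m] - 1) / (4 * n * (n - 1))
    t[x < 1] = 3 / (4 * n)
    return t

xs = np.linspace(0, 1200, 120001)
for n in [10, 100, 1000]:
    t = tail_n(xs, n)
    print(n, np.sum(t[:-1] * np.diff(xs)))   # E[R_n]: about 5/8 for large n
print(sum(1 / (2 * n) for n in range(2, 100001)))  # grows like (log N)/2
\end{verbatim}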
\subsection*{Complete Coverage}
We complement the result of Theorem \ref{coupI} by establishing a sufficient condition for complete coverage of the space $\Z^d$. For
any $A\subset\Z^d$, define $\Lambda(A)=\bigcup_{x\in A\cap\cP}B(x,R_x)$.
\bteo
\label{Riidinf}
Let $(\cX,\cR)$ be a spatially homogeneous marked point process on $\Z^d$ with retention parameter $p$ and marks distributed
according to the probability function $\nu$. If $\sum_{r\geq 1}r^d\nu(r)=\infty$, then for any $p\in(0,1]$, almost surely $\Lambda(\mathbb{Z}^d)=\Z^d$.
\eteo
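As a finite-volume caricature of this statement (the theorem itself concerns the full lattice), one can estimate the probability that the origin is covered by a ball centred at some present site of a large box. In the Python sketch below (ours; the tail $\P(R\geq r)=r^{-2}$ in dimension $d=2$ is an arbitrary choice with $\sum_{r} r^d\nu(r)=\infty$), this probability increases with the box size, in agreement with the theorem.
\begin{verbatim}
# Estimate of the probability that the origin is covered by some ball
# centred at a present site of the box [-L, L]^2.
import numpy as np

def origin_covered(L, p, rng):
    mask = rng.random((2 * L + 1, 2 * L + 1)) < p           # present sites
    xs, ys = np.nonzero(mask)
    dist = np.abs(xs - L) + np.abs(ys - L)                  # L1 distance to 0
    radii = np.floor(1.0 / np.sqrt(rng.random(dist.size)))  # P(R >= r) = r^-2
    return bool(np.any(radii >= dist))

rng = np.random.default_rng(0)
for L in [10, 50, 250]:
    hits = sum(origin_covered(L, 0.1, rng) for _ in range(200))
    print(L, hits / 200)   # estimated coverage probability increases with L
\end{verbatim}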
\subsection{Particle systems with interactions of infinite range}
Let $S$ be a finite (or countable) set and let $S^{\Z^d}$ be the set of mappings $\sigma:\Z^d\to S$. Give $S$ the discrete topology and $S^{\Z^d}$ the product topology. The measurable
sets of $S^{\Z^d}$ are the Borel sets. The elements of $S$ are called \emph{spins} or \emph{particles}. $S^{\Z^d}$ is called the configuration space and its elements are in general
written as $\sigma$, $\eta$, $\xi\dots$. For each $x\in\Z^d$, $\sigma(x)$ denotes the spin value of configuration $\sigma$ at site $x$. For each $A\subset\Z^d$, $\sigma(A)\in S^A$
denotes the restriction of configuration $\sigma$ to $A$.
A particle system with interactions of infinite range is a Markov process on $S^{\Z^d}$ whose generator is defined on cylinder functions by
\begin{eqnarray}
\label{IG}
Lf(\sigma)
&=&\sum_{x\in\Z^d}\sum_{s\in S}c_x(s,\sigma)\left[f(\sigma_{x,s}) - f(\sigma)\right],
\end{eqnarray}
where $\sigma_{x,s} \in S^{\mathbb{Z}^d}$ is defined by $\sigma_{x,s}(x)=s$, $\sigma_{x,s}(y)=\sigma(y)$ if $y\neq x$. Here $c_{x}(s,\sigma)>0$ is the intensity for a jump
$\sigma\to\sigma_{x,s}$ and it depends on $x$ and the whole spin configuration $\sigma$.
\paragraph{Kalikow-type decomposition.} We assume that the following \emph{Kalikow-type decomposition} for the jump intensities holds:
\begin{eqnarray}
\label{rates}
c_x(s,\sigma)=M_xp_{x}(s|\sigma),
\end{eqnarray}
where $M_x>0$ and
\begin{eqnarray}
\label{CP}
p_{x}(s|\sigma)=\sum_{r\geq 0}\nu_x(r)p_x^{[r]}(s|\sigma).
\end{eqnarray}
Here, $\nu_x(\cdot)$ is a probability function on $\N_0$ and $p_x^{[r]}(\cdot|\sigma)$ is a probability function on $S$ which depends on $\sigma$ only through $\{\sigma(y):\, y\in B(x,r)\}$. For further details on this kind of decomposition see \cite{kalikow} and \cite{bramson-kalikow}.
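A minimal sketch of one such update step in Python follows (ours; the geometric range law, the finite-volume truncation and the majority-like local kernels are placeholders, chosen only to make the decomposition concrete).
\begin{verbatim}
# One spin update under the Kalikow-type decomposition: draw a range r
# from nu_x, then a spin from the r-local kernel p_x^{[r]}( . | sigma).
import numpy as np

rng = np.random.default_rng(7)
L = 50                                          # finite-volume truncation
sigma = {y: int(rng.integers(0, 2)) for y in range(-L, L + 1)}

def p_local(x, r, sigma):
    # placeholder kernel on S = {0, 1}: a majority-like rule on B(x, r)
    ball = range(max(-L, x - r), min(L, x + r) + 1)
    mean = np.mean([sigma[y] for y in ball])
    return [1.0 - mean, mean]                   # law of the new spin at x

def update(x, sigma):
    r = int(rng.geometric(0.5)) - 1             # nu_x(r) = (1/2)^(r+1)
    return int(rng.choice([0, 1], p=p_local(x, r, sigma)))

sigma[0] = update(0, sigma)
print(sigma[0])
\end{verbatim}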
Here and for the rest of the paper we will assume that
\begin{equation}
\label{supinf}
0<M_*:=\inf_{x\in\Z^d}M_x\leq M^*:=\sup_{x\in\Z^d}M_x<\infty
\end{equation}
and
\begin{equation}
\label{momentcondition}
\sum_{r\geq 1}r^d\sup_{x\in\Z^d}\nu_x(r)<\infty.
\end{equation}
Now we can state the result about existence of interacting particle systems with interactions of infinite range.
\bteo
\label{Harrisconstruction}
Let $\{c_x(s,\sigma):\, x\in\Z^d,\, s\in S,\, \sigma\in S^{\Z^d}\}$ be a family of jump intensities satisfying the Kalikow-type decomposition described in \reff{rates} and \reff{CP}. Let assumptions \reff{supinf} and \reff{momentcondition} hold. Then, for each initial spin configuration $\eta$, there exists an almost surely unique interacting particle system $(\sigma_t^{\eta})_{t\geq 0}$ with generator
\begin{eqnarray}
\label{2.12}
Lf(\sigma)&=&\sum_{x\in\Z^d}\sum_{s\in S}\sum_{r \geq 0}M_x\nu_x(r)p_x^{[r]}(s|\sigma)\left[f(\sigma_{x,s}) - f(\sigma)\right].
\end{eqnarray}
\eteo
In the rest of this section we present the sketch of the proof of Theorem \ref{Harrisconstruction}, based on an extension, to the case of interactions of infinite range, of ideas developed by Harris for finite range interactions. See \cite{harris} for further details.
\subsection*{Harris graphical construction.} The probability space where the Markov processes $(\sigma^{\eta}_t: \,t\geq 0)$ will be constructed is the space generated by
a family $(\cT,\cK,\cU)=\{(\cT_x, \cK_x, \cU_x):\,x\in\Z^d\}$ of mutually independent marked Poisson point processes on the time line $[0,\infty)$. For each $x\in\Z^d$,
the Poisson process $\cT_x=(T_{x,n}:\,n\in\N)$ is homogeneous with rate $M_x$, $\cK_x=(K_{x,n}:\,n\in\N)$ is a sequence of i.i.d. random variables with common law
$\nu_x$ on $\mathbb{N}_0$, and $\cU_x=(U_{x,n}:\,n\in\N)$ is a sequence of i.i.d. uniform random variables on $[0,1]$. Moreover, for each $x \in \mathbb{Z}^d$, the processes $\cT_x$, $\cK_x$ and $\cU_x$ are mutually independent.
For each $\eta\in S^{\Z^d}$, we construct a process $(\sigma^{\eta}_t:\,t\geq 0)$ with generator \reff{2.12} and initial condition $\eta$ at time $0$ as a function of the
family $(\cT,\cK, \cU)$. Roughly speaking, the process $(\sigma^{\eta}_t:\,t\geq 0)$ is constructed as follows. Initially, $\sigma^{\eta}_0:=\eta$. Then, at the time epoch
$t\in\cT_x$, the spin value at site $x$ is updated in the following way: if $t=T_{x,n}$, then sample the range of interaction using the random variable
$K_{x,n}\in\cK_x$. If $K_{x,n}=r$, then the spin value at site $x$ is updated by a random variable $W_{x}(\sigma^{\eta}_{t-})$ with law
$p_{x}^{[r]}(\cdot|\sigma^{\eta}_{t-})$:
\begin{eqnarray}
\label{GSa}
\sigma^{\eta}_t=\sigma^{\eta}_{x,W_{x}(\sigma^{\eta}_{t-})}.
\end{eqnarray}
The random variable $W_{x}(\sigma^{\eta}_{t-})$ is constructed as a function of the uniform random variable $U_{x,n}\in\cU_x$.
Since there are infinitely many Poisson processes, the main difficulty in the construction described above is that, in general, there will be infinitely many jumps in each interval of time.
The key to the Harris graphical construction \cite{harris} is to show that during a certain interval of time $[0,t_0]$, $\Z^d$ can be partitioned into a countable number of finite
random sets, called islands, with no interaction between islands. For this purpose we introduce a family of random graphs containing all the information concerning the interactions needed in each interval of time $[\tau,t]$.
\paragraph{Harris random graph.} Fix $t > 0$. For each $0\leq \tau\leq t$, let $\cG_{\tau,t}=(\Z^d,\cE_{\tau,t})$ be the undirected random graph with vertex set $\Z^d$ and edge set $\cE_{\tau,t}$ defined by $\{x,y\}\in\cE_{\tau,t}$ if, and only if, $(\cT_x\cup\cT_y)\cap(\tau, t]\neq\emptyset$ and if there exists $t'\in(\cT_x\cup\cT_y)\cap (\tau,t]$ such that: (i) $y\in B(x,K_{x,n})$ if $t'=T_{x,n}$ or (ii) $x\in B(y, K_{y,m})$ if $t'=T_{y,m}$.
Note that the presence of an edge $\{x,y\}\in\cE_{\tau,t}$ indicates that a Poisson epoch has caused $x$ to look at $y$ in order to figure out how to update its spin value, or, has
caused $y$ to look at $x$ in order to figure out how to update its spin value. Conversely, if there is no edge between $x$ and $y$, then neither of them has looked at the other. The last observation implies that sites in different
components of the resulting random graph do not influence each other during the time interval $(\tau,t]$. Hence their evolutions can be computed separately.
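The following Python sketch (ours; one spatial dimension with placeholder geometric range marks) builds the random graph $\cG_{0,t}$ just described on a finite interval and reports the size of its largest connected component, illustrating the island picture: for small $t$ the islands are small, while for large $t$ a single island spans the whole box.
\begin{verbatim}
# Sketch of the Harris random graph G_{0,t} on a finite interval of Z.
import numpy as np
from collections import deque

def largest_island(L=200, t=0.05, M=1.0, q=0.5, seed=0):
    rng = np.random.default_rng(seed)
    sites = range(-L, L + 1)
    edges = {x: set() for x in sites}
    for x in sites:
        for _ in range(rng.poisson(M * t)):    # Poisson epochs in (0, t]
            K = int(rng.geometric(q)) - 1      # range mark, P(K=r)=q(1-q)^r
            for y in range(max(-L, x - K), min(L, x + K) + 1):
                if y != x:
                    edges[x].add(y)
                    edges[y].add(x)
    seen, best = set(), 0
    for s in sites:                            # component sizes via BFS
        if s in seen:
            continue
        seen.add(s)
        comp, queue = 0, deque([s])
        while queue:
            z = queue.popleft()
            comp += 1
            for w in edges[z]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, comp)
    return best

print(largest_island(t=0.05))  # small t: islands stay small
print(largest_island(t=5.0))   # large t: one island spans the box
\end{verbatim}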
In Section \ref{PoTP} we give the proof of the following result.
\bteo
\label{percolation}
Let assumptions (\ref{supinf}) and (\ref{momentcondition}) hold. Then, there exists $t_0>0$ such that for any $t\leq t_0$, the connected components of the Harris random graph $\cG_{0,t}$ are, almost surely, finite.
\eteo
Using Theorem \ref{percolation} we can show that during the time interval $[0,t_0]$, where $t_0 >0$ is deterministic and small enough, $\Z^d$ can be partitioned into a
countable number of finite islands, with no interaction between them: $\Z^d=\cup_{\ell\in\N}C_{\ell}$, where for each $\ell\in\N, C_{\ell}$ is a finite set which is itself
a deterministic function of the family of Poisson processes on $[0,t_0]$ and the marks $\cK$. The collection $(C_{\ell}, \ell \in \N)$ has the additional property that
$C_{\ell_1}\cap C_{\ell_2}=\emptyset$ for every $\ell_1\neq\ell_2$. During the time interval $[0,t_0]$ the process is constructed separately in each region $C_{\ell}$
independently of everything else. The details of this construction are given in subsection \ref{ApHGC}.
\section{Proof of Theorem \ref{coupI}}
\label{mainp}
The proof of Theorem \ref{coupI} will be divided into two steps. In the first step we introduce two families of events, $G(x,r)$ and $H(r)$, in order to study the diameter of the
cluster $C(0)$. The family of events $G(x,r)$ is helpful to understand the behavior of the cluster $C(0)$ on the subgraph of $\cG(\cX,\cR)$ induced by the point process on $B(0,10r)$.
The family of events $H(r)$ provides a way to take care of the influence of the point process $(\cX,\cR)$ from the exterior of the ball $B(0,r)$. Our aim in this step is to show
that the probability of the percolation event can be controlled by the probabilities of the events $G(0,r)$. In the second step we will show that if the radii are not too
large, then the occurrence of the event $G(0,r_1)$ implies the occurrence of two independent events $G(x,r_2)$ and $G(x',r_2)$ where $r_1=10r_2$. Our aim in this step is to show
that the probability of the events $G(0,r_1)$ can be bounded by the square of the probability of the events $G(0,r_2)$ plus a quantity that goes to zero when $r_1$ goes to
infinity. This provides a way to take care of the probabilities of the events $G(0,r)$ that allows us to show that for $p$ small enough, $\P_{p,\,\nu}(G(0,r))$ goes to zero when
$r$ goes to infinity.
\subsection{Controlling the diameter of the cluster of the origin}
For each $x\in\Z^d$, let $D_x=\inf\{r\geq 0:\, C(x)\subset B(x,r)\}$. The percolation event is equivalent to the event $\bigcup_{x\in\Z^d}\{D_x=\infty\}$. By translation invariance of
spatially homogeneous marked point processes, the probability of the events $\{D_x>r\}$ does not depend on $x$. Therefore, the proof of Theorem \ref{coupI} reduces to showing the
existence of $p_0>0$ such that $\lim_{r\to\infty}\P_{p,\,\nu}(D>r)=0$ for all $p<p_0$, where $D$ is the random variable $D_x$ at the origin.
We define two families of events to study the diameter of the cluster $C(0)$.
\paragraph{The family of events $G(x,r)$.} Let $B$ be a subset of $\Z^d$. Denote by $\cG[B]$ the subgraph of $\cG(\cX,\cR)$ induced by $B$. Let $A$ be a non-empty subset of $\Z^d$
contained in $B$ and let $x\in A$. We say that $x$ is \emph{disconnected from the exterior of $A$ inside $B$} if the connected component of $\cG[B]$ containing $x$ is contained
in $A$. Now we introduce the events $G(x,r)$. Let $x\in\Z^d$ and $r\in\N$; we say that $G(x,r)$ does not occur if $x$ is disconnected from the exterior of $B(x,8r)$ inside $B(x,10r)$.
\paragraph{The family of events $H(r)$.} For each $r\in\N$, define
\begin{eqnarray}
\label{hr}
H(r)=\left\{\exists\,x\in \cP\cap B(0,10r)^c:\, R_x>\frac{\|x\|}{10}\right\}.
\end{eqnarray}
The relation between the diameter of the cluster at the origin and the families of events defined above is established in the following lemma.
\blem
\label{GHM}
The following assertion holds for all $r\in\N$:
\begin{eqnarray}
\label{Mgrande2}
G(0,r)^c\cap H(r)^c\subset\left\{D\leq 8r\right\}.
\end{eqnarray}
\elem
\paragraph{Proof of Lemma \ref{GHM}.}
If the event $H(r)$ does not occur, then there are no sites of the point process with norm greater than $10r$ connected to $B(0,9r)$. Indeed, assume that $H(r)$ does not occur. Then
for every $x\in\cP\cap B(0,10r)^c$ we have $\|x\|-R_x\geq \frac{9}{10}\|x\|>9r$. Using the triangle inequality it is easy to see that $\|y\|\geq \|x\|-R_x>9r$ for all
$y\in B(x,R_x)$. If $G(0,r)$ does not occur, then $0$ is isolated from the exterior of $B(0,8r)$. If, in addition, the event $H(r)$ does not occur, then the balls
$B(x,R_x)$ with $x\in\cP\cap B(0,10r)^c$ do not help to connect the origin to the complement of $B(0,8r)$. Thus $D\leq 8r$. \hfill\square
\medskip
From \reff{Mgrande2} we get
\begin{eqnarray}
\label{Mgrande3}
\P_{p,\,\nu}(D>8r)\leq \P_{p,\,\nu}(G(0,r))+\P_{p,\,\nu}(H(r)).
\end{eqnarray}
Notice that $\lim_{r\to\infty}\P_{p,\,\nu}(H(r))=0$ for all $p\in(0,1)$. Indeed, $H(r+1)\subset H(r)$ for all $r\in\N$, so that $\lim_{r\to\infty}\P_{p,\,\nu}(H(r))=\P_{p,\,\nu}\big(\bigcap_{r\in\N}H(r)\big)$, and the latter probability vanishes by the Borel-Cantelli lemma, since $\sum_{x\in\Z^d}\P_{p,\,\nu}(x\in\cP,\,R_x>\|x\|/10)<\infty$ under the moment assumption $\E_{p,\,\nu}[R^d]<\infty$.
\subsection{Controlling the probabilities of the events $G(0,r)$}
To take care of the probabilities $\P_{p,\,\nu}(G(0,r))$ we introduce another family of events.
\paragraph{The family of events $\tilde{H}(r)$.} For each $r\in\N$, we define
\begin{eqnarray}
\label{hrt}
\tilde{H}(r)=\{\exists\,x\in\cP\cap B(0,100r):\,R_x \geq r\}.
\end{eqnarray}
\blem The following inclusion holds for all $r\in\N$:
\label{Escala2}
\begin{eqnarray}
\label{escala}
G(0,10rd)\cap\tilde{H}(rd)^c&\subset&\left(\bigcup_{x\in S_{10d}}G(rx,rd)\right)\cap\left(\bigcup_{x\in S_{80d}}G(rx,rd)\right).
\end{eqnarray}
\elem
\paragraph{Proof of Lemma \ref{Escala2}.}
\label{Geom3}
Fix $r\in\N$. First, assume that the event $G(0,10rd)$ occurs but the event $\tilde{H}(rd)$ does not occur. Since $G(0,10rd)$ occurs we can go from the origin to the complement of the
ball $B(0,80rd)$ just using balls $B(x,R_x)$ centred at points from $\cP\cap B(0,100rd)$. In this way, we can go from the sphere $S_{10rd}$ to the sphere $S_{80rd}$. One of these balls,
say $B(x_*, R_{x_*})$, touches $S_{10rd}$. Since the sphere $S_{10rd}$ is a subset of $\cup_{x \in S_{10d}}B(rx,rd)$ (see Proposition \ref{geozd} in the Appendix), we get that
this ball touches a ball of the form $B(rk, rd)$ for some $k\in S_{10d}$.
Now we shall prove that, for this $k$, the event $G(rk,rd)$ occurs. It is easy to see that we can go from $B(rk, rd)$ to the complement of $B(rk, 8rd)$ just using balls of the form
$B(x,R_x)$ centred at points from $\cP\cap B(0,100rd)$. Since $\tilde{H}(rd)$ does not occur, the radius of any such ball is less than $rd$. Then we can go from $B(rk,rd)$
to the complement of $B(rk,8rd)$ just using balls of the form $B(x,R_x)$ centred at points from $\cP\cap B(rk,10rd)$. In other words, the event $G(rk, rd)$ occurs. Then, the event
$\bigcup_{x\in S_{10d}}G(rx,rd)$ does occur. The proof that the event $\bigcup_{x\in S_{80d}}G(rx,rd)$ does occur follows along the same lines. \hfill\square
\medskip
The event on the right side of \reff{escala} is the intersection of two events. The first depends on what happens inside $B(0,20rd)$. The other event only depends on what happens in
the region $B(0,70rd)^c$. Then, these two events are independent. By translation invariance of spatially homogeneous marked point processes we get
\begin{eqnarray}
\label{Go8bis}
\P_{p,\,\nu}(G(0,10rd))\leq |S_{10d}||S_{80d}|\P_{p,\,\nu}(G(0,rd))^2+\P_{p,\,\nu}(\tilde{H}(rd)).
\end{eqnarray}
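In more detail, write $A=\bigcup_{x\in S_{10d}}G(rx,rd)$ and $B=\bigcup_{x\in S_{80d}}G(rx,rd)$. The inclusion \reff{escala} gives $\P_{p,\,\nu}(G(0,10rd))\leq\P_{p,\,\nu}(A\cap B)+\P_{p,\,\nu}(\tilde{H}(rd))$, independence gives $\P_{p,\,\nu}(A\cap B)=\P_{p,\,\nu}(A)\,\P_{p,\,\nu}(B)$, and the union bound together with translation invariance gives $\P_{p,\,\nu}(A)\leq|S_{10d}|\,\P_{p,\,\nu}(G(0,rd))$ and $\P_{p,\,\nu}(B)\leq|S_{80d}|\,\P_{p,\,\nu}(G(0,rd))$, which yields \reff{Go8bis}.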
\blem
\label{Cotas}
There exist positive constants $C_2$ and $C_3$, which depend only on the dimension $d$, such that, for any $r\in\N$, the following inequalities hold:
\begin{eqnarray}
\label{CF1b}
\P_{p,\,\nu}(G(0,r)) &\leq& p\, C_2r^d,\\
\label{CG1b}
\P_{p,\,\nu}(\tilde{H}(r))&\leq& p\,C_3\E_{p,\,\nu}\left[R^d\one\{R\geq r\}\right].
\end{eqnarray}
\elem
\paragraph{Proof of Lemma \ref{Cotas}.}
It is a simple geometric fact that there exists a positive constant $C$ which depends only on the dimension $d$ such that $|B(0,r)|\leq Cr^d$.
Let $r\in\N$. A simple computation shows that
\begin{eqnarray}
\P_{p,\,\nu}(G(0,r))
&\leq& \P_{p,\,\nu}(\exists\,x\in\cP\cap B(0,10r))\nonumber\\
&\leq& p\,|B(0,10r)|.
\end{eqnarray}
The inequality \reff{CF1b} is satisfied with $C_2=10^dC$.
To show \reff{CG1b} we note that $\tilde{H}(r)=\{X\geq 1\}$, where $X$ is the random variable defined by
\[X=\sum_{x\in B(0,100r)}\one\{x\in\cP\}\one\{R_x\geq r\}.\]
We have
\begin{eqnarray*}
\P_{p,\,\nu}(\tilde{H}(r))&\leq& \E_{p,\,\nu}\left[X\right] \nonumber \\
&=&\sum_{x\in B(0,100r)}p\,\P_{p,\,\nu}(R_x\geq r)\nonumber \\
&=& p\, \left|B(0,100r)\right|\P_{p,\,\nu}(R\geq r)\nonumber \\
&\leq& p\,C_3\E_{p,\,\nu}[R^d\one\{R\geq r\}],
\end{eqnarray*}
where $C_3=100^dC$. The first equality follows from the independence between $\cP$ and $\cR$ and the second equality follows from the fact that the random variables $(R_x, x\in\Z^d)$
are identically distributed. \hfill\square
\subsection{Proof of Theorem \ref{coupI}}
By \reff{Mgrande3}, the proof of Theorem \ref{coupI} is reduced to show the existence of $p_0>0$ such that there exists an increasing sequence $(r_n)_{n\in\N}\subset \N$ with
$\lim_{n\to\infty}\P_{p,\,\nu}(G(0,r_n))=0$ for any $p<p_0$. For this reason we need the following lemma.
\blem
\label{FG0}
Let $f$ and $g$ be two functions from $\N$ to $\R_+$ satisfying the following conditions: (i) $f(r)\leq 1/2$ for all $r\in \{1,\dots, 10\}$; (ii) $g(r)\leq 1/4$ for all $r\in\N$; (iii) for all $r\in\N$:
\begin{eqnarray}
\label{FG1}
f(10r)\leq f^2(r)+g(r).
\end{eqnarray}
If $\lim_{r\to\infty}g(r)=0$, then $\lim_{n\to\infty}f(10^nr)=0$ for each $r\in\{1,\dots, 10\}$.
\elem
\paragraph{Proof of Lemma \ref{FG0}.} For each $n\in\N$, let $F_n=\max_{1\leq r\leq 10}f(10^nr)$ and let $G_n=\max_{1\leq r \leq 10}g(10^nr)$. Using \reff{FG1} and hypothesis (i) and (ii) we may conclude, by means of the induction principle that, for each $n\in\N$, $F_n\leq 1/2$ and
\begin{eqnarray}
\label{CotaIndb}
F_n \leq \frac{1}{2^{n+1}}+\displaystyle \sum_{j=0}^{n-1}\frac{1}{2^j}G_{n-1-j}.
\end{eqnarray}
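Indeed, taking maxima over $r\in\{1,\dots,10\}$ in \reff{FG1} gives $F_{n+1}\leq F_n^2+G_n$; if $F_n\leq 1/2$, then hypothesis (ii) yields $F_{n+1}\leq 1/4+1/4=1/2$ and, moreover, $F_{n+1}\leq\frac{1}{2}F_n+G_n$. Iterating the latter bound down to $F_0\leq 1/2$, which holds by hypothesis (i), gives \reff{CotaIndb}.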
Since $g(10^nr)$ goes to zero as $n\to\infty$ we have that $G_n\to 0$ when $n\to\infty$. By \reff{CotaIndb}, we obtain that $F_n\to 0$ when $n\to\infty$. \hfill\square
\medskip
Consider the functions $f(r)=C_1\P_{p,\,\nu}(G(0,rd))$ and $g(r)=C_1\P_{p,\,\nu}(\tilde{H}(rd))$, where $C_1=|S_{10d}||S_{80d}|$. By \reff{Go8bis}, it follows that
\begin{eqnarray}
\label{Go8tis}
f(10r)\leq f^2(r)+g(r).
\end{eqnarray}
By condition $\E_{p,\nu}[R^d]=\sum_{r\geq 1}r^d\nu(r)<\infty$ and \reff{CG1b}, we have that $\lim_{r\to\infty}g(r)=0$ for any $p$.
We show that there exists $p_0>0$ such that if $p<p_0$ then $f(r)\leq 1/2$, $1\leq r\leq 10$ and $g(r)\leq 1/4$, $r\in\N$.
Set
\[
p_0=\min((2C_1C_2(10d)^d)^{-1}, (4C_1C_3\E_{p,\,\nu}[R^d])^{-1}).
\]
By condition $\E_{p,\,\nu}[R^d]<\infty$, we get $p_0>0$.
Let $p>0$ be such that $p\leq p_0$. It follows from \reff{CF1b} that
\[
f(r)\leq \frac{1}{2}\left(\frac{r}{10}\right)^d.
\]
Thus we have that if $0<p\leq p_0$, then $\max_{1\leq r\leq 10}f(r)\leq 1/2$.
By \reff{CG1b}, we get
\[
g(r)\leq \frac{1}{4}.
\]
Finally, by Lemma \ref{FG0}, we may conclude that $\lim_{n\to\infty}f(10^nr)=0$ for each $r\in\{1,\dots, 10\}$. In particular,
\[\lim_{n\to\infty}f(10^n)=\lim_{n\to\infty}C_1\P_{p,\,\nu}(G(0, 10^nd))=0.\]
\hfill\square
\bigskip
We finish this section by proving the complete coverage of $\Z^d$ under the assumption $\E_{p,\,\nu}[R^d]=\infty$.
\subsection{Proof of Theorem \ref{Riidinf}}
We prove the equivalent statement that, for all $r\in\N$, the following assertion holds:
\[\P_{p,\,\nu}(\exists\, x\in\cP: B(0,r)\subset B(x,R_x))=1.\]
If $R_x>\|x\|+r$, then $B(0,r)\subset B(x,R_x)$. Hence,
\begin{eqnarray*}
\P_{p,\,\nu}(\exists\, x\in\cP: B(0,r)\subset B(x,R_x))\geq\P_{p,\,\nu}(\exists\, x\in\cP: R_x>\|x\|+r).
\end{eqnarray*}
Let $A_k$ be the event defined by $A_k=\{\exists x\in\cP\cap S_k: \, R_x>k+r\}$. The events $A_k$ are independent, and
\[
\P_{p,\,\nu}(A_k)=1-\left(1-p\,\P_{p,\,\nu}(R>k+r)\right)^{|S_k|}\geq 1-\exp\left(-p\,|S_k|\,\P_{p,\,\nu}(R>k+r)\right).
\]
Since $1-e^{-v}\geq v/2$ for $v\in[0,1]$ and $1-e^{-v}\geq 1-e^{-1}$ for $v\geq 1$, the series $\sum_{k\geq 0}\P_{p,\,\nu}(A_k)$ diverges whenever the series
\begin{eqnarray}
\label{ineq}
p\sum_{k\geq 0}|S_k|\P_{p,\,\nu}(R>k+r)&=&p\sum_{k\geq 0}|B_k|\P_{p,\,\nu}(R=k+r+1)
\end{eqnarray}
does.
Since $\E_{p,\,\nu}[R^d]=\infty$, the series on the right hand side of \reff{ineq} diverges. By the second Borel-Cantelli lemma (see Durrett \cite{durrett2}, page 50),
we have that $\P_{p,\,\nu}(A_k$ i.o.$)=1$. \hfill\square
\section{Proof of Theorem \ref{percolation}}
\label{PoTP}
The proof of Theorem \ref{percolation} falls naturally into two steps. In the first step, we construct a family of marked point processes $(\cX_t,\cR_t)$ on $\Z^d$ such that the random graphs $\cG(\cX_t,\cR_t)$ and $\cG_{0,t}$ have the same distribution. In the second step, using Theorem \ref{coupI}, we show that for $t$ small enough the connected components of the random graph $\cG(\cX_t,\cR_t)$ are, almost surely, finite. For the sake of clarity, each step is divided into a sequence of lemmas.
\bigskip
In order to prove Theorem \ref{percolation}, we need to introduce some notation. For each $x\in\Z^d$, $r\in \{-1,0,1,2,\dots\}$ and $t>0$, let
\[
N_{x,r}(t):=\sum_{n\geq 1}\one\{T_{x,n}\leq t\}\one\{K_{x,n}>r\}.
\]
$N_{x,r}(t)$ is nothing but the number of occurrences of the marked Poisson process $(\cT_x, \cK_x)$ during the time interval $(0,t]$ whose marks are greater than $r$. Notice that
$N_{x}(t):=N_{x,-1}(t)$ is the counting measure associated to the Poisson process $\cT_x$.
Let $(\cT,\cK)=\{(\cT_x, \cK_x):\,x\in\Z^d\}$ be a family of mutually independent marked Poisson point processes on the time line $[0,\infty)$. Let $\P_{(\cT,\cK)}$ and
$\E_{(\cT,\cK)}$ respectively be the probability measure and the expectation operator induced by $(\cT,\cK)$.
\begin{rem}
For each $x\in\Z^d$, let $M_x(t):=\sum_{n\geq 1}\max(K_{x,1},\dots, K_{x,n})\one\{N_x(t)=n\}$. It follows from the construction described above and the Coloring Theorem (see Kingman
\cite{Kingman}, page 52) that
\begin{eqnarray}
\label{M1}
\P_{(\cT,\cK)}(M_x(t)\leq r)=\P_{(\cT,\cK)}(N_{x,r}(t)=0)=\exp\left(-M_xtG_x(r)\right),
\end{eqnarray}
where $G_x(r)=\sum_{k>r}\nu_{x}(k)$.
Also, we have
\begin{eqnarray}
\label{M2}
\P_{(\cT,\cK)}(M_x(t)\leq r|N_{x}(t)\geq 1)=\frac{\exp\left(-M_xtG_{x}(r)\right)-\exp\left(-M_xt\right)}{1-\exp\left(-M_xt\right)}.
\end{eqnarray}
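Indeed, by the Coloring Theorem the epochs of $\cT_x$ carrying a mark greater than $r$ form a thinned Poisson process of rate $M_xG_x(r)$, so that $N_{x,r}(t)$ is a Poisson random variable with mean $M_xtG_x(r)$; this gives \reff{M1}. Relation \reff{M2} then follows from the identity $\P_{(\cT,\cK)}(M_x(t)\leq r,\,N_x(t)\geq 1)=\P_{(\cT,\cK)}(N_{x,r}(t)=0)-\P_{(\cT,\cK)}(N_x(t)=0)$.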
\end{rem}
\subsection{Simultaneous coupling construction: Step 1.}
Let $\cU_1=(U_{x,1}:\,x\in\Z^d)$ and $\cU_2=(U_{x,2}:\,x\in \Z^d)$ be two mutually independent families of independent uniform random variables on $(0,1]$. The probability space where
this coupling is performed is the one where the two families of uniform random variables are defined. We denote by $\P_{(\cU_1,\cU_2)}$ the probability measure induced by the families
of random variables $\cU_1$ and $\cU_2$.
For each $x\in\Z^d$ and $t > 0$, define
\begin{eqnarray}
\label{PP1}
X_{x,t}&:=&\one\{U_{x,1}\leq 1-\exp(-M_xt)\},\\
\label{MP1}
R_{x,t}&:=&F^{-1}_{x,t}(U_{x,2}),
\end{eqnarray}
where $F_{x,t}(r)$ is the cumulative distribution function given by
\begin{eqnarray}
\label{DD1bis}
F_{x,t}(r)=\frac{\exp\left(-M_xtG_{x}(r)\right)-\exp\left(-M_xt\right)}{1-\exp\left(-M_xt\right)}.
\end{eqnarray}
Set $\cX_t=(X_{x,t}:\,x\in\Z^d)$ and $\cR_t=(R_{x,t}:\,x\in\Z^d)$. It follows from the construction that the process $(\cX_t,\cR_t)$ is a marked point process on $\Z^d$ satisfying
\begin{eqnarray}
\label{DPbis}
\P_{(\cU_1,\cU_2)}(X_{x,t}=1)&=&\P_{(\cT,\cK)}(N_x(t)\geq 1),\\
\label{DD1b}
\P_{(\cU_1,\cU_2)}(R_{x,t}\leq r)&=&\P_{(\cT,\cK)}(M_x(t)\leq r|N_{x}(t)\geq 1).
\end{eqnarray}
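In computational terms, \reff{PP1}-\reff{MP1} is inverse-CDF sampling from two shared uniforms, which makes the monotonicity in $t$ established in Lemma \ref{SDom} below transparent. A minimal Python sketch, in which the constant rate $M$ and the geometric mark law are our own illustrative assumptions:
\begin{verbatim}
import math

# Illustrative assumptions: M_x = M for all x, geometric mark law on
# N_0 with tail G(r) = P(K > r) = (1 - q)^(r + 1).
M, q = 1.0, 0.6
def G(r):
    return (1.0 - q) ** (r + 1)

def F(r, t):
    """Cumulative distribution F_{x,t}(r) of R_{x,t}, eq. (DD1bis)."""
    return ((math.exp(-M * t * G(r)) - math.exp(-M * t))
            / (1.0 - math.exp(-M * t)))

def sample_site(t, u1, u2):
    """Coupled sample (X_{x,t}, R_{x,t}) from two uniforms on (0,1]."""
    X = 1 if u1 <= 1.0 - math.exp(-M * t) else 0
    r = 0
    while F(r, t) < u2:      # inverse CDF: smallest r with F(r,t) >= u2
        r += 1
    return X, r

u1, u2 = 0.35, 0.9           # the same uniforms are reused for every t
for t in (0.05, 0.2, 1.0):   # both coordinates are monotone in t
    print(t, sample_site(t, u1, u2))
\end{verbatim}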
\blem
\label{Lpaso1} Let $t > 0$ and let $(\cX_t,\cR_t)$ be the marked point process on $\Z^d$ defined by \reff{PP1} and \reff{MP1}. Then, the random graphs $\cG(\cX_t,\cR_t)$ and $\cG_{0,t}$ are equally distributed.
\elem
\paragraph{Proof of Lemma \ref{Lpaso1}.}
By \reff{DPbis} and \reff{DD1b} we have that
\begin{eqnarray}
\label{Leyes1b}
\P_{(\cU_1,\cU_2)}(X_{x,t}=1, R_{x,t}\leq r)=\P_{(\cT,\cK)}(N_x(t)\geq 1, M_x(t)\leq r).
\end{eqnarray}
Then, the random graphs $\cG_{0,t}(\cT,\cK)$ and $\cG(\cX_t,\cR_t)$ have the same distribution. \hfill\square
\subsection{Properties of the coupled processes $(\cX_t,\cR_t)$}
Now, we study the main properties of the marked point processes $(\cX_t,\cR_t)$ needed for the proof of Theorem \ref{percolation}. We begin by proving the following auxiliary result.
\blem
\label{haux}
For any $a\in(0,1)$, the function
\begin{eqnarray}
h(z)=\frac{1-\exp(-az)}{1-\exp(-z)}
\end{eqnarray}
is non-decreasing on $[0,\infty)$.
\elem
\paragraph{Proof of Lemma \ref{haux}.}
It suffices to prove the result for rational $a$, since $h$ depends continuously on $a$. Assume then that $a$ is a rational number in $(0,1)$ and write it as the ratio of two positive integers, $a=m/n$ with $1\leq m<n$. Now,
making $y=\exp(-z/n)$ we get
\begin{eqnarray}
\frac{1-\exp(-az)}{1-\exp(-z)}&=&\frac{1-y^m}{1-y^n}=\left(1+\frac{\sum_{k=0}^{n-m-1}y^k}{\sum_{k=1}^my^{-k}}\right)^{-1}.
\end{eqnarray}
Since the expression above is a decreasing function of $y$ and $y=\exp(-z/n)$ is itself a decreasing function of $z$, the result follows. \hfill\square
\blem
\label{SDom}
If $0<t'< t\leq 1$, then
\begin{eqnarray}
(\cX_{t'}, \cR_{t'})\preceq (\cX_t, \cR_t).
\end{eqnarray}
\elem
\paragraph{Proof of Lemma \ref{SDom}.}
Fix $x\in\Z^d$. It follows from \reff{PP1} that $X_{x,t'}\leq X_{x,t}$. Fix $r\in\N_0$. We shall show that if $0<t'<t$, then
\begin{eqnarray}
\label{losradioscrecen}
F_{x,t}(r)\leq F_{x,t'}(r).
\end{eqnarray}
By \reff{DD1bis}, it suffices to show that
\begin{eqnarray}
\tilde{h}(t)=\frac{1-\exp\left(-M_xtG_{x}(r)\right)}{1-\exp\left(-M_xt\right)}
\end{eqnarray}
is a non-decreasing function on $(0,1]$. This follows from Lemma \ref{haux} by substituting $a$ by $G_x(r)$ and $z$ by $M_xt$.
From \reff{losradioscrecen} we may conclude that the random variables $R_{x,t}$ defined in \reff{MP1} satisfy $R_{x,t'}\leq R_{x,t}$. \hfill\square
\subsection{Simultaneous coupling construction: Step 2.}
For each $x\in\Z^d$ and $t > 0$, let
\begin{eqnarray}
\label{PP2}
X^*_{x,t}&:=&\one\{U_{x,1}\leq 1-\exp(-M^*t)\},\\
\label{hmp}
R^*_{x}&:=&\sum_{r\in\mathbb{N}_0}r\one\left\{\inf_{y\in\Z^d}F_{y,1}(r-1)<U_{x,2}\leq \inf_{y\in\Z^d}F_{y,1}(r) \right\}
\end{eqnarray}
and define $\cX^*_t=(X^*_{x,t}:\,x\in\Z^d)$ and $\cR^*=(R^*_{x}:\,x\in\Z^d)$.
First, we use conditions \reff{supinf} and \reff{momentcondition} to show that the random variables introduced in \reff{hmp} are well defined. For that purpose it suffices to show that $\lim_{r\to\infty}\inf_{x\in\Z^d}F_{x,1}(r)=1$. By \reff{supinf}, we have
\begin{eqnarray*}
F_{x, 1}(r)&=&\frac{\exp\left(-M_xG_{x}(r)\right)-\exp\left(-M_x\right)}{1-\exp\left(-M_x\right)}\nonumber\\
&=&1-\frac{1-\exp\left(-M_xG_{x}(r)\right)}{1-\exp\left(-M_x\right)}\nonumber\\
&\geq&1-\frac{1-\exp\left(-M^*\sup_{x\in\Z^d}G_{x}(r)\right)}{1-\exp\left(-M_*\right)}
\end{eqnarray*}
for any $x\in\Z^d$. Then,
\begin{eqnarray}
\label{CIFx1}
\inf_{x\in\Z^d}F_{x, 1}(r)&\geq&1-\frac{1-\exp\left(-M^*\sup_{x\in\Z^d}G_{x}(r)\right)}{1-\exp\left(-M_*\right)}.
\end{eqnarray}
On the other hand,
\begin{eqnarray}
\label{CSup}
\sup_{x\in\Z^d}G_{x}(r)=\sup_{x\in\Z^d}\sum_{\ell>r}\nu_x(\ell)\leq\sum_{\ell>r}\sup_{x\in\Z^d}\nu_x(\ell).
\end{eqnarray}
Under condition \reff{momentcondition} we may conclude that the right hand side of \reff{CSup} converges to $0$ when $r$ goes to $\infty$. Then
\begin{eqnarray}
\label{CSup2}
\lim_{r\to\infty}\sup_{x\in\Z^d}G_{x}(r)=0.
\end{eqnarray}
From \reff{CIFx1} and \reff{CSup2} we deduce that $\lim_{r\to\infty}\inf_{x\in\Z^d}F_{x,1}(r)=1$.
\subsection{The dominating marked point process $(\cX^*_t, \cR^*)$}
\blem
\label{DMPP}
For any $0<t\leq 1$,
\begin{eqnarray}
\label{DSt1}
(\cX_t, \cR_t) \preceq (\cX^*_t, \cR^*).
\end{eqnarray}
\elem
\paragraph{Proof of Lemma \ref{DMPP}.}
It follows from the construction that $X_{x,t}\leq X^*_{x,t}$ and $R_{x,1}\leq R^*_x$. By Lemma \ref{SDom}, we have that $R_{x,t}\leq R_{x,1}$. This completes the proof.
\hfill\square
The last ingredient needed to prove Theorem \ref{percolation} is the following result.
\blem
\label{Cota1} Under the same assumptions as in Theorem \ref{percolation}, there exists $0<t_0\leq 1$ such that $\P_{(\cU_1,\cU_2)}($Percolation$)=0$ for the graph $\cG(\cX^*_t,\cR^*)$, for all $0<t\leq t_0$.
\elem
\paragraph{Proof of Lemma \ref{Cota1}.}
First note that, for each $t>0$, $(\cX^*_t, \cR^*)$ is a spatially homogeneous marked point process on $\Z^d$ with retention parameter $p(t)=1-\exp(-M^*t)$ and probability function of its marks $\nu(r)=\P_{(\cU_1,\cU_2)}(R^*_x=r)$ satisfying
\begin{eqnarray}
\label{Cotagrosa}
\sum_{r\in\N_0}r^d\nu(r)<\infty.
\end{eqnarray}
Indeed,
\begin{eqnarray}
\label{Cotagrosa1}
\nu(r)&=&\inf_{x\in\Z^d}\P_{(\cU_1,\cU_2)}(R_{x,1}\leq r)-\inf_{x\in\Z^d}\P_{(\cU_1,\cU_2)}(R_{x,1}\leq r-1)\nonumber\\
&\leq& \sup_{x\in\Z^d}\P_{(\cU_1,\cU_2)}(R_{x,1}=r).
\end{eqnarray}
Inequality \reff{Cotagrosa1} follows from the inequality $\inf_x\{a_x+b_x\}\leq \inf_x\{a_x\}+\sup_x\{b_x\}$ applied to the sequences $a_{x}(r)=\P_{(\cU_1,\cU_2)}(R_{x,1}\leq r-1)$ and $b_x(r)=\P_{(\cU_1,\cU_2)}(R_{x,1}=r)$.
On the other hand, we have
\begin{eqnarray}
\label{Cotagrosa2}
\P_{(\cU_1,\cU_2)}(R_{x,1}=r)
&=&\P_{(\cU_1,\cU_2)}(R_{x,1}\leq r)-\P_{(\cU_1,\cU_2)}(R_{x,1}\leq r-1)\nonumber\\
&=&\frac{\exp\left(-M_xG_{x}(r)\right)-\exp\left(-M_xG_{x}(r-1)\right)}{1-\exp\left(-M_x\right)}\nonumber\\
&=&\frac{\exp\left(-M_xG_{x}(r)\right)}{1-\exp\left(-M_x\right)}\left(1-\exp\left(-M_x\nu_{x}(r)\right)\right)\nonumber\\
&\leq&\left(\frac{M^*}{1-\exp\left(-M_*\right)}\right)\nu_{x}(r).
\end{eqnarray}
The last inequality follows from well known properties of the exponential function.
From \reff{Cotagrosa1} and \reff{Cotagrosa2} we may conclude that
\begin{eqnarray}
\label{Cotagrosa3}
\nu(r)\leq \left(\frac{M^*}{1-\exp\left(-M_*\right)}\right)\sup_{x\in\Z^d}\nu_{x}(r).
\end{eqnarray}
Inequality \reff{Cotagrosa} follows immediately from \reff{Cotagrosa3}.
Since $(\cX_t^*, \cR^*)$ satisfies the hypothesis of Theorem \ref{coupI}, there exists $p_0>0$ such that $\P_{(\cU_1,\cU_2)}($Percolation$)=0$ for any $0<t\leq 1$ such that $p(t)\leq p_0$.
Therefore, the connected components of the random graphs $\cG(\cX^*_t, \cR^*)$ are almost surely finite for any $0<t\leq \min(-\frac{1}{M^*}\log(1-p_0), 1)$. Indeed,
\begin{eqnarray*}
1-\exp(-M^*t)\leq p_0&\iff& t\leq -\frac{1}{M^*}\log(1-p_0).
\end{eqnarray*}
\hfill\square
\subsection{Proof of Theorem \ref{percolation}.}
By Lemma \ref{Lpaso1}, the random graphs $\cG_{0,t}(\cT,\cK)$ and $\cG(\cX_t,\cR_t)$ have the same distribution. By Lemmas \ref{DMPP} and \ref{Cota1}, we have that there
exists $0<t_0\leq 1$ such that $\P_{(\cU_1,\cU_2)}($Percolation$)=0$ for all $0<t\leq t_0$. Therefore, the connected components of the Harris random graph
$\cG_{0,t}(\cT,\cK)$ are almost surely finite for all $0<t\leq t_0$. \hfill\square
\section{Appendix}
\subsection{Harris graphical construction}
\label{ApHGC}
Let $t_0>0$ be as in Theorem \ref{percolation} and let $C_{\ell}=C_{\ell}(\cT\cap[0,t_0],\cK)$, $\ell\in\N$, be the partition of $\Z^d$ into a countable number of finite islands, with no interaction between them.
\paragraph{Finite-volume construction}
The construction of the process on each finite island $C_{\ell}$ with initial configuration $\eta$ using the Poisson processes is straightforward because the epochs of the associated Poisson processes are well ordered.
Fix $\ell\in\N$ and consider $0<\tau_1<\tau_2<\cdots<\tau_{n}$, where
\begin{eqnarray}
\{\tau_1,\tau_2, \dots, \tau_{n}\}=\bigcup_{x\in C_{\ell}}\left(\cT_x\cap [0,t_0]\right).
\end{eqnarray}
We construct the process $\sigma^{\eta}_t(C_\ell)$ inductively as follows. For $k=1,\dots, n$, let $x_k$ be the site such that $\tau_k\in\cT_{x_k}$.
\paragraph{Step 1.} Let
\[\sigma^{\eta}_t(C_\ell)=\eta(C_\ell) \mbox{ for all } 0\leq t<\tau_1\]
and set
\begin{eqnarray*}
\sigma^{\eta}_{\tau_1}(C_\ell)=\sigma^{\eta}_{x_1, W_{x_1}(\sigma^{\eta}_{\tau_1-}(C_\ell))}.
\end{eqnarray*}
Thus we have defined the process on the time interval $[0,\tau_1]$.
\paragraph{Inductive step.} Assume that $\sigma^{\eta}_{t}(C_\ell)$ has already been defined for all $0\leq t\leq \tau_k$. Then set
\[\sigma^{\eta}_{t}(C_\ell)=\sigma^{\eta}_{\tau_k}(C_\ell) \mbox{ for all } \tau_k<t<\tau_{k+1}\]
and
\begin{eqnarray*}
\sigma^{\eta}_{\tau_{k+1}}(C_\ell)=\sigma^{\eta}_{x_{k+1}, W_{x_{k+1}}(\sigma^{\eta}_{\tau_{k+1}-}(C_\ell))}.
\end{eqnarray*}
This step is repeated until the construction has been finished on the time interval $[0,t_0]$.
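Algorithmically, the two steps above amount to an event-driven simulation: collect all Poisson epochs of the island, sort them, and apply the updates in chronological order. A schematic Python sketch, in which \texttt{draw\_spin} is a hypothetical stand-in for the sampling of $W_{x}(\sigma^{\eta}_{t-})$ from the uniform mark in \reff{GSa}:
\begin{verbatim}
def evolve_island(eta, epochs, draw_spin, t0):
    """Evolve the configuration on one island up to time t0.

    eta       : dict site -> initial spin on the island
    epochs    : iterable of (time, site, u) with time <= t0; u is the
                uniform mark U_{x,n} attached to the epoch
    draw_spin : function (site, config, u) -> new spin value, playing
                the role of W_x(sigma_{t-}) in the update rule
    """
    sigma = dict(eta)
    for time, site, u in sorted(epochs):   # chronological order
        sigma[site] = draw_spin(site, sigma, u)
    return sigma
\end{verbatim}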
\paragraph{Infinite-volume construction.} Finally, for each $\ell\in\N$, let
\begin{eqnarray}
\label{GSb}
(\sigma^\eta_t)(C_{\ell}):=\sigma^{\eta}_t(C_\ell), \qquad t\in(0,t_0].
\end{eqnarray}
Since $t_0$ is independent of the initial configuration, we may conclude, by means of the Markovian property of Poisson processes, that the state of the process may be computed by
induction at any time $t \geq 0$.
\subsection{Geometry of $\Z^d$}
The following proposition deals with geometric aspects used in the proof of Lemma \ref{Escala2}.
\bpro
\label{geozd}
Fix $d\in\N$. Then, for any $n,r\in\N$, we have
\[S_{nr}\subset\bigcup_{x \in S_n} B\left(rx,\frac{d}{2}\,r\right).\]
\epro
In order to prove Proposition \ref{geozd} we need the following result.
\blem
\label{geozd2}
Let $x=(x_1,\dots, x_d)\in\R^d$ be such that $1>x_1\geq x_2\geq\cdots\geq x_d>0$ and $\sum_{i=1}^dx_i=m\in\N$, where $m<d$. Let $y=(1,\dots, 1, 0,\dots, 0)$ ($m$-ones). Then, $\|y-x\|\leq \frac{d}{2}$.
\elem
\paragraph{Proof of Proposition \ref{geozd}.} By symmetry, it suffices to prove the proposition for $x$ in the region
$\{x=(x_1, x_2, \ldots, x_d) \in \mathbb{R}^d : x_i \geq0, i=1, 2, \ldots, d \}$. Fix $n$ and $r\in\N$. By Lemma \ref{geozd2}, we have that for any $x\in\R^d$ with
$\|x\|=n$ there exists $y\in\Z^d$ with $\|y\|=n$ such that $\|y-x\|\leq\frac{d}{2}$.
Pick $x\in S_{nr}$. Then, for $\frac{x}{r}\in S_n$ there exists $ y\in\Z^d$ with $\|y\|=n$ such that $\|y-\frac{x}{r}\|\leq \frac{d}{2}$. Thus, $\|ry-x\|\leq\frac{d}{2}r$.
\hfill\square
\vspace{0.5cm}
We finish this subsection by proving Lemma \ref{geozd2}.
\paragraph{Proof.} We begin by observing that
\begin{eqnarray}
\label{n162}
\|y-x\|
&=&\sum_{i=1}^m(1-x_i)+\sum_{i=m+1}^dx_i\nonumber\\
&=&1-x_1+\sum_{i=2}^m(1-x_i)+\sum_{i=m+1}^dx_i\nonumber\\
&=&1-m+\sum_{i=2}^dx_i+m-1-\sum_{i=2}^mx_i+\sum_{i=m+1}^dx_i\nonumber\\
&=&2\sum_{i=m+1}^dx_i.
\end{eqnarray}
Now assume that $\|y-x\|>\frac{d}{2}$. From \reff{n162} we have that $\sum_{i=m+1}^dx_i>\frac{d}{4}$. Therefore, there exists $i\in\{m+1,\dots, d\}$ such that $x_i>\frac{d}{4(d-m)}$.
Since the coordinates are ordered, also $x_1,\dots, x_m\geq x_i>\frac{d}{4(d-m)}$, and hence $\sum_{i=1}^mx_i>\frac{dm}{4(d-m)}$. We get
\begin{eqnarray}
\label{n163}
\sum_{i=1}^dx_i>\frac{dm}{4(d-m)}+\frac{d}{4}
=\frac{d^2}{4(d-m)}.
\end{eqnarray}
Note that
\begin{eqnarray}
\label{n164}
\frac{d^2}{4(d-m)}\geq m
\end{eqnarray}
if, and only if, $(d-2m)^2\geq 0$. Since the last inequality is true, so is inequality (\ref{n164}). From \reff{n163} and \reff{n164} we get that $\sum_{i=1}^dx_i>m$ which is a contradiction.
The contradiction comes from the fact that we have assumed that $\|y-x\|>\frac{d}{2}$. \hfill\square
\section*{Acknowledgement}
We thank Jorge R. Busch for a careful reading of a previous version of this work and for comments that improved the presentation of the results. During the realization of this work
both authors received partial financial support from FAPESP, grant 09/52379-8. Also, the second author received support from FAPESP, grant 2009/16437-3.
\section{Introduction \label{Intr}}
Integrable models of statistical
mechanics and field theory \cite{Bax,Mussardo10} provide us with a very important source of information about the critical
behavior of condensed matter systems. Any progress in the analytical solution of such models is highly desirable, since it yields exact information not only about the model itself but also about the whole universality class it represents.
On the other hand, integrable models can
serve as zeroth-order approximations in the perturbative analysis of their non-integrable deformations, providing a useful insight into a rich set of physical phenomena that never occur in integrable models: confinement of topological excitations, particle decay and inelastic scattering, false-vacuum decay, etc.
The Ising Field Theory (IFT) is the Euclidean quantum field theory that describes the scaling limit of the
two-dimensional lattice Ising model near its phase transition point.
Upon making a Wick rotation, the IFT can be also viewed as a Lorentz-covariant field theory describing the dynamics of a one-dimensional quantum ferromagnet at zero temperature near its quantum phase
transition point \cite{Sach99}. The IFT is integrable at all temperatures for zero magnetic field $h=0$.
Directly at the
critical point $T=T_c$, $h=0$ it reduces \cite{BPZ84} to the minimal conformal field theory $\mathcal{M}_3$,
which describes free massless Majorana fermions. These fermions acquire a nonzero mass $m\sim |T-T_c|$ at non-critical temperatures, but remain free at $h=0$. In the disordered phase $T>T_c$, the fermions are ordinary particles, while in the ferromagnetic phase $T<T_c$ they
become topological excitations - the kinks interpolating between two degenerate ferromagnetic vacua. Application of the magnetic field $h>0$ induces interactions between fermions and breaks the
integrability of the IFT at $T \neq T_c$. In the ordered phase $T<T_c$, it explicitly
breaks also the degeneracy between ferromagnetic vacua. This induces an attractive long-range linear potential between the kinks, which leads to their confinement into two-kink bound states. Due to the analogy with quantum chromodynamics, such bound states are often called "mesons", while the kink topological
excitations in such a
confinement regime are also called "quarks". In what follows, we shall synonymously use the terms ``kinks'' and ``quarks''.
This mechanism of confinement, known as the McCoy-Wu scenario, was
first described for the IFT by these authors \cite{McCoy78} in 1978, and
attracted much interest in the last two decades. Recently it was experimentally observed and studied in
one-dimensional quantum ferro- and anti-ferromagnets \cite{Coldea10,Mor14,Gr15,Wang15,Bera17}.
Since the IFT is not integrable at $h>0$, $m>0$, different approximate techniques have been used for the theoretical understanding
of the kink confinement in this model, such as analytical perturbative expansions \cite{FonZam2003, FZ06,Rut05,Rut09} in the {\it weak confinement regime}
near the integrable direction $h=0$, and numerical methods \cite{FZ06,LT2015}.
The idea to use the magnetic field as a perturbative parameter characterizing a small deformation of an
integrable massive field theory was first realized in the Form Factor Perturbation Theory (FFPT) introduced by Delfino, Mussardo, and Simonetti
\cite{Del96}. It turns out, however, that their original FFPT
cannot be applied directly to the kink confinement problem and requires considerable modification.
The reason
is that even an arbitrarily weak long-ranged confining interaction leads to qualitative changes of the particle
content at the confinement-deconfinement transition: isolated kinks cannot exist any more in the presence of the magnetic field, and
the mass spectrum $M_n(m,h)$, $n=1,2,\ldots$ of their bound states (the mesons) becomes dense in the interval $2m<M_n<\infty$
in the limit $h\to+0$.
This in turn makes straightforward perturbation theory
based on the adiabatic hypothesis unsuitable. A different, non-perturbative technique to study
the IFT meson mass spectrum was developed by Fonseca and Zamolodchikov \cite{FonZam2003}. This technique
is based on the Bethe-Salpeter equation, which was derived for the IFT
in \cite{FonZam2003} in the two-quark approximation.
The latter approximation
implies that at small magnetic fields $h\to+0$, the meson eigenstate
\begin{equation}\label{mesonwf}
|\Psi_P\rangle=|\Psi_P^{(2)}\rangle+|\Psi_P^{(4)}\rangle+|\Psi_P^{(6)}\rangle+\ldots
\end{equation}
of the IFT Hamiltonian, with $P$ being the meson momentum, is approximated by the two-quark component
\begin{equation}
|\Psi_P^{(2)}\rangle=\frac{1}{2}\int_{-\infty}^\infty\frac{dp_1}{2\pi}\frac{dp_2}{2\pi}\delta(p_1+p_2-P)
\,\Psi_P(p_1,p_2)\,|p_1p_2\rangle,
\end{equation}
neglecting the multi-quark contributions represented by further terms in the right-hand side of \eqref{mesonwf}. Here $p_{1}, p_2$ denote the momenta of two quarks coupled into a meson.
It was shown in \cite{Rut09}
that the FFPT can be modified to adapt it to the confinement problem, if one takes into account the long-range attractive potential already at zeroth order and applies a certain $h$-dependent unitary transform
in the Fock space of the free IFT. Such a modified FFPT incorporates the Bethe-Salpeter
equation in its leading order. This perturbative technique can be effectively used in
the weak confinement regime $h\to+0$ despite the
breakdown of the adiabatic hypothesis at the confinement-deconfinement transition at $h=0$.
Two kinds of asymptotic expansions for the meson masses $M_n(m,h)$ have been obtained for
the IFT in the weak confinement regime $h\to+0$. The {\it low energy expansion}
\cite{McCoy78,FonZam2003, FZ06,Rut09}
in fractional powers of
$h$ describes the initial part of the meson mass spectrum, while the {\it semiclassical expansion}
\cite{FZ06,Rut05, Rut09} in integer powers of $h$ describes the meson masses $M_n(m,h)$ with $n\gg1$.
High accuracy of both expansions has been established \cite{FZ06,LT2015} by comparison with the IFT meson mass spectra calculated by direct numerical methods based on the Truncated Conformal Space Approach \cite{YuZam90, YuZam91}.
The leading terms in the low energy and semiclassical expansions can be gained from the Bethe-Salpeter equation. This indicates \cite{FZ06} that the two-quark approximation is asymptotically exact to leading order in $h\to 0$. It was shown \cite{FonZam2003, FZ06},
however, that starting from the second order in $h$ in both
low energy and semiclassical expansions, one must take into account the mixture of four-quark, six-quark, etc. configurations in the meson state \eqref{mesonwf}.
The leading multi-quark correction to the meson masses in the IFT was obtained by Fonseca and Zamolodchikov \cite{FZ06}. This correction is of order $h^2$, and originates from the renormalization of the quark mass. The third-order
$\sim h^3$ multi-quark corrections to the IFT meson masses have so far only partly been known.
These corrections arise from contributions of three effects.
\begin{itemize}
\item Renormalization of the long-range attractive force between neighboring kinks (the 'string tension')
of order $h^3$, which was determined in \cite{FZ06}.
\item
Multi-quark fluctuations modify the regular part of the Bethe-Salpeter kernel, which is
responsible for the pair interaction between quarks at short distances. The corresponding contribution $\sim h^3$ to
the meson masses was found in \cite{Rut09}.
\item The radiative correction to the quark mass of third order in $h$, which was previously unknown.
\end{itemize}
The first aim of this paper is to complete the calculation of the
meson mass spectrum in the IFT in the weak confinement regime $h\to+0$ to third order in $h$.
To this end, we review and further modify the form factor perturbative technique developed for the confinement
problem in \cite{Rut09}. The FFPT contains a well known problem caused by the so-called kinematic singularities in the matrix elements of the spin operator. Merging of such singularities in the integrals arising in the FFPT leads to ill-defined quantities like $\delta(0)$, or $\delta(p)/p$. We propose a
consistent regularization procedure that allows one to perform high-order FFPT calculations in a controlled fashion
avoiding ill-defined quantities in
intermediate expressions.
The key idea is to replace the uniform magnetic field in the Hamiltonian of the
infinite system
by its nonuniform
counterpart switched on in a finite interval of length $R$, to perform all calculations at a
large but finite $R$, and to proceed to the limit $R\to\infty$ afterwards. To verify the efficiency of this regularization procedure, we use
it to reproduce several well-known results and
to obtain some new ones for the scaling limit of the Ising model. Then we apply the same procedure
to calculate the third-order radiative correction to the quark mass in the ferromagnetic IFT
showing that it vanishes.
The mechanism of confinement outlined above is quite common in two-dimensional quantum field theories
that are
invariant under some discrete symmetry group and display a continuous order-disorder phase transition.
If such a model has several degenerate vacua in the ordered phase, the application of an external
field typically leads to confinement of
kinks interpolating between different vacua. Realizations of this scenario in different two-dimensional
models have been the subject of considerable interest in recent years \cite{DelMus98,Del08,Mus07,
Mus08,MusTak09}. In this paper we shall address some aspects of the confinement problem in the three-state Potts Field Theory (PFT).
The three-state PFT represents the scaling limit of the two-dimensional lattice three-state Potts model \cite{Bax,Wu82}.
At zero magnetic field, it is invariant under the permutation group ${\mathbb S}_3$ and displays
the continuous order-disorder phase transition.
It was shown by Dotsenko \cite{DOTSENKO198454}, that the conformal field theory corresponding
to the critical point of the three-state Potts model can be identified as the minimal unitary model $\mathcal{M}_5$. In the ordered phase at zero magnetic field, the three-state PFT
has three degenerate vacua and six kinds of massive particles of the same
mass - the kinks ('quarks') $K_{\mu \nu}$ interpolating between vacua $|0\rangle_\mu$ and $|0\rangle_\nu$, where $\mu,\nu\in \mathbb{Z} \,{{\rm mod}\,3}$. The three-state PFT is integrable at
zero magnetic field \cite{CZ92}, and the quark scattering matrix is exactly known \cite{KOBERLE1979209}.
This scattering matrix is non-trivial, which indicates that the quarks in the three-state PFT are not free at zero magnetic field, but strongly interact with each other at small distances, in contrast to the IFT.
The form factors of the physically relevant operators in the massive three-state PFT were determined by
Kirillov and Smirnov \cite{KS88}.
Application of the magnetic field $h\ne0$ breaks integrability of the PFT and leads to confinement of quarks.
The quark bound states in the $q$-state PFT in the confinement regime were classified by Delfino and
Grinza \cite{Del08}, who also showed that besides the mesonic (two-quark) bound states, the baryonic
(three-quark) bound states are allowed at $q=3$. First numerical calculations of the
meson and baryon mass spectra in the $q$-state PFT were described in
\cite{Del08,LTD}. The meson masses in the $q$-state PFT in the
weak confinement regime were analytically calculated to leading order in $h$ in \cite{RutP09}, where the generalization of the IFT
Bethe-Salpeter equation to the PFT was also
described. The masses of the several lightest baryons in the three-state PFT to leading order in $h$ have been calculated in \cite{Rut15B}. Analytical predictions of \cite{RutP09,Rut15B}
for the meson and baryon masses in the three-state PFT were confirmed in direct numerical
calculations performed by Lencs{\'e}s and Tak{\'a}cs \cite{LT2015}.
The second subject of the present paper is to estimate the second-order radiative correction
to the quark masses in the three-state PFT in the weak confinement regime.
This correction to the quark mass gives rise to the multi-quark corrections to the
meson and baryon masses in second order in $h$.
Starting from
the Lehmann expansion for the quark mass radiative correction,
we calculate its first term representing the quark self-energy diagram with two virtual quarks in the
intermediate state.
The remainder of this paper is organized as follows.
In the next section we start by recalling some well-known properties of the $q$-state Potts model on the square lattice, and then briefly describe its scaling limit in the case $q=3$ at zero magnetic field.
In Section \ref{Is} we review the FFPT adapted in \cite{Rut09}
to the confinement problem in the IFT.
We further improve this FFPT technique in order to regularize the products of singular matrix elements of the spin operator which arise in this method.
We then apply the improved version of the FFPT to recover some well-known results and to obtain several new ones for the IFT. In Section \ref{FF} we describe the form factors of the disorder spin operators
in the three-state PFT at zero magnetic field in the paramagnetic phase, which were found by Kirillov and Smirnov \cite{KS88}.
Applying the duality transform to these form factors, we obtain the matrix elements of the order spin operators in the ferromagnetic three-state PFT between the one- and two-quark states. These matrix elements are used in Section \ref{SOPFT} to estimate the second-order correction to
the quark mass in the latter model in the presence of a weak magnetic field. Concluding remarks are
given in Section \ref{Conc}. Finally, there are four
appendixes describing technical details of some of the required
calculations.
\section{Potts Field Theory \label{PFTsec}}
In this section, following \cite{Del08}, we review
some well known properties of the $q$-state Potts model on the square lattice, and then proceed to its scaling limit.
Consider the two-dimensional square lattice ${\mathbb Z}^2$
and associate with each lattice site $x\in {\mathbb Z}^2 $ a discrete spin variable
$s(x) =1,2,\ldots,q$. The model
Hamiltonian is defined as
\begin{equation}
{\cal E}=-\frac{1}{T} \sum_{<x,\,y>} \delta_{s(x),s(y)}-H\sum_x
\delta_{s(x),q}.
\label{HamP}
\end{equation}
Here the first summation is over nearest neighbour pairs, $T$ is the
temperature, $H$ is the external magnetic field
applied along the $q$-th direction, and $\delta_{\alpha,\alpha'}$ is the Kronecker symbol.
At $H=0$, the Hamiltonian (\ref{HamP}) is invariant under the permutation group ${\mathbb S}_{q}$; at $H\ne0$
the symmetry group reduces to ${\mathbb S}_{q-1}$. At $q=2$, model \eqref{HamP} reduces to the Ising model.
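As a concrete illustration of \eqref{HamP}, the following Python sketch evaluates the energy of a spin configuration on a small periodic lattice; the parameter values are placeholders chosen only for the example.
\begin{verbatim}
import numpy as np

def potts_energy(s, T, H, q):
    """Energy (HamP) of a configuration s on a periodic L x L lattice.

    s is an integer array with entries in {1, ..., q}; each nearest-
    neighbour pair is counted once via the right/down shifts."""
    bonds = ((s == np.roll(s, 1, axis=0)).sum()
             + (s == np.roll(s, 1, axis=1)).sum())
    field = (s == q).sum()
    return -bonds / T - H * field

rng = np.random.default_rng(0)
L, q, T, H = 16, 3, 0.9, 0.05      # placeholder parameter values
s = rng.integers(1, q + 1, size=(L, L))
print(potts_energy(s, T, H, q))
\end{verbatim}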
The order parameters $\langle\sigma_\alpha\rangle$ can be associated with the variables
\[
\sigma_\alpha(x) =\delta_{s(x),\alpha}-\frac{1}{q}, \quad \alpha=1,\ldots,q.
\]
The parameters $\langle\sigma_\alpha\rangle$ are not independent, since
\begin{equation}
\sum_{\alpha=1}^q\sigma_\alpha(x)=0.
\end{equation}
Two complex spin variables $\sigma(x)$ and $\bar{\sigma}(x)$
defined by the relations
\begin{align}
\sigma(x)=\exp [2 \pi i s(x)/q]=\sum_{\alpha=1}^q \exp (2 \pi i \alpha/q)\,\sigma_\alpha(x),\\
\bar{\sigma}(x)=\exp [-2 \pi i s(x)/q]=\sum_{\alpha=1}^q \exp (-2 \pi i \alpha/q)\,\sigma_\alpha(x),
\end{align}
are useful in proceeding to the continuous limit.
At zero magnetic field, the model undergoes a ferromagnetic phase transition at the critical temperature
\begin{equation}
T_c=\frac{1}{ \log(1+\sqrt{q})}.
\end{equation}
This transition is continuous for $2\le q\le 4$.
The ferromagnetic low-temperature phase at zero field
is $q$-fold degenerate.
The Potts model \eqref{HamP} at $H=0$ possesses a dual symmetry, which generalizes
the Kramers-Wannier duality of the Ising model. This symmetry connects the properties of the model in the
ordered and disordered phases. By duality, the partition functions of the zero-field Potts model coincide
at the temperatures $T$ and $\tilde{T}$, provided
\[
\left(e^{1/T}-1 \right)\left(e^{1/\tilde{T}}-1 \right)=q.
\]
In particular, the critical temperature $T_c$ is the self-dual point, since $e^{1/T_c}-1=\sqrt{q}$.
For a review of many other known properties of the Potts model see \cite{Wu82,Bax}.
The scaling limit of
the model (\ref{HamP}) at $H\to 0$, $T\to T_c$, and $q\in [2,4]$ is described by the
Euclidean action \cite{Del08}
\begin{equation} \label{AP}
{\cal A}^{(q)}= {\cal A}_{CFT}^{(q)} -\tau\int d^2x\,{\mathfrak e}(x)-
h\int d^2x\,\sigma_q(x)\,.
\end{equation}
Here $x$ denotes the points of the plane $\mathbb{R}^2$ with Cartesian coordinates
$(\rm{x},\rm{y})$.
The first term ${\cal A}_{CFT}^{(q)}$ corresponds to the conformal field theory,
which is associated with the critical point.
Its central charge $c(q)$ takes the value
\begin{equation}
c(q)=1-\frac{6}{t(t+1)}, \quad {\rm where}\;\;\sqrt{q}=2\sin \frac{\pi(t-1)}{2(t+1)}.
\end{equation}
The fields ${\mathfrak e}(x)$ (energy density)
and $\sigma_q(x)$ (spin density) are characterized by the
scaling dimensions
\[
X_{\mathfrak e}^{(q)}=\frac{1}{2}\left(1+\frac{3}{t}\right), \quad \quad
X_\sigma^{(q)}=\frac{(t-1)(t+3)}{8 t(t+1)}.
\]
The parameters $\tau\sim (T-T_c)$ and $h\sim H$ are proportional
to the deviations of the temperature and the magnetic field
from their critical point values. At $h=0$ and $\tau\ne 0$ the field theory \eqref{AP} is integrable,
i.e., it has an infinite number of integrals of motion and a
factorizable scattering matrix \cite{CZ92}.
In the rest of this section we shall concentrate on the $q=3$ Potts field theory.
The simpler and better studied Ising case corresponding to $q=2$ will be
discussed in Section \ref{Is}.
\subsection{Disordered phase at $h=0$}
The model has a unique ground state $|0\rangle_{ par}$ in the disordered phase, at $\tau>0$ and $h=0$.
The particle content of the model consists of a massive scalar particle and its antiparticle.
Their momentum $p$ and energy
\begin{equation}\label{omff}
\omega(p)=\sqrt{p^2+m^2}
\end{equation}
can be
conveniently parametrized by the rapidity $\beta$,
\begin{equation}\label{disp}
p(\beta)=m \sinh\beta, \quad \omega(\beta)=m \cosh\beta.
\end{equation}
Here $m\sim {\tau}^{5/6}$ is the particle mass.
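Note that the parametrization \eqref{disp} automatically fulfils the relativistic dispersion relation \eqref{omff}, since $\omega(\beta)^2-p(\beta)^2=m^2(\cosh^2\beta-\sinh^2\beta)=m^2$.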
The space of states is generated by the Faddeev-Zamolodchikov creation/annihilation operators $Z_\varepsilon^*(\beta)$,
$Z_\varepsilon(\beta)$, where the index $\varepsilon=\pm1$ distinguishes particles ($\varepsilon=1$) and
antiparticles ($\varepsilon=-1$). These operators satisfy the following equations
\begin{align}\label{ZZcom}
Z_{\varepsilon_1}(\beta_1)\,Z_{\varepsilon_2}(\beta_2)=S_{\varepsilon_1,\varepsilon_2}(\beta_1-\beta_2)Z_{\varepsilon_2}(\beta_2)\,Z_{\varepsilon_1}(\beta_1),\\ {\label{ZZ2}
Z_{\varepsilon_1}^*(\beta_1)\,Z_{\varepsilon_2}^*(\beta_2)=S_{\varepsilon_1,\varepsilon_2}(\beta_1-\beta_2)Z_{\varepsilon_2}^*(\beta_2)\,Z_{\varepsilon_1}^*(\beta_1) },\\
Z_{\varepsilon_1}(\beta_1)\,Z_{\varepsilon_2}^*(\beta_2)=S_{\varepsilon_2,\varepsilon_1}(\beta_2-\beta_1)Z_{\varepsilon_2}^*(\beta_2)\,Z_{\varepsilon_1}(\beta_1)+\delta_{\varepsilon_1\varepsilon_2}
\delta(\beta_1-\beta_2),\label{FZ2}
\end{align}
where
\begin{eqnarray}\label{S}
&&S_{-1,-1}(\beta)=S_{1,1}(\beta)=\frac{\sinh[(\beta+2\pi i/3)/2]}{\sinh[(\beta-2\pi i/3)/2]},\\
&&S_{1,-1}(\beta)=S_{-1,1}(\beta)=S_{1,1}(i\pi-\beta). \nonumber
\end{eqnarray}
Equation \eqref{FZ2} implies that the one-particle states are normalized as
\begin{equation}\label{norm3P}
\phantom{x}_{ par} \langle 0|Z_{\varepsilon_1}(\beta_1)Z_{\varepsilon_2}^*(\beta_2)|0\rangle_{ par} =
\delta_{\varepsilon_1\varepsilon_2}
\delta(\beta_1-\beta_2).
\end{equation}
The two-particle scattering amplitudes \eqref{S} were found by K\"oberle and Swieca \cite{KOBERLE1979209}.
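For real rapidities these amplitudes are pure phases and satisfy the unitarity relation $S_{\varepsilon_1,\varepsilon_2}(\beta)\,S_{\varepsilon_1,\varepsilon_2}(-\beta)=1$, which can be verified directly from \eqref{S}. The following short Python check is an illustration only and is not part of the original analysis:
\begin{verbatim}
import cmath

def S11(beta):
    """Particle-particle amplitude, first line of eq. (S)."""
    return (cmath.sinh((beta + 2j * cmath.pi / 3) / 2)
            / cmath.sinh((beta - 2j * cmath.pi / 3) / 2))

def S1m1(beta):
    """Particle-antiparticle amplitude, defined through eq. (S)."""
    return S11(1j * cmath.pi - beta)

for beta in (0.3, 1.7, -2.4):
    assert abs(S11(beta) * S11(-beta) - 1) < 1e-12   # unitarity
    assert abs(abs(S11(beta)) - 1) < 1e-12           # pure phase
    assert abs(abs(S1m1(beta)) - 1) < 1e-12          # pure phase
# The denominator of S11 vanishes at beta = 2*pi*i/3, i.e. the
# amplitude has a pole there:
print(abs(S11(2j * cmath.pi / 3 + 1e-8)))            # diverges as ~1e8
\end{verbatim}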
The generators of the
permutation group $\mathbb{S}_3\simeq \mathbb{Z}_3\rtimes \mathbb{Z}_2$ act
on the paramagnetic vacuum and particles as follows
\begin{align}
\Omega |0\rangle_{ par} =|0\rangle_{ par} ,\quad C |0\rangle_{ par} =|0\rangle_{ par} , \\
\Omega Z_{\varepsilon}^*(\beta)\Omega^{-1}= \upsilon^{\varepsilon} Z_{\varepsilon}^*(\beta), \\
C Z_{\varepsilon}^*(\beta) C^{-1}=Z_{-\varepsilon}^*(\beta).
\end{align}
Here $\upsilon=\exp(2\pi i/3)$, $\Omega$ is the generator of the
cyclic permutation group $\mathbb{Z}_3$, $\Omega^3 =1$,
$C$ is the charge conjugation, $C^2=1$.
The vector space $\mathcal{L}_{par}$ of paramagnetic states
is spanned by the paramagnetic vacuum $|0\rangle_{ par}$,
and the $n$-particle vectors
\begin{equation}\label{rapibas}
|\beta_n,\ldots,\beta_2,\beta_1\rangle_{\varepsilon_n,\ldots,\varepsilon_2,\varepsilon_1} \equiv
Z_{\varepsilon_n}^*(\beta_n)\ldots Z_{\varepsilon_2}^*(\beta_2) Z_{\varepsilon_1}^*(\beta_1) |0\rangle_{par},
\end{equation}
with $n=1,2,\ldots.$ The bra-vector corresponding to \eqref{rapibas} is denoted as
\[
\phantom{x}_{\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n}\langle\beta_1,\beta_2,\ldots, \beta_n|\equiv
\phantom{x}_{ par} \langle 0|Z_{\varepsilon_1}(\beta_1) Z_{\varepsilon_2}(\beta_2) \ldots Z_{\varepsilon_n}(\beta_n) .
\]
Let us denote by $\mathcal{L}_{sym}$ the subspace of $\mathcal{L}_{par}$ spanned by the vacuum $|0\rangle_{ par}$
and the vectors \eqref{rapibas} for which $\sum_{j=1}^n\varepsilon_j =0\,{\rm mod}\, 3$.
The operator $\Omega$ acts as the identity on the subspace $\mathcal{L}_{sym}$.
The $n$-particle vectors \eqref{rapibas} are not linearly independent,
but satisfy a number of linear relations, which are imposed
on them by the commutation relations \eqref{ZZ2}. For example,
\begin{equation}\label{examp}
|\beta_1,\beta_2\rangle_{\varepsilon_1,\varepsilon_2}=S_{\varepsilon_1,\varepsilon_2}(\beta_1-\beta_2)
|\beta_2,\beta_1\rangle_{\varepsilon_2,\varepsilon_1}.
\end{equation}
The "in"-basis in the $n$-particle subspace
$\mathcal{L}_{par}^{(n)}$ of $\mathcal{L}_{par}$
is formed by the vectors of the form \eqref{rapibas} with $\beta_n>\beta_{n-1}>\ldots>\beta_1$,
and the "out"-basis in the same subspace $\mathcal{L}_{par}^{(n)}$ is formed by the vectors
\eqref{rapibas} with $\beta_n<\beta_{n-1}<\ldots<\beta_1$.
Reconstruction of the matrix elements of local operators between such basis states in integrable models
is the main subject of the form factor bootstrap program
\cite{smirnov1992form}. For the three-state PFT, this program was realized by Kirillov and Smirnov in \cite{KS88}, where the explicit representations for the form factors of the main
operators naturally arising in this model were obtained. We postpone the discussion of these results to
Section \ref{FF}.
\subsection{Ordered phase at $h=0$ \label{FERR}}
In the low temperature phase $\tau<0$, the ground state $|0\rangle_\mu$, $\mu={0},1,2\,
{\rm mod}\, 3$, is three-fold degenerate at $h=0$. The elementary excitations are topologically charged, being represented by six kinks
$|K_{\mu\nu}(\beta)\rangle$, $\mu,\nu\in \mathbb{Z}\,{{\rm mod}\, 3}$, $\mu\ne\nu$, interpolating between two different vacua $|0\rangle_\mu$ and $|0\rangle_\nu$.
These kinks are massive relativistic particles with the
mass $m\sim\,(-\tau)^{5/6}$.
The generators of the symmetry group $\mathbb{S}_3$ act on the vacua and one-kink states as follows,
\begin{eqnarray}
&&{\tilde{\Omega}} |0\rangle_\mu=|0\rangle_{\mu+1}, \\
&&{\tilde{C}} |0\rangle_\mu=|0\rangle_{-\mu},\\
&&{\tilde{\Omega}} |K_{\mu\nu}(\beta)\rangle=|K_{\mu+1,\nu+1}(\beta)\rangle,\label{OmK}\\
&&{\tilde{C}} |K_{\mu\nu}(\beta)\rangle=|K_{-\mu,-\nu}(\beta)\rangle.
\end{eqnarray}
The subspace $\mathcal{L}_{fer}^{(n)}$ of the $n$-kink states in the ferromagnetic space $\mathcal{L}_{fer}$
is spanned by the vectors
\begin{equation}\label{kinkS}
|K_{\mu_n \mu_{n-1}}(\beta_n)\ldots K_{\mu_2 \mu_{1}}(\beta_2) K_{\mu_1 \mu_{0}}(\beta_1)\rangle.
\end{equation}
The corresponding bra-vector is denoted as
\[
\langle K_{\mu_0 \mu_{1}}(\beta_1) K_{\mu_1 \mu_{2}}(\beta_2) \ldots
K_{\mu_{n-1} \mu_{n}}(\beta_n)|.
\]
The $n$-kink states \eqref{kinkS} are called topologically neutral,
if $\mu_n=\mu_0 $, and topologically charged otherwise.
We denote by $\mathcal{L}_0$ the topologically neutral subspace of $\mathcal{L}_{fer}$ spanned by the
ferromagnetic vacuum $|0\rangle_0$, and vectors \eqref{kinkS} with $\mu_n=\mu_0=0 $.
The Kramers-Wannier duality of the square-lattice Potts model \cite{Bax,Wu82} manifests itself also in the quantum
Potts spin chain model \cite{Tak13}, and in the scaling PFT at and beyond the critical point \cite{DOTSENKO198454,CZ92}.
Roughly speaking, the duality symmetry in the latter case can be viewed as the kink-particles correspondence \cite{Del08,Tak13}
\begin{eqnarray*}
&&|K_{10}(\beta) \rangle, \;|K_{21}(\beta) \rangle, \;|K_{02}(\beta) \rangle \quad \longleftrightarrow \quad|\beta \rangle_1,\\
&&|K_{01}(\beta) \rangle, \;|K_{12}(\beta) \rangle, \;|K_{20}(\beta) \rangle \quad \longleftrightarrow \quad |\beta\rangle_{-1}
\end{eqnarray*}
between the elementary excitations in the ferromagnetic and paramagnetic phases.
To be more precise, let us define the duality transform $\mathcal{D}$ as a linear mapping
$\mathcal{L}_0\to \mathcal{L}_{sym} $
determined
by the following relations
\begin{eqnarray}
&&\mathcal{D}\, |0\rangle_0=|0\rangle_{ par},\\\label{dualV}
&&\mathcal{D} |K_{\mu_{n},\mu_{n-1}}(\beta_n),
\ldots, K_{\mu_{1},\mu_0}(\beta_1)\rangle=|\beta_n,\ldots,\beta_1\rangle_{\epsilon_n,\ldots,\epsilon_1},
\end{eqnarray}
where
\begin{equation}
\epsilon_j=\begin{cases}
1, \;\; {\rm if} \;\;\mu_j-\mu_{j-1}=1\,{\rm mod\,} 3,\\
-1, \; {\rm if} \;\;\mu_j-\mu_{j-1}=-1\,{\rm mod\,} 3,
\end{cases}
\end{equation}
and $\mu_n =\mu_0 =0 $.
The Kramers-Wannier duality of the PFT
requires the mapping $\mathcal{D}$ to be unitary, i.e.
the inverse mapping $\mathcal{D}^{-1}:\mathcal{L}_{sym}\to \mathcal{L}_{0}$ must
exist, and $\mathcal{D}^{-1}=\mathcal{D}^{\dagger}$. These requirements lead to a number of linear
relations between the $n$-kink states \eqref{kinkS}. For example, acting
on the equality
\[
|\beta_1,\beta_2\rangle_{1,-1}=S_{1,-1}(\beta_1-\beta_2)|\beta_2,\beta_1\rangle_{-1,1}
\]
[following from \eqref{examp}] by the mapping $\mathcal{D}^{-1}$, one obtains,
\[
|K_{02}(\beta_1) K_{20}(\beta_2) \rangle=S_{1,-1}(\beta_1-\beta_2)|K_{01}(\beta_2) K_{10}(\beta_1) \rangle.
\]
Application of the same procedure to the $n$-particle states \eqref{kinkS} leads to the Faddeev-Zamolodchikov
commutation relations
\begin{subequations}\label{KK}
\begin{align}
K_{\mu\nu}(\beta_1)K_{\nu\gamma}(\beta_2)= S_{1,1}(\beta_1-\beta_2)K_{\mu\nu}(\beta_2)K_{\nu\gamma}(\beta_1),\\
K_{\mu\nu}(\beta_1)K_{\nu\mu}(\beta_2)= S_{1,-1}(\beta_1-\beta_2)K_{\mu\rho}(\beta_2)K_{\rho\mu}(\beta_1),
\end{align}
\end{subequations}
where $\rho\ne\nu$.
Following the standard convention \cite{ZZ79}, the notations $K_{\alpha\alpha'}(\beta_j)$ in the
above relations can be understood as formal
non-commutative symbols representing the kinks in the $n$-kink states
\eqref{kinkS}.
Relations \eqref{KK} describe the two-kink scattering processes in the ferromagnetic phase.
Due to the PFT dual symmetry, they are characterized by the same scattering amplitudes as the two-particle
scattering in the paramagnetic phase. Furthermore, the scattering theories in the high- and low-temperature phases are equivalent.
Such duality arguments
can be also extended to the matrix elements of physical operators. In particular, the matrix elements of the order spin operators in the ferromagnetic phase can be expressed in terms of the form factors of the disorder spin operators \cite{FRADKIN19801}
in the paramagnetic phase. We shall return to this issue in Section \ref{FF}.
\section{Quark mass in the ferromagnetic IFT \label{Is}}
The IFT action $A_{IFT}\equiv\mathcal{A}^{(2)}$ is defined by equation \eqref{AP} with $q=2$.
The conformal field theory $\mathcal{A}_{CFT}^{(2)}$ associated with the critical point
is the minimal model $\mathcal{M}_3$, which contains
free massless Majorana fermions \cite{BPZ84}. These fermions acquire a mass $m\sim |\tau|$, as the temperature deviates from the critical point. They remain free at $h=0$. However, application of a magnetic field $h>0$ induces interaction between the fermions.
The Hamiltonian corresponding to the action $A_{IFT}$ can be written as \cite{Rut09}
\begin{eqnarray}
&&\mathcal{H} = \mathcal{H}_0 +h\,V, \label{Ham}\\
\textrm{where }\;\;\;&&\mathcal{H}_0=\int_{-\infty}^\infty \frac{d p}{2 \pi}
\,\omega(p)\, {\bf a}^\dagger (p) \, {\bf a}(p), \label{H0}\\
&&V=-\int_{-\infty}^\infty d\rm{x}\,\sigma(\rm{x}), \label{V}
\end{eqnarray}
and $\omega(p)$ is
the spectrum \eqref{omff} of free fermions. These fermions
are ordinary spinless particles in the disordered phase $\tau>0$, and topologically-charged kinks
interpolating between two degenerate vacua in the ordered phase $\tau<0$.
Fermionic operators $ {\bf a}^\dagger (p') , \,{\bf a}(p)$ obey the canonical
anticommutational relations
\[
\{ {\bf a}(p) , {\bf a}^\dagger (p') \} =2 \pi \,\delta(p-p'), \quad
\{ {\bf a}(p), {\bf a} (p') \} = \{ {\bf a}^\dagger (p) ,{\bf a}^\dagger (p') \}= 0.
\]
Commonly used are also fermionic operators $a(\beta),\, a^\dagger(\beta)$,
corresponding to the rapidity variable $\beta={\rm arcsinh}(p/m)$:
\begin{equation}\label{abeta}
a(\beta)= \omega(p)^{1/2}\, {\bf a}(p), \;a^\dagger(\beta)=\omega(p)^{1/2}\, {\bf a}^\dagger (p).
\end{equation}
The notations
\begin{eqnarray*}
|p_1, \dots ,p_N \rangle = {\bf a}^\dagger (p_1) \dots {\bf a}^\dagger (p_N) |0\rangle, \quad\;\;
\langle p_1,\ldots,p_N|=\langle 0|{\bf a}(p_1) \dots {\bf a}(p_N) , \\
| \beta_1,\dots, \beta_N \rangle = a^\dagger(\beta_1) \dots a^\dagger(\beta_N)|0\rangle, \quad\quad
\langle \beta_1,\dots ,\beta_N|=\langle 0| a(\beta_1) \dots a(\beta_N)
\end{eqnarray*}
for the fermionic basis states with definite momenta will be used.
The order spin operator $ \sigma({\rm x}) =\sigma( {\rm x} ,{\rm y})|_{{\rm y}=0}$ in
the ordered phase $\tau<0$
is completely characterized by the matrix elements
$ \langle \beta_1,\ldots,\beta_K|\sigma(0) |\beta'_1,\ldots,\beta'_{N}\rangle $, whose
explicit expressions are well known
\cite{Berg79,FonZam2003}, see equation (2.14) in \cite{FonZam2003}. These matrix elements are different from zero only if $K+N=0\,({\rm mod}\,2)$.
The matrix elements with $K+N=2$ read as
\begin{align}\label{fIs}
\langle p | \sigma({\rm x})|k\rangle = \frac{ i\,\bar{\sigma}\, \exp[ i {\rm x}(k-p)] }{p-k}\,
\frac{ \omega(p)+\omega(k) }{[\omega(p)\omega(k)]^{1/2}},\\
\langle 0 | \sigma({\rm x}) |k_1 k_2\rangle = \frac{ i\,\bar{\sigma}\, \exp[ i {\rm x}(k_1+k_2)] }{k_1+k_2}\,
\frac{ \omega(k_1)-\omega(k_2) }{[\omega(k_1)\omega(k_2)]^{1/2}},\label{fIs2}\\
\langle k_1 k_2 | \sigma({\rm x}) |0\rangle = \frac{ i\,\bar{\sigma}\, \exp[- i {\rm x}(k_1+k_2)] }{k_1+k_2}\,
\frac{ \omega(k_1)-\omega(k_2) }{[\omega(k_1)\omega(k_2)]^{1/2}},\label{fIs3}
\end{align}
where $\bar{\sigma}=\bar{s} |m|^{1/8}$ is the zero-field vacuum
expectation value of the order field (spontaneous magnetization), and
\begin{equation}
\bar{s}=2^{1/12}e^{-1/8}A^{3/2}=1.35783834...,
\end{equation}
where $A=1.28243...$ stands for Glaisher's constant.
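This value is easy to reproduce numerically. The following minimal sketch (Python with the mpmath library; variable names are ours) evaluates $\bar{s}$ directly from Glaisher's constant:
\begin{verbatim}
# Numerical check of sbar = 2^(1/12) * e^(-1/8) * A^(3/2),
# with A the Glaisher-Kinkelin constant; a sketch using mpmath.
from mpmath import mp, mpf, exp, glaisher

mp.dps = 25                     # 25 significant digits
A = +glaisher                   # A = 1.282427129...
sbar = mpf(2)**(mpf(1)/12) * exp(-mpf(1)/8) * A**(mpf(3)/2)
print(sbar)                     # 1.3578383...
\end{verbatim}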
The matrix elements of the order spin operator with $K+N>2$ can be determined from
\eqref{fIs}-\eqref{fIs3} by means of the Wick expansion.
For real $p$ and $k$, the "kinematic" pole at $p=k$ in \eqref{fIs} is understood in the
sense of the Cauchy principal value
\begin{equation}\label{Cau}
\frac{1}{p-k} \to\mathcal{P} \frac{1}{p-k} \equiv
\frac{1}{2}\left(\frac{1}{p-k+i 0}+ \frac{1}{p-k-i 0}
\right).
\end{equation}
The field theory defined by the Hamiltonian \eqref{Ham}-\eqref{V} is not integrable for
generic $m>0$ and $h>0$, but admits exact solutions along the lines $h=0$ and $m=0$.
The line $h=0$ corresponds to Onsager's solution \cite{Ons44}, whose scaling limit describes
free massive fermions. Integrability of the IFT along the line $m=0$, $h\ne 0$ was established by Zamolodchikov \cite{ZamH}.
Close to integrable directions, it is natural to treat the non-integrable quantum field
theories as deformations of integrable ones. As it was mentioned in the Introduction,
realization of this idea leads to the FFPT, whose original version \cite{Del96}, however,
cannot be applied directly to the confinement problem
since the magnetic field changes the particle content of the theory at arbitrary small $h>0$.
The problem manifests itself already in the naive first-order correction formula for the kink mass \cite{Del96}
\begin{equation}
\delta^{(1)} m= -
\lim_{p\to 0} \lim_{k\to p}h\, \langle p |\sigma(0)|k\rangle,
\end{equation}
which is infinite due to the kinematic pole in the matrix element \eqref{fIs} of the spin operator. To avoid this problem, a modified version of the FFPT was developed in \cite{Rut09}.
Since it is substantially used in this section, it will be helpful to recall its main points here.
The key idea of the modified FFPT is to absorb part of the interaction into the unitary operator
$U(h)$, for which the formal expansion in powers of $h$ is postulated,
\begin{equation}\label{Uh}
U(h)=1+\sum_{n=1}^\infty h^n\,\mathcal{F}_n.
\end{equation}
This operator has been used to define creation and annihilation operators for the "dressed" fermions,
\begin{equation}\label{dr}
\underline{{\bf a}}(p)=U(h)^{-1} \,{{\bf a}}(p) \,U(h), \quad
\underline{{\bf a}}^\dagger(p)=U(h)^{-1}\, {{\bf a}}^\dagger(p) \,U(h),
\end{equation}
which are underlined to distinguish them from the "bare" ones. Similarly, the dressing unitary transform
is defined for arbitrary operators and states,
\begin{equation*}
\underline{A}=U(h)^{-1} \,A\,U(h) , \qquad |\underline{\Phi}\rangle =U(h)^{-1} |{\Phi}\rangle.
\end{equation*}
It was required in \cite{Rut09} that the number of dressed fermions is conserved under the evolution
defined by the Hamiltonian \eqref{Ham}-\eqref{V}, i.e.
\begin{equation}\label{NH}
[\underline{N},\mathcal{H}]=0,
\end{equation}
where
\[
\underline{N}=\int_{-\infty}^\infty \frac{d p}{2\pi}\,\underline{{\bf a}}^\dagger(p)\,\underline{{\bf a}}(p).
\]
It was further required that
the operators $\mathcal{F}_n$ change the number of dressed fermions, i.e.
\begin{equation} \label{mcFd}
\langle \underline{p}|\mathcal{F}_n|\underline{k}\rangle =0 \quad \textrm{for} \quad n(p)= n(k) .
\end{equation}
Here the shortcut notations
$
| \underline {k}\rangle =|\underline {k_1,...,k_{n(k)}}\rangle,$
$\langle \underline {p}|=
\langle \underline {p_1,...p_{n(p)}}|$,
have been used.
Conditions \eqref{NH}, \eqref{mcFd} together with the unitarity requirement
\begin{equation}\label{U}
U(h)\,U(h)^\dagger=1,
\end{equation}
allow one
to determine the coefficients $\mathcal{F}_n$ in the expansion \eqref{Uh}. In particular, the matrix elements of the first one
read as
\begin{eqnarray} \label{mcF}
\langle \underline{p}|\mathcal{F}_1|\underline{k}\rangle =
\frac{\langle {p}|{V}|{k}\rangle}
{\omega(p)-\omega(k)}, \quad \textrm{for} \quad n(p)\ne n(k) ,
\end{eqnarray}
where we again use the abbreviation $\omega (q)\equiv \omega (q_1)+...+\omega(q_{n(q)})$.
Note that the matrix element \eqref{mcF} diverges at the hyper-surface determined by the
"resonance relation"
\begin{equation} \label{RES}
\omega(p_1)+\ldots+\omega(p_{n(p)})=\omega(k_1)+\ldots+\omega(k_{n(k)}).
\end{equation}
This indicates that, strictly speaking, the unitary operator $U(h)$ satisfying requirements
\eqref{Uh}-\eqref{NH} does not exist. However, in calculations of the small-$h$ asymptotic expansions of certain quantities [e.g. the ground state energy $E_{vac}(m,h)$] the resonance terms do not appear, and the modified FFPT can be effectively used and leads to unambiguous results. This situation is similar to the perturbation theory for nonlinear systems in classical mechanics \cite{Arnold}. Birkhoff's theorem states that, if a classical nonlinear system is close to some linear one, and {\it the characteristic frequencies of the latter
do not satisfy the resonance relations}, the dynamics of the nonlinear system can be well approximated
by an integrable system whose Hamiltonian has the Birkhoff normal form, see page 387 in \cite{Arnold}. The unitary operator $U(h)$ can be viewed as the quantum analogue of the canonical
transform, which maps the original Hamiltonian of a non-integrable classical system to the
integrable Birkhoff normal form.
The second difficulty, which is inherent to the FFPT, comes from the kinematic singularities in the
matrix elements of the spin order operator between the states with nonzero numbers of kinks. Such
singularities, contributing to the leading and higher orders of the FFPT, lead to infinite, ill-defined quantities like '$\delta(0)$', which require regularization. This problem has been
widely discussed in the literature, mostly in the context of finite-temperature correlation function calculations
\cite{LeCl99,Saleur2000,Alt06,EsKon09,Tak10}. Several regularization procedures have been proposed, such as finite
volume regularization \cite{Tak10,PozTak10}, and appropriate infinitesimal shifts of the kinematic poles into the complex plane \cite{LeCl99,EsKon09,Rut09}. Here we apply a different regularization scheme, which seems more convenient for the problem at hand.
Keeping the length of the system infinite, we replace
the uniform magnetic field $h>0$ by the non-uniform field
$h_R({\rm x})$, which is switched on only in the large, but finite interval $[-R/2,R/2]$, $R\gg m^{-1}$,
\begin{align}
h_R({\rm x}) = h\,\, \chi( {\rm x};-R/2,R/2),\\
{\rm where}\;\;\chi( {\rm x};-R/2,R/2)=\begin{cases}\nonumber
1, &{\rm if}\; {\rm x}\in [-R/2,R/2],\\
0,&{\rm if}\; {\rm x}\notin [-R/2,R/2].
\end{cases}
\end{align}
After performing all calculations, we proceed to the limit $R\to\infty$.
Accordingly, instead of the IFT Hamiltonian \eqref{Ham}, we get a set of Hamiltonians
$\mathcal{H}_R$ parametrized by the length $R$,
\begin{eqnarray}\label{HR}
&&\mathcal{H}_R=\mathcal{H}_0+h\,V_R,\\
&&V_R=-\int_{-R/2}^{R/2} d\rm{x}\,\sigma(\rm{x}). \label{VR}
\end{eqnarray}
After diagonalization of the Hamiltonian $\mathcal{H}_R$ in the fermionic number along the lines
described in Section 5 of \cite{Rut09}, we arrive at equations (35)-(39) of \cite{Rut09}, modified by the following replacements:
\begin{equation}\label{rep}
V\to V_R, \quad \mathcal{H}\to\mathcal{H}_R,\quad U\to U_R, \quad \Lambda\to \Lambda_R.
\end{equation}
In the rest of this Section, the efficiency of the described version of the FFPT will be
demonstrated by recovering some well-known features of the IFT in the weak
confinement regime and by deriving several new results.
\subsection{Vacuum sector}
As a warm-up, let us consider the small-$h$ expansion of the ferromagnetic
ground state energy in the IFT. The results will be used in the subsequent subsection
in calculations of the radiative corrections to the
kink dispersion law and string tension.
The expansion of the ground state energy $E_{vac}(m,h,R)$ can be read from Subsection~5.1 of
Reference \cite{Rut09}, with substitutions \eqref{rep}:
\begin{eqnarray} \nonumber
&&E_{vac}(m,h,R)\equiv \langle \underline{0}| \mathcal{H}_R| \underline{0}\rangle=\\
&&\langle \underline{0}| U_R(h)( \underline{\mathcal{H}}_0+h \underline{V}_R)U_R(h)^{-1}| \underline{0}\rangle=
\sum_{j=1}^\infty \delta_{j} E_{vac}(m,h,R) ,\label{RSch}
\end{eqnarray}
where $\delta_{j} E_{vac}(m,h,R)\sim h^j$, and
\begin{eqnarray} \label{E2vac}
&&\delta_{1}E_{vac}(m,h,R)=h \langle {0}| {V}_R| {0}\rangle=-h\bar{\sigma}R ,\\
&&\delta_{2}E_{vac}(m,h,R)=-h^2\sum_{q\atop{n(q)\ne 0}} \label{al2}
\frac{\langle {0} | {V}_R|q\rangle \langle q| {V}_R| {0}\rangle}
{\omega(q)}, \\
&&\delta_{3}E_{vac}(m,h,R)=-h^3\langle {0}| {V}_R| {0}\rangle \sum_{q\atop{n(q)\ne 0}}
\frac{\langle {0}| {V}_R|q\rangle \langle q| {V}_R| {0}\rangle}
{[\omega(q)]^2} \label{al3}
+\\
&&h^3\sum_{q,q'\atop{n(q)\ne 0\ne n(q')}}\frac{\langle {0}| {V}_R|q\rangle\,
\langle q| {V}_R| {q'}\rangle
\, \langle q'| {V}_R| {0}\rangle}{\omega(q)\,\omega(q')}
. \nonumber
\end{eqnarray}
The same abbreviation as in equation \eqref{mcF} has been used;
$n(q)$ denotes the number of fermions in the intermediate state $|q\rangle \equiv | q_1, q_2,\ldots, q_{n(q)}\rangle$.
Four comments on equations \eqref{RSch}-\eqref{al3} are in order.
\begin{enumerate}
\item
There are no resonance poles [like in equation \eqref{mcF}] in the expansion \eqref{RSch}, while
the kinematic singularities are present in its third and higher order terms.
\item
Equation \eqref{RSch} is nothing else but the Rayleigh-Schr{\"o}dinger expansion (see, for example,
\textsection 38 in \cite{landau1981quantum}) for the ground state energy of the Hamiltonian \eqref{HR}; a toy numerical check of this structure is sketched right after this list. This expansion in $h$ is asymptotic. In the limit $R\to\infty$, its convergence radius goes
to zero due to the weak
essential {\it droplet singularity} \cite{FonZam2003,LANGER1967108,Rut99} at $h=0$ of the IFT ground state energy density $\rho(m,h)$. The latter can be identified with the limit
\begin{equation}\label{rhn}
\rho(m,h)\equiv\lim_{R\to\infty} \frac{E_{vac}(m,h,R)}{R}=\sum_{j=1}^\infty \delta_{j} \rho(m,h),
\end{equation}
where $\delta_{j} \rho(m,h)\sim h^j$.
\item The ground state energy density $\rho(m,h)$ is simply related to the universal function
${F}(m,h)$ that describes the singular part of the free energy in the vicinity of the
critical point in the two-dimensional Ising model universality class \cite{FonZam2003,Bazh10},
\begin{equation}
\rho(m,h)={F}(m,h)-{F}(m,0)
=m^2 \,G_{\rm low}(\xi),
\end{equation}
where $\xi=h/|m|^{15/8}$, and the zero-field term ${F}(m,0)$ describes Onsager's singularity
\cite{Ons44} of the Ising free energy at zero $h$,
\begin{equation}
{F}(m,0)=\frac{m^2}{8\pi} \ln m^2.
\end{equation}
The scaling function $G_{\rm low}(\xi)$ can be expanded into the asymptotic expansion in powers of
$\xi$
\begin{equation}\label{Gt}
G_{\rm low}(\xi)\simeq \tilde{G}_1 \xi+ \tilde{G}_2 \xi^2+\ldots,
\end{equation}
whose initial coefficients are known with high accuracy \cite{McCoyWu78,FonZam2003,Bazh10}.
\item Fonseca and Zamolodchikov argued \cite{FZ06} that the perturbative expansion
for the renormalized string tension $f(m,h)$, which characterizes the linear attractive potential acting between two kinks
at large distances, is related to the ground state energy density $\rho(m,h)$ in the following way
\begin{equation} \label{fh}
f(m,h)={\rho(m,-h)-\rho(m,h)},
\end{equation}
where the right-hand side is understood in the sense of the formal perturbative
expansion in $h$.
Combining \eqref{rhn} and \eqref{fh}, we get
\begin{equation}\label{fh1}
f(m,h)=\sum_{j=0}^\infty f^{(2j+1)}(m,h),
\end{equation}
where
\begin{equation}\label{fhj}
f^{(2j+1)}(m,h)=-2
\lim_{R\to\infty} \frac{\delta_{2j+1}E_{vac}(m,h,R)}{R}\sim h^{2j+1},
\end{equation}
and
\begin{equation}
f^{(1)}(m,h)=f_0(h)\equiv 2 h \bar{\sigma}.
\end{equation}
\end{enumerate}
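As announced in comment 2, the algebraic structure of \eqref{RSch}-\eqref{al3} can be tested on a finite-dimensional toy Hamiltonian, where the exact ground state energy is available by direct diagonalization. The following sketch (Python with numpy; the toy model and all names are ours and have nothing to do with the IFT itself) checks that the first three corrections reproduce the exact ground level up to $O(h^4)$:
\begin{verbatim}
# Toy check of the Rayleigh-Schrodinger structure: H = diag(w) + h*V,
# with the vacuum at w[0] = 0 and excitation energies w[q] > 0.
import numpy as np

rng = np.random.default_rng(1)
n = 6
w = np.concatenate(([0.0], np.sort(rng.uniform(1.0, 3.0, n - 1))))
V = rng.normal(size=(n, n)); V = (V + V.T) / 2   # Hermitian perturbation

h = 1e-3
d1 = h * V[0, 0]
d2 = -h**2 * sum(V[0, q]**2 / w[q] for q in range(1, n))
d3 = (-h**3 * V[0, 0] * sum(V[0, q]**2 / w[q]**2 for q in range(1, n))
      + h**3 * sum(V[0, q] * V[q, r] * V[r, 0] / (w[q] * w[r])
                   for q in range(1, n) for r in range(1, n)))
E_exact = np.linalg.eigvalsh(np.diag(w) + h * V)[0]
print(E_exact - (d1 + d2 + d3))                  # tiny: O(h^4)
\end{verbatim}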
The second-order term $\delta_{2}E_{vac}(m,h,R)$ is defined by means of the Lehmann expansion \eqref{al2}, whose explicit form reads as
\begin{equation}\label{dE2a}
\delta_{2}E_{vac}(m,h,R)=\sum_{l=1}^\infty \delta_{2,2l}E_{vac}(m,h,R),
\end{equation}
where, with $\nu=2l$,
\begin{eqnarray}\label{a22L}
\delta_{2,\nu}E_{vac}(m,h,R)=-\frac{h^2}{\nu!}\iint_{-R/2}^{R/2}d{\rm x}_1\,d{\rm x}_2
\int_{-\infty}^\infty \frac{dq_1\ldots dq_\nu}{(2\pi)^\nu}\cdot \\
\frac{\exp[i(q_1+\ldots+q_\nu)({\rm x_1}-{\rm x_2})]}{\omega(q_1)+\ldots+\omega(q_\nu)}\,
\langle 0|\sigma(0)|q_1,\ldots,q_\nu \rangle\langle q_\nu,\ldots,q_1|\sigma(0)|0\rangle.\nonumber
\end{eqnarray}
Straightforward summation of \eqref{dE2a} yields,
\begin{equation} \label{dE2b}
\delta_{2}E_{vac}(m,h,R)=-h^2\iint_{-R/2}^{R/2}d{\rm x}_1\,d{\rm x}_2 \int_{0}^\infty d{\rm y}_1 \,
\langle 0 |\Delta \sigma({\rm x}_1,{\rm y}_1)\,\Delta \sigma({\rm x}_2,0)|0\rangle,
\end{equation}
where
\[
\Delta \sigma({\rm x},{\rm y})=\exp( \mathcal{H}_0\, {\rm y} )
\sigma({\rm x})\exp(- \mathcal{H}_0\, {\rm y} )- \bar{\sigma}.
\]
Since the matrix element in the integrand does not depend on $({\rm x}_1+{\rm x}_2)/2$
and vanishes exponentially for $|{\rm x}_1-{\rm x}_2|\, m \gg1$, we can easily proceed
to the limit $R\to\infty$ in
\eqref{dE2b}, arriving at
the well-known representation of the magnetic susceptibility
in terms of the spin-spin correlation function,
\begin{eqnarray}
\delta_2 \rho(m,h)\equiv \lim_{R\to\infty} \frac{\delta_{2}E_{vac}(m,h,R)}{R}= \\
-h^2 \int_{-\infty}^\infty d{\rm x}\int_{0}^\infty d{\rm y} \,
\langle 0 |\Delta \sigma({\rm x},{\rm y})\,\Delta \sigma(0,0)|0\rangle.\nonumber
\end{eqnarray}
Let us return now to the Lehmann expansion \eqref{dE2a} for the ground state energy,
perform the elementary integration
over ${\rm x}_1,\,{\rm x}_2$ in \eqref{a22L}, and proceed to the limit $R\to\infty$, exploiting the equality
\begin{equation}
\lim_{R\to\infty}\frac{4 \sin^2 ( q R/2)}{R\, q^2} = 2\pi \delta(q).
\end{equation}
As a result, we arrive at the familiar spectral expansion \cite{McCoy76} for the
ground state energy density
\begin{eqnarray}\label{ro2}
&&\delta_2 \rho(m,h)=\sum_{l=1}^\infty \delta_{2,2l}\, \rho(m,h),\\
&&\delta_{2,2l}\, \rho(m,h)=-h^2\,\frac{1}{(2 l)!}\int_{-\infty}^\infty
\frac{dq_1\ldots dq_{2l}}
{(2\pi)^{2l-1}}
\frac{\delta(q_1+\ldots+q_{2l})}{\omega(q_1)+\ldots+\omega(q_{2l})}\cdot \\
&&\langle 0|\sigma(0)|q_1,\ldots,q_{2l} \rangle\langle q_{2l},\ldots,q_1|\sigma(0)|0\rangle.\nonumber
\end{eqnarray}
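The nascent-$\delta$ identity used in this limit is easy to test numerically (a rough sketch; the quadrature settings are ours, and the kernel is simply smeared with a Gaussian test function):
\begin{verbatim}
# Check of lim_{R->oo} 4 sin^2(qR/2)/(R q^2) = 2 pi delta(q): the
# smeared kernel must tend to 2*pi*f(0) for a smooth test function f.
import numpy as np
from scipy.integrate import quad

def kernel(q, R):
    return R if q == 0.0 else 4.0 * np.sin(q * R / 2.0)**2 / (R * q**2)

f = lambda q: np.exp(-q**2)        # test function, f(0) = 1
for R in (10.0, 50.0, 200.0):
    val, _ = quad(lambda q: kernel(q, R) * f(q), -30, 30, limit=20000)
    print(R, val / (2.0 * np.pi))  # approaches f(0) = 1 as R grows
\end{verbatim}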
The first term in expansion \eqref{ro2} can be easily calculated using the explicit expressions
\eqref{fIs2}, \eqref{fIs3} for the
form factors, giving
\begin{equation}
\delta_{2,2} \rho(m,h)=-\frac{h^2\bar{\sigma}^2}{12\pi m^2}.
\end{equation}
The corresponding two-fermion contribution $\tilde{G}_{2,2}$ to the universal amplitude $\tilde{G}_2$
\begin{equation}\label{G22}
\tilde{G}_{2,2}=-\frac{{\bar{s}}^2}{12\pi}=-0.0489063\ldots
\end{equation}
reproduces the well-known result of Tracy and McCoy \cite{Tracy73},
which is rather close to the exact value \cite{McCoy76,FonZam2003,Bazh10} $\tilde{G}_2=-0.0489532897203\ldots$
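The number \eqref{G22} can be reproduced by a one-line quadrature: substituting \eqref{fIs2}, \eqref{fIs3} into the $2l=2$ term of \eqref{ro2} reduces it, in units $m=1$, to $\tilde{G}_{2,2}=-\frac{\bar{s}^2}{8\pi}\int_{-\infty}^{\infty}dq\, q^{2}\,(q^{2}+1)^{-5/2}$ (this intermediate reduction is ours). A Python sketch:
\begin{verbatim}
# Two-fermion contribution to G_2: the momentum integral equals 2/3,
# so that G_{2,2} = -sbar^2/(12*pi); a sketch using scipy.
import numpy as np
from scipy.integrate import quad

sbar = 1.35783834
I, _ = quad(lambda q: q**2 / (q**2 + 1)**2.5, -np.inf, np.inf)
print(I)                            # 0.666666... = 2/3
print(-sbar**2 / (8 * np.pi) * I)   # -0.0489063...
\end{verbatim}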
Now let us turn to the third order term \eqref{al3} in the expansion \eqref{RSch} for the ground state
energy $E_{vac}(m,h,R)$. Unlike the previous case of the second-order correction,
kinematic singularities do contribute to $\delta_{3}E_{vac}(m,h,R)$ through the
matrix element $\langle q| {V}_R| {q'}\rangle$ in the second line
of \eqref{al3}. Nevertheless, the right-hand side of \eqref{al3}
is well defined due to the chosen regularization \eqref{rep}.
After summation of the
Lehmann expansion in \eqref{al3} one arrives
in the limit $R\to\infty$ at
the well-known integral representation \cite{McCoy78} for $\delta_3 \rho(m,h)$
in terms of the three-point correlation function,
\begin{eqnarray}\label{d3Rho}
&&\delta_3 \rho(m,h)=\\
&&-h^3 \iint_{-\infty}^\infty d{\rm x}_1d{\rm x}_3\int_{0}^\infty d{\rm y}_1
\int_{-\infty}^0 d{\rm y}_3\,
\langle 0 |\Delta \sigma({\rm x}_1,{\rm y}_1)\Delta \sigma(0,0)
\Delta \sigma({\rm x}_3,{\rm y}_3)|0\rangle.\nonumber
\end{eqnarray}
Alternatively, one can truncate the spectral series \eqref{al3}, which defines
$\delta_3 \,E_{vac}(m,h,R)$,
at the level of the
two-kink intermediate states $n(q)=n(q')=2$. Denoting the result by
$\delta_{3,2} \,E_{vac}(m,h,R)$, we get explicitly
\begin{equation}\label{Evac32}
\delta_{3,2} \,E_{vac}(m,h,R)=A_{3,2}(m,h,R)+B_{3,2}(m,h,R),
\end{equation}
where
\begin{eqnarray}\label{dE3}
&&A_{3,2}(m,h,R)=\frac{h^3\bar{\sigma} R}{2} \iint_{-\infty}^\infty
\frac{dq_1dq_2}{(2\pi)^2}\frac{1}{[\omega(q_1)+\omega(q_2)]^2}\cdot \\\nonumber
&&\iint_{-R/2}^{R/2} d{\rm x}_1 d{\rm x}_2 \,e^{i({\rm x}_1 -{\rm x}_2)(q_1+q_2)}
\langle0|\sigma(0)|q_1,q_2\rangle \langle q_2,q_1|\sigma(0)|0\rangle, \\\label{Bint}
&&B_{3,2}(m,h,R)=-\frac{h^3}{4} \iint_{-\infty}^\infty\frac{dq_1dq_2}{(2\pi)^2}
\frac{1}{[\omega(q_1)+\omega(q_2)]}\iint_{-\infty}^\infty\frac{dq_1'dq_2'}{(2\pi)^2} \cdot\\\nonumber
&&
\frac{1}{[\omega(q_1')+\omega(q_2')]}\iiint_{-R/2}^{R/2} d{\rm x}_1 d{\rm x}_2 d{\rm x}_3 \,e^{i({\rm x}_1 -{\rm x}_2)(q_1+q_2)}
\,e^{i({\rm x}_2 -{\rm x}_3)(q_1'+q_2')}\cdot \\\nonumber
&&\langle0|\sigma(0)|q_1,q_2\rangle \langle q_2,q_1|\sigma(0)|q_1',q_2'\rangle
\langle q_2',q_1'|\sigma(0)|0\rangle.
\end{eqnarray}
Here the two-kink matrix elements of the spin operator are determined by
equations \eqref{fIs}-\eqref{fIs3}, while the
four-kink matrix element in the last line can be expressed in terms of the latter by means of the Wick expansion:
\begin{eqnarray}
\langle q_2,q_1|\sigma(0)|q_1',q_2'\rangle= [\langle q_2,q_1|\sigma(0)|0\rangle \langle 0|\sigma(0)|q_1',q_2'\rangle+ \\
\langle q_1|\sigma(0)|q_1'\rangle \langle q_2|\sigma(0)|q_2'\rangle-
\langle q_1|\sigma(0)|q_2'\rangle \langle q_2|\sigma(0)|q_1'\rangle] \bar{\sigma}^{-1}.\nonumber
\end{eqnarray}
Since the last two terms in the square brackets in the right-hand side provide equal contributions to the
integral \eqref{Bint}, we can replace the four-kink matrix element in its
integrand as follows
\begin{equation}\label{2si}
\langle q_2,q_1|\sigma(0)|q_1',q_2'\rangle\leadsto [\langle q_2,q_1|\sigma(0)|0\rangle \langle 0|\sigma(0)|q_1',q_2'\rangle+ 2
\langle q_1|\sigma(0)|q_1'\rangle \langle q_2|\sigma(0)|q_2'\rangle] \bar{\sigma}^{-1}.
\end{equation}
The second term in the brackets, containing the product of two kinematic singularities,
can be rewritten in the
form
\begin{eqnarray}\label{lr}
&&2\langle q_1|\sigma(0)|q_1'\rangle \langle q_2|\sigma(0)|q_2'\rangle=\\
&&- 2\,\bar{\sigma}^2\,
\frac{\omega(q_1)+\omega(q_1')}{\sqrt{\omega(q_1) \omega(q_1')}}\,
\frac{\omega(q_2)+\omega(q_2')}{\sqrt{\omega(q_2) \omega(q_2')}}\,
\mathcal{P}\,\frac{1}{q_1-q_1'} \mathcal{P}\,\frac{1}{q_2-q_2'}=\nonumber \\
&& 8\pi^2 \bar{\sigma}^2\,\delta(q_1-q_1')\,\delta(q_2-q_2')-
{\bar{\sigma}^2}\, \frac{\omega(q_1)+\omega(q_1')}{\sqrt{\omega(q_1) \omega(q_1')}}
\frac{\omega(q_2)+\omega(q_2')}{\sqrt{\omega(q_2) \omega(q_2')}}\cdot \nonumber\\
&&\left(
\frac{1}{q_1-q_1'+i0} \,\frac{1}{q_2-q_2'-i0}+ \frac{1}{q_1-q_1'-i0} \,\frac{1}{q_2-q_2'+i0}
\right).\nonumber
\end{eqnarray}
In deriving \eqref{lr} we have used \eqref{fIs}, \eqref{Cau}, together with the equality
\begin{eqnarray}
\mathcal{P}\,\frac{1}{q_1-q_1'} \mathcal{P}\,\frac{1}{q_2-q_2'}=-\pi^2\,\delta(q_1-q_1')\,\delta(q_2-q_2')+\\
\frac{1}{2}\left(
\frac{1}{q_1-q_1'+i0} \,\frac{1}{q_2-q_2'-i0}+ \frac{1}{q_1-q_1'-i0} \,\frac{1}{q_2-q_2'+i0}
\right).\nonumber
\end{eqnarray}
After substitution of \eqref{lr} into \eqref{2si}, \eqref{Bint}, the term
$
8\pi^2 \bar{\sigma}^2\,\delta(q_1-q_1')\,\delta(q_2-q_2')
$
in the right-hand side of \eqref{lr}
gives rise to the contribution in $B_{3,2}(m,h,R)$, which cancels exactly with the term $A_{3,2}(m,h,R)$
in \eqref{Evac32}. Performing the integration with respect to ${\rm x}_1, {\rm x}_2, {\rm x}_3$ over the cube $(-R/2,R/2)^3$
in the remaining part and
dividing the result by $R$, we obtain
\begin{eqnarray}\nonumber
&&\frac{\delta_{3,2} \,E_{vac}(m,h,R)}{R}=-\frac{h^3\bar{\sigma}^3}{4}\int_{-\infty}^\infty
\frac{dq_1dq_2dq_1'dq_2'}{(2 \pi)^4}\,\Delta_3(q_1+q_2,q_1'+q_2',R) \cdot\\
&&\mathcal{G}(q_1,q_2,q_1',q_2'), \label{dEa}
\end{eqnarray}
where
\begin{equation} \label{Del}
\Delta_3(p,k,R)=\frac{8\sin (p R/2)\,\sin (k R/2)\,\sin[ (k-p) R/2]}{R\,p\, k\, (k-p)},
\end{equation}
and
\begin{eqnarray}
\mathcal{G}(q_1,q_2,q_1',q_2')= \frac{\omega(q_1)-\omega(q_2)}{\sqrt{\omega(q_1)\, \omega(q_2)}}\,
\frac{\omega(q_2')-\omega(q_1')}{\sqrt{\omega(q_1')\, \omega(q_2')}}
\frac{1}{(q_1+q_2)(q_1'+q_2')}\cdot\\\nonumber
\frac{1}{\omega(q_1)+\omega(q_2)} \frac{1}{\omega(q_1')+\omega(q_2')}\Bigg\{
\frac{\omega(q_1)-\omega(q_2')}{\sqrt{\omega(q_1') \omega(q_2')}}\,
\frac{\omega(q_2)-\omega(q_1)}{\sqrt{\omega(q_1) \,\omega(q_2)}}\cdot \\
\frac{1}{(q_1'+q_2')(q_1+q_2)}+\nonumber
\frac{\omega(q_1)+\omega(q_1')}{\sqrt{\omega(q_1)\, \omega(q_1')}}\, \frac{\omega(q_2)+\omega(q_2')}{\sqrt{\omega(q_2) \,\omega(q_2')}}\cdot\\\nonumber
\left(
\frac{1}{q_1-q_1'+i0} \,\frac{1}{q_2-q_2'-i0}+ \frac{1}{q_1-q_1'-i0} \,\frac{1}{q_2-q_2'+i0}
\right)
\Bigg\}.
\end{eqnarray}
It is possible to show that the weak large-$R$ limit of the function $\Delta_3(p,k,R)$ is proportional
to the two-dimensional
$\delta$-function,
\begin{equation}\label{delt}
\lim_{R\to\infty } \Delta_3(p,k,R)=4\pi^2\,\delta(p)\,\delta(k).
\end{equation}
The simplest way to prove this equality
is to integrate $\Delta_3(p,k,R)$ against a plane-wave
test function. The result reads as
\begin{equation}\label{intJ}
\iint_{-\infty}^\infty dp\, dk\, \Delta_3(p,k,R)\,\exp[i(p{\rm x}+k {\rm y})]=4\pi^2\,
\left[1-
\frac{\max(|{\rm x}|,|{\rm y}|,|{\rm x+y}|)}{R}\right],
\end{equation}
if $\max(|{\rm x}|,|{\rm y}|,|{\rm x+y}|)<R$.
Taking the limit $R\to\infty$ in \eqref{intJ}, we arrive at \eqref{delt}.
Exploiting \eqref{delt}, one can proceed to the limit $R\to\infty$ in \eqref{dEa}, yielding
\begin{eqnarray}\label{d32ro}
\delta_{3,2} \,\rho(m,h) \equiv \lim_{R\to\infty } \frac{\delta_{3,2} \,E_{vac}(m,h,R)}{R}=\\
-\frac{h^3\bar{\sigma}^3}{4}\int_{-\infty}^\infty
\frac{dq_1dq_1'}{(2 \pi)^2}\, \mathcal{G}(q_1,-q_1,q_1',-q_1') =
\frac{h^3\bar{\sigma}^3}{16\pi^2\, m^4}\,( C_1+C_2),\nonumber
\end{eqnarray}
where
\begin{equation}\label{I1}
C_1=-\frac{m^4}{4}\left\{\int_{-\infty}^\infty {dq}\,
\frac{q^2}{[\omega(q)]^5}
\right\}^2=-\frac{1}{9},
\end{equation}
and
\begin{eqnarray}\label{I2}
C_2=-\frac{m^4}{4} \iint_{-\infty}^\infty dq \,dq'\,
\frac{q \,q'\,[\omega(q)+ \omega(q')]^2}{[\omega(q) \omega(q')]^4}\cdot \\
\left[
\frac{1}{(q-q'+i0)^2}+\frac{1}{(q-q'-i0)^2}
\right]=
\frac{4}{3}+\frac{\pi^2}{8}
.
\nonumber
\end{eqnarray}
Calculation of the integral in equation \eqref{I1} is straightforward.
The calculation of the double
integral $C_2$ is harder and is described in \ref{int2}.
Combining \eqref{d32ro}-\eqref{I2}, we obtain finally
\begin{eqnarray}
\delta_{3,2} \rho(m,h)=\frac{h^3\bar{\sigma}^3}{16 m^4}\left(
\frac{11}{9\pi^2}+\frac{1}{8}
\right).
\end{eqnarray}
For the two-kink contribution $\tilde{G}_{3,2}$ to the amplitude
$\tilde{G}_3$, this yields
\begin{eqnarray}\label{G32}
\tilde{G}_{3,2}=\frac{\bar{s}^3}{16 }\left(
\frac{11}{9\pi^2}+\frac{1}{8}
\right)=0.0389349\ldots
\end{eqnarray}
The exact value of the universal amplitude $\tilde{G}_3$ is unknown.
In 1978,
McCoy and Wu \cite{McCoyWu78} performed a thorough analysis of the three- and four-point
spin correlation functions in the zero-field
Ising model on the square lattice, from which they obtained an approximate value for this amplitude,
\begin{equation}\label{GMc}
\tilde{G}_3\approx \frac{11 \bar{s}^3}{72}=0.0387529\ldots
\end{equation}
Recently, at least six digits of the exact amplitude $\tilde{G}_3$ have become available
\begin{equation}\label{G3}
\tilde{G}_3=0.0388639\ldots
\end{equation}
due to the very accurate numerical calculations carried out by
Mangazeev {\it et al.} \cite{Mang09,Bazh10}
for the square and triangular lattice Ising models. \begin{footnote}
{The values of the amplitude $\tilde{G}_3$ reported in \cite{Bazh10}
for the square and triangular lattices are
0.038863932(3) and 0.0388639290(1), respectively. }
\end{footnote}
Comparison of \eqref{G32} and \eqref{GMc} with \eqref{G3} indicates that
(i) the two-kink contribution \eqref{G32} approximates the "exact" amplitude \eqref{G3}
somewhat better than \eqref{GMc};
(ii) the two-kink configurations provide the dominant contribution to the universal amplitude $\tilde{G}_3$. The configurations
with four and more kinks in intermediate states contribute less than $0.2\%$ to the spectral sum \eqref{al3}.
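The numbers entering this comparison are straightforward to verify (a sketch in units $m=1$; only the closed forms quoted above are used):
\begin{verbatim}
# Checks: C_1 from quadrature vs its closed form -1/9, the amplitude
# G_{3,2} from its closed form, and the deviation from G_3 = 0.0388639...
import numpy as np
from scipy.integrate import quad

sbar = 1.35783834
I, _ = quad(lambda q: q**2 / (q**2 + 1)**2.5, -np.inf, np.inf)
print(-0.25 * I**2, -1.0 / 9.0)          # both -0.111111...
G32 = sbar**3 / 16.0 * (11.0 / (9.0 * np.pi**2) + 1.0 / 8.0)
print(G32)                               # 0.0389349...
print(abs(G32 - 0.0388639) / 0.0388639)  # ~0.0018, i.e. below 0.2%
\end{verbatim}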
\subsection{One-fermion sector \label{1FS}}
In this subsection we address the modified FFPT in the one-fermion sector $n(\underline{p})=n(\underline{k})=1$, and extend it to the third order in $h$.
The matrix element of the Hamiltonian \eqref{HR} between the dressed one-fermion states $\langle\underline{p}|$ and
$|\underline{k}\rangle$ can be written as
\begin{eqnarray} \label{1sect}
\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle =
\langle {p}|U_R(h)\,\mathcal{H}_R\,U_R(h)^{-1}|{k}\rangle=
2\pi \delta(p-k)\,\omega(p)+
\delta\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle.
\end{eqnarray}
Expanding here the unitary operator $U_R(h)$ and its inverse in powers
of $h$, one arrives at the perturbation expansion
\begin{equation}\label{delta2}
\delta\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle=
\sum_{j=1}^\infty \delta_j \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle.
\end{equation}
The three initial terms in this expansion can be obtained from equations (37)-(39) of \cite{Rut09}
by means of the replacements \eqref{rep}:
\begin{eqnarray}
\label{d21}
&&\delta_1 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle=h \langle p| {V}_R|k\rangle,
\\\label{d22}
&& \delta_2 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle= -
\frac{h^2}{2}\!\!\!\!\sum_{q\atop {n(q)\ne n({p})}}\!\!\!\langle p | {V}_R|q\rangle \langle q| {V}_R|k\rangle
\left\{\frac{ 1 }{\omega(q)- \omega(p)} +\frac{ 1 }{\omega(q)- \omega(k)}\right\}\!\!,\\
&& \delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle= \nonumber
\frac{h^3}{2}\sum_{q,q'}
\langle p | {V}_R|q\rangle \langle q| {V}_R|q'\rangle\langle q'| {V}_R|k\rangle \Bigg\{
[1-\delta_{n(q),n({p})}][1-\delta_{n(q'),n({p})}]\\\label{delta3}
&& \cdot\Bigg[\frac{1}{[\omega(p)-\omega(q)]}\frac{1}{[\omega(p)-\omega(q')]}
+\frac{1}{[\omega(k)-\omega(q)]}\frac{1}{[\omega(k)-\omega(q')]}\Bigg] \\
&&+ \frac{1}{\omega(q)-\omega(q')}\left[\frac{\delta_{n(q),n({p})}
[1-\delta_{n(q'),n({p})}]}{\omega(q')-\omega(p)} -
\frac{[1-\delta_{n(q),n({p})}]\delta_{n(q'),n({p})}}{\omega(q)-\omega(k)}\right]\Bigg\}, \nonumber
\end{eqnarray}
where $n(p)=n(k)=1$.
One can easily see that the matrix elements
$\delta_j \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle$
obey the following symmetry relations:
\begin{eqnarray}\label{hh}
\delta_j \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle&=&
(-1)^j\,\delta_j \langle \underline{k}|\mathcal{H}_R|\underline{p}\rangle,\\
\delta_j \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle&=&(-1)^j\,[\delta_j
\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle]^*,
\end{eqnarray}
for $j=1,2,\ldots$
The kinematic singularity is present already in the first-order term
\eqref{d21}. The resonance poles contribute to the second and higher orders of the expansion
\eqref{delta2} for large enough momenta $p$ and $k$, due to terms like those
in the braces in \eqref{d22}, \eqref{delta3}.
Nevertheless, at finite $R$, the right-hand sides of equations \eqref{d21}-\eqref{delta3} determine
well defined generalized functions, if the absolute values of momenta $p$ and $k$ are small enough,
\begin{equation}\label{pkm}
\omega(p)< 3m, \quad {\rm and}\quad \omega(k)< 3m.
\end{equation}
The latter conditions guarantee that the resonance poles do not appear in expansion
\eqref{delta2}.
The constraints \eqref{pkm} will be imposed in the subsequent FFPT calculations at finite $R$.
After proceeding to the limit $R\to \infty$, the results will be analytically continued to
larger momenta, $|p|>\sqrt{2} \,m$.
We postulate the following definition of the renormalized quark dispersion law $\epsilon(p,m,h)$,
\begin{eqnarray}\label{eps}
\lim_{R\to\infty}\left\{ \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle
-\pi \delta(p-k)\left[
E_{vac}(m,h,R)+E_{vac}(m,-h,R)
\right]
\right\}=\\
2\pi\, \epsilon(p,m,h)\,\delta(p-k)+2\pi i f(m,h)\, \delta'(p-k).\nonumber
\end{eqnarray}
Just as in the case of definition \eqref{fh}, both sides in the above equation must be understood as formal power series in $h$. Equating the coefficients in these power series and taking into account \eqref{hh} and \eqref{fhj}, one finds
\begin{equation}\label{epev}
\lim_{R\to\infty}\left\{\delta_{j} \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle
-2\pi \delta(p-k)\,
\delta_{j} \langle \underline{0}|\mathcal{H}_R|\underline{0}\rangle
\right\}=2\pi\, \delta_{j}\,\epsilon(p,m,h)\,\delta(p-k),
\end{equation}
for even $j=2,4,\ldots$, and
\begin{eqnarray}\label{Hod}
&&\lim_{R\to\infty}\left\{\delta_{j} \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle
+4\pi i \,\delta'(p-k) R^{-1} \,\delta_{j} \, \langle \underline{0}|\mathcal{H}_R|\underline{0}\rangle
\right\}=0,\\
&&\delta_{j}\,\epsilon(p,m,h)=0,\label{epsod}
\end{eqnarray}
for odd $j=1,3,\ldots$
Thus, on the basis of the above heuristic analysis, we can argue that the Taylor expansion of the quark dispersion law $\epsilon(p,m,h)$ contains only even powers
of $h$, which are determined by equation \eqref{epev}.
It was shown in \cite{FZWard03} that
the renormalized quark dispersion law $\epsilon(p,m,h)$
does not have a Lorentz covariant form in the confinement regime.
Nevertheless,
the 'dressed quark mass' $m_q(m,h)$
can be extracted from the large-$p$ asymptotics of $\epsilon(p,m,h)$
in the following way \cite{FZWard03, Rut09},
\begin{equation}\label{mq2}
[m_q(m,h)]^2 =\lim_{p\to\infty}\{2 p\, [\epsilon(p,m,h)-p]\}.
\end{equation}
This relation is understood, of course, in the sense of a
power series in $h$, or, equivalently, in the parameter $\lambda= 2 h\bar{\sigma}/m^2$. It follows from
\eqref{epsod}, that this expansion contains only even powers,
\begin{equation}\label{mqS}
{m_q^2} =m^2+m^2 \sum_{l=1}^\infty a_{2l} \lambda^{2l}.
\end{equation}
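Note that the extraction formula \eqref{mq2} simply inverts the relativistic large-$p$ asymptotics: for a Lorentz-covariant dispersion $\epsilon(p)=\sqrt{p^2+m_q^2}$ one has $2p\,[\epsilon(p)-p]\to m_q^2$ as $p\to\infty$. A one-line symbolic check (a sketch using sympy):
\begin{verbatim}
# For eps(p) = sqrt(p^2 + mq^2), the limit of 2p(eps - p) at p -> oo
# equals mq^2 exactly.
import sympy as sp

p, mq = sp.symbols('p m_q', positive=True)
print(sp.limit(2 * p * (sp.sqrt(p**2 + mq**2) - p), p, sp.oo))  # m_q**2
\end{verbatim}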
In order to validate the latter statement, it remains to show that the large-$R$ limits in the left-hand sides of equations
\eqref{epev} and \eqref{Hod} exist, and to prove equalities \eqref{Hod}. In what follows, we shall do this for the three
initial values $j=1,2,3$.
The case $j=1$ is quite simple.
The term (\ref{d21}) linear in $h$ in expansion \eqref{delta2} reads as
\begin{eqnarray}\label{V1}
\delta_1 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle =-h\int_{-R/2}^{R/2} d {\rm x}\, \langle p|\sigma({\rm x})|k\rangle=\\
i\,h\bar{\sigma}\,\frac{ \omega(p)+\omega(k) }{[\omega(p)\omega(k)]^{1/2}}\,
\frac{2 \sin[R(k-p)/2]}{(k-p)}\,
\mathcal{P} \frac{ 1 }{k-p}.\nonumber
\end{eqnarray}
Even though the right-hand side contains the kinematic singularity, it describes a well defined
generalized function at arbitrary finite $R$. Furthermore, exploiting the equality
\begin{equation}\label{R1}
\lim_{R\to\infty} \frac{2\sin(q R/2)}{q}\,\mathcal{P}\,\frac{1}{q}=-2\pi \delta'(q),
\end{equation}
we can proceed to the limit $R\to\infty$ in equation \eqref{V1}, obtaining
\begin{equation} \label{R2}
\lim_{R\to\infty}\delta_1 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle = 4\pi i\,
\delta'(p-k)\, h\bar{\sigma}.
\end{equation}
This proves \eqref{Hod} for $j=1$, since
$\delta_1\langle \underline{0}|\mathcal{H}_R|\underline{0}\rangle=-h \bar{\sigma}R$.
Turning to the term \eqref{d22} quadratic in $h$, we first perform the summation over the
number $n(q)$ of the fermions in the intermediate state $|q\rangle$,
subject to the requirement \eqref{pkm}. The result can be written in the
compact form
\begin{eqnarray}\label{d2pk}
&&\delta_2 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle =
-\frac{h^2}{2} \int_{0}^\infty d{\rm y} \,
\iint_{-R/2}^{R/2}d{\rm x}_1\,d{\rm x}_2 \,\left(1+e^{y[\omega(k)-\omega(p)]}\right)\cdot \\
&&\langle p | \sigma({\rm x}_1-{\rm x}_2,{\rm y})(1-P_1) \sigma(0,0)|k\rangle
e^{i {\rm x}_2(k-p)}
,\nonumber
\end{eqnarray}
where ${P}_1$ denotes the orthogonal projection operator onto the
one-fermion subspace of the Fock space.
The matrix element in the right-hand side can be represented as
\begin{eqnarray}\nonumber
&&\langle p | \sigma({\rm x},{\rm y})(1-P_1) \sigma(0,0)|k\rangle=
2\pi\, \delta(p-k)[\langle 0 | \sigma({\rm x},{\rm y})
\sigma(0,0)|0\rangle-\bar{\sigma}^2]+\\
&&\langle p | \sigma({\rm x},{\rm y})(1-P_1) \sigma(0,0)|k\rangle_{reg},
\label{pk}
\end{eqnarray}
where ${\rm x}={\rm x}_1-{\rm x}_2$.
The first singular term in the right-hand side
represents the 'direct propagation part' \cite{FZWard03}, while the second term is a regular function
of momenta at $k\to p$.
After substitution of \eqref{pk} into \eqref{d2pk}
and subtraction of the singular term,
we get
\begin{eqnarray}\nonumber
&&\delta_2 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle -2\pi \delta(p-k)\,
\delta_2 E_{vac}(m,h,R)=\\\label{DE2}
&&-\frac{h^2}{2} \int_{0}^\infty d{\rm y} \,
\iint_{-R/2}^{R/2}d{\rm x}_1\,d{\rm x}_2 \,\left(1+e^{y[\omega(k)-\omega(p)]}\right)
e^{i {\rm x}_2(k-p)}
\cdot \\\nonumber
&&\langle p | \sigma({\rm x}_1-{\rm x}_2,{\rm y})(1-P_1) \sigma(0,0)|k\rangle_{reg}.
\nonumber
\end{eqnarray}
In this equation we can safely proceed to the limit $R\to\infty$.
Comparing the
result with \eqref{eps}, one finds
the second order correction to the kink dispersion law
\begin{eqnarray}\label{d2eps}
&&\delta_2\, \epsilon(p,m,h)=
-{h^2} \int_{0}^\infty d{\rm y} \,
\int_{-\infty}^{\infty}d{\rm x} \cdot\\
&&\lim_{k\to p}\left[\langle p | \sigma({\rm x},{\rm y}) \sigma(0,0)|k\rangle-
\langle p | \sigma({\rm x},{\rm y}) {P}_1 \sigma(0,0)|k\rangle
\right].\nonumber
\end{eqnarray}
Even though the above relation was derived for small $|p|$ satisfying the first inequality
in \eqref{pkm}, we shall extend it to all real momenta $p$ by analytic continuation.
The second order correction to the squared quark mass can be read from
\eqref{mq2} and \eqref{d2eps},
\begin{eqnarray}\label{d2mq}
&&\delta_2 \,[m_q(m,h)]^2=
-2{h^2} \lim_{\beta\to\infty} \int_{0}^\infty d{\rm y} \,
\int_{-\infty}^{\infty}d{\rm x}\cdot\\
&&\lim_{\beta'\to \beta}\left[\langle \beta' | \sigma({\rm x},{\rm y}) \sigma(0,0)|\beta\rangle-
\langle \beta' | \sigma({\rm x},{\rm y}) {P}_1 \sigma(0,0)|\beta\rangle
\right].\nonumber
\end{eqnarray}
This integral representation for the second order correction to the quark mass [written
in a slightly different form \eqref{a2B}] was first
derived by Fonseca and Zamolodchikov \cite{FZWard03}. Exploiting the Ward identities, they
managed to express the matrix element
in the right-hand side in terms of solutions of certain differential equations, and
obtained the value
\begin{equation}\label{aq}
a_q=\bar{s}^2\cdot 0.142021619(1)\ldots
\end{equation}
for the parameter $a_q$ defined as
\begin{equation} \label{aq1}
a_q =2 \bar{s}^2\,a_2
\end{equation}
by numerical integration of the
double integral in \eqref{d2mq} over the half-plane in polar coordinates $r, \theta$.
It turns out that the integral over the polar angle can be evaluated analytically.
The details of this calculation
are relegated to \ref{IPA}. The results read as,
\begin{eqnarray}\label{sigth}
&&\mathcal{U}(r)\equiv \int_0^\pi \frac{d\theta}{\pi}\,
\lim_{\beta'\to \beta}\langle \beta' | \sigma(r \cos\theta ,r \sin\theta) \sigma(0,0)|\beta\rangle
=\\\nonumber
&&e^{\chi/2}
\bigg\{ r^2 {\mathfrak b}_0'\,\varphi'\, \cosh\frac{\varphi}{2} + \\
&&{\mathfrak b}_0\left[ \sinh\frac{\varphi}{2} +
r\, \varphi'\,\cosh\frac{\varphi}{2}-\frac{ r^2}{4}\left(\sinh\frac{3\varphi}{2}+\sinh\frac{5\varphi}{2}
\right)\right]\bigg\}, \nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{Sth}
&&\mathcal{W}(r)\equiv \lim_{\beta\to \infty}\int_0^\pi \frac{d\theta}{\pi}\,
\lim_{\beta'\to \beta}\langle \beta' | \sigma(r \cos\theta ,r \sin\theta)P_1 \sigma(0,0)|\beta\rangle= \\
&&\frac{2 \bar{s}^2}{\pi}
\bigg\{\left[1-2r^2\right] I_0(r) K_0(r)-
2 r \,K_1(r)\left[I_0(r)+r\, I_1(r)
\right]
\bigg\},\nonumber
\end{eqnarray}
where
${\mathfrak b}_0(r)$ stands for the solution of the second order differential equation
\begin{equation}\label{difb0}
{\mathfrak b}_0''(r)+r^{-1}\, {\mathfrak b}_0(r)= \cosh[2\varphi(r)]\, {\mathfrak b}_0(r),
\end{equation}
which vanishes at $r\to \infty$, and behaves at small $r\to0$ as
\begin{equation}
{\mathfrak b}_0(r)=\frac{1}{\Omega(r)}+O(r^4).
\end{equation}
The auxiliary functions $\varphi(r)$, $\chi(r)$, and $\Omega(r)$ were defined in \cite{FZWard03};
$I_j(r)$ and $K_j(r)$ are the Bessel functions of imaginary argument and
the Macdonald functions, respectively.
In order to harmonize notations with \ref{IPA} and reference \cite{FZWard03},
we have chosen the units of mass in equations \eqref{sigth} and \eqref{Sth} so that
$m=1$.
Though the integrals \eqref{sigth} and \eqref{Sth} both increase linearly at large $r$, their
difference vanishes exponentially at $r\to\infty$. The
remaining radial integration in \eqref{d2mq} leads to the explicit representation for the coefficient
$a_2$ in expansion \eqref{mqS},
\begin{equation}\label{a2int}
a_2=\frac{\pi}{2 \bar{s}^2}\int_0^\infty dr\, r[\mathcal{W}(r)-\mathcal{U}(r)].
\end{equation}
Numerical evaluation of this integral yields
\begin{equation}\label{Aa2}
a_2= 0.0710108\ldots,
\end{equation}
in agreement with \eqref{aq}.
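The quoted agreement amounts to the arithmetic relation $a_q=2\bar{s}^2 a_2$ between \eqref{aq} and \eqref{Aa2}; a trivial check:
\begin{verbatim}
# a_q / sbar^2 = 0.142021619 (Fonseca-Zamolodchikov); its half must
# reproduce the radial-integral value a_2 = 0.0710108...
print(0.142021619 / 2.0)    # 0.0710108...
\end{verbatim}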
The described calculation procedure is based both on the summation of the infinite form factor
series \eqref{d22}, and on the explicit representations for the matrix elements of the
product of two spin operators between the one-fermion states, derived by
Fonseca and Zamolodchikov in \cite{FZWard03}.
Unfortunately, it is problematic to extend this approach to other integrable models,
since it essentially exploits some rather specific features of the IFT, see the 'Discussion' Section in
\cite{FZWard03}. On the other hand, a very good approximation for the constant $a_2$ can be obtained
by truncating the form factor series \eqref{d22} at its first term accounting for
the three-kink intermediate states, $n(q)=3$. We shall describe this technique in some detail here, and apply it in Section \ref{SOPFT} to estimate the leading quark-mass perturbative correction in the three-state PFT.
The first term $\delta_{2,3} \,\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle$
in the form factor series \eqref{d22}, which describes the contribution of the three-kink
intermediate states, has the following explicit form,
\begin{eqnarray}\label{H23pk}
&&\delta_{2,3} \,\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle=-\frac{1}{2}\frac{h^2}{3!}
\int_{-\infty}^\infty\frac{d q_1\,d q_2\,d q_3}{(2\pi)^3}\,\Delta(Q-p,R)\Delta(Q-k,R)\cdot
\\
&&\bigg\{\frac{ 1 }{\omega(q_1)+\omega(q_2)+\omega(q_3)- \omega(p)} +\frac{ 1 }{\omega(q_1)+\omega(q_2)+\omega(q_3)- \omega(k)} \bigg\}\, \nonumber
\cdot\\
&&\langle p|\sigma(0,0)|q_1, q_2,q_3\rangle
\langle q_3, q_2,q_1|\sigma(0,0)|k\rangle,\nonumber
\end{eqnarray}
where $Q=q_1+q_2+q_3$, and
\begin{equation}
\Delta(z,R)=\frac{2\sin(z R/2)}{z}.
\end{equation}
Note that the two $R$-dependent factors in the integrand in \eqref{H23pk} give rise
to the momentum conservation law in the large-$R$ limit,
\begin{equation}\label{mcons}
\lim_{R\to\infty}\Delta(Q-p,R)\,\Delta(Q-k,R)=4\pi^2 \,\delta(Q-p)\,\delta(k-p).
\end{equation}
The right-hand side of \eqref{H23pk} is a well-defined generalized function
for all finite $R$ under the conditions
\eqref{pkm}. By the Wick expansion, the product of the two matrix elements in the third line of \eqref{H23pk} can be represented as a sum of nine terms.
Taking into account the symmetry of the integrand in \eqref{H23pk} with respect to
permutations of momenta of three virtual kinks, one can leave only two
terms in this expansion multiplied by appropriate combinatoric factors. As a result,
the substitution
\begin{eqnarray}\label{subs}
&&\langle p|\sigma(0,0)|q_1, q_2,q_3\rangle
\langle q_3, q_2,q_1|\sigma(0,0)|k\rangle \leadsto\\\nonumber
&&6\, \bar{\sigma}^{-2}\langle 0|\sigma(0,0)|q_2,q_3\rangle
\langle q_2,q_1|\sigma(0,0)|0\rangle\langle p|\sigma(0,0)|q_1\rangle
\langle q_3|\sigma(0,0)|k\rangle+\\
&&3\,\bar{\sigma}^{-2}\langle 0|\sigma(0,0)|q_2,q_3\rangle
\langle q_3,q_2|\sigma(0,0)|0\rangle
\langle p|\sigma(0,0)|q_1\rangle \langle q_1|\sigma(0,0)|k\rangle\nonumber
\end{eqnarray}
in the integrand in \eqref{H23pk} leaves the integral unchanged.
One cannot proceed directly to the limit $R\to\infty$
in equation \eqref{H23pk} by exploiting equality
\eqref{mcons}. The problem comes from the product of two kinematic singularities
in the form factors in the
right-hand side of \eqref{subs},
\begin{eqnarray}\label{ffct}
&& \langle p|\sigma(0,0)|q_1\rangle \langle q_1|\sigma(0,0)|k\rangle
= \\\nonumber
&&- \bar{\sigma}^2
\, \frac{\omega(p)+\omega(q_1)}{\sqrt{\omega(p)\omega(q_1)}}\,
\frac{\omega(q_1)+\omega(k)}{\sqrt{\omega(q_1)\omega(k)}}\,
\mathcal{P}\frac{1}{p-q_1}\,\mathcal{P}\frac{1}{q_1-k}=\\\nonumber
&&4\pi^2 \bar{\sigma}^2 \,\delta(p-k)\delta(p-q_1)
+\left[\langle p|\sigma(0,0)|q_1\rangle \langle q_1|\sigma(0,0)|k\rangle\right]_{reg},
\nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{regpr}
\left[\langle p|\sigma(0,0)|q_1\rangle \langle q_1|\sigma(0,0)|k\rangle\right]_{reg}=
-\frac{\bar{\sigma}^2}{2}
\frac{\omega(p)+\omega(q_1)}{\sqrt{\omega(p)\omega(q_1)}}\,
\frac{\omega(q_1)+\omega(k)}{\sqrt{\omega(q_1)\omega(k)}}\cdot\\\nonumber
\left[\frac{1}{(p-q_1-i0)(k-q_1-i0)}+\frac{1}{(p-q_1+i0)(k-q_1+i0)}
\right].
\end{eqnarray}
Multiplication of the first term in the right-hand side of \eqref{ffct},
representing the direct propagation part, by the right-hand side
of equation \eqref{mcons} leads to
the familiar ill-defined factor $[\delta(p-k)]^2$. This is not surprising,
since the vacuum energy
$E_{vac}(m,h,R)\sim R$ contributing to
$\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle$
diverges in the limit $R\to\infty$.
One can easily see that the direct propagation part of the form factors \eqref{ffct},
upon substitution into \eqref{subs} and \eqref{H23pk}, gives rise
to the term
\begin{equation}\label{sing}
2\pi\,\delta(p-k) \,\delta_{2,2}\,E_{vac}(m,h,R),
\end{equation}
where $\delta_{2,2}\,E_{vac}(m,h,R)$
was defined in \eqref{a22L}. After subtraction of \eqref{sing} from \eqref{H23pk}, we
obtain a generalized function that has a well defined limit at $R\to\infty$.
According to \eqref{eps}, this limit must be identified with the
three-kink contribution to the second order correction to the kink dispersion law,
\begin{equation}\label{eps23}
2\pi \delta(p-k)\,\delta_{2,3} \,\epsilon(p,m,h)=\lim_{R\to\infty}
[\delta_{2,3} \,\langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle-
2\pi\,\delta(p-k) \,\delta_{2,2}\,E_{vac}(m,h,R)].
\end{equation}
After analytic continuation to all real $p$ and proceeding to the limit $p\to\infty$,
one obtains from \eqref{eps23} and \eqref{mq2} the corresponding correction to the squared kink mass
\begin{equation}\label{dm23}
\delta_{2,3}\, [m_q(m,h)]^2=m^2 \lambda^2 \, a_{2,3},
\end{equation}
where
\begin{equation}\label{A23}
a_{2,3}=\frac{1}{16\pi^2}\lim_{p\to\infty}[2\, \mathcal{I}_2(p)-\mathcal{I}_1(p)]
\end{equation}
is the three-kink contribution to the amplitude $a_2$. The explicit form of the integrals
$\mathcal{I}_{j}(p)$ reads as
\begin{eqnarray}\label{IJ}
\frac{\mathcal{I}_j(p)}{m^2}=\int_{-\infty}^\infty\frac{d q_1\,d q_2\,d q_3}{\omega(q_1)\omega(q_2)\omega(q_3)}
\frac{\delta(q_1+q_2+q_3-p)}{\omega(q_1)+\omega(q_2)+\omega(q_3)-\omega(p)}
\mathcal{J}_j(q_1,q_2,q_3),\\\label{II1}
\mathcal{J}_1(q_1,q_2,q_3)=\frac{[\omega(q_2)-\omega(q_3)]^2}{(q_2+q_3)^2}[\omega(p)+\omega(q_1)]^2\,\mathcal{P}\,\left(\frac{1}{p-q_1}\right)^2,\\\label{II2}
\mathcal{J}_2(q_1,q_2,q_3)=\frac{\omega(q_2)-\omega(q_1)}{q_2+q_1}
\frac{\omega(q_2)-\omega(q_3)}{q_2+q_3}
\left[\omega(p)+\omega(q_1)\right] \cdot\\
\left[\omega(p)+\omega(q_3)\right]\,\mathcal{P}\,\frac{1}{p-q_1}\,
\mathcal{P}\,\frac{1}{p-q_3},\nonumber
\end{eqnarray}
where
\begin{equation}\label{prv}
\mathcal{P}\,\left(\frac{1}{p-q}\right)^2=\frac{1}{2}\left[
\frac{1}{(p-q-i0)^2}+\frac{1}{(p-q+i0)^2}
\right].
\end{equation}
The constant \eqref{A23} was first numerically estimated by
Fonseca and Zamolodchikov \cite{FonZam2003}, $a_{2,3}\approx 0.07$.
Its exact value
\begin{equation}\label{aa23}
a_{2,3}=\frac{1}{16}+\frac{1}{12\pi^2}=0.0709434\ldots,
\end{equation}
which is remarkably close to the total amplitude $a_2$ [see (\ref{Aa2})],
was announced later without derivation in \cite{Rut09}. To fill this gap, we
present the rather involved derivation of \eqref{aa23} in \ref{Ca23}.
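The closeness of \eqref{aa23} to the full amplitude \eqref{Aa2} is immediate numerically (a trivial sketch):
\begin{verbatim}
# Exact three-kink amplitude a_{2,3} = 1/16 + 1/(12 pi^2) and its
# relative deviation from the full a_2 = 0.0710108...
import numpy as np

a23 = 1.0 / 16.0 + 1.0 / (12.0 * np.pi**2)
print(a23)                               # 0.0709434...
print(abs(a23 - 0.0710108) / 0.0710108)  # ~1e-3: four-kink states are tiny
\end{verbatim}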
Finally, let us turn to the third-order term in the form factor expansion \eqref{delta2},
and describe the main steps in the proof of equality \eqref{Hod}
for $j=3$, relegating details to \ref{LargeR}.
We start from the form factor expansion \eqref{delta3}
and extract from it the
direct propagation part,
\begin{equation}\label{dppreg}
\delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle=\delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle_{dpp}+\delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle_{reg}.
\end{equation}
After performing the integration with respect to ${\rm x}_1, {\rm x}_2, {\rm x}_3$ over the cube $(-R/2,R/2)^3$, we proceed in \eqref{dppreg} to the
limit $R\to\infty$, understood in the sense of generalized functions.
It turns out that only the direct propagation part of the matrix element \eqref{dppreg} contributes to this limit,
giving rise to equality \eqref{Hod} at $j=3$, while the large-$R$ limit of its regular part
vanishes,
\begin{subequations}\label{qq}
\begin{align}
&\lim_{R\to\infty}\delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle_{dpp}=-4\pi i \delta'(p-k)\delta_3\, \rho(m,h),
\label{RL}\\
&\lim_{R\to\infty}\delta_3 \langle \underline{p}|\mathcal{H}_R|\underline{k}\rangle_{reg}=0. \label{RL0}
\end{align}
\end{subequations}
\section{Form factors in the three-state PFT \label{FF}}
The form factors of physically relevant operators in the three-state PFT were found
in 1988 by Kirillov and Smirnov in the preprint \cite{KS88} of the Kiev Institute for Theoretical Physics. In this section we briefly recall their results with emphasis on the
form factors of the disorder spin operator in the paramagnetic phase. Exploiting the duality
\cite{FRADKIN19801,Wu82,Bax} of the PFT, one can simply relate them to the form factors of the spin order operators in the ferromagnetic phase, which will be used in the next section.
The set of nine operators $O_{ij}(x)$, $i,j=0,1,2$, and their descendants was considered in \cite{KS88}.
The operators $O_{ij}$ transform in the following way under the action of the generator of the cyclic permutation
$\Omega$ and charge conjugation $C$,
\begin{equation}
\Omega^{-1} O_{ij} \Omega =\upsilon^i O_{ij}, \quad C^{-1}O_{ij}C=O_{\bar{i}\,\bar{j}},
\end{equation}
where $\upsilon=\exp(2\pi i/3)$, and $\bar{j}=3-j \mod 3$, $0\le \bar{j}\le 2$.
The operators $O_{ij}(x)$ were identified in \cite{KS88}
as the main ones arising naturally in the three-state PFT.
In particular, the operators $O_{0j}$ with $j=1,2$ are proportional to the disorder spin operators \cite{FRADKIN19801}
$\mu$ and $\bar{\mu}$,
\begin{equation}\label{00j}
O_{01}(x)=\frac{\mu(x)}{\langle \mu\rangle}, \quad O_{02}(x)=\frac{\bar{\mu}(x)}{\langle \mu\rangle},
\end{equation}
where $\langle \mu\rangle=\!\!\!\phantom{x}_{par}\langle 0|\mu (0) |0\rangle_{par}=
\!\!\!\phantom{x}_{par}\langle 0|\bar{\mu}(0) |0\rangle_{par}$,
and $|0\rangle_{par}$ is the (non-degenerate) paramagnetic vacuum. The operators $O_{0j}(x)$
transform as scalars under rotations.
The operators $O_{j0}$ ($j=1,2$) are proportional to the order spin operators $\sigma$ and $\bar{\sigma}$, respectively.
The operators
$O_{jj}$ ($j=1,2$) correspond to the parafermions $\psi_j$ (regularized $\sigma \mu$ and $\bar{\sigma} \bar{\mu}$), while $O_{j\bar{j}}$ ($j=1,2$)
correspond to the parafermions $\bar{\psi}_j$ (regularized $\sigma \bar{\mu}$ and $\bar{\sigma }{\mu}$). Finally, the descendants
of the operator $O_{00}(x)$ correspond to the components of the energy-momentum density tensor and to other local conserved fields. The conformal limit of these fields is described in \cite{ZF85}.
We shall use notations \eqref{rapibas} for the
3-state PFT rapidity
basis states as well as the normalization convention \eqref{norm3P}
in order to harmonize the notations with \cite{KS88}.
The form factors of the operator $O_{ij}(0)$ are defined as the matrix elements of the form
\begin{equation}\label{ff}
f_{ij}(\beta_1,\ldots,\beta_n)_{\varepsilon_1,\ldots,\varepsilon_n}
\equiv \!\!\! {\phantom{x}_{par} }\langle 0 | O_{ij}(0) | \beta_n,\ldots, \beta_1\rangle_{\varepsilon_n,\ldots,\varepsilon_1}.
\end{equation}
Due to their $\mathbb{Z}_3$-transformation properties,
the form factors \eqref{ff} differ from zero only if $\sum_{k=1}^n \varepsilon_k = i \mod 3 $.
The following axioms \cite{smirnov1992form,KS88} are postulated for the form factors.
\begin{enumerate}
\item The symmetry property:
\begin{align}
f_{ij}(\beta_1,\ldots,\beta_l,\beta_{l+1},\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_l,\varepsilon_{l+1},\dots,\varepsilon_n}\, S_{\varepsilon_l,\varepsilon_{l+1}}(\beta_l-\beta_{l+1})=\\
f_{ij}(\beta_1,\ldots,\beta_{l+1},\beta_l,\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_{l+1},\varepsilon_l,\dots,\varepsilon_n}.\nonumber
\end{align}
\item The analytical continuation axiom:
\begin{align}
f_{ij}(\beta_1,\ldots,\beta_n+2\pi i)_{\varepsilon_1\ldots,\varepsilon_n}=\\
\upsilon^{-j\varepsilon_n}f_{ij}(\beta_n,\beta_1\ldots,\beta_{n-1})_{\varepsilon_n,\varepsilon_1,\ldots,\varepsilon_{n-1}}.\nonumber
\end{align}
\item The function $f_{ij}(\beta_1,\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_n}$ depends
analytically on the complex variable
$\beta_n$ and has only simple poles in the strip $0\le \Im \beta_n \le \pi$, located at the
points $\beta_n=\beta_k+\frac{2\pi i}{3}$ and $\beta_n=\beta_k+\pi i$. The residues at these points are:
\begin{align}\label{Bpole}
(2\pi)^{1/2}\,3^{-1/4}
{\rm Res}_{\beta_n=\beta_k+2\pi i/3} \,f_{ij}(\beta_1,\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_n}=\\
\delta_{\varepsilon_n,\varepsilon_k} \,f_{ij}(\beta_1,\ldots,\beta_k+\frac{\pi i}{3},\ldots,\beta_{n-1})_{\varepsilon_1\ldots,-\varepsilon_k,\ldots,\varepsilon_{n-1}}\cdot \nonumber\\
\prod_{l>k}^{n-1}S_{\varepsilon_n,\varepsilon_l}\left(\beta_k-\beta_l+\frac{2\pi i}{3}\right),\nonumber\\
2\pi i \,{\rm Res}_{\beta_n=\beta_k+\pi i } \,f_{ij}(\beta_1,\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_n}=\\
\delta_{\varepsilon_n,-\varepsilon_k} \,f_{ij}(\beta_1,\ldots,\hat{\beta}_k,\ldots,\beta_{n-1})_{\varepsilon_1,\ldots,\hat{\varepsilon}_k,\ldots,\varepsilon_{n-1}}\cdot \nonumber\\
\left\{
\prod_{l>k} S_{\varepsilon_l,\varepsilon_k}(\beta_l-\beta_k)-\upsilon^{\varepsilon_k j}
\prod_{l<k} S_{\varepsilon_k\varepsilon_l}(\beta_k-\beta_l)
\right\}. \nonumber
\end{align}
\end{enumerate}
The calculation of the form factors
$f_{ij}(\beta_1,\ldots,\beta_n)_{\varepsilon_1\ldots,\varepsilon_n}$ determined by the above axioms
was performed by Kirillov and Smirnov in \cite{KS88}. Here we describe their results
for the case $i=0$ and $j=1,2$. It follows from \eqref{Bpole} that the form factor
$f_{0j}(\beta_1,\ldots,\beta_n)_{\varepsilon_1,\ldots,\varepsilon_n}$ can be expressed in terms of
$f_{0j}(\beta_1,\ldots,\beta_{3n})_{1,\ldots,1}$. The latter form factor will be denoted as
$f_{0j}(\beta_1,\ldots,\beta_{3n})$. Its explicit representation reads as
\begin{align}\label{ff01}
f_{0j}(\beta_1,\ldots,\beta_{3n})=c^{-3n} g_{0j}(\beta_1,\ldots,\beta_{3n})\exp\left(-\frac{j}{3}\sum_{q=1}^{3n}\beta_q\right)\,\prod_{1\le l<k\le 3n}\zeta_{11}(\beta_l-\beta_k).
\end{align}
Here
\begin{equation}
c=-i \sqrt{2\pi}\,3^{-1/12} \,\exp \left[
\frac{\psi^{(1)}(1/3)-\psi^{(1)}(2/3)}{12\sqrt{3}\,\pi}
\right]=-i\cdot 2.5474074563745797... ,
\end{equation}
where $\psi^{(1)}(z)=\frac{d^2}{dz^2}\ln \Gamma(z)$ is the polygamma function.
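The constant $c$ can be checked directly from its trigamma representation (a sketch; scipy's polygamma implements $\psi^{(n)}(z)$):
\begin{verbatim}
# |c| = sqrt(2 pi) 3^(-1/12) exp[(psi1(1/3) - psi1(2/3))/(12 sqrt(3) pi)]
import numpy as np
from scipy.special import polygamma

c_abs = (np.sqrt(2 * np.pi) * 3**(-1.0 / 12.0)
         * np.exp((polygamma(1, 1.0 / 3.0) - polygamma(1, 2.0 / 3.0))
                  / (12.0 * np.sqrt(3.0) * np.pi)))
print(c_abs)   # 2.5474074563..., i.e. c = -i * |c|
\end{verbatim}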
The function $\zeta_{11}(\beta)$ is defined by the integral representation
\begin{align}\label{zeta11}
\zeta_{11}(\beta)=i \,2^{-2/3}\frac{\sinh ({\beta}/{2})}{\sinh[\frac{1}{2}(\beta-\frac{2\pi i}{3})]\sinh[\frac{1}{2}(\beta+\frac{2\pi i}{3})]} \cdot \\
\exp \left\{
2 \int_0^\infty dk \,\frac{\sin^2[\frac{1}{2}(\beta+ i \pi)k]+\frac{2}{3}\sinh^2(\pi k/3)}{k\, \sinh^2 (\pi k)} \sinh\frac{\pi k}{3}
\right\},\nonumber
\end{align}
which converges in the strip $-8\pi/3<{\rm Im}\,\beta<2\pi/3$. This function can be analytically
continued into the whole complex $\beta$-plane, where it is meromorphic and
satisfies the equalities,
\begin{eqnarray}
\zeta_{11}(\beta)S_{11}(\beta)=\zeta_{11}(-\beta), \quad
\zeta_{11}(\beta-2\pi i)=\zeta_{11}(-\beta),\\
\zeta_{11}\left(\beta-\frac{2\pi i}{3}\right)\,\zeta_{11}(\beta)\,
\zeta_{11}\left(\beta+\frac{2\pi i}{3}\right)=
\frac{1}{4\, \sinh\left(\frac{\beta}{2}-\frac{\pi i}{3}\right)\sinh\left(\frac{\beta}{2}+\frac{\pi i}{3}\right)}.\label{ze}
\end{eqnarray}
The function $\zeta_{11}(\beta)$ has a simple pole at $\beta=-{2\pi i/3}$ with the
residue
\begin{eqnarray}\label{resid}
{\rm Res}_{\beta=-{2\pi i/3}}\,\zeta_{11}(\beta)=
3^{1/6} i \exp \left[
\frac{\psi^{(1)}(1/3)-\psi^{(1)}(2/3)}{12\sqrt{3}\,\pi}
\right]=
-\frac{3^{1/4}c}{\sqrt{2\pi}}.
\end{eqnarray}
Note that in equations \eqref{zeta11} and \eqref{ze} we have corrected some misprints
which were present in \cite{KS88}.
The functions $g_{0j}(\beta_1,\ldots,\beta_{3n})$ have the following representation,
\begin{equation}
g_{0j}(\beta_1,\ldots,\beta_{3n})=P_{0j,n}\left(
e^{\beta_1},\ldots ,e^{\beta_{3n}}
\right)\,\exp\left[
-(n-1)\sum_{q=1}^{3n}\beta_q
\right],
\end{equation}
where $P_{0j,n}(x_1,\ldots,x_{3n})$ is a homogeneous symmetric polynomial of degree
$\deg(P_{0j,n})=3n^2-n\,\bar{j}$.
The polynomial $P_{0j,n}(x_1,\ldots,x_{3n})$ can be represented as the determinant of the matrix $M_{0j,n}$ of
order $(2n-1) \times (2n-1)$, with the matrix elements
\begin{equation}
\left(
M_{0j,n}
\right)_{pq}=\sigma_{3p-q-[(q-1+\bar{j})/2]}(x_1,\ldots,x_{3n}),
\end{equation}
where $[a]$ denotes the integer part of $a$,
$\sigma_k$ is the elementary symmetric polynomial of degree $k$ in the variables
$x_1,\ldots,x_{3n}$, and $\sigma_k=0$ for $k<0$ and for $k>3n$.
The first polynomials $P_{0j,n}(x_1,\ldots,x_{3n})$ have the form,
\begin{subequations}
\begin{eqnarray}
&&P_{01,1}(x_1,x_2,x_{3})=\sigma_1\equiv x_1+x_2+x_{3},\\
&&P_{02,1}(x_1,x_2,x_{3})=\sigma_2\equiv x_1x_2+x_1x_3+x_2 x_{3},\\
&&P_{01,2}(x_1,\dots,x_{6})=\sigma_1\sigma_3\sigma_4-\sigma_4^2-\sigma_1^2\sigma_6,\\
&&P_{02,2}(x_1,\dots,x_{6})=\sigma_2\sigma_3\sigma_5-\sigma_5^2-\sigma_2^2\sigma_6.
\end{eqnarray}
\end{subequations}
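The determinant construction can be verified symbolically; the following sketch (Python with sympy; the helper names are ours) rebuilds $P_{01,2}$ from the matrix $M_{01,2}$ and compares it with the expression listed above:
\begin{verbatim}
# Verify det M_{0j,n} = P_{01,2} = s1 s3 s4 - s4^2 - s1^2 s6 for
# n = 2, j = 1 (jbar = 2), in the 3n = 6 variables x1..x6.
import sympy as sp
from itertools import combinations

xs = sp.symbols('x1:7')

def sig(k):
    # elementary symmetric polynomial sigma_k; zero outside 0 <= k <= 6
    if k < 0 or k > len(xs):
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations(xs, k)])

n, jbar = 2, 2
M = sp.Matrix(2*n - 1, 2*n - 1,
              lambda p, q: sig(3*(p + 1) - (q + 1) - (q + jbar) // 2))
diff = sp.expand(M.det()
                 - (sig(1)*sig(3)*sig(4) - sig(4)**2 - sig(1)**2*sig(6)))
print(diff == 0)    # True
\end{verbatim}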
Accordingly, the form factors \eqref{ff01} with $n=0,1$ read as,
\begin{subequations}\label{ff3}
\begin{eqnarray}
&&f_{01}(\varnothing)=1,\\
&&f_{01}(\beta_1,\beta_2,\beta_{3})=c^{-3}
\big[
e^{(-\beta_1-\beta_2+2\beta_3)/3}+
e^{(-\beta_1-\beta_3+2\beta_2)/3}+\\
&&e^{(-\beta_2-\beta_3+2\beta_1)/3}
\big]\prod_{1\le l<k \le 3}\zeta_{11}(\beta_l-\beta_k), \nonumber\\
&&f_{02}(\beta_1,\beta_2,\beta_{3})=c^{-3}
\big[
e^{(\beta_1+\beta_2-2\beta_3)/3}+
e^{(\beta_1+\beta_3-2\beta_2)/3}+\\
&&e^{(\beta_2+\beta_3-2\beta_1)/3}
\big]\prod_{1\le l<k \le 3}\zeta_{11}(\beta_l-\beta_k).\nonumber
\end{eqnarray}
\end{subequations}
The matrix elements of general form can be constructed from the form factors
by means of the crossing relations \cite{smirnov1992form,KS88}. In particular,
\begin{eqnarray}\label{crossing}
\phantom{x}_{-1}
\langle \beta |\mu(0)|\beta_2,\beta_1\rangle_{11}=\!\!\!
{\phantom{x}_{par} }\langle 0| \mu(0)|\beta_2,\beta_1,\beta-i\pi\rangle_{111}=\\\nonumber
\langle\mu\rangle f_{01}(\beta-i\pi,\beta_1,\beta_2),\\\label{crossing2}
\phantom{x}_{-1} \langle \beta |\bar{\mu}(0)|\beta_2,\beta_1\rangle_{11}=
\!\!\! {\phantom{x}_{par} }\langle 0| \bar{\mu}(0)|\beta_2,\beta_1,\beta-i\pi\rangle_{111}=\\
\langle\mu\rangle f_{02}(\beta-i\pi,\beta_1,\beta_2).\nonumber
\end{eqnarray}
The above matrix elements of the disorder operators relate to the paramagnetic phase.
Let us connect them with the matrix elements of the order spin operators
in the ferromagnetic phase. This can be easily done
by means of the duality relations
\begin{subequations}\label{dm}
\begin{eqnarray}\label{dm1}
\mu(x) \mathcal{D}=\mathcal{D}\sigma(x),\\
\bar{\mu}(x) \mathcal{D}=\mathcal{D}\bar{\sigma}(x),
\end{eqnarray}
\end{subequations}
which connect the order and disorder spin operators. It is implied in \eqref{dm} that the order spin operators
$\sigma(x), \bar{\sigma}(x)$ act in the subspace $\mathcal{L}_0 $ of the ferromagnetic space
$\mathcal{L}_{fer}$, while the disorder spin operators
$\mu(x), \bar{\mu}(x)$ act in the subspace $\mathcal{L}_{sym}$ of the paramagnetic space $\mathcal{L}_{par}$.
All these vector spaces were described in Section \ref{PFTsec}.
Since $|\beta_2,\beta_1,\beta-i\pi\rangle_{111}\in \mathcal{L}_{sym}$, we can represent this vector as
\[
|\beta_2,\beta_1,\beta-i\pi\rangle_{111}=\mathcal{D}\,
|K_{02}(\beta_2)K_{21}(\beta_1)K_{10}(\beta-i\pi)\rangle.
\]
After substitution of this equality into \eqref{crossing} and straightforward manipulations exploiting
\eqref{dm1} and unitarity of the mapping $\mathcal{D}$, one obtains
\begin{equation*}
\phantom{x}_{par}\langle 0| \mu(0)|\beta_2,\beta_1,\beta-i\pi\rangle_{111}=\!\!\phantom{x}_{0}\langle 0|\sigma(0)|
K_{02}(\beta_2)K_{21}(\beta_1)K_{10}(\beta-i\pi)\rangle.
\end{equation*}
Application of the crossing relation\footnote{The crossing relations in the ferromagnetic PFT were discussed by
Delfino and Cardy in Appendix A of reference \cite{DelCard98}.}
to the right-hand side yields
\begin{equation*}
\phantom{x}_{0}\langle 0|\sigma(0)|
K_{02}(\beta_2)K_{21}(\beta_1)K_{10}(\beta-i\pi)\rangle=
\langle K_{10}(\beta)|\sigma(0)|
K_{02}(\beta_2)K_{21}(\beta_1)\rangle.
\end{equation*}
The right-hand side can be further transformed to the form
\[
\langle K_{10}(\beta)|\sigma(0)|
K_{02}(\beta_2)K_{21}(\beta_1)\rangle=\upsilon\,\langle K_{02}(\beta)|\sigma(0)|
K_{21}(\beta_2)K_{10}(\beta_1)\rangle,
\]
exploiting the transformation rule $\sigma(0)=\upsilon\,\tilde{\Omega}\,\sigma(0)\,\tilde{\Omega}^{-1}$, and
\eqref{OmK}.
Thus, we obtain finally from the above analysis,
\begin{equation}\label{simu}
\phantom{x}_{-1}
\langle \beta |\mu(0)|\beta_2,\beta_1\rangle_{11} \big|_{par}=\upsilon\,\langle K_{02}(\beta)
|\sigma(0)| K_{21}(\beta_{2}) K_{10}(\beta_{1}) \rangle \big|_{fer}.
\end{equation}
Similarly, one can connect the matrix elements of the operators $\bar{\mu}(0)$ and $\bar{\sigma}(0)$,
\begin{equation}\label{simut}
\phantom{x}_{-1}
\langle \beta |\bar{\mu}(0)|\beta_2,\beta_1\rangle_{11}\big|_{par}= \upsilon^{-1}\,\langle K_{02}(\beta)
|\bar{\sigma}(0)| K_{21}(\beta_{2}) K_{10}(\beta_{1}) \rangle \big|_{fer}.
\end{equation}
Combining \eqref{simu}, \eqref{simut}
with \eqref{crossing}, \eqref{ff3} we find the three-kink matrix element
of the order operator
$
\sigma_3(0)=(\sigma(0)+\bar{\sigma}(0))/{3}
$
in the ferromagnetic phase, which will be used in the next Section,
\begin{eqnarray}\label{MEsig}
\langle K_{{0}2}(\beta)
|\sigma_3(0)| K_{21}(\beta_{2}) K_{1{0}}(\beta_{1}) \rangle \big|_{fer}=\\\nonumber
\frac{\langle\mu\rangle}{3 c^3}\,\zeta_{11}(\beta-\beta_1-i \pi)\, \zeta_{11}(\beta-\beta_2-i \pi)\, \zeta_{11}(\beta_1-\beta_2)\\\nonumber
\times\Bigg\{\left(
e^{\beta_1} +e^{\beta_2}- e^{\beta}
\right)\exp\left[-\frac{\beta_1+\beta_2+\beta+\pi i}{3}\right]+\\
\left(
e^{-\beta_1} +e^{-\beta_2}- e^{-\beta}
\right)\exp\left[\frac{\beta_1+\beta_2+\beta+\pi i}{3}\right]
\Bigg\}.\nonumber
\end{eqnarray}
Note that the function $\zeta_{11}(\beta)$ defined by equation \eqref{zeta11}
admits the following explicit representation in terms of the dilogarithm function
${\rm Li}_2(z)=\sum_{n=1}^\infty \frac{z^n}{n^2}$,
\begin{eqnarray}\label{z11dL}
\zeta_{11}(\beta-i\pi)=-e^{-\beta/3}\left(1+e^{-\beta}\right)\left(1-e^{-\beta}+e^{-2\beta}\right)^{-5/6}\\
\times \left(
\frac{e^{\beta}-e^{i\pi/3}}{e^{\beta}-e^{-i\pi/3}}
\right)^{\frac{i\beta}{2\pi}}
\exp\left\{
\frac{i}{2\pi}\left[
{\rm Li}_2\left(e^{-\beta-\frac{i\pi}{3}}\right)-{\rm Li}_2\left(e^{-\beta+\frac{i\pi}{3}}\right)
\right]
\right\}.\nonumber
\end{eqnarray}
The function on the right-hand side is even and real for real $\beta$. At
${\rm Re}\,\beta\to+\infty$
it behaves as
\begin{eqnarray}\label{zetas}
\zeta_{11}(\beta-i\pi)=-e^{-\beta/3}\Bigg[1
+\frac{11\pi+3\sqrt{3}(1+\beta)}{6\pi}e^{-\beta}+\\
\frac{55\pi^2+27(1+\beta)^2+3\sqrt{3}\pi (25+28\beta)}{72\pi^2}e^{-2\beta}+
O\left(\beta^3e^{-3\beta}\right)\Bigg].\nonumber
\end{eqnarray}
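Representation \eqref{z11dL} is convenient for numerical work. The following \texttt{mpmath} sketch (ours) implements it literally and checks two of the stated properties at real $\beta$: the values are real, and they approach the asymptotic expansion \eqref{zetas} at large $\beta$.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30

def zeta11_m(b):
    # zeta_11(beta - i*pi) from the dilogarithm representation, real beta
    b = mp.mpf(b)
    e = mp.exp(-b)
    pref = -mp.exp(-b/3)*(1 + e)*(1 - e + e**2)**mp.mpf('-5/6')
    power = ((mp.exp(b) - mp.exp(1j*mp.pi/3))
             / (mp.exp(b) - mp.exp(-1j*mp.pi/3)))**(1j*b/(2*mp.pi))
    dl = (mp.polylog(2, mp.exp(-b - 1j*mp.pi/3))
          - mp.polylog(2, mp.exp(-b + 1j*mp.pi/3)))
    return pref*power*mp.exp(1j*dl/(2*mp.pi))

for b in (0.7, 1.9):
    print(mp.im(zeta11_m(b)))   # ~ 1e-30: real at real beta

b = mp.mpf(12)
asym = -mp.exp(-b/3)*(1
       + (11*mp.pi + 3*mp.sqrt(3)*(1 + b))/(6*mp.pi)*mp.exp(-b)
       + (55*mp.pi**2 + 27*(1 + b)**2
          + 3*mp.sqrt(3)*mp.pi*(25 + 28*b))/(72*mp.pi**2)*mp.exp(-2*b))
print(zeta11_m(b) - asym)       # ~ 4e-13, i.e. O(beta^3 e^{-3 beta})
\end{verbatim}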
To conclude this section, let us present a useful formula for the dilogarithm function
${\rm Li}_2(e^{i \pi p/q})$, with $p< q$ for $p,q\in \mathbb{N}$:
\begin{equation}
{\rm Li}_2(e^{i \pi p/q})=\sum_{j=1}^q e^{i \pi j p/q} s(p,q,j),
\end{equation}
where
\begin{equation}
s(p,q,j)\equiv\sum_{l=0}^\infty\frac{e^{i \pi l p}}{(ql+j)^2}=
\begin{cases}
\frac{\psi^{(1)}(j/q)}{q^2}, & {\rm for \; even} \; p,\\
\frac{\psi^{(1)}[j/(2q)]-\psi^{(1)}[(j+q)/(2q)]}{4q^2}, & {\rm for \; odd} \; p.
\end{cases}
\end{equation}
In particular,
\begin{equation}
{\rm Li}_2(e^{2 i \pi /3})=-\frac{\pi^2}{18}+i\,\frac{\psi^{(1)}(1/3)-\psi^{(1)}(2/3)}{6\sqrt{3}}.
\end{equation}
This equality has been used to derive from \eqref{z11dL} the expression \eqref{resid}
for the residue of the function $\zeta_{11}(\beta)$ at $\beta=-2\pi i/3$.
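Both this equality and the general $p/q$ formula above are straightforward to confirm numerically; a minimal \texttt{mpmath} check (our addition) reads:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 40
psi1 = lambda z: mp.psi(1, z)       # trigamma function

lhs = mp.polylog(2, mp.exp(2j*mp.pi/3))
rhs = -mp.pi**2/18 + 1j*(psi1(mp.mpf(1)/3)
                         - psi1(mp.mpf(2)/3))/(6*mp.sqrt(3))
print(abs(lhs - rhs))               # ~ 1e-40

# the general formula, here for Li_2(e^{i*pi*p/q}) with odd p = 1, q = 3
p, q = 1, 3
total = sum(mp.exp(1j*mp.pi*j*p/q)
            * (psi1(mp.mpf(j)/(2*q)) - psi1(mp.mpf(j + q)/(2*q)))/(4*q**2)
            for j in range(1, q + 1))
print(abs(total - mp.polylog(2, mp.exp(1j*mp.pi*p/q))))   # ~ 1e-40
\end{verbatim}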
\section{Second-order quark mass correction in the ferromagnetic three-state PFT \label{SOPFT}}
In this section we estimate the second-order radiative correction to the kink mass in the
ferromagnetic 3-state PFT in the presence of a weak magnetic field $h>0$ coupled to the spin component
$\sigma_3$. Since very similar calculations for the case of the IFT were described in great detail in Subsections \ref{1FS} and \ref{Ca23}, we can be brief.
In the presence of the magnetic field, the Hamiltonian of the PFT associated with the action
\eqref{AP} with $q=3$ takes the form
\begin{equation}
\mathcal{H}=\mathcal{H}_0-h\int_{-\infty}^\infty d{\rm x}\, \sigma_3({\rm x}),
\end{equation}
where the Hamiltonian $\mathcal{H}_0$ corresponds to the integrable ferromagnetic 3-state PFT
at zero magnetic field. The kinks $K_{\mu\nu}(p)$ with the dispersion law
$\omega(p,m)=\sqrt{p^2+m^2}$ are elementary excitations of the
model at $h=0$. For $h>0$, they form mesonic and
baryonic bound states in the confinement regime. Nevertheless, one can determine
the kink dispersion law $\epsilon(p,m,h)$ perturbatively in $h$,
as described in Subsection \ref{1FS}.
For the leading second order radiative correction $\delta_2 \,\epsilon(p,m,h)\sim h^2$ to the
dispersion law of the kink $K_{2{0}}(p)$, one can write down the form factor expansion
\begin{eqnarray}\label{ffeP}
&&\delta_2 \,\epsilon(p,m,h)=\sum_{n=2}^\infty \delta_{2,n} \,\epsilon(p,m,h),\\\nonumber
&&\delta_{2,n} \,\epsilon(p,m,h)=-\frac{1}{n!}\frac{(2\pi)^2 h^2}{\omega(p)}\,
\sum_{\mu_1,\ldots,\mu_{n-1}= 0}^2
\int_{-\infty}^\infty \prod_{l=1}^n d\beta_l
\frac{\delta(p_1+\ldots+p_n-p)}{\omega_1+\dots+ \omega_n-\omega}\\
&&\times|\langle K_{{0}2}(\beta) |\sigma_3(0)|K_{2, \mu_{n-1}}(\beta_{n}) K_{\mu_{n-1}, \mu_{n-2}}
(\beta_{n-1}) \dots K_{\mu_{1},{0}}(\beta_{1}) \rangle|_{reg}^2,\label{deps2}
\end{eqnarray}
which is analogous to \eqref{d22}.
Here $p_j= m \sinh \beta_j$, $\omega_j= m \cosh \beta_j$,
$p=m\sinh\beta$, $\omega= m\cosh\beta$. Of course, the same result holds for the kinks
$K_{1{0}}(p)$, $K_{{0}2}(p)$ and $K_{{0}1}(p)$.
The matrix elements in the right-hand side of
\eqref{deps2} may contain kinematic singularities at $\beta_j=\beta$, which must be regularized
as done for the IFT in Subsection \ref{1FS}.
The second-order radiative
correction to the squared kink mass can be gained from $\delta_2 \,\epsilon(p,m,h)$
by taking the ultra-relativistic limit. Using \eqref{mq2} gives
\begin{equation}
\delta_2 \,[m_q(m,h)]^2=2\lim_{p\to\infty} [\omega(p,m)\, \delta_2 \,\epsilon(p,m,h)].
\end{equation}
Let us truncate the form factor expansion \eqref{ffeP} at its first term with $n=2$,
\begin{eqnarray}\nonumber
\delta_{2,2} \,\epsilon(p,m,h)=-\frac{1}{2}\,\frac{(2\pi)^2 h^2}{\omega(p)}\,
\int_{-\infty}^\infty d\beta_1 d\beta_2 \,
\frac{\delta(p_1+p_2-p)}{\omega_1+ \omega_2-\omega}\\
\times|\langle K_{{0}2}(\beta)
|\sigma_3(0)| K_{21}(\beta_{2}) K_{1{0}}(\beta_{1}) \rangle|^2.\label{deps22}
\end{eqnarray}
The matrix element in the right-hand side was calculated in the previous section,
see equation \eqref{MEsig}. Since it is regular at all
real $\beta,\beta_1, \beta_2$, it does not require regularization, in contrast to the subsequent terms
in the expansion \eqref{ffeP} with $n=3,4\ldots$.
The correction to the kink mass corresponding to \eqref{deps22} reads as
\begin{eqnarray}\label{dmP22}
\delta_{2,2} \,[m_q(m,h)]^2=
-{(2\pi h)^2 }\, \lim_{\beta\to\infty}
\int_{-\infty}^\infty d\beta_1 d\beta_2 \,
\frac{\delta(p_1+p_2-p)}{\omega_1+ \omega_2-\omega} \\
\times|\langle K_{{0}2}(\beta)
|\sigma_3(0)| K_{21}(\beta_{2}) K_{1{0}}(\beta_{1}) \rangle|^2.\nonumber
\end{eqnarray}
Let us represent it in the form analogous to \eqref{mqS},
\begin{equation}\label{d22m2}
{\delta_{2,2} \,[m_q(m,h)]^2}=\lambda^2 a_{2,2} \,m^2,
\end{equation}
where $\lambda=f_0/m^2$ is the familiar dimensionless parameter proportional
to the magnetic field $h$, and
\begin{equation}
{ f_0= h [\!\!\!\phantom{x}_{0}\langle0| \sigma_3(0)|0\rangle_0
-\!\!\!\phantom{x}_{2}\langle0| \sigma_3(0)|0\rangle_2]= \frac{3}{2}h [\!\!\!\phantom{x}_{0}\langle0| \sigma_3(0)|0\rangle_{0}]
=h \langle 0|\mu(0)|0\rangle_{par}}
\end{equation}
is the "bare" string tension
in the weak confinement regime. For the dimensionless amplitude $a_{2,2}$, we obtain from
\eqref{dmP22} and \eqref{MEsig},
\begin{eqnarray}\label{a22A}
&&a_{2,2}=-\frac{16\pi^2}{9 |c|^6} \lim_{\beta\to\infty}\int_{-\infty}^\infty \,d\beta_1 d\beta_2\,
{\delta(\sinh \beta_1+\sinh \beta_2-\sinh \beta)}\\\nonumber
&&\times({\cosh \beta_1+\cosh \beta_2-\cosh \beta})
\left|\cosh\left(\frac{\beta_1+\beta_2+\beta+\pi i}{3}\right)\right|^2\\
&&\times|\zeta_{11}(\beta-\beta_1-i\pi)\zeta_{11}(\beta-\beta_2-i\pi)
\zeta_{11}(\beta_1-\beta_2)|^2.\nonumber
\end{eqnarray}
After changing the integration variables to $x_j=\sinh(\beta_j)/\sinh(\beta)$, $j=1,2$,
and integrating over $x_2$ exploiting the $\delta$-function, one obtains
\begin{equation}\label{a22B}
a_{2,2}= \lim_{\beta\to\infty}\int_{-\infty}^\infty dx_1 \,\mathcal{M}(x_1,\sinh \beta).
\end{equation}
The function $\mathcal{M}(x_1,\mathfrak{p})$ is symmetric with respect to the reflection
$x_1\to1-x_1$, and has the following asymptotic behavior in the limit ${\mathfrak{p}}\to\infty$,
\begin{equation}
\mathcal{M}(x_1,\mathfrak{p})=\begin{cases}
\mathcal{M}(x_1,\infty)+O({\mathfrak{p}}^{-1}) , & {\rm for}\;0<x_1<1,\\
O(\mathfrak{p}^{-2}), & {\rm for}\;x_1<0, \; {\rm and \;for}\;x_1>1,
\end{cases}
\end{equation}
where
\begin{eqnarray}\label{Minf}
\mathcal{M}(x_1,\infty)=-\frac{8\pi^2}{9 |c|^6} \,\frac{1-x_1+x_1^2}{x_1^{4/3}(1-x_1)^{4/3}}\\
\times\left|\zeta_{11}(-\ln x_1-i\pi)\,\zeta_{11}[-\ln (1-x_1)-i\pi]
\,\zeta_{11}[\ln x_1-\ln (1-x_1)]\right|^2.\nonumber
\end{eqnarray}
Plots of $\mathcal{M}(x_1,\mathfrak{p})$ versus $x_1$ at $\mathfrak{p}=100$ and at $\mathfrak{p}=\infty$ are shown in Figure \ref{fig:M}.
Thus, we arrive at the result
\begin{equation}
a_{2,2}=\int_{0}^1 dx_1 \,\mathcal{M}(x_1,\infty),
\end{equation}
with $\mathcal{M}(x_1,\infty)$ given by \eqref{Minf}.
We did not manage to evaluate the integral on the right-hand side
analytically, and instead computed it numerically
using \eqref{z11dL} and \eqref{zetas}.
The resulting number
\begin{equation}\label{a22num}
a_{2,2}=-\frac{4}{27}+\delta, \quad{\rm with}\; |\delta|<2\times 10^{-16}
\end{equation}
is remarkably close to $-\frac{4}{27}$, which we assume to be the exact value of the amplitude $a_{2,2}$.
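The numerical evaluation is easy to reproduce. The sketch below (ours, and deliberately unoptimized) computes $\zeta_{11}$ directly from the integral representation \eqref{zeta11}, which converges both at real arguments and on the line ${\rm Im}\,\beta=-\pi$ entering \eqref{Minf}, instead of the representation \eqref{z11dL} used in our actual computation; the constant $|c|$ is read off from \eqref{resid}. The nested quadrature is slow (minutes) but recovers $a_{2,2}\approx-4/27$.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 15

def zeta11(b):
    # integral representation (zeta11); valid for -8*pi/3 < Im(b) < 2*pi/3
    b = mp.mpmathify(b)
    pref = (1j*2**mp.mpf('-2/3')*mp.sinh(b/2)
            / (mp.sinh((b - 2j*mp.pi/3)/2)*mp.sinh((b + 2j*mp.pi/3)/2)))
    f = lambda k: ((mp.sin((b + 1j*mp.pi)*k/2)**2
                    + mp.mpf(2)/3*mp.sinh(mp.pi*k/3)**2)
                   / (k*mp.sinh(mp.pi*k)**2)*mp.sinh(mp.pi*k/3))
    return pref*mp.exp(2*mp.quad(f, [0, 40]))  # tail beyond k = 40 negligible

A = (mp.psi(1, mp.mpf(1)/3) - mp.psi(1, mp.mpf(2)/3))/(12*mp.sqrt(3)*mp.pi)
abs_c = mp.sqrt(2*mp.pi)*3**mp.mpf('-1/12')*mp.exp(A)  # |c| from eq. (resid)

def M_inf(x1):
    pre = (-8*mp.pi**2/(9*abs_c**6)
           * (1 - x1 + x1**2)/(x1*(1 - x1))**mp.mpf('4/3'))
    g = (zeta11(-mp.log(x1) - 1j*mp.pi)*zeta11(-mp.log(1 - x1) - 1j*mp.pi)
         * zeta11(mp.log(x1) - mp.log(1 - x1)))
    return pre*abs(g)**2

print(mp.quad(M_inf, [0, mp.mpf(1)/2, 1]))  # -0.148148..., cf. eq. (a22num)
\end{verbatim}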
\begin{figure}
\includegraphics{M.eps}
\caption{Plot of the function $\mathcal{M}(x_1,\mathfrak{p})$ defined by \eqref{a22A}, \eqref{a22B} for $\mathfrak{p}=100$ (blue solid line), and of its ultra-relativistic limit $\mathcal{M}(x_1,\infty)$ given
by \eqref{Minf} (red circles).
} \label{fig:M}
\end{figure}
\section{Conclusions \label{Conc}}
In this paper we have investigated the effect of the multi-quark (multi-kink) fluctuation
on the universal characteristics of the IFT and 3-state PFT in the weak confinement
regime, which is realized in these models in the low-temperature
phase in the presence of a weak magnetic field. For this purpose we
refined the form factor perturbation technique which was adapted in \cite{Rut09} for the
confinement problem in the IFT.
Due to proper regularization of the merging kinematic singularities arising from the products of spin-operator matrix elements, the refined technique allowed us
to perform systematic high-order form factor perturbative calculations in the weak confinement
regime. After verifying the efficiency of the proposed method
by recovering several well-known results for the Ising model in the ferromagnetic phase in the
scaling region, we have applied it to obtain the following new results.
\begin{itemize}
\item
The explicit expression \eqref{G32} for the contribution $\tilde{G}_{3,2}$ caused by two-quark
fluctuations to the universal amplitude $\tilde{G}_{3}$, which characterizes the third derivative of the
free energy of the scaling ferromagnetic Ising model with respect to the magnetic field $h$ at $h=0$.
\item
A proof of the exact result \eqref{aa23}, announced earlier in \cite{Rut09}, for the amplitude $a_{2,3}$ describing the
contribution of three-quark fluctuations to the second order correction to the quark mass in the IFT
in the weak confinement regime.
\item We showed that the third order $\sim h^3$ correction to the quark self-energy
and to the quark mass vanishes in the ferromagnetic IFT.
This also completes the calculation of the low-energy and semiclassical expansions for the meson masses
$M_{n}(h,m)$ in the weak confinement regime to third order in $h$. The final expansions for
$M_{n}^2(h,m)$ to third order in $h$ are described by the representations
given in \cite{Rut09}, since only the terms (which are now shown to be zero)
proportional to the third order quark mass corrections were missing there.
\end{itemize}
In addition, a new representation \eqref{sigth}-\eqref{a2int} for the amplitude $a_2$ characterizing the second order
radiative correction to the quark mass in the ferromagnetic IFT
was obtained by performing the explicit integration over the polar angle in the
double-integral representation \eqref{a2B} for this amplitude obtained in \cite{FZWard03}.
Finally, exploiting the explicit expressions for the form factors of the spin operators in the 3-state PFT at zero magnetic field
obtained in \cite{KS88}, we have estimated the second-order radiative correction to the quark mass in the ferromagnetic
3-state PFT, which is induced by application of a weak magnetic field $h>0$. To this end, we have truncated the
infinite form factor expansion for the second-order correction to the quark mass at its first term, which
represents fluctuations with two virtual quarks in the intermediate state.
Our result for the corresponding amplitude $a_{2,2}$ defined in \eqref{d22m2} is given in equations
\eqref{Minf}-\eqref{a22num}, \eqref{z11dL}.
To conclude, let us mention two possible directions for further developments.
Though the Bethe-Salpeter equation for the
$q$-state PFT was obtained in \cite{Rut09}, it was not used there for the calculation of the meson mass spectrum.
Instead, the latter was determined in \cite{Rut09} to the leading order in $h$ exploiting solely the zero-field scattering matrix known from
\cite{CZ92}. The integral kernel of the Bethe-Salpeter equation for the $q$-state PFT
contains matrix elements of the spin operator $\sigma_q(0)$ between the two-quark states, that are not known for general $q$. In the case of $q=3$, however, such matrix elements
can be gained from the form factors found by Kirillov and Smirnov \cite{KS88}. This opens up the possibility to use
the Bethe-Salpeter equation for the 3-state PFT for analytical perturbative evaluation of the meson masses
in subleading orders in small $h$. On the other hand, one can also study the magnetic field dependence of
the meson masses in the 3-state PFT at finite magnetic fields by numerical solution of the Bethe-Salpeter equation.
It was shown in \cite{FZ06} that the Bethe-Salpeter equation reproduces surprisingly well
the meson masses in the IFT not only in the limit $h\to0$, but also at finite, and even at large
values of the magnetic field $h$. It would be interesting to check whether the same situation also takes
place in the case of the $3$-state PFT.
Recently, a dramatic effect of the kink confinement on the dynamics following a quantum quench
was reported in \cite{RTak06,Kor16} for the IFT and for its discrete analogue, the Ising chain in both
transverse and longitudinal magnetic fields. It was shown, in particular, that the masses of light mesons
can be extracted from the spectral analysis of the post-quench time evolution of the one-point functions.
It would be interesting to extend these results to the 3-state PFT, in which
both mesons and baryons are allowed.
\section*{Acknowledgements}
\noindent
I am grateful to A.~B.~Zamolodchikov for many important and stimulating discussions on the subject,
and to H.~W.~Diehl for interesting communications and numerous suggestions leading to improvement of the text.
I would like to thank F.~A.~Smirnov for sending me his preprint \cite{KS88}.
In the initial stage, this work was supported by Deutsche Forschungsgemeinschaft (DFG) via Grant
Ru 1506/1.
Spoken documents refer to audio recordings of multi-speaker events like lectures, meetings, news, etc. We use \emph{acoustic domain} as a descriptor for the type of spoken document. Although it sounds similar to \emph{acoustic scene}, it is not so because the latter concerns only the background environment of the audio, which may or may not have speech~\citep{barchiesi2015acoustic}. In spoken documents, speech is the main source of information while other elements such as location and channel form the metadata. Examples of acoustic domains are broadcast news, telephone conversations, web video, movies, and so on, which may have different backgrounds across the data from the same class, or even in the same document. Nowadays, easily accessible and simple-to-use audio recorders have led to an exponential increase in the amount of speech recordings, which are available in myriad conditions in terms of environment, recording equipment, and number of speakers, to name a few. An artificial intelligence system can deal with multi-domain scenarios either by normalizing to the domains, i.e. \textit{domain-invariant processing}, or by adapting to the domains, i.e. \textit{domain-dependent processing}. For diverse and challenging recording conditions, it is difficult to design a system following the first strategy. Therefore, in such cases, a system that identifies the domain and then proceeds with relevant information extraction accordingly is expected to perform better than a \emph{one-size-fits-all} method.
For \emph{rich transcription} (RT) of spoken documents, besides ``what is being spoken" and ``by whom", it is also imperative to know ``who spoke when". The answers to the three questions are provided by automatic speech recognition, speaker recognition, and speaker diarization, respectively. \emph{Speaker diarization} is the task of generating time-stamps with respect to the speaker labels~\citep{Anguera2012, PARK2022101317}. In earlier studies, RT researchers mainly considered recordings of broadcast news, telephone conversations and meetings/conferences~\citep{Anguera2012}. The NIST RT challenges (RT02 - RT09)\footnote{\url{https://www.nist.gov/itl/iad/mig/rich-transcription-evaluation}} were organized for metadata extraction from these acoustic domains. There, the type of data was known beforehand which implies that the diarization systems were designed to suit the application. The factors mainly contributing to poor diarization were number of speakers, speech activity detection, amount of overlap, and speaker modelling \citep{mirghafori2006nuts, huijbregts2007blame, sinclair2013challenges}.
In this paper, we propose a simple but efficient method for acoustic domain identification (ADI). The proposed ADI system uses speaker embeddings. Our study reveals that the i-vector and OpenL3 embedding based methods achieve considerably better performance than the x-vector based approach on the third DIHARD challenge dataset. Of the three, the i-vector system's results were found to be the best. Next, we show that a domain-dependent threshold for speaker clustering helps to improve the diarization performance only when \emph{probabilistic linear discriminant analysis} (PLDA) adaptation uses audio-data from all the domains. The proposed system stands apart from other submissions to the challenge because, unlike most of the top-performers, it is not an ensemble or fusion-based system. Moreover, from the study of the speaker diarization literature, we found that the domain-dependent processing approach is the first of its kind \citep{sahidullah2019speed}.
In the upcoming sections, first, we talk about previous studies relevant to the topic, i.e. acoustic scene classification and speaker diarization in Section~\ref{sec:lit}. Then, Section~\ref{sec:database} presents a detailed description of the speaker diarization data used in our study. The major aspects of the contribution of this work, that is, ADI and classification of domains are discussed in Section~\ref{sec:adi} and Section~\ref{sec:classify} respectively. The experimental setup is outlined in Section~\ref{sec:expt} while the results of the experiments are presented and discussed in Section~\ref{sec:result}. We conclude the paper in Section~\ref{sec:conc}.
\section{Related works}
\label{sec:lit}
In the classical audio processing applications, where speech is regarded as the primary information source, the domain was kept constant to get the best performance. However, in the present times of technological advancements, multi-domain data is unavoidable, whether it is for speech recognition, language recognition, speaker recognition, or speaker diarization. In this paper, we consider the speaker diarization (SD) application. The research in this area took a leap two decades ago with the NIST RT evaluations. Back then, broadcast news (BN), conversational telephone speech (CTS), and meetings/conferences were the only three acoustic domains considered.
The use of \emph{total variability} (TV) space was proposed in \citep{shum2011exploiting} to exploit intra-conversation variability. The same authors later applied Bayesian-\emph{Gaussian mixture model} (GMM) to \emph{principal component analysis} (PCA)-processed i-vectors as a probabilistic approach to speaker clustering \citep{shum2013unsupervised}. The authors of \citep{sell2014speaker} incorporated PLDA for i-vector scoring and used unsupervised calibration of the PLDA scores to determine the clustering stopping criterion. They also showed improved diarization by denser sampling in the i-vector space with overlapping temporal segments. A \emph{deep neural network} (DNN) architecture was introduced by \citep{garcia2017speaker} that learned a fixed-dimensional embedding for variable length acoustic segments along with a scoring function for measuring the likelihood that the segments originated from the same or different speakers. Combination of long short-term memory based speaker-discriminative d-vector embeddings and non-parametric clustering was done by \citep{wang2018speaker} while unbounded interleaved-state \emph{recurrent neural network} (RNN) with d-vectors were employed by \citep{zhang2019fully} for SD.
The annual DIHARD challenges focusing on \emph{hard diarization} revamped SD research. The objective is to build SD systems for spoken documents from diverse and challenging domains where the existing state-of-the-art systems fail to deliver. The first one was held in 2018 \citep{ryant2018first}. The data for this challenge was taken from nine domains. The top performing system \citep{sell2018diarization} explored the i-vector and x-vector representations and found that the latter trained on wideband microphone data was better. They also showed that a \emph{speech activity detection} (SAD) system specifically trained for SD with in-domain data helped in improving performance. For the second challenge, the data was drawn from 11 domains consisting of single-channel and multi-channel audio recordings \citep{ryant2019second}. This version was provided with baseline systems for speech enhancement, SAD and SD. The given SD baseline was based on the top-performing system of its predecessor. The first rank system performed \emph{agglomerative hierarchical clustering} (AHC) over x-vectors with Bayesian hidden Markov model \citep{landini2019but}. In the third challenge, the multichannel audio condition and also the child speech data was not there, whereas CTS data was included for the first time \citep{ryant2020third}. The top performing system had speech enhancement and audio domain classification before SD \citep{wang2021ustc}. It combined multiple front-end techniques, such as speech separation, target-speaker based voice activity detection and iterative data purification. Its SD system was based on DIHARD II's top-ranked system.
The challenges on detection and classification of acoustic scenes and events (DCASE) have rendered a boost to research in \emph{acoustic scene classification} (ASC). Discussion about the state-of-the-art in this field is necessary before we go into the details of ADI because the two bear some degree of conceptual similarity. The baseline systems provided with the ASC task of the DCASE challenges have ranged from \emph{Mel-frequency cepstral coefficients} (MFCC)-GMM based systems \citep{giannoulis2013database, mesaros2016tut}, log mel-band energy with a \emph{multilayer perceptron} (MLP) \citep{mesaros2018acoustic} or with a convolutional neural network (CNN) \citep{mesaros2018multi, mesaros2019acoustic}, to OpenL3 embeddings with two fully-connected feed-forward layers \citep{Heittola2020}. On the other hand, the top-performing systems have shown a variety of approaches to extract as many differences as possible between the given acoustic scenes. These approaches include the use of spectrograms, \emph{recurrence quantification analysis} (RQA) of MFCC, i-vectors, log-mel band energies, and constant-Q transform as features, with support vector machine, MLP, DNN, CNN, RNN, and trident residual neural network as classifiers \citep{Roma2013, Eghbal-Zadeh2016, Mun2017, Sakashita2018, Chen2019, Suh2020}. A few systems have also used data augmentation techniques like generative adversarial network, graph neural network, temporal cropping and mixup \citep{Mun2017, Sakashita2018, Chen2019, Suh2020}. The multi-classifier systems have employed majority vote, average vote, or random forest for decision making \citep{Mun2017, Sakashita2018, Chen2019, Suh2020}. Some other noteworthy experiments have been done with various spectral and time-frequency features, classification techniques along with fusion approaches for ASC \citep{ren2018deep, rakotomamonjy2015histogram, waldekar2018classification, waldekar2020analysis}. Matrix factorization was applied in \citep{bisot2017feature}. i-vector and x-vector embeddings were used with CNN in \citep{dorfer2018acoustic} and \citep{zeinali2018convolutional} respectively.
\section{DIHARD III: A large-scale realistic speech corpora with multiple domains}
\label{sec:database}
In the third edition of the DIHARD challenge series, the DIHARD III data set is taken from eleven diverse domains. This brings a lot of variability in data conditions like background noise, language, source, number of speakers, amount of overlapped speech, and speaker demographics. Table~\ref{Table:dtdesc} enlists the acoustic domains along with their corresponding source corpus, type of recording, and a brief description. The data for `Meeting' and `Sociolinguistic (Field)' are taken from two different speech corpora. The table also shows how the domains differ in terms of the number of speakers per sample and total speakers. The database is divided into development and evaluation sets. Both have 5--10 minute duration samples, except in the case of `Web videos’ where the sample duration ranges from less than 1 min to more than 10 min. Evaluation has to be done on two partitions of the evaluation data: Core evaluation set --- a balanced set where the total duration of each domain is approximately equal, and Full evaluation set --- all the samples are considered for each domain. The development set composition is the same as that of the evaluation set. The metadata provided for each data sample in the development set is its domain, source, language, and whether or not it was selected for the core set. This information was not provided for the evaluation data.
\section{Audio domain analysis of the corpora}
\label{sec:adi}
The heterogeneity in the recording conditions of the audio samples under consideration can be observed from the last two columns of Table~\ref{Table:dtdesc}. To gain better insights into their domain-wise differences, we analyzed the DIHARD III development data's audio samples using three methods: \emph{signal-to-noise ratio} (SNR) estimation, \emph{long-term average spectrum} (LTAS) analysis, and speech-to-non-speech ratio computation.
\begin{landscape}
\thispagestyle{plain}
\begin{table}[t]
\begin{scriptsize}
\begin{center}
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Description of different source domains for the DIHARD III dataset.}
\label{Table:dtdesc}
\begin{tabular}{|c|c|c|c|c|l|}
\hline
\multirow{2}{*}{Domain} & Source & Speakers per & Total & Conversation & Description \\
& Corpora & Recording & Speakers & Type & \\ \hline \hline
\multirow{2}{*}{Audiobooks (Ab)} & \multirow{2}{*}{LIBRIVOX} & \multirow{2}{*}{1} & \multirow{2}{*}{12} & \multirow{2}{*}{Read} & Amateur readers reading aloud passages from public-domain\\ & & & & & English texts. \\ \hline
\multirow{4}{*}{Broadcast} & \multirow{4}{*}{YOUTHPOINT} & \multirow{4}{*}{3-5} & \multirow{4}{*}{46} & & Student-lead radio interviews conducted during the 1970s with \\ \multirow{4}{*}{Interview (Bi)} & & & & Radio & then-popular figures. The audio files are recorded in a studio on\\ & & & & Interview & open reel tapes, and these were digitized and transcribed later\\ & & & & & at LDC. \\
\hline
\multirow{3}{*}{Clinical (Cl)} & \multirow{3}{*}{ADOS} & \multirow{3}{*}{2} & \multirow{3}{*}{96} & \multirow{3}{*}{Interview} & Semi-structured interviews recorded with ceiling-mounted mic\\ & & & & & to identify whether the 2-16 yrs children fit the clinical autism \\ & & & & & diagnosis \\ \hline
\multirow{4}{*}{Courtroom (Ct)} & \multirow{4}{*}{SCOTUS} & \multirow{4}{*}{5-10} & \multirow{4}{*}{83} & \multirow{4}{*}{Argument} & Oral arguments from the U.S. Supreme Court (2001 term). The\\ & & & & & table-mounted speaker-controlled mics' outputs were summed\\ & & & & & and recorded on a single-channel reel-to-reel analog tape recorder.\\ \hline
\multirow{1}{*}{CTS} & \multirow{1}{*}{FISHER} & \multirow{1}{*}{2} & \multirow{1}{*}{122} & \multirow{1}{*}{Telephonic} & Ten min conversations between two native English speakers. \\ \hline
\multirow{4}{*}{Map Task (Mt)} & \multirow{4}{*}{DCIEM } & \multirow{4}{*}{2} & \multirow{4}{*}{46} & \multirow{4}{*}{Formal} & Pairs of speakers sat opposite each other. One communicated a\\ & & & & & route marked on a map so that the other may precisely reproduce\\ & & & & & it on his map. Speech recorded on separate mics was mixed later.\\ \hline
\multirow{3}{*}{Meeting (Mg)} & \multirow{3}{*}{RT04/ROAR} & \multirow{3}{*}{3-10} & \multirow{3}{*}{75} & & Highly interactive recordings consisting of large amounts of spon-\\ & & & & {Formal} &-taneous speech with frequent interruptions and overlaps, each\\ & & & & & recorded with a centrally located distant mic with small gain.\\ \hline
\multirow{3}{*}{Restaurant (Rt)} & \multirow{3}{*}{CIR} & \multirow{3}{*}{5-8} & \multirow{3}{*}{86} & Interactive & Mix of the two channels of binaural mic recordings. Highly \\ & & & & Informal & frequent overlap speech and interruptions along with speech\\ & & & & & from nearby tables and restaurant-typical background noise.\\ \hline
{Sociolinguistic} & \multirow{3}{*}{SLX/DASS} & \multirow{3}{*}{2-6} & \multirow{3}{*}{42} & \multirow{3}{*}{Interview} & An interviewer tried to elicit vernacular speech from an informant\\ (Field) (Sf) & & & & & during informal conversation. Most recordings are at home, while\\ & & & & & some are at public places (park or cafe). \\ \hline
{Sociolinguistic} & \multirow{2}{*}{MIXER6} & \multirow{2}{*}{2} & \multirow{2}{*}{32} & \multirow{2}{*}{Interview} & Controlled environment interviews recorded under quiet\\ (Lab) (Sl) & & & & & conditions. \\ \hline
\multirow{2}{*}{Web Video (Wv)} & {Video-sharing} & \multirow{2}{*}{1-9} & \multirow{2}{*}{127} & \multirow{2}{*}{Variable} & Collection of amateur videos on diverse topics and recording \\ & sites & & & & conditions in English and Mandarin. \\ \hline
\end{tabular}
\end{center}
\end{scriptsize}
\end{table}
\end{landscape}
\subsection{Signal-to-noise ratio (SNR) analysis}
SNR is the most popular measure to quantify the presence of noise in signals. In SD, noisier audio samples are expected to be difficult to diarize. \emph{Waveform amplitude distribution analysis} (WADA) SNR estimation is a well-known technique for SNR estimation. It assumes that speech is totally independent of the background noise and clean speech always follows a gamma distribution with a fixed shaping parameter (between 0.4 and 0.5), whereas background noise follows Gaussian distribution \citep{kim2008robust}. The result of this estimation for the data in hand is shown in Fig.~\ref{Fig:SNR}. `CTS' and `Map task' show considerable variation in values and are on the higher side in the box-plot. This could be attributed to having only two speakers speaking on separate channels. `Meeting' and `Restaurant' have spontaneous speech from multiple speakers in noisy environment, recorded with one mic contributing to the low SNR.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{SNRBoxPlot.eps}
\caption{SNR distribution of DIHARD III development data.}
\label{Fig:SNR}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{LTASDomainwise.eps}
\caption{Comparison of long-term average spectrum (LTAS) of speech files from each domain.}
\label{Fig:LTAS}
\end{figure}
\subsection{Long-term average spectrum (LTAS) analysis}
With LTAS, we are trying to gain knowledge of the spectral distribution of the signals over a period of time \citep{lofqvist1986long}. LTAS tends to average out segmental variations \citep{assmann2004perception}. This information is displayed in Fig.~\ref{Fig:LTAS} for the DIHARD III development data in a domain-wise manner. All the plots show the typical low-frequency concentration of speech energy, with little energy below 300~Hz. The peculiar behaviour of `CTS' beyond 4~kHz arises because telephone audio is sampled at 8~kHz. Controlled and quiet recording conditions like those of `Map task' and `Sociolinguistic (Lab)' show lower LTAS values. Note that the latter had low SNR too. On the other hand, the multi-speaker and noisy environments of `Courtroom', `Restaurant', and `Web videos' exhibit higher magnitudes.
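For reference, LTAS curves of the kind shown in Fig.~\ref{Fig:LTAS} can be obtained with a few lines of Python. The sketch below (our illustration, with a hypothetical file name; it is not the exact script used for the figure) uses Welch averaging:
\begin{verbatim}
import numpy as np
import soundfile as sf
from scipy.signal import welch

def ltas_db(wav_path, n_fft=2048):
    """Long-term average spectrum in dB, averaged over the whole recording."""
    x, sr = sf.read(wav_path)
    if x.ndim > 1:                  # mix down multi-channel recordings
        x = x.mean(axis=1)
    f, pxx = welch(x, fs=sr, nperseg=n_fft, noverlap=n_fft // 2)
    return f, 10.0*np.log10(pxx + 1e-12)

# f, spec = ltas_db("DH_DEV_0001.flac")   # hypothetical file name
\end{verbatim}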
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{SNSPercentageDomainWise.eps}
\caption{Domain-wise percentage of Speech to non-speech average duration for DIHARD III development data.}
\label{Fig:hist_sp2nsp}
\end{figure}
\subsection{Speech-to-nonspeech ratio}
The final goal of this work is to assign time-stamps to spoken documents. So, the information regarding the amount of speech in the audio samples would be worthwhile. The speech-to-nonspeech ratio offers an acoustic representation of the language in daily conversations. It helps obtain additional information on the spectral energy distribution of a speech signal in a longer speech sample. As can be seen in Fig.~\ref{Fig:hist_sp2nsp}, around 20\% of the recorded signals from most of the domains are non-speech. The reason for the exceptional behaviour of `Clinical' signals could be the unusual language use by children with autism, non-autistic language disorders, and ADHD. For `Courtroom' the high nonspeech value could be attributed to the participants' control over the table-top microphone and the overall noisy environment.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{overlapDurDomainWise.eps}
\caption{Percentage of overlapped speech against non-overlapped speech averaged across all the samples of each domain for DIHARD III development data.}
\label{Fig:hist_olsp}
\end{figure}
\subsection{Percentage of overlapped speech}
The presence of overlapped speech in a spoken document is a serious factor affecting the performance of a diarization system. The SD systems are generally designed to assign one speaker label to a segment. Thus, when more than one speaker is present, the other speaker's or speakers' speech contributes to the \emph{missed detection rate} (MDR) component of the DER. A system having the capacity to assign multiple labels would also find it challenging to characterize particular speakers when their speech is superimposed. As can be seen in Fig.~\ref{Fig:hist_olsp}, the amount of overlapped speech varies with the domain. `Restaurant' has the highest value because many speakers are involved in informal conversation. In `Meeting', although the conversation is formal, the speakers can still interrupt each other, resulting in a high percentage of overlapped speech. In spite of multiple speakers arguing in `Courtroom', the reason for its low amount of overlapped speech is the same as that for its high proportion of nonspeech.
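The speech-to-nonspeech ratio of the previous subsection and the overlap percentage used here can both be computed directly from the reference RTTM annotations by counting how many speakers are active at each instant. A sketch follows (ours; the 10~ms grid is an arbitrary but sufficiently fine resolution, and the file name is hypothetical):
\begin{verbatim}
import numpy as np

def rttm_stats(rttm_path, total_dur):
    """Percentage of speech, and of overlap within speech, from an RTTM."""
    segs = []
    with open(rttm_path) as fh:
        for line in fh:
            parts = line.split()
            if parts and parts[0] == "SPEAKER":
                onset, dur = float(parts[3]), float(parts[4])
                segs.append((onset, onset + dur))
    # number of active speakers on a 10 ms grid
    grid = np.zeros(int(total_dur*100) + 1)
    for on, off in segs:
        grid[int(on*100):int(off*100)] += 1
    speech_pct = 100.0*np.mean(grid >= 1)
    overlap_pct = 100.0*np.mean(grid[grid >= 1] >= 2)  # assumes some speech
    return speech_pct, overlap_pct

# speech, ovl = rttm_stats("DH_DEV_0001.rttm", total_dur=600.0)  # hypothetical
\end{verbatim}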
\section{Baseline Diarization System}
\label{baseline}
A general SD system consists of an SAD, a segmentation, and a clustering module. In this work, we are concerned only with identifying the various domains of spoken documents and hence have considered only Task 1 of DIHARD III \citep{ryant2020third}, where the reference SAD was given. The diarization baseline provided with this challenge \citep{ryant2020thirdpaper}, which was based on one of the submissions of the predecessor challenge \citep{singh2019leap}, was used to benchmark our proposed SD system.
The baseline system extracts 512-dimensional x-vectors using 30-dimensional MFCCs from 25~ms frames with an overlap of 15~ms. Three-second sliding windows are used for mean normalization. VoxCeleb 1 \citep{nagrani2017voxceleb} and VoxCeleb 2 \citep{chung2018voxceleb2} are combined and augmented with additive noise from MUSAN \citep{snyder2015musan} and reverberation from RIR datasets \citep{ko2017study} for training. Segments of duration 1.5~sec with a shift of 0.25~sec are used for embedding extraction during testing. Statistics estimated from the DIHARD III Dev and Eval sets are used for centering and whitening of the embeddings, after which they are length-normalized.
Before scoring with a Gaussian PLDA model \citep{prince2007probabilistic} trained on the x-vectors extracted from VoxCeleb, conversation-dependent PCA \citep{zhu2016online} preserving 30\% of the total variability is used for dimensionality reduction. Clustering is done using AHC with the minimum DER on the Dev set as the stopping criterion \citep{han2008strategies}. The output is then refined by using \emph{variational Bayes hidden Markov model} (VB-HMM) re-segmentation \citep{diez2019analysis}, which is initialized separately for each recording and run for one iteration. For this, 24-dimensional MFCCs are used without mean or variance normalization and without delta coefficients, extracted from 15~ms frames with 5~ms overlap. A GMM-\emph{universal background model} (UBM) with 1024 diagonal-covariance components and a TV matrix with 400 eigenvoices, both trained with the x-vector extractor data, are used. The zeroth-order statistics are boosted before the VB-HMM likelihood computation for posterior scaling to reduce frequent speaker transitions \citep{singh2019leap}.
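The clustering stage of this pipeline can be expressed compactly with standard tooling. The sketch below is a simplified stand-in for the baseline, not a reimplementation of it: it scores length-normalized embeddings with plain cosine distance instead of conversation-dependent PCA followed by PLDA, and applies AHC with a distance threshold.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def ahc_labels(embeddings, threshold):
    """AHC over segment embeddings; returns one speaker label per segment."""
    emb = embeddings/np.linalg.norm(embeddings, axis=1, keepdims=True)
    tree = linkage(pdist(emb, metric="cosine"), method="average")
    return fcluster(tree, t=threshold, criterion="distance")

# toy usage: 20 segments of 512-dim 'x-vectors'
rng = np.random.default_rng(0)
print(ahc_labels(rng.standard_normal((20, 512)), threshold=0.9))
\end{verbatim}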
\section{Domain classification}
\label{sec:classify}
The framework of proposed ADI system is shown in Fig.~\ref{Fig:BlockADI}. It is based on the speaker embeddings as recording-level feature and nearest neighbor classifier. Though the speaker embeddings are principally developed for speaker characterization, they also capture information related to acoustic scene~\citep{zeinali2018convolutional}, recording session~\citep{Probing2019}, and channel~\citep{Wang2017}. In this work, we studied two frequently used speaker embeddings: discriminatively trained x-vectors and generative i-vectors. In an earlier study, these speaker embeddings were investigated for the second DIHARD dataset on related tasks~\citep{sahidullah2019speed, fennir2020acoustic}. Owing to the similarity of ADI with ASC, we additionally investigated L3-net embeddings, which were a part of the baseline system provided with the ASC task of the DCASE 2020 challenge \citep{Heittola2020}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{ADI-Blockdia.png}
\caption{Block diagram of the proposed acoustic domain identification system.}
\label{Fig:BlockADI}
\end{figure*}
\subsection{i-vector embeddings}
Speaker diarization involves splitting multi-speaker spoken documents according to speaker changes and segment clustering. Since it closely resembles speaker recognition, the latter's algorithms are applied to the former with good results. i-vectors are one such example~\citep{dehak2010front}. Speaker recognition systems work in two steps, enrolment and recognition. During the speaker enrolment process, a UBM~\citep{reynolds2000speaker} is generated using the data collected from non-target utterances. A target model is generated by adapting the UBM according to the target data. Whenever any test utterance is given as input to the system, features are extracted from it, and pattern matching algorithms are applied using one or more kinds of target models. The i-vector approach is designed to improve upon joint factor analysis~\citep{kenny2007joint} by combining the inter- and intra-domain variability and modeling them in the same low-dimensional total variability space. This space $\mathbf{T}$ accounts for both speaker and channel variability.
The speaker and channel dependent supervector $\mathbf{M}$ is given by
\begin{equation} \label{eqn1}
\mathbf{M} = \mathbf{m}+\mathbf{T}\mathbf{x}
\end{equation}
\noindent If $C$ is the number of components in UBM and $F$ is the dimension of acoustic feature vectors then for a given utterance, we concatenate the $F$-dimensional GMM mean vectors to get the speaker and channel independent UBM supervectors $\mathbf{m}$ of dimension $CF$. The normally distributed random vectors $\mathbf{x}$ are known as identity vectors or i-vectors. It is assumed that $\mathbf{M}$ is distributed normally with $\mathbf{T}\mathbf{T}^{\top}$ as its covariance matrix and $\mathbf{m}$ as its mean vector. For training of the matrix $\mathbf{T}$, it is assumed that the utterances of the same speaker are produced by different speakers.
\subsection{x-vector embeddings}
x-vectors represent a recent but popular approach from speaker recognition research~\citep{snyder2018x, BAI202165}. Here, long-term speaker characteristics are captured in a feed-forward deep neural network by a temporal pooling layer that aggregates over the input speech \citep{snyder2017deep}. The network has layers that operate on speech frames, a statistics pooling layer that aggregates over the frame-level representations, additional layers that operate at the segment-level, and a softmax output layer at the end. The nonlinearities are \emph{rectified linear units} (ReLUs).
Suppose an input segment has $K$ frames. The first five layers operate on speech frames, with a small temporal context centered at the current frame. The statistics pooling layer aggregates all $K$ frame-level outputs from the fifth frame-level layer and computes their mean and standard deviation. This process aggregates information across the time dimension so that subsequent layers operate on the entire segment. The mean and standard deviation are concatenated together and propagated through the segment-level layers, and finally, the softmax output layer.
The goal of training the network is to produce embeddings that generalize well to speakers that have not been seen in the training data. The embeddings should capture speaker characteristics over the entire utterance, rather than at the frame-level. After training, the embeddings extracted from the affine component of the first segment-level layer are referred to as x-vectors. The pre-softmax affine layer is not considered because of its large size and dependence on the number of speakers.
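The statistics pooling step, which turns a variable number of frames into a fixed-length representation, is simple enough to state in full; a minimal NumPy sketch of it reads:
\begin{verbatim}
import numpy as np

def stats_pooling(frame_outputs, eps=1e-9):
    """Concatenate mean and standard deviation over time: (K, D) -> (2D,)."""
    mu = frame_outputs.mean(axis=0)
    sd = np.sqrt(frame_outputs.var(axis=0) + eps)  # eps guards short segments
    return np.concatenate([mu, sd])

# K = 300 frames of a 512-dim frame-level representation -> (1024,) vector
pooled = stats_pooling(np.random.default_rng(0).standard_normal((300, 512)))
\end{verbatim}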
\subsection{L3-net embeddings}
The look, listen and learn (L3)-network embeddings were proposed for learning \emph{audio-visual correspondence} (AVC) in videos in a self-supervised manner \citep{arandjelovic2017look}. The goal of AVC learning was to \emph{learn} audio and visual semantic information simultaneously by simply \emph{watching} and \emph{listening} to many unlabelled videos. Without needing annotated data, and with a relatively simple convolutional architecture, L3-net successfully produced powerful embeddings that led to state-of-the-art sound classification performance. Thus, it can be applicable to a wide variety of machine learning tasks with scarce labeled data. However, in \citep{arandjelovic2017look}, non-trivial parameter choices that may affect the performance and computation cost of the model were not elaborated. This issue was addressed in \citep{cramer2019look} in the context of sound event classification. The authors worked with three publicly available environmental audio datasets and found that the mel-spectrogram gave a better audio representation than the linear spectrogram used in \citep{arandjelovic2017look}. Their results also indicated that for a downstream classification task, using the audio content that makes the embeddings most discriminative, independently of the downstream domain, is more important. Regarding the training data, it was suggested that at least 40M samples should be used to train the L3-Net embedding.
\par
We first computed the average of the embeddings for each domain and stored them as the trained domain models. We calculated the cosine similarity during testing and assigned the class label with the highest similarity.
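The resulting classifier is thus a nearest-centroid rule under cosine similarity. A minimal sketch (ours, with hypothetical variable names) is given below:
\begin{verbatim}
import numpy as np

def train_centroids(embeddings, domains):
    """Average the recording-level embeddings of each domain."""
    return {d: np.mean([e for e, dd in zip(embeddings, domains) if dd == d],
                       axis=0)
            for d in set(domains)}

def classify(embedding, centroids):
    """Return the domain whose centroid has the highest cosine similarity."""
    cos = lambda a, b: float(a @ b/(np.linalg.norm(a)*np.linalg.norm(b)))
    return max(centroids, key=lambda d: cos(embedding, centroids[d]))

# toy usage with random stand-ins for utterance-level embeddings
rng = np.random.default_rng(1)
embs = [rng.standard_normal(512) for _ in range(6)]
doms = ["cts", "cts", "meeting", "meeting", "web", "web"]
print(classify(embs[0], train_centroids(embs, doms)))
\end{verbatim}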
\section{Experimental setup}
\label{sec:expt}
The ADI experiments were performed on the development set of 254 speech utterances from 11 domains. We randomly selected 200 utterances for training and used the remaining 54 for test. For similarity measurement between training and test data, \emph{cosine similarity} was employed. The experiments were repeated 1000 times.
To extract utterance-level embeddings for the ADI task, we used pre-trained x-vector and i-vector models trained on VoxCeleb audio-data\footnote{\url{https://kaldi-asr.org/models/m7}}. Pre-trained versions of the L3-Net variants studied in \citep{cramer2019look} are made freely available online by the name OpenL3\footnote{\url{https://github.com/marl/openl3}}. Open L3 embeddings were used as feature representation for the baseline system provided with the DCASE 2020's ASC task \citep{Heittola2020}. The system had a one-second analysis window, with a 100~ms hop, 256 mel filters for input representation, content type music, and a 512-dimensional embedding. We used the same for our ADI task.
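For completeness, the following sketch shows one way to obtain such OpenL3 embeddings with the parameters just listed; the file name is hypothetical, and averaging the frame-level embeddings into a recording-level vector is our simplification of the pooling step:
\begin{verbatim}
import openl3
import soundfile as sf

audio, sr = sf.read("DH_DEV_0001.flac")   # hypothetical file name
emb, ts = openl3.get_audio_embedding(
    audio, sr,
    input_repr="mel256",                  # 256 mel filters
    content_type="music",
    embedding_size=512,
    hop_size=0.1,                         # 1 s window, 100 ms hop
)
utt_embedding = emb.mean(axis=0)          # recording-level vector
\end{verbatim}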
We performed domain-dependent speaker diarization on the evaluation set of DIHARD III data by first predicting the domain for every utterance using the ADI system and grouping the utterances according to the predicted domains. i-vector embeddings manifest superior domain-clustering abilities (refer Fig.~\ref{Fig:tSNE} and Fig.~\ref{Fig:ADIaccuracy} in Section~\ref{sec:result_adi}), so we resort to them for ADI. We used domain-specific thresholds for speaker clustering in the diarization pipeline. Towards this, we computed the domain-specific speaker diarization performance for different thresholds on the development set. We selected the threshold value that shows the best SD performance for a specific subset.
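Conceptually, the per-domain threshold selection reduces to a grid search on the development set. A schematic sketch follows, where \texttt{der\_fn} stands in for the full diarization-plus-scoring pipeline, which is external to this illustration:
\begin{verbatim}
import numpy as np

def pick_thresholds(dev_by_domain, thresholds, der_fn):
    """Per-domain AHC threshold minimizing the mean development-set DER.

    dev_by_domain: {domain: list of recordings};
    der_fn(recording, threshold) -> DER of the pipeline at that threshold."""
    return {dom: min(thresholds,
                     key=lambda t: np.mean([der_fn(r, t) for r in recs]))
            for dom, recs in dev_by_domain.items()}

# toy check with a synthetic DER surface whose optimum differs per domain
toy_der = lambda r, t: (t - r)**2
dev = {"cts": [0.3, 0.35], "meeting": [0.7]}
print(pick_thresholds(dev, [0.1*k for k in range(1, 10)], toy_der))
\end{verbatim}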
For diarization, our experimental setup is based on the baseline system created by the organizers~\citep{ryant2020thirdpaper}. We have used the toolkit\footnote{\url{https://github.com/dihardchallenge/dihard3_baseline}} with the same frame-level acoustic features, embedding extractor, scoring method, etc. The ADI system is trained with all 254 development utterances for identifying the domains of the evaluation data samples.
\begin{figure*}[t]
\centering
\includegraphics[trim={1cm 0 0 0},clip,width=0.95\linewidth]{tsne.eps}
\caption{Domain-wise scatter plot of x-vector and i-vector features.}
\label{Fig:tSNE}
\end{figure*}
The primary evaluation metric of the challenge is \emph{diarization error rate} (DER), which is the sum of all the errors associated with the diarization task. \emph{Jaccard error rate} (JER) is the secondary evaluation metric, and it is based on the Jaccard similarity index. The details of the challenge with rules are available in the challenge evaluation plan~\citep{ryant2020third}.
\begin{figure}[t]
\centering
\includegraphics[trim={0 1.1cm 0 0},clip,width=4.7in]{ADIModelComp.eps}
\caption{Acoustic domain identification performance in terms of accuracy, F1-score, unweighted average recall (UAR), Matthews correlation coefficient (MCC) using i-vector, x-vector, and OpenL3 embeddings.}
\label{Fig:ADIaccuracy}
\end{figure}
\section{Results}
\label{sec:result}
\subsection{ADI experiments}
\label{sec:result_adi}
First, we present in Fig.~\ref{Fig:tSNE} the domain-wise clustering ability of the two speaker embeddings: i-vectors and x-vectors. The scatter-plots show that `CTS' samples are well-separated from the rest of the data because this subset consists of narrowband telephone speech, whereas the others are from wideband speech. We also observe that i-vector embeddings have done better for the other domains than the x-vector embeddings. However, for `Sociolinguistic (Field)', the clusters are not well represented by both the embeddings, likely due to the within-class variability in the subset (refer Table~\ref{Table:dtdesc}, Row 9).
We carried out ADI with the three embeddings described in Section~\ref{sec:classify}. Figure~\ref{Fig:ADIaccuracy} shows the corresponding performance comparison. Besides the commonly used classification accuracy, we also reported \emph{unweighted average recall} (UAR), \emph{F1-score}, and \emph{Matthews correlation coefficient} (MCC), because of the unbalanced nature of the data. The figure shows that the i-vector and OpenL3 systems are substantially better than the x-vector system for ADI on the DIHARD III dataset. With the present experimental setup, i-vectors surpass the other two embeddings on all four metrics. For instance, the average domain classification accuracy over 1000 repetitions was 91.11\%, 72.98\%, and 89.64\% for the i-vector, x-vector, and OpenL3 systems, respectively. The poor performance of x-vectors could be because they are originally designed to better capture speaker characteristics over the entire utterance. However, here we are trying to classify domains where one utterance may have multiple speakers, and the same domain data may not have common speakers.
The ADI results over evaluation data of the DIHARD III challenge for the three embeddings are displayed in Table~\ref{Table:adi_eval}. Here, we notice that the four metrics follow the same pattern as in Fig.~\ref{Fig:ADIaccuracy}, but the difference in the performance of the embeddings is more prominently visible. Based on these values, we chose i-vectors for ADI and proceeded to the speaker diarization stage.
\begin{table}[]
\caption{Performance of ADI system on DIHARD III evaluation for i-vector, x-vector, and OpenL3 based systems.}
\label{Table:adi_eval}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\hline
& Accuracy (\%) & F1-Score (\%) & UAR (\%) & MCC (\%) \\ \hline
\hline
i-vector & \textbf{86.49} & \textbf{80.26} & \textbf{82.76} & \textbf{80.24} \\ \hline
x-vector & 69.88 & 56.61 & 60.14 & 55.85 \\ \hline
OpenL3 & 75.29 & 72.84 & 73.43 & 71.66 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!t]
\caption{The impact of domain-dependent processing on speaker diarization performance (DER in \%/JER in \%) for different acoustic domains of the development set of the third DIHARD challenge. (P1: Domain-dependent threshold and PLDA adaptation, P2: Domain-dependent threshold but PLDA adaptation with full data.)}
\begin{scriptsize}
\begin{center}
\label{Table:ResultsSubset}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& & \multicolumn{2}{c|}{First Pass} & \multicolumn{2}{c|}{Re-segmentation with VB-HMM} \\ \cline{3-6}
\multirow{-2}{*}{Domain} & \multirow{-2}{*}{Method} & Full & Core & Full & Core \\ \hline \hline
& Baseline & 4.95 / 4.81 & 4.95 / 4.81 & 4.55 / 4.45 & 4.55 / 4.45 \\
& P1 & \textbf{0.00 / 0.00} & \textbf{0.00 / 0.00} & \textbf{0.40 / 0.40} & \textbf{0.40 / 0.40} \\
\multirow{-3}{*}{Audiobooks} & P2 & \textbf{0.00 / 0.00} & \textbf{0.00 / 0.00} & \textbf{0.40 / 0.40} & \textbf{0.40 / 0.40} \\ \hline
& Baseline & 3.75 / 25.62 & 3.75 / 25.62 & 3.56 / 23.83 & 3.56 / 23.83 \\
& P1 & 6.47 / 39.38 & 6.47 / 39.38 & 6.10 / 38.98 & 6.10 / 38.98 \\
\multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}Broadcast\\ interview\end{tabular}} & P2 & \textbf{3.51 / 24.05} & \textbf{3.51 / 24.05} & \textbf{3.29 / 22.27} & \textbf{3.29 / 22.27} \\ \hline
& Baseline & 17.55 / 28.88 & 16.08 / 26.38 & 17.69 / 28.46 & 16.72 / 27.21 \\
& P1 & 20.06 / 29.92 & 18.88 / 28.11 & 16.71 / 25.50 & 15.21 / 23.66 \\
\multirow{-3}{*}{Clinical} & P2 & \textbf{15.81 / 23.69} & \textbf{14.61 / 22.37} & \textbf{14.69 / 22.07} & \textbf{13.78 / 21.59} \\ \hline
& Baseline & 10.81 / 38.75 & 10.81 / 38.75 & 10.17 / 37.63 & 10.17 / 37.63 \\
& P1 & 12.19 / 43.99 & 12.19 / 43.99 & 9.03 / 37.97 & 9.03 / 37.97 \\
\multirow{-3}{*}{Courtroom} & P2 & \textbf{5.82 / 23.91} & \textbf{5.82 / 23.91} & \textbf{4.77 / 22.22} & \textbf{4.77 / 22.22} \\ \hline
& Baseline & 22.25 / 27.85 & 23.48 / 28.41 & 18.38 / 22.32 & 18.84 / 22.37 \\
& P1 & 20.22 / 27.73 & 20.28 / 27.18 & 17.31 / 22.31 & 16.89 / 21.07 \\
\multirow{-3}{*}{CTS} & P2 & \textbf{19.23 / 25.93} & \textbf{19.81 / 25.96} & \textbf{16.55 / 20.94} & \textbf{17.10 / 21.20} \\ \hline
& Baseline & 9.92 / 18.90 & 9.92 / 18.90 & 8.75 / 16.66 & 8.75 / 16.66 \\
& P1 & 12.19 / 25.75 & 12.19 / 25.75 & 9.23 / 20.25 & 9.23 / 20.25 \\
\multirow{-3}{*}{Map Task}& P2 & \textbf{9.21 / 18.20} & \textbf{9.21 / 18.20} & \textbf{8.50 / 16.50} & \textbf{8.50 / 16.50} \\ \hline
& Baseline & \textbf{29.65} / 51.57 & \textbf{29.65} / 51.57 & 28.80 / 51.01 & 28.80 / 51.01 \\
& P1 & 31.31 / 52.71 & 31.31 / 52.71 & 29.35 / 50.85 & 29.35 / 50.85 \\
\multirow{-3}{*}{Meeting} & P2 & 29.80 / \textbf{50.72} & 29.80 / \textbf{50.72} & \textbf{28.62 / 50.09} & \textbf{28.62 / 50.09} \\ \hline
& Baseline & 46.99 / 73.80 & 46.99 / 73.80 & 46.60 / 74.83 & 46.60 / 74.83 \\
& P1 & 48.68 / 74.11 & 48.68 / 74.11 & 46.40 / 73.83 & 46.40 / 73.83 \\
\multirow{-3}{*}{Restaurant} & P2 & \textbf{45.34 / 70.49} & \textbf{45.34 / 70.49} & \textbf{45.34 / 72.98} & \textbf{45.34 / 72.98} \\ \hline
& Baseline & 17.67 / \textbf{49.48} & 17.67 / \textbf{49.48} & 16.24 / \textbf{47.68} & 16.24 / \textbf{47.68} \\
& P1 & 23.06 / 60.61 & 23.06 / 60.61 & 21.53 / 58.31 & 21.53 / 58.31 \\
\multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}Sociolingusitic\\ (Field)\end{tabular}} & P2 & \textbf{17.46} / 51.19 & \textbf{17.49} / 51.19 & \textbf{16.16} / 49.16 & \textbf{16.16} / 49.16 \\ \hline
& Baseline & 11.97 / 17.65 & 11.97 / 17.65 & 10.33 / 14.94 & 10.33 / 14.94 \\
& P1 & 13.60 / 23.84 & 13.60 / 23.84 & 11.41 / 18.70 & 11.41 / 18.70 \\
\multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}Sociolinguistic\\ (Lab)\end{tabular}} & P2 & \textbf{10.38 / 15.92} & \textbf{10.38 / 15.92} & \textbf{9.21 / 13.87} & \textbf{9.21 / 13.87} \\ \hline
& Baseline & 40.99 / 76.97 & 40.99 / 76.97 & \textbf{40.06 / 76.77} & \textbf{40.06 / 76.77} \\
& P1 & 41.60 / 79.93 & 41.60 / 79.93 & 40.97 / 79.66 & 40.97 / 79.66 \\
\multirow{-3}{*}{Web Video} & P2 & \textbf{40.14 / 79.29} & \textbf{40.14 / 79.29} & 40.25 / 79.38 & 40.25 / 79.38 \\ \hline
\end{tabular}
\end{center}
\end{scriptsize}
\end{table}
\subsection{Speaker diarization experiments: Development set}
\label{sec:resultsanalysis}
In our first SD experiment with the baseline system, we evaluated the development set of the DIHARD III dataset and analyzed the diarization performance for each domain separately. The top row for each domain listed in Table~\ref{Table:ResultsSubset} shows the domain-wise baseline results in terms of DER and JER. As expected, we notice that the SD performance varied with domains. The DER values range from less than $4\%$ for the `Broadcast Interview' domain to greater than $46\%$ for the `Restaurant' domain.
Next, we performed the diarization experiments with domain-dependent processing. We selected a domain-dependent threshold for speaker clustering. In addition, for PLDA adaptation during the scoring process, we considered audio data from specific domains only. This system is referred to as \textbf{P1} in Table~\ref{Table:ResultsSubset}. To our surprise, this degraded the performance compared to the baseline for most of the domains. We hypothesize that the limited speaker variability in domain-wise audio data (refer to Table~\ref{Table:dtdesc}) could be a factor, as we used this data as in-domain target data for PLDA adaptation. The PLDA adaptation with domain-specific data was helpful for `Audiobooks' and `CTS' because the two domains are substantially different from the others. For example, `Audiobooks' consists of high-quality recordings with a single speaker, and `CTS' consists of narrow-band telephone speech with two speakers.
In the next set of SD experiments, we performed PLDA adaptation with audio data from all the eleven subsets while applying domain-specific thresholds for speaker clustering. We refer to this method as \textbf{P2} in Table~\ref{Table:ResultsSubset}. The noticeable improvement in `Courtroom' audio diarization could be due to the following reason. As pointed out earlier in Section~\ref{sec:adi}, despite involving multiple speakers in argument, `Courtroom' has less overlapped speech and a low speech-to-nonspeech ratio. Another interesting characteristic of this domain's data is its high spectral energy, even at high frequencies, along with only a slight variation in the SNR. The minor improvement in the DER and the worsened JER for `Sociolinguistic (Field)' reiterate the poor clustering of this domain's samples (refer to Fig.~\ref{Fig:tSNE}). A similar argument applies to the trends of the results for `Meeting' and `Web Video.'
After the first pass for the `Full' data condition, we observe that `Audiobooks,' `Courtroom,' and `CTS' exhibit major relative improvements over the baseline system of 100\%, 46.16\%, and 13.57\% in DER, respectively. However, following re-segmentation (second pass), the largest improvements in DER are exhibited by `Audiobooks' (91.21\%), `Courtroom' (53.09\%), and `Clinical' (16.96\%). These domains have less speaker overlap and lower speech-to-nonspeech ratios, except for the `CTS' domain. On the other hand, poor relative improvements of -0.50\%, 1.19\%, and 2.07\% in DER are registered for `Meeting,' `Sociolinguistic (Field),' and `Web Video,' respectively, after the first pass of diarization. The same domains show low relative DER improvements of 0.63\%, 0.49\%, and -0.47\%, respectively, after the second pass of diarization. This can be explained by the fact that these domains have considerable overlapping speech and poor SNR. For most domains, the relative improvement is higher after the first pass than after the second because the first-pass clustering was already performed at a near-ideal threshold, leaving little room for improvement during re-segmentation.
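The relative improvements quoted in this section follow the usual convention, which the minimal Python sketch below makes explicit; the function name is ours, and the worked value reuses the first-pass `Courtroom' DER figures from Table~\ref{Table:ResultsSubset}.

\begin{verbatim}
# Relative improvement (in %) of a proposed DER over a baseline DER.
# Positive values mean the proposed system is better; negative values
# (as for `Meeting' after the first pass) mean it is worse.
def relative_improvement(baseline_der, proposed_der):
    return 100.0 * (baseline_der - proposed_der) / baseline_der

print(round(relative_improvement(10.81, 5.82), 2))  # Courtroom: 46.16
\end{verbatim}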
Our detailed analysis of the development set results indicates that the \textbf{P2} method shows reduced DER/JER values for eight out of eleven domains. This confirms our hypothesis that a domain-dependent threshold for speaker clustering helps. We subsequently used the \textbf{P2} method for diarization of the evaluation set.
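To make the \textbf{P2} logic concrete, the following Python sketch outlines domain-dependent agglomerative hierarchical clustering (AHC). It is an illustration rather than our actual pipeline, and the values in \texttt{DOMAIN\_THRESHOLD} are hypothetical placeholders, not the tuned thresholds.

\begin{verbatim}
# Sketch of P2 (requires scikit-learn >= 1.2): PLDA adaptation uses the
# full data, while the AHC stopping threshold depends on the domain.
# The threshold values below are hypothetical placeholders.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

DOMAIN_THRESHOLD = {"audiobooks": 0.40, "cts": 0.15, "meeting": 0.05}

def cluster_segments(plda_scores, domain):
    # plda_scores: (n_segments, n_segments) PLDA similarity matrix.
    # AHC runs on distances (negated similarities) and is cut at the
    # domain-specific threshold.
    ahc = AgglomerativeClustering(
        n_clusters=None,
        metric="precomputed",
        linkage="average",
        distance_threshold=-DOMAIN_THRESHOLD[domain],
    )
    return ahc.fit_predict(-np.asarray(plda_scores))
\end{verbatim}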
\iffalse
\begin{table}[h]
\caption{Results showing the impact of segmentation parameters on speaker diarization performance (DER in \%/JER in \%) on development set of third DIHARD challenge.}
\begin{footnotesize}
\begin{center}
\label{Table:ResultsSubmission}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
Segmentation parameters & \multicolumn{2}{|c|}{First Pass} & \multicolumn{2}{|c|}{Re-segmentation with VB-HMM}\\
\cline{2-5}
window/period/minseg in sec & Full & Core & Full & Core \\
\hline
1.5 / 0.75 / 0.5 (DIHARD 2) & 21.47 / 46.61& 21.99 / 50.51 &20.45 / 46.03 & 21.36 / 50.25\\
1.5 / 0.25 / 0.25 (DIHARD 3) &21.56 / 46.93 &21.88 / 50.48 &20.58 / 46.12 &21.28 / 50.05\\
2.0 / 0.10 / 0.25 (Our Baseline) & 21.38 / 44.49 & 21.33 / 48.16 & 19.59 / 43.01 & 20.17 / 47.28\\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{center}
\end{footnotesize}
\end{table}
\fi
\begin{table}[t]
\caption{Results showing the speaker diarization performance using baseline and proposed method on development and evaluation set of third DIHARD challenge. For this phase, we consider the second pass with re-segmentation as it consistently gives superior SD performance over the first pass.}
\begin{footnotesize}
\begin{center}
\label{Table:Dev}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Set} & \multirow{2}{*}{Method} & \multicolumn{2}{|c|}{Full} & \multicolumn{2}{|c|}{Core}\\
\cline{3-6}
& & DER & JER & DER & JER \\
\hline
\hline
\multirow{2}{*}{Development}&Baseline & 19.59 &43.01 &20.17& 47.28\\
&Proposed & 17.97 &40.33 &18.73 &44.77\\
\hline
\multirow{2}{*}{Evaluation}&Baseline & 19.19 &43.28& 20.39& 48.61\\
&Proposed & 17.56 &38.60& 19.23& 43.74\\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{center}
\end{footnotesize}
\end{table}
\subsection{Speaker diarization experiments: Evaluation set}
We performed experiments on the evaluation set by predicting the domain for each of its recordings and then selecting the corresponding clustering threshold. For domain prediction, we used the i-vector-based embeddings as features and the nearest neighbor classifier, as this combination gave promising results for ADI (see Section~\ref{sec:result_adi} for details).
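A minimal sketch of this domain-prediction step is given below; it assumes that whole-recording i-vectors have already been extracted, and all names are illustrative rather than taken from a particular toolkit.

\begin{verbatim}
# Nearest-neighbor domain prediction from whole-recording i-vectors,
# followed by lookup of the domain-dependent clustering threshold.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_domain_classifier(dev_ivectors, dev_domains):
    # dev_ivectors: (n_recordings, dim) array; dev_domains: labels.
    clf = KNeighborsClassifier(n_neighbors=1, metric="cosine")
    return clf.fit(np.asarray(dev_ivectors), dev_domains)

def predict_threshold(clf, eval_ivector, domain_threshold):
    # domain_threshold: dict mapping domain label -> tuned threshold.
    domain = clf.predict(np.asarray(eval_ivector).reshape(1, -1))[0]
    return domain, domain_threshold[domain]
\end{verbatim}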
Table~\ref{Table:Dev} summarizes the results for the entire development and evaluation sets. We observe substantial overall improvements on both the development and evaluation data. For the `Full' condition of the evaluation set, consisting of all the audio files, we obtained relative improvements of 8.49\% and 10.81\% in terms of DER and JER, respectively. For the `Core' condition, with a balanced amount of audio data per domain, we achieved relative improvements of 5.69\% and 10.02\%, respectively. This reflects the results of the detailed analysis of individual domains in Section~\ref{sec:resultsanalysis}, where we established that the relative improvements with the proposed method varied across domains.
\section{Conclusion}
\label{sec:conc}
This work advances the state-of-the-art DNN-based speaker diarization performance with the help of domain-specific processing. We applied speech embeddings computed from the entire audio recording for the supervised audio domain identification task. After experimenting with three different embeddings, we employed i-vector embeddings and the nearest neighbor classifier for the ADI task. We integrated the ADI system with the x-vector based state-of-the-art diarization system and performed the experiments on the DIHARD III challenge dataset consisting of multiple domains. We obtained up to 8\% relative improvement on the `Full' condition of the evaluation set. The proposed method substantially improved the diarization performance on most of the subsets of the DIHARD III dataset.
Our work involved the application of domain-specific thresholds for speaker clustering. This study can be extended by incorporating other domain-specific operations such as speech enhancement and feature engineering. We used a basic \emph{time delay neural network} (TDNN) based x-vector embedding for audio segment representation. Future work may also include the investigation of more advanced embeddings (e.g., \emph{emphasized channel attention, propagation, and aggregation in TDNN} or ECAPA-TDNN), which have shown promising results in related tasks.
The present study is limited to a supervised domain classification problem where the training phase requires class-annotated data. However, in practice, the audio data may come from myriad unknown sources, and our domain classification method may not be applicable in its current form. Nevertheless, unsupervised domain clustering could be investigated to retain the advantages of domain-dependent processing in such a scenario.
\section*{Acknowledgements}
\textcolor{black}{We thank the anonymous reviewers for their careful reading of our manuscript and their insightful comments and suggestions.} This work is funded by the Jagadish Chandra Bose Centre of Advanced Technology (JCBCAT), Govt. of India. Experiments presented in this paper were partially carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see \url{https://www.grid5000.fr}).
\bibliographystyle{aps-nameyear}
Our goal in this paper is to extend and generalize in different directions some of the results obtained in \cite{RR-tori}. First, let $K$ be a finitely generated field, and let $V$ be a divisorial set of discrete valuations of $K$. This means that $V$ consists of the discrete valuations corresponding to the prime divisors on a separated normal scheme $\mathfrak{X}$ of finite type over $\mathbb{Z}$ with function field $K$, which we will refer to as a {\it model} of $K$ (cf. the discussion in \cite[5.3]{RR-survey}). For a $K$-defined linear algebraic group $G$, one considers the global-to-local map in Galois cohomology
$$
\lambda_{G , V} \colon H^1(K , G) \longrightarrow \prod_{v \in V} H^1(K_v , G)
$$
(we refer the reader to \cite[Ch. III]{Serre-GC} and \cite[4.2]{RR-survey} for the basic notions and standard results pertaining to the Galois cohomology of algebraic groups). It has been known for a long time (see \cite{BS}) that if $K$ is a number field, then the map $\lambda_{G , V}$ is {\it proper} (i.e., the preimage of any finite set is finite) for every algebraic $K$-group $G$ and any divisorial set $V$ (which in this situation simply means that $V$ contains almost all nonarchimedean places of $K$). Recent work, however, has led to the expectation that $\lambda_{G , V}$ should be proper much more generally; in particular, this was conjectured to be the case for any (connected) reductive $K$-group $G$ and any divisorial set $V$ (see \cite[Conjecture 6.1]{RR-survey}). In \cite{RR-tori}, this conjecture was proved for any $K$-torus $T$ --- note that in this case, the properness of $\lambda_{T , V}$ is equivalent to the finiteness of the Tate-Shafarevich group $\text{\brus{ Sh}}(T , V) = \ker \lambda_{T , V}$. In the present paper, we extend this result to $K$-groups whose connected component is a torus; it should be pointed out that the passage to non-connected groups is a non-trivial matter even in the number field situation (cf. \cite[\S 7]{BS}), and the general case requires new tools. More precisely, we will prove the following statements.
\begin{thm}\label{T:1}
Let $K$ be a finitely generated field and $V$ be a divisorial set of places of $K$. Then for any linear algebraic $K$-group $D$ whose connected component $D^{\circ}$ is a torus and any finite Galois extension $L/K$ we have:
\vskip2mm
\noindent \ {\rm (i)} \parbox[t]{16cm}{if $D$ is commutative, then the map $$\lambda^i_{D , V , L/K} \colon H^i(L/K , D) \longrightarrow \prod_{v \in V} H^i(L_w/K_v , D), \ \ w \vert v$$ is proper for any $i \geq 1$;}
\vskip2mm
\noindent {\rm (ii)} \parbox[t]{16cm}{in the general case, the map $$\lambda^1_{D, V, L/K} \colon H^1(L/K , D) \longrightarrow \prod_{v \in V} H^1(L_w/K_v , D), \ \ w \vert v$$ is proper.}
\end{thm}
\vskip3mm
\begin{thm}\label{T:2}
For $K$, $V$, and $D$ as in Theorem \ref{T:1}, the map $$\lambda_{D , V} \colon H^1(K , D) \longrightarrow \prod_{v \in V} H^1(K_v , D)$$ is proper.
\end{thm}
As an application, we obtain the following finiteness result on the local-global conjugacy problem for maximal tori in any reductive group.
\begin{thm}\label{T:2A}
Let $G$ be a connected reductive group over a finitely generated field $K$, and let $V$ be a divisorial set of places of $K$. Fix a maximal $K$-torus $T$ of $G$ and let $\mathscr{C}(T)$ be the set of all maximal $K$-tori $T'$ of $G$ such that $T$ and $T'$ are $G(K_v)$-conjugate for all $v \in V$. Then $\mathscr{C}(T)$ consists of finitely many $G(K)$-conjugacy classes.
\end{thm}
\vskip2mm
Second, we extend some of the finiteness results from \cite{RR-tori} to a different class of fields. More precisely, let $K = k(X)$ be the function field of a normal irreducible variety $X$ over a field $k$ that satisfies Serre's condition (F) (see \S \ref{S:Funct} and references therein), and let $V$ be the set of discrete valuations associated with the prime divisors on $X$ (geometric places).
\begin{thm}\label{T:3}
With notations and conventions as above, if $k$ is a field of characteristic zero that satisfies condition $(\mathrm{F})$, then for any $d \geq 1$, there exist only finitely many $K$-isomorphism classes of $d$-dimensional $K$-tori that have good reduction at all $v \in V$.
\end{thm}
As we discussed in \cite[Remark 2.5]{RR-tori}, results about the finiteness of the number of $K$-isomorphism classes of algebraic $K$-tori of a given dimension $d \geq 1$ that have good reduction at all places $v \in V$ are no longer valid as stated in positive characteristic. More precisely, Theorem \ref{T:3} above and \cite[Theorem 1.1]{RR-tori} are {\it false} even for a global function field $K$ and an arbitrary divisorial set $V$. The situation can be fixed by considering some special sets $V$. We recall that a finitely generated field $K$ of positive characteristic can be presented as the function field $k(X)$ of a {\it complete} normal variety $X$ over a finite field $k$. Then the set $V$ of discrete valuations of $K$ associated with the prime divisors on $X$ will be called a {\it complete divisorial} set of places. It turns out that Theorem \ref{T:3} (hence also \cite[Theorem 1.1]{RR-tori}) extends to characteristic $p > 0$ if we assume that the divisorial set of places $V$ is complete --- see Theorem \ref{T:Z1}(i). On the other hand, this theorem remains valid for any divisorial set of places $V$ if one considers only those $K$-tori that split over an extension $K_T/K$ of degree prime to $p$ --- see Theorem \ref{T:Z1}(ii). These results are derived from a more general statement (see Theorem \ref{T:tori-GR}) that also subsumes the essential part of the proof of \cite[Theorem 1.1]{RR-tori}.
Next, over function fields $K = k(X)$ as above, we have the following finiteness result for the Tate-Shafarevich group.
\begin{thm}\label{T:4}
If $k$ has characteristic zero and is of type $(\mathrm{F})$, then for any $K$-torus $T$ the Tate-Shafarevich group $\text{\brus{ Sh}}(T , V)$ is finite.
\end{thm}
We also completely resolve the question of the properness of the global-to-local map for finite Galois modules over function fields $K = k(X)$ in all characteristics without any assumptions on the base field --- see Proposition \ref{P:Finite2} and Remark 6.3.
Finally, we have the following finiteness result for forms of absolutely almost simple groups with good reduction over the function fields of
complex surfaces.
\begin{thm}\label{T:CSurf}
Let $K = k(S)$ be the function field of a smooth surface $S$ over an algebraically closed field $k$ of characteristic zero, and let $V$ be the set of discrete valuations of $K$ associated with the prime divisors of $S$. Then for any absolutely almost simple simply connected algebraic $K$-group $G$, the set of $K$-isomorphism classes of $K$-forms $G'$ of $G$ that have good reduction at all $v \in V$ is finite.
\end{thm}
\section{Theorem \ref{T:2}: the case of a finite group}
The goal of this section is to prove the following.
\begin{prop}\label{P:1}
Assume that $K$ is a finitely generated field equipped with a divisorial set of places $V$, and
let $\Omega$ be a finite (but not necessarily commutative) Galois module\footnotemark. Then the map
$$
H^1(K , \Omega) \stackrel{\kappa}{\longrightarrow} \prod_{v \in V} H^1(K_v , \Omega)
$$
is proper.
\end{prop}
\footnotetext{I.e., a finite group with a continuous action of the absolute Galois group $\mathrm{Gal}(K^{\mathrm{sep}}/K)$.}
The proof relies on the following (known) result, which is of independent interest.
\begin{prop}\label{P:2}
Let $K$ be an infinite finitely generated field and $V$ be a divisorial set of places of $K$. If a finite separable extension $L/K$ satisfies $L_w = K_v$ for all $v \in V$ and $w \vert v$, then $L = K$.
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $L/K$ is a Galois extension; let $\mathscr{G} = \mathrm{Gal}(L/K)$ be its Galois group.
First, let us consider the case where $K$ is a global field; then $V$ consists of almost all nonarchimedean places of $K$. Assume that $L \neq K$ and pick a nontrivial automorphism $\sigma \in \mathscr{G}$. It follows from Chebotarev's Density Theorem (cf. \cite[Ch. VII, 2.4]{ANT}) that there exists $v \in V$ such that for some $w \vert v$, the extension $L_w/K_v$ is unramified and has $\sigma$ as its Frobenius automorphism. Then in particular $L_w \neq K_v$, a contradiction.
We now turn to the situation where the field $K$ is not global and first consider the case where $K$ has characteristic zero. We may assume that $V$ consists of the discrete valuations corresponding to the prime divisors on a model $\mathfrak{X} = \mathrm{Spec}\: A$, where $A$ is a finitely generated integrally closed $\mathbb{Z}$-algebra with fraction field $K$. Since $K$ is not global, it has transcendence degree $r \geq 1$ over $\mathbb{Q}$, so one can find $x_1, \ldots , x_r \in A$ that form a transcendence basis of $K/\mathbb{Q}$. Set $B = \mathbb{Z}[x_1, \ldots , x_r]$ and $F = \mathbb{Q}(x_1, \ldots , x_r)$. We can find a nonzero polynomial $h \in \mathbb{Z}[x_1, \ldots , x_r]$ such that the localization $A_h$ is integral over $B_h$.
Next, pick a primitive element $\alpha$ of $L$ over $F$ that is integral over $B$. We can write its minimal polynomial $g \in B[y]$ over $F$ as $g = g(x_1, \ldots , x_r, y) \in \mathbb{Z}[x_1,\ldots , x_r, y]$.
Since $g$ is irreducible in $\mathbb{Z}[x_1, \ldots , x_r, y]$, by the version of Hilbert's Irreducibility Theorem given in \cite[13.4]{FJ}, there exists an $r$-tuple $(x_1^0, \ldots , x_r^0) \in \mathbb{Z}^r$ such that $h(x_1^0, \ldots , x_r^0) \neq 0$ and the polynomial $g(x_1^0, \ldots , x_r^0, y) \in \mathbb{Q}[y]$ is irreducible. Then the polynomial $$\varphi := g(x_1^0, x_2, \ldots , x_r, y) \in \mathbb{Q}[x_2, \ldots , x_r, y]$$ is also irreducible. Indeed, the leading coefficient of $\varphi$ as a polynomial in $y$ is 1, so $\varphi$ has content 1 as a polynomial in $\mathbb{Q}[x_2, \ldots , x_r][y]$. This means that any factor in a possible factorization of $\varphi$ must have a positive $y$-degree. Since the specialization $\varphi(x_2^0, \ldots , x_r^0, y) = g(x_1^0, \ldots , x_r^0, y)$ is irreducible, $\varphi$ itself is irreducible.
Let $v_0$ be a discrete valuation of $F$ corresponding to the irreducible polynomial $x_1 - x_1^0$. Since $h(x_1^0, \ldots , x_r^0) \neq 0$, we see that $h$ is relatively prime to $x_1 - x_1^0$, so $B_h$ is contained in the valuation ring of $v_0$. As $A_h$ is integral over $B_h$, we conclude that $A_h$ is contained in the valuation ring of an extension $v$ of $v_0$ to $K$, hence $v \in V$. Furthermore, $\alpha$ is contained in the valuation ring of an extension $w$ of $v$ to $L$. We can view $\varphi$ as a polynomial over the residue field $F^{(v_0)} = \mathbb{Q}(x_2, \ldots , x_r)$. We note that the image $\bar{\alpha}$ of $\alpha$ in the residue field $L^{(w)}$ is a root of $\varphi$, and since $\varphi$ is irreducible of degree $\deg_y \varphi = \deg_y g$, we conclude that the residual degree $f(w \vert v_0)$ equals $[L : F]$. As $f(w \vert v_0) = f(w \vert v) f(v \vert v_0)$ and $f(w \vert v) \leq [L : K]$ and $f(v \vert v_0) \leq [K:F]$, it follows that $f(w \vert v) = [L : K]$, and therefore $[L_w : K_v] = [L : K]$. So, our assumption that $L_w = K_v$ yields $L = K$.
In order to treat the positive characteristic case, we observe that any finitely generated field $K$ of positive characteristic can be presented as the function field $K = k(X)$ of a geometrically integral normal variety $X$ over a finite field $k$, and then we may assume that $V$ consists of the discrete valuations associated with the prime divisors of $X$. Since the case of global fields has already been considered, we may assume that $\dim X \geq 2$. So, we conclude the proof of Proposition \ref{P:2} by applying the following more general statement.
\begin{prop}\label{P:FFF3}
Let $K = k(X)$ be the function field of a normal geometrically integral variety $X$ of dimension $\dim X \geq 2$ defined over a field $k$, and let $V$ be the set of discrete valuations of $K$ associated with the prime divisors of $X$. If $L/K$ is a finite separable extension such that $L_w = K_v$ for all $v \in V$ and $w \vert v$, then $L = K$.
\end{prop}
The argument is similar to the characteristic zero case in the proof of Proposition \ref{P:2}. Since $X$ is geometrically integral, it is, in particular, geometrically reduced, and hence $K$ is a separable extension of $k$ (cf. \cite[Lemma 10.44.1]{Stacks}). Thus, there exists a separating transcendence basis $x_1, \ldots, x_r$,
so that $K$ is a finite separable extension of $F = k(x_1, \ldots , x_r)$ (cf. \cite[2.6]{FJ}). Note that $r \geq 2$ by assumption. Replacing $X$ by an open subset (which may only reduce $V$), we may assume that $X$ is affine and $x_1, \ldots , x_r$ lie in the algebra of regular functions $A = k[X]$. Set $R = k[x_1]$ and $B = k[x_1, \ldots , x_r] = R[x_2, \ldots , x_r]$. We can find a nonzero polynomial $h \in B$ such that the localization $A_h$ is integral over $B_h$. Since, by construction, the extension $L/F$ is separable, we can choose a primitive element $\alpha$ for this extension that is integral over $B$. Let $g \in B[y]$ be the minimal polynomial of $\alpha$ over $F$. Then $g$ can be viewed as a polynomial in $R[x_2, \ldots , x_r, y]$, which is separable in $y$. According to \cite[Proposition 13.4.1]{FJ}, the ring $R$ is Hilbertian. Thus, one can find $(x_2^0, \ldots , x_r^0) \in R^{r-1}$ so that the polynomial $g(x_2^0, \ldots , x_r^0, y) \in k(x_1)[y]$ is irreducible and $h(x_2^0, \ldots , x_r^0) \neq 0$. Then the same argument as above shows that the valuation $v_0$ of $F$ associated with $x_2 - x_2^0$ extends to a valuation $v \in V$, and then for $w \vert v$ we have $[L_w : K_v] = [L : K]$, implying that $L = K$.
\end{proof}
\noindent {\bf Remark 2.4.} The assertion of Proposition \ref{P:2} was stated in geometric language by Raskind (see \cite[Lemma 1.7]{Rask}). The argument he sketches is based on the consideration of $\zeta$-functions. The above proof, based on Hilbert's Irreducibility Theorem, is of a somewhat more general nature, as it applies to function fields of algebraic varieties over not necessarily finitely generated fields. This will be used in \S \ref{S:local-global}.
\vskip1mm
\noindent {\it Proof of Proposition \ref{P:1}.}
Using twisting (cf. \cite[Ch. I, 5.3]{Serre-GC}), we see that it is enough to establish the finiteness of $\ker \kappa$ for any finite Galois module $\Omega$. Furthermore, for any Galois extension $L/K$, we have the following inflation-restriction exact sequence in non-commutative cohomology (cf. {\it loc. cit.}, Ch. I, 5.8):
$$
1 \to H^1(L/K , \Omega) \longrightarrow H^1(K , \Omega) \longrightarrow H^1(L , \Omega).
$$
If $L/K$ is a finite Galois extension, then the set $H^1(L/K , \Omega)$ is also finite, and it is enough to show that the kernel of the map $$H^1(L , \Omega) \to \prod_{w \in V^L} H^1(L_w , \Omega),$$ where $V^L$ consists of all extensions of places in $V$ to $L$, is finite. Thus, replacing $K$ by a suitable finite Galois extension, we may assume that $\Omega$ is a trivial Galois module over $K$. Let us show that in this case $\ker \kappa$ is actually trivial.
Indeed, let $x \in \ker \kappa$. Since the Galois action on $\Omega$ is trivial, $x$ is represented by a continuous homomorphism $\chi \colon \mathscr{G} \to \Omega$ of the absolute Galois group $\mathscr{G} = \mathrm{Gal}(K^{\mathrm{sep}}/K)$. Let $\mathscr{H} = \ker \chi$, and let $L$ be the finite Galois extension of $K$ corresponding to $\mathscr{H}$. The fact that $x \in \ker \kappa$ then implies that $\chi$ vanishes on every decomposition group $\mathscr{G}(v) = \mathrm{Gal}(K_v^{\mathrm{sep}}/K_v)$, $v \in V$. Then $\mathscr{G}(v) \subset \mathscr{H}$, implying that $L_w = K_v$ for all $v \in V$, $w \vert v$. Proposition \ref{P:2} then yields $L = K$, i.e. $\mathscr{H} = \mathscr{G}$, proving that $\chi$ is the trivial homomorphism. Thus, $x$ is the trivial class, as required. \hfill $\Box$
\vskip1mm
\noindent {\bf Remark 2.5.} The map $H^1(K , \Omega) \to \prod_{v \in V} H^1(K_v , \Omega)$ may not be injective even when the finite Galois module $\Omega$ is commutative and $V$ is the set of all places (including archimedean ones) of a number field $K$ --- see \cite[Ch. III, 4.7]{Serre-GC}.
\vskip1mm
\noindent {\bf Remark 2.6.} The assertion of Proposition \ref{P:2} is false when $K$ is the function field of a smooth projective irreducible complex curve $X$ of genus $\geq 1$ and $V$ is the set of places associated with the closed points of $X$. Indeed, for any $n > 1$, one can find an element $x \in \mathrm{Pic}^0(X)$ of order precisely $n$; let $D$ be a degree zero divisor on $X$ representing $x$. Then $nD$ is the divisor $(f)$ of a function $f \in K^{\times}$. The fact that the order of $x$ in $\mathrm{Pic}^0(X)$ is $n$ implies that the order of $f$ in the quotient $K^{\times}/{K^{\times}}^n$ is also $n$, and therefore $L = K(\sqrt[n]{f})$ is a cyclic Galois extension of $K$ of degree $n$. On the other hand, let $p \in X$ be any closed point and $v = v_p$ be the corresponding discrete valuation of $K$. Since $v(f)$ is a multiple of $n$, the extension $L/K$ is unramified at $v_p$, i.e. $L_w/K_v$ is unramified. But since the residue field of $K_v$ is $\mathbb{C}$, hence algebraically closed, the field $K_v$ does not have any nontrivial unramified extensions. Thus, $L_w = K_v$.
\section{Proofs of Theorems \ref{T:1}, \ref{T:2}, and \ref{T:2A}}\label{S:Proofs}
We begin by reformulating
the assertion of Theorem \ref{T:1} in the language of adeles. Let $K$ be a field equipped with a set $V$
of discrete valuations, and for $v \in V$, denote by $\mathcal{O}_v$ the valuation ring in the corresponding completion $K_v$. To define the adelic group associated with
a linear algebraic $K$-group $D$, we first fix a faithful $K$-defined representation $D \hookrightarrow \mathrm{GL}_n$, and then let
$$
D(\mathbb{A}(K , V)) := \left\{ \left. \ (g_v) \in \prod_{v \in V} D(K_v) \ \right\vert \ g_v \in D(\mathcal{O}_v) \ \text{for almost all} \ v \in V \ \right\},
$$
where $D(\mathcal{O}_v) = D(K_v) \cap \mathrm{GL}_n(\mathcal{O}_v).$
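To illustrate the definition in the most classical case (a standard example included only for orientation): for $D = \mathbb{G}_m$ over $K = \mathbb{Q}$, with $V$ the set of $p$-adic valuations, we have
$$
D(\mathbb{A}(K , V)) = \left\{ \left. \ (g_p) \in \prod_{p} \mathbb{Q}_p^{\times} \ \right\vert \ g_p \in \mathbb{Z}_p^{\times} \ \text{for almost all} \ p \ \right\},
$$
which is the group of finite ideles of $\mathbb{Q}$.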
In the situations that are most relevant for our discussion,
the set $V$ satisfies the following property:
\vskip2mm
\noindent (A) For any $a \in K^{\times}$, the set $V(a) := \{ v \in V \ \vert \ v(a) \neq 0 \}$ is finite.
\vskip2mm
\noindent For example, this is the case for any divisorial set of places $V$ of a finitely generated field $K$. This property has several important consequences. First, it implies that the adelic group does not depend on the initial choice of a faithful $K$-defined representation $D \hookrightarrow \mathrm{GL}_n$; more precisely, a $K$-defined isomorphism between two linear $K$-groups induces an isomorphism between the corresponding adelic groups. Second, property (A)
enables us to consider the diagonal embedding $D(K) \hookrightarrow D(\mathbb{A}(K , V))$. More generally, for any finite separable field extension $L/K$, the set $V^L$ consisting of all extensions of places from $V$ to $L$, also satisfies (A), so we again have the diagonal embedding $D(L) \hookrightarrow D(\mathbb{A}(L , V^L))$. Moreover, if $L/K$ is a Galois extension with Galois group $\mathscr{G}$, then the standard action of $\mathscr{G}$ on $D(L)$ naturally extends to an action on $D(\mathbb{A}(L , V^L))$ (cf. \cite[\S 3]{RR-tori}). We then have the following.
\begin{prop}\label{P:adeles}
Let $K$ be a finitely generated field, $V$ a divisorial set of places of $K$, and $D$ a linear algebraic $K$-group whose connected component $D^{\circ} = T$
is a torus. Given a finite Galois extension $L/K$,
\vskip1.5mm
\noindent \ {\rm (i)} \parbox[t]{16cm}{if $D$ is commutative, then the kernel of $\lambda^i_{D, V, L/K} \colon H^i(L/K , D) \to \prod_{v \in V} H^i(L_w/K_v , D)$ coincides with the kernel of $\theta^i_{L/K} \colon H^i(L/K , D(L)) \to H^i(L/K , D(\mathbb{A}(L , V^L)))$ for all $i \geq 1$;}
\vskip1mm
\noindent {\rm (ii)} \parbox[t]{16cm}{in the general case, the kernel of the map $\lambda^1_{D, V, L/K} \colon H^1(L/K , D) \to \prod_{v \in V} H^1(L_w/K_v , D)$ coincides with the kernel of the map $\theta^1_{L/K} \colon H^1(L/K , D(L)) \to H^1(L/K , D(\mathbb{A}(L , V^L)))$.}
\end{prop}
The proposition is an immediate consequence of the following result.
\begin{lemma}\label{L:integr}
With notations as in Proposition \ref{P:adeles}, we have the following statements:
\vskip2mm
\noindent \ {\rm (i)} \parbox[t]{16cm}{If $D$ is commutative, then for almost all $v \in V$ and $w \vert v$, the group homomorphisms
$$
\mu^i \colon H^i(L_w/K_v , D(\mathcal{O}_{L_w})) \to H^i(L_w/K_v , D(L_w))
$$
are injective for all $i \geq 1$.}
\vskip2mm
\noindent {\rm (ii)} \parbox[t]{16cm}{In the general case, for almost all $v \in V$ and $w \vert v$, the map
$$
\mu^1 \colon H^1(L_w/K_v , D(\mathcal{O}_{L_w})) \to H^1(L_w/K_v , D(L_w))
$$
has trivial kernel.}
\end{lemma}
\begin{proof}
Let $E$ be the splitting field of the torus $T = D^{\circ}$, and set $P = EL$. Then for almost all $v \in V$ and $u \vert v$, the following two properties hold:
\vskip1.5mm
\noindent (a) the extension $P_u/K_v$ is unramified;
\vskip1mm
\noindent (b) \parbox[t]{16cm}{all co-characters $\chi \in X_*(T)$ are defined over $\mathcal{O}_{P_u}$, and consequently
$T(\mathcal{O}_{P_u})$ is a maximal bounded subgroup of $T(P_u)$.}
\vskip1.5mm
\noindent Furthermore, it was shown in \cite[p. 9-10]{RR-tori} that for almost all $v \in V$ and $w \vert v$, we have
\vskip1.5mm
\noindent (c) $D(L_w) = T(L_w) D(\mathcal{O}_{L_w})$.
\vskip1.5mm
\noindent We will now show that the validity of the three properties (a)-(c) implies the assertions of the lemma.
Let $\pi \in K_v$ be a uniformizer. Since $P_u/K_v$ is unramified, $\pi$ remains a uniformizer in $P_u$, so we have the following decomposition of the multiplicative group $P_u^{\times}$ as a $\mathrm{Gal}(P_u/K_v)$-module:
$$
P_u^{\times} = \langle \pi \rangle \times U_{P_u} \simeq \mathbb{Z} \times U_{P_u},
$$
where $U_{P_u} = \mathcal{O}_{P_u}^{\times}$ is the group of units of $P_u$. As above, let $X_*(T)$ be the group of cocharacters of $T$. We then have the following decomposition as $\mathrm{Gal}(P_u/K_v)$-modules:
$$
T(P_u) \simeq X_*(T) \otimes_{\mathbb{Z}} P_u^{\times} \simeq X_*(T) \otimes_{\mathbb{Z}} (\Z \times U_{P_u}).
$$
Clearly, $X_*(T) \otimes_{\mathbb{Z}} U_{P_u}$ is a maximal bounded subgroup of $T(P_u)$, hence coincides with $T(\mathcal{O}_{P_u})$. Thus,
$$
T(P_u) \simeq X_*(T) \times T(\mathcal{O}_{P_u}).
$$
Taking $\mathrm{Gal}(P_u/L_w)$-fixed points, we obtain the decomposition
$$
T(L_w) \simeq \Gamma \times T(\mathcal{O}_{L_w}) \ \ \text{where} \ \ \Gamma = X_*(T)^{\mathrm{Gal}(P_u/L_w)}.
$$
Combining this with (c), we thus have
\begin{equation}\label{E:X1}
D(L_w) = \Gamma D(\mathcal{O}_{L_w}),
\end{equation}
noting that $\Gamma \cap D(\mathcal{O}_{L_w}) = \{ 1 \}$ since $\Gamma$ does not contain any nontrivial bounded subgroups.
If $D$ is commutative, then
(\ref{E:X1}) is actually a direct product decomposition, which immediately implies part (i) of the lemma. To prove part (ii) in the general case, let us assume
that a cocycle $\xi \in Z^1(L_w/K_v , D(\mathcal{O}_{L_w}))$ represents an element of $\ker \mu^1$. Then there exists $x \in D(L_w)$ such that
$$
\xi(\sigma) = x^{-1} \sigma(x) \ \ \text{for all} \ \ \sigma \in \mathrm{Gal}(L_w/K_v).
$$
Using (\ref{E:X1}), we can write $x = yz$, with $y \in \Gamma$ and $z \in D(\mathcal{O}_{L_w})$. Then
$$
z \xi(\sigma) \sigma(z)^{-1} = y^{-1} \sigma(y) \in D(\mathcal{O}_{L_w}) \cap \Gamma = \{ 1 \}.
$$
So, $\xi(\sigma) = z^{-1} \sigma(z)$, demonstrating that $\xi$ represents the trivial class in $H^1(L_w/K_v , D(\mathcal{O}_{L_w}))$.
\end{proof}
Next, we consider the subgroup of {\it integral adeles}
$$
D(\mathbb{A}^{\infty}(K , V)) := \prod_{v \in V} D(\mathcal{O}_v)
$$
and the corresponding class set
$$
\mathrm{cl}(D, K, V) := D(\mathbb{A}^{\infty}(K, V)) \backslash D(\mathbb{A}(K , V)) / D(K).
$$
In \cite{CRR-Isr}, \cite{RR-tori} we introduced
\vskip1.5mm
\noindent {\bf Condition (T)} {\it There exists a finite subset $S \subset V$ such that $\vert \mathrm{cl}(D, K, V \setminus S) \vert = 1$.}
\vskip1.5mm
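For orientation, we recall the classical prototype (which is standard and not part of the results cited here): for $D = T = \mathbb{G}_m$ over a number field $K$, with $V$ the set of nonarchimedean places of $K$, the class set $\mathrm{cl}(D , K , V)$ is the ideal class group of $K$, and Condition (T) holds since one can take $S$ to be a finite set of places whose classes generate this (finite) group, which makes $\mathrm{cl}(D , K , V \setminus S)$ trivial.
\vskip1.5mm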
The following result is \cite[Theorem 3.4]{RR-tori}.
\begin{thm}\label{T:ConT}
Let $K$ be a finitely generated field and $V$ be a divisorial set of places of $K$. Then any linear algebraic $K$-group $D$ whose connected component is a torus satisfies Condition $(\mathrm{T})$.
\end{thm}
\vskip2mm
\noindent {\it{Proof of Theorem \ref{T:1}.}} (i): By Proposition \ref{P:adeles}, we have $$\ker \lambda^i_{D, V, L/K} = \ker \theta^i_{L/K},$$ so it is enough to prove the finiteness of the latter. Applying Theorem \ref{T:ConT} to $D$ over $L$ and the divisorial set of places $V^L$ of $L$, we see that after deleting from $V$ a finite set of places (which can only make $\ker \theta^i_{L/K}$ larger), we can assume that $\vert \mathrm{cl}(D, L, V^L) \vert = 1$, i.e. $D(\mathbb{A}(L , V^L)) = D(\mathbb{A}^{\infty}(L , V^L)) D(L)$. We then have the short exact sequence
$$
1 \to E \longrightarrow D(\mathbb{A}^{\infty}(L , V^L)) \times D(L) \stackrel{\mu}{\longrightarrow} D(\mathbb{A}(L , V^L)) \to 1,
$$
where $\mu$ is the product map and $E := D(L) \cap D(\mathbb{A}^{\infty}(L , V^L))$, and we consider the
following fragment of the corresponding cohomological long exact sequence
$$
H^i(L/K , E) \longrightarrow H^i(L/K , D(\mathbb{A}^{\infty}(L , V^L))) \times H^i(L/K , D(L)) \stackrel{\mu^i}{\longrightarrow} H^i(L/K , D(\mathbb{A}(L , V^L))).
$$
As we discussed in the proof of \cite[Proposition 3.2]{RR-tori}, the intersection $T(L) \cap T(\mathbb{A}^{\infty}(L , V^L))$ is a finitely generated group. Since $E$ contains this intersection as a subgroup of finite index, it is itself finitely generated. Then the group $H^i(L/K , E)$ is finite (cf. \cite[Ch. IV, \S 6, Corollary 2]{ANT}), implying the finiteness of $\ker \mu^i$. On the other hand, we clearly have $\{ 0 \} \times \ker \theta^i_{L/K} \subset \ker \mu^i$, and our claim follows.
\vskip1mm
(ii): We now give a noncommutative version of the above argument for $i = 1$, which requires the following.
\begin{lemma}\label{L:Finite}
Let $G$ be a finite group and $\Lambda$ be a $G$-group. Assume that $\Lambda$ admits a finite index subgroup $\Theta \subset \Lambda$ that is a finitely generated abelian group. Then the set $H^1(G , \Lambda)$ is finite.
\end{lemma}
\begin{proof}
The intersection $\Theta' = \bigcap_{\lambda \in \Lambda} (\lambda \Theta \lambda^{-1})$ is a {\it normal} subgroup of $\Lambda$ of finite index. Being a subgroup of $\Theta$, the subgroup $\Theta'$ is itself a finitely generated group. Thus, replacing $\Theta$ by $\Theta'$, we may assume that $\Theta$ is normal in $\Lambda$. Similarly, replacing $\Theta$ by $\bigcap_{g \in G} g(\Theta)$, we may also assume that $\Theta$ is $G$-invariant. Let $$\varphi \colon
H^1(G , \Lambda) \longrightarrow H^1(G , \Lambda/\Theta)$$ be the canonical map. Since the set $H^1(G , \Lambda/\Theta)$ is finite, it is enough to show that for each $x \in H^1(G , \Lambda/\Theta)$, the fiber $\varphi^{-1}(x)$ is finite. There is nothing to prove if $\varphi^{-1}(x) = \emptyset$. Otherwise, we pick a cocycle $\zeta \in Z^1(G , \Lambda)$ so that the image of its cohomology class under $\varphi$ is $x$, and let ${}_{\zeta}\Theta$ denote the corresponding twisted group. Then there is a surjection of $H^1(G , {}_{\zeta}\Theta)$ onto $\varphi^{-1}(x)$ (see \cite[Ch. I, \S 5.5, Corollary 2]{Serre-GC}). But since ${}_{\zeta}\Theta$ is a finitely generated abelian group, $H^1(G , {}_{\zeta}\Theta)$ is finite (cf. \cite[Ch. IV, \S 6, Corollary 2]{ANT}), so our claim follows.
\end{proof}
Using twisting, we see that it is enough to prove the finiteness of $\ker \theta^1_{L/K}$ for any $D$ as in the theorem. Furthermore, due to Theorem \ref{T:ConT},
we may assume that
\begin{equation}\label{E:1Y}
D(\mathbb{A}(L , V^L)) = D(\mathbb{A}^{\infty}(L , V^L)) D(L).
\end{equation}
Suppose that $x \in \ker \theta^1_{L/K}$ is represented by a cocycle $\zeta \in Z^1(L/K , D(L))$. Then there exists $g \in D(\mathbb{A}(L , V^L))$ such that
$$
\zeta(\sigma) = g^{-1} \sigma(g) \ \ \text{for all} \ \ \sigma \in \mathrm{Gal}(L/K).
$$
According to (\ref{E:1Y}), we can write $g = ab$ where $a \in D(\mathbb{A}^{\infty}(L , V^L))$ and $b \in D(L)$. Then
$$
a \zeta(\sigma) \sigma(a)^{-1} = b^{-1} \sigma(b) \in D(L) \cap D(\mathbb{A}^{\infty}(L , V^L)) =: E.
$$
Thus $x$ lies in the image of the map $H^1(L/K , E) \to H^1(L/K , D(L))$, and hence all of $\ker \theta^1_{L/K}$ lies in this image. As we pointed out in the proof of (i), $E$ contains a finitely generated abelian group as a subgroup of finite index. So, by Lemma \ref{L:Finite}, the set $H^1(L/K , E)$ is finite, and the finiteness of $\ker \theta^1_{L/K}$ follows. \hfill $\Box$
\vskip2mm
\noindent {\it Proof of Theorem \ref{T:2}.} Again, the use of twisting shows that it is enough to establish the finiteness of the kernel of the map
$$
\theta \colon H^1(K , D) \longrightarrow \prod_{v \in V} H^1(K_v , D)
$$
for any $D$ as in the statement of the theorem. We will derive this from Theorem \ref{T:1}(ii). Let $T = D^{\circ}$. Then the quotient $\Omega := D/T$ is finite, so the kernel of $$\kappa \colon H^1(K , \Omega) \longrightarrow \prod_{v \in V} H^1(K_v , \Omega)$$ is finite by Proposition \ref{P:1}. So, we can find a finite Galois extension $L/K$ that splits $T$ and for which the image of $\ker \kappa$ under the restriction map $H^1(K , \Omega) \to H^1(L , \Omega)$ is trivial (see the proof of Proposition \ref{P:1}). Let us show that then for the restriction map $\rho \colon H^1(K , D) \to H^1(L , D)$, we also have
\begin{equation}\label{E:1Z}
\rho(\ker \theta) = \{ 1 \}.
\end{equation}
The exact sequence
$$
1 \to T \longrightarrow D \longrightarrow \Omega \to 1,
$$
gives rise to the following commutative diagram
$$
\xymatrix{H^1(K,D) \ar[rr]^{\delta_K} \ar[d]_{\theta} & & H^1(K, \Omega) \ar[d]^{\kappa} \\ \displaystyle{\prod_{v \in V}} H^1(K_v, D) \ar[rr] & & \displaystyle{\prod_{v \in V}} H^1(K_v, \Omega)}
$$
Let $x \in \ker \theta$. We then conclude from the diagram that $\delta_K(x) \in \ker \kappa$, and hence by our construction, the image of $x$ under the composite map
$$
H^1(K , D) \stackrel{\delta_K}{\longrightarrow} H^1(K , \Omega) \longrightarrow H^1(L , \Omega)
$$
is trivial. Consequently, in view of the commutative diagram
$$
\xymatrix{H^1(K, D) \ar[r]^{\delta_K} \ar[d]_{\rho} & H^1(K, \Omega) \ar[d] \\ H^1(L, D) \ar[r]^{\delta_L} & H^1(L, \Omega)}
$$
the image of $\rho(x)$ under the map $\delta_L \colon H^1(L , D) \to H^1(L , \Omega)$ is also trivial. Since $H^1(L , T)$ is trivial by Hilbert's Theorem 90, we conclude from the exact sequence
$$
H^1(L , T) \longrightarrow H^1(L , D) \longrightarrow H^1(L , \Omega)
$$
that $\rho(x)$ is trivial, proving (\ref{E:1Z}).
To conclude the argument, we consider the following commutative diagram, in which the rows are the inflation-restriction exact sequences:
$$
\xymatrix{1 \ar[r] & H^1(L/K, D) \ar[rr]^{\upsilon} \ar[d]_{\theta_L} & & H^1(K,D) \ar[d]^{\theta} \ar[r]^{\rho} & H^1(L,D) \\ 1 \ar[r] & \displaystyle{\prod_{v \in V}} H^1(L_w/K_v, D) \ar[rr]^{\Upsilon} & & \displaystyle{\prod_{v \in V}} H^1(K_v, D) & &}
$$
In conjunction with (\ref{E:1Z}), the diagram yields $\ker \theta = \upsilon(\ker \theta_L)$, and since $\ker \theta_L$ is finite by Theorem \ref{T:1}(ii), we conclude that $\ker \theta$ is finite, as required. \hfill $\Box$
\vskip1mm
\noindent {\it Proof of Theorem \ref{T:2A}.} Let $G$ be a reductive $K$-group, $T$ be a maximal $K$-torus of $G$, and $N = N_G(T)$ be its normalizer. The variety $\mathscr{T}$ of maximal tori of $G$ is a $K$-defined variety whose points $\mathscr{T}(F)$ over a field extension $F/K$ bijectively correspond to the maximal $F$-defined tori of $G$; note that $\mathscr{T}$ can be identified with the quotient $G/N$ (cf. \cite[2.4.8]{Pl-R}). Furthermore,
there is a map $\gamma_F \colon \mathscr{T}(F) \to H^1(F , N)$ that is defined as follows: given an $F$-defined maximal torus $T' \in \mathscr{T}(F)$, we pick an arbitrary $g \in G(F^{\mathrm{sep}})$ such that $T' = g T g^{-1}$, and then $\xi(\sigma) = g^{-1} \sigma(g)$ for $\sigma \in \mathrm{Gal}(F^{\mathrm{sep}}/F)$ defines a Galois 1-cocycle with values in $N(F^{\mathrm{sep}})$ and $\gamma_F$ sends $T'$ to the cohomology class of this cocycle. A standard argument shows that the fibers of $\gamma_F$ correspond to the $G(F)$-conjugacy classes of $F$-defined maximal tori of $G$, and the fiber over the trivial class is precisely the $G(F)$-conjugacy class of $T$.
Now, to prove the theorem we need to show that the image $\gamma_K(\mathscr{C}(T))$ is finite. For this we observe that this image is contained in
$$
\ker\left(H^1(K , N) \to \prod_{v \in V} H^1(K_v , N) \right).
$$
Since the connected component $N^{\circ}$ is $T$ (see Corollary 2 of 8.10 and Corollary 2 of 13.17 in \cite{BorelAG}), this kernel is finite by Theorem \ref{T:2}, and the finiteness of $\gamma_K(\mathscr{C}(T))$ follows. \hfill $\Box$
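To illustrate Theorem \ref{T:2A} in a well-known special case (this interpretation is classical and is included only for orientation): for $G = \mathrm{GL}_n$, the maximal $K$-tori of $G$ correspond to the $n$-dimensional \'etale commutative $K$-subalgebras of the matrix algebra $M_n(K)$, and, since $H^1(K , \mathrm{GL}_n)$ is trivial, two maximal $K$-tori are $G(K)$-conjugate precisely when the corresponding \'etale algebras are $K$-isomorphic. In this case, the theorem thus asserts that the set of $K$-isomorphism classes of $n$-dimensional \'etale $K$-algebras that become isomorphic to a fixed one over all completions $K_v$, $v \in V$, is finite.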
\vskip2mm
\section{Tori with good reduction: general set-up}\label{S:Funct}
We now turn to finiteness results for tori with good reduction over function fields of algebraic varieties defined over special fields.
In this section, we will describe a general formalism that extends the argument used in the proof of \cite[Theorem 1.1]{RR-tori}; for the reader's convenience we include all the details. After these preparations, we will formulate and prove our finiteness results in the most general form possible in the next section, obtaining, in particular, a proof of Theorem \ref{T:3}. First, we will describe the required condition on the base field. In \cite[Ch. III, \S 4]{Serre-GC}, Serre introduced the following condition on a profinite group $\mathscr{G}$:
\vskip2mm
\noindent (F) \parbox[t]{15cm}{For any finite group $\Phi$, the set $\mathrm{Hom}_{\mathrm{cont}}(\mathscr{G} , \Phi)$ of
continuous group homomorphisms $\mathscr{G} \to \Phi$ is finite.}
\vskip2mm
\noindent (We recall that this condition is equivalent to the requirement that for each $n \geq 1$, the group $\mathscr{G}$ has finitely many open subgroups of index $n$.) Nowadays, groups satisfying condition (F) are often called {\it small}; note that every finitely generated profinite group is small. Furthermore, a field $k$ is said to be of {\it type} (F) if its absolute Galois group $\mathrm{Gal}(k^{\mathrm{sep}}/k)$ is small (we note that in his definition, Serre also requires $k$ to be perfect, but this is not needed in our context). Well-known examples of fields of type (F) are algebraically closed fields, finite fields, and finite extensions of the $p$-adic field $\mathbb{Q}_p$, but in fact, there are many others --- see the discussion in \cite[\S 2]{IR}.
To formulate our results in full generality, it is convenient to introduce a variant of condition (F). Let $p$ be either 1 or a prime number. We then consider the following condition on a profinite group $\mathscr{G}$:
\vskip2mm
\noindent $(\mathrm{F}(p))$ \parbox[t]{15cm}{For any finite group $\Phi$ \underline{of order prime to} $p$, the set $\mathrm{Hom}_{\mathrm{cont}}(\mathscr{G} , \Phi)$ is finite.}
\vskip2mm
\noindent We note that condition $(\mathrm{F}(1))$ coincides with (F), and that (F) implies $(\mathrm{F}(p))$ for all $p$. Given a profinite group $\mathscr{G}$, we let $\mathscr{N}$ denote the intersection of the kernels of all continuous homomorphisms $\varphi \colon \mathscr{G} \to \Phi$, where $\Phi$ is a finite group of order prime to $p$. Then $\mathscr{G}^{(p)} := \mathscr{G}/\mathscr{N}$ is referred to as the {\it maximal prime-to-$p$ quotient} of $\mathscr{G}$.
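For instance, for a prime $p$ and $\mathscr{G} = \widehat{\mathbb{Z}} \simeq \prod_{\ell} \mathbb{Z}_{\ell}$ (the product taken over all primes $\ell$), the subgroup $\mathscr{N}$ is the factor $\mathbb{Z}_p$, so that $\mathscr{G}^{(p)} \simeq \prod_{\ell \neq p} \mathbb{Z}_{\ell}$; this simple example is included only for illustration.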
It follows from the definition that for any finite group $\Phi$ of order prime to $p$, we have
$$
\mathrm{Hom}_{\mathrm{cont}}(\mathscr{G} , \Phi) = \mathrm{Hom}_{\mathrm{cont}}(\mathscr{G}^{(p)} , \Phi).
$$
On the other hand, the image of any continuous homomorphism $\varphi \colon \mathscr{G}^{(p)} \to \Phi$ to a finite group $\Phi$ is a subgroup $\Phi' \subset \Phi$ of order prime to $p$. We conclude that $\mathscr{G}$ satisfies $(\mathrm{F}(p))$ if and only if $\mathscr{G}^{(p)}$ satisfies $(\mathrm{F})$. With these definitions in place, we are now ready to formulate the following.
\begin{thm}\label{T:tori-GR}
Let $\mathfrak{X}$ be a normal separated integral scheme of finite type over $\mathbb{Z}$ or over a field, let $K$ be the function field of $\mathfrak{X}$, and let $V$ be the set of discrete valuations of $K$ corresponding to the prime divisors on $\mathfrak{X}$. Assume that the fundamental group $\pi_1^{\text{\'et}}(\mathfrak{X})$ satisfies condition $(\mathrm{F}(p))$. Then for every $d \geq 1$, there exist finitely many $K$-isomorphism classes of $d$-dimensional $K$-tori $T$ that have good reduction at all $v \in V$ and for which the degree $[K_T : K]$ of the splitting field is prime to $p$.
\end{thm}
\noindent (Since $\mathfrak{X}$ is irreducible, the isomorphism class of the fundamental group does not depend on the choice of a geometric point, so, to simplify notation, we do not specify the geometric point in the statement. We also note that when $p = 1$, i.e. when the fundamental group is small, the conclusion applies to all $d$-dimensional tori without any restrictions on the degree of the splitting field.)
For the argument, we fix an algebraic closure $\overline{K}$ of $K$ and let $\overline{y} \colon {\rm Spec}(\overline{K}) \to \mathfrak{X}$ be the corresponding geometric point of $\mathfrak{X}.$ We also fix a separable closure $K^{\rm sep} \subset \overline{K}.$
Furthermore, we denote by $K_V/K$ the maximal subextension of $K^{\rm sep}$ that is unramified at all $v \in V$. With these notations, we have the following statement, which is a key element in the proof of Theorem \ref{T:tori-GR}.
\begin{prop}\label{P-HigherHM}
The extension $K_V/K$ is Galois and $\Ga(K_V/K)$ satisfies condition $(\mathrm{F}(p))$.
\end{prop}
We begin the proof with the following
\begin{lemma}\label{L-ZNPurity}
With the above notations, let $K_{\mathfrak{X}}/K$ be the compositum of all finite subextensions $L/K$ of $K^{\rm sep}$ such that the normalization of $\mathfrak{X}$ in $L$ is \'etale over $\mathfrak{X}$. Then $K_{\mathfrak{X}} = K_V.$
\end{lemma}
\begin{proof}
It follows from the definitions that we have the inclusion $K_{\mathfrak{X}} \subset K_V.$ To show the reverse inclusion, suppose that $L/K$ is a finite subextension of $K^{\rm sep}$ that is unramified at all $v \in V$, and let $\mathfrak{Y}$ be the normalization of $\mathfrak{X}$ in $L.$ Then by assumption, $\mathfrak{Y} \to \mathfrak{X}$ is finite \'etale over each codimension 1 point of $\mathfrak{X}$. The Zariski-Nagata purity theorem, whose statement we recall below for completeness, then implies that $\mathfrak{Y}$ is \'etale over $\mathfrak{X}$, hence $L \subset K_{\mathfrak{X}}.$
\end{proof}
\begin{thm}\label{T-ZariskiNagata}{\rm (Zariski-Nagata purity theorem)} Let $\varphi \colon Y \to S$ be a finite surjective morphism of integral schemes, with $Y$ normal and $S$ regular. Assume that the fiber $Y_P$ of $\varphi$ above each codimension 1 point $P$ of $S$ is \'etale over the residue field $\kappa(P)$. Then $\varphi$ is \'etale.
\end{thm}
\noindent (See, for example, \cite[Theorem 5.2.13]{Szamuely} for the statement and related discussion and \cite[Exp. X, Th\'eor\`eme 3.4]{SGA2} for a detailed proof.)
\vskip2mm
\noindent {\it Proof of Proposition \ref{P-HigherHM}.} Since $\pi_1^{\text{\'et}}(\mathfrak{X})$ is assumed to satisfy condition $(\mathrm{F}(p))$, our claim follows immediately from the well-known facts that $K_{\mathfrak{X}}/K$ is a Galois extension, and $\Ga(K_{\mathfrak{X}}/K)$ is canonically isomorphic to the fundamental group $\pi_1^{\text{\'et}}(\mathfrak{X}, \bar{y})$ for the geometric point $\bar{y} \colon {\rm Spec}(\overline{K}) \to \mathfrak{X}$ (see, for example, \cite[Proposition 5.4.9]{Szamuely}). $\Box$
\vskip2mm
\noindent We now turn to
\vskip1mm
\noindent {\it Proof of Theorem \ref{T:tori-GR}.} Recall that given a $d$-dimensional torus $T$, the action on the group of characters $X(T)$ gives rise to
a continuous representation $\rho \colon \Ga (K^{\mathrm{sep}}/K) \to {\rm GL}_d(\Z)$ of the absolute Galois group. The kernel $\ker \rho$ is the subgroup
$\Ga(K^{\mathrm{sep}}/K_T)$ corresponding to the splitting field $K_T$ of $T$, so the image is isomorphic to the Galois group $\Ga(K_T/K)$.
Moreover, we thereby obtain a bijection between the $K$-isomorphism classes of such tori and the equivalence classes of such representations (see, for example, \cite[\S 2.2.4]{Pl-R}). On the other hand, a $K$-torus $T$ has good reduction at a place $v$ of $K$ if and only if $T \times_K K_v$ splits over an unramified extension of the completion $K_v$ (see, for example, \cite[1.1]{NX}); in other words, the extension $K_T/K$ is unramified at $v$. This means that
the $K$-isomorphism classes of $d$-dimensional $K$-tori having good reduction at all $v \in V$ are in bijection with the equivalence classes of continuous representations $\rho \colon \Ga(K_V/K) \to {\rm GL}_d(\Z).$
Next, by reduction theory (see \cite[Theorem 4.3]{Pl-R}), the group $\mathrm{GL}_d(\mathbb{Z})$ has finitely many conjugacy classes of finite subgroups; let $\Phi_1, \ldots , \Phi_r$ be a complete set of representatives of the conjugacy classes of subgroups that have order prime to $p$. Then the isomorphism class of a given $d$-dimensional $K$-torus $T$ that has good reduction at all $v \in V$ and for which the degree $[K_T : K]$ is prime to $p$ corresponds to the equivalence class of a representation $\rho \colon \Ga(K_V/K) \to {\rm GL}_d(\Z)$ whose image is one of the $\Phi_i$'s. But according to Proposition \ref{P-HigherHM}, for each $i \in \{1, \ldots , r\}$, there are finitely many continuous homomorphisms $\Ga(K_V/K) \to \Phi_i$. It follows that there are finitely
many equivalence classes of the relevant representations $\rho \colon \Ga(K_V/K) \to {\rm GL}_d(\Z)$, hence finitely many isomorphism classes of tori having good reduction at all $v \in V$. $\Box$
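As a simple illustration (a standard observation, included only as a sanity check), consider the case $d = 1$ with $p = 1$: here ${\rm GL}_1(\Z) = \{ \pm 1 \}$, so a 1-dimensional $K$-torus corresponds to a continuous character $\rho \colon \Ga(K^{\mathrm{sep}}/K) \to \{ \pm 1 \}$. The trivial character yields $T = \mathbb{G}_m$, while a nontrivial character whose kernel corresponds to a separable quadratic extension $L/K$ yields the norm-one torus $T = \mathrm{R}^{(1)}_{L/K}(\mathbb{G}_m)$, which has good reduction at all $v \in V$ precisely when $L/K$ is unramified at all $v \in V$. Thus, for $d = 1$, the theorem amounts to the finiteness of $\mathrm{Hom}_{\mathrm{cont}}(\Ga(K_V/K) , \{ \pm 1 \})$, which is immediate from Proposition \ref{P-HigherHM}.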
\vskip5mm
\vskip2mm
\section{Tori with good reduction over function fields}
As we observed in \cite{RR-tori}, the finiteness results for the isomorphism classes of tori of a given dimension having good reduction at a certain natural set of places of the base field cannot be extended from characteristic zero to positive characteristic without additional assumptions. The precise conditions for function fields in all characteristics are given in the next statement, which will then be used to treat the case of finitely generated fields in positive characteristic.
We let $p$ denote the characteristic exponent of the base field $k$, i.e. $p = 1$ if $\mathrm{char}\: k = 0$ and $p = \mathrm{char}\: k$ otherwise.
\begin{thm}\label{T:Z1}
Let $K = k(X)$ be the function field of a normal geometrically integral variety $X$ defined over a field $k$ of type $(\mathrm{F})$, and let $V$ be the set of discrete valuations of $K$ associated with the prime divisors of $X$.
\vskip2mm
\noindent \ {\rm (i)} \parbox[t]{16cm}{If $X$ is complete, then for each $d \geq 1$, the set of $K$-isomorphism classes of $d$-dimensional $K$-tori that have good
reduction at all $v \in V$ is finite.}
\vskip2mm
\noindent {\rm (ii)} \parbox[t]{16cm}{In the general case, for each $d \geq 1$, the set of $K$-isomorphism classes of $d$-dimensional $K$-tori $T$ that have good reduction at all $v \in V$ and for which the degree $[K_T : K]$ of the splitting field is prime to $p$ is finite.}
\end{thm}
If $\mathrm{char}\: k = 0$, then part (ii) of the theorem applies to any $d$-dimensional torus $T$ having good reduction at all $v \in V$, thus yielding Theorem \ref{T:3}. On the other hand, any finitely generated field $K$ of characteristic $p > 0$ can be realized as the function field $k(X)$ of a geometrically integral normal variety $X$ over a finite field $k$. The choice of such a realization gives rise to a divisorial set $V$ of discrete valuations of $K$ that are associated to prime divisors of $X$. If $X$ is chosen to be complete, the corresponding $V$ is also called {\it complete}. Since finite fields are of type (F), we obtain the following.
\begin{cor}
Let $K$ be a finitely generated field of characteristic $p > 0$, and let $V$ be a divisorial set of places of $K$.
\vskip1mm
\noindent {\rm (i)} \parbox[t]{16cm}{If $V$ is complete, then for any $d \geq 1$, the set of $K$-isomorphism classes of $d$-dimensional $K$-tori
that have good reduction at all $v \in V$ is finite.}
\vskip1mm
\noindent {\rm (ii)} \parbox[t]{16cm}{In the general case, for any $d \geq 1$, the set of $K$-isomorphism classes of $d$-dimensional $K$-tori $T$ that have good reduction at all $v \in V$ and for which the degree $[K_T : K]$ of the splitting field is prime to $p$ is finite.}
\end{cor}
\vskip2mm
We will derive both parts of Theorem \ref{T:Z1} from Theorem \ref{T:tori-GR}. Its application in the situation of part (i) is justified
by the following statement.
\begin{prop}\label{P:Z1}
Let $X$ be a complete normal geometrically integral variety over a field $k$ of type $(\mathrm{F})$. Then the \'etale fundamental group $\pi_1^{\text{\'et}}(X)$ (with respect to any geometric point) satisfies $(\mathrm{F})$, i.e. is small.
\end{prop}
\begin{proof}
We fix a separable closure $k^{\mathrm{sep}}$ of $k$, set $X^{s} = X \times_{\mathrm{Spec}\: k} \mathrm{Spec}\: k^{\mathrm{sep}}$, and pick a geometric point $\bar{x}$ of $X^s$. We then have the following standard exact sequence of profinite groups
\begin{equation}\label{E:Z1}
1 \to \pi_1^{\text{\'et}}(X^s, \bar{x}) \to \pi_1^{\text{\'et}}(X, \bar{x}) \to \Ga(k^{\mathrm{sep}}/k) \to 1.
\end{equation}
(see \cite[Exp. IX, Th. 6.1]{SGA1}). By our assumption, $\Ga(k^{\mathrm{sep}}/k)$ is small. Furthermore, since $X$ is complete, the fundamental group $\pi_1^{\text{\'et}}(X^s, \bar{x})$ is topologically finitely generated for $k$ of any characteristic (see Exp. X, Th. 2.9, combined with Exp. IX, 4.10, in \cite{SGA1}),
and hence is small as well. Applying \cite[Lemma 2.7]{HH-small}, we conclude that $\pi^{\text{\'et}}_1(X, \bar{x})$ is small.
\end{proof}
\vskip1mm
To treat part (ii) of the theorem using a similar strategy, we need information about finite generation of the prime-to $p$ part $\pi^{\text{\'et}}_1(X^s, \bar{x})^{(p)}$ of the fundamental group. Raynaud \cite{Ray} established this fact for any scheme $\mathscr{X}$ of finite type over a separably closed field
$\mathscr{K}$ assuming that all schemes of finite type over an algebraic closure of $\mathscr{K}$ that have dimension $\leq \dim \mathscr{X}$ are ``fortement d\'esingularisable" as defined in \cite[Exp. I, 3.1.5]{SGA5} (it should be noted that this assumption is known to hold if either $p = 1$ or $\dim \mathscr{X} \leq 2$). Later, by working with alterations, Orgogozo \cite{Orgogozo} established this fact unconditionally. Recently, a new proof of the finite generation
of the prime-to-$p$ part of the fundamental group of a smooth quasi-projective variety over an algebraically closed field was given in \cite[Corollary 3.5]{Esn}, which, by \cite[Exp. IX, 4.10]{SGA1}, implies the same result over separably closed fields (we note that in \cite{Esn}, it is in fact shown that the tame fundamental group is finitely presented when there is a good compactification, from which the finite presentation of the prime-to-$p$ part of the fundamental group is deduced).
For the proof of part (ii), we can always replace $X$ by an open subset, as this only results in a smaller set of places $V$. Thus, we can assume that $X$ is affine and smooth. In this case, part (ii)
follows from Theorem \ref{T:tori-GR} and the proposition below.
\begin{prop}\label{P:Z2}
Let $X$ be a geometrically integral smooth affine variety over a field $k$ whose absolute Galois group satisfies $(\mathrm{F}(p))$. Then the \'etale fundamental group $\pi_1^{\text{\'et}}(X)$ also satisfies $(\mathrm{F}(p))$.
\end{prop}
The proof is similar to the proof of Proposition \ref{P:Z1}, so we will use the same notations and utilize the exact sequence (\ref{E:Z1}).
As we discussed above, it follows from \cite[Proposition 3.1]{Esn} that the maximal prime-to-$p$ quotient $\pi_1^{\text{\'et}}(X^s, \bar{x})^{(p)}$ is finitely generated, hence satisfies $(\mathrm{F})$. Thus, the fundamental group $\pi_1^{\text{\'et}}(X^s, \bar{x})$ itself satisfies $(\mathrm{F}(p))$. Since the absolute Galois group $\mathrm{Gal}(k^{\mathrm{sep}}/k)$ satisfies $(\mathrm{F}(p))$ by our assumption, our claim is a consequence of the following.
\begin{lemma}
Let $1 \to \mathscr{E} \longrightarrow \mathscr{G} \stackrel{\alpha}{\longrightarrow} \mathscr{H} \to 1$ be an extension of profinite groups. If $\mathscr{E}$ and $\mathscr{H}$ satisfy $(\mathrm{F}(p))$, then so does $\mathscr{G}$.
\end{lemma}
\begin{proof}
Write the maximal prime-to-$p$ quotient $\mathscr{G}^{(p)}$ in the form $\mathscr{G}^{(p)} = \mathscr{G}/\mathscr{N}$, where $\mathscr{N}$ is the intersection of the kernels of all continuous homomorphisms $\varphi \colon \mathscr{G} \to \Phi$, with $\Phi$ a finite group of order prime to $p$. Then $\mathscr{G}^{(p)}$ fits into the exact sequence
$$
1 \to \mathscr{E}/ (\mathscr{E} \cap \mathscr{N}) \longrightarrow \mathscr{G}^{(p)} \longrightarrow \mathscr{H}/\alpha(\mathscr{N}) \to 1.
$$
It is easy to see that $\mathscr{E}/ (\mathscr{E} \cap \mathscr{N})$ and $\mathscr{H}/\alpha(\mathscr{N})$ are quotients of the maximal prime-to-$p$ quotients
$\mathscr{E}^{(p)}$ and $\mathscr{H}^{(p)}$, respectively. Since $\mathscr{E}$ and $\mathscr{H}$ satisfy $(\mathrm{F}(p))$, the groups $\mathscr{E}^{(p)}$ and $\mathscr{H}^{(p)}$ satisfy $(\mathrm{F})$, and therefore so do the groups $\mathscr{E}/ (\mathscr{E} \cap \mathscr{N})$ and $\mathscr{H}/\alpha(\mathscr{N})$.
Then by \cite[Lemma 2.7]{HH-small}, the group $\mathscr{G}^{(p)}$ also satisfies $(\mathrm{F})$, and therefore $\mathscr{G}$ satisfies $(\mathrm{F}(p))$.
\end{proof}
\vskip3mm
\section{Properness of the global-to-local map over function fields of algebraic varieties}\label{S:local-global}
In this section, $K$ will denote the function field $k(X)$ of a normal geometrically integral variety $X$ over a field $k$. Let $V$ be the set of discrete valuations of $K$ associated with the prime divisors of $X$.
\vskip1mm
\noindent {\it Proof of Theorem \ref{T:4}.} Assuming that the base field $k$ has characteristic zero and is of type $(\mathrm{F})$, we will prove that for any algebraic $K$-torus $T$, the Tate-Shafarevich group
$$
\Sha^1(T,V) = \ker \left(H^1 (K,T) \to \prod_{v \in V} H^1(K_v, T) \right)
$$
is finite. One can give an argument based on the method developed in the second proof of \cite[Theorem 1.2]{RR-tori}.
To avoid repetition, however, we will describe a slightly different approach that, in the case of finitely generated fields, was suggested by Colliot-Th\'el\`ene. We begin by fixing some notations. Let $U \subset X$ be a smooth open affine subvariety such that $T$ extends to a torus $\mathbb{T}$ over $U$ and let $V_U$ be the set of discrete valuations of $K$ corresponding to the points of codimension 1 of $U$. For each such $x \in U$, let $v_x$ be the associated valuation of $K$. Denote by $\mathcal{O}_{x}$ the local ring of $U$ at $x$ and by $\hat{\mathcal{O}}_{v_x}$ its completion with respect to $v_x.$ Note that the fraction field of the latter is simply the completion $K_{v_x}$ of $K$ with respect to $v_x.$ According to \cite[Proposition 2.2]{CTS78}, the natural maps
$$
\he^1(\hat{\mathcal{O}}_{v_x}, \mathbb{T}) \to H^1(K_{v_x}, T)
$$
are injective for all $v_x \in V_U.$ Consider the global-to-local map
$$
\lambda_{T, V_U} \colon H^1(K,T) \to \prod_{v_x \in V_U} H^1(K_{v_x}, T).
$$
We then have the following general statement that immediately implies Theorem \ref{T:4}.
\begin{prop}\label{P-ToriSha}
The inverse image of $\displaystyle{\prod_{v_x \in V_U}} \he^1(\hat{\mathcal{O}}_{v_x}, \mathbb{T})$ in $H^1(K,T)$ under the map $\lambda_{T,V_U}$ is finite.
\end{prop}
\begin{proof}
Set
$$
R = \lambda_{T, V_U}^{-1} \left(\prod_{v_x \in V_U} \he^1(\hat{\mathcal{O}}_{v_x}, \mathbb{T})\right) \subset H^1(K,T)
$$
and let $\xi \in R.$ First, according to \cite[Lemma 4.1.3]{Harder} (see also \cite[Lemma 4.1]{CTPS}), the fact that the $v_x$-component of $\lambda_{T, V_U}(\xi)$ is contained in the image of $\he^1(\hat{\mathcal{O}}_{v_x}, \mathbb{T}) \to H^1(K_{v_x}, T)$ implies that $\xi$ is contained in the image of the natural map $\he^1({\mathcal{O}}_{x}, \mathbb{T}) \to H^1(K, T)$, which is injective by \cite[Proposition 2.2]{CTS78}. Thus, $\xi$ lies in the image of $\he^1({\mathcal{O}}_{x}, \mathbb{T}) \to H^1(K, T)$ for all codimension 1 points $x \in U$, and hence by purity, $\xi$ is contained in the image of the natural map
$$
\he^1(U, \mathbb{T}) \to H^1(K,T)
$$
(see \cite[Proposition 4.1]{CTS78} and \cite[Corollaire 6.9]{CTS79}). On the other hand, since $k$ is of type (F), the image of the latter is finite by \cite[Proposition 3.3]{CTGP}. It follows that $R$ is finite, as claimed.
\end{proof}
Our next statement treats the question of the properness of the global-to-local map in the Galois cohomology of finite Galois $K$-modules in all characteristics. We let $p$ denote the characteristic exponent of $k$.
\begin{prop}\label{P:Finite2}
Let $K = k(X)$ be the function field of a geometrically integral normal variety $X$ defined over a field $k$, $V$ be the set of discrete valuations associated
with the prime divisors of $X$, and $\Omega$ be a finite (but not necessarily commutative) Galois module. Then in each of the following situations
\vskip2mm
{\rm (1)} $\dim X \geq 2$,
{\rm (2)} $X$ is a projective curve,
{\rm (3)} \parbox[t]{15cm}{$X$ is an arbitrary curve, but the order of $\Omega$ is prime to the characteristic exponent $p$ of $k$}
\vskip2mm
\noindent the map
$$
H^1(K , \Omega) \stackrel{\kappa}{\longrightarrow} \prod_{v \in V} H^1(K_v , \Omega)
$$
is proper.
\end{prop}
\begin{proof}
As in the proof of Proposition \ref{P:1}, by using twisting, it is enough to show that $\ker \kappa$ is finite, and then one can assume (which we will) without loss of generality that the Galois action on $\Omega$ is trivial. Under this assumption, one actually proves that in case (1), the kernel $\ker \kappa$ is trivial --- this is derived from Proposition \ref{P:FFF3} by repeating verbatim the argument used to derive the corresponding statement in the proof of Proposition \ref{P:1} from Proposition \ref{P:2}.
\vskip1mm
(2): Let $K_V$ be the compositum of all finite Galois extensions contained in $K^{\mathrm{sep}}$ that are unramified at all $v \in V$. By Lemma \ref{L-ZNPurity}, the field $K_V$ coincides with the compositum $K_X$ of all finite subextensions $L$ of $K^{\mathrm{sep}}$ such that the normalization of $X$ in $L$ is \'etale over $X$. Since $X$ is projective, as we noted in the proof of Proposition \ref{P:Z1}, the \'etale fundamental group $\mathscr{H} = \pi_1^{\text{\'et}}(X^s)$ of $X^s = X \times_{\mathrm{Spec}\: k} \mathrm{Spec}\: k^{\mathrm{sep}}$
is topologically finitely generated, hence satisfies condition $(\mathrm{F})$. Let $\widetilde{\mathscr{H}}$ be the absolute Galois group of $k^{\mathrm{sep}}(X)$, and let $\widetilde{M}$ be the fixed subfield for the intersection of the kernels of all continuous homomorphisms $\tilde{\chi} \colon \widetilde{\mathscr{H}} \to \Omega$ of the form $\tilde{\chi} = \chi \circ \nu$, where $\nu \colon \widetilde{\mathscr{H}} \to \mathscr{H}$ is the canonical epimorphism and $\chi \colon \mathscr{H} \to \Omega$ is a continuous homomorphism. Since the set $\mathrm{Hom}_{\mathrm{cont}}(\mathscr{H} , \Omega)$ is finite, $\widetilde{M}$ is a finite Galois extension of $k^{\mathrm{sep}}(X)$ having the following property: if $L/K$ is a finite Galois extension that is unramified at all $v \in V$ and the Galois group of which is isomorphic to a subgroup of $\Omega$, then $L \subset \widetilde{M}$. We pick a finite Galois extension $M$ of $K$ contained in $\widetilde{M}$ so that $\widetilde{M} = M \cdot k^{\mathrm{sep}}$. We then fix $v_0 \in V$ and its extension $w_0$ to $M$, and let $E$ denote the maximal separable extension of $k$ contained in the residue field $M^{(w_0)}$.
Now let $\chi \colon \mathscr{G} \to \Omega$ be a continuous homomorphism of the absolute Galois group $\mathscr{G} = \mathrm{Gal}(K^{\mathrm{sep}}/K)$ that lies in $\ker \kappa$, and let $L_{\chi}$ be the fixed field of $\ker \chi$. To prove the finiteness of $\ker \kappa$, it is enough to show that for any such $\chi$, the field $L_{\chi}$ is contained in the finite extension $ME$ of $K$. In any case, $L = L_{\chi}$ is a finite Galois extension of $K$ whose Galois group is isomorphic to a subgroup of $\Omega$ and which satisfies $L_w = K_v$ for all $v \in V$ and $w \vert v$. In particular, $L/K$ is unramified at all $v \in V$, and hence $L \subset \widetilde{M}$. Because of the canonical isomorphism of Galois groups $$\mathrm{Gal}(\widetilde{M}/M) \simeq \mathrm{Gal}(k^{\mathrm{sep}}/(k^{\mathrm{sep}} \cap M)),$$ we conclude that there is a finite separable extension $\ell/k$ such that $F=ML$ coincides with $M\ell$. Let $u_0$ be the extension of $w_0$ to $F$. Since the completion of $L$ with respect to the restriction of $u_0$ coincides with $K_{v_0}$, we see that $F_{u_0} = M_{w_0}$, implying the equality $F^{(u_0)} = M^{(w_0)}$ of the residue fields. On the other hand, $\ell \subset F^{(u_0)}$, hence $\ell \subset E$. Thus, $L \subset ML = M\ell \subset ME$, as required.
\vskip1mm
(3): Since in this case $\Omega$ is assumed to be of order prime to $p$, for the argument used to treat case (2) to work in this situation one only needs
to make sure that the prime-to-$p$ part $\pi^{\text{\'et}}_1(X^s)^{(p)}$ of the fundamental group is finitely generated, which was known for any curve $X$ (see \cite[Exp. XIII, Cor. 2.12]{SGA1}) even before \cite{Esn}.
\end{proof}
\noindent {\bf Remark 6.3.} In the case where $X$ is an affine curve over a field $k$ of characteristic $p > 0$ and $\Omega$ is a finite Galois module of order divisible by $p$ over the function field $K = k(X)$, which is excluded in the proposition, the finiteness assertion may be false. Indeed, let $k$ be an algebraically closed field of characteristic $p$ and $X = \mathbb{A}^1_k$, and let $\Omega = \mathbb{Z}/p\mathbb{Z}$ be the trivial Galois module over the field $K = k(X)$ of rational functions. As above, we let $V$ denote the set of places of $K$ corresponding to the closed points of $X$. If $L/K$ is a Galois extension of degree $p$ corresponding to some Artin-Schreier cover $Y \to X$, then any $v \in V$ is unramified in $L$, implying that $L_w = K_v$ for $w \vert v$ since the residue
field $K^{(v)} = k$ is algebraically closed. This means that a character $\chi \colon \mathscr{G} \to \Omega$ of the absolute Galois group $\mathscr{G} = \mathrm{Gal}(K^{\mathrm{sep}}/K)$ whose kernel is the subgroup corresponding to $L$ lies in the kernel of the global-to-local map $\kappa \colon H^1(K , \Omega) \to \prod_{v \in V} H^1(K_v , \Omega)$. Since $X$ has infinitely many distinct Artin-Schreier covers, $\kappa$ is {\it not} proper in this case.
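\vskip1mm
\noindent For the reader's convenience, we recall how such covers can be exhibited explicitly; this is standard Artin-Schreier theory and is included purely as an illustration. For $f \in k[x]$, the equation
$$
y^p - y = f(x)
$$
defines a $\mathbb{Z}/p\mathbb{Z}$-cover of $X = \mathbb{A}^1_k$ that is \'etale over every closed point, and two such equations define the same extension of $K$ if and only if the classes of the corresponding polynomials in $K/\wp(K)$, where $\wp(t) = t^p - t$, span the same $\mathbb{F}_p$-line. Since for distinct exponents $m, m'$ prime to $p$ no nonzero $\mathbb{F}_p$-linear combination of $x^m$ and $x^{m'}$ lies in $\wp(K)$, the polynomials $f = x^m$ with $m$ prime to $p$ already produce infinitely many pairwise distinct Artin-Schreier covers of $X$.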
\vskip3mm
\addtocounter{thm}{1}
With $K$ and $V$ as in Proposition \ref{P:Finite2}, we have the following statement.
\begin{prop}\label{P:Finite3}
Let $G$ be a connected reductive $K$-group. Fix a maximal $K$-torus $T$ of $G$ and let $\mathscr{C}(T)$ be the set of all maximal $K$-tori $T'$ of $G$ such that $T$ and $T'$ are $G(K_v)$-conjugate for all $v \in V$. Then, with the exception of the following case
\vskip2mm
\noindent $(*)$ \parbox[t]{15cm}{$X$ is an affine curve and the order of the Weyl group $W(G , T)$ is divisible by $p$,}
\vskip2mm
\noindent $\mathscr{C}(T)$ consists of finitely many $K$-isomorphism classes.
\end{prop}
\begin{proof}
We will freely use the notations introduced in the proof of Theorem \ref{T:2A} --- see \S\ref{S:Proofs}. We will identify the Weyl group $W = W(G , T)$ with the quotient $N/T$, and for a field extension $F/K$ consider the natural map $\delta_F \colon H^1(F , N) \to H^1(F , W)$. It is well-known that if $T_1 , T_2 \in \mathscr{T}(F)$ are such that
\begin{equation}\label{E:equal}
\delta_F(\gamma_F(T_1)) = \delta_F(\gamma_F(T_2))
\end{equation}
then $T_1$ and $T_2$ are $F$-isomorphic. Indeed, using twisting, one easily reduces the argument to the case where $T_1 = T$. To be consistent with the notations used in the proof of Theorem \ref{T:2A}, set $T' = T_2$, pick $g \in G(F^{\mathrm{sep}})$ such that $T' = g T g^{-1}$, and define $\xi(\sigma) = g^{-1} \sigma(g)$ for $\sigma \in \mathrm{Gal}(F^{\mathrm{sep}}/F)$. Then $\gamma_F(T')$ is the class of the cocycle $\xi = \{ \xi(\sigma) \}$ in $H^1(F , N)$. Condition (\ref{E:equal}) yields
that $\delta_F(\xi)$ is trivial, so the exact sequence
$$
H^1(F , T) \longrightarrow H^1(F , N) \stackrel{\delta_F}{\longrightarrow} H^1(F , W)
$$
tells us that $\xi$ is equivalent to a cocycle with values in $T(F^{\mathrm{sep}})$. This means that the element $g$ can be chosen so that $\xi(\sigma) = g^{-1} \sigma(g)$ belongs to $T(F^{\mathrm{sep}})$ for all $\sigma \in \mathrm{Gal}(F^{\mathrm{sep}}/F)$. Using the fact that $T$ is commutative, one can easily see that the isomorphism $T \to T'$, $x \mapsto g x g^{-1}$, is defined over $F$.
Having recalled these facts, we are now ready to complete the proof of the proposition. It is enough to prove that the set $\delta_K(\gamma_K(\mathscr{C}(T)))$ is finite. By construction, $\gamma_{K_v}(\mathscr{C}(T))$ consists only of the trivial class for any $v \in V$. So, using the commutative diagram
$$
\xymatrix{H^1(K,N) \ar[rr]^{\delta_K} \ar[d] & & H^1(K, W) \ar[d]^{\kappa} \\ \displaystyle{\prod_{v \in V}} H^1(K_v, N) \ar[rr]^{\prod \delta_{K_v}} & & \displaystyle{\prod_{v \in V}} H^1(K_v, W)}
$$
we see that $\delta_K(\gamma_K(\mathscr{C}(T))) \subset \ker \kappa$. Since we exclude the case $(*)$, the finiteness of $\ker \kappa$ follows from Proposition \ref{P:Finite2}, completing the argument.
\end{proof}
\vskip1mm
\noindent {\bf Remark 6.5.} As in Remark 6.3, suppose $k$ is an algebraically closed field of characteristic $p > 0$, and let $K = k(X)$ be the function field of $X = \mathbb{A}^1_k$ and $V$ be the set of discrete valuations of $K$ associated with the closed points of $X$. We have seen that $K$ has an infinite family $L_1, L_2, \ldots $ of degree $p$ cyclic extensions corresponding to the Artin-Schreier covers of $X$, and that for every $i$ we have $L_{i w} = K_v$ for all $v \in V$ and $w \vert v$. Then the corresponding maximal tori $T_i = \mathrm{R}_{L_i/K}(G_m)$ of $G = \mathrm{GL}_p$ are pairwise non-isomorphic over $K$ (since they have different splitting fields). On the other hand, for any $v \in V$, all of them become split over $K_v$, hence are $G(K_v)$-conjugate. This demonstrates that the case $(*)$ in Proposition \ref{P:Finite3} is an honest exception.
\vskip3mm
\section{Simple groups with good reduction over the function fields of complex surfaces}\label{S:CSurf}
The goal of this section is to prove Theorem \ref{T:CSurf}.
The key ingredient in the proof is the following fundamental result on Serre's Conjecture II (cf. \cite{BP}, \cite[Theorem 1.2]{CTGP}, \cite{Gille}, \cite[Theorem 1.6]{dJHS}).
\begin{thm}\label{T-SerreSurf}
Let $k$ be an algebraically closed field of characteristic 0 and $K = k(S)$ be the function field of a surface $S$ over $k$. Then $H^1(K, G) = 1$ for any absolutely almost simple simply connected $K$-group $G$.
\end{thm}
Let $\mathcal{G}$ be an absolutely almost simple simply connected group over a field $\mathcal{K}$, and let $\mathcal{L}$ be the minimal Galois extension of $\mathcal{K}$ over which $\mathcal{G}$ becomes an inner form of a split group. It is well-known that the degree $[\mathcal{L} : \mathcal{K}]$ can only be $1$, $2$, $3$, or $6$. Furthermore, if $\mathcal{G}$ has good reduction at a discrete valuation $v$ of $\mathcal{K}$, then the extension $\mathcal{L}/\mathcal{K}$ is unramified at $v$. Thus, in the situation of Theorem \ref{T:CSurf}, if $G'$ is a $K$-form of $G$ that has good reduction at all $v \in V$, then the corresponding minimal Galois extension $L'$ of $K$ over which $G'$ becomes an inner form of a split group has degree $1$, $2$, $3$, or $6$ and is unramified at all $v \in V$. It follows that $L'$ corresponds to an \'etale cover of $S$ (cf. Lemma \ref{L-ZNPurity}). Since the \'etale fundamental group of $S$ is topologically finitely generated (see \cite[Exp. X, Th. 2.9]{SGA1}), there are only finitely many possibilities $L_1, \ldots , L_r$ for the field $L'$ as $G'$ runs through all possible $K$-forms of $G$ having good reduction at all $v \in V$. Let $G^{(i)}$ be the quasi-split form of $G$ associated with $L_i$. Then any $K$-form $G'$ of $G$ with good reduction at all $v \in V$ is an {\it inner} form of one of the $G^{(i)}$'s. Thus, in order to prove Theorem \ref{T:CSurf}, it is enough to establish the following:
\vskip2mm
\noindent $(*)$ \parbox[t]{15.5cm}{{\it if $G$ is an absolutely almost simple simply connected quasi-split $K$-group, then the set of $K$-isomorphism classes of \underline{inner} forms of $G$ that have good reduction at all $v \in V$ is finite.}}
\vskip2mm
\noindent To prove this statement, we consider the exact sequence
$$
1 \to F \to G \to \overline{G} \to 1,
$$
where $F$ is the center of $G$ and $\overline{G}$ is the corresponding adjoint group, which gives rise to a coboundary map
$$
\theta_K \colon H^1(K, \overline{G}) \to H^2(K,F).
$$
Then Theorem \ref{T-SerreSurf}, in conjunction with the standard argument based on twisting (cf. \cite[\S1.3.2]{Pl-R}, \cite[Ch. I, \S5]{Serre-GC}), shows that $\theta_K$ is injective. Now, let $G'$ be an inner form of $G$ that has good reduction at all $v \in V$. Then $G'$ is obtained from $G$ by twisting by a cocycle with values in the group of inner automorphisms of $G$, which we identify with $\overline{G}$. Let $\xi \in H^1(K, \overline{G})$ be the corresponding cohomology class. For any $v \in V$, the residue field $K^{(v)}$ is the function field of a complex curve, hence has cohomological dimension $\leq 1$
by Tsen's theorem (cf. \cite[Ch. II, 3.3]{Serre-GC}). By our assumption, the reduction ${G'}^{(v)}$ is a connected reductive (in fact, absolutely almost simple) group. Applying Steinberg's theorem (see \cite[Ch. III, \S2.3, Theorem 1$'$]{Serre-GC}), we conclude that ${G'}^{(v)}$ is quasi-split. Then, by Hensel's lemma, $G'$ is quasi-split over $K_v$. This being true for all $v \in V$, we see that $\xi$ lies in the kernel of the global-to-local map
$$
H^1(K, \overline{G}) \stackrel{\lambda_{\overline{G}, V}}{\longrightarrow} \prod_{v \in V} H^1(K_v, \overline{G}).
$$
So, it remains to establish the finiteness of this kernel. For this we will use the commutative diagram
$$
\xymatrix{H^1(K, \overline{G}) \ar[r]^{\lambda_{\overline{G}, V}} \ar[d]_{\theta_K} & \displaystyle{\prod_{v \in V}} H^1(K_v, \overline{G}) \ar[d]^{\theta_{K_v}} \\ H^2(K, F) \ar[r]^{\lambda^2_{F,V}} & \displaystyle{\prod_{v \in V}} H^2(K_v, F)}.
$$
Clearly,
$$
\theta_K(\ker \lambda_{\overline{G}, V}) \subset \ker \lambda^2_{F, V}.
$$
As we remarked above, the map $\theta_K$ is injective, so it is enough to prove the finiteness of $\ker \lambda^2_{F, V}$. Replacing $S$ by an open subvariety if necessary, we can assume that $F$ extends to a group scheme $\mathbb{F}$ over $S$. Then a consequence of purity for smooth varieties over fields is that $\ker \lambda^2_{F,V}$ is contained in the image of the natural map $\he^2(S, \mathbb{F}) \to H^2(K, F)$ (see \cite[\S3.4]{CT-SB}). Since $k$ is algebraically closed, the group $\he^2(S, \mathbb{F})$ is finite according to \cite[Expos\'e XVI, Th\'eor\`eme 5.2]{SGA4}. (We note that a similar argument was used in the proof of \cite[Proposition 3.3]{CTGP}).
This completes the proof of $(*)$ and hence yields Theorem \ref{T:CSurf}.
\vskip1cm
\bibliographystyle{amsplain}
|
1,116,691,500,348 | arxiv | \section{Introduction}
\subsection{Models for in-vivo virus dynamics}
A number of population dynamics models have been proposed in order to describe the HIV in-vivo dynamics \cite{Perelson:Nelson:1999,Nowak:May:2000}. Although these models have distinct features, as they attempt to incorporate different aspects of the interaction between the virus and the immune system, many of them share a common long-term behaviour: they evolve towards an isolated equilibrium state \cite{Nowak:May:2000}.
The basic model for the HIV in-vivo dynamics is given by a three-by-three, first-order system of ordinary differential equations (ODEs) \cite{Nowak:Bangham:1996,Bonhoeffer:May:Nowak:1997,Nowak:May:2000}:
\begin{equation}
\left\{
\begin{array}{rcl}
\dot{x}&=&\lambda-dx-\beta xv,\\
\dot{y}&=&\beta x v - ay,\\
\dot{v}&=& ky-uv.\\
\end{array}
\right.
\label{nb:basic}
\end{equation}
In this model, $x$ denotes the uninfected cells, $y$ the infected cells and $v$ the free virus particles. The average lifetime of an infected cell is $1/a$, while the average lifetime of a virus particle is $1/u$. The total number of virus particles produced by an infected cell is $k/a$. Healthy cells are infected at a rate $\beta xv$. New CD4+ T cells are produced, in the thymus, at a rate $\lambda$, and die at a rate $dx$.
System \eqref{nb:basic} has two equilibrium points:
\begin{enumerate}
\item The disease free equilibrium: $x^*=\lambda/d$, $y^*=v^*=0$;
\item The endemic equilibrium: $x^*=au/\beta k$, $y^*=(\beta\lambda k - dau)/\beta a k$, $v^*=(\beta\lambda k - dau)/\beta a u$.
\end{enumerate}
The long-term dynamics of System~\eqref{nb:basic} can be entirely described in terms of the dimensionless
parameter
\begin{equation}
R_0=\frac{\beta\lambda k}{dau},
\label{r0:def}
\end{equation}
also known as the \textit{basic reproductive ratio}.
If $R_0\le 1$, the disease free equilibrium is a global attractor, and the infection cannot persist. If $R_0>1$, the endemic equilibrium becomes a global attractor, and the infection persists indefinitely. This was first observed numerically~\cite{Nowak:Bangham:1996,Bonhoeffer:May:Nowak:1997,Nowak:May:2000}. Mathematical proofs of these global stability characteristics were given by \citeasnoun{Li:Muldowney:1995}, using Hirsch's theory of competitive differential systems---see \citeasnoun{Smith:1995}---and, more recently, by \citeasnoun{Leenheer:Smith:2003} and \citeasnoun{Korobeinikov:2004} using a Lyapunov function approach.
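The threshold behaviour described above is easy to probe numerically. The following is a minimal sketch (in Python, using \texttt{scipy}); the parameter values are hypothetical and chosen purely for illustration, and are not taken from the references.
\begin{verbatim}
# Minimal numerical check of the R0 threshold for the basic model (1).
# All parameter values below are hypothetical, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def basic_model(t, w, lam, d, beta, a, k, u):
    """Right-hand side of system (1); w = (x, y, v)."""
    x, y, v = w
    return [lam - d*x - beta*x*v,
            beta*x*v - a*y,
            k*y - u*v]

lam, d, a, u, k = 1.0, 0.1, 0.5, 5.0, 10.0
for beta in (1e-3, 1e-1):          # R0 below and above 1
    R0 = beta*lam*k/(d*a*u)
    sol = solve_ivp(basic_model, (0.0, 2000.0), [lam/d, 0.1, 0.1],
                    args=(lam, d, beta, a, k, u), rtol=1e-8, atol=1e-10)
    print(R0, sol.y[:, -1])  # x -> lam/d if R0 <= 1, else x -> a*u/(beta*k)
\end{verbatim}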
Given the notable ability of the HIV to escape from immune response, there is interest in studying models that account for a more detailed immune response, such as the role of cytotoxic T lymphocytes (CTLs). An example is the following four-by-four system of ODEs \cite{Nowak:Bangham:1996}:
\begin{equation}
\left\{
\begin{array}{rcl}
\dot{x}&=&\lambda-dx-\beta xv,\\
\dot{y}&=&\beta x v - ay - pyz,\\
\dot{v}&=& ky-uv,\\
\dot{z}&=&cyz-bz.
\end{array}
\right.
\label{nb:immune}
\end{equation}
System \eqref{nb:immune} extends \eqref{nb:basic} by introducing the $z$ variable, that denotes the CTL response. Infected cells are killed at a rate $pyz$, while antigen stimulation produces CTL cells at a rate $cyz$. In the absence of such a stimulation, CTL cells decay at a rate $bz$.
In the same vein, the high mutation rate of HIV naturally leads to the study of the interplay between immune response and virus diversity for a number of different strains. The immune response produces a selection pressure on these different strains of the virus, as discussed in \citeasnoun{Nowak:Bangham:1996}, when studying and numerically analysing a $(3n+1)$-by-$(3n+1)$ first-order ODE system of the form:
\begin{equation}
\left\{
\begin{array}{rclr}
\dot{x}&=&\lambda-dx-x\sum_{i=1}^n\beta_i v_i,& \\
\dot{y_i}&=&\beta_i x v_i - a_iy_i - p_iy_iz_i,& i=1,\ldots,n,\\
\dot{v_i}&=& k_iy_i-u_iv_i, & i=1,\ldots,n,\\
\dot{z_i}&=&c_iy_iz_i-b_iz_i, & i=1,\ldots,n.
\end{array}
\right.
\label{nb:antigenic}
\end{equation}
System \eqref{nb:antigenic} is a slightly generalised form of the system studied by \citeasnoun{Nowak:Bangham:1996}, where the more restricted case $a_i=a$, $p_i=p$, $u_i=u$, $c_i=c$, and $b_i=b$ was addressed.
In all these models, there is the question as to whether the long-term dynamics approaches an equilibrium or, more generally, an attractor, and how this might depend on the initial condition. There is compelling numerical evidence---cf. \cite{Nowak:Bangham:1996,Bonhoeffer:May:Nowak:1997,Nowak:May:2000}---that Systems \eqref{nb:immune} and \eqref{nb:antigenic} are globally asymptotically stable. However, no mathematical proof of this fact seems to be available. In System~\eqref{nb:antigenic} there is also the question of determining the antigenic diversity at the equilibrium.
In this work, we study the stability characteristics of the models given by \eqref{nb:antigenic} following a Lyapunov approach. The Lyapunov functional used here has been used before by \citeasnoun{Korobeinikov:Wake:1999}, in the global analysis of three-dimensional predator-prey systems, and by \citeasnoun{Korobeinikov:2004} and \citeasnoun{Korobeinikov:2004b} in the global analysis of various virus models. More precisely, by using an appropriate linear combination of this Lyapunov functional, we are able to show global asymptotic stability results for System \eqref{nb:antigenic}, and hence for \eqref{nb:immune}.
The plan for this article goes as follows: We close this introductory section with further biological background and motivations. In Section~\ref{prelims} we address some preliminary issues such as choice of dimensionless variables and parameter reductions.
In Section~\ref{antimodel}, we study the equilibria and global stability of model~\eqref{nb:antigenic} under the assumption of unique fitnesses of the strains; since model~\eqref{nb:immune} is the single-strain case of \eqref{nb:antigenic}, its global stability characteristics follow as a special case. We determine the possible equilibria of \eqref{nb:antigenic}, and show that there are $2^{n-1}(n+2)$ equilibrium points. We also show that the system is globally asymptotically stable, and we determine the global attractor in the nonnegative orthant of $\mathbb{R}^{3n+1}$. As a byproduct, we characterise the attained diversity and show that it is monotonically increasing with the strength of the immune response. Some additional results for the case of nonunique fitnesses are also presented.
We conclude in Section~\ref{conclusions} with a discussion of some of the implications
of our results.
\subsection{Biological Background and Motivation}
The model given by \eqref{nb:antigenic} has many potential biological applications, most notably to within-host infections in which
cytotoxic T lymphocytes interact with antigenic variation, including, but not restricted to, HIV infection.
A better understanding of how the within-host HIV interacts with immune cells seems
to be a key factor in the development of effective long-term therapies or possibly preventive vaccines
for deadly diseases such as the acquired immunodeficiency syndrome \cite{Nowak:May:2000}.
Mathematical modeling of the underlying biological mechanisms and a good understanding of the theoretical implications of such models is crucial in this process.
Indeed, it helps in clarifying and testing assumptions, in finding the smallest number of determining factors needed to explain the biological phenomena, and in analysing the experimental results \cite{AB2003}.
Furthermore, modeling has already impacted research at the molecular level \cite{Nowak:May:2000} and important
results have been obtained in modeling the virus dynamics for several infections, such as the
HIV \cite{Nowak:Bangham:1996,PKD1993,PNML1996}, hepatitis B \cite{MRB1991}, hepatitis C \cite{NLDG1998}, and influenza \cite{BR1994a}.
In the particular case of the HIV infection, the dynamics of the within-host infection goes as follows: First, the HIV enters a T cell.
Being a retrovirus, once the HIV is
inside the T~cell, it makes a DNA copy of its viral RNA. For this process it requires the reverse transcriptase (RT) enzyme. The DNA of the virus is then inserted in the T-cell's DNA. The latter in turn will produce viral particles that can bud off the T~cell to infect other ones. Before one such viral particle leaves
the infected cell, it must be equipped with {\em protease}, which is an enzyme used to cleave a long
protein chain. Without protease the virus particle is incapable of infecting other T cells.
One of the key characteristics of HIV is its extensive genetic variability. In fact, the HIV seems
to be changing continuously in the course of each infection, and typically the virus strain that initiates the patient's infection differs from the one found a year or more after the infection. In this respect, the introduction of the different strains in the model
is crucial for it to be realistic.
In general terms, one can also say that Model (\ref{nb:antigenic}) is similar in spirit to other models such as food-chain models. The latter have attracted substantial interest from a number of authors. See for example~\cite{roysolimano,kooibsb2001,kooibsb1998} and references therein.
However, the presence of more general quadratic terms or of logistic terms on the
right hand side of the different ``strains'' leads to a potentially richer dynamics than
the globally stable dynamics present in Model~\eqref{nb:antigenic}. See for example \cite{kooibsb2001} for a bifurcation analysis of certain food chain models.
\section{Preliminaries} \label{prelims}
\subsection{Parameter reduction}
As already noticed in the introduction, a more restricted form of \eqref{nb:antigenic} with a number of parameters being strain-independent has been studied by \citeasnoun{Nowak:Bangham:1996}. It turns out that some parameters in \eqref{nb:antigenic} can indeed be taken to be strain-independent, as we now show.
We start by noting that if $k_i=0$, then $v_i$ decays exponentially with rate $u_i$. Also, if $p_i=0$, then the dynamics of $z_i$ does not impinge on the rest of the system. Thus, without loss of generality, we can assume that $k_i,p_i\not=0$, $i=1,\ldots,n$. In this case, following \citeasnoun{Pastore:2005}, we rescale the $v_i$s and $\beta_i$s. In addition, we also rescale the $z_i$s. More precisely, the change of variables
\begin{equation}
v_i\mapsto \frac{k_i}{k}v_i, \quad
z_i\mapsto \frac{p}{p_i}z_i
\quad\text{and}\quad
\beta_i \mapsto \frac{k}{k_i}\beta_i
\label{red:change}
\end{equation}
takes \eqref{nb:antigenic} into
\begin{equation}
\left\{
\begin{array}{rclr}
\dot{x}&=&\lambda-dx-x\sum_{i=1}^n\beta_i v_i,& \\
\dot{y_i}&=&\beta_i x v_i - a_iy_i - py_iz_i,& i=1,\ldots,n,\\
\dot{v_i}&=& ky_i-u_iv_i, & i=1,\ldots,n,\\
\dot{z_i}&=&c_iy_iz_i-b_iz_i, & i=1,\ldots,n.
\end{array}
\right.
\label{our:antigenic}
\end{equation}
Intuitively, the change of variables \eqref{red:change} reflects that only the product $\beta_i k_i$ turns out to be important, and that this can be already taken into account in the $\beta_i$s, provided we rescale the $v_i$s. Moreover, it also shows that the precise value of $p_i$ does not matter, as long as it is nonzero.
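For the reader's convenience, this can be checked directly: performing the substitutions \eqref{red:change} in the second and third equations of \eqref{nb:antigenic} gives
\[
\dot{y_i}=\Big(\frac{k}{k_i}\beta_i\Big)x\Big(\frac{k_i}{k}v_i\Big)-a_iy_i-p_i\Big(\frac{p}{p_i}\Big)y_iz_i=\beta_ixv_i-a_iy_i-py_iz_i,
\qquad
\frac{k_i}{k}\dot{v_i}=k_iy_i-u_i\,\frac{k_i}{k}v_i,
\]
so that $\dot{v_i}=ky_i-u_iv_i$, while the equations for $x$ and the $z_i$ are unchanged, in agreement with \eqref{our:antigenic}.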
\subsection{Dimensionless constants}
In \citeasnoun{Nowak:Bangham:1996}, it was already observed that, in addition to $R_0$, the quantity $cy^*/b$ is also important in determining the global equilibria. In a more precise fashion, \citeasnoun{Nowak:May:2000} define
\[
R_{I}=1+\frac{\beta b k}{c d u},
\]
which they term the basic reproductive ratio in the presence of immune response.
However, we follow \citeasnoun{Pastore:2005}, and find it more convenient to write
\[
R_{I}=1+\frac{R_0}{I_0},
\]
where
\[
I_0=\frac{c\lambda}{ab}.
\]
An alternative dimensionless constant is the CTL reproduction number given by
\[
P_0=\frac{I_0(R_0-1)}{R_0}
\]
Although only two of the constants $R_0$, $I_0$ and $P_0$ are independent, and these are sufficient to describe the regimes of \eqref{our:antigenic}, we have chosen to use all three, as some conditions are better characterised by $P_0$, while much of the algebra in the Lyapunov functional derivatives is better handled by expressing them in terms of $R_0$ and $I_0$. Thus, we shall use the strain-dependent constants:
\begin{equation}
R_0^i=\frac{\beta_i\lambda k}{da_iu_i},\quad I_0^i=\frac{c_i\lambda}{a_ib_i}\quad
\text{and}\quad P_0^i=\frac{I_0^i(R_0^i-1)}{R_0^i}.
\end{equation}
\subsection{Strain sets}
In order to deal with the plethora of equilibria that arises in System~\eqref{our:antigenic}, we shall now define some notation for certain special sets of strain indices. This will allow us to deal conveniently with the combinatorial structure of the equilibria.
Without loss of generality, we shall assume that the strains are indexed in decreasing order of $R_0^i$.
Let $\mathcal{N}=\{1,2,\ldots,n\}$. Then, we define the set of the strong responders as
\[
\mathcal{S}=\{i \in\mathcal{N} \,| \,P_0^i>1\}.
\]
\begin{defi}
We shall say that the set $\mathcal{S}$ of strong responders is consistent, if
\[
\mathcal{S}=\{1,\ldots,m\}, \quad 1\leq m\leq n.
\]
This is certainly the case if the $I_0^i$ satisfy $I_0^{i}\geq I_0^{i+1}$. In particular, this holds if $I_0^i=I_0$, as in the model studied by \citeasnoun{Nowak:Bangham:1996}.
\end{defi}
Given a set of indices $\mathcal{I}$, we define
\[
\rho_0^\mathcal{I}=\sum_{i\in\mathcal{I}}\frac{R_0^i}{I_0^i}.
\]
Two important definitions are given below:
\begin{defi}
We shall say that $\mathcal{I}\subset\mathcal{S}$ is an antigenic set, if
\begin{equation}
R_0^i\geq 1 + \rho_0^\mathcal{I},\quad i\in\mathcal{I}
\label{ineq:cond:1}
\end{equation}
holds. In addition, if
\begin{equation}
R_0^i\leq 1 + \rho_0^\mathcal{I},\, i\not\in\mathcal{I}.\label{ineq:cond:2}
\end{equation}
also holds, we shall say that $\mathcal{I}$ is a stable antigenic set.
Notice that, if $\mathcal{S}\not=\emptyset$, we have that $\mathcal{I}=\{1\}$ is an antigenic set. Let $l$ be the largest integer for which the set $\mathcal{J}=\{1,\ldots,l\}$ is an antigenic set. Then we shall say that $\mathcal{J}$ is the maximal antigenic set.
\end{defi}
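To illustrate these notions on a concrete (purely hypothetical) configuration, take $n=3$, $I_0^i=4$ for all $i$, and $(R_0^1,R_0^2,R_0^3)=(8,5,2)$. Then $P_0^i=4(1-1/R_0^i)$ gives $(P_0^1,P_0^2,P_0^3)=(3.5,3.2,2)$, so that $\mathcal{S}=\{1,2,3\}$ is consistent, and
\[
\rho_0^{\{1\}}=2,\qquad \rho_0^{\{1,2\}}=2+\tfrac{5}{4}=3.25,\qquad \rho_0^{\{1,2,3\}}=3.75.
\]
The set $\{1,2\}$ is antigenic, since $R_0^2=5\geq 1+3.25$, while $\{1,2,3\}$ is not, since $R_0^3=2<1+3.75$; hence $\mathcal{J}=\{1,2\}$ is the maximal antigenic set. Moreover, $R_0^3=2\leq1+\rho_0^{\{1,2\}}=4.25$, so that $\mathcal{J}$ is in fact a stable antigenic set.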
Two important facts about the maximal and stable antigenic sets are collected below:
\begin{lem}
\label{lem:sets}
Assume that $\mathcal{S}\not=\emptyset$, and that the strain basic reproductive numbers are distinct.
\begin{enumerate}
\item
If a stable antigenic set exists, then it is also the maximal antigenic set. In particular, stable antigenic sets are unique.
\item If $\mathcal{N}$ is the maximal antigenic set, then it is a stable antigenic set.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Assume that $\mathcal{I}$ is a stable antigenic set and let $l=\max\mathcal{I}$. If $1\leq k<l$, then $k\in\mathcal{I}$. Indeed, if $k\not\in\mathcal{I}$, then by the ordering of the strains and \eqref{ineq:cond:1}, we have that
\[
R_0^{k}> R_0^{l}\geq 1+\rho_0^\mathcal{I}.
\]
But this contradicts \eqref{ineq:cond:2}, thus we must have $k\in\mathcal{I}$, and so $\mathcal{I}=\{1,\ldots,l\}$.
Now assume that $\mathcal{I}'=\{1,\ldots,l+1\}$ is also an antigenic set. Then we must have
\[
R_0^{l+1}\geq 1 + \rho_0^{\mathcal{I}'}= 1 + \rho_0^\mathcal{I}+\frac{R_0^{l+1}}{I_0^{l+1}}> 1+\rho_0^\mathcal{I}.
\]
But this again contradicts \eqref{ineq:cond:2} and, hence, the assumption that $\mathcal{I}$ is stable antigenic. Therefore, $\mathcal{I}$ is maximal.
\item This follows since, in this case, \eqref{ineq:cond:2} cannot be violated.
\end{enumerate}
\end{proof}
\section{The model with antigenic variation}\label{antimodel}
In this section we shall study the stability of \eqref{our:antigenic} in the non-negative orthant of $\mathbb{R}^{3n+1}$, which we shall denote by $\O$. The positive orthant will be denoted by $\O^+$.
We observe that the planes $z_i=0$ and the orthant $\O$
are positively invariant sets for \eqref{nb:antigenic}, since the vector field points inwards.
The equilibria and stability characteristics of \eqref{our:antigenic} depend significantly on whether the $R_0^i$s are distinct or not. In \S 3.1 and \S 3.2, we describe the equilibria and study their stability in the case of unique fitnesses, i.e., we assume that if $R_0^i=R_0^j$, then $i=j$. In this case, with the adopted order, we have that
\[
R_0^{i}>R_0^{i+1},\quad i=1,\ldots,n-1.
\]
Additional remarks when the fitnesses are not unique can be found in Section 3.3.
\subsection{Equilibria}
Recall that $\mathcal{N}=\{1,2,\ldots,n\}$. It turns out that the equilibria of \eqref{our:antigenic} can be conveniently indexed by pairs $(j,\mathcal{J})$, where $\mathcal{J}\subseteq\mathcal{N}$, and either $j=0$ or $j\not\in\mathcal{J}$. The corresponding equilibrium point will be denoted by $X_{j,\mathcal{J}}$.
Using this notation, we have
\begin{lem}
\label{lem:eq}
System \eqref{our:antigenic} has $2^{n-1}(2+n)$ equilibrium points which can be written as
\[
X_{j,\mathcal{J}} = \left(\frac{\lambda}{d}Q_{j,\mathcal{J}}^x,\frac{\lambda}{a_1}Q_{j,\mathcal{J}}^{y_1},\ldots,\frac{\lambda}{a_n}Q_{j,\mathcal{J}}^{y_n}
,\frac{d}{\beta_1}Q_{j,\mathcal{J}}^{v_1},\ldots,\frac{d}{\beta_n}Q_{j,\mathcal{J}}^{v_n},\frac{a_1}{p}Q_{j,\mathcal{J}}^{z_1},
\ldots,\frac{a_n}{p}Q_{j,\mathcal{J}}^{z_n}\right),
\]
where
\begin{enumerate}
\item
\[
Q_{0,\emptyset}^x=1,\quad\text{and}\quad Q_{0,\emptyset}^{y_i}=Q_{0,\emptyset}^{v_i}=Q_{0,\emptyset}^{z_i}=0.
\]
\item If $j$ is such that $1\leq j \leq n$, then we have
\[
Q_{j,\emptyset}^x=\frac{1}{R_0^j},\quad Q_{j,\emptyset}^{y_j}=1-\frac{1}{R_0^j},\quad
Q_{j,\emptyset}^{v_j}=R_0^j-1\quad\text{and}\quad Q_{j,\emptyset}^{z_j}=0.
\]
and
\[
Q_{j,\emptyset}^{y_i}=Q_{j,\emptyset}^{v_i}=Q_{j,\emptyset}^{z_i}=0,\quad i=1,\ldots,n,\quad i\not=j.
\]
\item Given $\mathcal{J}\subseteq \mathcal{N}$, we have
\[
Q_{0,\mathcal{J}}^x=\frac{1}{1+\rho_0^{\cj}}
\]
and
\[
Q_{0,\mathcal{J}}^{y_i}=\frac{1}{I_0^i},\quad Q_{0,\mathcal{J}}^{v_i}=\frac{R_0^i}{I_0^i},\quad Q_{0,\mathcal{J}}^{z_i}=\frac{R_0^i}{1+\rho_0^{\cj}}-1, \quad i\in\mathcal{J};
\]
also
\[
Q_{0,\mathcal{J}}^{y_i}= Q_{0,\mathcal{J}}^{v_i}= Q_{0,\mathcal{J}}^{z_i}=0,\quad i\not\in\mathcal{J}.
\]
\item Given a proper subset $\mathcal{J}\subset \mathcal{N}$, and $1\leq j'\leq n, j'\not\in \mathcal{J}$, we have that
\[
Q_{j',\mathcal{J}}^x=\frac{1}{R_0^{j'}},\quad Q_{j',\mathcal{J}}^{y_{j'}}=1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}},\quad
Q_{j',\mathcal{J}}^{v_{j'}}=R_0^{j'}-1-\rho_0^{\cj},\quad Q_{j',\mathcal{J}}^{z_{j'}}=0;
\]
for $i\in \mathcal{J}$, we have
\[
Q_{j',\mathcal{J}}^{y_i}=\frac{1}{I_0^i},\quad Q_{j',\mathcal{J}}^{v_i}=\frac{R_0^i}{I_0^i}, \quad
Q_{j',\mathcal{J}}^{z_i}=\frac{R_0^i}{R_0^{j'}}-1.
\]
For $i\not\in \mathcal{J}$, and $i\not=j'$, we have
\[
Q_{j',\mathcal{J}}^{y_i}=Q_{j',\mathcal{J}}^{v_i}=Q_{j',\mathcal{J}}^{z_i}=0.
\]
\end{enumerate}
\end{lem}
\begin{proof}
The first equilibrium is trivial. The second type of equilibria is obtained by choosing an index $j$ such that $z_j=0$, but $y_j\not=0$. We can choose only one such $j$, since this determines $x$. For the other indices $i$, we set $y_i=v_i=z_i=0$. The first equation then determines $v_j$. The third type is obtained by choosing a set $\mathcal{J}$ of indices, such that, for $i\in\mathcal{J}$, we have $z_i\not=0$. This readily determines $y_i$ and $v_i$. For $i\not\in\mathcal{J}$, we have $y_i=v_i=z_i=0$. The first equation, then, determines $x$. Finally, the last family of equilibria is found by taking a set of indices $\mathcal{J}$, as in the equilibria of the third type, and then choosing an index $j'\not\in\mathcal{J}$ as in the second type. Again, only one such $j'$ can be chosen.
\end{proof}
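\vskip1mm
For the count in Lemma~\ref{lem:eq}, note that the equilibria are indexed by the admissible pairs $(j,\mathcal{J})$: the choice $j=0$ allows all $2^n$ subsets $\mathcal{J}\subseteq\mathcal{N}$, while each of the $n$ choices $j\in\mathcal{N}$ allows the $2^{n-1}$ subsets $\mathcal{J}\subseteq\mathcal{N}\setminus\{j\}$, for a total of
\[
2^n+n\,2^{n-1}=2^{n-1}(2+n).
\]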
\subsection{Stability analysis}
We are now ready to study the global stability of the equilibria of System~\eqref{our:antigenic}. Surprisingly, although there is a large number of equilibria, only four of them will be globally stable. In what follows, unless otherwise said, we shall assume that $R_0^{i}>R_0^{i+1}$, for $i=1,\ldots,n-1$, and that the set of strong responders is consistent.
\begin{thm}
For system \eqref{our:antigenic}, defined on $\O$, and with initial condition in its interior, there is always a globally asymptotically stable equilibrium
given as follows:
\begin{enumerate}
\item $X_{0,\emptyset}$, if $R_0^1\leq 1$;
\item $X_{1,\emptyset}$, if $1<R_0^1$, and $P_0^1\leq1$.
\item If $P_0^1>1$, let $\mathcal{J}$ be the maximal antigenic set. Then
\begin{enumerate}
\item If $\mathcal{J}$ is a stable antigenic set, then the equilibrium $X_{0,\mathcal{J}}$ is globally asymptotically stable.
\item Otherwise, let $j'$ be the smallest integer such that
$j'\not\in\mathcal{J}$, which exists by virtue of Lemma~\ref{lem:sets}. Then the equilibrium $X_{j',\mathcal{J}}$ is globally asymptotically stable.
\end{enumerate}
\end{enumerate}
\label{thm:mutation}
\end{thm}
\begin{proof}[Proof of Theorem~\ref{thm:mutation}]
Following \citeasnoun{Korobeinikov:2004}, we shall use the following Lyapunov function:
\begin{align*}
V(x,\mathbf{y},\mathbf{v},\mathbf{z})&=x-x^*\ln(x/x^*) + \sum_{i=1}^n\left(y_i-y_i^*\ln(y_i/y_i^*)\right) +\\
& \qquad + \sum_{i=1}^nC_i\left(v_i-v_i^*\ln(v_i/v_i^*)\right)
+ p\sum_{i=1}^n\frac{1}{c_i}\left(z_i-z_i^*\ln(z_i/z_i^*)\right),
\end{align*}
where the $C_i$ are constants to be specified later on.
Then, using the uniform notation for the equilibria of \eqref{our:antigenic} that is set in Lemma~\ref{lem:eq} (see \S3.1), we have that
\begin{align}
\dot{V}=& \frac{d}{dt}V(x(t),\mathbf{y}(t),\mathbf{v}(t),\mathbf{z}(t))\nonumber\\
=&\lambda\left[1 + Q_{j,\mathcal{J}}^x+ \sum_{i=1}^nQ_{j,\mathcal{J}}^{y_i} + \frac{d}{\lambda}\sum_{i=1}^nC_i\frac{u_i}{\beta_i}Q_{j,\mathcal{J}}^{v_i}+ \sum_{i=1}^n\frac{Q_{j,\mathcal{J}}^{z_i}}{I_0^i}\right]-\left[dx+\frac{\lambda^2 Q_{j,\mathcal{J}}^x}{dx}\right]-\nonumber\\
&-\lambda\sum_{i=1}^n\frac{\beta_i}{a_i}Q_{j,\mathcal{J}}^{y_i}\frac{xv_i}{y_i} - dk\sum_{i=1}^n\frac{C_i}{\beta_i}Q_{j,\mathcal{J}}^{v_i}\frac{y_i}{v_i}
+ \sum_{i=1}^ny_i\left[kC_i-a_i-a_iQ_{j,\mathcal{J}}^{z_i}\right] +\label{master}\\
& \frac{\lambda}{d}Q_{j,\mathcal{J}}^x\sum_{i=1}^nv_i\beta_i -\sum_{i=1}^nC_iu_iv_i
+ p\lambda\sum_{i=1}^na_iz_i\left[Q_{j,\mathcal{J}}^{y_i}-\frac{1}{I_0^i}\right].\nonumber
\end{align}
For the first two equilibria, we take $C_i=a_i/k$ in the Lyapunov function $V$.
Then, on using the structure of the equilibria of \eqref{our:antigenic}, we may write $\dot{V}$ as follows:
\begin{align*}
\dot{V}=& \lambda\left[1+Q^x_{j,\mathcal{J}} + \sum_{i=1}^nQ^{y_i}_{j,\mathcal{J}}+\sum_{i=1}^n\frac{Q^{v_i}_{j,\mathcal{J}}}{R_0^i}+\sum_{i=1}^n\frac{Q^{z_i}_{j,\mathcal{J}}}{I_0^i}\right]-\left[dx+\frac{\lambda^2 Q^x_{j,\mathcal{J}}}{dx}\right]-\\
&\quad -\lambda\sum_{i=1}^n\frac{\beta_i}{a_i} Q^{y_i}_{j,\mathcal{J}}\frac{xv_i}{y_i}-
d\sum_{i=1}^n\frac{a_i}{\beta_i}Q^{v_i}_{j,\mathcal{J}}\frac{y_i}{v_i}-\sum_{i=1}^na_iQ^{z_i}_{j,\mathcal{J}}y_i +\\
&\quad +
\frac{\lambda}{d}\sum_{i=1}^n\beta_iv_i\left[Q^x_{j,\mathcal{J}}-\frac{1}{R_0^i}\right]+p\lambda\sum_{i=1}^n\frac{z_i}{a_i}\left[Q^{y_i}_{j,\mathcal{J}}-\frac{1}{I_0^i}\right].
\end{align*}
For $X_{0,\emptyset}$, we find, using Lemma~\ref{lem:eq}, that
\[
\dot{V}=2\lambda-\left[dx+\frac{\lambda^2}{dx}\right]+\frac{\lambda}{d}\sum_{i=1}^n\beta_iv_i\left[1-\frac{1}{R_0^i}\right]-p\lambda\sum_{i=1}^n\frac{z_i}{a_iI_0^i}.
\]
Since $R_0^i\leq1$, for $i=1,\dots,n$, and
\[
dx+\frac{\lambda^2}{dx}\geq 2\lambda,
\]
we have that $\dot{V}<0$ in $\O^+$. Hence $X_{0,\emptyset}$ is globally asymptotically stable in this case.
Now, suppose that $1<R_0^1$, and that $P_0^1\leq1$.
In this case, Lemma~\ref{lem:eq} yields that
\begin{align*}
\dot{V}=& \lambda\left[3\left(1-\frac{1}{R_0^1}\right)+\frac{2}{R_0^1} \right]-\left[dx+\frac{\lambda^2}{R_0^1dx}\right]-
\frac{\lambda}{a_1}\beta_1 \left(1-\frac{1}{R_0^1}\right)\frac{xv_1}{y_1}-
\frac{a_1d}{\beta_1}(R_0^1-1)\frac{y_1}{v_1} +\\
&\quad +
\frac{\lambda}{d}\sum_{i=1}^n\beta_iv_i\left[\frac{1}{R_0^1}-\frac{1}{R_0^i}\right]+p\lambda\frac{z_1}{a_1}\left[1-\frac{1}{R_0^1}-\frac{1}{I_0^1}\right]-p\lambda\sum_{i=2}^n\frac{z_i}{a_iI_0^i}.
\end{align*}
The last term in $\dot{V}$ is clearly negative. We also observe that, since $R_0^{i}<R_0^{1}$, for $1 < i\leq n$, we have that
\[
\sum_{i=1}^n\beta_iv_i\left[\frac{1}{R_0^1}-\frac{1}{R_0^i}\right]<0.
\]
Also, since $P_0^1\leq1$, we have that $R_0^1 \leq 1+ R_0^1/I_0^1$. Therefore, the last three terms in the expression for $\dot{V}$ are nonpositive.
For the remaining terms, let us write
\[
\frac{\lambda^2}{R_0^1}=\left(\frac{\lambda}{R_0^1}\right)^2 + \left(1-\frac{1}{R_0^1}\right)\frac{\lambda^2}{R_0^1}.
\]
Then we have that
\[
dx+\frac{\lambda^2}{(R_0^1)^2dx}\geq 2\frac{\lambda}{R_0^1},
\]
and that
\[
\frac{\lambda^2}{R_0^1dx}\left(1-\frac{1}{R_0^1}\right) +\lambda\frac{\beta_1}{a_1}\left(1-\frac{1}{R_0^1}\right)\frac{xv_1}{y_1}+\frac{da_1}{\beta_1}(R_0^1-1)\frac{y_1}{v_1}\geq 3\lambda\left(1-\frac{1}{R_0^1}\right).
\]
Thus $\dot{V}<0$ in $\O^+$, and hence we have that $X_{1,\emptyset}$ is a globally asymptotically stable equilibrium.
Finally, if $P_0^1>1$, then let $\mathcal{J}$ be the maximal antigenic set. First, we assume that $\mathcal{J}$ is stable antigenic and show that $X_{0,\mathcal{J}}$ is globally asymptotically stable. In this case, we take $C_i=x^*\beta_i/u_i$ in the Lyapunov function $V$.
Using Lemma~\ref{lem:eq}, this can be further recast as $\dot{V}=\dot{V_1}+\dot{V_2}$, where
\begin{align*}
\dot{V_1}=&\lambda\left[3 - \frac{1}{1+\rho_0^{\cj}}\right]-\left[dx+\frac{\lambda^2 }{dx\left(1+\rho_0^{\cj}\right)}\right]
- \lambda\sum_{i\in\mathcal{J}}\frac{\beta_i}{I_0^ia_i} \frac{xv_i}{y_i}
-\frac{\lambda k}{1+\rho_0^{\cj}}\sum_{i\in\mathcal{J}} \frac{R_0^i}{u_iI_0^i}\frac{y_i}{v_i} ;\\
\dot{V_2}=&\sum_{i\not\in\mathcal{J}}a_iy_i\left[\frac{R_0^i}{1+\rho_0^{\cj}}-1\right] -p\lambda\sum_{i\not\in\mathcal{J}}\frac{z_i}{a_iI_0^i}.
\end{align*}
We treat $\dot{V_2}$ first. The last term is clearly negative. Also, for $i\not\in\mathcal{J}$, we have that
\[
\frac{R_0^i}{1+\rho_0^{\cj}}<1\quad\text{and thus that}\quad \sum_{i\not\in\mathcal{J}}y_i\left[\frac{R_0^i}{1+\rho_0^{\cj}}-1\right]<0.
\]
Therefore, $\dot{V_2}<0$ when $\mathcal{J}\not=\mathcal{N}$.
Let
\[
\eta=\frac{\rho_0^{\cj}}{1+\rho_0^{\cj}}\quad\text{and}\quad
\eta_i=\frac{\frac{R_0^i}{I_0^i}}{1+\rho_0^{\cj}},\quad i\in\mathcal{J}.
\]
Then, we may write $\dot{V_1}$ as
\begin{align*}
\dot{V_1}&=\lambda\left[3 - \frac{1}{1+\rho_0^{\cj}}\right]-\left[dx+\frac{ \lambda^2}{dx\left(1+\rho_0^{\cj}\right)^2}\right]
-\\
&\qquad - \sum_{i\in\mathcal{J}}\frac{\lambda^2\eta_i}{dx\left(1+\rho_0^{\cj}\right)}
- \lambda\sum_{i\in\mathcal{J}}\frac{\beta_i}{I_0^ia_i} \frac{xv_i}{y_i}
-\frac{\lambda k}{1+\rho_0^{\cj}}\sum_{i\in\mathcal{J}} \frac{R_0^i}{u_iI_0^i}\frac{y_i}{v_i}
\end{align*}
For each $i\in\mathcal{J}$, we have
\[
-\frac{\lambda^2\frac{R_0^i}{I_0^i}}{dx\left(1+\rho_0^{\cj}\right)^2}
- \lambda\frac{\beta_i}{I_0^ia_i} \frac{xv_i}{y_i}
-\frac{\lambda k}{1+\rho_0^{\cj}}\frac{R_0^i}{u_iI_0^i}\frac{y_i}{v_i}\leq
-3\lambda\left(\frac{\frac{R_0^i}{I_0^i}}{1+\rho_0^{\cj}}\right),
\]
and that
\[
dx+\frac{\lambda^2}{dx\left(1+\rho_0^{\cj}\right)^2}\geq 2\lambda\frac{1}{1+\rho_0^{\cj}}.
\]
After combining these estimates and summing for $i\in\mathcal{J}$, we get that $\dot{V_1}\leq0$
and thus we have the result if $\mathcal{J}$ is a proper subset of $\mathcal{N}$. In the case that $\mathcal{J}=\mathcal{N}$, we have $\dot{V}\leq0$, with equality occurring only when
\[
x=\frac{\lambda}{d}Q_{0,\mathcal{J}}^x\quad\text{and}\quad \frac{v_i}{y_i}=\frac{k}{u_i}.
\]
Inasmuch as this set contains no invariant set of the corresponding flow other than the point $X_{0,\mathcal{J}}$, we have global stability as a consequence of LaSalle's invariance principle~\cite{LaSalle:1964}.
For the remaining case, namely case 3(b) of the theorem, we use a mix of the two Lyapunov functions above:
\begin{align*}
V(x,\mathbf{y},\mathbf{v},\mathbf{z})=&x-x^*\ln(x/x^*) + \sum_{i=1}^n\left(y_i-y_i^*\ln(y_i/y_i^*)\right) + \\
&\qquad +x^*\sum_{\sti{i=1}{i\not=j'}}^n\frac{\beta_i}{u_i}\left(v_i-v_i^*\ln(v_i/v_i^*)\right)
+ p\sum_{i=1}^n\frac{1}{c_i}\left(z_i-z_i^*\ln(z_i/z_i^*)\right)+\\
&\qquad + \frac{a_{j'}}{k}\left(v_{j'}-v_{j'}^*\ln(v_{j'}/v_{j'}^*)\right).
\end{align*}
Computing $\dot{V}$ and using the uniform notation, we find that
\begin{align*}
\dot{V}=&\lambda\left[1 + Q_{j',\mathcal{J}}^x+\sum_{i=1}^nQ_{j',\mathcal{J}}^{y_i} + Q_{j',\mathcal{J}}^x\sum_{\sti{i=1}{i\not=j'}}^nQ_{j',\mathcal{J}}^{v_i} + \sum_{i=1}^n \frac{Q_{j',\mathcal{J}}^{z_i}}{I_0^i} + \frac{Q^{v_{j'}}_{j',\mathcal{J}}}{R_0^{j'}} \right]-\left[dx+\frac{\lambda^2 Q_{j',\mathcal{J}}^x}{dx}\right]-\\
&\qquad - \lambda\sum_{i=1}^n\frac{\beta_i Q_{j',\mathcal{J}}^{y_i}}{a_i} \frac{xv_i}{y_i}
-\lambda kQ_{j',\mathcal{J}}^x\sum_{\sti{i=1}{i\not=j'}}^n\frac{Q_{j',\mathcal{J}}^{v_i}}{u_i}\frac{y_i}{v_i} + \sum_{\sti{i=1}{i\not=j'}}^ny_i\left[\frac{\beta_i \lambda k}{du_i}Q_{j',\mathcal{J}}^x-a_i-a_iQ_{j',\mathcal{J}}^{z_i}\right]+\\ &\qquad +p\lambda\sum_{i=1}^n\frac{z_i}{a_i}\left[Q_{j',\mathcal{J}}^{y_i}-\frac{1}{I_0^i}\right] -
\frac{da_{j'}}{\beta_{j'}}Q^{v_{j'}}_{j',\mathcal{J}}\frac{y_{j'}}{v_{j'}} - a_{j'}Q^{z_{j'}}_{j',\mathcal{J}}y_{j'} +
\frac{\lambda\beta_{j'}}{d}v_{j'}\left[Q^x_{j',\mathcal{J}}-\frac{1}{R_0^{j'}}\right]
\end{align*}
On using Lemma~\ref{lem:eq}, and that
\[
1=\left(1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}}\right)+\frac{1}{R_0^{j'}}+\frac{\rho_0^{\cj}}{R_0^{j'}},
\]
where each term in the sum is positive, but smaller than one, we rewrite it as $ \dot{V}=\dot{V_1}+\dot{V_2} + \dot{V_3} + \dot{V_4}$, where
\begin{align*}
\dot{V_1}&=\left(1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}}\right)\left[3\lambda-\frac{\lambda^2}{dxR_0^{j'}}-\frac{\lambda\beta_{j'}}{a_{j'}}\frac{xv_{j'}}{y_{j'}}-\frac{da_{j'}R_0^{j'}}{\beta_{j'}}\frac{y_{j'}}{v_{j'}}\right],\\
\dot{V_2}&=\frac{2\lambda}{R_0^{j'}}-\left[dx+\frac{\lambda^2}{dx(R_0^{j'})^2}\right],\\
\dot{V_3}&=3\lambda\frac{\rho_0^{\cj}}{R_0^{j'}}-\frac{\lambda^2\rho_0^{\cj}}{dx(R_0^{j'})^2}-\lambda\sum_{i\in\mathcal{J}}\frac{\beta_i}{a_iI_0^i}\frac{xv_i}{y_i}-\frac{\lambda k}{R_0^{j'}}\sum_{i\in\mathcal{J}}\frac{R_0^i}{u_iI_0^i}\frac{y_i}{v_i},\\
\dot{V_4}&=\frac{p\lambda z_{j'}}{a_{j'}}\left(1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}}-\frac{1}{I_0^{j'}}\right).
\end{align*}
The terms $\dot{V_1}$, $\dot{V_2}$ and $\dot{V_3}$ can be treated similarly to the previous cases and are all nonpositive in the interior.
As for $\dot{V_4}$, first we observe that
\[
1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}}-\frac{1}{I_0^{j'}}=\frac{1}{R_0^{j'}}\left(R_0^{j'}-1-\rho_0^{\cj}-\frac{R_0^{j'}}{I_0^{j'}}\right).
\]
If $j'\not\in\mathcal{S}$, then we have that
\[
R_0^{j'}-1-\rho_0^{\cj}-\frac{R_0^{j'}}{I_0^{j'}}<R_0^{j'}-1-\frac{R_0^{j'}}{I_0^{j'}}\leq0.
\]
If $j'\in\mathcal{S}$, then let $\mathcal{J}'=\mathcal{J}\cup\{j'\}$. Then we have that
\[
1-\frac{1}{R_0^{j'}}-\frac{\rho_0^{\cj}}{R_0^{j'}}-\frac{1}{I_0^{j'}}=\frac{1}{R_0^{j'}}\left(R_0^{j'}-1 -\rho_0^{\mathcal{J}'}\right).
\]
Since $\mathcal{J}$ is the maximal antigenic set, $\mathcal{J}'$ cannot be an antigenic set, and by the ordering of the strains this forces
\[
R_0^{j'}-1 -\rho_0^{\mathcal{J}'}\leq0.
\]
Hence $\dot{V_4}\leq0$ in all cases, and the global asymptotic stability of $X_{j',\mathcal{J}}$ follows as before.
\end{proof}
\subsection{Nonunique Fitness}
Given the non-generic nature of this case, we shall only briefly discuss the stability when some of the strains have the same fitness, i.e., there exists at least one index set $\Gamma$, such that $R_0^i=R_0^j$, for $i,j\in\Gamma$. Notice that, in this case, we have non-isolated equilibria.
We start by observing that the computation with the Lyapunov function for the equilibrium $X_{0,\emptyset}$ does not depend on the uniqueness of fitness. Hence we have
\begin{cor}
If $R_0^i\leq1$, for $i=1,\ldots,n$, then $X_{0,\emptyset}$ is a globally asymptotically stable equilibrium.
\end{cor}
The case $I_0^i=I_0$, $1<R_0^1$ and $P_0^1<1$ can also be partially treated; here, as in the restricted model of \citeasnoun{Nowak:Bangham:1996}, we take $a_i=a$ and $u_i=u$:
\begin{prop}
\label{prop:ls}
Let $\Gamma$ be the set of indices $i\in\mathcal{N}$ such that $R_0^i=R_0^1$. Let $E_\Gamma$ be the set defined by
\[
x^*=\frac{\lambda}{dR_0^1},\quad y_j=v_j=0,\quad j\not\in\Gamma,\quad \sum_{j\in\Gamma}\beta_jv_j=\frac{\lambda-dx^*}{x^*}, \quad v_j\geq0,
\]
\[
v_i=\frac{k}{u}y_i,\quad\text{and}\quad z_i=0, \quad i\in\mathcal{N}.
\]
If $\mathbf{x}(t,\mathbf{x}_0)$ is a solution of \eqref{nb:antigenic}, with initial condition $\mathbf{x}_0$, then
\[
\mathbf{x}(t)\to E_\Gamma,\quad\text{as}\quad t\to\infty.
\]
\end{prop}
\begin{proof}
Let us denote the omega-limit set of $\mathbf{x}(t,\mathbf{x}_0)$ by $\Omega(\mathbf{x_0})$. As shown in \cite{Pastore:2005}, the solutions to \eqref{nb:antigenic} are bounded in $\mathbb{R}_+^{3n+1}$.
Hence, $\Omega(\mathbf{x_0})$ is compact.
Using the same Lyapunov function as for $X_{1,\emptyset}$ in Section 3.2, we find that
\begin{align*}
\dot{V}=& \lambda\left[3-\frac{1}{R_0^1} \right]-\left[dx+\frac{\lambda^2}{R_0^1dx}\right]-
\frac{\lambda}{a}\beta_1 \left(1-\frac{1}{R_0^1}\right)\frac{xv_1}{y_1}-
\frac{ad}{\beta_1}(R_0^1-1)\frac{y_1}{v_1} +\\
&\quad +
\frac{\lambda}{d}\sum_{i\not\in\Gamma}\beta_iv_i\left[\frac{1}{R_0^1}-\frac{1}{R_0^i}\right]+\frac{p\lambda}{a}z_1\left[1-\frac{1}{R_0^1}-\frac{1}{I_0}\right]-\frac{p\lambda}{aI_0}\sum_{i=2}^nz_i.
\end{align*}
The same calculations as in Section 3.2 show that $\dot{V}\leq0$. However, notice that $\dot{V}=0$ on $E_\Gamma$. If $s\in\mathbb{R}^N$ and $S\subset\mathbb{R}^N$ is closed, then let
\[
d(s,S)=\min_{s'\in S}\|s-s'\|.
\]
LaSalle's invariance principle then yields that
\[
\lim_{t\to\infty}d(\mathbf{x}(t),E_\Gamma)=0.
\]
\end{proof}
\begin{cor}
If $R_0^1>R_0^i$ for $i=2,\ldots,n$, then $X_{1,\emptyset}$ is a globally asymptotically stable equilibrium whenever $1<R_0^1\leq1+R_0^1/I_0$.
\end{cor}
We have performed numerical calculations of \eqref{nb:antigenic}, using a high order Runge-Kutta method, which suggest that, in the case treated by Proposition~\ref{prop:ls}, a solution of System~\eqref{nb:antigenic} converges to a unique equilibrium point in $E_\Gamma$ that depends only on the initial condition.
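For the interested reader, we sketch the kind of numerical experiment we have in mind, here for the rescaled system \eqref{our:antigenic} with $n=2$ strains of equal fitness. The parameter values below are hypothetical and serve only as an illustration; for them, $R_0^1=R_0^2=4$, $I_0=0.1$ and $P_0^1<1$, so that the regime of Proposition~\ref{prop:ls} applies.
\begin{verbatim}
# Sketch of a simulation of the rescaled system (5) with two strains of
# equal fitness; all parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

n = 2
lam, d, k, p = 1.0, 0.1, 10.0, 1.0
beta = np.array([0.1, 0.1])            # equal fitness: R0^1 = R0^2 = 4
a = np.array([0.5, 0.5]); u = np.array([5.0, 5.0])
c = np.array([0.05, 0.05]); b = np.array([1.0, 1.0])

def rhs(t, w):
    x, y, v, z = w[0], w[1:n+1], w[n+1:2*n+1], w[2*n+1:]
    dx = lam - d*x - x*np.dot(beta, v)
    dy = beta*x*v - a*y - p*y*z
    dv = k*y - u*v
    dz = c*y*z - b*z
    return np.concatenate(([dx], dy, dv, dz))

w0 = np.concatenate(([lam/d], [0.2, 0.05], [0.1, 0.1], [0.01, 0.01]))
sol = solve_ivp(rhs, (0.0, 5000.0), w0, rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # limit point in E_Gamma; varies with w0
\end{verbatim}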
Finally, when the viable set of strains is not the full antigenic variation, we have
\begin{prop}
Assume that $I_0^i=I_0$ and $P_0^1>1$, and that $\mathcal{J}\not=\mathcal{N}$ is a stable antigenic set. Then $X_{0,\mathcal{J}}$ is a globally asymptotically stable equilibrium.
\end{prop}
\begin{proof}
In this case, the estimate $\dot{V}_1\leq0$ remains valid. Moreover, if $\mathcal{J}\not=\mathcal{N}$, then we must have $\dot{V}_2<0$, which yields the result.
\end{proof}
\section{Conclusions} \label{conclusions}
In this work, we have performed a thorough study of System~\eqref{nb:antigenic}.
As a preliminary result, we have shown that, when both the virus production rate and the CTL interaction rate are nonzero for all strains, System~\eqref{nb:antigenic} is dynamically equivalent to System~\eqref{our:antigenic}, which has strain-independent virus production and CTL interaction rates. In particular, the precise nonzero values of the CTL interaction rates are completely irrelevant to the dynamical behaviour of the system. This seems to suggest that a more refined model, able to capture this difference, is needed.
We have also identified all the $2^{n-1}(2+n)$ equilibria of \eqref{our:antigenic} in Lemma~\ref{lem:eq}. When $n$ is large this can be quite a large number. Nevertheless, under the hypothesis of unique fitness, we were able to show that only four of them are dynamically relevant. More precisely, we assume that the virus basic reproduction numbers $R_0^i$ are distinct and that the CTL reproduction numbers $P_0^i$ have the same ordering as the $R_0^i$. This last condition is automatically satisfied if $I_0^i=I_0$ for all $i$. In this case, Theorem~\ref{thm:mutation} shows that, if the largest reproduction number, $R_0^1$, is smaller than one, then the disease-free equilibrium---$X_{0,\emptyset}$ in the notation of Lemma~\ref{lem:eq}---is globally asymptotically stable. In this case, no strain is viable and the infection dies out. On the other hand, if $R_0^1>1$, but $P_0^1<1$, then only the first strain survives, and the infection persists. If $P_0^1>1$, then two outcomes are possible: either a (unique) stable antigenic set $\mathcal{J}$ exists, and then $X_{0,\mathcal{J}}$ is globally asymptotically stable; in this case, the set $\mathcal{J}$ determines the antigenic diversity. In other words, a strong immune response generates a larger antigenic variation. Alternatively, there exists a pair $(j',\mathcal{J})$ such that the point $X_{j',\mathcal{J}}$ is globally asymptotically stable; in this case, the strain with the weakest fitness among the surviving ones will not actually trigger the CTL response at all in the long run.
In the case of absence of antigenic variation, i.e.\ System~\eqref{nb:immune}, only the first outcome is possible. We were unable to give a biological interpretation of the combinatorial conditions for the existence of a stable antigenic set, and we believe that this should be addressed in the future.
The results presented in Theorem~\ref{thm:mutation} show rigorously some of the inferences that have already been made in \citeasnoun{Nowak:Bangham:1996} based on extensive simulations of System~\eqref{nb:antigenic}.
We have also shown some results for very special cases in which the $R_0^i$s are not distinct. In these cases, the equilibria are not isolated, which complicates matters further. Also, we have not addressed the case in which the set of strong responders is not consistent, and this might also merit further study in the future.
\bibliographystyle{harvard}
|
1,116,691,500,349 | arxiv | \section{Introduction}
In recent years, Basis light front quantization (BLFQ) has emerged as one of the most
promising nonperturbative approaches developed for solving many-body bound-state problems in quantum field theories~\cite{Vary:2009gt,Zhao:2014xaa,Wiecki:2014,Li:2015zda}. It is based on the Hamiltonian formalism and incorporates the advantages of light-front dynamics \cite{Brodsky:1997de,Hiller:2016itl}. This formalism has been successfully applied to QED systems, including the electron anomalous magnetic moment~\cite{Zhao:2014xaa} and the strong-coupling bound-state positronium problem \cite{Wiecki:2014}. It has also been applied to heavy quarkonia~\cite{Li:2015zda,Li:2017mlw,Lan:2019img}, heavy-light mesons~\cite{Tang:2018myz}, light mesons~\cite{Jia:2018ary,Lan:2019vui,Lan:2019rba}, and the proton \cite{Xu:2019xhk} as QCD bound states.
Here, we employ an effective Hamiltonian that includes the holographic QCD confinement potential~\cite{Brodsky:2014yha} supplemented by the longitudinal confinement~\cite{Li:2015zda,Li:2017mlw} along with the one-gluon exchange (OGE) interaction with fixed coupling constant~\cite{Li:2015zda} to account for the dynamical spin effects. By solving its mass eigenstates in BLFQ, we generate the light-front wavefunctions (LFWFs) for the nucleon in the valence quark Fock space.
By fitting the model parameters---the quark masses, the confining strength, and the coupling constant---we obtain high-quality descriptions of the electromagnetic form factors (FFs), radii, and parton distribution functions (PDFs) of the proton.
\section{Effective Hamiltonian and nucleon wavefunctions}
The LFWFs are obtained as the eigenfunctions of the light-front eigenvalue equation:
$
H_{\mathrm{eff}}\vert \Psi\rangle=M^2\vert \Psi\rangle,
$
where $M$ is the mass of the bound state. We write the LF effective Hamiltonian $H_{\rm eff}$ in the leading Fock representation as \cite{Xu:2019xhk}
\begin{eqnarray}
H_{\rm eff}&=&\sum_a \frac{{\vec p}_{\perp a}^2+m_{a}^2}{x_a}+\frac{1}{2}\sum_{a\ne b}\kappa^4 \Big[x_ax_b({ \vec r}_{\perp a}-{ \vec r}_{\perp b})^2
-\frac{\partial_{x_a}(x_a x_b\partial_{x_b})}{(m_{a}+m_{b})^2}\Big]
\nonumber\\&&+\frac{1}{2}\sum_{a\ne b} \frac{C_F 4\pi \alpha_s}{Q^2_{ab}} \bar{u}_{s'_a}(k'_a)\gamma^\mu{u}_{s_a}(k_a)\bar{u}_{s'_b}(k'_b)\gamma^\nu{u}_{s_b}(k_b)g_{\mu\nu},
\label{hami}\end{eqnarray}
where $\sum_a x_a=1$, and $\sum_a {\vec p}_{\perp a}=0$. $m_{a/b}$ is the quark mass, and $\kappa$ is the confining strength. $x_a$ represents the LF momentum fraction carried by quark $a$. Meanwhile, $\vec{p}_\perp$ is the relative transverse momentum, while $\vec{r}_\perp={ \vec r}_{\perp a}-{ \vec r}_{\perp b}$, related to the holographic variable~\cite{Brodsky:2014yha}, is the transverse separation between two quarks. The last term in Eq.~(\ref{hami}) represents the OGE interaction~\cite{Li:2015zda}.
Following BLFQ, we expand the nucleon state in terms of the two-dimensional harmonic oscillator (`2D-HO') basis in the transverse direction and the discretized plane-wave basis in the longitudinal direction \cite{Vary:2009gt,Zhao:2014xaa}. Each single-quark basis state is identified by four quantum numbers, $\bar \alpha = \{k,n,m,\lambda\}$.
For a given quark $i$, the longitudinal momentum fraction $x$ is defined as
$
x_i=p_i^+/P^+=k_i/K,
$ where $K=\sum_i k_i$. We adopt antiperiodic boundary conditions so $k_i$ are positive half-odd integers.
The quantum numbers, $n$ and $m$, denote radial excitation and angular momentum projection, respectively, of the particle within the 2D-HO basis, $\phi_{nm}(\vec{p}_\perp)$ \cite{Vary:2009gt,Zhao:2014xaa}.
The 2D-HO basis should form an efficient basis for systems subject to QCD confinement. For the quark spin, $\lambda$ is used to label the helicity. Our multi-body basis states have fixed values of the total angular momentum projection
$
M_J=\sum_i\left(m_i+\lambda_i\right).
$
The valence wavefunction in momentum space is then given by \cite{Xu:2019xhk}
\begin{eqnarray}\Psi^{M_J}_{\{x_i,\vec{p}_{\perp i},\lambda_i\}}&=&\sum_{\{n_im_i\}}\Bigg\{\psi(\{\bar{\alpha}_i\})\prod_{i=1}^{3}\frac{1}{b}\left(\frac{|\vec{p}_{\perp i}|}{b}\right)^{|m_i|}\nonumber\\
&\times &\sqrt{\frac{4\pi\times n_i!}{(n_i+|m_i|)!}}e^{i m_i\theta_i} L_{n_i}^{|m_i|}\left(\frac{\vec{p}_{\perp i}^2}{b^2}\right)\exp{\left(-\frac{\vec{p}_{\perp i}^2}{2b^2}\right)}\Bigg\},\label{eq:psi_rs_basis_expansions}
\end{eqnarray}
where $\psi(\{\bar{\alpha}_i\})$ is the LFWF in the BLFQ basis, obtained by diagonalizing Eq.~(\ref{hami}) numerically. $b=0.6$ GeV is the HO scale parameter and $\tan(\theta)=p_2/p_1$. Here $L_n^{|m|}$ is the associated Laguerre polynomial.
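For illustration, the single-particle factor appearing in the expansion above can be evaluated directly; the following minimal Python sketch (our own helper, not part of the BLFQ code base) uses the scale $b=0.6$ GeV quoted above.
\begin{verbatim}
import math
import numpy as np
from scipy.special import genlaguerre

b = 0.6  # GeV, the HO scale parameter quoted above

def phi_nm(n, m, p, theta):
    """2D-HO basis function; p = |p_perp| in GeV, theta its azimuth."""
    norm = (1.0 / b) * math.sqrt(4.0 * math.pi * math.factorial(n)
                                 / math.factorial(n + abs(m)))
    radial = ((p / b) ** abs(m)
              * genlaguerre(n, abs(m))(p ** 2 / b ** 2)
              * math.exp(-p ** 2 / (2.0 * b ** 2)))
    return norm * radial * np.exp(1j * m * theta)

print(phi_nm(1, -1, 0.25, 0.3))
\end{verbatim}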
We truncate the infinite basis by introducing the longitudinal regulator $K_{\rm max}$ such that $\sum_i k_i = K_{\rm max}$. In the transverse direction, we also truncate by limiting $N_\alpha=\sum_i (2n_i+| m_i |+1)$ for each multi-particle basis state to $N_\alpha \le N_{\rm{max}}$.
The basis truncation corresponds to a UV regulator $\Lambda_{\rm UV}\sim b\sqrt{N_{\rm max}}$. Since we are modeling the proton at a low-resolution scale, we select $N_{\rm max}=10$ and $K_{\rm max}=16.5$. To approximately simulate the effect of higher Fock sectors and the other QCD interactions, we use different quark masses in the kinetic energy, $m_{\rm q/KE}$, and in the OGE interaction, $m_{\rm q/OGE}$. We set our parameters $\{m_{\rm q/KE},~m_{\rm q/OGE},~\kappa,~\alpha_s\}=\{0.3~{\rm GeV},~0.2~{\rm GeV},~0.34~{\rm GeV},~1.1\}$ to fit the proton mass and the flavor Dirac FFs \cite{Cates:2011pz,Qattan:2012zf,Diehl:2013xca,Mondal:2015uha,Mondal:2016xpk}. For numerical convenience, we use a small gluon mass regulator ($\mu_g=0.05$ GeV) in the OGE interaction. We find that our results are insensitive to $\mu_g$ in the range $0.01~{\rm GeV}<\mu_g<0.08~{\rm GeV}$.
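The following minimal sketch indicates how the truncated single-quark basis can be enumerated; note that the full many-body basis additionally imposes $\sum_i (2n_i+|m_i|+1)\le N_{\rm max}$, $\sum_i k_i = K_{\rm max}$, and a fixed $M_J$ jointly across the three quarks, which this sketch does not enforce.
\begin{verbatim}
# Enumerate single-quark states |k, n, m, lambda> compatible with the
# truncations above; k is a positive half-odd integer, and each of the
# other two quarks must retain at least k = 0.5.
N_max, K_max = 10, 16.5

states = []
k = 0.5
while k <= K_max - 1.0:
    for n in range(N_max):
        for m in range(-N_max, N_max + 1):
            if 2 * n + abs(m) + 1 <= N_max:
                for lam in (-0.5, 0.5):
                    states.append((k, n, m, lam))
    k += 1.0

print(len(states))
\end{verbatim}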
\section{Form Factors and PDFs of the nucleon}
In the LF formalism, the flavor Dirac $F^q_1(Q^2)$ and Pauli $F^q_2(Q^2)$ FFs in the proton can be expressed in terms of overlap integrals as \cite{BDH}
\begin{eqnarray}
F_1^q(Q^2)=
\int_D \Psi^{\uparrow*}_{\{x^{\prime}_i,
\vec{p}^{\prime}_{\perp i},\lambda_i\}} \Psi^{\uparrow}_{\{x_i,
\vec{p}_{\perp i},\lambda_i\}}, \quad
F_2^q(Q^2)= -\frac{2 M }{(q^1-iq^2)}
\int_D \Psi^{\uparrow*}_{\{x^{\prime}_i,
\vec{p}^{\prime}_{\perp i},\lambda_i\}} \Psi^{\downarrow}_{\{x_i,
\vec{p}_{\perp i},\lambda_i\}},
\end{eqnarray}
with
$\int_D\equiv \sum_{\lambda_i}\int \prod_i[{dx d^2
\vec{p}_{\perp}\over 16 \pi^3}]_i 16 \pi^3 \delta\left(1-\sum x_j\right)
\delta^2\left(\sum \vec{p}_{\perp j}\right).
$
For the struck quark of flavor $q$,
${x'}_1=x_1;
~{\vec{p}'}_{\perp 1}=\vec{p}_{\perp 1}+(1-x_1) \vec{q}_\perp$ and ${x'}_i={x_i}; ~{\vec{p}'}_{\perp i}=\vec{p}_{\perp i}-{x_i} \vec{q}_\perp$ for the spectators ($i=2,3$). We consider the frame where the momentum transfer $q=(0,0,\vec{q}_{\perp})$, thus $Q^2=-q^2=\vec{q}_{\perp}^2$.
The nucleon Sachs form factors are written in terms of the Dirac and Pauli form factors,
\begin{eqnarray}
G_E^N(Q^2)= F_1^{N}(Q^2) - \frac{Q^2}{4M_N^2} F_2^{N}(Q^2), \quad\quad
G_M^N(Q^2)= F_1^{N}(Q^2) + F_2^N(Q^2),
\end{eqnarray}
where $F_{1/2}^N=\sum_q e_q F_{1/2}^{q/N}$ are the Dirac (Pauli) form factors of the nucleon. The electromagnetic radii of the nucleon can be obtained from
\begin{eqnarray}
{\langle r^2_{E}\rangle }^N=-6 \frac{d G_{E}^N(Q^2)}{dQ^2}{\Big\vert}_{Q^2=0}, \quad\quad
{\langle r^2_{M}\rangle}^N=-\frac{6}{G_{M}^N(0)} \frac{d G_{M}^N(Q^2)}{dQ^2}{\Big\vert}_{Q^2=0}.
\end{eqnarray}
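These relations are straightforward to evaluate numerically. The sketch below (a minimal helper of our own, assuming $G_E$ is analytic near $Q^2=0$) uses a central finite difference for the slope and, purely as an example input, a standard dipole parameterization rather than our BLFQ form factors.
\begin{verbatim}
import numpy as np

M_N = 0.938  # GeV, nucleon mass

def sachs(F1, F2, Q2):
    """Sachs form factors from Dirac/Pauli form factors of Q^2."""
    GE = F1(Q2) - Q2 / (4.0 * M_N ** 2) * F2(Q2)
    GM = F1(Q2) + F2(Q2)
    return GE, GM

def charge_radius_sq_fm2(GE, h=1e-4):
    """<r_E^2> = -6 dG_E/dQ^2 at Q^2 = 0; the factor
    (hbar c)^2 = 0.1973^2 converts GeV^-2 to fm^2."""
    slope = (GE(h) - GE(-h)) / (2.0 * h)
    return -6.0 * slope * 0.1973 ** 2

GE_dipole = lambda Q2: (1.0 + Q2 / 0.71) ** -2  # illustrative input
print(np.sqrt(charge_radius_sq_fm2(GE_dipole)))  # ~0.81 fm
\end{verbatim}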
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.4\textwidth]{Ge_Gm.eps}
\includegraphics[width=0.4\textwidth]{f1_g1_4.eps}
\caption{Left panel: proton Sachs FFs $G_E^p (Q^2)$ and $G_M^p (Q^2)$
as functions of $Q^2$.
Right panel: comparison for $xf_1(x)$ in the proton from BLFQ (gray
bands) and global fits; and comparison for $xg_1(x)$ from BLFQ (gray
bands) and measured data from COMPASS. The experimental data can be found in Refs.~\cite{Xu:2019xhk,Chakrabarti:2013dda}.
}
\label{figGEM}
\end{center}
\end{figure}
In the left panel of Fig.~\ref{figGEM}, we show the $Q^2$ dependence of the proton electric and magnetic Sachs FFs.
Overall, we obtain reasonable agreement between theory and experiment for the proton electric FF. At large $Q^2$, the magnetic form factor is also in good agreement with the data. However, our magnetic form factor at low $Q^2$ exhibits a small deviation from the data. It should be noted that the neglected higher Fock components $|qqqq\bar{q}\rangle$ can have a significant effect on the magnetic form factor.
\begin{table}[htp]
\centering
\caption{The nucleon radii in BLFQ are compared with the experimental data~\cite{Tanabashi:2018oca} and lattice results~\cite{Alexandrou:2018sjm}.}
\label{tab:radii}
\begin{tabular}{cccc}
\hline\hline
Quantity ~& BLFQ ~& Measurement ~& Lattice \\
\hline
$r^{\rm{P}}_E$ (fm) ~& $0.802^{+0.042}_{-0.040}$ ~& $0.833\pm 0.010$ ~& $0.742(13)$ \\
$r^{\rm{P}}_M$ (fm) ~& $0.834^{+0.029}_{-0.029}$ ~& $0.851\pm 0.026$ ~& $0.710(26)$ \\
\hline
$(r^{\rm{N}}_E)^2$ (fm$^2$) ~& $-0.033\pm 0.198$ ~& $-0.1161\pm 0.0022$ ~& $-0.074(16)$ \\
$r^{\rm{N}}_M$ (fm) ~& $0.861^{+0.021}_{-0.019}$ ~& $0.864^{+0.009}_{-0.008}$ ~& $0.716(29)$ \\
\hline\hline
\end{tabular}
\end{table}
We present our computed radii in Table \ref{tab:radii} and compare with measured data~\cite{Tanabashi:2018oca} as well as with recent lattice QCD calculations~\cite{Alexandrou:2018sjm}. Here again,
except for the charge radius of the neutron, we find reasonable agreement with experiment.
With our LFWFs, the proton's valence quark PDFs at leading twist are given by
\begin{eqnarray}
f_1^q=
\int_D \Psi^{\uparrow*}_{\{x^\prime_i,
\vec{p}^\prime_{\perp i},\lambda_i\}} \Psi^{\uparrow}_{\{x_i,
\vec{p}_{\perp i},\lambda_i\}} \delta(x-x_1), \nonumber
\quad
g_1^q=
\int_D (\Lambda)~\Psi^{\uparrow*}_{\{x^\prime_i,
\vec{p}^\prime_{\perp i},\lambda_i\}} \Psi^{\uparrow}_{\{x_i,
\vec{p}_{\perp i},\lambda_i\}}\delta(x-x_1) ,
\end{eqnarray}
where $\Lambda=1(-1)$ depends on the struck quark helicity $\lambda_1=\frac{1}{2}(-\frac{1}{2})$.
At the model scale, which is relevant to constituent quark masses of several hundred $\mathrm{MeV}$, the unpolarized PDFs for the valence quarks are normalized as
$
\int_{0}^{1}f_1^u(x)\,dx =2, \quad \int_{0}^{1}f_1^d(x)\,dx=1.
$
We also have the following momentum sum rule:
\[
\int_{0}^{1}x\,f_1^u(x)\,dx +\int_{0}^{1}x\,f_1^d(x)\,dx=1.
\]
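Given PDFs tabulated on a grid in $x$, these constraints are easy to verify numerically; a minimal check (a hypothetical helper, not part of our evolution code) could read:
\begin{verbatim}
import numpy as np

def check_sum_rules(x, f1_u, f1_d):
    """x: grid in (0, 1); f1_u, f1_d: valence PDFs sampled on x."""
    n_u = np.trapz(f1_u, x)               # should be close to 2
    n_d = np.trapz(f1_d, x)               # should be close to 1
    mom = np.trapz(x * (f1_u + f1_d), x)  # should be close to 1
    return n_u, n_d, mom
\end{verbatim}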
The right panel of Fig.~\ref{figGEM} shows our results for the valence quark unpolarized and spin-dependent PDFs of the proton, where we compare the valence quark distribution after QCD evolution with the global fits by the MMHT14, NNPDF3.0, and CTEQ15 Collaborations. The error bands in our evolved distributions are due to the spread in the initial scale, $\mu_{0}^2=0.195\pm 0.020$ GeV$^2$, and the uncertainty in the coupling constant, $\alpha_s=1.1\pm0.1$. We determine ${\mu_{0}^2=0.195\pm0.020~\rm{GeV}^2}$ by requiring the result after QCD evolution to reproduce the total first moment of the valence quark unpolarized PDFs from the global data fits, with average value $\langle x\rangle_{u_v+d_v}=0.37\pm 0.01$ at $\mu^2=10$ GeV$^2$. Our unpolarized valence PDFs for both up and down quarks are found to be in good agreement with the global fits. Meanwhile, we evolve the spin-dependent PDFs from our model scale to the relevant experimental scale, $\mu^2=3$ GeV$^2$, and find that the down quark helicity PDF agrees well with measured data from the COMPASS Collaboration. However, for the up quark, our helicity PDF tends to overestimate the data below $x\sim 0.3$.
\\
\\
{\it Acknowledgment:}
CM is supported by the National Natural Science Foundation of China (NSFC) under the Grant Nos. 11850410436 and 11950410753. XZ is supported by Key Research Program of Frontier Sciences, CAS, Grant No ZDBS-LY-7020. JPV is supported by the Department of Energy under Grants No. DE-FG02-87ER40371, and No. DE-SC0018223 (SciDAC4/NUCLEI). A portion of the computational resources were provided by the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No.DE-AC02-05CH11231.
|
1,116,691,500,350 | arxiv |
\subsection{Evaluated Applications}
We evaluate 10 machine learning programs which are representative of the three most commonly used neural network classes: convolutional neural network (CNN), multi-layer perceptron (MLP), and recurrent neural network (RNN). These applications are
1) LeNet based handwritten digit recognition with \ineq{28 \times 28} images of handwritten digits from the MNIST dataset;
2) AlexNet for ImageNet classification;
3) VGG16, also for ImageNet classification;
4) ECG-based heart-beat classification (HeartClass)~\cite{jolpe18,das2018heartbeat} using electrocardiogram (ECG) data;
5) {image smoothing} (ImgSmooth)~\cite{carlsim} on $64 \times 64$ images;
6) {edge detection} (EdgeDet)~\cite{carlsim} on $64 \times 64$ images using difference-of-Gaussian;
7) {multi-layer perceptron (MLP)-based handwritten digit recognition} (DigitRecogMLP)~\cite{Diehl2015} using the MNIST database;
8) {heart-rate estimation} (HeartEstm)~\cite{HeartEstmNN} using ECG data;
9) RNN-based predictive visual pursuit (VisualPursuit)~\cite{Kashyap2018}; and
10) recurrent digit recognition (DigitRecogSTDP)~\cite{Diehl2015}.
\minor{
To demonstrate the potential of \text{{DFSynthesizer}}}{{}, we consider a real-time neuromorphic system, where these machine learning programs are executed continuously in a streaming fashion. Therefore, by optimizing throughput, \text{{DFSynthesizer}}}{{} improves real-time performance.
}
\mr{
Table~\ref{tab:apps} summarizes the topology, the number of neurons and synapses of these applications, and their baseline accuracy on the DYNAP-SE neuromorphic hardware using the SpiNeMap~\cite{spinemap} mapping framework.
As reported in many recent works~\cite{psopart,spinemap,pycarl}, spike latency on the shared interconnect of a neuromorphic hardware can lead to inter-spike interval (ISI) distortion and spike disorder. Since the performance of an SNN is a function of ISI, such non-idealities can lead to accuracy loss. Therefore, the accuracy of the three CNN architectures -- LeNet, AlexNet, and VGG16 -- in Table~\ref{tab:apps} is somewhat lower than that reported via functional simulation in Table~\ref{tab:conversion_accuracy}.
}
\vspace{-10pt}
\begin{table}[h!]
\renewcommand{\arraystretch}{0.8}
\setlength{\tabcolsep}{2pt}
\caption{Applications used to evaluate \text{{DFSynthesizer}}}{{}.}
\label{tab:apps}
\vspace{-10pt}
\centering
\begin{threeparttable}
{\fontsize{6}{10}\selectfont
\begin{tabular}{ccc|ccl|c}
\hline
\textbf{Class} & \textbf{Applications} &
\textbf{Dataset} &
\textbf{Synapses} & \textbf{Neurons} & \textbf{Topology} & \textbf{\minor{Top-1 Accuracy (\%)}}\\
\hline
\multirow{4}{*}{CNN} & LeNet & MNIST & 282,936 & 20,602 & CNN & 85.1\%\\
& AlexNet & ImageNet & 38,730,222 & 230,443 & CNN & 69.8\%\\
& VGG16 & ImageNet & 99,080,704 & 554,059 & CNN & 90.7 \%\\
& HeartClass~\cite{jolpe18} & Physionet & 1,049,249 & 153,730 & CNN & 63.7\%\\
\hline
\multirow{3}{*}{MLP} & ImgSmooth \cite{carlsim} & CARLsim & 9,025 & 4,096 & FeedForward (4096, 1024) & 100\%\\
& EdgeDet \cite{carlsim} & CARLsim & 114,057 & 6,120 & FeedForward (4096, 1024, 1024, 1024) & 100\%\\
& DigitRecogMLP & MNIST & 79,400 & 884 & FeedForward (784, 100, 10) & 91.6\%\\
\hline
\multirow{3}{*}{RNN} & HeartEstm \cite{HeartEstmNN} & Physionet & 66,406 & 166 & Recurrent Reservoir & 100\%\\
& VisualPursuit \cite{Kashyap2018} & \cite{Kashyap2018} & 163,880 & 205 & Recurrent Reservoir & 47.3\%\\
& DigitRecogSTDP \cite{Diehl2015} & MNIST & 11,442 & 567 & Recurrent Reservoir & 83.6\%\\
\hline
\end{tabular}}
\end{threeparttable}
\vspace{-10pt}
\end{table}
\subsection{Hardware Parameters}
\minor{We model the DYNAP-SE neuromorphic hardware~\cite{dynapse} with 1024 tiles organized in a $32\times 32$ mesh.} Each tile has one $128 \times 128$ crossbar. To test the scalability of \text{{DFSynthesizer}}}{{}, we also evaluate other crossbar configurations, e.g., $256 \times 256$, $512 \times 512$, and $1024 \times 1024$. Table~\ref{tab:hw_parameters} reports the relevant hardware parameters.
\begin{table}[h!]
\caption{Major simulation parameters extracted from \cite{dynapse}.}
\label{tab:hw_parameters}
\vspace{-10pt}
\centering
{\fontsize{6}{10}\selectfont
\begin{tabular}{lp{5cm}}
\hline
Neuron technology & 28nm FD-SOI\\
\hline
Synapse technology & \mr{HfO${}_2$ -based OxRAM}\\
\hline
Supply voltage & 1.0V\\
\hline
Energy per spike & 50pJ at 30Hz spike frequency\\
\hline
Energy per routing & 147pJ\\
\hline
Switch bandwidth & 1.8 G events/s\\
\hline
\end{tabular}}
\end{table}
\minor{
The additional overhead of time-multiplexing the tiles among multiple clusters is incorporated in computing the throughput using NeuroXplorer. Specifically, once the cluster-to-tile mapping is generated using \text{{DFSynthesizer}}}{{}, the synaptic weights of all clusters mapped to a tile are pre-loaded into the tile's local memory (see our system architecture in Figure~\ref{fig:system_architecture}). In this way, \text{{DFSynthesizer}}}{{} reduces the overhead of transferring synaptic weights at run-time from the shared main memory. Additionally, since the loading of clusters (context switching) into crossbars happens concurrently from their respective private memories, the time-multiplexing overhead is minimal.
}
\subsection{Evaluated Metrics}
We evaluate the following performance metrics.
\begin{itemize}
\item \textbf{Performance.} This is the throughput of each application on the hardware.
\item \textbf{Resource Utilization.} This is the neuron, synapse, buffer, connection, and input and output bandwidth utilization on the hardware for each application.
\item \textbf{Energy Consumption.} This is the energy consumed on the hardware for each application. \mr{This is the total energy consumed to generate spikes on each tile and to communicate spikes between tiles via the shared interconnect. A sketch of this computation follows the list.}
\item \textbf{Cluster Connection.} This is the average degree of the SDFG as percentage of the total number of nodes, obtained using the clustering technique for each application.
\item \textbf{Spike Communication.} This is the total number of spikes communicated on the shared interconnect of the neuromorphic hardware.
\item \textbf{Synthesis Time.} This is the time to compile and map each application on the hardware.
\end{itemize}
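As a simple illustration of how the energy metric is assembled from the parameters in Table~\ref{tab:hw_parameters}, consider the following sketch; the helper and its arguments are hypothetical and only mirror the metric's definition.
\begin{verbatim}
E_SPIKE = 50e-12   # J per spike generated on a tile (Table above)
E_ROUTE = 147e-12  # J per spike routed on the interconnect

def workload_energy(spikes_per_tile, hops_per_spike):
    """spikes_per_tile: spike counts generated on each tile;
    hops_per_spike: interconnect hop count for every routed spike."""
    compute = E_SPIKE * sum(spikes_per_tile)
    communication = E_ROUTE * sum(hops_per_spike)
    return compute + communication

print(workload_energy([1200, 950], [2, 1, 3, 2]))  # energy in J
\end{verbatim}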
\subsection{Evaluated Approaches}
We evaluate the following approaches.
\begin{itemize}
\item \textbf{\text{{SpiNeMap}}}{{~\cite{spinemap}}.} This approach first partitions an SNN into clusters of neurons and synapses by incorporating its workload. The objective is to minimize inter-cluster communication. Clusters are then mapped to tiles while minimizing spike communication on the shared interconnect and reducing energy consumption. When mapping SNNs to neuromorphic hardware with fewer tiles than the number of actors, 1) \text{{SpiNeMap}}}{{} allocates actors to tiles randomly and 2) \text{{SpiNeMap}}}{{} schedules the actors on each tile arbitrarily. Therefore, \text{{SpiNeMap}}}{{} does not consider throughput.
\item \textbf{\text{{PyCARL}}}{{~\cite{pycarl}}.} This approach maps neurons and synapses to tiles of a neuromorphic hardware, balancing the number of neurons and synapses on each tile. \text{{PyCARL}}}{{} does not incorporate SNN workload, i.e., spikes generated by neurons in the SNN. Therefore, some tiles may end up communicating more spikes than others, i.e., those tiles become the energy bottleneck.
\mr{\item \textbf{SDFSNN{~\cite{dfsynthesizer}}.} This approach uses the load-balancing mapping of \text{{PyCARL}}}{{} to allocate actors to tiles. It uses dataflow scheduling to improve the throughput.}
\item \textbf{\text{{DFSynthesizer}}}{{}.} The proposed approach first clusters an SNN, considering its workload. The objective is to improve cluster utilization. This is done by first decomposing the SNN into homogeneous neural units with fanin-of-two. The clusters are then mapped to tiles, jointly optimizing throughput and energy consumption. \text{{DFSynthesizer}}}{{} uses dataflow-based scheduling of actors to tiles to further improve the throughput.
\end{itemize}
\subsection{\mr{Workflow for Workload Generation}}
\mr{
Figure~\ref{fig:workflow_workload_gen} summarizes the workflow of the workload generation step of \text{{DFSynthesizer}}}{{}, where a machine-learning program is analyzed to generate its workload which is then used to map the application to a neuromorphic hardware.
}
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/workflow_workload_gen.pdf}}
\caption{Workflow of the workload generation step of \text{{DFSynthesizer}}}{{}.}
\label{fig:workflow_workload_gen}
\end{figure}
\mr{
\text{{DFSynthesizer}}}{{} can incorporate both Artificial Neural Networks (ANNs) and Spiking Neural Networks (SNNs) in its workflow.
At a high level, the proposed workflow consists of a model training component followed by model analysis.
In the following, we elaborate on these components.
}
\subsection{\mr{Model Training}}
\subsubsection{\mr{\underline{Training Artificial Neural Networks}}}
\mr{
\text{{DFSynthesizer}}}{{}'s frontend is integrated with Keras~\cite{keras}, which is used to define a model and train it on a database. Keras utilizes Tensorflow backend~\cite{tensorflow}. \text{{DFSynthesizer}}}{{} also supports other frameworks such as PyTorch~\cite{pytorch}. To demonstrate the capabilities of \text{{DFSynthesizer}}}{{}, we evaluate it with three Convolutional Neural Network (CNN) architectures -- 1) LeNet~\cite{lenet}, trained on MNIST handwritten digit dataset~\cite{mnist}, 2) AlexNet~\cite{alexnet}, trained on ImageNet dataset~\cite{imagenet}, and 3) VGGNet~\cite{vggnet}, trained on ImageNet dataset. These models are derived from the MLPerf~\cite{mlperf} dataset and instantiated in Keras. We use a Lambda workstation with two GPUs (see our evaluation setup in Section~\ref{sec:evaluation}) to train these models.
}
\subsubsection{\mr{\underline{Training Spiking Neural Networks}}}
\mr{\text{{DFSynthesizer}}}{{}'s frontend supports training SNN models using PyCARL~\cite{pycarl}, a Python frontend to CARLsim~\cite{carlsim}.
CARLsim facilitates SNN simulations using CPUs and multi-GPUs.
PyCARL is designed to integrate with PyNN~\cite{pynn}, which provides a common frontend to different SNN simulators with various degrees of neurobiological detail. We use CARLsim for model training. CARLsim's support for built-in biologically realistic neuron, synapse, current, and emerging learning models, together with continuous integration and testing, makes it an easy-to-use and powerful simulator of biologically plausible SNN models. \text{{DFSynthesizer}}}{{} can also utilize other SNN simulators such as Brian~\cite{brian}, NEST~\cite{nest}, and NEURON~\cite{neuron} for model training.
}
\subsection{\mr{Model Analysis}}
\subsubsection{\mr{\underline{Model Parsing and Conversion}}}
\mr{
Unfortunately, ANN models cannot be executed directly on event-driven neuromorphic hardware platforms such as DYNAP-SE~\cite{dynapse}, TrueNorth~\cite{truenorth}, and Loihi~\cite{loihi}. Recently, many tools have been proposed to convert ANN operations to SNNs. Examples include Nengo~\cite{nengo}, N2D2~\cite{n2d2}, and SNNToolBox~\cite{zurich_converter}. A common limitation of these toolboxes is that they are open-loop converters, meaning that the conversion is performed considering performance degradation only. In our prior work~\cite{jolpe18}, we have proposed a closed-loop conversion mechanism, where the conversion of analog operations to spiking equivalent is performed considering the energy consumption on hardware. These conversion steps are briefly discussed below.\footnote{\mr{The conversion framework was introduced in~\cite{jolpe18} for converting CNN-based HeartClass application to its equivalent SNN representation. We used this application to evaluate \text{{DFSynthesizer}}}{{}. Additionally, we have extended the conversion framework to add other key functionalities such as Layer Flattening, Concatenation, Binary Weight Activation, and Non-Zero Biases. These new functionalities allowed the conversion framework to convert state-of-the-art CNN architectures such as LeNet, AlexNet, and VGG16, which are used to evaluate \text{{DFSynthesizer}}}{{}.}}
}
\begin{enumerate}
\item \emph{ReLU Activation Functions:} This is implemented as the approximate firing rate of a leaky integrate-and-fire (LIF) neuron (see the rate-based sketch after this list).
\item \emph{Bias:} A bias is represented as a constant input current to a neuron, the value of which is proportional to the bias of the neuron in the corresponding analog model.
\item \emph{Weight Normalization:} This is achieved by setting a factor \ineq{\lambda} to control the firing rate of spiking neurons.
\item \emph{Softmax:} To implement softmax, an external Poisson spike generator is used to generate spikes proportional to the weighted sum accumulated at each neuron.
\item \emph{Max and Average Pooling:} To implement max pooling, the neuron which fires first is considered to be the winning neuron, and therefore, its responses are forwarded to the next layer, suppressing the responses from other neurons in the pooling function. To implement average pooling, the average firing rate (obtained from total spike count) of the pooling neurons are forwarded to the next layer.
\end{enumerate}
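To make the rate-based view behind the first three items concrete, the following sketch maps one dense ReLU layer to rectified firing rates, treating the bias as a constant drive and \ineq{\lambda} as the normalization factor. It illustrates the conversion principle only and is not our converter's implementation.
\begin{verbatim}
import numpy as np

def relu_layer_rates(rates_in, W, bias, lam, r_max=1000.0):
    """rates_in: presynaptic firing rates (Hz); W: weight matrix;
    bias: constant input current per neuron; lam: normalization factor.
    A ReLU activation maps to the rectified (and clipped) firing rate
    of a LIF neuron driven by the weighted presynaptic rates."""
    drive = W @ rates_in + bias
    return np.clip(lam * drive, 0.0, r_max)

rates = relu_layer_rates(np.array([20.0, 5.0]),
                         np.array([[0.4, -0.2], [0.1, 0.3]]),
                         bias=np.array([0.5, 0.0]), lam=10.0)
print(rates)
\end{verbatim}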
\mr{
We have extended our framework with the following new functionalities to allow for the conversion of CNN architectures such as LeNet, AlexNet, and VGGNet to their spiking counterparts.
\begin{enumerate}
\item \emph{1-D Convolution:} The 1-D convolution is implemented to extract patterns from inputs in a single spatial dimension. A $1\times n$ filter, called a kernel, slides over the input while computing the element-wise dot-product between the input and the kernel at each step.
\item \emph{Residual Connections:} Residual connections are implemented to convert the residual block used in CNN models such as ResNet. Typically, the residual connection connects the input of the residual block directly to the output neurons of the block, with a synaptic weight of `1'. This allows for the input to be directly propagated to the output of the residual block while skipping the operations performed within the block.
\item \emph{Flattening:} The flatten operation converts the 2-D output of the final pooling operation into a 1-D array. This allows for the output of the pooling operation to be fed as individual features into the decision-making fully connected layers of the CNN model.
\item \emph{Concatenation:} The concatenation operation, also known as a merging operation, is used as a channel-wise integration of the features extracted from 2 or more layers into a single output.
\end{enumerate}
}
Table~\ref{tab:conversion_accuracy} reports the accuracy impact due to the SNN conversion of three state-of-the-art supervised CNN models.
\mr{
These accuracy numbers are obtained from CARLsim~\cite{carlsim}, which allows functional simulation and performance estimation of SNN-based applications.
}
We use these three converted CNN models to evaluate \text{{DFSynthesizer}}}{{} (See Section~\ref{sec:evaluation}).
\begin{table}[h!]
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{7pt}
\centering
\begin{threeparttable}
{\fontsize{8}{10}\selectfont
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{\textbf{Application}} & \multicolumn{2}{|c|}{\minor{Top-1 Accuracy (\%)}} & \multirow{2}{*}{\textbf{Application}} & \multicolumn{2}{|c|}{\minor{Top-1 Accuracy (\%)}} & \multirow{2}{*}{\textbf{Application}} & \multicolumn{2}{|c}{\minor{Top-1 Accuracy (\%)}}\\
\cline{2-3}\cline{5-6}\cline{8-9}
& Original & SNN & & Original & SNN & & Original & SNN\\
\hline
LeNet & 94.98\% & 94.08\% & AlexNet & 74.1\% & 71.7\% & VGG16 & 93.56\% & 91.62\%\\
\hline
\end{tabular}}
\end{threeparttable}
\caption{Accuracy impact due to conversion of three state-of-the-art CNN models to their SNN equivalent. \mr{The original accuracy numbers are obtained by simulating these architectures in Keras~\cite{keras} with Tensorflow backend~\cite{tensorflow}. The converted accuracy numbers reported in the columns marked ``SNN'' are obtained from CARLsim~\cite{carlsim}. We use a multi-GPU machine to simulate these architectures using both Keras and CARLsim. See our evaluation framework in Section~\ref{sec:evaluation}.}}
\label{tab:conversion_accuracy}
\end{table}
\subsubsection{\mr{\underline{Workload Generation}}}
\mr{
The SNN model (or the converted ANN model) is analyzed in CARLsim to generate the following information.
}
\begin{itemize}
\item \emph{\textcolor{black}{Spike Data:}} the exact spike times of all neurons in the SNN model. We let \ineq{spk(i)} denote the list of spike times of the \ineq{i^\text{th}} neuron in the model.
\item \emph{\textcolor{black}{Weight Data:}} the synaptic strength of all synapses in the SNN model. We let \ineq{w(i,j)} denote the synaptic weight of the connection between the \ineq{i^\text{th}} and \ineq{j^\text{th}} neurons in the SNN model.
\end{itemize}
The spike and weight data of a trained SNN form the \textbf{SNN workload}.
Formally, an SNN workload is defined as
\begin{Definition}{SNN Workload}
{An SNN Workload ${\mathbf{G_{SNN} = (N,S,W)}}$ is a directed graph consisting of a finite set \ineq{N} of neurons, a set \ineq{S} of spikes, and a set \ineq{W} of synapses between the neurons.}
\end{Definition}
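In practice, such a workload can be held in a small annotated graph structure; the representation below is a hypothetical sketch, not the on-disk format used by our toolchain.
\begin{verbatim}
# G_SNN = (N, S, W): neurons, spike times spk(i), synaptic weights w(i, j).
workload = {
    "neurons": [0, 1, 2],                         # N
    "spikes": {0: [1.2, 3.7], 1: [2.5], 2: []},   # spk(i), in ms
    "synapses": {(0, 2): 0.8, (1, 2): -0.3},      # w(i, j)
}
\end{verbatim}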
\subsection{Throughput}\label{sec:performance_results}
Figure~\ref{fig:throughput} reports the throughput on DYNAP-SE for the evaluated approaches, for each application normalized to \text{{SpiNeMap}}}{{}.
\mr{
For reference, we have reported the maximum throughput in frames-per-second obtained with unlimited hardware resources for each application. For image-based applications (LeNet, AlexNet, VGGNet, EdgeDet, ImgSmooth, and DigitSTDP), a frame corresponds to an individual image. For other time-series applications (HeartClass, HeartEstm, and VisualPursuit), a frame corresponds to a window of 500ms.
}
We make the following \minor{four} key observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/throughput.pdf}}
\vspace{-10pt}
\caption{Throughput on DYNAP-SE for each evaluated application normalized to \text{{SpiNeMap}}}{{}. \mr{The throughput in frames-per-second is reported for the maximum throughput approach for each application assuming unlimited hardware resources.}}
\vspace{-10pt}
\label{fig:throughput}
\end{figure}
\minor{
First, although the number of neurons and synapses of larger applications such as AlexNet and VGG16 is significantly higher than that of LeNet, the throughput of LeNet on a hardware with unlimited resources,\footnote{\minor{In the context of this work, unlimited resources refer to a neuromorphic hardware that has at least the same number of crossbars as there are clusters in the machine learning program.}} i.e., without time-multiplexing of crossbars, is only 1.5x higher than that of AlexNet and 2x higher than that of VGG16. This is because, with no time-multiplexing of crossbars, computations in a machine learning program take place concurrently on the crossbars, following the basic philosophy of distributed computing that neuromorphic platforms enable.
Therefore, the overhead due to time-multiplexing of crossbars is no longer the throughput bottleneck. Rather, the bottleneck shifts to the spike delay between the clusters. Additionally, our framework clusters machine learning programs to minimize inter-cluster spikes. Therefore, even though AlexNet has a significantly higher number of neurons and synapses than LeNet, its number of inter-cluster spikes is not significantly higher. The throughput of AlexNet is only 33\% lower than that of LeNet.
Similarly, VGG16, which has higher inter-cluster spikes than AlexNet, has 25\% lower throughput.
}
\mr{
Second, the throughput obtained using \text{{SpiNeMap}}}{{} is the least because \text{{SpiNeMap}}}{{} does not guarantee throughput during actor-to-tile mapping and actor scheduling on tiles. The throughput of \text{{PyCARL}}}{{} is on average 4\% higher than that of \text{{SpiNeMap}}}{{}. This is because \text{{PyCARL}}}{{} balances the load on the tiles; therefore, the average number of actors mapped to each tile is lower than with \text{{SpiNeMap}}}{{}, which results in higher throughput. The throughput of SDFSNN is on average 9.7\% higher than that of \text{{PyCARL}}}{{}. This improvement is because of the use of dataflow-based scheduling, which maximizes the throughput. \text{{DFSynthesizer}}}{{} improves throughput by an average of 17\% compared to SDFSNN. This improvement is because, unlike SDFSNN, which maps actors to tiles balancing the tile load without considering the throughput, \text{{DFSynthesizer}}}{{} performs throughput- and energy-aware mapping of actors to tiles and then uses dataflow-based scheduling to further improve the throughput. We have analyzed such throughput differences in Section~\ref{sec:mapping_exploration}.}
\mr{
Third, the throughput using \text{{DFSynthesizer}}}{{} is only 16\% lower on average than the maximum throughput obtained with unlimited hardware resources.
Finally, DigitMLP is a very small application. All the techniques generate the same number of clusters for it, resulting in similar throughput.
}
\subsection{Workload Energy}
Figure~\ref{fig:energy} reports the workload energy estimated on DYNAP-SE of the evaluated approaches for each application normalized to \text{{SpiNeMap}}}{{}.
\mr{
For reference, we have reported the workload energy in \ineq{\mu J} obtained using the maximum throughput approach, which assumes unlimited hardware resources.
}
We make the following observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/energy.pdf}}
\vspace{-10pt}
\caption{Workload energy on DYNAP-SE for each evaluated application normalized to \text{{SpiNeMap}}}{{}. \mr{The workload energy in \ineq{\mu J} is reported for the maximum throughput approach for each application assuming unlimited hardware resources.}}
\vspace{-10pt}
\label{fig:energy}
\end{figure}
\mr{
First, the energy consumption of \text{{SpiNeMap}}}{{} is the least because this approach partitions SNNs into clusters to explicitly minimize the number of inter-cluster spikes. Therefore, when the clusters are mapped to hardware, the energy consumption on the shared interconnect is reduced.\footnote{\mr{The mapping exploration only impacts the communication energy on the shared interconnect. The spike generation energy remains the same for all approaches.}} Second, the energy consumption of \text{{PyCARL}}}{{} is on average 15\% higher than that of \text{{SpiNeMap}}}{{}. This is because \text{{PyCARL}}}{{} balances the tile load without incorporating energy consumption. Therefore, clusters with a high volume of spike communication between them may get placed on different tiles, increasing the communication energy. \text{{SpiNeMap}}}{{} places those clusters on the same tile, lowering the communication energy. The energy consumption of SDFSNN is the same as that of \text{{PyCARL}}}{{} because the cluster-to-tile mapping of these two approaches is the same. SDFSNN gains over \text{{PyCARL}}}{{} in terms of throughput due to its dataflow-based cluster scheduling on tiles. We analyzed this in Section 8.1.
}
\mr{
The energy consumption of \text{{DFSynthesizer}}}{{} is lower than SDFSNN by an average of 8\%. This reduction is due to the cluster-to-tile mapping of \text{{DFSynthesizer}}}{{}, which incorporates energy consumption.
}
\subsection{Scheduling}
Figure~\ref{fig:runtime} reports throughput of each of our applications for our proposed approach normalized to \text{{PyCARL}}}{{}.
We compare throughput obtained using \text{{DFSynthesizer}}}{{} where schedules are independently constructed for each tile against the throughput obtained using our proposed single-tile based schedule (\text{{DFSynthesizer}}}{{}+STS).
We make the following three observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/schedule.pdf}}
\vspace{-10pt}
\caption{Throughput normalized to \text{{PyCARL}}}{{}.
\label{fig:runtime}
\vspace{-10pt}
\end{figure}
First, throughput obtained from a single-tile static-order schedule is on average 15\% lower than the case when schedules are constructed independently --- that is, by using \text{{DFSynthesizer}}}{{}. This verifies our Lemma 2. Second, for some applications such as HeartEstm and HeartClass, throughput obtained using \text{{DFSynthesizer}}}{{}+STS is exactly the same as that obtained using \text{{DFSynthesizer}}}{{}. Third, throughput using \text{{DFSynthesizer}}}{{}+STS is still higher than \text{{PyCARL}}}{{} by an average of 41\%.
\subsection{Resource Utilization}
Table~\ref{tab:resource} reports the utilization of hardware resources (tile resources, buffer size, connections, and input and output bandwidth) on the DYNAP-SE neuromorphic hardware for each application. The average utilization of hardware resources is 92.5\% for the crossbar IOs on each tile, 9.0\% for buffer space, 42.6\% for connections, and 15\% for input and output tile bandwidth.
Since we perform hardware-aware analysis, resource utilization never exceeds 100\%.
\begin{table}[h!]
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{3pt}
\centering
{\fontsize{7}{10}\selectfont
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{3}{*}{\textbf{Application}} & \multicolumn{5}{|c|}{\textbf{Utilization (\%)}}\\ \cline{2-6}
& \multirow{2}{*}{\textbf{Tile}} & \multirow{2}{*}{\textbf{Buffer}} & \multirow{2}{*}{\textbf{Connections}} &
\multicolumn{2}{|c|}{\textbf{Bandwidth}}\\ \cline{5-6}
&&&&\textbf{Input} & \textbf{Output}\\
\hline
LeNet & 100 & 87.8 & 37.5 & 20.34 & 20.34 \\
AlexNet & 100 & 91.8 & 46.87 & 17.09 & 17.09 \\
VGG16 & 100 & 94.2 & 15.62 & 6.51 & 6.51 \\
HeartClass & 100 & 79.1 & 25 & 9.76 & 9.76 \\
DigitMLP & 81.25 & 9.67 & 46.87 & 22.78 & 22.78 \\
EdgeDet & 87.5 & 11.23 & 68.75 & 22.78 & 22.78 \\
ImgSmooth & 87.5 & 8.39 & 37.5 & 17.08 & 17.08 \\
HeartEstm & 96.87 & 9.61 & 62.5 & 4.7 & 4.7 \\
VisualPursuit & 90.12 & 21.2 & 25.04 & 12.11 & 16.6 \\
DigitSTDP & 89.33 & 20.13 & 22.19 & 11.94 & 11.7 \\
\hline
\end{tabular}}
\caption{Resource utilization on DYNAP-SE.}
\label{tab:resource}
\end{table}
These results illustrate that \text{{DFSynthesizer}}}{{} can be used to design neuromorphic hardware while considering key hardware parameters such as the number of tiles, as well as other resources such as buffer space, connections, and input and output bandwidth.
To give more insight on the utilization within each tile, Figure~\ref{fig:cluster_utilization} reports the average synapse utilization on tiles of the evaluated approaches for each application normalized to \text{{PyCARL}}}{{}. We make the following two key observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/utilization.pdf}}
\vspace{-10pt}
\caption{Average synapse utilization on tiles for each evaluated application normalized to \text{{PyCARL}}}{{}.}
\vspace{-10pt}
\label{fig:cluster_utilization}
\end{figure}
First, the synapse utilization on tiles using \text{{SpiNeMap}}}{{} is the least of all three evaluated approaches. This is because \text{{SpiNeMap}}}{{} produces the highest number of clusters (Sec.~\ref{sec:number_of_clusters}) and therefore, the average number of synapses per cluster is the least. Subsequently, when these clusters are mapped to tiles, the average synapse utilization on tiles reduces.
Second, \text{{DFSynthesizer}}}{{} generates fewer clusters than both \text{{SpiNeMap}}}{{} and \text{{PyCARL}}}{{} due to its dense packing of synapses using Algorithm~\ref{alg:clustering}. Therefore, the average number of synapses per cluster is higher, which increases synapse utilization on tiles when the clusters are mapped to tiles. On average, the synapse utilization of \text{{DFSynthesizer}}}{{} is 2x higher than that of \text{{PyCARL}}}{{} and 2.2x higher than that of \text{{SpiNeMap}}}{{}.
\subsection{Number of Clusters}\label{sec:number_of_clusters}
Figure~\ref{fig:total_clusters} reports the total number of clusters of the evaluated approaches for each application normalized to \text{{PyCARL}}}{{}. We make the following two key observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/total_clusters.pdf}}
\vspace{-10pt}
\caption{Number of clusters for each evaluated application normalized to \text{{PyCARL}}}{{}.}
\vspace{-10pt}
\label{fig:total_clusters}
\end{figure}
First, the number of clusters of \text{{SpiNeMap}}}{{} is the highest of all three evaluated approaches. This is because \text{{SpiNeMap}}}{{} minimizes inter-cluster communication during clustering of an SNN. Therefore, neurons that spike the most are placed within individual clusters along with their fanins. Since \text{{SpiNeMap}}}{{} does not consider cluster utilization, it creates more clusters than \text{{PyCARL}}}{{}. Second, \text{{DFSynthesizer}}}{{} clusters an SNN to maximize the resource utilization on each tile. Therefore, the number of clusters generated by \text{{DFSynthesizer}}}{{} is the lowest. Overall, the number of clusters of \text{{DFSynthesizer}}}{{} is 41\% lower than that of \text{{SpiNeMap}}}{{} and 47\% lower than that of \text{{PyCARL}}}{{}. The lower the number of clusters, the smaller the hardware needed to achieve the highest throughput (Sec.~\ref{sec:performance_results}). Therefore, \text{{DFSynthesizer}}}{{} reduces the hardware requirement for machine learning applications.
\subsection{Cluster Connections}
Figure~\ref{fig:cluster_connections} reports the cluster connections of the evaluated approaches for each application normalized to \text{{PyCARL}}}{{}. We make the following two key observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/cluster_connections.pdf}}
\vspace{-10pt}
\caption{Cluster connections for each evaluated application normalized to \text{{PyCARL}}}{{}.}
\vspace{-10pt}
\label{fig:cluster_connections}
\end{figure}
First, the number of inter-cluster connections of \text{{SpiNeMap}}}{{} is the least of all three evaluated approaches. This is because \text{{SpiNeMap}}}{{} minimizes inter-cluster communication while clustering an SNN, which indirectly reduces the cluster connectivity. Second, \text{{DFSynthesizer}}}{{} clusters an SNN to maximize the resource utilization on each tile. Therefore, the number of connections between the clusters is higher in \text{{DFSynthesizer}}}{{} because of the higher number of post-synaptic neurons mapped to each cluster. Overall, the average cluster connectivity of \text{{DFSynthesizer}}}{{} is 3.1x higher than that of \text{{SpiNeMap}}}{{} and 3.9x higher than that of \text{{PyCARL}}}{{}.
\subsection{Architecture Exploration}
Figure~\ref{fig:arch_crossbars} reports the number of clusters generated using \text{{DFSynthesizer}}}{{} for neuromorphic hardware with $128 \times 128$, $256 \times 256$, and $1024 \times 1024$ crossbars, normalized to a DYNAP-SE configuration with $128 \times 128$ crossbars. We observe that the number of clusters generated using \text{{DFSynthesizer}}}{{} reduces by 60\% and 92\% when the size of a crossbar increases to $256 \times 256$ and $1024 \times 1024$, respectively.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/arch_crossbars.pdf}}
\vspace{-10pt}
\caption{Number of clusters generated using \text{{DFSynthesizer}}}{{} for $128 \times 128$, $256 \times 256$, and $1024 \times 1024$ crossbars, normalized to the configuration of DYNAP-SE with $128 \times 128$ crossbars.}
\vspace{-10pt}
\label{fig:arch_crossbars}
\end{figure}
Fewer number of clusters increases throughput. To illustrate this, Figure~\ref{fig:arch_throughput} reports the throughput using \text{{DFSynthesizer}}}{{} for different crossbar sizes normalized to throughput on DYNAP-SE with four $128 \times 128$ crossbars. We make the following two observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/arch_throughput.pdf}}
\vspace{-10pt}
\caption{Throughput achieved using \text{{DFSynthesizer}}}{{} for $128 \times 128$, $256 \times 256$, and $1024 \times 1024$ crossbars, normalized to throughput on DYNAP-SE with $128 \times 128$ crossbars.}
\vspace{-10pt}
\label{fig:arch_throughput}
\end{figure}
First, throughput increases by 18\% and 30\% when using $256 \times 256$ and $1024 \times 1024$ crossbars, respectively. This improvement arises because, with larger crossbars, fewer clusters are generated by \text{{DFSynthesizer}}}{{} (Fig.~\ref{fig:arch_crossbars}). Therefore, the number of clusters per tile reduces, which reduces the bottleneck of time-multiplexing clusters on tiles. This increases throughput. Second, for applications such as DigitMLP, EdgeDet, and HeartEstm, there is no throughput improvement when the crossbar size is increased from $512 \times 512$ to $1024 \times 1024$. This is because, for these applications, the $256 \times 256$ crossbar configuration is already sufficient to achieve the highest throughput. For all other applications, the throughput increases by 11\% when going from $256 \times 256$ to $1024 \times 1024$ crossbars.
\subsection{Synthesis Time}
Figure~\ref{fig:compilation_time} reports the synthesis time on DYNAP-SE for the evaluated approaches, for each application normalized to \text{{PyCARL}}}{{}. We make the following three key observations.
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.99\columnwidth]{images/compile_time.pdf}}
\vspace{-10pt}
\caption{Synthesis time for each application normalized to \text{{PyCARL}}}{{}.}
\vspace{-10pt}
\label{fig:compilation_time}
\end{figure}
{First}, the synthesis time of \text{{SpiNeMap}}}{{} is on average 61.6\% higher than \text{{PyCARL}}}{{}. The higher synthesis time of \text{{SpiNeMap}}}{{} is due to the analysis it performs with the workload to obtain the minimum energy mapping. Second, the synthesis time of \text{{DFSynthesizer}}}{{} is the highest. On average, the synthesis time of \text{{DFSynthesizer}}}{{} is 35x higher than \text{{PyCARL}}}{{} and 25x higher than \text{{SpiNeMap}}}{{}. This higher synthesis time is due to 1) \text{{DFSynthesizer}}}{{}'s mapping explorations using Algorithm~\ref{alg:mapping}, and 2) \text{{DFSynthesizer}}}{{}'s SDFG analysis mechanism using the proposed Max Plus formulation. Third, the synthesis time of \text{{DFSynthesizer}}}{{} increases with model complexity. The synthesis time of \text{{DFSynthesizer}}}{{} is higher than \text{{PyCARL}}}{{} by 3.1x for LeNet, 25.5x for AlexNet, and 272.3x for VGG16.
\subsection{Model Quality}
\text{{DFSynthesizer}}}{{} does not alter synaptic connections. Therefore, the model quality, e.g., accuracy is not impacted by the analysis technique of \text{{DFSynthesizer}}}{{}. The only impact \text{{DFSynthesizer}}}{{} introduces is in converting CNNs.
The accuracy impact is reported in Table~\ref{tab:conversion_accuracy}. For all other applications, \text{{DFSynthesizer}}}{{}'s accuracy is the same as the baseline accuracy reported in Table~\ref{tab:apps}.
\subsection{Neuromorphic Algorithms}
\mr{
Recently, many approaches have been proposed to map machine learning workloads to neuromorphic hardware.
Corelet~\cite{amir2013cognitive} is used to map SNNs to TrueNorth~\cite{truenorth}. PACMAN~\cite{galluppi2015framework} is used to map SNNs to SpiNNaker~\cite{spinnaker}.
PyNN~\cite{pycarl} is used to map SNNs on Loihi~\cite{loihi}, BrainScaleS~\cite{schemmel2012live}, and Neurogrid~\cite{neurogrid} by balancing the load on each tile.
PyCARL~\cite{pycarl} is used to map SNNs to DYNAP-SE~\cite{dynapse}.
The primary objective of these approaches is to balance the workload on each tile by distributing the neurons and synapses evenly.
}
\mr{
Beyond load balancing, recent techniques have also explored other objectives.
PSOPART~\cite{psopart} is used to map SNNs to neuromorphic hardware, reducing the energy consumption on the shared interconnect.
\text{{SpiNeMap}}}{{}~\cite{spinemap} performs energy-aware clustering of SNNs and then maps the clusters to tiles, reducing the communication energy. DecomposeSNN~\cite{esl20} decomposes an SNN to improve the cluster utilization. There are also performance-oriented SNN mapping approaches such as~\cite{balaji2020run,dfsynthesizer,balaji2019design,adarsha_igsc}, energy-aware SNN mapping approaches such as~\cite{twisha_energy}, circuit aging-aware SNN mapping approaches such as~\cite{reneu,song2020case,balaji2019framework,vts_das,ncrtm}, endurance-aware SNN mapping approaches such as~\cite{twisha_endurance,espine,song2021improving}, and thermal-aware SNN mapping approaches such as~\cite{twisha_thermal}. These approaches are evaluated with emerging SNN based applications~\cite{moyer2020machine,jolpe18,das2018heartbeat,Diehl2015,HeartEstmNN,Kashyap2018}, which we also use to evaluate \text{{DFSynthesizer}}}{{}.
}
\mr{
There are also other mapping approaches such as
~\cite{ankit2018neuromorphic,zhang2018neuromorphic,xia2019memristive,lee2019system,wijesinghe2018all,wen2015eda,ramasubramanian2014spindle}.
We compare \text{{DFSynthesizer}}}{{} against \text{{PyCARL}}}{{} and \text{{SpiNeMap}}}{{}, and found it to perform significantly better.
}
\subsection*{Similar Concept in Related Domain}
SDFGs are widely used for predictable mapping of applications to multiprocessor systems. Numerous approaches to throughput analysis of SDFGs have been previously proposed~\cite{stuijk2006exploring,stuijk2007multiprocessor,damavandpeyma2012modeling,zhu2012static,shafik2015adaptive,das2015hardware,shafik2015adaptive}. Bonfietti et al. evaluated mappings of SDFG to multiprocessor system, maximizing the throughput~\cite{bonfietti2013maximum}. Stemmer et al. propose to use probabilistic analysis to allocate and schedule SDFGs on multiprocessor systems~\cite{stemmer2020towards}. Das et al. evaluated the fault-tolerant mapping of SDFGs to multiprocessor systems~\cite{das2013communication,das2015reliability,das2014communication,das2012faultRSP,das2013aging,das2014energy,das2012energy,das2013energy,das2016adaptive}. Recently, SDFG-based analysis is also proposed for analyzing machine learning applications~\cite{das2018dataflow,balaji2019ISVLSIframework,hong2017hierarchical,chen2017using,bacis2017pipelined,shihao_designflow}. However, none of these approaches address application analysis with limited hardware resources, both at design-time and at run-time.
\subsection{System Architecture}
\mr{
Figure~\ref{fig:system_architecture} illustrates our system architecture.
}
\minor{
\text{{DFSynthesizer}}}{{} is designed for crossbar-based neuromorphic hardware designs as shown in Figure~\ref{fig:tile}. This is representative of many recent neuromorphic designs~\cite{catthoor2018very,gopalakrishnan2020hfnet,ankit2017trannsformer,hu2016dot}.
}
\mr{
A machine learning model (ANN or SNN) is first analyzed to generate its workload (Section~\ref{sec:formatting}). This workload is then partitioned to generate clusters, where each cluster consists of a fraction of the neurons and synapses of the original machine learning model. The cluster workload is stored in a disk along with other machine learning workloads.
To execute a specific workload on the neuromorphic hardware, it is first loaded into the host memory and then the clusters are programmed onto the crossbars of the hardware via the PCIe interface.\footnote{\mr{Although we illustrate the crossbars to be interconnected in a mesh-based architecture such as Networks-on-Chip (NoC)~\cite{noc_benini}, \text{{DFSynthesizer}}}{{} can work with other interconnect types such as Segmented Bus~\cite{balaji2019exploration}.}}
}
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/architecture.pdf}}
\caption{\minor{Our system architecture, integrating a neuromorphic hardware. \text{{DFSynthesizer}}}{{} is designed for crossbar-based neuromorphic hardware~\cite{catthoor2018very,gopalakrishnan2020hfnet,ankit2017trannsformer,hu2016dot}. This is representative of many recent neuromorphic designs. To evaluate \text{{DFSynthesizer}}}{{}, we have configured our evaluation setup to model the DYNAP-SE hardware~\cite{dynapse}.}}
\label{fig:system_architecture}
\end{figure}
\mr{
In the remainder of this section, we describe the workload compilation step of \text{{DFSynthesizer}}}{{}, which consists of the following two design components -- Workload Decomposition and Workload Clustering. We conclude this section by providing a dataflow modeling approach for clustered workloads and performance estimation using such model.
}
\subsection{\mr{Workload Decomposition}}
We note that each $N \times N$ crossbar in a neuromorphic hardware can accommodate up to \ineq{N} pre-synaptic connections per post-synaptic neuron, with typical values of \ineq{N} between 128 (in DYNAP-SE) and 256 (in TrueNorth).
Figure~\ref{fig:crossbar_mapping} illustrates an example of mapping a) one 4-input, b) one 3-input, and c) two 2-input neurons on a $4 \times 4$ crossbar. Unfortunately, neurons with more than 4 pre-synaptic connections per post-synaptic neuron cannot be mapped to the crossbar. In fact, in many complex machine learning models such as AlexNet and VGG16, the number of pre-synaptic connections per post-synaptic neuron is much higher than 128. Therefore, these neurons cannot be mapped to a $128 \times 128$ crossbar in DYNAP-SE.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.69\columnwidth]{images/crossbar_mapping_v2.pdf}}
\caption{Example mapping of a) one 4-input, b) one 3-input, and c) two 2-input neurons on a $4 \times 4$ crossbar.}
\label{fig:crossbar_mapping}
\end{figure}
To address the above limitation, we have previously proposed a spatial decomposition technique which exploits the firing principle of LIF neurons, decomposing each neuron with many pre-synaptic connections into a sequence of homogeneous fanin-of-two (FIT) neural units~\cite{esl20}.
\mr{
Figure~\ref{fig:decomposition_demo} illustrates the spatial decomposition using a small example of a 3-input neuron shown in Figure~\ref{fig:decomposition_demo}(a). We consider the mapping of this neuron to 2x2 crossbars. Since each crossbar can accommodate a maximum of two pre-synaptic connections per neuron, the example 3-input neuron cannot be mapped to the crossbar directly. The most common solution is to eliminate a synaptic connection, which may lead to accuracy loss. Figure~\ref{fig:decomposition_demo}(b) illustrates the decomposition mechanism, where the 3-input neuron is implemented using two FIT neural units connected in sequence as shown in Figure~\ref{fig:decomposition_demo}(b). Each FIT unit is similar to a 2-input neuron and it exploits the leaky integrate behavior in hardware to maintain the functional equivalence between Figures~\ref{fig:decomposition_demo}(a) and \ref{fig:decomposition_demo}(b).
}
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/decomposition_demo.pdf}}
\caption{\mr{Illustrating the decomposition of a 3-input neuron (a) to a sequence of FIT neural units (b). The mapping of the FIT units to two 2x2 crossbars is shown in (c).}}
\label{fig:decomposition_demo}
\end{figure}
\mr{
For the sake of completeness, Figure~\ref{fig:decomposition_demo}(c) illustrates the mapping of the decomposed neuron utilizing two 2x2 crossbars. The functionality of the FIT neural units is implemented using the Non-Volatile Memory (NVM) cells of the two crossbars.
}
\mr{
To describe the decomposition Algorithm, we introduce the following notations.
Let \ineq{n_i^1,n_i^2,\cdots,n_i^{m_i}} be the \ineq{m_i} pre-synaptic connections of the neuron \ineq{N_i}. Let \ineq{F_i^1,F_i^2,\cdots,F_i^{m_i-1}} be the (\ineq{m_i-1}) FIT neural units that are generated by spatially decomposing this neuron. The input of unit \ineq{F_i^j} denoted as \ineq{In(F_i^j)} can be represented as
}
\mr{
\begin{equation}
\label{eq:spatial_decompose}
\footnotesize In(F_i^j) = \begin{cases}
\{n_i^1,n_i^2\} & \text{ for j = 1} \\
\{n_i^{j+1},Out(F_i^{j-1})\} & \text{ otherwise}
\end{cases}~\forall j\in~\{1,2,\cdots,m_i-1\}
\end{equation}
}
\mr{
where \ineq{Out(F_i^j)} is the output of the unit \ineq{F_i^j}. When decomposing a neuron, we note that the first FIT unit uses two of the original neuron's inputs. Each subsequent FIT unit uses one of the remaining original inputs together with the output of the preceding FIT unit, as shown in Figure~\ref{fig:decomposition_demo}(b).
}
Formally, a decomposed SNN graph is defined as follows.
\begin{Definition}{Decomposed SNN Graph}
A decomposed SNN graph \ineq{\mathbf{G_{DSNN} = (\textbf{F},\textbf{L})}} is a directed graph consisting of a finite set \ineq{{\textbf{F}}} of FIT neural units and a finite set \ineq{{\textbf{L}}} of links between these units.
\end{Definition}
Algorithm~\ref{alg:unrolling} shows the pseudo-code of the spatial decomposition technique, which performs the graph transformation \ineq{G_{SNN}\rightarrow G_{DSNN}}.
\mr{
For each neuron \ineq{N_i} (line 1), the set of inputs to this neuron is obtained (line 2). The first FIT unit is formed using the first two inputs (line 3). This is in accordance with Equation~\ref{eq:spatial_decompose} and Figure~\ref{fig:decomposition_demo}(b). The FIT unit is inserted into the decomposed graph \ineq{G_{DSNN}} (line 4). The algorithm then creates the other FIT units iteratively (lines 5-8) using Equation~\ref{eq:spatial_decompose} and stores those units in \ineq{G_{DSNN}}. Finally, the graph \ineq{G_{DSNN}} is returned (line 10).
}
\mr{
The overall complexity of this algorithm is calculated as follows. The outer for loop (lines 1-9) is executed once for each neuron in the original graph \ineq{G_{SNN}}, i.e., \ineq{|N|} times. Within each iteration, the algorithm creates a total of \ineq{\left(|In(N_i)|-1\right)} FIT units, where \ineq{In(N_i)} is the set of inputs of neuron \ineq{N_i}. Therefore, the algorithmic complexity is
\begin{equation}
\footnotesize \text{Complexity} = \mathcal{O}\left(\sum_{i=1}^{|N|}\bigg(|In(N_i)|-1\bigg)\right) \approx \mathcal{O}\left(|W|\right)
\end{equation}
In deriving the final expression, we note that the input connections of all the neurons in the graph \ineq{G_{SNN}} are exactly the edges \ineq{W} of the graph.
}
\begin{algorithm}[h]
\scriptsize{
\KwIn{\ineq{G_{SNN}= (\textbf{N},\textbf{W})}}
\KwOut{\ineq{G_{DSNN}= (\textbf{F},\textbf{L})}}
\For(\tcc*[f]{for each node of $G_{SNN}$}){$N_i\in \mathbf{N}$}{
$\{n_i^1,n_i^2,\cdots,n_i^{m_i}\} = \texttt{In}(N_i)$ \tcc*[r]{input links of $N_i$}
Create node $F_i^1$ with $\texttt{In}(F_i^1) = \{n_i^1,n_i^2\}$ \tcc*[r]{first FIT unit}
$G_{DSNN}.\texttt{insert}(F_i^1)$\tcc*[r]{insert the FIT neural unit $F_i^1$ in $G_{DSNN}$}
\For(\tcc*[f]{remaining FIT units}){$j=2;j<m_i;j++$}{
Create node $F_i^j$ with $\texttt{In}(F_i^j) = \{n_i^{j+1},Out(F_i^{j-1})\}$\;
$G_{DSNN}.\texttt{insert}(F_i^j)$\;
}
}
Return $G_{DSNN}$
}
\caption{Spatial decomposition of SNN graph $G_{SNN}$.}
\label{alg:unrolling}
\end{algorithm}
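\mr{
For concreteness, the following Python sketch mirrors Algorithm~\ref{alg:unrolling}. The graph containers and names (\texttt{decompose}, \texttt{fit\_units}) are illustrative choices for this sketch, not part of the \text{DFSynthesizer} implementation.
}
\begin{verbatim}
# Sketch of Algorithm 1: spatial decomposition of an SNN graph
# into fanin-of-two (FIT) units, per Eq. (spatial_decompose).
def decompose(neurons):
    """neurons: dict mapping neuron id N_i -> list of input ids."""
    fit_units = {}   # FIT unit -> its pair of inputs
    links = []       # directed links between consecutive FIT units
    for N_i, inputs in neurons.items():
        m = len(inputs)
        assert m >= 2, "a FIT unit needs two inputs"
        prev = f"F_{N_i}^1"
        fit_units[prev] = (inputs[0], inputs[1])  # first unit
        for j in range(2, m):                     # remaining units
            cur = f"F_{N_i}^{j}"
            fit_units[cur] = (inputs[j], prev)    # input + prev output
            links.append((prev, cur))
            prev = cur
    return fit_units, links

# Example: a 3-input neuron decomposes into two FIT units
# chained in sequence, as in Eq. (spatial_decompose).
units, links = decompose({"N1": ["n1", "n2", "n3"]})
\end{verbatim}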
\subsection{Workload Clustering}
The decomposed SNN graph is clustered such that each cluster is able to fit onto a crossbar. Figure~\ref{fig:clustering_demo} illustrates the concept using an example of a decomposed SNN graph shown in (\ding{182}). The nodes are the FIT neural units and the links are the synaptic connections. The number on a link represents the average number of spikes communicated between the source and destination FIT units for the representative training data. We consider the mapping of this decomposed SNN graph to a hardware with $2 \times 2$ crossbars. Since a crossbar in this hardware can only accommodate a maximum of 2 pre-synaptic connections, we partition the graph of (\ding{182}) into two partitions (shown in two different colors) in (\ding{183}). These partitions can then be mapped to the two crossbars as shown in (\ding{184}), with an average of 8 spikes communicated between the crossbars due to the mapping of the link between FIT units \textbf{\textit{d}} and \textbf{\textit{e}} on the shared interconnect of the hardware. Finally, the two clusters generated from the SNN graph are shown in (\ding{185}) along with the inter-cluster communication.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/clustering_demo.pdf}}
\caption{Illustration of SNN graph clustering. (\ding{182}) is the original decomposed SNN graph with FIT neural units shown as the nodes and average spikes communicated between them shown on the links. (\ding{183}) shows the partitioning of this graph. (\ding{184}) shows the mapping of the partitions to the two crossbars. (\ding{185}) shows the two clusters generated from the SNN graph of (\ding{182}) considering the constraints of the crossbar.}
\label{fig:clustering_demo}
\end{figure}
Formally, a clustered SNN graph is defined as follows.
\begin{Definition}{Clustered SNN Graph}
A clustered SNN graph \ineq{\mathbf{G_{CSNN} = (\textbf{A},\textbf{C})}} is a directed graph consisting of a finite set \ineq{{\textbf{A}}} of clusters and a finite set \ineq{{\textbf{C}}} of connections between these clusters.
\end{Definition}
\mr{Recently, different approaches have been proposed for clustering SNNs.
Examples include SpiNeMap~\cite{spinemap} for energy minimization and NEUTRAMS~\cite{ji2016neutrams} for performance. See Section~\ref{sec:realted_works} for a comprehensive overview of other state-of-the-art SNN clustering approaches.}
\mr{We formulate SNN clustering as a graph transformation problem and introduce an efficient algorithm to improve resource utilization. This objective is essential to provide a tighter guarantee on the performance of SNNs in hardware, as we demonstrate in Section~\ref{sec:results}.
The graph transformation \ineq{G_{DSNN}\rightarrow G_{CSNN}} is a classical graph partitioning problem \cite{kernighan1970efficient}, and has been applied in many contexts, including task mapping on multiprocessor systems \cite{das2014communication}.
We propose a greedy approach to pack the FIT neural units and synapses of the decomposed SNN graph \ineq{G_{DSNN}} into clusters, improving cluster resource utilization. Algorithm~\ref{alg:clustering} provides the pseudo-code of the clustering algorithm. For each node of the decomposed graph, the algorithm checks whether the node can be merged into one of the existing clusters (line 3) before creating a new one (lines 4--8). In this algorithm, clusters in \ineq{G_{CSNN}} are sorted in descending order of neuron and synapse utilization (line 10), so that the heavily utilized clusters are considered first for packing neurons and synapses, further improving their utilization.
\begin{algorithm}[h]
\scriptsize{
\KwIn{$G_{DSNN} = ({\textbf{F},\textbf{L}})$}
\KwOut{$G_{CSNN} = (\textbf{A},\textbf{C})$}
$G_{CSNN}$ = \{\} and \texttt{cluster\_list} = \{\}\;
\ForEach{$F_i\in \textbf{F}$}{
find $C_j \in \texttt{cluster\_list}$ such that $F_i$ can be packed in $C_j$ while improving neuron and synapse utilization of $C_j$\;
\If{$C_j = \emptyset$}{
Create new cluster $C_\text{new}$\;
Assign $F_i$ and its synaptic connections to $C_\text{new}$\;
$G_{CSNN}$.\texttt{push}($C_\text{new}$)\;
}\Else{
Assign $F_i$ and its synaptic connections to $C_j$\;
}
sort $G_{CSNN}$ in descending order of neuron and synapse utilizations\;
}
}
\caption{\small Utilization-aware SNN clustering.}
\label{alg:clustering}
\end{algorithm}
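\mr{
A compact Python sketch of Algorithm~\ref{alg:clustering} is shown below. The capacity model (one output and two inputs per FIT unit on an \ineq{N\times N} crossbar, with no input sharing) is a simplifying assumption of this sketch.
}
\begin{verbatim}
# Sketch of Algorithm 2: utilization-aware greedy clustering.
def cluster(fit_units, N=128):
    """fit_units: dict F -> pair of inputs; one N x N crossbar
    per cluster. Capacity model is a simplification."""
    clusters = []             # each: {"units": set, "inputs": int}
    for u in fit_units:
        placed = False
        for c in clusters:    # heavily utilized clusters tried first
            if len(c["units"]) < N and c["inputs"] + 2 <= N:
                c["units"].add(u)
                c["inputs"] += 2
                placed = True
                break
        if not placed:        # open a new cluster
            clusters.append({"units": {u}, "inputs": 2})
        # keep clusters sorted by utilization, descending
        clusters.sort(key=lambda c: (len(c["units"]), c["inputs"]),
                      reverse=True)
    return clusters
\end{verbatim}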
\subsection{Dataflow Modeling of Clustered Workload}
We model a clustered SNN as a Synchronous Data Flow Graph (SDFG) for predictable performance analysis~\cite{lee1987synchronous}. SDFGs are commonly used to model streaming applications that are implemented on a multi-processor system-on-chip~\cite{SB00}.
These graphs are used to analyze a system in terms of key performance properties such as throughput, execution time, communication bandwidth, and buffer requirements~\cite{Stuijk06dac}.
Nodes of an SDFG are called \textit{actors}. Each actor is a cluster of the clustered SNN graph \ineq{\mathbf{G_{CSNN} = (\textbf{A},\textbf{C})}}. Actors execute by reading \textit{tokens}, i.e., spikes, from their input ports and writing the results of the computation as tokens on the output ports. The number of tokens produced or consumed in one execution of an actor is called the \textit{port rate}. Port rates represent the number of spikes per unit time at the input and output of different clusters in the SNN, and they are visualized as annotations on edges. Actor execution is also called \textit{firing}, and it requires a fixed amount of time to execute on a crossbar. Edges in the graph are called \textit{channels} and they represent dependencies among actors.
An actor is said to be {\em ready} when it has sufficient input tokens on all its input channels and sufficient buffer space on all its output channels; an actor can only fire when it is ready.
A set $Ports$ of ports is assumed, and with each port $p \in Ports$, a finite rate $Rate(p) \in \mathbb{N}\setminus\{0\}$ is associated.
Formally, an actor is defined as follows.
\begin{Definition}{Actor}
{An actor $\actor{a}_i$ is a tuple $(I_i,O_i,\tau_i,\mu_i)$ consisting of a set $I_i$ ($\subseteq Ports$) of input ports, a set $O_i$ ($\subseteq Ports$) of output ports with $I_i \cap O_i = \emptyset$, $\tau_i$ is the execution time of $\actor{a}_i$ and $\mu_i$ is its state space, i.e., buffer space needed for communicating spikes on all of its channels.}
\end{Definition}
The source of channel $ch_i^j \in C$ is an output port of actor $\actor{a}_i$, the destination is an input port of actor $\actor{a}_j$. All ports of all actors are connected to precisely one channel, and all channels are connected to ports of some actors. The source and the destination port of channel $ch_i^j$ are denoted by $SrcP(ch_i^j)$ and $DstP(ch_i^j)$ respectively. Channels connected to the input and output ports of an actor $\actor{a}_i$ are denoted by $InC(\actor{a}_i)$ and $OutC(\actor{a}_i$) respectively.
Before an actor $\actor{a}_i$ starts its firing, it requires $Rate(q_i)$ tokens from all $(p,q_i)\in InC(\actor{a}_i)$. When the actor completes execution, it produces $Rate(p_i)$ tokens on every $(p_i,q) \in OutC(\actor{a}_i)$. One important property of an SDFG is \textit{throughput}, which is defined as the inverse of its long-term period. A period is the average time needed for one iteration of the SDFG. An iteration is defined as the minimum non-zero execution after which the SDFG returns to its original state. This is the performance parameter used in this paper. The following definitions are introduced to formulate throughput.
\begin{Definition}{Repetition Vector}
The Repetition Vector \emph{RptV} of an SDFG is defined as the vector specifying the number of times actors in the SDFG are executed in one iteration.
\end{Definition}
For the SDFG representation of a clustered SNN,
all spikes generated on a channel are consumed by the destination actor. This means that all actors are fired exactly once during one iteration of the application. So, $RptV$ is the all-ones vector $[1~1~\cdots~1]$.
\subsection{\mr{Cyclic Dependency and Deadlock Avoidance}}
\mr{
The clustering approach may lead to cyclic dependency among actors. Figure \ref{fig:cycle_example}(a) illustrates a simple feedforward network of 3 neurons (A, B, \& C). Figure \ref{fig:cycle_example}(b) illustrates a scenario where neurons A and C are placed in cluster 1 (actor 1) and neuron B in cluster 2 (actor 2) during partitioning. Due to the connectivity of the neurons in Figure \ref{fig:cycle_example}(a), there is a cyclic dependency between the two actors: \underline{\texttt{actor\_1}$\rightarrow$\texttt{actor\_2}$\rightarrow$\texttt{actor\_1}}. SDF graphs allow representing such cyclic dependency among actors, justifying our choice of using them for modeling clustered SNNs.
}
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.69\columnwidth]{images/cycles_example.pdf}}
\vspace{-10pt}
\caption{\mr{An example cycle generated during clustering of SNNs.}}
\vspace{-10pt}
\label{fig:cycle_example}
\end{figure}
\mr{
However, the presence of cycles complicates the scheduling problem because cyclic dependencies can lead to deadlocks. To address this, a cyclic SDF graph is decomposed into hierarchies of acyclic subgraphs. To describe this, we introduce the following definition.
\begin{Definition}{Strongly Connected Subgraph}
A subgraph \ineq{Z} of a directed (cyclic or acyclic) graph
is called a strongly-connected subgraph, iff for every pair of vertices \ineq{a} and \ineq{b} of \ineq{Z}, there is a path from \ineq{a} to \ineq{b} and a path from \ineq{b} to \ineq{a}.
\end{Definition}
}
\begin{figure}[h!]
\centering
\vspace{-5pt}
\centerline{\includegraphics[width=0.69\columnwidth]{images/deadlock_avoidance.pdf}}
\vspace{-10pt}
\caption{\mr{Cycle breaking for deadlock avoidance of cyclic SDF graphs~\cite{battacharyya1996loose}.}}
\vspace{-10pt}
\label{fig:liaf}
\end{figure}
\mr{
Figure~\ref{fig:liaf} shows the flowchart for \textit{cycle breaking}, also known as \emph{sub-independence partitioning}, which is the process of decomposition of strongly connected SDF graphs into hierarchies of acyclic graphs. This is roughly based on the Loose Interdependence Algorithms Framework (LIAF)~\cite{battacharyya1996loose}.
A cyclic SDF graph is first decomposed into a series of strongly connected subgraphs \ineq{Z_1,Z_2,\cdots,Z_N}. For each strongly connected subgraph \ineq{Z_i}, the LIAF algorithm tries to break cycles by properly removing edges that have sufficient delays. Let \ineq{Z_i(V_i,E_i)} be the strongly-connected subgraph of the SDF Graph. An edge \ineq{e_j\in E_i} can be removed if it has enough initial tokens to satisfy the consumption requirements of its sink actor for a complete iteration of \ineq{Z_i} and scheduling \ineq{Z_i} without \ineq{e_j} does not lead to deadlock. The edge \ineq{e_j} is called \emph{inter-iteration edge}. The inter-iteration edge removal is performed iteratively until the new subgraph with the inter-iteration edges removed is no longer a strongly connected subgraph (i.e., it becomes a \emph{loosely connected subgraph}). The subgraph is pushed into a ready list for scheduling purposes. The algorithm is repeated for all the strongly-connected subgraphs. At the end, all deadlock-free subgraphs are scheduled.
}
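\mr{
The following Python sketch outlines this decomposition using \texttt{networkx}. The token-sufficiency test shown (initial tokens on an edge at least equal to what its sink consumes in one iteration) is a simplification of the full LIAF condition, which also checks that removal does not introduce deadlock.
}
\begin{verbatim}
# Sketch of LIAF-style cycle breaking: decompose the SDFG into
# strongly connected subgraphs and remove inter-iteration edges
# that carry sufficient initial tokens.
import networkx as nx

def break_cycles(sdfg, tokens, need):
    """sdfg: nx.DiGraph; tokens[(u,v)]: initial tokens on edge;
    need[(u,v)]: tokens its sink consumes in one iteration."""
    ready = []
    for scc in nx.strongly_connected_components(sdfg):
        sub = sdfg.subgraph(scc).copy()
        # remove inter-iteration edges with enough initial tokens
        removable = [e for e in sub.edges
                     if tokens.get(e, 0) >= need.get(e, float("inf"))]
        sub.remove_edges_from(removable)
        if nx.is_directed_acyclic_graph(sub):
            ready.append(sub)   # deadlock-free, ready to schedule
    return ready
\end{verbatim}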
\subsection{Performance Estimation}
We present an approach to compute the application period of an SDFG by analyzing its maximum cycle mean (MCM) and assuming infinite hardware resources.
For this, we use Max-Plus Algebra \cite{heidergott2014max,zhang2013sdc,cong2006efficient}.
The Max-Plus semiring $\mathbb{R}_{\text{max}}$ is the set $\mathbb{R}\cup\{-\infty\}$ equipped with two basic operations $\oplus \text{ and } \otimes$, which are related to linear algebra as
\begin{equation}
\label{eq:mpb}
\footnotesize a \oplus b = \max(a,b) \text{ and } a \otimes b = a + b.
\end{equation}
\mr{
The identity element \ineq{\mymathbb{0}} for the addition \ineq{\oplus} is \ineq{-\infty} in linear algebra, i.e., \ineq{a \oplus \mymathbb{0} = a}. The identity element \ineq{\mymathbb{1}} for the multiplication \ineq{\otimes} is 0 in linear algebra, i.e., \ineq{a \otimes \mymathbb{1} = a}.
}
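\mr{
As a minimal illustration of these semantics (the function names are our own), the basic operations and the matrix-vector product used below can be written as:
}
\begin{verbatim}
# Max-Plus basics from Eq. (mpb): a (+) b = max(a,b),
# a (x) b = a + b, with identities -inf (for (+)) and 0 (for (x)).
import math
NEG = -math.inf

def oplus(a, b):  return max(a, b)
def otimes(a, b): return a + b

assert oplus(3.0, NEG) == 3.0     # a (+) identity = a
assert otimes(3.0, 0.0) == 3.0    # a (x) identity = a

# Max-Plus matrix-vector product, used to iterate t_k = T (x) t_{k-1}:
def mp_matvec(T, t):
    n = len(T)
    return [max(otimes(T[i][j], t[j]) for j in range(n))
            for i in range(n)]
\end{verbatim}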
To use Max-Plus Algebra to analyze an SDFG, it is customary to express the time at which an actor fires in terms of preceding firings in linear algebra and then use standard analysis techniques for Max-Plus Algebra to estimate timing performance. We use the running example of the SDFG in Figure~\ref{fig:sdfg_example}(a), which is obtained by clustering EdgeDet~\cite{carlsim}, an application used to evaluate \text{DFSynthesizer} (see Section~\ref{sec:evaluation}). The clustering is performed considering $1024\times 1024$ crossbars.\footnote{We evaluate \text{DFSynthesizer} primarily for DYNAP-SE neuromorphic hardware with $128 \times 128$ crossbars~\cite{dynapse}. Here we configure $1024 \times 1024$ crossbars to generate fewer clusters from EdgeDet for illustration purposes.} The firing end times of all 9 actors in the $k^{\text{th}}$ iteration (in linear algebra) are
\begin{footnotesize}
\begin{align}
\label{eq:laeqn}
t_{0}(k) &\ge t_{0}(k-1) + \tau_{0} & t_{5}(k) &\ge \texttt{max}\Big[t_{2}(k),t_{1}(k),t_{4}(k)\Big] + \tau_5\\
t_{1}(k) &\ge t_{0}(k) + \tau_1 & t_{6}(k) &\ge \texttt{max}\Big[t_{2}(k),t_{0}(k)\Big] + \tau_6\nonumber\\
t_{2}(k) &\ge t_{1}(k) + \tau_2 & t_{7}(k) &\ge \texttt{max}\Big[t_{1}(k),t_{0}(k)\Big] + \tau_7\nonumber\\
t_{3}(k) &\ge \texttt{max}\Big[t_{2}(k),t_{5}(k)\Big] + \tau_3 & t_{8}(k) &\ge \texttt{max}\Big[t_{2}(k),t_{3}(k),t_{6}(k)\Big] + \tau_8\nonumber\\
t_{4}(k) &\ge \texttt{max}\Big[t_{1}(k),t_{0}(k)\Big] + \tau_4 \nonumber
\end{align}
\end{footnotesize}\normalsize
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/sdfg_example.pdf}}
\caption{(a) An example of SDFG obtained from clustering of the EdgeDet application~\cite{carlsim}. (b) Mapping of the SDFG to a neuromorphic hardware with 4 tiles.}
\label{fig:sdfg_example}
\end{figure}
Observe that the firing end time of actor \ineq{A_0} in the $k^\text{th}$ iteration is after its firing end time in the $(k-1)^\text{th}$ iteration. Furthermore, the production and consumption rates are the same for every channel in the SDFG. Using previously introduced Max-Plus semantics, firing end times for every actor in the SDFG can be expressed as
\mr{
\begin{equation}
\label{eq:mat}
\footnotesize\mathbf{t}_k = \mathbf{T}\otimes \mathbf{t}_{k-1}
\end{equation}
where $\mathbf{{T}}$ is a matrix in \ineq{\mathbb{R}_{\text{max}}^{9\times 9}} that captures the actor execution times $\tau_{n}$ and \ineq{\mathbf{t}_k = \{t_0(k),t_1(k),\cdots,t_8(k)\}}. The following definitions are introduced to estimate latency.
}
\begin{Definition}{{Digraph}}
The digraph $\Gamma(T)$ of a $n\times n$ matrix $T$ with entries defined in $\mathbb{R}_{\text{max}}$ is the tuple $\langle A,E\rangle$, where $A$ is the set of vertices, i.e., $A = \{1,2,\cdots n\}$, and $E$ is the set of ordered arcs between vertices, i.e., $E = \{(i,j)~|~T_{i,j}\neq -\infty\}$.
\end{Definition}
\mr{
To give an example, the matrix \ineq{T = \begin{bmatrix}
-\infty & 6\\
1 & 3
\end{bmatrix}}
corresponds to the digraph shown in Figure~\ref{fig:digraph_example}.
}
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.19\columnwidth]{images/digraph_example.pdf}}
\caption{An example digraph.}
\label{fig:digraph_example}
\end{figure}
\begin{Definition}{{Walk}}
A walk $w$ in digraph $\Gamma(T)$ is a sequence of arcs $(x_1,x_2)(x_2,x_3)\cdots(x_{k-1},x_k)$ in which the second vertex of each arc coincides with the first vertex of the succeeding arc. The weight of the walk is given by
\begin{equation}
\label{eq:weight}
\footnotesize|w|_T = T_{x_1 x_2} + \cdots + T_{x_{k-1} x_k}
\end{equation}
\end{Definition}
\begin{Definition}{{Cycle}}
A cycle $c$ in digraph $\Gamma(T)$ is the walk $(x_1,x_2)(x_2,x_3)\cdots(x_{k-1},x_k)$, such that $x_k = x_1$.
\end{Definition}
\begin{Definition}{{Maximum Cycle Mean}}
The maximum cycle mean, $\rho_\text{max} (T)$ is the maximum of the weight-to-length ratio of all cycles $c$ in $\Gamma(T)$ i.e.,
\mr{
\begin{equation}
\label{eq:mcm}
\footnotesize\rho_\text{max} (T) = \max\limits_{\forall c \text{ in }\Gamma(T)}\frac{|c|_T}{|c|} = \max\limits_{k > 1} \max\limits_{x_1,\cdots,x_{k-1}} \frac{T_{x_1 x_2} + \cdots + T_{x_{k-1} x_k}}{k-1}
\end{equation}
}
\end{Definition}
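The maximum cycle mean can be computed in \ineq{\mathcal{O}(|A|\cdot|E|)} time with Karp's algorithm rather than by enumerating cycles. The sketch below uses a common multi-source variant of Karp's recurrence; the small example at the end is the digraph of the $2\times 2$ matrix shown above.
\begin{verbatim}
# Karp's algorithm for the maximum cycle mean of Eq. (mcm).
# arcs: list of (i, j, T_ij); absent arcs (T_ij = -inf) are omitted.
import math

def max_cycle_mean(n, arcs):
    NEG = -math.inf
    # D[k][v]: maximum weight of a k-arc walk ending at vertex v
    D = [[NEG] * n for _ in range(n + 1)]
    D[0] = [0.0] * n                  # multi-source initialization
    for k in range(1, n + 1):
        for (u, v, w) in arcs:
            if D[k - 1][u] > NEG:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)
    rho = NEG
    for v in range(n):
        if D[n][v] > NEG:
            rho = max(rho, min((D[n][v] - D[k][v]) / (n - k)
                               for k in range(n) if D[k][v] > NEG))
    return rho                        # throughput = 1 / rho

# Digraph of T = [[-inf, 6], [1, 3]]: the cycle 1->2->1 has mean
# (6+1)/2 = 3.5; the self-loop at vertex 2 has mean 3.
assert max_cycle_mean(2, [(0, 1, 6), (1, 0, 1), (1, 1, 3)]) == 3.5
\end{verbatim}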
In this paper, \textbf{performance of an SNN is defined in terms of throughput} of the equivalent SDFG, measured as the inverse of its \textit{maximum cycle mean} (Equation~\ref{eq:mcm}), i.e.,
\mr{
\begin{equation}
\label{eq:perf_def}
\footnotesize \text{Performance (throughput)} = \frac{1}{\rho_\text{max} (T)}
\end{equation}
In Equation~\ref{eq:perf_def}, the performance is computed using the worst-case execution time of an actor on a crossbar. This is obtained from the propagation delay of current through the synaptic elements in the crossbar. As shown in many recent works~\cite{twisha_endurance,twisha_thermal,espine}, the current propagation delay within a crossbar depends on the specific synaptic elements that are activated in the crossbar. This is due to the difference in the amount of parasitic components on the bitlines and wordlines of a crossbar along the different current paths. For performance-guarantee purposes, we assume the worst-case propagation delay in the crossbar and use it as the execution time of actors on the crossbars of a neuromorphic hardware.
}
\mr{
The performance metric defined in Equation~\ref{eq:perf_def} provides the maximum throughput, considering only the worst-case execution time of actors. However, a neuromorphic hardware introduces constraints such as limited buffer space on the crossbars and non-zero latency on the interconnect, which can lower the throughput significantly. Therefore,
\begin{equation}
\label{eq:throughput_snn}
\footnotesize\text{Throughput}_{\big{|}_{SNN}} \le \text{Throughput}_{\big{|}_\text{max}} = \frac{1}{\rho_\text{max} (T)}
\end{equation}
}
\mr{
In this work, we show that performance is impacted by
\begin{enumerate}
\item how hardware resources are allocated to actors of a clustered SNN (Section~\ref{sec:resource_allocation}), and
\item how actors mapped to the same crossbar are time-multiplexed and scheduled (Section~\ref{sec:scheduling}).
\end{enumerate}
}
\mr{
We seek to find the lower bound on performance (\ineq{\text{Throughput}_{\big{|}_\text{bound}}}) such that
\begin{equation}
\label{eq:lower_bound}
\footnotesize \text{Throughput}_{\big{|}_\text{bound}} \le \text{Throughput}_{\big{|}_{SNN}} \le \text{Throughput}_{\big{|}_\text{max}}
\end{equation}
}
\mr{
By making \ineq{\text{Throughput}_{\big{|}_\text{bound}}} close to \ineq{\text{Throughput}_{\big{|}_\text{max}}}, we provide a tighter bound on performance.
}
\subsection{Step 1: Modeling Limited Buffer Sizes of Crossbars}\label{sec:step_1}
Limited input and output buffer sizes of a tile are modeled as back-edges with initial tokens indicating the buffer size available on the tile.
This is illustrated in Figure~\ref{fig:sdfg_example}(b) with the back-edge from \ineq{A_8} to \ineq{A_3}, both of which are mapped to tile 0.
When an actor generates spikes on a channel, the available buffer space reduces; when the receiving actor consumes the spikes, the buffer space is released.
In the example, before \ineq{A_3}
can be executed, it has to check if enough buffer space is available. This is modeled by requiring tokens from the back-edge to be
consumed. Since it produces 5068 spikes per firing, 5068 tokens from the back-edge are consumed, indicating reservation of the buffer spaces. On the consumption side, when \ineq{A_8} is executed, it frees 5068 buffer spaces, indicated by a release of these tokens on the back-edge.
We assume \emph{atomic} execution of actors on a crossbar, i.e., a crossbar reads input tokens and produces output tokens in the output buffer for no more than one actor at any given instance of time. To prevent other actors mapped to the same tile from firing simultaneously,
the output buffer space is claimed at the start of execution and released only at the end of firing.
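\mr{
The back-edge token discipline can be sketched as follows; the buffer size (10000 tokens) is an illustrative assumption, while the rate of 5068 spikes per firing follows the \ineq{A_3\rightarrow A_8} example above.
}
\begin{verbatim}
# Sketch of the Step-1 back-edge buffer model: a producer may fire
# only if the back-edge holds at least as many tokens as the spikes
# it produces (buffer reservation); the consumer releases them.
back_edge_tokens = 10000      # assumed buffer space on tile 0
RATE = 5068                   # spikes A3 produces per firing

def fire_A3():
    global back_edge_tokens
    assert back_edge_tokens >= RATE, "A3 not ready: buffer full"
    back_edge_tokens -= RATE  # reserve output buffer space

def fire_A8():
    global back_edge_tokens
    back_edge_tokens += RATE  # consumption frees the buffer

fire_A3()    # ok: 10000 -> 4932 tokens remain on the back-edge
fire_A8()    # A8 consumes the spikes, releasing 5068 spaces
\end{verbatim}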
\subsection{Step 2: Actor Ordering on Crossbars}\label{sec:step_2}
The number of crossbars in a neuromorphic hardware is limited. Therefore, they may have to be shared between actors of an SNN. However, on a tile, only one actor can execute at any moment in time. We use time-division multiple-access (TDMA) to allocate time slices to actors mapped to the same tile. During its allocated time slice, an actor is executed on the crossbar of the tile and generates spikes, which are stored in the output buffer for communication on the interconnect. Next, we generate the order in which the actors bound to a tile are fired to provide a performance guarantee, i.e., throughput. For this, we apply our Max-Plus Algebra formulation (Eq.~\ref{eq:mcm}) on the SDFG of Fig.~\ref{fig:sdfg_example}(b). This is our \emph{static-order schedule}, and it is constructed at \textit{design time}.
\subsection{Step 3: Actor Execution on Crossbars}\label{sec:step_3}
Once the static-order schedule is constructed for all tiles of the hardware, we use a self-timed execution strategy~\cite{moreira2007self} to execute these actors at run time. Here, the exact firing times of actors are discarded, retaining only the assignment and ordering of actors on each tile as obtained from the design-time analysis (step 2). At run time, ready actors are inserted into a list and fired in the same order previously determined during design time.
\subsection{Mapping Exploration}\label{sec:mapping_exploration}
Sections~\ref{sec:step_1} through \ref{sec:step_3} extend the Max-Plus formulation to incorporate platform constraints.
\mr{
Using these constraints and the new formulation, one can estimate the throughput of a clustered SNN on a neuromorphic hardware for a specific actor-to-tile mapping. In the following, we explain the mapping scenario where the number of tiles in the hardware is less than the number of actors in the clustered SNN. Therefore, each tile needs to be time-multiplexed between multiple actors.
}
\minor{
Figure~\ref{fig:sota} conceptually illustrates the mapping exploration using \text{DFSynthesizer} compared to state-of-the-art solutions and the selection of the lower bound on throughput.
}
\mr{
\ding{182} represents the throughput obtained using \underline{SpiNeMap}~\cite{spinemap}, which optimizes energy consumption for a hardware platform where the number of tiles is higher than the number of actors. When SpiNeMap is applied to the case where the tiles need to be time-multiplexed, it randomly distributes the actors to the tiles and schedules them arbitrarily, without considering throughput. Therefore, the throughput represented by \ding{182} (SpiNeMap) is significantly lower than the maximum throughput} \minor{(i.e., the upper bound)} \mr{represented using \ding{187}.
}
\minor{
Therefore, the throughput variation is \ineq{T_{\ding{187}} - T_{\ding{182}}}.
}
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.99\columnwidth]{images/sota.pdf}}
\caption{\mr{Different mapping explorations and choices for the lower bound of throughput (see Equation~\ref{eq:lower_bound}).}}
\label{fig:sota}
\end{figure}
\mr{
In Figure~\ref{fig:sota}, \ding{183} represents the throughput obtained using a solution such as \underline{PyCARL}~\cite{pycarl}, which balances the load on each tile for a scenario where actors need to be time-multiplexed on the tiles. However, the actors mapped to a tile are scheduled in an arbitrary order without considering throughput. By balancing the tile load, PyCARL reduces the number of clusters mapped per tile, which improves throughput. Therefore, the throughput represented by \ding{183} is higher than \ding{182}, but lower than the maximum throughput \ding{187}. \minor{Therefore, the throughput variation is \ineq{T_{\ding{187}} - T_{\ding{183}}}.}
}
\mr{
In Figure~\ref{fig:sota}, \ding{184} represents the throughput obtained using our previous work \underline{SDFSNN}~\cite{dfsynthesizer}, which first balances the load of each tile by distributing the actors evenly, and then uses a dataflow approach to schedule the actors on each tile, improving throughput. The throughput represented by \ding{184} is therefore higher than both \ding{182} and \ding{183}, \minor{but lower than the maximum throughput \ding{187}.
Therefore, the throughput variation is \ineq{T_{\ding{187}} - T_{\ding{184}}}.}
}
\mr{
In Figure~\ref{fig:sota}, \ding{185} represents the throughput obtained using a mapping exploration framework, which explores a combination of actor-to-tile mapping and dataflow-based scheduling of actors on each tile to maximize the throughput. This throughput is higher than \ding{182}-\ding{184}, and is closer to the maximum throughput \ding{187}.
Finally, \ding{186} represents the throughput obtained using
an actor-to-tile mapping that jointly optimizes energy and throughput, and uses dataflow-based scheduling of actors on each tile to further improve the throughput. Since this solution takes energy into consideration in the mapping step, the throughput can be somewhat lower than \ding{185} as illustrated in the figure. In Section~\ref{sec:results}, we evaluate all these approaches and show that \ding{186} is still higher than \ding{182}-\ding{184}.
}
\minor{
To conclude, the design-space exploration of \text{DFSynthesizer} can generate mappings representing two minimum-throughput solutions -- \ding{185} and \ding{186}.
Although the maximum throughput remains the same for \text{DFSynthesizer} and other state-of-the-art approaches, the minimum throughput of \text{DFSynthesizer} (i.e., \ding{186}) is higher than the minimum throughput obtained using all state-of-the-art mapping solutions (i.e., \ding{182}-\ding{184}).
Therefore, the difference between maximum and minimum throughput is the smallest in \text{DFSynthesizer} compared to all state-of-the-art solutions, meaning that \text{DFSynthesizer} provides a stricter performance guarantee, which is critical for real-time systems.
}
\mr{
We now describe \text{DFSynthesizer}.
}
We integrate the extended Max-Plus formulation inside a design-space exploration framework to obtain cluster mappings that are Pareto optimal in terms of hardware metrics such as throughput, latency, energy, and reliability. In the following, we describe our mapping explorations considering energy and throughput. Such formulations can be trivially extended to consider other metrics.
The energy consumption \ineq{E_\mathcal{M}} of the mapping \ineq{\mathcal{M}} is measured considering the number of spikes that are generated inside each tile and the number of spikes that are routed on the interconnect~\cite{twisha_energy}. The energy parameters are reported in Table~\ref{tab:hw_parameters}. Using these parameters, the energy consumption is
\mr{
\begin{equation}
\label{eq:energy_computation}
\footnotesize E_\mathcal{M} = E_{spk} + E_\text{comm},
\end{equation}
where \ineq{E_{spk}} is the energy consumed in generating the spikes and propagating the spike current via the synapses, and \ineq{E_\text{comm}} is the energy consumed in communicating spikes via the shared interconnect.
}
These components are computed from \ineq{S(T_i)}, the number of spikes generated inside tile \ineq{T_i\in T}, and \ineq{S(I_{i,j})}, the number of spikes communicated on the link \ineq{I_{i,j}} between tiles \ineq{T_i} and \ineq{T_j} in the hardware.
Our objective is to maximize the throughput of a given machine-learning model on hardware (Eq.~\ref{eq:mcm}) and minimize the hardware energy consumption (Eq.~\ref{eq:energy_computation}). We formulate a joint metric \ineq{\lambda = \tau_\mathcal{M}\cdot E_\mathcal{M}}, where \ineq{\tau_\mathcal{M}} is the period (the inverse of throughput), and minimize it during our mapping explorations.
To this end, we propose an iterative approach, which explores different mapping alternatives, satisfying the cluster mapping constraint (Eq.~\ref{eq:mapping_constraint_1}). For each mapping alternative, we evaluate throughput and energy consumption. Finally, Pareto-optimal mappings are retained and returned.
Algorithm~\ref{alg:mapping} provides the pseudo-code of our proposed mapping exploration. We start by randomly distributing clusters to the tiles (line 3). We evaluate throughput and energy consumption of this mapping and compute the joint metric \ineq{\lambda} (lines 4--5). For each cluster, we do the following. We move the cluster from its current tile to every other tile and recalculate \ineq{\lambda} (lines 6--10). If \ineq{\lambda} reduces, the new mapping is retained (lines 11--13), and the algorithm proceeds to analyze the next cluster. In this way, a local minimum is reached, starting from the initial random allocation of clusters. We re-execute the algorithm \ineq{\eta} times, starting with a different random allocation of the clusters each time. In this way, many mappings are explored. Finally, mappings that are Pareto-optimal in terms of throughput and energy consumption are retained.
\begin{algorithm}[h]
\scriptsize{
\KwIn{\ineq{G_{cl}= (C,A),G_{nh}= (T,I)}}
\KwOut{\ineq{\mathcal{M}_\text{max}}}
$\mathbb{M} = \{\}$\tcc*[r]{This set holds all the mappings}
\For(\tcc*[f]{Run for $\eta$ times}){$r=0;r<\eta;r\texttt{++}$}{
Allocate clusters randomly to tiles. Call this mapping $\mathcal{M}$\;
Calculate $\tau_\mathcal{M}$ using (\ref{eq:mcm}) and energy consumption $E_\mathcal{M}$ using (\ref{eq:energy_computation})\;
Calculate the joint metric $\lambda = \tau_\mathcal{M}\cdot E_\mathcal{M}$\;
\For(\tcc*[f]{For each cluster in the graph $G_{cl}$}){$C_i\in C$}{
$T_{C_i} = \texttt{GetTileOfCluster}(\mathcal{M,C_i})$\tcc*[r]{Get the tile to which the cluster $C_i$ is mapped in the mapping $\mathcal{M}$}
\For(\tcc*[f]{Move the cluster to every other tile }){$T_j\in T\setminus T_{C_i}$}{
$\mathcal{M}_j = \texttt{MoveClusterToTile}(\mathcal{M},C_i,T_j)$ \tcc*[r]{Update the mapping to reflect the movement of cluster $C_i$ to tile $T_j$}
Calculate $\tau_{\mathcal{M}_j}, E_{\mathcal{M}_j}, \text{ and } \lambda_j$\;
\uIf(\tcc*[f]{If the joint metric improves}){$\lambda_j < \lambda$}{
$\mathcal{M} = \mathcal{M}_j$\tcc*[r]{Retain the new mapping}
}
}
}
$\mathbb{M}.\texttt{insert}(\mathcal{M})$
}
$\mathbb{M}_{PO} = \texttt{ParetoOptimization}(\mathbb{M})$\tcc*[r]{Retain only the Pareto-Optimal Mappings}
Return $\mathcal{M}_\text{max}$, the mapping with minimum execution time.
}
\caption{Mapping of the clustered graph $G_{cl}$.}
\label{alg:mapping}
\end{algorithm}
The complexity of this algorithm is as follows. The unit function \ineq{\texttt{GetTileOfCluster}} is essentially an \texttt{argmax} function
with a complexity of \ineq{O(|T|)}. The unit function \texttt{MoveClusterToTile} is a matrix update and can be performed in \ineq{O(1)}. Therefore, the complexity of the algorithm is \ineq{\mathcal{O}(\eta\cdot|C|\cdot|T|)}. Here, \ineq{\eta} is a user-defined parameter that controls the compilation time with a trade-off on the solution quality, i.e., the execution time and energy consumption of the application on hardware.
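A compact sketch of this exploration loop is given below; the \texttt{evaluate} callback (returning the period \ineq{\tau_\mathcal{M}} and energy \ineq{E_\mathcal{M}} for a mapping) stands in for the Max-Plus analysis and Eq.~\ref{eq:energy_computation}, and all names are illustrative.
\begin{verbatim}
# Sketch of Algorithm 3: iterated local search over
# cluster-to-tile mappings, minimizing lambda = tau * E.
import random

def explore(clusters, tiles, evaluate, eta=10):
    found = []
    for _ in range(eta):
        M = {c: random.choice(tiles) for c in clusters}  # random start
        tau, E = evaluate(M)
        lam = tau * E
        for c in clusters:                   # try moving each cluster
            for t in tiles:
                if t == M[c]:
                    continue
                trial = dict(M); trial[c] = t
                tau_j, E_j = evaluate(trial)
                if tau_j * E_j < lam:        # keep improving moves
                    M, tau, E, lam = trial, tau_j, E_j, tau_j * E_j
        found.append((tau, E, M))
    # retain Pareto-optimal mappings (not dominated in tau and E)
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and
                   (q[0] < p[0] or q[1] < p[1]) for q in found)
    return [p for p in found if not dominated(p)]
\end{verbatim}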
\section{Introduction}\label{sec:introduction}
\input{sections/introduction}
\minor{
\section{Scope and High-Level Overview of \text{DFSynthesizer}}\label{sec:high_level}
}
\input{sections/overview}
\section{Program Analysis and Workload Generation}\label{sec:formatting}
\input{sections/formatting}
\section{Program Compilation and Performance Estimation}\label{sec:compilation}
\input{sections/compiler}
\section{Resource Allocation and Hardware Mapping}\label{sec:resource_allocation}
\input{sections/allocation}
\section{Scheduling and Performance Guarantee}\label{sec:scheduling}
\input{sections/scheduling}
\section{Evaluation Methodology}\label{sec:evaluation}
\input{sections/evaluation}
\section{Results and Discussions}\label{sec:results}
\input{sections/results}
\section{Related Works}\label{sec:realted_works}
\input{sections/related_works}
\section{Conclusions}\label{sec:conclusions}
\input{sections/conclusions}
\begin{acks}
This work is supported by 1) the National Science Foundation Award CCF-1937419 (RTML: Small: Design of System Software to Facilitate Real-Time Neuromorphic Computing) and 2) the National Science Foundation Faculty Early Career Development Award CCF-1942697 (CAREER: Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability).
\end{acks}
\bibliographystyle{IEEEtranSN}
\section{Introduction}
\label{intro}
Continuous-variable cluster states are entangled resources for continuous-variable (CV) measurement-based quantum computation (MBQC)~\cite{Menicucci2006}. They are highly scalable, can be generated deterministically, and operate at room temperature---all of which make them an attractive substrate for quantum computing ~\cite{Yokoyama2013,Asavanant2019,Larsen2019,Raussendorf2001, Menicucci2011a,Chen2014,Yoshikawa2016}.
CV cluster states were originally constructed using single-mode squeezed states and CV controlled-$Z$ gates (in direct analogy to their qubit counterparts)~\cite{Menicucci2006}.
However, realizing these states in the lab demands experimentally difficult inline squeezing operations to implement the CV controlled-$Z$ gates~\cite{van2007building, Gu2009}.
Later, it was realized that CV cluster states could be generated using an experimentally accessible set of resources: offline squeezing and constant-depth local linear optical circuits~\cite{Menicucci2008,menicucci2007ultracompact, flammia2009optical,Menicucci2011a, Wang2014, Alexander2016, Alexander2018, Wu2020, fukui2020temporal, zhu2021hypercubic, larsen2021architecture}.
Since then, large-scale macronode cluster states have been experimentally produced across frequency~\cite{Chen2014} and temporal~\cite{Yokoyama2013,Asavanant2019,Larsen2019,Yoshikawa2016} modes.
The macronode wire is a linear macronode cluster state used to implement single-mode, Gaussian, unitary operations in a measurement-based fashion.
It is constructed from a chain of two-mode squeezed states linked by 50:50 beam splitters~\cite{Menicucci2011a,Walshe2020} as shown in Fig.~\ref{fig:macronodeWireGraph}(a).
Coupling together macronode wires using additional beam splitters produces higher-dimensional macronode cluster states that are useful for universal quantum computing~\cite{Menicucci2011a, Wang2014, alexander2016flexible, Alexander2016, Alexander2018, Wu2020, fukui2020temporal, zhu2021hypercubic, larsen2021architecture}.
We focus on the quad-rail lattice (QRL)~\cite{Menicucci2011a,alexander2016flexible}, which is used to implement the two-mode unitaries required for universality.
Although originally proposed as a macronode-based implementation of a 2D square-lattice cluster state, the QRL construction~\cite{Menicucci2011a} does not require a specific graph topology---it only requires that four local modes are stitched together as in Fig.~\ref{fig:macronodeWireGraph}(b).
This can be used to realize a class of graphs that includes 3D lattices, such as that in Ref.~\cite{Wu2020} and the Raussendorf--Harrington--Goyal~(RHG) lattice~\cite{Raussendorf2007,tzitrin2021fault}, which provide topological fault tolerance when used as a qubit cluster state.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{MacronodeCartoonQRL_4modeindicated}
\caption{\label{fig:macronodeWireGraph} Macronode cluster states for quantum computing. Each light purple oval designates a grouping of two local modes called a (two-mode) macronode. Arrows represent beam splitters between local modes, applied in the order $\{1,2,3\}$. (a) In the one-dimensional case, known as a macronode wire, macronodes are chained together using beam splitters. Macronode-local measurements teleport an input state along the macronode wire with gates applied at each macronode that depend on the measurement bases and the specific states in each wire. (b) An example of a 2D quad-rail-lattice (QRL) construction.
Macronode wires are periodically coupled to one another using additional (red) beam splitters, and
local measurements teleport multimode input states and facilitate two-mode gates.
Previous work on the QRL interprets coupled two-mode macronodes as a four-mode macronode~\cite{alexander2016flexible};
we circle one of these four-mode macronodes (solid outline) to highlight the defining property of the QRL: local four-mode coupling.
Dashed boxes indicate the \emph{macronode gadgets} used to implement (a) single-qubit and (b) two-qubit Clifford gates on a GKP-encoded qubit. Their respective circuit diagrams can be found in Eq.~\eqref{circuit:macrnonodegadget} and Eq.~\eqref{twoModeCircuit_QRL}.
}
\end{figure}
Any physical implementation of quantum circuits will involve some degree of noise, but CV MBQC also has to contend with \emph{intrinsic noise} that results from finite energy constraints~\cite{Gu2009, Alexander2014}. Thus, considerations of scalable quantum computing with CV cluster states require a method for addressing noise up front.
Bosonic codes fill this role by encoding discrete-variable
quantum registers within the Hilbert space of one or more bosonic modes.
Reference~\cite{menicucci2014fault} proposed using the Gottesman-Kitaev-Preskill (GKP) encoding~\cite{GKP} to discretize the intrinsic Gaussian noise that arises in MBQC with CV cluster states.
GKP error correction collapses CV noise (including intrinsic noise) into probabilistic qubit-level errors.
These errors will also need to be corrected, so they must be passed on to a higher level quantum error correcting code. Fault-tolerant quantum computation is possible given an effective qubit error rate below some threshold value (specific to the noise model, chosen higher level code, and decoder) \cite{menicucci2014fault,larsen2021architecture,Noh2021lowoverhead}.
Recent studies have married macronode cluster states with the GKP code~\cite{Walshe2020, Larsen2020noiseAnalysis} and found squeezing thresholds~\cite{larsen2021architecture, tzitrin2021fault, Noh2021lowoverhead} in reach of near-term technology. This provides a (non-optimized) target for experimental efforts into creating these resource states.
GKP error correction, which mitigates intrinsic finite-squeezing noise, is possible out of the box, because the Gaussian unitary operations from MBQC with CV cluster states are all that is needed for the full set of single- and two-qubit GKP Cliffords~\cite{menicucci2014fault}.
However, current proposals that are based on local implementation of gates rely on compiling GKP Clifford gates and error correction across several teleportation steps~\cite{larsen2021architecture,Noh2021lowoverhead}, which is undesirable since each additional step adds finite-squeezing noise.
In this work, we provide three critical advances. First, we simplify gate implementations by showing that all GKP Cliffords can be performed deterministically in a single teleportation step.
Second, we show that preparing the cluster-state modes in squeezed GKP states (called qunaught states~\cite{Walshe2020}), automatically implements GKP error correction during gate execution.
Together, these streamline quantum computing in the QRL construction by using the minimal number of noisy ancilla states.
Third, we embed the scheme for making GKP magic states from Ref.~\cite{Baragiola2019} directly into the macronode setting---attaching supplementary modes to the cluster state and additional inline squeezing for CV controlled-$Z$ gates are not necessary.
Using the first two advances, we calculate the logical-gate error rates for GKP Clifford implementations and find that they surpass previous best-case gate error rates~\cite{Larsen2020noiseAnalysis}. For the noisiest gate, the logical controlled-$Z$, we find that gate error rates of $10^{-2}$--$10^{-3}$ are achievable with 11.9--13.7~dB of squeezing in the resource states that comprise the cluster state. Since our advances use the minimum number of ancillae per gate, they outperform previous studies by at least $\sim 1.3$~dB~\cite{menicucci2014fault, Larsen2020noiseAnalysis, larsen2021architecture}.
\section{Notation and conventions}\label{notation}
We first lay out notational conventions and important continuous-variable circuit identities used throughout this work, most of which come from Ref.~\cite{Walshe2020}, where further details can be found.
\subsection{CV bases and unitary operators}
\label{subsec:CVunitaries}
Each CV mode has canonical position and momentum operators, $\op{q} = \frac{1}{\sqrt{2}}(\op{a} + \op{a}^\dagger)$ and $\op{p} = \frac{-i}{\sqrt{2}}(\op{a} - \op{a}^\dagger )$, obeying the canonical commutation relation $[\op{q}, \op{p}] = i$. This corresponds to an implicit choice of $\hbar = 1$, with measured vacuum variance equal to $\tfrac 1 2$ in both quadratures. Their eigenstates $\qket{s}$ and $\pket{t}$ satisfy $\op{q} \qket{s} = s \qket{s}$ and $\op{p} \pket{t} = t \pket{t}$.
The displacement operators
\begin{align}
\op X (s) &\coloneqq e^{-i s \op p} = \op D(\tfrac {s} {\sqrt 2}),
\\
\op Z (t) &\coloneqq e^{i t \op q} = \op D(\tfrac {i t} {\sqrt 2}),
\end{align}
displace by $+s$ in position and $+t$ in momentum, respectively: $\op X^\dag(s) \op q \op X(s) = \op q + s$, $\op Z^\dag(t) \op p \op Z(t) = \op p + t$, with $\op D(\alpha) = e^{\alpha \op a^\dag - \alpha^* \op a}$ being the ordinary quantum-optics displacement operator.
With the phase-delay operator
\begin{equation} \label{eq:phasedelay}
\op R(\theta) \coloneqq e^{i \theta \op{a}^\dagger \op{a}}
\, ,
\end{equation}
we define a rotated momentum quadrature
\begin{align} \label{eq:rotatedquadrature}
%
\op{p}_{\theta} &\coloneqq \op{R}^\dagger(\theta) \op{p} \op{R}(\theta) =
\op p \cos \theta + \op q \sin \theta
\, ,
\end{align}
whose eigenstates, satisfying $ \op{p}_{\theta} \ketsub{t}{p_{\theta}} = t \ketsub{t}{p_{\theta}}$, are given by
$\ketsub{t}{p_{\theta}} \coloneqq \op R^\dagger(\theta) \pket{t}$~\cite{Walshe2020}.
A special case of the phase delay operator is the Fourier transform operator,
\begin{equation} \label{eq:Fouriergate}
\op{F} \coloneqq \op R(\tfrac{\pi}{2})
\, ,
\end{equation}
which rotates the canonical quadratures, $\op{F}^\dagger \op{q} \op{F} = -\op{p}$, and $\op{F}^\dagger \op{p} \op{F} = \op{q}$.
We describe measurements of a rotated quadrature $\op{p}_\theta$, realized via homodyne detection, as projections onto rotated eigenstates.
In the circuit setting, these projections are described by the bra
\begin{align}\label{rotmeasure}
\brasub{m}{p_\theta}\coloneqq\pbra{m}\op {R}(\theta),
\end{align}
where $m$ is the measurement outcome.
We use two additional single-mode Gaussian operators. First is the squeezing operator with squeezing factor~${\zeta \in \reals}$,
\begin{align}
\op S(\zeta) & \coloneqq \op R(\Im \ln \zeta) e^{-\tfrac{i}{2} (\ln{\abs \zeta}) (\op q \op p+\op p \op q)} \label{eq:squeezinggate} \\
&=\op R(\Im \ln \zeta) e^{\tfrac{1}{2} (\ln{\abs \zeta}) (\op a^{\dag 2} -\op a^2 )}
\, ,
\end{align}
where $\Im \ln \zeta = \pi$ if $\zeta < 0$ and 0 otherwise. This appends a $\pi$ phase shift to the squeezing operation
if and only if $\zeta < 0$, keeping its symplectic Heisenberg action consistent for all real~$\zeta$ (see Ref.~\cite{Alexander2016}).
Next is the momentum-shear operator with shear parameter~$\sigma$:
\begin{align}
\op P(\sigma) &\coloneqq e^{\frac{i}{2}\sigma \op q^2}
\, .\label{eq:sheargate}
\end{align}
The two-mode CV gates we focus on are the controlled-$Z$ gate and the balanced beam splitter. A controlled-$Z$ gate with real weight $g$,
\begin{align} \label{eq:CVCZ}
\CZ(g) &\coloneqq e^{i g \op q \otimes \op q}
\, ,
\end{align}
is symmetric (invariant under swapping the inputs).
The two-mode entangling gate for macronode cluster states is a balanced beam splitter. For modes $j$ and $k$, the beam splitter convention we use is
\begin{align} \label{beam splitterdef}
\op{B} _{jk} &\coloneqq e^{-i \frac{\pi}{4}(\op{q}_j \otimes \op{p}_k - \op{p}_j \otimes \op{q}_k )}\\
&= e^{\tfrac{\pi}{4}(\op a_j \otimes \op a_k^\dag-\op a_j^\dag \otimes \op a_k)}
\, .
\end{align}
Note that Hermitian conjugation is equivalent to exchanging the inputs: $\op B_{jk}^\dag = \op B_{kj}$.
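As a quick consistency check of these conventions, the Heisenberg action of the single-mode gates on the quadrature vector $(\op{q}, \op{p})^T$ can be tabulated as symplectic matrices. The following numpy sketch encodes them (for the squeezing convention $\op{q} \mapsto \zeta \op{q}$ with $\zeta > 0$) and verifies the Fourier relations $\op{F}^\dagger \op{q} \op{F} = -\op{p}$ and $\op{F}^\dagger \op{p} \op{F} = \op{q}$; it is an illustration of the stated relations, not a simulation of the gates.
\begin{verbatim}
# Symplectic (Heisenberg) action of the Gaussian gates above:
# U^dag (q,p)^T U = M_U (q,p)^T, with hbar = 1.
import numpy as np

def R(theta):        # phase delay, Eq. (phasedelay)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def S(zeta):         # squeezing, Eq. (squeezinggate): q -> zeta*q
    return np.array([[zeta, 0.0], [0.0, 1.0 / zeta]])

def P(sigma):        # momentum shear, Eq. (sheargate): p -> p + sigma*q
    return np.array([[1.0, 0.0], [sigma, 1.0]])

F = R(np.pi / 2)     # Fourier gate, Eq. (Fouriergate)
assert np.allclose(F, [[0, -1], [1, 0]])   # q -> -p and p -> q

def CZ(g):           # controlled-Z, Eq. (CVCZ); ordering (q1,p1,q2,p2)
    M = np.eye(4)
    M[1, 2] = g      # p1 -> p1 + g*q2
    M[3, 0] = g      # p2 -> p2 + g*q1
    return M
\end{verbatim}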
\subsection{Quantum circuits and right-to-left convention}
Following Ref.~\cite{Walshe2020}, the circuits in this work flow \emph{from right to left}, with input states specified by kets on the right-hand side of the circuit and projective measurements (including the outcome) specified by bras on the left-hand side. With this convention, which is sometimes called the Kitaev convention, each circuit maps simply to Dirac notation. This means gates merge together without reversing their order, and projective measurements can be straightforwardly represented as bras. For instance,
\label{eq:RtoLcircuit}
\begin{align}
\quad\;\;
\Qcircuit @C=0.6em @R=1em
{
&\lstick {\brasub{m}{b}\!} & \gate B & \gate A & \rstick{\!\ket{\psi}} \qw
}
\qquad=\quad\;\;\;
\Qcircuit @C=0.6em @R=1em
{
& \lstick {\brasub{m}{b}\!} & \gate {B A} & \rstick{\!\ket{\psi}} \qw
}
\qquad=\,
\brasub{m}{b} \op B \op A \ket \psi
\raisebox{-0.5ex}{,}
\end{align}
where $\ket \psi$ is the input state, $\brasub{m}{b}$ indicates a measurement in basis~$b$ with outcome~$m$, and the result of the circuit is an amplitude for that outcome. Circuits for which only some of the systems are measured produce Kraus operators associated with that outcome under a similarly straightforward mapping. The notation for other circuit elements is standard and without modification, except for the understanding that time flows right to left.
One circuit element of particular importance, whose novel notation was first introduced in Ref.~\cite{Walshe2020}, is that of the beam splitter, Eq.~\eqref{beam splitterdef}. We represent this as a vertical arrow pointing from the wire for mode~$j$ to that for mode~$k$:
\begin{equation}\label{BScircuit}
\raisebox{-1em}{$\op{B} _{jk} =\quad {\scriptsize\text{(out)}}~~$}
\Qcircuit @C=1.25em @R=2.1em {
&
\bsbal{1}
&
\rstick{j}
\qw \\
&
\qw
&
\rstick{k}
\qw
}
\raisebox{-1em}{\quad\;\; {\scriptsize\text{(in)}} \quad.}
\end{equation}
Since $\op B_{jk}^\dag = \op B_{kj}$, taking the Hermitian conjugate reverses the direction of the arrow.
\subsection{Gottesman-Kitaev-Preskill code}
The ideal, square-lattice GKP computational basis states, indexed by $j \in \{0,1\}$, are described by periodic wavefunctions in position and momentum, respectively given by~\cite{GKP}
\begin{align}\label{GKP}
\ket{j_\GKP} &\coloneqq
\int ds\, \Sha_{2\sqrt\pi}( s-j\sqrt{\pi} ) \qket s\\
&= \int dt\, e^{ij\sqrt{\pi}t}\Sha_{\sqrt{\pi}}(t)\pket{t},
\end{align}
where a Dirac comb of period $T$ is defined as~\cite{mensen2021}
\begin{equation}
\Sha_{ T }(x) \coloneqq \sqrt{T} \sum_{n=-\infty}^\infty \delta(x - n T)
.
\end{equation}
Together, the basis states span a two-dimensional subspace of a CV mode's Hilbert space that is described by the projector
\begin{align} \label{eq:GKPproj}
\op{\Pi}_\GKP \coloneqq &\outprod{0_\GKP}{0_\GKP} + \outprod{1_\GKP}{1_\GKP}
\, .
\end{align}
A distinguishing feature of GKP codes is that the Clifford group can be implemented with Gaussian unitary operations on the CV mode. For the square-lattice GKP code considered here, the correspondence between Gaussian CV unitaries and their action as Clifford gates in the square-lattice GKP encoding is
\begin{align} \label{eq:gateconnections}
\underbrace{\big\{ \op{I}, \op{F}, \op{P}(\pm 1), \CZ(\pm 1)\big\}}_{\text{CV unitaries}}
\longmapsto
\underbrace{\big\{ \bar{I}, \bar{H}, \bar{P}, \bar{\text{C}}_{Z} \big\}}_{\text{GKP Cliffords}}
,
\end{align}
respectively.
The CV unitaries were introduced in Sec.~\ref{subsec:CVunitaries}, and the GKP Clifford gates use standard notation for qubit gates, with $\bar{P}$ indicating the phase gate ($\tfrac \pi 2$ rotation about the $Z$ axis of the Bloch sphere).
Throughout this work, CV unitaries are indicated by hats and logical GKP gates by overbars.
Many CV unitaries can perform the same logical gate on a square-lattice GKP state---for example, $\op{F}^\dagger$ also implements $\bar{H}$; see Ref.~\cite{GKP} for further details.
\section{Quantum computing with quad-rail-lattice cluster states and the GKP code}
\label{sec:QCmacronode}
The key to using CV cluster states for computing with the GKP code is determining the measurement bases that implement GKP Clifford gates.
For a slightly different macronode cluster state, Larsen \emph{et al.} recently proposed a set of gates (including two-qubit Clifford gates)~\cite{larsen2021architecture} that requires at most two steps (teleportation through two macronodes) and relies on variable-transmission beam splitters for error correction.
In this work, we give an improved protocol that provides three advances. First, the
full generating set of GKP Clifford operations can be performed in a single measurement step. That is, all single-qubit Cliffords are executed during teleportation through a single macronode, and the two-qubit Cliffords through two entangled macronodes. This more efficient use of the macronode cluster state reduces the amount of finite-squeezing noise per gate. Second, GKP error correction is performed in parallel with each logical gate by teleporting through an encoded GKP Bell pair. This introduces less noise than an ancilla-coupled approach and leads to the improved squeezing thresholds in Sec.~\ref{sec:squeezingthresholds}.
Third, our protocol does not require variable-transmission beam splitters; rather, the beam splitter network is fixed, which simplifies experimental implementation.
\subsection{Single-mode gates}
We describe the above concepts in more detail using the essential primitive element in a macronode wire---the macronode teleportation gadget, indicated by a dashed black box in Fig.~\ref{fig:macronodeWireGraph}(a). Each macronode gadget consists of three modes. Measuring the first two teleports the input state to the output mode with a Kraus operator applied. The circuit identity for the macronode gadget, derived in Ref.~\cite{Walshe2020}, is
\begin{align}\label{circuit:macrnonodegadget}
%
\resizebox{\columnwidth}{!}
{
\phantom{$\scriptsize ~p_{\theta_a}$}\Qcircuit @C=1.75em @R=2em
{
&\lstick{\brasub{m_a}{p_{\theta_a}}} & \bsbal{1} & \qw & \rstick{\text{(in)}} \qw[-1] & \\
&\lstick{\brasub{m_b}{p_{\theta_b}}} & \qw & \bsbal{1} & \rstick{\ket{\psi}} \qw \\
&\lstick{\text{(out)}} & \qw & \qw & \rstick{ \ket{\phi} } \qw
}\,
\raisebox{-2.2em}{~=}
\Qcircuit @C=1em @R=1em {
&\ar @{-} [dl(0.5)] &\gate{ D(\mu_{a,b}) } &\gate{\frac{1}{\sqrt{\pi}} V(\theta_a ,\theta_b)} &\rstick{\text{(in)}} \qw[-1] & \\
&\ar @{-} [ul(0.5)] &\qw &\qw &\qw[-1] \ar @{-} [dr(0.5)] && \\
&&\lstick{\text{(out)}}
&\gate{\frac{1}{\sqrt{\pi}}\bounceEPRgate{\psi,\phi}{}} &\qw[-1] \ar @{-} [ur(0.5)] &&
}
}
\!\!\raisebox{-2em}{,}
%
\end{align}
where the specific elements in the right hand side are defined and discussed below.
We call this circuit the \emph{single-mode macronode gadget} since it takes a single mode as input and is implemented at a single (two-mode) macronode. Note that the term `single-mode' refers to the number of inputs, with the size of the macronode being double that since two measurements are required to teleport a single mode.
The gate $\op V(\theta_a,\theta_b)$ is a Gaussian unitary determined by the
homodyne measurement angles $\theta_a$ and $\theta_b$.
We give the standard form of this unitary (up to an overall phase)~\cite{Alexander2014, Walshe2020} along with a new decomposition that will be useful when working with the GKP code:
\begin{align}\label{vGateOrig}
\op V(\theta_a,\theta_b)&=\op R(\theta_+-\tfrac{\pi}{2})\op S(\tan{\theta_-})\op R(\theta_+)\\
&= \op R(\theta_a-\tfrac{\pi}{2}) \op P[2\cot (2 \theta_-)] \op R(\theta_a-\tfrac{\pi}{2}) \label{vGateqShear}
\, ,
\end{align}
where the operators on the right are defined in Sec.~\ref{subsec:CVunitaries}, and
\begin{align}
\theta_\pm &\coloneqq \frac{\theta_a \pm \theta_b}{2}
.
\end{align}
Equation~\eqref{vGateqShear}
arises from the Bloch-Messiah decomposition~\cite{braunstein2005squeezing} of the squeezing operation (up to an overall phase),
\begin{align} \label{eq:localsqzdecomp}
\op S(\tan \theta_-)
&= \op R(\theta_-) \op P[2 \cot (2\theta_-)] \op R(\theta_- - \tfrac \pi 2)
\, ,
\end{align}
which can be verified directly using the symplectic representation of the Heisenberg action of these operators~\cite{Alexander2014}.
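For instance, reusing the symplectic matrices \texttt{R}, \texttt{S}, and \texttt{P} from the sketch in Sec.~\ref{notation}, a short numerical check of Eq.~\eqref{eq:localsqzdecomp} (up to the overall phase, which the symplectic representation does not track) reads:
\begin{verbatim}
# Numerical check of Eq. (localsqzdecomp) at the symplectic level.
# Operator composition maps to matrix products in the same order.
import numpy as np
for theta in np.linspace(0.1, 1.4, 7):  # avoid tan singularities
    lhs = S(np.tan(theta))              # tan(theta) > 0 on this range
    rhs = R(theta) @ P(2.0 / np.tan(2.0 * theta)) \
          @ R(theta - np.pi / 2.0)
    assert np.allclose(lhs, rhs)
\end{verbatim}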
The influence of the measurement outcomes $m_a$ and $m_b$ is through a displacement $\op {D}(\mu_{a,b})$ whose amplitude depends on these outcomes and on the chosen measurement bases:
\begin{equation} \label{eq:mu}
\mu_{a,b}
%
\coloneqq \frac{ - m_a e^{i \theta_b} - m_b e^{i \theta_a } }{ \sin (2 \theta_-)}
\, .
\end{equation}
At each macronode, this displacement is known and can be corrected. For this reason, we frequently set $m_a = m_b = 0$ throughout this work so that $\mu = 0$. Interested readers can consult Ref.~\cite{Walshe2020} for more details.
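To make Eq.~\eqref{eq:mu} concrete, the following snippet evaluates $\mu_{a,b}$ for illustrative outcomes and for the identity-gate measurement bases $\{\theta_a,\theta_b\} = \{\tfrac{\pi}{2},0\}$ listed in Table~\ref{tab:twoModeMeasurementAngles}; the numerical outcomes are arbitrary.
\begin{verbatim}
# Outcome-dependent displacement amplitude of Eq. (mu).
import numpy as np

def mu(ma, mb, theta_a, theta_b):
    theta_minus = (theta_a - theta_b) / 2
    return (-ma * np.exp(1j * theta_b) - mb * np.exp(1j * theta_a)) \
           / np.sin(2 * theta_minus)

# Identity-gate settings; zero outcomes give mu = 0 (no correction).
print(mu(0.4, -0.2, np.pi / 2, 0.0))   # approx (-0.4+0.2j)
\end{verbatim}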
Finally, the local states $\ket{\psi}$ and $\ket{\phi}$ at the input
determine the gate $\op {A}(\psi,\phi)$, which is, in general, not unitary. The precise definition of this gate and more details about it can be found in Ref.~\cite{Walshe2020}. When its particular form is required, we will give it explicitly for that special case.
The circuit in Eq.~\eqref{circuit:macrnonodegadget} describes the Kraus operator
\begin{equation} \label{genkraus}
\op{K}(m_a, m_b) = \frac{1}{\pi} \bounceEPR{\psi, \phi}{} \op D(\mu_{a,b}) \op{V}(\theta_a,\theta_b)
\, ,
\end{equation}
which gives the evolution of an input state $\op{\rho}_\text{in}$ as it is teleported from the top mode to the bottom mode:
\begin{align} \label{eq:Krausmap}
\op{\rho}_\text{out} = \frac{1}{\Pr(m_a, m_b)} \op{K}(m_a, m_b) \op{\rho}_\text{in} \op{K}^\dagger(m_a, m_b)
\, .
\end{align}
Since the Kraus operator is not unitary, the output state is renormalized by the probability density of the outcomes,
$\Pr(m_a, m_b) = \Tr [\op{K}^\dag(m_a, m_b) \op K(m_a, m_b) \op{\rho}_\text{in} ]$.
The macronode gadget, Eq.~\eqref{circuit:macrnonodegadget}, allows us to implement a deterministic single-mode Gaussian unitary $\op {V}(\theta_a,\theta_b)$, Eq.~\eqref{vGateOrig}, through a choice of homodyne measurement bases. (It is deterministic because the displacement is known and can be corrected at the end of the gadget or accounted for in later steps of the computation.)
Two consecutive $\op {V}(\theta_a,\theta_b)$ gates can enact any single-mode Gaussian operation~\cite{Alexander2014}. Such generality is not required for the GKP encoding, however. In fact, the minimal set of single-qubit GKP Clifford gates in Eq.~\eqref{eq:gateconnections} can be performed in a \emph{single step}---\emph{i.e.},~a single macronode gadget---using the
measurement bases presented in Table~\ref{tab:twoModeMeasurementAngles}.
This is a novel and distinctive feature of GKP Clifford gates that has not, to the authors' knowledge, been previously reported in the literature. Single-step operation allows for lower gate error rates than previously reported (discussed further in Sec.~\ref{sec:squeezingthresholds}).
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|}
\hline
$\{\theta_a,\theta_b\}$ & $\op V(\vec \theta)$ & Logical Gate\\
\hline
$\{\tfrac{\pi}{2},0\}$ & $\op I$ & $\Bar{I}$\\
$\{\tfrac{3\pi}{4},\tfrac{\pi}{4}\}$ & $\op F$ & $\Bar{H}$\\
$\{\tfrac{\pi}{2},\tfrac{\pi}{2}\mp {\chi} \}$ & $\op P(\pm 1)$ & $\Bar{P}$\\
\hhline{|===|}
$\{\theta_a,\theta_b,\theta_c,\theta_d\}$
& $\op {V}^{(2)}(\vec{\theta})$ & Logical Gate \\
\hline
$\{\tfrac{\pi}{2},\tfrac{\pi}{2}\pm {\chi},\tfrac{\pi}{2},\tfrac{\pi}{2} \mp {\chi}\}$& $\CZ(\pm 1)$ &$\bar{\text{C}}_{Z}$ \\
%
$\{0,\tfrac \pi 2,\tfrac \pi 2,0 \}$& SWAP & $\overbar{\text{SWAP}}$ \\
$\{\tfrac{\pi}{2},0, \tfrac{\pi}{2},0\}$&$ \op I \otimes \op I$ &$\bar{I} \otimes \bar{I} $\\
$\{\frac{3 \pi}{4},\tfrac{\pi}{4},\frac{3 \pi}{4},\tfrac{\pi}{4}\}$& $\op F\otimes \op F$ & $\bar{H} \otimes \bar{H}$\\
$\{\frac{\pi}{2},\tfrac{\pi}{2}\mp {\chi},\frac{ \pi}{2},\tfrac{\pi}{2}\mp {\chi}\}$& $\op P(\pm 1)\otimes \op P(\pm 1)$ & $\bar{P} \otimes \bar{P}$ \\
\hline
\end{tabular}
\caption{
Measurement bases and the resulting GKP Clifford gates for the QRL macronode cluster state.
The upper set (above the double horizontal line) are single-mode CV gates realized with the single-mode macronode gadget in Eq.~\eqref{circuit:macrnonodegadget}. These gates implement single-qubit GKP Cliffords.
The lower set (below the double horizontal line) are two-mode CV gates realized via the two-mode macronode gadget in Eq.~\eqref{twoModeCircuit_QRL}. This set includes an entangling GKP Clifford gate, the SWAP gate, and several pairs of identical single-mode gates.
The constant angle ${\chi}$ is defined as ${\chi} \coloneqq \arctan{2 }\approx 1.1071~\mathrm{rad} \approx 63.435^\circ$.
}
\label{tab:twoModeMeasurementAngles}
\end{table}
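The single-mode entries of Table~\ref{tab:twoModeMeasurementAngles} can be reproduced with the same assumed symplectic conventions as above (a sketch, not a derivation; equalities hold up to overall phase):
\begin{verbatim}
import numpy as np

def R(t): return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
def P(s): return np.array([[1.0, 0.0], [s, 1.0]])

def V(ta, tb):  # shear form of the macronode gate, Eq. (vGateqShear)
    return R(ta - np.pi / 2) @ P(2.0 / np.tan(ta - tb)) @ R(ta - np.pi / 2)

chi = np.arctan(2.0)
F = R(np.pi / 2)                                            # Fourier gate
assert np.allclose(V(np.pi / 2, 0.0), np.eye(2))            # identity -> logical I
assert np.allclose(V(3 * np.pi / 4, np.pi / 4), F)          # F        -> logical H
assert np.allclose(V(np.pi / 2, np.pi / 2 - chi), P(1.0))   # P(+1)    -> logical P
assert np.allclose(V(np.pi / 2, np.pi / 2 + chi), P(-1.0))  # P(-1)
print("single-mode table entries reproduced")
\end{verbatim}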
\subsection{Two-mode gates}
Thus far, we have considered only single-mode gates implemented by measurements on a macronode wire. To complete the Clifford group, we need to implement an entangling gate between encoded GKP qubits.
A suitable resource for this purpose is created by coupling macronode wires together via additional beam splitters into a two-dimensional lattice.
There are various ways to do this~\cite{Larsen2019,Asavanant2019,Alexander2016,Menicucci2011a}, distinguished by (among other things) the number of modes per macronode in the final state. Here, we focus on an architecture with four modes per macronode, called the \emph{quad-rail lattice}~(QRL)~\cite{Menicucci2011a,alexander2016flexible,Alexander2017},
which was previously found to have favorable noise properties compared to other lattices~\cite{Larsen2020noiseAnalysis}.
The QRL construction depends only on each macronode comprising exactly four modes. Thus, when we refer to ``the QRL,'' we mean the particular method of stitching together a four-mode macronode~\cite{Menicucci2011a,alexander2016flexible}, with the understanding that this can be applied to any graph of degree four---\emph{i.e.},~a graph in which every node is connected to exactly four neighbors. This is an important class of graphs that includes, among others, the RHG lattice~\cite{Raussendorf2007} for which a QRL construction has been proposed and a squeezing threshold found~\cite{Tzitrin2020}.
In any QRL-based architecture, groups of four modes are coupled using a four-splitter~\cite{alexander2016flexible}, which is typically implemented using four beam splitters.
One can equally well consider two of these beam splitters to be first creating macronode wires, as shown by the dashed arrows in Fig.~\ref{fig:macronodeWireGraph}(a) and also in (b).
Macronode wires are then stitched together at a four-mode macronode using the remaining two beam splitters, as shown by the red arrows in Fig.~\ref{fig:macronodeWireGraph}(b), with an example of a four-mode macronode circled (with a solid border) in that figure.
Each four-mode macronode powers a two-mode gadget capable of implementing either two single-mode gates or one two-mode gate~\cite{alexander2016flexible}. We illustrate both actions together using a single quantum circuit, which corresponds directly to the dashed box of Fig.~\ref{fig:macronodeWireGraph}(b):
\begin{equation}\label{twoModeCircuit_QRL}
\begin{split}
\Qcircuit @C=1.4em @R=2em {
&&& \lstick{\brasub{m_a}{p_{\theta_a}}} & \qw & \qw & \bsbal[red]{3} & \bsbal{1}[.>] & \qw &\qw & \rstick{\text{(in)}} \\
&&& \lstick{\brasub{m_b}{p_{\theta_b}}} & \qw &\bsbal[red]{3} & \qw & \qw & \bsbal{1} & \qw & \rstick{\ket{\psi}}& \\
\lstick{\text{(out)}} & \qw& \qw& \qw & \qw& \qw &\qw & \qw & \qw & \qw& \rstick{\ket{\phi}}& \\
&&& \lstick{\brasub{m_c}{p_{\theta_c}}} & \qw & \qw & \qw & \bsbal{1}[.>] & \qw & \qw&\rstick{\text{(in)}} \\
&&& \lstick{\brasub{m_d}{p_{\theta_d}}} &\qw& \qw & \qw & \qw & \bsbal{1}&\qw & \rstick{\ket{\psi'}}& \\
\lstick{\text{(out)}} & \qw& \qw&\qw & \qw & \qw &\qw & \qw & \qw&\qw & \rstick{\ket{\phi'}}& \
}
\end{split}
\quad ,
\end{equation}
where the beam splitters are all the same 50:50 beam splitter but have been styled to match Fig.~\ref{fig:macronodeWireGraph}. We call this circuit the \emph{two-mode macronode gadget} since it takes two modes as inputs, produces outputs over two modes, and is implemented at a single (four-mode) macronode. It is the two-mode generalization of the single-mode macronode gadget shown in Eq.~\eqref{circuit:macrnonodegadget}.
This macronode gadget generates a two-mode quantum operation (which could be separable or entangling) between the input modes as they are jointly teleported to the output modes. Just as for the single-mode macronode gadget, Eq.~\eqref{circuit:macrnonodegadget}, the two-mode quantum operation that gets implemented depends on the quadratures being measured, the measurement outcomes $\vec{m}$, and the ancilla states that comprise the two-mode gadget.
The two-mode Kraus operator
\begin{equation}\label{eq:twomodekraus_QRL}
\op K^{(2)}(\vec m)= \frac{1}{\pi^2} \big[ \op {A}_1(\psi,\phi) \otimes \op {A}_2(\psi',\phi')\big] \op{D}_\text{QRL}^{(2)}(\vec{m} ) \op V_\text{QRL}^{(2)}(\vec \theta)
\end{equation}
has three parts. First, each macronode wire contributes a teleported gate $\op{A}_j(\psi,\phi)$ with the subscript indicating the output mode on which the gate acts. Second, each contributes an outcome-dependent displacement, and the two displacements are mixed by the beam splitters into
\begin{align}\label{eq:QRLdisplacements}
\op{D}_\text{QRL}^{(2)}(\vec{m} ) \coloneqq \op{D}_1(\mu_+) \otimes \op{D}_2(\mu_-)
\, ,
\end{align}
with $\mu_\pm = \tfrac{\mu_{c,d}\pm\mu_{a,b}}{\sqrt{2}}$.
Finally, the quadrature measurement bases $\vec \theta = ( \theta_a, \theta_b, \theta_c, \theta_d )$ implement the deterministic two-mode Gaussian unitary (up to a global phase),
\begin{align} \label{VtwoMode_QRL}
\op{V}_\text{QRL}^{(2)}(\vec \theta) \coloneqq \op{B} _{21} [\op{V}_1(\theta_a, \theta_b) \otimes \op{V}_2(\theta_c, \theta_d)] \op{B} _{12}
\, ,
\end{align}
which is represented by the circuit,
\begin{equation}\label{VtwoMode_QRLcircuit}
\Qcircuit @C=1.4em @R=1.4em {
&\qw&\gate{V_1(\theta_a, \theta_b)} & \bsbal{1} &\qw \\
&\bsbal{-1}& \gate{ V_2(\theta_c, \theta_d) } & \qw &\qw \\
}
\, \raisebox{-1.5em}{.}
\end{equation}
A derivation of this gate is given in Appendix~\ref{TwoModeV}.
Choosing various measurement angles allows us to realize various two-mode Gaussian unitaries.
Most important for GKP quantum computing is a two-mode Clifford gate, which can be implemented by a CV controlled-$Z$ gate $\CZ(\pm1)$; see Eq.~\eqref{eq:GKPproj}.
To find the measurement angles that realize this gate,
we decompose a CV controlled-$Z$ gate of weight $g$ as
\begin{align}
\CZ(g) =\op{B} _{21} [\op{P}(-g) \otimes \op{P}(g)] \op{B} _{12}
\, ,
\end{align}
equivalently described by the circuit identity
\begin{equation}\label{}
\begin{split}
\raisebox{0.6em}{
\Qcircuit @C=1.4em @R=2.6em {
&&\ctrlg{g}{1}&\qw \\
&& \ctrl{-1}&\qw \\
}
}
\,
\end{split}
=
\begin{split}
\Qcircuit @C=1.4em @R=1.2em {
& \qw &\gate{P(-g)} & \bsbal{1} &\qw \\
&\bsbal{-1}& \gate{P(g)} & \qw &\qw \\
}
\end{split}
\; ,
\end{equation}
with the left-hand side being the circuit for $\CZ(g)$.\footnote{A related decomposition in terms of local squeezing between beam splitters is given in Ref.~\cite{Kalajdzievski2021}. That decomposition is related to ours through Eq.~\eqref{eq:localsqzdecomp}.}
This convenient decomposition allows us to implement the gate using two single-mode shears of equal magnitude and opposite sign.
When $g=\pm 1$, the gate acts as the two-qubit GKP Clifford $\bar{\text{C}}_{Z}$ gate we desire; measurement angles that produce this gate are given in Table~\ref{tab:twoModeMeasurementAngles}. We note that there are many other sets of measurement angles that implement Clifford equivalent entangling gates, which we will detail in future work.
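As a numerical cross-check of this decomposition and of the two-mode table angles, consider the sketch below. It assumes a 50:50 beam-splitter convention in which $\op B_{12}$ acts on $(q_1, q_2)$, and identically on $(p_1, p_2)$, as $q_1 \mapsto (q_1 - q_2)/\sqrt{2}$, $q_2 \mapsto (q_1 + q_2)/\sqrt{2}$, with $\op B_{21}$ its transpose; other sign conventions permute the $\pm$ labels:
\begin{verbatim}
import numpy as np

def R(t): return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
def P(s): return np.array([[1.0, 0.0], [s, 1.0]])
def V(ta, tb):
    return R(ta - np.pi / 2) @ P(2.0 / np.tan(ta - tb)) @ R(ta - np.pi / 2)

def two_mode(A, B):
    """Single-mode symplectics A (mode 1), B (mode 2); order (q1,q2,p1,p2)."""
    M = np.zeros((4, 4))
    idx = ([0, 2], [1, 3])
    for k, X in enumerate((A, B)):
        for i in range(2):
            for j in range(2):
                M[idx[k][i], idx[k][j]] = X[i, j]
    return M

O = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)  # 50:50 beam splitter
B12 = np.kron(np.eye(2), O)
B21 = B12.T

def CZ(g):  # p1 -> p1 + g*q2, p2 -> p2 + g*q1
    M = np.eye(4)
    M[2, 1] = M[3, 0] = g
    return M

for g in (-1.0, 0.5, 1.0, 2.0):  # the decomposition above
    assert np.allclose(B21 @ two_mode(P(-g), P(g)) @ B12, CZ(g))

chi = np.arctan(2.0)             # table angles for CZ(+1) and for SWAP
V2 = B21 @ two_mode(V(np.pi/2, np.pi/2 + chi),
                    V(np.pi/2, np.pi/2 - chi)) @ B12
assert np.allclose(V2, CZ(1.0))
SWAP = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]))
assert np.allclose(B21 @ two_mode(V(0.0, np.pi/2), V(np.pi/2, 0.0)) @ B12, SWAP)
print("two-mode decomposition and table angles verified")
\end{verbatim}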
In Table~\ref{tab:twoModeMeasurementAngles} we also include the CV SWAP gate that exchanges states across the two modes, $\text{SWAP} \ket{\psi} \otimes \ket{\phi} = \ket{\phi} \otimes \ket{\psi}$. This includes the case when those states are GKP encoded.
We also review a method of disentangling the two-mode gate~\cite{alexander2016flexible}, allowing more versatile use of the QRL. When the single-mode Gaussian unitaries $\op{V}$ in Eq.~\eqref{VtwoMode_QRLcircuit} are identical, they commute with the beam splitters, which then cancel:
\begin{equation}\label{}
\begin{split}
\Qcircuit @C=1.4em @R=1.5em {
&\qw&\gate{V} & \bsbal{1} &\qw \\
&\bsbal{-1}& \gate{V} & \qw &\qw \\
}
\,
\end{split}
=
\begin{split}
\Qcircuit @C=1.4em @R=1.5em {
&\qw & \bsbal{1}&\gate{V} &\qw \\
&\bsbal{-1} & \qw& \gate{V} &\qw \\
}
\,
\end{split}
=
\begin{split}
\Qcircuit @C=1.4em @R=1.5em {
&\gate{V} &\qw \\
& \gate{V} &\qw \\
}
\end{split}
\; .
\end{equation}
Thus, one can implement two identical single-mode GKP gates simultaneously on both modes with the two-mode macronode gadget. Several useful examples are given in Table~\ref{tab:twoModeMeasurementAngles}.
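This commutation is easy to confirm at the symplectic level: in the quadrature ordering $(q_1, q_2, p_1, p_2)$, applying the same single-mode symplectic to both modes is a Kronecker product with the identity, which commutes with the block-diagonal beam splitter. A minimal sketch, with the same assumed conventions as above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
t, s, m = rng.uniform(0, np.pi), rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
Rt = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
V = Rt @ np.diag([s, 1.0 / s]) @ np.array([[1.0, 0.0], [m, 1.0]])  # generic symplectic

O = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
B = np.kron(np.eye(2), O)    # beam splitter, order (q1, q2, p1, p2)
VV = np.kron(V, np.eye(2))   # the same V on both modes

assert np.allclose(B @ VV, VV @ B)
print("identical single-mode gates slide through the beam splitter")
\end{verbatim}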
\subsection{Teleportation-based GKP error correction}
In addition to the measurement-basis-dependent gates discussed above, the macronode gadget in Eq.~\eqref{circuit:macrnonodegadget} also applies a quantum operation $\op{A}(\psi,\phi)$ that depends on the states $\ket{\psi}$ and $\ket{\phi}$~\cite{Walshe2020}. For an ideal macronode-based CV cluster state (with no qubit encoding), $\ket{\psi} = \pket{0}$ and $\ket{\phi} = \qket{0}$, which is depicted as
\begin{equation}
\begin{split}\label{KnillCartoonQP}
\centering
\includegraphics[width=.65\columnwidth]{KnillCartoonPQ}
\end{split}\, ,
\end{equation}
in the schematics of Fig.~\ref{fig:macronodeWireGraph}.
This choice generates a maximally entangled EPR pair across two neighboring macronodes~\cite{Walshe2020}, giving the trivial operation
\begin{align} \label{eq:teleportedidentity}
\op{A}(0_p,0_q) = \op{I}
\,
\end{align}
that forms the backbone of standard CV teleportation.
GKP error correction can be performed automatically when the states in a macronode gadget are themselves GKP states~\cite{Walshe2020}. However, since beam splitters introduce additional squeezing that modifies the spacing of a periodic wavefunction, the appropriate states are not square-lattice $\ket{+_\GKP}$ states. Rather, they are Fourier-invariant \emph{qunaught} states~\cite{Walshe2020} (also called sensor states~\cite{Duivenvoorden2017}),
with wavefunction period $T=\sqrt{2\pi}$ in both position and momentum,
\begin{equation} \label{qunaught}
\ket{\varnothing} \coloneqq \int ds \; \Sha_{\sqrt{2\pi}}(s)\qket{s}
=\int dt \; \Sha_{\sqrt{2\pi}}(t)\pket{t}
\, ,
\end{equation}
with the empty-set symbol~$\varnothing$ and `naught' in the name indicating that the state carries no quantum information~\cite{Walshe2020}.
Nevertheless, combining two qunaught states on a beam splitter produces an encoded Bell pair of square-lattice GKP qubits~\cite{Walshe2020},
\begin{equation} \label{eq:GKPBellPair}
\op{B} _{12}\ket{\varnothing} \otimes \ket{\varnothing}
= \tfrac{1}{\sqrt{2}}\big( \ket{0_\GKP} \otimes \ket{0_\GKP} + \ket{1_\GKP} \otimes \ket{1_\GKP} \big)
\, .
\end{equation}
A GKP Bell pair across neighboring macronodes is produced by preparing both $\ket{\phi}$ and $\ket{\psi}$ in the macronode gadget, Eq.~\eqref{circuit:macrnonodegadget}, in qunaught states:
\begin{equation}
\begin{split}\label{KnillCartoonQunaught}
\centering
\includegraphics[width=.65\columnwidth]{KnillCartoonQnaught}
\end{split}\, .
\end{equation}
Teleportation through an encoded Bell pair is the foundation for Knill-style GKP error correction~\cite{Knill2005}, and indeed this choice yields the quantum operation
\begin{align} \label{eq:teleportedGKPproj}
\op{A}(\varnothing,\varnothing) = \sqrt{ \frac{\pi}{ 2 } } \op{\Pi}_\GKP
\, ,
\end{align}
with $\op{\Pi}_\GKP$ being the projector onto the square-lattice GKP subspace [Eq.~\eqref{eq:GKPproj}]. From the Kraus operator in Eq.~\eqref{genkraus}, we see that this allows a GKP Clifford gate followed by GKP error correction to be performed in the same step (\emph{i.e.},~teleportation through a single macronode).
The two-mode macronode gadget works identically, since the quantum operations $\op{A}(\psi,\phi)$ are local to each of the output modes as shown in the two-mode Kraus operator, Eq.~\eqref{eq:twomodekraus_QRL}. Preparing all four ancilla states in the gadget in $\ket{\varnothing}$ implements the two-mode Gaussian gate determined by $\vec\theta$ and $\vec m$ followed by GKP error correction on each output mode.
\subsection{Magic states}
\label{subsec:magicstates}
Extending the above described operations to a universal gate set requires a logical non-Clifford element. Given access to high-fidelity Clifford gates, a universal set of operations is attainable by probabilistically distilling a high-quality magic state from multiple noisier copies via Clifford operations~\cite{bravyi2005universal}.
Two commonly considered magic states are the $\ket{+T}$ state (stabilized under a Clifford gate that permutes the positive Pauli axes),
\begin{align}
\ket{+T}
&= \biggl(\frac{\sqrt{3}+3}{6} \biggr)^{\frac{1}{2}} \ket{0} + e^{i \pi/4} \biggl( \frac{2-\sqrt{3}}{6} \biggr)^{\frac{1}{4}} \ket{1}
\, ,
\end{align}
and the $\ket{+H}$ state (stabilized under the Hadamard gate),
\footnote{Both of these magic states and their Clifford equivalents can be used to deterministically teleport non-Clifford gates (given Clifford resources). Unfortunately, they go by different names in the literature as do the gates they teleport. We follow the conventions in Ref.~\cite{bravyi2005universal}.}
\begin{align}
\ket{+H}
&= \cos\tfrac{\pi}{8}\ket{0} + \sin\tfrac{\pi}{8}\ket{1}
\, .
\end{align}
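These two states sit on the $(1,1,1)/\sqrt{3}$ Bloch axis and on the Hadamard axis, respectively, which is quick to confirm numerically (a sketch):
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Z) / np.sqrt(2)

ketT = np.array([np.sqrt((np.sqrt(3) + 3) / 6),
                 np.exp(1j * np.pi / 4) * ((2 - np.sqrt(3)) / 6) ** 0.25])
ketH = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)

bloch = lambda v: np.real([v.conj() @ M @ v for M in (X, Y, Z)])
print(bloch(ketT))                  # -> (1,1,1)/sqrt(3), the +T axis
assert np.allclose(H @ ketH, ketH)  # |+H> is a +1 eigenstate of Hadamard
\end{verbatim}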
These states are distillable with ideal Clifford operations from noisy copies with fidelities no worse than 0.853~\cite{reichardt2005quantum} and 0.8273~\cite{jochym2013robustness}, respectively.
Distillation of magic states with imperfect (but high-fidelity) Clifford gates is possible but requires copies with higher fidelities.
An experimentally convenient method for probabilistically generating magic states of a desired fidelity in the GKP code was introduced in Ref.~\cite{Baragiola2019}. Performing GKP error correction on the vacuum state (or a low-photon-number thermal state)
produces a heralded, distillable GKP magic state with high probability.\footnote{The analysis in Ref.~\cite{Baragiola2019} focuses on vacuum and thermal states, but many other Gaussian states of high purity will also yield heralded, distillable magic states. Notable exceptions are position- or momentum-squeezed vacuum states, which are often used in combination with GKP states to produce hybrid cluster states for use in fault-tolerant architectures~\cite{menicucci2014fault, bourassa2021blueprint, larsen2021architecture}. These Gaussian states instead yield undistillable encoded states that are close to Pauli eigenstates~\cite{pantaleoni2021subsystem}, so a different type of Gaussian state must be used instead.
}
In a cluster-state setting, there are two straightforward ways to implement this protocol using state injection. The first is to inject externally produced noisy magic states
using the above or some other method. The second is to inject Gaussian states (notably, vacuum states) and then project them into the GKP code space through teleportation as described in Eq.~\eqref{KnillCartoonQunaught}.
In either case, homodyne measurements on surrounding cluster-state modes provide the GKP Clifford machinery to perform distillation as needed within the cluster state itself.
An intriguing alternative approach that also yields a distillable magic state is to measure half of an encoded GKP Bell pair using heterodyne detection~\cite{Baragiola2019}, which projects that mode onto the coherent-state basis.
We modify this approach for streamlined implementation in macronode cluster states. We make use of two facts. First,
beam splitters acting before projections onto coherent states are equivalent to projections onto different coherent states,
\begin{align}
\bra{\alpha_1} \otimes \bra{\alpha_2} \hat{B}_{12} =
\bra{ \alpha_+ } \otimes
\bra{ \alpha_- }
\, ,
\end{align}
with $\alpha_\pm \coloneqq \tfrac{1}{\sqrt{2}} (\alpha_1 \pm \alpha_2)$. With this, beam splitters can be effectively removed by postprocessing of outcomes.
Second, performing heterodyne on both modes of the single-mode macronode gadget, Eq.~\eqref{circuit:macrnonodegadget}, disentangles the input mode, leaving the final two modes in a GKP Bell pair, Eq.~\eqref{eq:GKPBellPair}:
\begin{align}\label{circuit:magicmaker}
\Qcircuit @C=1.75em @R=1.75em
{
&\lstick{\bra{\alpha_a}} & \bsbal{1} & \qw & \rstick{\text{(in)}} \qw[-1] & \\
&\lstick{\bra{\alpha_b}} & \qw & \bsbal{1} & \rstick{\ket{\varnothing}} \qw \\
&\lstick{\text{(out)}} & \qw & \qw & \rstick{ \ket{\varnothing} } \qw
}\,
%
\quad
\raisebox{-1.7em}{~=}
\qquad
%
\Qcircuit @C=1.75em @R=1.75em
{
&\lstick{\bra{\alpha_+}} & \qw & \qw & \rstick{\text{(in)}} \qw[-1] & \\
&\lstick{\bra{\alpha_-}} & \qw & \bsbal{1} & \rstick{\ket{\varnothing}} \qw \\
&\lstick{\text{(out)}} & \qw & \qw & \rstick{ \ket{\varnothing} } \qw
}\,
\quad
\!\!\raisebox{-2em}{.}
\end{align}
Since one of these modes is measured by heterodyne detection, this implements the conditional GKP magic-state approach described above.
From a resource perspective, this approach is equivalent to introducing a vacuum state into the cluster state, since heterodyne can be realized via dual homodyne detection in which vacuum enters through an empty beam-splitter port.
This technique has broader application, too. When the input state is itself half of a GKP Bell pair, two GKP magic states are produced, which are generally different but both distillable.
In the context of the two-mode macronode gadget, Eq.~\eqref{twoModeCircuit_QRL}, performing heterodyne detection on all measured modes produces up to four GKP magic states---one at each output and one for each input mode
that is part of a GKP Bell pair with another mode of the cluster state.
The generated GKP magic states are nearby in the cluster state, potentially making their Clifford-circuit distillation convenient.
The suitability of any particular method of including magic states in a cluster-state framework will depend on the details of the architecture. Our novel contribution here is to illustrate how to implement the methods proposed in Ref.~\cite{Baragiola2019}, which involve either injection of the vacuum or heterodyne detection, in a way that dovetails naturally with QRL-based cluster states.
\section{GKP-qubit gate noise}
\label{sec:squeezingthresholds}
The success of error correction and ultimately the reliability of the computation depend on the amount of noise in the resources used as well as the machinery employed to handle this noise.
The Clifford resources for ideal GKP quantum computing with macronode-based cluster states generally include
ideal 0-momentum, 0-position, and GKP qunaught states~\cite{Walshe2020,tzitrin2021fault}. These ideal resources, which are combined using beam splitters and then measured, allow multiple GKP Clifford gates to be applied consecutively without introducing additional noise. Physical approximations to these resource states---\emph{i.e.},~momentum- and position-squeezed vacuum states and approximate GKP qunaught states, respectively---have finite energy, so teleported GKP Clifford gates are accompanied by additional noise~\cite{Walshe2020}.
In this section, we quantify how the CV noise inherent in a macronode cluster state manifests as logical noise on GKP-encoded qubits. Provided the qubit-level noise is low enough, concatenation with qubit codes can reduce the effective noise as low as required
for any particular quantum computation.
Following the method introduced in Ref.~\cite{menicucci2014fault} for noise analysis, we consider teleported CV gates followed by GKP error correction as noisy qubit gates. As we have shown above, the GKP Cliffords in Table~\ref{tab:twoModeMeasurementAngles} followed by GKP error correction can be performed in a single step.
To quantify the performance of these error-corrected gates, we calculate the qubit-level \emph{error rate} associated with each gate after the error correction. This is the probability that one or more of the GKP error correction steps fails, resulting in a qubit-level error.
Having done so, we can abstract away the CV level entirely and treat these gates as noisy qubit gates whose error rates may be compared to the fault-tolerance thresholds of typical quantum error correction codes using qubits to make claims about the level of squeezing required for fault tolerance (before any other sources of error are considered). While it is possible to choose a qubit-level code and numerically derive a squeezing threshold for that specific code~\cite{tzitrin2021fault,larsen2021architecture,Noh2021lowoverhead}, we choose to remain agnostic about the qubit-level code used and instead focus on the gate error rates associated with different levels of squeezing in the approximate states.
As discussed in Sec.~\ref{subsec:magicstates}, supplementing fault-tolerant GKP Clifford operations with easy-to-produce vacuum or thermal states can be used to make GKP magic states required for universality~\cite{Baragiola2019}. The quality requirements for the Clifford gates are much more stringent than for noisy magic states since the latter can be improved using distillation if the Clifford circuits are good enough. For this reason, we focus on the noise in Clifford gate implementations.
\subsection{Gaussian blurring channel}
\label{subsec:gaussblur}
In what follows, we will make liberal use of the (incoherent) Gaussian blurring channel
\begin{align}\label{GaussianNoiseChannel}
\mathcal{E}_{\delta^2}
%
&\coloneqq \frac{1}{\pi \delta^2}\int d^2 \alpha \, e^{-|\alpha|^2/\delta^2 } \op{D}( \alpha ) \odot \op{D}^\dagger ( \alpha)
\, ,
\end{align}
which applies random displacements whose amplitudes are drawn independently from a zero-mean Gaussian with variance $\delta^2$.
This channel will be used to conceptually describe the initial states in the analysis---noisy GKP and qunaught states---as well as the noise-accumulation effects of each step in the measurement-based quantum computation.
The action of the channel on a state is the Gaussian weighted average of displacements; in the Wigner representation, this is simply a Gaussian blurring in each quadrature~\cite{mensen2021}.
Consider a multimode Gaussian state with zero mean and Wigner covariance matrix~$\mat \Sigma$. The elements of this covariance matrix are $\Sigma_{jk} = \tfrac 1 2 \avg{\{\op x_j, \op x_k\}}$, where $\opvec x = (\op q_1, \dotsc, \op q_N, \op p_1, \dotsc, \op p_N)^\tp$, and $\{ \cdot, \cdot \}$ is the anticommutator. This is the same ordering convention for the quadratures that is used, for instance, in Ref.~\cite{menicucci2014fault}. Applying this channel~$\mathcal{E}_{\delta^2}$ independently on all~$N$ modes produces a new Gaussian state with the same mean and broader Wigner covariance~$\mat \Sigma + \delta^2 \mat \id_{2N}$, where $\mat \id_{2N}$ is the ${2N\times 2N}$ identity matrix.
For a single mode, we write the action of~$\mathcal{E}_{\delta^2}$ as a simple map on the Wigner covariance matrix:
\begin{align}
\label{eq:bluroncovmat}
\mat \Sigma
+
\begin{bmatrix}
\delta^2 & 0 \\
0 & \delta^2
\end{bmatrix}
\xmapsfrom{~\mathcal{E}_{\delta^2}~}
\mat \Sigma
\, .
\end{align}
Note that this evolution is right to left.
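A quick Monte Carlo check of this covariance update is given below. It is a sketch that assumes the convention $\op q = (\op a + \op a^\dagger)/\sqrt{2}$, for which vacuum has measured variance $\tfrac 1 2$ and $\op D(\alpha)$ shifts $(q, p)$ by $\sqrt{2}\,(\mathrm{Re}\,\alpha, \mathrm{Im}\,\alpha)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
delta2, n = 0.05, 200_000
# alpha from the density exp(-|alpha|^2/delta^2)/(pi*delta^2):
# Re and Im parts are independent Gaussians with variance delta^2/2
re_im = rng.normal(scale=np.sqrt(delta2 / 2.0), size=(n, 2))
shifts = np.sqrt(2.0) * re_im       # induced (q, p) displacements
print(np.cov(shifts.T))             # ~ delta^2 * identity, as in the map above
\end{verbatim}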
\subsection{Representing approximate GKP states and teleportation-based error correction}
\label{subsec:repapproxGKPteleportation}
Ideal GKP states are unique in that their Wigner functions are a weighted sum of delta spikes---\emph{i.e.},~individual Gaussians whose covariance approaches the zero matrix (while remaining positive definite). Formally, we can write this covariance matrix as $0^+ \mat \id_2$, where $0^+$ is an infinitesimal positive constant. The reason such a spike is allowed in a Wigner function---despite violating the Heisenberg uncertainty principle when considered on its own---is that it is part of an infinite ensemble of regularly spaced spikes that form the GKP grid~\cite{GKP,mensen2021}. Finite approximations to these states can have spikes as narrow as allowed by the envelope of the state~\cite{mensen2021}, with larger envelopes allowing for smaller spikes.
Physical approximations to ideal GKP states are described by replacing each $\delta$-function tooth in the ideal wavefunctions (position or momentum) with a sharp Gaussian and then damping the comb with a broad Gaussian envelope~\cite{GKP, matsuura2019, mensen2021}.
For our study of logical error rates and corresponding levels of squeezing, we consider high-quality GKP states. In this regime, the analysis is simplified by ignoring the broad envelope (or broadening it out to infinity) and considering only the noise on the individual spikes~\cite{Glancy2006,menicucci2014fault, Noh2020, fukui2021all, Hillmann2021performance}.\footnote{This can be modelled formally as resulting approximately from a high-quality physical GKP state (\emph{i.e.},~one that has a finite Gaussian envelope that limits the total energy of the state) and twirling it by the GKP stabilizer group~\cite{noh2020fault,mensen2021}.
This leaves the logical information invariant but blurs out the envelope (in the Wigner picture) to the point where it is approximately constant.} The result is a Gaussian-blurred version of an ideal GKP state, \emph{i.e.},~$\mathcal{E}_{\delta^2}(\outprod{\psi_\GKP}{\psi_\GKP})$, which gives a blurred version of the original state,\footnote{Normalization for ideal GKP states is a subtle issue~\cite{GKP,Baragiola2019,noh2020fault,Walshe2020,mensen2021}. Whatever norm is chosen for the ideal state is preserved by this channel.} whose Wigner-function spikes have covariance matrix
\begin{align} \label{eq:CovarianceSpike}
\mat{\eta} = \begin{bmatrix} \delta^2 & 0 \\ 0 & \delta^2 \end{bmatrix}
\,
\end{align}
that we refer to as the \emph{error matrix} for the approximate GKP state. This is because it determines the probability of a logical error occurring after ideal GKP error correction is performed on the state~\cite{Glancy2006,menicucci2014fault}.
The diagonal elements of~$\mat \eta$ give the variance along each quadrature that would be measured in an experiment, $(\sigma^2_{\text{spike}, q},\sigma^2_{\text{spike}, p})$. We call these \emph{measured variances} to distinguish them from wavefunction variances such as $\Delta^2$ and $\kappa^2$ as defined in Ref.~\cite{GKP} for pure approximate GKP states.%
\footnote{For ease of comparison to other works, we note that these measured variances would also appear in the quadrature statistics of pure approximate GKP states with $\Delta^2 = 2\sigma^2_{\text{spike}, q}$ and $\kappa^2 = 2\sigma^2_{\text{spike}, p}$.
Our input GKP states, which are slightly blurred (and thus, mixed) ideal states, have the same measured quadrature statistics for each spike as pure approximate GKP states with $\Delta^2 = \kappa^2 = 2 \delta^2$.}
The off-diagonal elements of~$\mat \eta$ describe correlations between the two quadratures (\emph{i.e.,}~the covariance of the two).
For the blurred ideal states under consideration, the variances are
$\sigma^2_{\text{spike}, q} = \sigma^2_{\text{spike}, p} = \delta^2$, and there are no correlations between them (zeros off the diagonal), which gives $\mat \eta$ the form shown.
In figures and discussion, we present the measured variances of the resources used for quantum computation%
---the input GKP qubit and qunaught states comprising the cluster state---in decibels, with the measured vacuum variance of $\frac 1 2$ taken as the reference value. This is a standard way to characterize the quality of the squeezing in both types of states:
\begin{align}\label{squeezingfactor}
%
(\delta^2)_{\text{dB}}
= -10\log_{10} ( 2 \delta^2 ).
\end{align}
We choose a convention for which $\delta^2 < \tfrac 1 2$ (squeezed below vacuum variance) corresponds to positive decibel values.
Using the noise model described above, we replace the pure states $\ket{\psi}$ and $\ket{\phi}$ in the macronode gadget, Eq.~\eqref{circuit:macrnonodegadget}, with blurred qunaught states, $\mathcal{E}_{\delta^2}(\outprod{\varnothing}{\varnothing})$.
In this more general case, the input state evolution through the circuit is not given by the Kraus-operator map in Eq.~\eqref{eq:Krausmap}; instead, it is described by the map%
\footnote{The sandwiching of $\op{\Pi}_\GKP$ by two instances of $\mathcal{E}_{\delta^2}$ is the result of teleportation through the noisy GKP Bell pair~\cite{fukui2021all}. It is the mixed-state version of the Kraus operator discussed in Ref.~\cite{Walshe2020} that implements GKP error correction when used with pure approximate qunaughts, namely $e^{-\beta \op n} \op{\Pi}_\GKP e^{-\beta \op n}$. When mixed qunaughts (as discussed here) are used instead, the damping operators $e^{-\beta \op n}$ become quantum channels~$\mathcal{E}_{\delta^2}$. More precisely, twirling $e^{-\beta \op n}$ by the GKP stabilizers gives the channel~$\mathcal{E}_{\delta^2}$~\cite{mensen2021}, with $\beta = 2\delta^2$ under our assumption of high-quality states ($\delta^2 \ll 1$). In a related (but not identical) setting to what we consider here, Ref.~\cite{Hillmann2021performance} compared average fidelity for teleportation error correction using these two types of noisy GKP states and found no differences.}%
\begin{align}\label{errorChannel}
\op{\rho}_\text{out} =
\op D_{\text{c}} \mathcal{E}_{\delta^2} \big[ \op{\Pi}_\GKP \mathcal{E}_{\delta^2}(\op{D}_\mu \op{V} \op{\rho}_\text{in} \op{V}^{\dagger} \op{D}^\dagger_\mu ) \op{\Pi}_\GKP \big]
\op D_{\text{c}}^\dag
\, ,
\end{align}
where $\op{V}$ stands for the intended gate $\op{V}(\theta_a,\theta_b)$ to be applied and $\op{D}_\mu$ for the outcome-dependent displacement $\op{D}(\mu_{a,b})$. The final displacement $\op D_{\text{c}}$ implements a possible logical Pauli correction $\bar{P}_{\text{c}}$ at the very end of the protocol, which is determined by a decoder using the measurement outcomes~$\mu_{a,b}$.%
\footnote{
The fact that only logical corrections are needed is a feature of teleportation-based~\cite{Walshe2020} (Knill-style~\cite{Knill2005a}) error correction. The state returns to the GKP subspace (followed by blurring) via the teleportation, but logical errors can be introduced in the process. These are heralded by the measurement outcomes and corrected by applying an appropriate~$\op D_{\text{c}}$. When this correction is incorrectly determined, a logical Pauli error occurs.}%
\subsection{Noise in GKP macronode-based quantum computing}
\label{subsec:noisecalc}
We now focus on using the QRL macronode gadgets for GKP quantum computing, where the input state~$\op \rho_{\text{in}}$ is itself an approximate GKP state from the output of the previous macronode gadget, $\op \rho_{\text{out,previous}}$.
Since displacements commute with~$\mathcal{E}_{\delta^2}$, we can postpone $\mathcal E_{\delta^2}$ to the very end and collect $\op D_\text{c} \op \Pi_{\GKP} \op D(\mu_{a,b})$ into a single operation that describes \emph{ideal} GKP error correction.
With this change of ordering, we schematically describe the map implemented in Eq.~\eqref{errorChannel} as a sequence of transformations on the input state $\op{\rho}_\text{in}$,
\begin{align}\label{eq:rhoevolution}
\op \rho_{\text{out}}
\xmapsfrom{~\mathcal{E}_{\delta^2}~}
\op \rho_3
%
\xmapsfrom{~\text{GKP EC}~}
\op \rho_2
\xmapsfrom{~\mathcal{E}_{\delta^2}~}
\op \rho_1
\xmapsfrom{~\op{V}(\theta_a,\theta_b)~}
\op \rho_{\text{in}}
\, ,
\end{align}
where ``GKP~EC'' stands for $\op D_{\text{c}} \op \Pi_{\GKP} \op D(\mu_{a,b})$ and includes the final logical Pauli correction, if required.
This sequence is read right to left, with $\op \rho_{1,2,3}$ mathematically representing the state at various points through the evolution.
Although we have broken it down into four pieces for analysis, the entire transformation $\op \rho_{\text{in}} \mapsto \op \rho_{\text{out}}$ actually happens all at once
during teleportation through a single macronode (which includes the final displacement correction). That is, this entire operation is performed in a \emph{single step} of teleportation. We stress this fact to contrast with prior protocols in which the gates and GKP error correction are performed separately~\cite{Larsen2020noiseAnalysis,larsen2021architecture}.
Since the input state~$\op \rho_{\text{in}}$ is itself an approximate GKP state,
we can study the noise properties of macronode gadget evolution
by evolving the input-state error matrix under an analogous transformation,
\begin{align}\label{eq:etaevolutionsimple}
\mat \eta_{\text{out}}
\xmapsfrom{~\mathcal{E}_{\delta^2}~}
\mat \eta_3
\xmapsfrom{~\text{GKP~EC}~}
\mat \eta_2
\xmapsfrom{~\mathcal{E}_{\delta^2}~}
\mat \eta_1
\xmapsfrom{~\op{V}(\theta_a, \theta_b)~}
\mat \eta_{\text{in}}
\, ,
\end{align}
with the understanding that $\mat \eta_s$ is the corresponding error matrix for $\op \rho_s$ for each subscript~$s$ in Eq.~\eqref{eq:rhoevolution}.
Also, note that displacements have no effect on error matrices, which allows us to reuse the same notation for them as above. In what follows, we illustrate the effect of each of these operations on the error matrix in order to determine the success probability for a variety of GKP logical Clifford gates, which will ultimately depend on the amount of initial squeezing~$(\delta^2)_{\text{dB}}$.
The Gaussian unitary operation $\op{V}(\theta_a, \theta_b)$, which implements a GKP Clifford gate using the measurement angles in Table~\ref{tab:twoModeMeasurementAngles}, updates the error matrix according to a symplectic matrix $\mat{S}_{\op{V}}$ representing the Heisenberg action of the gate on the quadratures~\cite{menicucci2014fault}. Thus,
\begin{align}
\mat \eta_1
&=
\mat{S}_{\op{V}}
\mat \eta_{\text{in}}
\mat{S}_{\op{V}}^\tp
\, .
\end{align}
The effect of~$\mathcal{E}_{\delta^2}$ is additive on the error matrix, Eq.~\eqref{eq:bluroncovmat}, so
\begin{align}
\mat \eta_2
&=
\mat \eta_1
+
\begin{bmatrix}
\delta^2 & 0 \\
0 & \delta^2
\end{bmatrix}
\, .
\end{align}
Ideal GKP error correction produces an output state with delta-function spikes. Thus, formally, $\mat \eta_3 \to 0^+ \mat \id$. This is a good time to recall that this error matrix is never realized in practice and is merely a mathematical tool used to assist with the calculation. Finally, the second $\mathcal{E}_{\delta^2}$ gives a fixed, final (and physical) error matrix of
\begin{align}
\mat \eta_{\text{out}}
&=
\begin{bmatrix}
\delta^2 & 0 \\
0 & \delta^2
\end{bmatrix}
\, .
\end{align}
After the whole procedure, the noise properties of the output state are identical to those of the qunaught states used for error correction. However, during the error correction, logical errors may have been introduced, and it is these errors that constitute the logical-qubit gate noise.
Noting that it is $\op \rho_2$ that undergoes GKP error correction, $\mat \eta_2$ can be used to determine its probability of success. Since this error matrix also depends on $\op V(\theta_a, \theta_b)$, we rename $\mat \eta_2$ as
\begin{align}
\mat \eta_{\op V}
\coloneqq
\mat \eta_2
=
\mat{S}_{\op V}
\mat \eta_{\text{in}}
\mat{S}_{\op V}^\tp
+
\begin{bmatrix}
\delta^2 & 0 \\
0 & \delta^2
\end{bmatrix}
\, .
\end{align}
This notation will let us differentiate between the error matrices for different gates.
We also need the analogous result for a two-mode gate, corresponding to the output of Eq.~\eqref{eq:twomodekraus_QRL}. We write $\mat \eta^{(2)}$ for a general two-mode error matrix, which is the covariance matrix for a single spike in the Wigner function of a two-mode GKP state, with respect to the quadrature ordering~$(\op q_1, \op q_2, \op p_1, \op p_2)$. An entirely analogous procedure to the single-mode case gives the relevant error matrix
\begin{align}
\mat \eta_{\op V}^{(2)}
\coloneqq
\mat \eta_2^{(2)}
=
\mat{S}_{\op V}
\mat \eta_{\text{in}}^{(2)}
\mat{S}_{\op V}^\tp
+
\begin{bmatrix}
\delta^2 & 0 & 0 & 0\\
0 & \delta^2 & 0 & 0\\
0 & 0 & \delta^2 & 0\\
0 & 0 & 0 & \delta^2
\end{bmatrix}
\, .
\end{align}
The error matrices for the single-mode gates in Table~\ref{tab:twoModeMeasurementAngles} are
\begin{align}\label{singleModeErrorMats}
\mat{\eta}_{\op{I}} = \mat{\eta}_{\op{F}}
&=\begin{bmatrix}
2 \delta^2 & 0 \\
0 & 2 \delta^2
\end{bmatrix}
\, , \quad \quad
%
\mat{\eta}_{\op{P}(\pm 1)}
=\begin{bmatrix}
2 \delta^2 & \delta^2 \\
\delta^2 & 3 \delta^2
\end{bmatrix}
\, .
\end{align}
For the two-mode gate $\CZ(\pm 1)$ that completes the Clifford generating set, the error matrix is
\begin{align}\label{CzErrorMats}
\mat{\eta}_{\CZ(\pm 1)}
&=\begin{bmatrix} 2 \delta^2 & 0 & 0 & \delta^2 \\
0 & 2 \delta^2 & \delta^2 & 0 \\
0 & \delta^2 & 3 \delta^2 & 0 \\
\delta^2 & 0 & 0 & 3 \delta^2 \\
\end{bmatrix}
\, .
\end{align}
The error matrix for the SWAP gate is
\begin{align}\label{SwapErrorMats}
\mat{\eta}_\text{SWAP}
&=\begin{bmatrix} 2 \delta^2 & 0 & 0 & 0 \\
0 & 2 \delta^2 &0 & 0 \\
0 &0 & 2 \delta^2 & 0 \\
0 & 0 & 0 & 2 \delta^2 \\
\end{bmatrix}
\, .
\end{align}
For comparison, we also consider the separable gates. The error matrices for two-mode identity and identical Fourier transforms on both modes are the same as that for the SWAP gate in Eq.~\eqref{SwapErrorMats}:
\begin{align}
\mat{\eta}_{\op I \otimes \op I}&=
\mat{\eta}_{\op F \otimes \op F}=
\mat{\eta}_\text{SWAP}
\, .
\end{align}
Finally, the error matrices for identical unit-weight shears on both modes are
\begin{align}
\mat{\eta}_{ \op P(\pm 1) \otimes \op P(\pm 1)}&=\begin{bmatrix} 2 \delta^2 & 0 & \delta^2 & 0 \\
0 & 2 \delta^2 & 0 & \delta^2 \\
\delta^2 & 0 & 3 \delta^2 & 0 \\
0 & \delta^2 & 0 & 3 \delta^2 \\ \end{bmatrix}
\, .
\end{align}
We will use these error matrices to determine gate error rates as a function of $(\delta^2)_{\text{dB}}$ in the next subsection.
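These error matrices follow mechanically from the update rule above with $\mat\eta_{\text{in}} = \delta^2 \mat\id$; the sketch below reproduces them (shown for the $+1$ sign choices, with symplectic matrices in one standard convention):
\begin{verbatim}
import numpy as np

d2 = 1.0   # work in units of delta^2

def eta(S):
    """S eta_in S^T + delta^2 I, with input error matrix eta_in = delta^2 I."""
    return d2 * (S @ S.T + np.eye(S.shape[0]))

F = np.array([[0.0, -1.0], [1.0, 0.0]])        # Fourier (pi/2 rotation)
Pp = np.array([[1.0, 0.0], [1.0, 1.0]])        # shear P(+1)
CZ = np.eye(4)                                 # CZ(+1), order (q1,q2,p1,p2)
CZ[2, 1] = CZ[3, 0] = 1.0
SWAP = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]))

print(eta(np.eye(2)))   # [[2,0],[0,2]]           eta_I
print(eta(F))           # [[2,0],[0,2]]           eta_F
print(eta(Pp))          # [[2,1],[1,3]]           eta_P(+1)
print(eta(CZ))          # matches eta_CZ above with g = +1
print(eta(SWAP))        # 2 * identity            eta_SWAP
\end{verbatim}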
\subsection{Logical gate error rates and fault tolerance}
\label{results}
The probability that a logical Pauli error is introduced during error correction is a function of the noise in the input GKP data qubit, the noise in the qunaught states that comprise the macronode gadget, and which Clifford is being implemented. The error matrices above capture all of these effects on each Wigner-function spike of a GKP state. For the square-lattice GKP code, the probability that a Pauli-$X$ or a Pauli-$Z$ error is introduced during error correction is determined by the leakage of the Wigner-function GKP spike out of its unit cell in $q$ and in $p$, for each half of error correction~\cite{menicucci2014fault}. These logical error probabilities are given by
\begin{align}\label{probLogicalErr}
P_{\text{err,$X|Z$}}=1 - P_{\text{succ,$q|p$}},
\end{align}
where $\cdot | \cdot$ represents alternatives, respectively, on each side of the equation.
The probability of success (no error during that half of error correction) is given by
\begin{align}\label{probSuccMode}
P_{\text{succ,$q|p$}} = \erf\left(\sqrt{\frac{\pi}{8 \, \sigma^2_{\text{spike},q|p}}}\right),
\end{align}
where $\sigma^2_{\text{spike},q|p}$ is the measured variance of the GKP spike,
either $q$ or $p$~\cite{menicucci2014fault}.
For each gate, we find these spike variances from the diagonal elements of the gate's error matrix (which are all integer multiples of the baseline noise $\delta^2$ in the input GKP and qunaught states).
To consider gate errors at the logical-qubit level, we are interested in the probability that at least one Pauli error occurs,
\begin{align}\label{probErr}
P_{\text{err}}= 1 - P_{\text{succ}},
\end{align}
where $P_\text{succ}$ is the total success probability, given by
\begin{align}
P_\text{succ} = \prod_j P^{(j)}_{\text{succ},q} P^{(j)}_{\text{succ},p}
\, ,
\end{align}
with $j$ iterating over the modes being considered (either one or two modes, depending on whether we are considering single- or two-mode gates).
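Putting the pieces together, the squeezing required for a target error rate follows from a one-dimensional root find. The sketch below does this for $\CZ(\pm 1)$, whose error matrix has spike variances $(2, 2, 3, 3)\,\delta^2$ along its diagonal, and recovers the 11.9~dB and 13.7~dB values reported below in Table~\ref{thresholdComparison}:
\begin{verbatim}
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def gate_error(dB, spike_diag):
    """Error rate for spike variances given as multiples of delta^2."""
    d2 = 10.0 ** (-dB / 10.0) / 2.0   # invert the dB convention above
    p_succ = np.prod(erf(np.sqrt(np.pi / (8.0 * d2 * np.asarray(spike_diag)))))
    return 1.0 - p_succ

cz_diag = [2, 2, 3, 3]                # diagonal of eta_CZ in units of delta^2
for target in (1e-2, 1e-3):
    dB = brentq(lambda x: gate_error(x, cz_diag) - target, 5.0, 25.0)
    print(f"CZ(+/-1) reaches {target:g} at {dB:.1f} dB")  # ~11.9, ~13.7
\end{verbatim}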
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig-ErrorRates}
\caption{
Logical gate error rates, Eq.~\eqref{probErr}, for resources (input GKP states and qunaught states) of a given quality described by the squeezing, Eq.~\eqref{squeezingfactor}. Dashed curves are single-mode gates and solid curves are two-mode gates. Unfilled circles indicate the squeezing required for error rate $10^{-2}$ and filled circles for $10^{-3}$; squeezing values at these points are given in Table~\ref{thresholdComparison}.
The two-mode gate $\CZ(\pm 1)$ sets the squeezing threshold for Clifford implementation, since it is the worst performing gate in the set, Eq.~\eqref{eq:gateconnections}.
The grey line (furthest right) shows the error rate for the $\CZ$ gate implemented in a canonical CV cluster state using the method of Ref.~\cite{menicucci2014fault} and is included for comparison.
}
\label{Fig-sec:squeezingthresholds}
\end{figure}
In Fig.~\ref{Fig-sec:squeezingthresholds}, we plot the gate error rate, Eq.~\eqref{probErr}, as a function of the squeezing in the resources---specifically, the input GKP states and the qunaught states that comprise the macronode cluster state---required to implement the gate.
Noisy GKP Clifford gates have different error rates, with the worst-performing gate setting the required squeezing for a fixed tolerable error rate.
That gate is the two-mode gate $\CZ (\pm 1)$ for all levels of squeezing.
Required squeezing values for selected error rates are shown in Table~\ref{thresholdComparison}.
We find that error rates of $10^{-2}$--$10^{-3}$, which are compatible with the thresholds of 3D topological codes ($\sim$1\% for local noise~\cite{Raussendorf2007,Fowler2012surface}), require a minimum of 11.9--13.7~dB of squeezing in the resources.
For comparison, we also give the required squeezing at these error rates for several previous proposals.
An important benchmark is the required squeezing set out in Ref.~\cite{menicucci2014fault} for canonical CV cluster states (built with noise-free CV controlled-$Z$ gates and squeezed vacuum states) and GKP ancillae resources. Using an extremely conservative logical error rate of $10^{-6}$, that work found that Clifford gates had no more than this level of error when the resource states (cluster state and GKP states) carried 20.5~dB of squeezing. This level of squeezing
has been considered a ``squeezing threshold'' associated with the $10^{-6}$ error rate.
We do not use the term ``squeezing threshold'' to characterize the results in this work to avoid making claims about the practical viability of fault-tolerant quantum computing with a given level of squeezing; doing that would require simulations of particular implementations, such as in Refs.~\cite{larsen2021architecture,bourassa2021blueprint,tzitrin2021fault}.
Instead, we focus on the squeezing required to achieve target gate error rates.
\begin{table}[t]
\resizebox{\columnwidth}{!}
{
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Gate & \multicolumn{4}{c|}{Error rate: $10^{-2}$} & \multicolumn{4}{c|}{Error rate: $10^{-3}$} \\ \hline
& Ref.~\cite{menicucci2014fault} &Ref.~%
\cite{Larsen2020noiseAnalysis} & Ref.~\cite{larsen2021architecture} & ~ours~
& Ref.~\cite{menicucci2014fault} & Ref.~\cite{Larsen2020noiseAnalysis} & Ref.~\cite{larsen2021architecture} & ~ours~ \\ \hline
$\op I$ & 14.0 & 13.2 & 11.8 & 10.0 & 15.9 & 15.0 & 13.6 & 11.9 \\ \hline
$\op F$ & 14.8 & 14.9 & 11.8 & 10.0 & 16.8 & 16.7 & 13.6 & 11.9 \\ \hline
$\op P(\pm1)$ & 14.4 & 15.2 & 12.5 & 11.2 & 16.4 & 17.1 & 14.5 & 13.7 \\ \hline
$\CZ(\pm 1)$ & 15.6 & - & - & 11.9 & 17.4 & - & - & 13.7 \\ \hline
$\op F \op F \CZ$ & - & 16.0 & 13.2 & - & - & 17.6 & 15.0 & - \\ \hline
\end{tabular}
}
\caption{Squeezing requirements (reported in~dB) for implementing GKP Clifford gates with $10^{-2}$ and $10^{-3}$ logical-qubit error rates.
We consider the following CV gates that together generate the GKP Clifford group: $\{\op I,\op F,\op P(\pm 1),\CZ(\pm 1)\}$.
We compare the results in this work (`ours') with those of prior studies~\cite{menicucci2014fault, Larsen2020noiseAnalysis, larsen2021architecture}, noting that we use $\op F \op F \CZ$ as shorthand for $(\op F \otimes \op F) \CZ(1)$ from Ref.~\cite{Larsen2020noiseAnalysis} and $(\op F^\dagger \otimes \op F) \CZ(1)$ from Ref.~\cite{larsen2021architecture}.
}\label{thresholdComparison}
\end{table}
The first reported required squeezing values for macronode cluster states (built with beam splitters) were given in Larsen \emph{et al.}~\cite{Larsen2020noiseAnalysis}, which considered quantum computation protocols for a variety of macronode lattices, including the QRL.
These protocols differ from ours in two significant ways: their single-mode Clifford gate implementations require two steps---four homodyne measurements---and used ancilla-assisted GKP error correction (Steane-type error correction~\cite{Steane1997}).
The required squeezing was later improved by 2.8~dB in Larsen \emph{et~al.}~\cite{larsen2021architecture} by upgrading to teleportation-based error correction (Knill-type~\cite{Knill2005a}), yet the new protocol was still limited by the fact that it required two steps, one to perform the gate and another for GKP error correction.\footnote{That work did not report required squeezing values. We have calculated them using the methods described here, including for the two-mode gate that arises from a different macronode teleportation gadget.}
Our further improvement of $\sim$1.3~dB over that scheme results from combining Clifford gates and teleportation-based GKP error correction into a single teleportation step.
Moreover, we have reduced GKP gate implementations to the minimum number of noisy ancilla states in the macronode gadgets (four qunaughts in the two-mode gadget), so further improvements
are unlikely without devising new schemes for gates requiring fewer ancillae.
\subsection{Applications for fault tolerance in topological codes}
\label{subsec:topological}
A promising avenue for fault-tolerant quantum computing is concatenation of GKP qubits with a topological code~\cite{vuillot2019quantum,hanggli2020enhanced,noh2020fault}.
Measurement-based implementations wire up cluster states in various ways into 3D cluster states with topological fault tolerance. A key example is the Raussendorf--Harrington--Goyal~(RHG) 3D lattice~\cite{Raussendorf2007}, which can be used with GKP qubits~\cite{fukui2020temporal,bourassa2021blueprint} including macronode-based architectures~\cite{larsen2021architecture, tzitrin2021fault}. Since the RHG lattice is of uniform degree four, it is directly compatible with a QRL construction (\emph{i.e.},~with four modes per macronode), as one recent proposal illustrates~\cite{tzitrin2021fault}.
The QRL construction that we consider here serves as a canonical base case that showcases improved use of noisy GKP resources in a macronode setting. Preliminary results indicate that GKP computing with other similar macronode lattices of interest---including
that employed in Refs.~\cite{larsen2021architecture,Noh2021lowoverhead}---performs equally well. That is, GKP Clifford gates and error correction can be executed in a single step (using the minimum number of noisy qunaught ancilla states), and the logical error rates are identical to those for the QRL presented here. This noise equivalence
dovetails with current proposals for topological quantum computing by enabling known fault-tolerance thresholds of 10.2~dB in Ref.~\cite{larsen2021architecture} and 9.9~dB in Ref.~\cite{Noh2021lowoverhead}. In those works, simultaneous GKP gate and error correction were anticipated, but the methods to execute them were unknown. Further threshold improvements may result from combining other techniques such as ``hyper-enriching'' the GKP qubits in the cluster state~\cite{tzitrin2021fault}, concatenating with other codes~\cite{fukui2021efficient}, and using the analog syndrome to improve the error correction~\cite{fukui2018high,yamasaki2020polylog}.
\section{Conclusion}
\label{con}
Using the quad-rail lattice CV cluster state,
we have introduced improvements that allow for more efficient use of the cluster state for fault-tolerant quantum computing with the GKP code.
These improvements include single-step implementations of all single- and two-mode GKP Clifford gates using only the minimum number of noisy ancilla states in the QRL macronode gadgets that realize the gates.
This allows GKP Cliffords and GKP error correction to be performed simultaneously, lowering GKP gate noise by at least 1.3~dB over similar protocols.
Additionally, we show how to produce GKP magic states without modifying the cluster state itself by using heterodyne detection instead of homodyne to measure the modes.
It is interesting to note that for a squeezing of 10.5~dB (the fault-tolerance threshold reported in Ref.~\cite{bourassa2021blueprint}), we find a gate error rate of 3.6\%---very near that of the RHG lattice under local noise (3.3\%~\cite{Raussendorf2007}), which is the code used in that work for concatenation.
However, beyond simple comparisons that can give insight for rigorous studies, one should resist the temptation to take the gate-error rates reported here and draw conclusions about fault tolerance. Fault-tolerance thresholds depend on many specific factors, notably the decoder and the error model.
Finally, preliminary studies indicate that the results here are not unique to the QRL construction: simultaneous GKP Clifford gates and error correction can be implemented in various other macronode lattices with identical gate noise to that for the QRL presented here. This puts many macronode cluster states on the same footing, providing flexibility for experimental implementation. This will be reported in a future publication.
\acknowledgments
We thank Mikkel Larsen for useful feedback. This work is supported by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (Project No.\ CE170100012).
\label{intro}
Galaxy-galaxy mergers are phenomena violent enough to disturb the
galactic potentials of both colliding galaxies.
They create shocked gas at the merging region, make interstellar
matter fall into the new galactic potential, and induce strong
starbursts.
This extreme starburst heats up the surrounding dust, which sometimes
radiates $10^{12}$ L$_{\odot}$ or more in infrared luminosity.
These sources are often called ultra-luminous infrared galaxies
(ULIRGs).
Since stars form from molecular gas and dust, observations of these
components toward ULIRGs are important for understanding the nature
of extreme starbursts.
Studies of multiple molecular lines or multiple transitions revealed
that a large fraction of the molecular gas in ULIRGs is dense and warm
\citep[e.g.,][]{sol92,gao04,pap07}.
In addition, the efficiency of star formation is tightly correlated
with the dense gas fraction in molecular gas \citep{sol92,gao04}.
ULIRGs are also important for the study of high-z submillimeter
galaxies.
The Submillimetre Common-User Bolometer Array (SCUBA) on the James
Clerk Maxwell Telescope (JCMT) has detected many high-z submillimeter
galaxies \citep[SMGs; e.g.,][]{hug98,eal99,eal00,sco02,bor03}.
Most of them have infrared luminosities of $\geq10^{12}$ L$_{\odot}$,
and bright ones have $\sim10^{13}$ L$_{\odot}$.
Local ULIRGs can be studied as nearby counterparts of high-z SMGs or
used as nearby templates to derive photometric redshifts.
To compare redshifted emission lines or continuum emission in high-z
galaxies to those in local ULIRGs, we need to observe local ULIRGs at
the same wavelengths as the rest wavelengths of the detected lines or
continuum from the high-z galaxies.
Many of the high-z galaxies were observed at millimeter-wave
\citep[around 1 -- 3~mm; e.g.,][]{gre05,tac06}.
Therefore the rest wavelengths of the detected lines or continuum are
at submillimeter wavelengths.
Our knowledge about submillimeter lines or continuum from local
ULIRGs, especially from compact starburst regions or from nuclei, is
very limited so far, because only recently have high resolution
submillimeter observations been possible.
Here we present the first interferometric $^{12}$CO(6-5) line and
435~$\mu$m (690~GHz) continuum observations of Arp 220 using the
Submillimeter Array \citep[SMA;][]{ho04}.
The interferometric $^{12}$CO(2-1), $^{13}$CO(2-1), and
C$^{18}$O(2-1) lines and 1.3~mm (226~GHz) continuum observations have
also been made simultaneously with the $^{12}$CO(6-5) and
435~$\mu$m observations.
Neither the $^{12}$CO(6-5) line nor the C$^{18}$O(2-1) line had previously
been observed (even with a single-dish telescope) toward this galaxy.
Arp 220 is the nearest ULIRG
\citep[79.9 Mpc, $1\arcsec=387$~pc;][]{san03}, and has therefore been
well studied in various wavelengths.
Optical images show disturbed faint structures \citep[``galaxies
with adjacent loops'';][]{arp66}, which seem like remnants of tidal
tails produced by a galaxy-galaxy merger \citep{jos85}, similar to
numerical simulation results \citep[e.g.,][]{her92,her93}.
The infrared luminosity at $8-1000~\mu$m of this galaxy is
$1.6\times10^{12}$ L$_{\odot}$ \citep{san03},
and high spatial resolution radio continuum \citep{nor88},
near-infrared \citep{gra90}, and mid-infrared \citep{soi99}
observations revealed two nuclei at the center with a separation of
$\sim0.95''$ ($\sim370$ pc).
This galaxy is very rich in molecular gas,
$\sim9\times10^9$ M$_{\odot}$, and two-thirds of this mass is
concentrated within 400 pc in radius \citep{sco97}.
Such a high gas-mass concentration is also similar to the numerical
simulation results \citep[e.g.,][]{bar91,mih96}.
These observations and numerical simulations indicate that Arp 220 is
in the final stage of merging.
The past high spatial resolution millimeter-wave interferometric
observations show molecular gas concentrations at the two nuclei, and
these are embedded in an extended ($\sim1$~kpc) molecular structure,
which seems to be a rotating disk coincident with a dust lane in
optical images \citep{sco97,dow98}.
The molecular gas peak at each nucleus shows a steep velocity
gradient with the direction of the gradient different from that of
the large-scale molecular gas disk.
This suggests that a small-scale molecular gas disk rotates around
each nucleus, and both disks are embedded in the large-scale rotating
disk \citep{sak99}.
Recent $\sim0\farcs3$ molecular gas images resolved the detailed
nuclear gas distributions \citep{dow07,sak08}, which are consistent
with the Hubble Space Telescope NICMOS near-infrared imaging results
that suggest an opaque disk around one of the nuclei \citep{sco98}.
Dust emission also peaks at the two nuclei.
\citet{sak99} reported that most of the continuum flux at 1.3~mm
comes from the two nuclei, while \citet{dow98} mentioned that half of
the 1.3~mm continuum flux comes from the extended component.
It is suggested that a few tens of percent of the 860~$\mu$m
\citep{sak08} and 24.5~$\mu$m \citep{soi99} continuum flux comes from
the extended component.
Far-infrared observations with the Infrared Space Observatory (ISO)
also suggest the existence of extended dust emission \citep{gon04}.
\section{OBSERVATIONS AND DATA REDUCTION}
\label{obs}
We observed the center of Arp 220 with the SMA on 2005 March 2.
The phase reference center was at
$\alpha(2000) = 15^{\rm h}34^{\rm m}57\fs19$ and
$\delta(2000) = 23\arcdeg30\arcmin11\farcs3$.
The 225~GHz atmospheric opacity was between 0.03 and 0.04, which was
measured at the nearby Caltech Submillimeter Observatory.
Six of the eight 6~m antennas were used with projected antenna
separations between 14~m and 68~m.
High frequency receivers were tuned to observe the redshifted
$^{12}$CO(6-5) line (679.13~GHz) in the lower side band (LSB) and
the 689.13~GHz ($\sim$435~$\mu$m) continuum emission in the upper
side band (USB).
Low frequency receivers were tuned to observe the redshifted
$^{13}$CO(2-1) line (216.47~GHz) and C$^{18}$O(2-1) line (215.64~GHz)
in the LSB and the redshifted $^{12}$CO(2-1) line (226.42~GHz) in
the USB.
The double sideband system temperature for the high frequency band
ranged from 2000~K to 2500~K for most of the time (i.e., at high
elevation), and that for the low frequency band from 140~K to 180~K.
The SMA correlator covers a 2~GHz bandwidth for each sideband of both
the high and low frequency bands, which corresponds to velocity
ranges of $\sim880$~km~s$^{-1}$ and $\sim2700$~km~s$^{-1}$ per
sideband for the high and low frequency bands, respectively.
The channel width was configured to be 3.25~MHz
($\sim1.4$~km~s$^{-1}$) and 0.8125~MHz ($\sim1.1$~km~s$^{-1}$) for
the high and low frequency bands, respectively.
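These velocity spans follow directly from the radio Doppler relation
$\Delta v = c\,\Delta\nu/\nu$; as a consistency check, the minimal
sketch below (not analysis code; the frequencies are the observed
values quoted above) reproduces the quoted numbers.
\begin{verbatim}
# Velocity span and channel width from dv = c * dnu / nu.
c = 299792.458  # speed of light [km/s]
for label, nu, bw, ch in [("high band", 679.13, 2.0, 3.25e-3),
                          ("low band", 226.42, 2.0, 0.8125e-3)]:
    print(label,
          "sideband span = %.0f km/s" % (c * bw / nu),
          "channel = %.2f km/s" % (c * ch / nu))
# -> ~880 and ~1.4 km/s (high band); ~2650 and ~1.1 km/s (low band),
#    consistent with the values quoted above.
\end{verbatim}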
We calibrated the data using the Owens Valley Radio Observatory
software package MIR, which is modified for the SMA.
For the high frequency band calibration, we used a partially resolved
source, Callisto, as a gain (amplitude and phase) and flux
calibrator\footnote[9]{The flux values for Callisto and Ceres are
inferred from the SMA Planetary Visibility Function Calculator
(\url{http://sma1.sma.hawaii.edu/planetvis.html})}, since quasars
were too weak for gain calibration.
Callisto was about $50\arcdeg$ away from Arp 220.
The r.m.s.\ phase fluctuation was about $21\arcdeg$.
We used a source model in the gain calibration to correct for the
effect of the partially resolved structure.
Bandpass calibration was done using three sources, Mars, Callisto,
and Ganymede, to improve the signal-to-noise ratio (S/N).
Ceres was imaged after the calibrations, and its flux was 22\% lower
than the calculated flux\footnotemark[9].
Although the flux error for Ceres was about 20\%, we conservatively
adopt a flux error of 30\% for the high frequency band data
throughout this paper.
For the low frequency band calibration, we used Ceres, which was
about $40\arcdeg$ away, as the gain calibrator because of its
proximity to the source, and Mars, Callisto, and Ganymede were used
as bandpass calibrators.
Callisto was imaged after the calibrations, and its flux was 21\%
lower than the calculated value.
Hereafter, we adopt a flux error of 20\% for the low frequency band
data.
\begin{deluxetable*}{cccc}
\tablecaption{Parameters for the continuum and molecular line images
\label{tab-obs-contline}}
\tablehead{
\colhead{Wavelength (Frequency)}
& \colhead{Synthesized Beam Size}
& \colhead{Velocity Resolution}
& \colhead{R.M.S. Noise} \\
\colhead{or Line}
& \colhead{and Position Angle}
&
& \\
\colhead{[$\mu$m (GHz)]}
& \colhead{(Linear scale)}
& \colhead{[km~s$^{-1}$]}
& \colhead{[mJy~beam$^{-1}$ (mK)]}
}
\startdata
435 (689.13) & $1\farcs2\times0\farcs9$, $139\arcdeg$ & --- & 190 (450) \\
& (470~pc $\times$ 350~pc) & & \\
1320 (226.46) & $3\farcs8\times3\farcs3$, $28\arcdeg$ & --- & 4.8 (9.1) \\
& (1.47~kpc $\times$ 1.28~kpc) & & \\
1380 (216.46) & $3\farcs9\times3\farcs5$, $27\arcdeg$ & --- & 5.7 (11) \\
& (1.51~kpc $\times$ 1.36~kpc) & & \\ \hline
$^{12}$CO(6-5) & $1\farcs3\times0\farcs8$, $129\arcdeg$ & 30.1 & 535 (1400) \\
& (500~pc $\times$ 310~pc) & & \\
$^{12}$CO(2-1) & $3\farcs8\times3\farcs3$, $28\arcdeg$ & 5.4 & 32.2 (61) \\
& (1.47~kpc $\times$ 1.28~kpc) & & \\
$^{13}$CO(2-1), C$^{18}$O(2-1)
& $3\farcs9\times3\farcs5$, $27\arcdeg$ & 30.8 & 13.1 (25) \\
& (1.51~kpc $\times$ 1.36~kpc) & &
\enddata
\end{deluxetable*}
Data from 5 antennas were used for the $^{12}$CO(6-5) line
imaging because of a correlator problem.
For $435~\mu$m continuum imaging, we used the data from all 6
antennas after discarding the problematic data.
We subtracted the continuum emission from the line emission data
before line imaging.
Since the $^{12}$CO(6-5) line width is comparable with the
bandwidth, we could not obtain the continuum emission from the same
sideband.
We therefore subtracted the continuum emission using the other
sideband (i.e., USB) in the $uv$ plane using the National Radio
Astronomy Observatory software package AIPS.
For the low frequency (1~mm) data, we created the continuum image
using the line free channels in the same band.
The line images for the low frequency data were made after the
subtraction of the continuum emission from the data in the $uv$
plane.
The calibrated data were binned, and the final channel maps have
velocity resolutions of about 30~km~s$^{-1}$ for the $^{12}$CO(6-5),
$^{13}$CO(2-1), and C$^{18}$O(2-1) lines, and about 5~km~s$^{-1}$ for
the $^{12}$CO(2-1) line.
We use the radio definition for the LSR velocity in this paper, which
is $v_{\rm LSR} = c (1 - \nu / \nu_{\rm rest})$.
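As a consistency check, this definition reproduces the tuning
frequencies quoted above.
The sketch below assumes a systemic LSR velocity of
$\sim5350$~km~s$^{-1}$ and takes the rest frequencies from standard
line catalogs:
\begin{verbatim}
# Radio convention: v = c * (1 - nu/nu_rest) => nu = nu_rest * (1 - v/c).
c = 299792.458                  # [km/s]
v_sys = 5350.0                  # assumed systemic LSR velocity [km/s]
rest = {"12CO(6-5)": 691.4731,  # rest frequencies [GHz]
        "12CO(2-1)": 230.5380,
        "13CO(2-1)": 220.3987,
        "C18O(2-1)": 219.5604}
for line, nu0 in rest.items():
    print(line, "%.2f GHz" % (nu0 * (1.0 - v_sys / c)))
# -> 679.13, 226.42, 216.47, 215.64 GHz, matching the tunings above.
\end{verbatim}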
The C$^{18}$O(2-1) line was observed up to the LSR velocity of
5550~km~s$^{-1}$, covering about 72\% of the total line width, since
this line was at an edge of the bandpass.
We CLEANed the images with natural weighting, and the resulting
synthesized beam sizes were about $1\arcsec$ for 690~GHz band
images, and about $3\arcsec-4\arcsec$ for 230~GHz band images.
The beam sizes and the r.m.s.\ noise levels for all the images
are summarized in Table~\ref{tab-obs-contline}.
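The brightness-temperature equivalents in
Table~\ref{tab-obs-contline} follow from the standard Rayleigh-Jeans
conversion for a Gaussian beam,
$T_{\rm B}~[{\rm K}] \simeq 1.222\times10^{3}\,
S~[{\rm mJy~beam^{-1}}] / (\nu^{2}~[{\rm GHz^{2}}]\,
\theta_{\rm maj}\theta_{\rm min}~[{\rm arcsec^{2}}])$;
a minimal check against the table values:
\begin{verbatim}
# Rayleigh-Jeans brightness temperature for a Gaussian beam.
def tb(s_mjy, nu_ghz, bmaj, bmin):  # beam axes in arcsec
    return 1.222e3 * s_mjy / (nu_ghz**2 * bmaj * bmin)  # [K]

print(tb(190, 689.13, 1.2, 0.9))  # ~0.45 K: 435 um continuum rms
print(tb(4.8, 226.46, 3.8, 3.3))  # ~9.1 mK: 1.32 mm continuum rms
print(tb(535, 679.13, 1.3, 0.8))  # ~1.4 K: 12CO(6-5) channel rms
\end{verbatim}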
The half-power widths of the primary beam at 690~GHz and 230~GHz are
$17\arcsec$ (6.6~kpc) and $52\arcsec$ (20.1~kpc), respectively.
These sizes are much larger than the sizes of the line/continuum
emitting regions (at most a few arcseconds), and we did not make any
primary beam correction to our images.
Our $^{12}$CO(6-5) images show a systematic position shift of about
$0\farcs7$ from the peak positions of the double nucleus reported in
the previous observations at longer wavelengths
\citep[e.g.,][]{sak99}.
This offset can be explained by the baseline error, which is about
$0.3\lambda$ or less at 690~GHz.
Furthermore, the phase calibrator we used was located about
$50\arcdeg$ away from the source.
Hence we shifted the positions of the $^{12}$CO(6-5) line images
based on the peak positions and kinematic information to be
consistent with the previously published results.
The $435~\mu$m continuum image also shows a position shift of about
$0\farcs1$, which is again explained by the baseline error, and we
therefore shifted its position in the same manner as for the
$^{12}$CO(6-5) data.
The low frequency band images did not show any noticeable position
shifts, and therefore we did not shift any images.
\section{RESULTS}
\label{res}
\subsection{$435~\mu$m Continuum Emission}
\label{res-cont435}
The $435~\mu$m (689~GHz) continuum emission image is shown in
Fig.~\ref{fig-cont435}.
The image clearly shows two peaks with a separation of about
$1\arcsec$, consistent with the past continuum images in centimeter,
millimeter, and infrared wavelengths.
We therefore call these peaks the eastern and western nuclei, as in
past studies.
The peak intensities of the western and eastern nuclei are
$1.28\pm0.38$~Jy~beam$^{-1}$ and $0.96\pm0.29$~Jy~beam$^{-1}$,
respectively.
The total flux density of the continuum emission is $2.5\pm0.8$~Jy.
Since the flux distribution can be smeared by the phase fluctuation
or baseline errors, we convolved the image to larger beam sizes (up
to $10\arcsec$ with a Gaussian convolution) and measured the total
flux density.
It did not change from 2.5~Jy, indicating that the flux smearing
effect due to the phase or baseline errors is small.
The r.m.s.\ phase fluctuation at this wavelength was indeed only
$21\arcdeg$ (Sect.~\ref{obs}), which induces $<0\farcs1$ smearing,
consistent with this result.
\begin{figure}
\plotone{figure1.eps}
\caption{
The $435~\mu$m (689~GHz) continuum emission image of the central
region of Arp 220.
The synthesized beam ($1\farcs2\times0\farcs9$ or 470~pc $\times$
350~pc) with the P.A.\ of $139\arcdeg$ is shown at the
bottom-left corner.
The two crosses indicate the 1.3~mm continuum peaks
\citep{sak99,dow98}, which correspond to the double nucleus.
The contour levels are $3, 4, 5,$ and $6\sigma$,
where $1\sigma$ = 190 mJy beam$^{-1}$.
The position offsets are measured from
$\alpha(2000) = 15^{\rm h}34^{\rm m}57\fs25$ and
$\delta(2000) = 23\arcdeg30\arcmin11\farcs4$.
\label{fig-cont435}}
\end{figure}
The previously published $450~\mu$m (670~GHz) single dish results
show a very large variation in the detected flux densities; the
United Kingdom Infrared Telescope (UKIRT) UKT14 result shows the flux
density of $3.0\pm1.1$~Jy \citep{eal89}, but that of the JCMT SCUBA
result shows $6.286\pm0.786$~Jy \citep{dun01}.
We plot the submillimeter (1.5~mm -- $300~\mu$m or
200~GHz -- 1000~GHz) spectral energy distribution (SED) using
published single dish results in Fig.~\ref{fig-sed}.
The large crosses in the plot are the data for the total flux
densities at various frequencies, and the solid line shows the
$\chi^2$ fitting of the data (see Sect.~\ref{dis-sed} for details).
As shown in the figure, the JCMT SCUBA data point is on the fit, but
the UKIRT UKT14 data point is significantly lower than the fit.
There has been no report of time variation of the flux density at
submillimeter wavelengths in this galaxy so far.
In addition, the measurements of the total flux density in the plot
were taken over three decades, yet only the $450~\mu$m values show a
large discrepancy, which is therefore unlikely to be caused by time
variation.
We therefore assume that the JCMT SCUBA value is more accurate.
Using the fit, we estimated the total flux density at $435~\mu$m as
5.9~Jy.
This suggests that our $435~\mu$m continuum observations missed
$\sim58\%$ of the total flux density.
Since this missing flux is larger than our flux error of $\sim30\%$,
and the effect of the flux smearing due to phase or baseline errors
is small, this missing flux is probably due to the existence of an
extended component.
If the missing flux of 3.4~Jy is due to an extended component with a
Gaussian distribution that has a full width at half maximum of
$\sim3\arcsec$, its peak would be only twice our r.m.s.\ noise.
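This estimate can be reproduced by spreading the missing flux over
the assumed Gaussian; a rough sketch, valid in the limit where the
source is much larger than the beam (so the peak scales with the
beam-to-source area ratio):
\begin{verbatim}
# Peak of a ~3" FWHM Gaussian carrying the missing flux
# (5.9 Jy expected - 2.5 Jy recovered = 3.4 Jy, i.e. ~58% missing),
# observed with the 1.2" x 0.9" beam.
missing = 5.9 - 2.5                    # [Jy]
beam_area = 1.2 * 0.9                  # proportional to solid angle
source_area = 3.0 * 3.0
peak = missing * beam_area / source_area
print(peak, peak / 0.19)               # ~0.41 Jy/beam, ~2x the rms
\end{verbatim}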
We did not detect any significant signal with larger beam as
mentioned above, but the r.m.s.\ noise level increased in the larger
beam images due to a small number of data points at shorter
baselines.
It is therefore not clear whether the missing flux in our data is due
to the lack of shortest baselines or to low S/N.
\begin{figure}
\plotone{figure2.eps}
\caption{Submillimeter wavelength (1.5~mm -- 300~$\mu$m or 200~GHz --
1000~GHz) SED for various components in Arp 220.
Crosses, squares, diamonds, and pluses indicate the continuum
flux density of the entire system, the eastern nucleus, the
western nucleus, and the extended component, respectively.
A down-arrow indicates the $3\sigma$ upper limit for the extended
component.
Bars on the observational points indicate the observational
errors.
Solid, dotted, dashed, and dot-dashed lines are the $\chi^2$
fitting for each component.
The low frequency data for the double nucleus and the extended
component are taken from \citet{dow98} and \citet{sak99} for
229~GHz, \citet{wie02} for 343~GHz, and \citet{sak08} for
345~GHz.
The data for the total flux density are taken from the following
papers:
214~GHz: \citet{woo89},
229~GHz: \citet{sco97,dow98,sak99},
240~GHz: \citet{car92},
273~GHz: \citet{rig96},
353~GHz: \citet{dun01},
375~GHz: \citet{eal89,rig96},
667~GHz: \citet{eal89,dun01}, and
857~GHz: \citet{eal89,rig96}.
\label{fig-sed}}
\end{figure}
\subsection{$^{12}$CO(6-5) Line Emission}
\label{res-co65}
The channel maps of the $^{12}$CO(6-5) line emission at the nuclear
region of Arp 220 are shown in Fig.~\ref{fig-co65ch}, and the
integrated intensity (moment 0) and intensity-weighted mean velocity
(moment 1) maps are shown in Fig.~\ref{fig-co65mom01}.
The channel maps reveal that the molecular gas traced by the
$^{12}$CO(6-5) line exhibits different kinematics around each
nucleus; the gas associated with the eastern nucleus shows a velocity
gradient running from south-west to north-east, and that associated
with the western nucleus from south-east to north-west.
These features can also be seen in the moment 1 map for the eastern
nucleus.
Molecular gas around the western nucleus in the moment 1 map, on the
other hand, does not exhibit a clear velocity gradient as in the
channel maps.
This may be due to the nature of the moment 1 map (i.e., an
intensity-weighted velocity field map) combined with the weak
emission in the northwestern region.
The kinematics on spatial scales larger than those immediately
associated with the individual nuclei shows a velocity gradient along
the north-east to south-west direction, with a much shallower
gradient than that of the eastern nucleus.
All of the kinematic features mentioned above are also seen in the
previously published lower-J $^{12}$CO lines
\citep{sak99,sak08,dow98,sco97}.
The kinematics of the western nucleus displays a smaller velocity
gradient in our data, probably due to the poorer spatial resolution
and lower S/N.
The detection of the large-scale kinematic feature suggests that our
data recovered some of the extended (a few arcsecond scale)
component, but much less than that detected in $^{12}$CO(1-0),
$^{12}$CO(2-1), or $^{12}$CO(3-2) observations
\citep{sco97,dow98,sak99,sak08}.
\begin{figure}
\plotone{figure3.eps}
\caption{
Channel maps of the $^{12}$CO(6-5) line emission.
Continuum emission is already subtracted.
The LSR velocity (radio definition) in km s$^{-1}$ is shown at
the upper-left corner of each channel map, and the synthesized
beam ($1\farcs3\times0\farcs8$ or 500~pc $\times$ 310~pc) with
the P.A.\ of $129\arcdeg$ is shown at the lower-left corner of
the first channel map.
The position offsets are measured from
$\alpha(2000) = 15^{\rm h}34^{\rm m}57\fs25$ and
$\delta(2000) = 23\arcdeg30\arcmin11\farcs4$.
The two crosses in each channel map indicate the 1.3~mm continuum
peaks \citep{sak99}.
The contour levels are $-3, 3, 4, 5,$ and $6\sigma$,
where $1\sigma$ = 535~mJy~beam$^{-1}$ (= 1.4 K).
\label{fig-co65ch}}
\end{figure}
\begin{figure}
\plotone{figure4a.eps}
\plotone{figure4b.eps}
\caption{
(a) Integrated intensity (moment 0) and (b) intensity-weighted
mean velocity (moment 1) maps of the $^{12}$CO(6-5) line
emission.
Continuum emission is already subtracted.
The position offsets, the synthesized beam, and the crosses are
the same as in Fig.~\ref{fig-co65ch}.
The contour levels for the moment 0 map are (2, 4, 6, 8, 10, 12
and $14)\times68$~Jy~beam$^{-1}$~km~s$^{-1}$ (= 175 K km s$^{-1}$).
The contour levels for the moment 1 map are $-90, -60, -30, 0,
30, 60, 90, 120, 150,$ and $180$~km~s$^{-1}$, where 0~km~s$^{-1}$
corresponds to the LSR velocity of 5351~km~s$^{-1}$.
Zero velocity is shown in thick solid contour, and negative and
positive velocities are shown in dashed and solid contours,
respectively.
\label{fig-co65mom01}}
\end{figure}
The integrated intensity image, on the other hand, displays a single
peak that is elongated along the two nuclei, which looks different
from our $435~\mu$m continuum image or the previously published high
resolution $^{12}$CO(2-1) and $^{12}$CO(3-2) images
\citep{sak99,sak08,dow98} at first glance.
Our image is rather similar to the low resolution $^{12}$CO(2-1)
maps \citep{sak99,sco97} with the diffuse, extended (larger than a
few arcseconds) emission removed.
The total integrated $^{12}$CO(6-5) intensity is
$1250\pm250$~Jy~km~s$^{-1}$ in our data.
Since there is no published result for the $^{12}$CO(6-5) line
emission, we could not estimate the missing flux of our
$^{12}$CO(6-5) data.
Fig.~\ref{fig-co65spec} shows the $^{12}$CO(6-5) line spectrum at
each nucleus.
The peak brightness temperature of the western nucleus is
$8.7\pm2.6$~K ($3.4\pm1.0$~Jy~beam$^{-1}$) around the LSR velocity of
$\sim5300$~km~s$^{-1}$, and that of the eastern
nucleus $6.1\pm1.8$~K ($2.4\pm0.7$~Jy~beam$^{-1}$) around
$\sim5500$~km~s$^{-1}$.
The overall velocity ranges for the two nuclei are similar to those
in the high resolution $^{12}$CO(2-1) and $^{12}$CO(3-2) observations
\citep{sak99,sak08}.
\begin{figure}
\epsscale{1.0}
\plotone{figure5.eps}
\caption{
Spectra of the $^{12}$CO(6-5) line at the two nuclei.
Continuum emission is already subtracted.
\label{fig-co65spec}}
\end{figure}
\subsection{1.32~mm and 1.38~mm Continuum Emission}
\label{res-cont13}
Continuum emission is detected at a significant level
($\sim30\sigma$) at both 1.32~mm (226.46~GHz) and 1.38~mm
(216.46~GHz), and is unresolved at both wavelengths at our resolution
of $3''-4''$ (the 1.32~mm image is shown in
Fig.~\ref{fig-co21cont}a; the 1.38~mm image is almost identical and
is not shown here).
The total flux densities for 1.32~mm and 1.38~mm are $167\pm33$~mJy
and $160\pm32$~mJy, respectively.
From the dust continuum fitting (Sect.~\ref{res-cont435}), we
estimate the total flux densities at 1.32~mm and 1.38~mm as 153~mJy
and 178~mJy (including the non-thermal flux contribution).
The observed total flux density therefore agrees with that from the
SED fitting within our calibration error.
We therefore conclude that our 1.32~mm and 1.38~mm continuum data
have no missing flux.
\begin{figure}
\plotone{figure6.eps}
\caption{
Continuum and line images taken in the low frequency band.
Continuum is already subtracted from the line images.
Synthesized beams are shown at the lower-left corner of each
image, and their sizes are in Table~\ref{tab-obs-contline}.
The crosses are the same as in Fig.~\ref{fig-co65ch}.
The position offsets are measured from
$\alpha(2000) = 15^{\rm h}34^{\rm m}57\fs24$ and
$\delta(2000) = 23\arcdeg30\arcmin11\farcs3$.
(a) 1.32~mm continuum.
Contour levels are $-3, 3, 5, 10, 20,$ and $30\sigma$, where
$1\sigma$ = 4.8~mJy.
(b) $^{12}$CO(2-1) integrated intensity.
The contour levels are (5, 10, 20, 50, 100, 150, \ldots, and
$350)\times2.5$~Jy~beam$^{-1}$~km~s$^{-1}$
(= 4.8 K km s$^{-1}$).
(c) $^{13}$CO(2-1) integrated intensity.
The contour levels are (3, 5, 10, 15, 20, and
$25)\times2.1$~Jy~beam$^{-1}$~km~s$^{-1}$
(= 4.0 K km s$^{-1}$).
(d) C$^{18}$O(2-1) integrated intensity.
The contour levels are the same as in the $^{13}$CO(2-1) map.
\label{fig-co21cont}}
\end{figure}
\subsection{$^{12}$CO, $^{13}$CO, and C$^{18}$O J=2-1 Line Emissions}
\label{res-co21}
The $^{12}$CO(2-1) line image exhibits an extended structure along
the north-east to south-west direction even in our low resolution
image (Fig.~\ref{fig-co21cont}b), which is consistent with the past
interferometric maps \citep{sak99,dow98,sco97}.
On the other hand, the $^{13}$CO(2-1) and C$^{18}$O(2-1) line images
(Fig.~\ref{fig-co21cont}c, d) are unresolved at our resolution.
The integrated intensities of the $^{12}$CO, $^{13}$CO, and C$^{18}$O
J=2-1 lines are $1430\pm290$~Jy~km~s$^{-1}$,
$45.7\pm9.1$~Jy~km~s$^{-1}$, and $31.5\pm6.3$~Jy~km~s$^{-1}$,
respectively.
Our observation covers only $\sim72\%$ of the line width of the
C$^{18}$O(2-1) line (Sect.~\ref{obs}), so the derived value should be
a lower limit (see also Sect.~\ref{res-ratio}).
We compared the integrated intensities with the single dish results
for the $^{12}$CO(2-1) and $^{13}$CO(2-1) lines.
The JCMT observations of the $^{12}$CO(2-1) line
($21\arcsec-22\arcsec$ resolution) indicate
$1730\pm350$~Jy~km~s$^{-1}$ \citep{wie02} and
$1549\pm311$~Jy~km~s$^{-1}$ \citep{gre08}.
The flux differences between our and these observations are within
the calibration errors.
The $^{13}$CO(2-1) line was observed with the Institut de
Radioastronomie Millim\'etrique (IRAM) 30~m telescope ($11\arcsec$
resolution) and the JCMT ($21\arcsec$ resolution) \citep{gre08}, and they
obtained the integrated intensities of $60\pm13$~Jy~km~s$^{-1}$ and
$70\pm16$~Jy~km~s$^{-1}$, respectively.
The differences between our and these observations are again
explained by the calibration errors.
We therefore conclude that our $^{12}$CO(2-1) and $^{13}$CO(2-1)
line data have no missing flux.
Since there is no single dish C$^{18}$O(2-1) line observation, we
cannot estimate the missing flux for this line.
On the other hand, since we did not see any significant missing flux
in the 1.32~mm and 1.38~mm continuum data and the $^{12}$CO(2-1) and
$^{13}$CO(2-1) line data, we expect no significant missing flux in
the C$^{18}$O(2-1) line data.
\subsection{Line Spectra and Ratios}
\label{res-ratio}
To compare the line spectra and intensities of all four observed
lines, we convolved the data to the largest synthesized beam size of
$3\farcs9\times3\farcs5$ with the P.A.\ of $27\arcdeg$, which is
the beam size of the $^{13}$CO(2-1) and C$^{18}$O(2-1) lines.
Fig.~\ref{fig-cospec} shows the spectra of all these four lines.
Note that the sensitivity toward extended structure (i.e., $uv$
coverage) is different between the J = 2 -- 1 lines and the
J = 6 -- 5 line in this figure.
The $^{12}$CO(2-1) line shows a double peak profile with stronger
intensity at lower velocity, which is consistent with the past
single dish measurements \citep{wie02,sol90}.
The line width of the $^{12}$CO(2-1) line reaches 900~km~s$^{-1}$
(4900 -- 5800~km~s$^{-1}$).
The $^{13}$CO(2-1) and C$^{18}$O(2-1) lines exhibit very similar line
profiles and intensities.
The $^{12}$CO(6-5) line is mostly emitted at higher velocities and is
weak at lower velocities.
This asymmetry is similar to that of the HCN(4-3) line \citep{wie02}.
\begin{figure}
\plotone{figure7.eps}
\caption{
Multiple CO line spectra toward the central region of Arp 220
at $3\farcs9\times3\farcs5$ resolution (P.A.\ = $27\arcdeg$).
Thick solid, thin solid, thin dashed, and thin dash-dot lines are
the spectra of the $^{12}$CO(2-1), $^{13}$CO(2-1),
C$^{18}$O(2-1), and $^{12}$CO(6-5) lines, respectively.
Vertical axis is brightness temperature in Kelvin, and horizontal
axis is LSR velocity in km~s$^{-1}$.
Due to the weakness of the $^{13}$CO(2-1) and C$^{18}$O(2-1)
lines, these intensities are increased by a factor of 10 in this
figure.
Since the C$^{18}$O(2-1) line is located at the edge of the
bandpass, the spectrum finishes around the LSR velocity of
5550~km~s$^{-1}$.
Note that the $uv$ coverage is different between J = 2 -- 1 lines
and the J = 6 -- 5 line in this figure.
\label{fig-cospec}}
\end{figure}
We matched the shortest $uv$ distance between the $^{12}$CO(6-5) and
$^{12}$CO(2-1) data so that the two lines sample the same spatial
structures, and measured the $^{12}$CO(6-5)/(2-1) intensity ratio to
be $0.34\pm0.12$.
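The quoted uncertainty is dominated by the calibration errors of the
two bands; a minimal sketch of the propagation, assuming the 30\% and
20\% calibration errors of Sect.~\ref{obs} add in quadrature:
\begin{verbatim}
# 12CO(6-5)/(2-1) ratio with the high-band (30%) and low-band (20%)
# calibration errors combined in quadrature.
ratio = 0.34
err = ratio * (0.30**2 + 0.20**2) ** 0.5
print("%.2f +/- %.2f" % (ratio, err))   # -> 0.34 +/- 0.12
\end{verbatim}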
This value is much lower than the lower-J ratios, such as
$^{12}$CO(3-2)/(2-1) of $0.85\pm0.24$ \citep{wie02} or
$^{12}$CO(2-1)/(1-0) of $0.65\pm0.1$ \citep{sco97}.
This is mostly because the line profile of the $^{12}$CO(6-5) line
differs from those of the lower-J CO lines.
This indicates that the lower velocity gas is dominated by lower-J CO
emission, while the higher velocity gas is rich in higher-J emission.
This trend is consistent with the observations of \citet{wie02} that
the $^{12}$CO(3-2) line intensity decreased relative to the
$^{12}$CO(2-1) line in the lower velocity part, but stayed almost
constant at higher velocities.
The relation between the spatial distribution and the velocities of
the molecular gas in this galaxy is, however, not simple;
all the molecular gas components in this galaxy, namely the two
nuclei and the extended component, have low and high velocities
(Fig.~\ref{fig-co65spec}; see also \citealt{sak99,sak08}).
It is therefore difficult to tell which component contributes to the
high or low excitation conditions from the large-scale line spectra
alone.
The $^{12}$CO transition ratios for each nucleus are derived in
Sect.~\ref{dis-co}.
The $^{12}$CO(2-1)/$^{13}$CO(2-1) line ratio is $13.0\pm3.7$ at our
resolution of $3\farcs9\times3\farcs5$.
We also convolved our data to $13\arcsec$ resolution ($\approx$
single dish resolution), and the ratio was $16.2\pm4.6$.
These values are similar to or slightly lower than the values
observed in U/LIRGs, and similar to or somewhat higher than those in
starburst or Seyfert galaxies;
the $^{12}$CO(2-1)/$^{13}$CO(2-1) line ratios in U/LIRGs observed
with single dish telescopes are $23.5\pm4$ \citep{cas92} and $16\pm8$
\citep{gle01}, and those in starburst and Seyfert galaxies are
$13\pm5$ \citep{aal95} and $13\pm1$ \citep{pap98}, respectively.
The line profiles and the line intensities of the $^{13}$CO(2-1) and
C$^{18}$O(2-1) lines are almost identical (Fig.~\ref{fig-cospec}),
and the $^{13}$CO(2-1)/C$^{18}$O(2-1) line intensity ratio is
$1.0\pm0.3$ over the velocity range of 4800~km~s$^{-1}$ to
5550~km~s$^{-1}$, which is the range over which we observed the
C$^{18}$O(2-1) line (Sect.~\ref{obs}).
The $^{13}$CO(2-1) line has 92\% of its total integrated intensity in
this velocity range, so the result with the full line width will not
change significantly.
This very low $^{13}$CO(2-1)/C$^{18}$O(2-1) intensity ratio agrees
very well with the $^{13}$CO(1-0)/C$^{18}$O(1-0) ratio of
$1.0\pm0.3$ \citep{gre08}.
We discuss this ratio in Sect.~\ref{dis-abn}.
\section{DISCUSSION}
\label{dis}
\subsection{Dust Emissivity Index and Dust Opacity
of the Two Nuclei}
\label{dis-sed}
\begin{deluxetable*}{cccc}
\tablecaption{Fitting results of dust spectral energy distributions
\label{tab-dust}}
\tablehead{
\colhead{Source} & \colhead{Dust Temperature $T_{\rm d}$}
& \colhead{Emissivity Index $\beta$}
& \colhead{Critical Frequency (Wavelength) $\nu_{\rm c}$} \\
& \colhead{[K]} & & \colhead{[GHz ($\micron$)]}
}
\startdata
Arp 220 (total) & $51- 66$ & $1.3-1.4$ & $2000-2200$ ($\sim140- 150$) \\
East Nucleus & $49-120$ & $1.8-2.1$ & $370- 520$ ($\sim580- 810$) \\
West Nucleus & $97-310$ & $0.7-1.2$ & $190- 610$ ($\sim490-1600$) \\
Extended Component & $\sim38$ & $\sim2.4$ & $\sim1500$ ($\sim200$)
\enddata
\end{deluxetable*}
\begin{figure}
\plotone{figure8.eps}
\caption{
Continuum flux ratio between the eastern and western nuclei as a
function of frequency.
Our data point is the right-most one, and the data for 229~GHz
and 343~GHz are the same as in Fig.~\ref{fig-sed}.
\label{fig-fratio}}
\end{figure}
We plot in Fig.~\ref{fig-fratio} the flux ratios between the two
nuclei as a function of frequency.
Although our continuum data have calibration errors of $\sim30\%$
and a large amount ($\sim60\%$) of missing flux, the flux density
ratio between the two nuclei is accurate, since it depends only on
the noise level of the map.
Hence this flux ratio diagram has higher accuracy than the SED
diagrams shown in Fig.~\ref{fig-sed}.
The flux ratio in our data is $0.75\pm0.19$.
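Because the calibration factor scales both nuclei identically, it
cancels in the ratio, and the quoted uncertainty follows from the map
noise alone; a minimal sketch:
\begin{verbatim}
# East/west flux ratio; the common calibration factor cancels, so
# only the 0.19 Jy/beam map noise propagates.
east, west, rms = 0.96, 1.28, 0.19
ratio = east / west
err = ratio * ((rms / east)**2 + (rms / west)**2) ** 0.5
print("%.2f +/- %.2f" % (ratio, err))   # -> 0.75 +/- 0.19
\end{verbatim}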
If we assume that the missing flux of our data is due to extended
emission (see Sect.~\ref{res-cont435}), the size of the extended
emission is larger than the separation of the two nuclei.
Therefore the correction for the missing flux would increase the
fluxes of both nuclei by almost the same amount, and the flux ratio
would move toward unity; the ratio derived above is therefore a
lower limit.
Contamination of the flux from one nucleus to the other due to the
large beam size is small; this effect lowers the ratio by $\sim6\%$,
which is smaller than the calibration errors.
As the frequency decreases toward $\sim200$~GHz, the flux ratio also
decreases, and the flux of the eastern nucleus is about half or less
of that of the western nucleus.
Since the data for the two lower frequencies may not have significant
missing flux \citep[e.g.,][]{sak99,sak08}, the difference between the
ratio at 689~GHz and those at lower frequencies will be more
pronounced if we correct for the missing flux of the 689~GHz data.
Since the emission at these frequencies is dominated by dust
emission, this result suggests that the dust SEDs are different
between the two nuclei.
We then made continuum SEDs for the two nuclei at submillimeter
wavelengths (Fig.~\ref{fig-sed}), as well as that for the total flux
density, to discuss the dust property differences more
quantitatively.
The crosses, filled squares, and filled diamonds in the plot are the
data for the total flux density, east nucleus, and west nucleus,
respectively.
The solid, dashed, and dotted lines are the $\chi^2$ fitting of the
data with a function $\epsilon B(T_{\rm d})$, where $B(T_{\rm d})$ is
the Planck function for temperature $T_{\rm d}$ and $\epsilon$ is the
emissivity function.
We adopted a form $\epsilon = 1-\exp[-(\nu/\nu_{\rm c})^\beta]$;
$\nu_{\rm c}$ is the critical frequency where the opacity is unity,
and $\beta$ is the emissivity index.
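A minimal sketch of such a fit is given below; the flux densities are
illustrative placeholders (generated to be consistent with
$T_{\rm d}=80$~K, $\beta=2$, and $\nu_{\rm c}=400$~GHz), not the
measured values used in this paper, and the source solid angle is
that of the eastern-nucleus size adopted below.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def greybody(nu_ghz, t_d, beta, nu_c_ghz, omega_s):
    # S_nu [Jy] = Omega_s * (1 - exp[-(nu/nu_c)^beta]) * B_nu(T_d)
    nu = nu_ghz * 1e9
    b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * t_d))
    eps = 1.0 - np.exp(-(nu_ghz / nu_c_ghz) ** beta)
    return omega_s * eps * b_nu * 1e26    # W m^-2 Hz^-1 -> Jy

# Placeholder data points: (frequency [GHz], flux [Jy], error [Jy]).
nu_obs = np.array([229.0, 345.0, 689.13])
s_obs = np.array([0.034, 0.139, 0.900])
s_err = np.array([0.007, 0.028, 0.270])
# Gaussian source solid angle for a 0.27" x 0.14" source, in sr.
omega = np.pi / (4 * np.log(2)) * (0.27 * 0.14) / 206265.0**2

popt, _ = curve_fit(lambda nu, t, b, nc: greybody(nu, t, b, nc, omega),
                    nu_obs, s_obs, sigma=s_err, p0=[70.0, 1.8, 350.0])
print("T_d = %.0f K, beta = %.1f, nu_c = %.0f GHz" % tuple(popt))
# -> recovers ~80 K, ~2.0, ~400 GHz for these placeholder data.
\end{verbatim}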
We adopted the source sizes of $0\farcs27 \times 0\farcs14$ for the
eastern nucleus and $0\farcs16 \times 0\farcs13$ for the western
nucleus \citep{sak08}, since their images have the highest spatial
resolution at the highest frequency around submillimeter wavelengths.
For the source size of the total flux density, we adopted the
deconvolved size of the extended $^{12}$CO(2-1) emission of
$1\farcs94 \times 1\farcs28$ \citep{sco97}.
The flux density of the non-thermal component is subtracted from all
the fluxes in the plots before the fitting, following the method
explained in \citet{sco91}, although it is not significant,
especially at high frequencies: several mJy, or several percent of
the flux, around $200-300$~GHz, and even less at $600-700$~GHz.
We also estimated the CO flux contamination of the total flux, but
most of the data are affected by only a few percent, and the fitting
result did not change.
The fitting results are summarized in Table~\ref{tab-dust}.
We obtained from the fit to the total flux density $T_{\rm d}$ of
51~K and $\beta$ of 1.4, which are consistent with the past estimates
of $T_{\rm d} \sim 40-60$~K with $\beta \sim 1.2-2.0$
\citep{sco91,dow98,dun01,kla01}.
The critical frequency is estimated to be 2200~GHz
($\sim140~\micron$), which is also consistent with the estimation
that dust is already optically thick at $\sim100~\micron$
\citep{sco91,kla01}.
If we adopt a smaller source size for the total flux of
$1\farcs13$ \citep{sco97}, $T_{\rm d}$, $\beta$, and $\nu_{\rm c}$
would be 66~K, 1.3, and 2000~GHz ($\sim150~\micron$), respectively.
The temperature is slightly higher, but the critical frequency and
the emissivity index are still consistent with the past estimates.
The fitting for the two nuclei gives high temperatures of 83~K and
180~K for the east and west nuclei, respectively.
These high temperatures are due to the small source sizes.
The emissivity indices are different between the two nuclei, 2.1 for
the eastern nucleus and 1.2 for the western nucleus.
The critical frequencies are estimated to be 370~GHz
($\sim810~\micron$) for the eastern nucleus and 190~GHz
($\sim1.6$~mm) for the western nucleus.
The largest uncertainties in our fitting for the two nuclei come
from the source flux densities and the adopted source sizes.
If we increase the 689~GHz flux density of each nucleus by 30\%,
which corresponds to the calibration error our data may have
(Sect.~\ref{obs}), the fitting results would be $T_{\rm d}=120$~K
and 310~K, $\beta=1.8$ and 0.7, and $\nu_{\rm c}=520$~GHz
($\sim580~\micron$) and 610~GHz ($\sim490~\micron$) for the eastern
and western nuclei, respectively.
We adopted the source size derived at $860~\micron$ \citep{sak08},
but the effective source size at $435~\micron$ may be different from
that at $860~\micron$.
This is because the opacity is wavelength dependent,
and therefore the effective source size is also wavelength dependent.
Ideally we need to determine the source size at each wavelength, but
our spatial resolution at $435~\micron$ is too low for this.
Since opacity is higher at shorter wavelength, the effective source
size at $435~\micron$ may be larger than what we adopted.
If we increase the source size (area) of each nucleus by 50\%, which
roughly corresponds to the deconvolved size of the double nucleus in
our data, the fitting results would be $T_{\rm d}=49$~K and 97~K,
$\beta=2.1$ and 1.1, and $\nu_{\rm c}=400$~GHz ($\sim750\micron$) and
210~GHz ($\sim1.4$~mm) for the eastern and western nuclei,
respectively.
The fitting results, with their uncertainties, indicate that the
dust temperature, the emissivity index, and the critical frequency
for the eastern nucleus are better constrained than in past
observations, because of our high frequency observations with high
spatial resolution.
The eastern nucleus seems to have a warm temperature of 49 -- 120~K
and a steep emissivity index of $\sim2$, and to become optically
thick at frequencies above $\sim400$~GHz.
\citet{sak08} derived a dust temperature of 30 -- 160~K, so our
estimate narrows the range.
They also estimated the 350~GHz opacity of 2.8 for $\beta=2$ (and 0.8
for $\beta=1$), hence our result is somewhat lower, but still both
results indicate a high opacity condition at submillimeter
wavelengths in the eastern nucleus.
The emissivity index and the critical frequency for the western
nucleus are less constrained than those for the eastern nucleus, but
are better constrained than in past observations.
Our fitting results indicate a shallow emissivity index of about
unity and a low critical frequency of $\lesssim600$~GHz.
\citet{dow07} estimated the 230~GHz opacity as $\ge0.7$ and
\citet{sak08} estimated the 350~GHz opacity as $0.8-5.3$ for
$\beta=2$ (the estimated ranges depend on the source size and the
flux errors).
Both of these observations and our results indicate that the western
nucleus is optically thick at submillimeter wavelengths.
The fitted temperature for the western nucleus, on the other hand,
has a large range, and therefore does not set a tighter limit than
the past estimates.
It appears that the western nucleus is warmer than the eastern
nucleus, with dust temperatures of a few hundred Kelvin.
\citet{dow07} suggested a dust temperature of 170~K, and
\citet{sak08} derived a temperature of 90 -- 180~K, hence our
estimate is consistent.
Our results indicate that the derived properties, especially the
emissivity indices, are different between the two nuclei, suggesting
that the dust properties, such as the dust size distribution or
composition, are different.
This difference may reflect the dust properties of the original host
galaxy of each nucleus, or differences in activity, such as star
formation or AGN activity, in the nuclei after the merger.
\subsection{Extended Component in the Dust Emission}
\label{dis-ext}
Our $435~\mu$m (689~GHz) continuum data missed a significant amount
($\sim58\%$) of the total flux (Sect.~\ref{res-cont435}).
Here we discuss whether this missing flux can be attributed to an
extended component in the dust emission.
The molecular gas clearly has an extended component with a size of
$\sim2\arcsec$ ($\sim1$~kpc), which is interpreted as a molecular gas
disk from the gas kinematics \citep{sco97,dow98,sak99}.
In dust emission, on the other hand, the extended component is weakly
detected or not detected at a significant signal level.
At 1.3~mm, \citet{dow98} suggested that the flux of $55\pm11$~mJy can
be attributed to the flux from the extended component, but
\citet{sak99} did not detect any significant emission from the
extended component.
At $860~\micron$, \citet{sak08} attributed about 25\% of the
total flux density to emission from outside the two nuclei.
\begin{deluxetable*}{ccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Integrated $^{12}$CO intensities and intensity ratios
at each nucleus
\label{tab-co}}
\tablehead{
\colhead{Source}
& \colhead{I($^{12}$CO 2-1)}
& \colhead{I($^{12}$CO 3-2)}
& \colhead{I($^{12}$CO 6-5)}
& \colhead{$^{12}$CO(6-5)/(2-1)}
& \colhead{$^{12}$CO(6-5)/(3-2)}
& \colhead{$^{12}$CO(3-2)/(2-1)} \\
& \colhead{[K km s$^{-1}$]}
& \colhead{[K km s$^{-1}$]} & \colhead{[K km s$^{-1}$]} & & &
}
\startdata
East Nucleus
& $5540 \pm 830$ & $5040 \pm 760$ & $2390 \pm 720$
& $0.43 \pm 0.14$ & $0.47 \pm 0.16$ & $0.91 \pm 0.19$ \\
West Nucleus
& $4460 \pm 670$ & $5140 \pm 770$ & $2840 \pm 850$
& $0.64 \pm 0.21$ & $0.55 \pm 0.19$ & $1.15 \pm 0.24$
\enddata
\tablecomments{The adopted spatial resolution is
$1\farcs3\times0\farcs8$ with P.A.\ of $129\arcdeg$.
The $^{12}$CO(6-5) data for the eastern and western nuclei of
Arp 220 are our data, and those for $uv$-matched
$^{12}$CO(2-1) and $^{12}$CO(3-2) data are from \citet{sak99} and
\citet{sak08}.}
\end{deluxetable*}
\begin{figure*}
\plottwo{figure9a.eps}
{figure9b.eps}
\caption{Rotational transition dependence of the $^{12}$CO brightness
temperatures (SEDs) of the double nucleus in Arp 220 and those of
other galaxies.
The horizontal axis is the upper rotational transition levels of
the $^{12}$CO lines and the vertical axis is the brightness
temperatures of the $^{12}$CO lines on an arbitrary scale.
We fixed the $Z$($^{12}$CO)/($dv/dr$) of
$5\times10^{-5}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$.
(a) Arp 220 $^{12}$CO SED of each nucleus overplotted with the
LVG calculation results.
Dashed lines are for two temperatures at a density of
$10^{3.6}$~cm$^{-3}$.
Dotted lines are for two densities at a temperature of 50~K.
(b) $^{12}$CO SED of Arp 220 and other galaxies.
The multi-J $^{12}$CO data of the Galactic Center
($|l|<2.5\arcdeg$) are taken from \citet{fix99}, and those of
M82 and Mrk 231 are compiled by \citet{wei05a} and
\citet{pap07}, respectively.
The $^{12}$CO data of BR 1202--0725 are taken from the
following papers:
J=1-0: \citet{rie06},
J=2-1: \citet{car02},
J=4-3, J=7-6: \citet{omo96}, and
J=5-4: \citet{oht96}.
The best fitted molecular gas temperature curves, made from
LVG calculations for a density of $10^{3.6}$~cm$^{-3}$, are
also overplotted for reference.
\label{fig-coj}}
\end{figure*}
Assuming that all of the missing flux of our $435~\mu$m continuum
data is from the extended component, we derived the dust properties
of this component by applying the $\chi^2$ fitting mentioned above to
the data between 1.3~mm and $435~\mu$m.
We adopt for the size of this component the same size that we adopted
for the fitting of the total flux density, which is
$1\farcs94\times1\farcs28$ \citep{sco97}.
The fitting result is shown in Fig.~\ref{fig-sed} with a dash-dotted
line, and the derived values are $T_{\rm d}\sim38$~K, $\beta\sim2.4$,
and the $\nu_{\rm c}\sim1500$~GHz ($\sim200~\mu$m).
These values are roughly consistent with the values derived
by \citet{gon04} using ISO LWS data; they derived for the extended
component $T_{\rm d}\sim50$~K, $\nu_{\rm c}\sim3000$~GHz
($\sim100~\mu$m), and a source size of $\sim1\farcs8 - 1\farcs9$
(these values vary depending on their models) assuming $\beta$ of
2.0.
These results suggest that a significant amount ($>50\%$) of the
$435~\mu$m flux is in the extended component, as mentioned in
Sect.~\ref{res-cont435}, in contrast to longer wavelengths, where the
continuum comes predominantly from the two nuclei.
A recent SMA U/LIRG survey shows that many of the sample galaxies
(9 out of 14) have large missing fluxes (typically $50-80\%$) in
$880~\mu$m continuum emission, even though many of the galaxies also
show compact distributions.
This result suggests that many ULIRGs have extended continuum
emission around very luminous compact cores \citep{wil08}, similar to
the case of Arp 220.
\subsection{Excitation Conditions of Molecular Gas in the Two Nuclei}
\label{dis-co}
We made $^{12}$CO SEDs of the two nuclei using our $^{12}$CO(6-5)
data with the interferometric $^{12}$CO(2-1) and $^{12}$CO(3-2) data
\citep{sak99,sak08} to study the $^{12}$CO excitation conditions.
We first matched the shortest $uv$ distance for all three data sets,
and imaged at the same synthesized beam size of
$1\farcs3 \times0\farcs8$ with P.A.\ of $129\arcdeg$, which is the
same spatial resolution as our $^{12}$CO(6-5) image.
The integrated intensities and line ratios of these three lines for
each nucleus are listed in Table~\ref{tab-co}, and the relative
$^{12}$CO intensities of various transitions are shown in
Fig.~\ref{fig-coj} for each nucleus.
The decrease of $^{12}$CO intensities toward higher-J in the western
nucleus is smaller than that in the eastern nucleus, and indeed the
intensity ratios are higher for the western nucleus than those for
the eastern nucleus.
These differences are, however, within observational errors.
Thus the difference in the excitation conditions of molecular gas
between the two nuclei is not significant.
To discuss the excitation conditions more quantitatively, we
estimated the excitation conditions of the molecular gas using the
large-velocity-gradient (LVG) approximation \citep{gol74}.
The CO collision rates for temperatures of $\leq100$~K were taken
from \citet{flo85}, and those for $\geq250$~K from \citet{mck82}.
In these calculations, we assume that all the $^{12}$CO emission
comes from the same region (i.e., one-zone model), and assume the
$^{12}$CO relative abundance over velocity gradient,
$Z$($^{12}$CO)/($dv/dr$), of
$5\times10^{-5}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$.
In Fig.~\ref{fig-coj}(a), we overplotted two kinds of curves: one
(dashed lines) shows the temperature dependence (with $n_{\rm H_{2}}$
fixed at $10^{3.6}$~cm$^{-3}$), and the other (dotted lines) shows
the number density dependence (with $T_{\rm k}$ fixed at 50~K).
We plotted only the curves close to the upper or lower limits, to
show the possible ranges of the excitation conditions and the
goodness of the fit.
Under the above LVG assumptions, it is estimated that the eastern
nucleus has a molecular gas temperature of $\sim30-250$~K, or a
density of $\sim10^{3.5\pm0.2}$~cm$^{-3}$.
For the western nucleus, we could only derive the lower limits,
which are $\gtrsim40$~K for temperature and
$\gtrsim10^{3.5}$~cm$^{-3}$ for density.
As mentioned above, the estimated molecular gas conditions for the
two nuclei overlap, but the western nucleus tends to have higher
temperature or density than the eastern nucleus.
This tendency is similar to that derived from the dust SED fitting
(Sect.~\ref{dis-sed}).
Indeed, the derived molecular gas and dust temperatures for both
nuclei match well.
This suggests that the molecular gas and dust reside in similar
regions and are in thermal equilibrium.
We also calculated the dependence of our results on
$Z$($^{12}$CO)/($dv/dr$).
If we decrease $Z$($^{12}$CO)/($dv/dr$) by an order of magnitude
(i.e., decrease the $^{12}$CO relative abundance or increase the
velocity gradient or both; we fixed $n_{\rm H_{2}}$ of
$10^{4.0}$~cm$^{-3}$ or $T_{\rm k}$ of 100~K), the temperature or the
density increases by about a factor of a few in both nuclei.
If we increase $Z$($^{12}$CO)/($dv/dr$) by an order of magnitude (we
fixed $n_{\rm H_{2}}$ of $10^{3.4}$~cm$^{-3}$ or $T_{\rm k}$ of
30~K),
the temperature or the density decreases by about a factor of a few.
Therefore a small (within an order of magnitude) change in
$Z$($^{12}$CO)/($dv/dr$) does not change the result significantly.
Our $^{12}$CO SEDs are compared with those of other galaxies in
Fig.~\ref{fig-coj}(b).
We plotted the multi-J $^{12}$CO intensities of the Galactic Center
(a normal, quiescent galaxy), M82 (a nearby starburst galaxy),
Mrk 231 (an evolved ULIRG with an AGN at the nucleus), and
BR 1202--0725 (a radio-quiet, CO-bright high-z quasar at $z=4.69$)
together with the $^{12}$CO SEDs of the Arp 220 nuclei.
The brightness temperatures of the Galactic Center quickly decrease
with the increase of rotational transitions, but those of
BR 1202--0725, M82, and Mrk 231 stay almost constant even at high-J
transitions.
We overplotted the best fit temperature curves on each source for
comparison (as shown in Fig.~\ref{fig-coj}a, higher temperature can
be replaced with higher density).
The Galactic Center can be modeled well with low temperature (or low
density) conditions, but other galaxies are explained with higher
temperatures (or higher densities).
The two nuclei of Arp 220 are similar to these higher temperature
(density) galaxies, and differ from the Galactic Center.
This suggests that the molecular gas excitation conditions in the
double nucleus of Arp 220 are similar to those in these galaxies.
Note that $^{12}$CO SEDs up to J=6-5 or 7-6 with the current
accuracy are not enough to distinguish the excitation conditions
among these high temperature (density) galaxies, including the two
nuclei of Arp 220.
Higher accuracy or higher-J observations are needed to differentiate
the excitation conditions of these galaxies.
It is known that Arp 220, M82, and BR 1202--0725 harbor active star
formation.
The excitation conditions in the two nuclei of Arp 220 and in M82
are averaged over the central few hundred pc, and that in
BR 1202--0725 is averaged over several kpc.
M82 has a gradient in the physical conditions from the center to the
outer region \citep{pet00}, and the physical conditions derived above
are more similar to those of the center, where the starburst region
exists.
BR 1202--0725 has two sources, north and south, and the southern
source may consist of two sources \citep{car02}, probably interacting
with each other.
The similar excitation conditions of the molecular gas, regardless
of the observed regions, suggest that the observed molecular gas is
biased toward gas closely related to the star forming regions, and
that the effect of star forming activity on the excitation conditions
of the surrounding molecular gas is similar.
The $^{12}$CO SED study can also be a useful tool to search for
AGN(s), since the nearby Seyfert galaxies exhibit strong enhancement
of higher-J $^{12}$CO lines toward AGNs \citep[e.g.,][]{mat04,hsi08}.
Fig.~\ref{fig-coj}(b) shows, however, that the $^{12}$CO SED of
the AGN hosting ULIRG Mrk 231 does not display any higher-J
enhancement, and the $^{12}$CO SED comparison between Mrk 231 and the
two nuclei of Arp 220 shows no clear difference.
In addition, the comparison between Mrk 231 and the starburst
dominated galaxies M82 and BR 1202--0725 also shows no clear
difference.
We therefore could not find any evidence of an AGN in Arp 220 from
this $^{12}$CO SED study.
These results suggest that the AGN contribution to the surrounding
molecular gas (at least for Mrk 231) is much smaller than in the
nearby Seyferts, possibly due to the smoothing effect of a larger
(linear scale) beam or to a larger opacity toward the AGN.
\subsection{Molecular Gas Abundance Anomaly in the Central Region of
Arp 220?}
\label{dis-abn}
As mentioned in Sect.~\ref{res-ratio}, we obtained a very low
$^{13}$CO(2-1)/C$^{18}$O(2-1) line intensity ratio of about unity.
Recent SMA observations toward nearby active star forming galaxies
(NGC 253, NGC 1365, and NGC 3256) show $^{13}$CO(2-1)/C$^{18}$O(2-1)
ratios of $\sim4$ \citep{sak06a,sak06b,sak07}.
The $^{13}$CO(1-0)/C$^{18}$O(1-0) line ratios in nearby galaxies are
$\sim4$ \citep{sag91}, similar to the J=2-1 ratios.
If both the $^{13}$CO and the C$^{18}$O lines are optically thin,
the $^{13}$CO/C$^{18}$O line ratios for J=2-1 and J=1-0 are expected
to have almost the same values.
Some interferometric $^{13}$CO and C$^{18}$O observations of
nearby galaxies show $^{13}$CO/C$^{18}$O ratios of about 2
\citep{mei04,cho07}, but not unity as in Arp 220 (note that some
regions observed by \citeauthor{mei04} show $^{13}$CO/C$^{18}$O
ratios of about unity, but the S/Ns are low).
The intensity ratio of about unity in Arp 220 is therefore unusual
compared with those in other galaxies.
The intensity ratio may be closely related to the abundance ratio;
the intensity ratio is expected to be similar to the abundance ratio,
if both lines are optically thin.
The abundance ratio between $^{13}$CO and C$^{18}$O in our Galaxy is
5.5 for the Solar System and 12.5 for the Galactic Center, and that
in external galaxies is $3-5$ \citep{hen93}, assuming
[$^{13}$CO]/[C$^{18}$O]
= [$^{13}$C]/[$^{12}$C] $\times$ [$^{16}$O]/[$^{18}$O].
Indeed, the abundance ratios and the aforementioned intensity ratios
for external galaxies are similar.
The intensity ratio of about unity is unusual also from the abundance
viewpoint.
Here we discuss possible reasons for this low intensity ratio using
the LVG calculations.
We assume that both the $^{13}$CO and C$^{18}$O molecules are
located in the same region (one-zone model).
Note that since the brightness temperatures of these two lines differ
from that of the $^{12}$CO(2-1) line (see Fig.~\ref{fig-cospec}), it
is evident that these two lines and the $^{12}$CO(2-1) line emanate
from different regions.
We also assume a [$^{13}$CO]/[C$^{18}$O] relative abundance ratio of
4 \citep{wan04}.
Under this relative abundance ratio, both lines have to be optically
thick for the line ratio to be unity.
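This requirement can be seen directly from the radiative transfer of
two lines sharing the same excitation temperature: with
$\tau_{13} = 4\,\tau_{18}$, the intensity ratio
$(1-e^{-\tau_{13}})/(1-e^{-\tau_{18}})$ runs from 4 in the optically
thin limit down to unity once both lines saturate.
A minimal illustration:
\begin{verbatim}
import numpy as np
# 13CO/C18O intensity ratio for equal excitation temperature and an
# abundance (opacity) ratio of 4.
for tau18 in [0.01, 0.1, 1.0, 3.0, 10.0]:
    r = np.expm1(-4 * tau18) / np.expm1(-tau18)
    print("tau(C18O) = %5.2f -> ratio = %.2f" % (tau18, r))
# ratio -> 4 when thin, -> ~1 once tau(C18O) >~ 3, as required for
# the observed 13CO(2-1)/C18O(2-1) ~ 1.
\end{verbatim}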
We performed the calculations assuming $Z$($^{13}$CO)/($dv/dr$)
values of $1\times10^{-5}$, $1\times10^{-6}$, and
$1\times10^{-7}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$.
$Z$($^{13}$CO)/($dv/dr$) of
$1\times10^{-6}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$ can be explained as
the Galactic abundances of [$^{13}$CO]/[H$_{2}$] = $1\times10^{-6}$
\citep{sol79} with the velocity gradient of 1~km~s$^{-1}$~pc$^{-1}$.
Other parameters are the same as in Sect.~\ref{dis-co}.
The calculation results are shown in Fig.~\ref{fig-lvg}.
In the case of $Z$($^{13}$CO)/($dv/dr$) of
$1\times10^{-6}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$, a high density of
$n_{\rm H_{2}} > 1 \times 10^{4}$~cm$^{-3}$ is needed even for
$T_{\rm k}$ of 10~K, and a density about an order of magnitude higher
is needed at 100~K to realize the $^{13}$CO(2-1)/C$^{18}$O(2-1)
ratio of $1.0\pm0.3$.
This is because both the $^{13}$CO and C$^{18}$O lines easily become
optically thin at lower-J with the increase of temperature, since the
population moves to higher-J.
To compensate for this, the density, and therefore the column
density per unit velocity,
$N$($^{13}$CO or C$^{18}$O)/$dv = Z$($^{13}$CO or C$^{18}$O)/($dv/dr$)
$\times n_{\rm H_{2}}$, also has to be high for both lines to be
optically thick.
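For scale, the adopted Galactic-like value gives the following
column densities per unit velocity (a minimal sketch; the pc-to-cm
conversion is the only extra ingredient):
\begin{verbatim}
# N(13CO)/dv = Z(13CO)/(dv/dr) * n(H2), converted from
# cm^-3 pc (km/s)^-1 to cm^-2 (km/s)^-1.
PC_CM = 3.086e18              # 1 pc in cm
z_over_dvdr = 1e-6            # [(km/s/pc)^-1], Galactic-like value
for n_h2 in [1e4, 1e5]:       # [cm^-3]
    print("n = %.0e cm^-3 -> N/dv = %.1e cm^-2 (km/s)^-1"
          % (n_h2, z_over_dvdr * n_h2 * PC_CM))
# -> ~3e16 and ~3e17 cm^-2 per km/s at the densities discussed above.
\end{verbatim}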
If we increase $Z$($^{13}$CO)/($dv/dr$) by an order of magnitude, the
density decreases by about a factor of several at a certain
temperature.
This is because the increase of $Z$($^{13}$CO)/($dv/dr$) makes the
lines become optically thick more easily.
The response will be opposite if we decrease $Z$($^{13}$CO)/($dv/dr$)
by an order of magnitude.
In the case of a lower [$^{13}$CO]/[C$^{18}$O] relative abundance
ratio of 2 (half the abundance ratio used above, with [C$^{18}$O]
increased), the required density for a ratio of $1.0\pm0.3$ is about
an order of magnitude lower at a given temperature
(Fig.~\ref{fig-lvg}).
This is because, with the C$^{18}$O abundance increased from the
above condition, the opacity, and therefore the intensity, of the
C$^{18}$O line becomes similar to that of the $^{13}$CO line.
\begin{figure}
\plotone{figure10.eps}
\caption{The LVG calculation results for the
$^{13}$CO(2-1)/C$^{18}$O(2-1) ratio as a function of H$_{2}$
number density and kinetic temperature.
Solid, dashed, and dotted lines are the
$^{13}$CO(2-1)/C$^{18}$O(2-1) ratios with
$Z$($^{13}$CO)/($dv/dr$) of $1\times10^{-6}$, $1\times10^{-7}$,
and $1\times10^{-5}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$,
respectively, under the [$^{13}$CO]/[C$^{18}$O] relative
abundance ratio of 4.
Dot-dashed lines are the $^{13}$CO(2-1)/C$^{18}$O(2-1) ratios
with the [$^{13}$CO]/[C$^{18}$O] relative abundance ratio of 2
under $Z$($^{13}$CO)/($dv/dr$) of
$1\times10^{-6}$~(km~s$^{-1}$~pc$^{-1}$)$^{-1}$.
\label{fig-lvg}}
\end{figure}
As shown above, the important parameters for the low
$^{13}$CO(2-1)/C$^{18}$O(2-1) ratio are (1) the molecular gas density
and hence the column density per unit velocity, and (2) the molecular
abundance.
First, we discuss the gas density.
The molecular gas density needs to be high
($\gtrsim10^{4}$~cm$^{-3}$) for the ratio to be around unity.
The molecular gas in Arp 220 indeed contains high density gas, which
is supported by the observations of high critical density molecules,
such as HCN, HCO$^{+}$ or CS \citep[e.g.,][]{sol90,sol92,gre08}.
On the other hand, there are many galaxies with detections of these
high critical density molecular lines, but almost no reports of
$^{13}$CO(2-1)/C$^{18}$O(2-1) $\sim1$ so far.
One possible cause of the difference between Arp 220 and other
galaxies is the large fraction of dense molecular gas.
Our $^{13}$CO(2-1) and C$^{18}$O(2-1) images exhibit compact
distributions around the double nucleus, and the molecular gas in
Arp 220 is dominated by dense gas \citep[e.g.,][]{gre08}.
If the dense gas is concentrated toward the double nucleus, most of
the molecular gas toward the double nucleus can be dominated by dense
gas, which makes the column density high enough to result in a
$^{13}$CO(2-1)/C$^{18}$O(2-1) ratio of unity.
Second, we discuss the molecular abundance.
To realize the $^{13}$CO(2-1)/C$^{18}$O(2-1) intensity ratio of about
unity by changing the abundances, two possibilities can be
considered: one is an underabundance of the $^{13}$CO molecule, and
the other is an overabundance of C$^{18}$O.
Deficiency of $^{13}$CO abundance is often suggested in merging
galaxies based on their larger $^{12}$CO(2-1)/$^{13}$CO(2-1) ratios
than those in starburst or Seyfert galaxies
\citep[e.g.,][]{aal91,cas92}.
However, as mentioned in Sect.~\ref{res-ratio}, the observed
$^{12}$CO(2-1)/$^{13}$CO(2-1) ratio of Arp 220 is not as extreme as
those in other merging galaxies, and not significantly different
from those in starburst or Seyfert galaxies.
In addition, a possible reason for the $^{13}$CO deficiency is the
selective photodissociation of the $^{13}$CO molecules
\citep[e.g.,][]{cas92}.
In this case, however, C$^{18}$O molecules will be more affected
by the selective photodissociation \citep{dis88,cas92}, hence this
cannot be the cause.
We therefore think that the underabundance of $^{13}$CO is possible,
but less likely.
An overabundance of the C$^{18}$O molecule may be achievable under
the circumstances in Arp 220.
Massive stars synthesize a large amount of the primary element,
$^{12}$C, during the helium-burning phase, and it is injected into
the interstellar medium via supernova explosions \citep{cas92}.
The $^{18}$O enrichment also occurs in massive stars
\citep{hen93,sag91}, either in Wolf-Rayet stars or in Type II
supernova explosions, through partial helium burning \citep{ama95}.
Since Arp 220 is very active in star formation (Sect.~\ref{intro}),
both the $^{12}$C and $^{18}$O enrichment due to the above mechanism
can be realized.
This can lead to the enrichment of the C$^{18}$O molecule.
This possibility still needs to be studied quantitatively.
Note that a recent molecular gas abundance study toward a young
(several Gyr old) galaxy at $z=0.89$ shows a low
[$^{13}$CO]/[C$^{18}$O] ratio of $1.9\pm0.2$ \citep{mul06}.
This result also suggests that the low intensity ratio is related to
an abundance anomaly during the young star formation epoch.
\section{CONCLUSIONS}
\label{concl}
We observed the central region of Arp 220 in the $^{12}$CO(6-5),
$^{12}$CO(2-1), $^{13}$CO(2-1), and C$^{18}$O(2-1) lines, and
435~$\mu$m and 1.3~mm continuum simultaneously with the SMA.
The two nuclei are clearly resolved in the 435~$\mu$m image, and
kinematically resolved in the $^{12}$CO(6-5) image.
For the double nucleus, we concluded as follows:
\begin{itemize}
\item The difference of the peak intensities in our 435~$\mu$m image
between the two nuclei is smaller than at longer wavelengths.
From the dust SED fitting, the dust in the two nuclei is
estimated to be optically thick at 435~$\mu$m.
The emissivity indices are estimated to be $\sim2.0$ for the
eastern nucleus and $\sim1.0$ for the western nucleus.
This suggests that the dust properties, such as dust size
distributions or dust compositions, are different between the two
nuclei.
\item The $^{12}$CO SEDs are similar between the two nuclei, with the
      western nucleus having higher upper limits on the excitation
      conditions than the eastern nucleus.
      The $^{12}$CO SEDs of both nuclei are similar to those of M82
      and BR 1202--0725, characterized by small intensity decreases
      up to J = 6-5 (a $^{12}$CO(6-5)/(2-1) ratio of about 0.5).
      This suggests that the molecular gas in the two nuclei of
      Arp 220 has excitation conditions similar to those in M82 or
      BR 1202--0725, namely a density of
      $\gtrsim10^{3.3}$~cm$^{-3}$ or a temperature of $\gtrsim30$~K.
\item We could not find any evidence of an AGN in Arp 220 from the
      $^{12}$CO SED study.
      There is no clear difference in the $^{12}$CO SEDs between the
      AGN-hosting ULIRG Mrk 231 and the double nucleus of Arp 220
      (nor, therefore, M82 and BR 1202--0725).
      This suggests that AGN heating is not important for the
      molecular gas excitation conditions on large scales (a few
      hundred pc to a few kpc).
\end{itemize}
For the global characteristics of the molecular gas and dust in
Arp 220, our conclusions are as follows:
\begin{itemize}
\item Based on the large amount of missing flux in our data and other
      previously published evidence on the molecular gas and dust, we
      suggest the existence of an extended component in the dust
      emission, with dust properties $T_{\rm d}\sim38$~K,
      $\beta\sim2.4$, and $\nu_{\rm c}\sim200~\micron$ (the assumed
      functional form is recalled just after this list).
      A recent SMA U/LIRG survey suggests that many U/LIRGs have
      extended components \citep{wil08}, so an extended dust
      component might be common.
\item The $^{12}$CO(2-1) line spectrum shows stronger line intensity
      at lower velocities than at higher velocities, but the spectra
      of the higher-J lines show the opposite, indicating that the
      higher velocity gas has higher density, higher temperature, or
      both, than the lower velocity component.
\item The intensities of the $^{13}$CO(2-1) and C$^{18}$O(2-1) lines
are similar.
      This suggests that the molecular gas in Arp 220 is dense
      enough to be optically thick in both lines, or that the
      abundance of one of the two molecules deviates from the values
      in other nearby galaxies.
To explain the ratio with the density effect, Arp 220 should have
molecular gas largely dominated by dense gas, more than in other
nearby galaxies.
Underabundance of $^{13}$CO is possible, but it is less likely.
Overabundance of C$^{18}$O is also possible, considering the
$^{12}$C and $^{18}$O enrichment by high mass stars.
\end{itemize}
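For reference, the extended-component parameters quoted above assume
the standard single-temperature graybody form of the dust SED, with
$\nu_{\rm c}$ the turnover at which the dust optical depth reaches
unity (quoted above through the corresponding wavelength),
\begin{equation}
S_\nu \propto \left[1-e^{-(\nu/\nu_{\rm c})^{\beta}}\right]
B_\nu(T_{\rm d})\,,
\end{equation}
where $B_\nu(T_{\rm d})$ is the Planck function at the dust
temperature $T_{\rm d}$.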
\acknowledgements
We thank all the past and present SMA staff for designing,
constructing, and supporting the SMA, especially those who worked on
and realized the 690~GHz observations.
We also thank the anonymous referee for helpful comments.
The Submillimeter Array is a joint project between the Smithsonian
Astrophysical Observatory and the Academia Sinica Institute of
Astronomy and Astrophysics and is funded by the Smithsonian
Institution and the Academia Sinica.
This work is supported by the National Science Council (NSC) of
Taiwan, NSC 97-2112-M-001-021-MY3.
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\itshape}}
\makeatother
\def\pplogo{\vbox{\kern-\headheight\kern -29pt
\halign{##&##\hfil\cr&{\ppnumber}\cr\rule{0pt}{2.5ex}&\ppdate\cr}}}
\makeatletter
\def\ps@firstpage{\ps@empty \def\@oddhead{\hss\pplogo}%
\let\@evenhead\@oddhead}
\thispagestyle{plain}
\def\maketitle{\par
\begingroup
\def\thefootnote{\fnsymbol{footnote}}
\def\@makefnmark{\hbox{$^{\@thefnmark}$\hss}}
\if@twocolumn
\twocolumn[\@maketitle]
\else \newpage
\global\@topnum\z@ \@maketitle \fi\thispagestyle{firstpage}\@thanks
\endgroup
\setcounter{footnote}{0}
\let\maketitle\relax
\let\@maketitle\relax
\gdef\@thanks{}\gdef\@author{}\gdef\@title{}\let\thanks\relax}
\makeatother
\numberwithin{equation}{section}
\newcommand\nn{\nonumber}
\newcommand\eea{\end{eqnarray}}
\newcommand\bea{\begin{eqnarray}}
\newcommand{\sfrac}[2]{{\textstyle\frac{#1}{#2}}}
\newcommand\di{\partial}
\newcommand\mpl{M_{\rm Pl}}
\newcommand\spacelike{\parbox{.7cm}{\Huge$\times$}}
\renewcommand{\tanh}{\mathop{\rm th}\nolimits}
\renewcommand{\ln}{\mathop{\rm ln}\nolimits}
\newcommand{\sm}[1]{{\scriptscriptstyle \rm #1}}
\renewcommand{\Im}{\mathop{\rm Im}\nolimits}
\renewcommand{\Re}{\mathop{\rm Re}\nolimits}
\renewcommand{\t}{\tilde}
\usepackage{latexsym}
\usepackage{physics}
\usepackage{mathrsfs}
\usepackage{amsthm}
\usepackage{keyval}
\usepackage{ifthen}
\usepackage{amsbsy}
\newcommand{\traza}[1]{Tr\left({#1}\right)}
\newcommand{\cris}[2]{\Gamma^{#1}_{\hphantom{#1}#2}}
\newcommand{\av}[1]{\overline{Y}_{lm}^{#1}}
\newcommand{\at}[1]{T_{lm}^{#1}}
\newcommand{\h}[1]{h_{l}^{#1}}
\newcommand{\ho}[1]{h_{0}^{#1}}
\newcommand{\hu}[1]{h_{1}^{#1}}
\newcommand{\drh}[1]{\partial_r h_{l}^{#1}}
\newcommand{\drho}[1]{\partial_r h_{0}^{#1}}
\newcommand{\drhu}[1]{\partial_r h_{1}^{#1}}
\newcommand{\y}[1]{Y_{l{#1}} (\theta,\phi)}
\newcommand{\pl}[1]{P_l^{#1}(\cos{\theta})}
\newcommand{\GG}[1]{\Gamma({#1})}
\textwidth = 6.5 in
\textheight = 8.5 in
\oddsidemargin = 0.0 in
\begin{document}
\setcounter{page}0
\def\ppnumber{\vbox{\baselineskip14pt
}}
\def\ppdate{
} \date{}
\author{Valentin Benedetti\footnote{e-mail: [email protected]}, Horacio Casini\footnote{e-mail: [email protected]}\\
[7mm] \\
{\normalsize \it Centro At\'omico Bariloche and CONICET}\\
{\normalsize \it S.C. de Bariloche, R\'io Negro, R8402AGP, Argentina}
}
\bigskip
\title{\bf Entanglement entropy of linearized \\ gravitons in a sphere
\vskip 0.5cm}
\maketitle
\begin{abstract}
We compute the entanglement entropy of a massless spin $2$ field in a sphere in flat Minkowski space. We describe the theory with a linearized metric perturbation field $h_{\mu\nu}$ and decompose it in tensor spherical harmonics. We fix the gauge such that a) the two dynamical modes for each angular momentum decouple and have the dynamics of scalar spherical modes, and b) the gauge-fixed field degrees of freedom inside the sphere represent gauge invariant operators of the theory localized in the same region. In this way the entanglement entropy turns out to be equivalent to the one of a pair of free massless scalars where the contributions of the $l=0$ and $l=1$ modes have been subtracted. The result for the coefficient of the universal logarithmic term is $-61/45$
and coincides with the one computed using the mutual information.
\end{abstract}
\bigskip
\newpage
\tableofcontents
\vskip 1cm
\section{Introduction}
The entanglement entropy (EE) of vacuum fluctuations across a boundary in space
has proved to be an interesting theoretical quantity in quantum field theory (QFT). The study of EE was originally motivated by the quest to understand black hole entropy and entropy in gravity, but it turned out to have a clearer and more natural formulation in QFT. Entropy in quantum mechanics is by definition a quantity associated with a state in an algebra of operators, and ordinary QFT naturally comes with a built-in correspondence between algebras and regions of space.
The situation in gravity is less clear precisely because it is not completely understood how ``regions'' in quantum gravity might be defined in terms of the operator content of the theory (see for example \cite{Camps:2018wjf,Donnelly:2017jcd}). Holographic theories give a simple, but perhaps only partial, answer to this question. For a region of the boundary, the associated algebra is given by the one of the dual QFT in the boundary. Holographic EE \cite{Ryu:2006bv,Faulkner:2013ana} has shown there is a correspondence, at least at the semi-classical level, of this QFT entropy to an entropy in a gravity theory in the so called entanglement wedge \cite{ese1,ese2,ese3}.
In the study of EE it is important to establish a correspondence of the different terms in the entropy with known physical quantities in the model. One such signature that allows us to distinguish models from their EE is given by the coefficient of the logarithmic term. In this sense there are in the literature several calculations of logarithmic corrections to the black hole entropy formula due to the EE of quantum fields in the semiclassical background, including gravitons (for a review see \cite{Solodukhin:2011gn,Sen:2012dw}). The graviton contribution may actually be of relevance to distinguish the gravity theory \cite{Sen:2012dw}.
Nevertheless, logarithmic terms are subtle too. An example of the problems involved is the case of a Maxwell field. The logarithmic term for a free Maxwell field does not coincide with the expected trace anomaly \cite{Casini2016,dowkeresferagauge,huang}. However, the presence of electric or magnetic charges can change this result, no matter the mass of the charged particles \cite{ss1,ss2}. Without changing the theory to include charges this issue has been discussed in the literature in an effective manner using the constructions of edge modes or extended Hilbert space (see for example \cite{don1,don2}).
In this paper we compute the EE of free gravitons in flat space by treating the theory as a quantum field theory of helicity $2$ particles. In this sense we do not have to deal with the localization problems of a full quantum gravity theory. We show there are no conceptual issues for these free fields per se as QFT. As in the case of the Maxwell field, it is important in computing the EE to understand correctly what is the entropy one is computing, that is, what is the algebra and the state, as well as the meaning of the result in terms of the continuum theory. A natural way to do this is by interpreting the universal coefficients in terms of the mutual information. This is transparent in the real time formulation that we use in this paper where we have the quantum degrees of freedom always in sight. Computations using the replica trick may actually hide the nature of the entropy one is computing in the precise definition of the replica partition functions \cite{ss1}. We treat the case of a sphere computing the universal logarithmic term.
In order to compute the entanglement entropy we should consider the vacuum state in the algebra of gauge invariant operators. The latter is generated by the curvature tensor, which is gauge invariant at the linearized level. The vacuum is a Gaussian state in this algebra and we could apply EE formulas for Gaussian states in terms of the correlation functions and commutators of the Gaussian variables. However, due to the algebraic complexity of dealing with the four index curvature tensor and its commutators, we will follow a different route which is physically equivalent and will allow us to simplify the computations considerably. We will use the metric perturbation tensor $h_{\mu\nu}$ as a generator of the algebra. This is not a physical variable and we need to fix the gauge. This is done taking into account the spherical symmetry of the problem, by choosing a gauge that allows us to decouple the two radial modes for each angular momentum. However, as explained in \cite{Casini:2013rba}, while fixing the gauge converts a gauge field into a physical variable, the localization properties of these variables are very much gauge dependent. Hence we need to fix the gauge such that the gauge fixed $h_{\mu\nu}$ can be recovered from the curvature inside the region of interest for computing the EE. Otherwise, selecting the field and momentum variables in a region may compute the EE of an algebra unrelated to the geometry.
Since this gauge fixing procedure adapted to the region of interest has not been explicitly carried out in the literature before we find it instructive to see how this works in the simpler case of a Maxwell field first. We will treat the case of a Maxwell field between parallel planes in the next section and in a sphere in section \ref{maxwellsphere}. The results agree with \cite{Casini2016} where the algebra was defined directly in terms of the electric and magnetic fields instead of the gauge fixed vector potential $A_\mu$. In section \ref{gravitonplanos} we describe the theory of the linearized graviton and compute the EE between parallel planes. The case of a sphere is treated in section \ref{gravitonsphere} where we compute the logarithmic coefficient. We end with a discussion in section \ref{dis}, where we briefly compare with other results in the literature.
\section{Entanglement entropy of a Maxwell field between parallel planes}
Before studying the problem of linearized gravitons, we consider the simpler case of a free Maxwell field in $(3+1)$ dimensions, given by the Lagrangian
\begin{equation}
L=-\frac{1}{4} \int d^3 x\, F_{\mu\nu} F^{\mu\nu}=\frac{1}{2} \int d^3 x \left[(\dot{\vec{A}}(\vec{x}) + \nabla A_0(\vec{x}))^2 - (\nabla \cross \vec{A}(\vec{x}))^2 \right]\,.
\label{MaxwellLagAmu}
\end{equation}
In this section we aim to obtain the EE associated with the region ${V}$ between two parallel planes separated by a distance $L$ (Figure \ref{paralelplates}). For a Cartesian coordinate system $\vec{x}=(x^1,x^2,x^3)$ the region ${V}$ is given by ${V} = \left\{{x=(x^1,x^2,x^3), 0<x^1<L}\right\}\, . $
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{placas5.jpg}
\caption{Two parallel planes with a separation of distance $L$ in the $x^1$ direction.}
\label{paralelplates}
\end{figure}
In this context, it is particularly useful to write the field $A_\mu$ in a plane wave basis using a Fourier sum over the directions parallel to the planes. Assuming that the directions $x^2$ and $x^3$ are compactified to large sizes $R_2$ and $R_3$ with periodic boundary conditions, we can write
\begin{equation}
A_\mu(x^0,x^1,x^2,x^3) = \sum_{\vec{k}} N e^{i\vec{k}\cdot\vec{x}} A_\mu(x^0,x^1,k) ,
\label{ondasplanas1}
\end{equation}
where $A_\mu^\dag(x^0,x^1,k)=A_\mu(x^0,x^1,-k)$, and $N$ is a normalization constant that takes the value $\left[ \sqrt{2\pi R_2R_3}\right]^{-1} $. Moreover, the vector $\vec{k}$ can be expressed for $ n^2,n^3 = 0,\pm 1,\pm 2,...,\pm\infty$ as
\begin{equation}
\vec{k}=\left(0, \frac{2\pi n^2}{ R_2}, \frac{2\pi n^3}{R_3} \right) \, .
\end{equation}
The problem then decomposes into independent $(1+1)$-dimensional fields in the directions $x^0,x^1$, labeled by $\vec{k}$. To study a fixed mode $\vec{k}$, without loss of generality, we can simplify the calculation by using a coordinate system adapted to $\vec{k}$, where $\hat{x}^2=\hat{k}$ and $\hat{x}^3 = \hat{x}^1 \cross \hat{k}$. In these coordinates, each mode takes the form
\begin{equation}
N e^{ikx_2} A_\mu (x_0,x_1, k) \, .
\label{AFourier}
\end{equation}
Considering the gauge freedom of the Maxwell field, given by $A_\mu \rightarrow {A'}_\mu = A_\mu + \partial_\mu \chi$, we also decompose $\chi$ in the plane wave basis. The mode corresponding to $\vec{k}$ writes
\begin{equation}
\chi (x_0,x_1, x_2, x_3) =N e^{i k x_2} \chi (x_0,x_1, k) \,.
\label{chichi}
\end{equation}
Then, a gauge transformation of a fixed mode yields
\bea
A'_\mu (x_0, x_1, x_2, x_3) &=& N e^{ikx_2} \left[(A_0(x_0, x_1, k) + \dot{\chi} (x_0, x_1, k)) \hat{x}_0 +\right. \nonumber \\
&+&\left. (A_1 (x_0, x_1, k) + \partial_1 \chi (x_0, x_1, k))\hat{x}_1 \right. \nonumber \\
&+&\left. (A_2 (x_0, x_1, k) + ik\chi (x_0, x_1, k))\hat{x}_2 +A_3 (x_0, x_1, k) \hat{x}_3 \right] \, .
\label{maxwellgauge2}
\eea
In the light of (\ref{maxwellgauge2}), it is clear that we can fix $\chi$ in such a way that the field component parallel to $\hat{k}$ vanishes, $A_2=0$.
With this choice we have
\begin{equation}
F_{2\nu}=\partial_2 A_\nu - \partial_\nu A_2 = ikA_\nu \, .
\label{FmunuPP}
\end{equation}
In other words, in this gauge, and for each fixed mode $k$, $A_\mu$ can be expressed as a function of the gauge invariant tensor $F_{\mu\nu}$ in a way local in the coordinates $x^0,x^1$. This allows us to identify the algebra of gauge invariant operators $F_{\mu\nu}$ in between the parallel planes with the one of the quantized gauge fixed operators $A_\mu$.
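Explicitly, inverting (\ref{FmunuPP}) for each mode with $k\neq0$ gives
\begin{equation}
A_\nu(x_0,x_1,k)=-\frac{i}{k}\,F_{2\nu}(x_0,x_1,k)\,,
\end{equation}
with no derivatives or integrations involved in the coordinates $x_0,x_1$.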
To proceed we compute the Hamiltonian. We must rewrite the Lagrangian (\ref{MaxwellLagAmu}) by using the expansion (\ref{AFourier}) under the proposed gauge condition. By doing so, we obtain for each mode the Lagrangian
\bea
\mathcal{L}_k &=& 1/2 \left[\dot{A}^\dag_1\dot{A}_1+\dot{A}^\dag_3 \dot{A}_3 -k^2 {A}^\dag_1{A}_1 -k^2 {A}^\dag_3{A}_3 - \partial_1 A_3^\dag\partial_1 A_3 \right. \nonumber \\
&& +\left. k^2 {A}^\dag_0{A}_0 + \partial_1 A_0^\dag\partial_1 A_0 - A_0^\dag\partial_1 \dot{A}_1 - \partial_1 \dot{A}^\dag_1 A_0 \right] \, .
\label{MaxwellPPlag}
\eea
The canonical momenta of the fields $A_1\,,A_1^\dag\, ,A_3\,,A_3^\dag$ are given by
\bea
\pi_1 = \frac{\partial \mathcal{L}_k}{\partial \dot{A}_1 }= \frac{\dot{A}^\dag_1}{2}+ \frac{\partial_1 A^\dag_0}{2} \, \, , \, \, \pi_3 = \frac{\partial \mathcal{L}_k}{\partial \dot{A}_3 }= \frac{\dot{A}^\dag_3}{2}\,,\\
\pi^\dag_1 = \frac{\partial \mathcal{L}_k}{\partial \dot{A}^\dag_1 }= \frac{\dot{A}_1}{2}+ \frac{\partial_1 A_0}{2} \, \, , \, \, \pi^\dag_3 = \frac{\partial \mathcal{L}_k}{\partial \dot{A}^\dag_3 }= \frac{\dot{A}_3}{2}\,.
\eea
The Hamiltonian of the mode is then given by the Legendre transform
\bea
\mathcal{H}_k &=& \pi_1\dot{A}_1+\pi^\dag_1\dot{A}^\dag_1+\pi_3\dot{A}_3+\pi^\dag_3\dot{A}^\dag_3 - \mathcal{L}_k = 2\pi_1^\dag \pi_1 + 2\pi_3^\dag \pi_3 + \frac{k^2}{2} A^\dag_1 A_1 + \nonumber \\
&+& \frac{k^2}{2} A^\dag_3 A_3 +\frac{1}{2}\partial_1 A^\dag_3\partial_1 A_3 - \frac{k^2}{2} A^\dag_0 A_0 + A^\dag_0 \partial_1 \pi^\dag_1 +A_0 \partial_1 \pi_1
\label{MaxwellPPham}
\eea
with the corresponding equal time commutation relations
\bea
\left[A_1(x_0,x_1,k),\pi_1({x}_0,{x'}_1,k)\right]=i \delta(x_1-{x'}_1) \, , \nonumber \\
\left[A_3(x_0,x_1,k),\pi_3({x}_0,{x'}_1,k)\right]=i \delta(x_1-{x'}_1)\, .
\label{MaxwellPPcom}
\eea
It is clear from (\ref{MaxwellPPham}) that the field $A_0$ does not possess its own dynamics, and thus it can be treated as a Lagrange multiplier. Varying with respect to it, in order to obtain its equation of motion, we obtain the constraints
\begin{equation}
\partial_1 \pi_1 = \frac{k^2}{2} A^\dag_0 \,\, , \,\, \partial_1 \pi^\dag_1 = \frac{k^2}{2} A_0 \, .
\label{MaxwellPPvinc}
\end{equation}
Replacement of (\ref{MaxwellPPvinc}) in (\ref{MaxwellPPham}) gives
\begin{equation}\mathcal{H}_k= 2\pi_1^\dag \pi_1 + 2\pi_3^\dag \pi_3 + \frac{k^2}{2} A^\dag_1 A_1 + \frac{k^2}{2} A^\dag_3 A_3
+\frac{1}{2}\partial_1 A^\dag_3\partial_1 A_3 + \frac{2}{k^2} \partial_1 \pi_1 \partial_1 \pi^\dag_1\,.
\label{MaxwellPPham2}
\end{equation}
Making the identifications
\begin{equation}
\phi_1 = \frac{\sqrt{2}\pi_1}{|k|} \, , \, P_1=-\frac{|k|A_1}{\sqrt{2}}\,,
\end{equation}
\begin{equation}
\phi_3 = \frac{A_3}{\sqrt{2}} \, , \, P_3=\sqrt{2} \pi_3\,,
\end{equation}
where $\phi_1, P_1$ and $\phi_3, P_3$ are pairs of canonically conjugate variables,
the Hamiltonian writes
\begin{equation}
\mathcal{H}_k= P^\dag_1 P_1 + P^\dag_3 P_3 + \partial_1 \phi^\dag_1\partial_1 \phi_1 + \partial_1 \phi^\dag_3\partial_1 \phi_3 + k^2 \phi^\dag_1 \phi_1 + k^2 \phi^\dag_3 \phi_3\,.\label{217}
\end{equation}
This is exactly the Hamiltonian for the modes of two independent {\sl scalar} fields $\phi_1, \phi_3$ upon dimensional reduction (see for example \cite{Casini2016}). As a result, the algebra of gauge invariant operators for the gauge field and the vacuum expectation values inside the parallel planes are identical to the algebras and expectation values corresponding to two massless scalar fields inside the same region.
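As a simple consistency check, the prefactors in these identifications cancel, $(\sqrt{2}/|k|)(-|k|/\sqrt{2})=-1$, so the commutation relations (\ref{MaxwellPPcom}) directly give
\begin{equation}
\left[\phi_1(x_0,x_1,k),P_1(x_0,{x'}_1,k)\right]=-\left[\pi_1(x_0,x_1,k),A_1(x_0,{x'}_1,k)\right]=i\delta(x_1-{x'}_1)\,,
\end{equation}
and analogously for the pair $\phi_3,P_3$.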
To sum up, we conclude the EE of the Maxwell field associated with a region ${V}$ enclosed by two parallel planes is equivalent to the contribution of two independent scalar fields. In this way, we recover the known result obtained in \cite{Casini2016} by working with the gauge invariant electric and magnetic fields directly. The entropy turns out to be \cite{Casini:2005zv}
\begin{equation}
S=c \frac{A}{\epsilon^2}- 2 \,k_s \frac{A}{L^2}\,,
\end{equation}
where $A=R_2 R_3$ is the area of the planes, $\epsilon$ is a short distance cutoff, $c$ a non universal constant, and $k_s$ is the universal coefficient corresponding to a scalar in this same geometry. The latter can be computed with high precision from the knowledge of the one dimensional scalar entropy function \cite{Casini:2009sr}
\begin{equation}
k_s= 0.0055351599...\,.\label{ks}
\end{equation}
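Numerically, using (\ref{ks}), the universal term in the entropy between the planes is $-2\,k_s A/L^2\simeq -0.0111\, A/L^2$.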
As we will now see, this exact identification of entropies between scalars and gauge fields does not hold for other regions.
\section{Entanglement entropy for a Maxwell field in the sphere}
\label{maxwellsphere}
We consider now the problem of a Maxwell field in the sphere, which can also be easily dimensionally reduced. Due to the spherical symmetry of this case, we expand the field using scalar spherical harmonics for the $A_0$ component and vector spherical harmonics for $\vec{A}=(A_1,A_2,A_3)$. That is
\bea
A_0 &=& \sum_{lm} A^0_{lm} (t,r) Y_{lm} (\theta, \phi)\,,\quad l=0,1,...,\infty\,, \quad -l\leq m \leq l \, ,
\label{A0Ylm}\\
\vec{A} &=& \sum_{slm} A^s_{lm} (t,r) \av{s} (\theta, \phi) \,,\quad l=0,1,...,\infty\,, \quad -l\leq m \leq l \,, \quad s=r,e,m \, ,
\label{MaxwellSbase}
\eea
where $\av{s}$ are the vector spherical harmonics defined by
\bea
\av{r}(\theta,\varphi) &=& Y_{lm}(\theta,\varphi) \hat{r}\,, \qquad l \geq 0, \quad -l\leq m \leq l\,,
\label{Ylmr}\\
\av{e}(\theta,\varphi) &=& \frac{r \nabla Y_{lm}(\theta,\varphi)}{\sqrt{l(l+1)}}\,, \qquad l > 0, \quad -l\leq m \leq l\,,
\label{Ylme}\\
\av{m}(\theta,\varphi) &=& \frac{\vec{r} \times \nabla Y_{lm}(\theta,\varphi)}{\sqrt{l(l+1)}}\,, \qquad l > 0, \quad -l\leq m \leq l\,.
\label{Ylmm}
\eea
Considering the gauge transformations, it is useful to expand the function $\chi$ using scalar spherical harmonics as
\begin{equation}
\chi= \sum_{lm} \chi_{lm} (t,r) Y_{lm} (\theta, \phi)\, .
\end{equation}
This gives the transformation law
\begin{equation}
\vec{A}'=\sum_{lm} \left( A_{lm}^{r} + \partial_r \chi_{lm} \right) \av{r} + \left( A_{lm}^{e} + \frac{\chi_{lm}}{r} \right) \av{e} + A_{lm}^{m} \av{m} \, .
\end{equation}
We see that it is possible to fix $\chi_{lm}$ completely, in such a way that the coefficient $ {A'}_{lm}^{e}$ of the ``electric'' vector spherical harmonics $\av{e}$ is identically zero for each angular momentum.
This particular choice of gauge is convenient for other reasons too.
It allows us to write for each mode
\begin{equation}
F_{e\mu} =\left(e^\nu\partial_\nu\right) A_\mu + \left( \partial_\mu e^\nu \right) A_\nu
\label{FmunuS}
\end{equation}
where $e^\mu$ is the unit vector in the direction of $\av{e}$, $e^\mu A_\mu$ represents the electric component of the vector $A_\mu$ that vanishes under this particular choice of gauge, and $e^\mu \partial_\mu$ is the derivative in such direction. The expression (\ref{FmunuS}) shows that in this gauge we can recover $A_\mu$ on a sphere by the knowledge of the components $F_{e\mu}$ of the gauge invariant field tensor in the same sphere. This is because derivatives in (\ref{FmunuS}) are tangential to the sphere. Therefore, even if the relation between the gauge fixed $A_\mu$ and $F_{\mu \nu}$ is non local in the angular directions, it maps the variables $A_\mu$ at fixed $r$ to physical variables with the same radius. This is a particular case of the general situation studied in \cite{Casini:2013rba} where it was shown that a gauge fixing that respects the localization of degrees of freedom in a region can be chosen such that $A_\mu$ vanishes on the boundary of the region in a direction parallel to the boundary itself. In the present example this direction is the one of the electric vector harmonics.
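In more detail, (\ref{FmunuS}) is a direct consequence of the gauge condition: using $e^\nu A_\nu=0$ we have
\begin{equation}
e^\nu F_{\nu\mu}=e^\nu\partial_\nu A_\mu-e^\nu\partial_\mu A_\nu=e^\nu\partial_\nu A_\mu+\left(\partial_\mu e^\nu\right)A_\nu\,,
\end{equation}
since $e^\nu\partial_\mu A_\nu=\partial_\mu\left(e^\nu A_\nu\right)-\left(\partial_\mu e^\nu\right)A_\nu=-\left(\partial_\mu e^\nu\right)A_\nu$.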
From this point, we proceed in the same way as in the case of parallel planes. In particular, a useful form of the Lagrangian can be obtained by replacing (\ref{A0Ylm}) and (\ref{MaxwellSbase}) in (\ref{MaxwellLagAmu}), so that, taking into account the orthonormality of the vector spherical harmonics, we get
\begin{equation}
L=\sum_{lm} \int_0^\infty dr \mathcal{L}_{lm}\,.
\end{equation}
The Lagrangian $\mathcal{L}_{lm}$ for $l\geq 1$ follows from direct computation using the properties of vector harmonics listed in appendix \ref{apa},
\bea
\mathcal{L}_{lm}&=&1/2 \left[r^2 {\dot{A}_{l,m}^{r}}{\dot{A}_{l,-m}^{r}} + r^2 {\dot{A}_{l,m}^{m}}{\dot{A}_{l,-m}^{m}}-l(l+1){{A}_{l,m}^{r}}{{A}_{l,-m}^{r}} \right.\nonumber\\
&&-\left. l(l+1){{A}_{l,m}^{m}}{{A}_{l,-m}^{m}} - \left|{{A}_{l,m}^{m}} + r\partial_r {{A}_{l,m}^{m}}\right|^2 + r^2 \partial_rA^0_{l,m}\partial_rA^0_{l,-m}+ l(l+1) A^0_{l,m}A^0_{l,-m}\right.\nonumber \\
&&- \left. r^2 A^0_{l,m} \partial_r{\dot{A}_{l,-m}^{r}} -r^2 A^0_{l,-m} \partial_r{\dot{A}_{l,m}^{r}} - 2rA^0_{l,m}{\dot{A}_{l,-m}^{r}} - 2rA^0_{l,-m}{\dot{A}_{l,m}^{r}}\right]\, .
\label{MaxwellSlag}
\eea
The Lagrangian density is independent of $m$, and to simplify the notation in the following we eliminate the index for $m$ in the variables and consider the real $m=0$ mode only, while we have to keep in mind that we will have $(2 l+1)$ identical contributions to the EE for each angular momentum $l$.
The canonical conjugate momenta are defined by
\begin{equation}
\pi_{l}^r =\frac{\partial \mathcal{L}_{l}}{\partial {\dot{A}_{l}^{r}}} = r^2\left({\dot{A}_{l}^{r}}+\partial_r A^0_l \right)\, , \quad \pi_{l}^m =\frac{\partial \mathcal{L}_{l}}{\partial {\dot{A}_{l}^{m}}} = r^2{\dot{A}_{l}^{m}} \, ,
\end{equation}
which can be substituted in the Legendre transform
\begin{equation}
\mathcal{H}_{l}= \pi_{l}^r\dot{A}_{l}^{r} + \pi_{l}^m\dot{A}_{l}^{m} - \mathcal{L}_{l} \, ,
\label{MaxwellSleg}
\end{equation}
in order to obtain the Hamiltonian
\bea
\mathcal{H}_{l} = \frac{\pi_{l}^r\pi_{l}^r}{2r^2}+ \frac{\pi_{l}^m\pi_{l}^m}{2r^2}+\frac{l(l+1)}{2}{{A}_{l}^{r}}{{A}_{l}^{r}}+\frac{l(l+1)}{2}{{A}_{l}^{m}}{{A}_{l}^{m}} \nonumber\\
+\frac{1}{2}\left({{A}_{l}^{m}} + r\partial_r {{A}_{l}^{m}}\right)^2 - \pi_{l}^r\partial_r A_{l}^0 - \frac{l(l+1)}{2} A_{l}^0 A_{l}^0\,.
\label{MaxwellSham}
\eea
The non trivial canonical commutation relations are given by
\begin{equation}
\left[{{A}_{l}^{r}}(t,r), {{\pi}_{l}^{r}}(t,r') \right]=\left[{{A}_{l}^{m}}(t,r), {{\pi}_{l}^{m}}(t,r') \right] = i \delta(r-r')\, .
\label{MaxwellScom}
\end{equation}
In addition, modes with different $l$ are independent of each other and their operators commute.
Again, $A_{l}^0$ is a Lagrange multiplier, allowing the derivation of the constraint
\begin{equation}
\partial_r \pi_{l}^r = l(l+1)A_{l}^0\,,
\label{MaxwellSvinc}
\end{equation}
which can be replaced in (\ref{MaxwellSham}) yielding
\bea
\mathcal{H}_{l} &=& \frac{1}{2}\left[\frac{\pi_{l}^r\pi_{l}^r}{r^2}+ \frac{\partial_r \pi_{l}^r\partial_r \pi_{l}^r}{l(l+1)} + {l(l+1)} A_{l}^r A_{l}^r\right] \nonumber \\
&+& \frac{1}{2}\left[\frac{\pi_{l}^m\pi_{l}^m}{r^2} + \left({{A}_{l}^{m}} + r\partial_r {{A}_{l}^{m}}\right)^2 + {l(l+1)} A_{l}^m A_{l}^m\right]
\label{MaxwellSham2} \, .
\eea
Lastly, the field and momentum variables can be rewritten as
\bea
\phi^r_{l} &=& \frac{\pi^r_{l}}{\sqrt{l(l+1)}} \, , \,\,\, P^r_{l} =- \sqrt{l(l+1)} A^r_{l}\,,
\label{MaxwellScamp1}\\
\phi^m_{l} &=& r A^m_{l} \, , \quad\qquad\, P^m_{l} = \frac{\pi_{l}^m}{r}\,,
\label{MaxwellScamp2}
\eea
and by applying (\ref{MaxwellScamp1}) and (\ref{MaxwellScamp2}) in (\ref{MaxwellSham2}) we reduce the Hamiltonian to the one of two identical radial modes given by
\bea
\mathcal{H}_{l} &=& \frac{1}{2}\left[P^r_{l}P^r_{l}+ \partial_r \phi^r_{l}\partial_r \phi^r_{l} + \frac{l(l+1)}{r^2} \phi^r_{l}\phi^r_{l}\right]\nonumber\\
&+& \frac{1}{2}\left[P^m_{l}P^m_{l}+ \partial_r \phi^m_{l}\partial_r \phi^m_{l} + \frac{l(l+1)}{r^2} \phi^m_{l}\phi^m_{l}\right] \, ,\label{tyo}
\eea
with the standard commutation relations
\begin{equation}
\left[{{\phi}_{l}^{r}}(t,r), {{P}_{l}^{r}}(t,r') \right]=\left[{{\phi}_{l}^{m}}(t,r), {{P}_{l}^{m}}(t,r') \right] = i \delta(r-r') \, .
\label{MaxwellScom2}
\end{equation}
Each of these two identical modes has the same Hamiltonian as the one that results from the spherical reduction of a free massless scalar field \cite{Casini2016,Srednicki:1993im}.
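Explicitly, the standard radial reduction of a free massless scalar (see for example \cite{Srednicki:1993im}) follows from the expansion
\begin{equation}
\phi=\sum_{lm}\frac{\varphi_{lm}(t,r)}{r}\,Y_{lm}(\theta,\phi)\,,\qquad
\mathcal{H}_{l}=\frac{1}{2}\left[\pi_{lm}\pi_{lm}+\partial_r\varphi_{lm}\partial_r\varphi_{lm}+\frac{l(l+1)}{r^2}\varphi_{lm}\varphi_{lm}\right]\,,
\end{equation}
with $\pi_{lm}$ the momentum conjugate to $\varphi_{lm}$; the radial Hamiltonian holds up to a boundary term at the origin and coincides with each of the two brackets in (\ref{tyo}).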
Eq. (\ref{MaxwellSlag}) does not apply to the zero angular momentum mode. This is simply because the electric (\ref{Ylme}) and magnetic (\ref{Ylmm}) spherical harmonics do not exist for $l=0$. For $l=0$ we get the simpler expression
\begin{equation}
\mathcal{H}_{0} = \frac{\pi_{0}^r\pi_{0}^r}{2r^2}+ A_{0}^0 \partial_r\pi_{0}^r \, ,
\label{MaxwellSham3}
\end{equation}
where by replacing the constraint $\partial_r \pi_{l=0}^r =0$, obtained from the equations of motion of $A^0_{0}$, we get
\begin{equation}
\mathcal{H}_{0} = \frac{\pi_{0}^r\pi_{0}^r}{2r^2}\, .
\label{MaxwellSham4}
\end{equation}
This means that the zero angular momentum mode does not have dynamics and thus generates no contributions to the entropy.
Therefore, the EE of the Maxwell field on the sphere is equivalent to the one of two scalar fields where the $l=0$ mode has been subtracted. This result coincides with the one given in \cite{Casini2016}.
The entanglement entropy of a scalar in a sphere has a universal logarithmic term $-1/90 \log(R/\epsilon)$ \cite{scalar,scalar1,dowkeresferagauge,scalar2,scalar3}. The mode $l=0$ for the scalar (see the Hamiltonian (\ref{tyo}) for $l=0$) corresponds to a massless $d=2$ field in the $r>0$ half-line with Dirichlet boundary condition at the origin, and its universal logarithmic entropy is
$1/6 \log(R/\epsilon)$ \cite{Casini2016,calcar}.
The entropy of the Maxwell field in the sphere is then given by \cite{Casini2016,dowkeresferagauge}
\begin{equation}
S=c \frac{A}{\epsilon^2}-\frac{16}{45} \log(R/\epsilon)\,,
\end{equation}
where the coefficient of the logarithmic term follows from $16/45=2\times 1/90+2\times 1/6$.
Here we recover this result by working with the gauge variant field $A_\mu$ instead of using directly the electric and magnetic gauge invariant fields as in \cite{Casini2016}. It is important to remark that another gauge choice, not respecting the locality on the sphere, would have given completely different, incorrect results for the sphere EE.
\section{Entanglement entropy of linearized gravitons between parallel planes}
\label{gravitonplanos}
The free theory of a massless helicity $2$ particle can be described by a field $h_{\mu\nu}$. This field can be thought of as describing metric perturbations $g_{\mu\nu}=\eta_{\mu\nu}+ h_{\mu\nu}$ with respect to the Minkowski metric $\eta_{\mu\nu}$. The field $h_{\mu\nu}$ obeys the linearized Einstein equations, and the Lagrangian that gives these equations in the absence of sources writes \cite{Ortin:2004ms}
\begin{equation}
\mathcal{L} = -\partial_\mu h^{\mu\nu} \partial_\alpha h^\alpha_{\hphantom{\alpha}\nu} +\frac{1}{2}\partial^\alpha h_{\mu\nu} \partial_\alpha h^{\mu\nu} + \partial_\mu h^{\mu\nu} \partial_\nu h^\alpha_{\hphantom{\alpha}\alpha} - \frac{1}{2} \partial_\alpha h^\mu_{\hphantom{\mu}\mu} \partial^\alpha h^\nu_{\hphantom{\nu}\nu}
\label{lag1} \, .
\end{equation}
The theory has a gauge invariance given by the transformation law
\begin{equation}
h'_{\mu\nu} = h_{\mu\nu} + \partial_\nu \xi_\mu + \partial_\mu \xi_\nu\,,\label{gg}
\end{equation}
for arbitrary vector field $\xi_\mu$. This corresponds to the diffeomorphism invariance of the Einstein theory of gravity at the linearized level.
In the non linear gravity theory the curvature is not gauge invariant, but at the linearized level a gauge invariant operator is given by the linearized curvature tensor \cite{wee}
\begin{equation}
R_{\mu\nu\rho\sigma}= \frac{1}{2}\left[ \partial_\nu \partial_\rho h_{\mu\sigma}- \partial_\mu \partial_\rho h_{\nu\sigma}+ \partial_\mu \partial_\sigma h_{\nu\rho}- \partial_\nu \partial_\sigma h_{\mu\rho}\right]\, .
\label{RDD}
\end{equation}
It is a simple exercise to show that it is indeed invariant under (\ref{gg}).\footnote{This corresponds to the fact that the curvature transforms linearly under changes of coordinates and it is already of linear order in $h_{\mu\nu}$. Then further factors of the infinitesimal coordinate transformation must be second order.} Therefore the theory of a helicity $2$ field in Minkowski space contains gauge invariant local operators, in contrast to what is expected in full quantum gravity. In consequence the EE is well defined, except for the usual issues about divergent terms. Let us study first the case of a region bounded by two parallel planes.
\subsection{Plane wave decomposition and gauge fixing}
For the region between parallel planes we resort to a plane wave decomposition of the fields analogous to (\ref{AFourier}). Now we write for the field of a mode with $\vec{k}=k \hat{x}_2$
\begin{equation}
h_{\mu\nu}(x_0,x_1,x_2,x_3)= N e^{ik x_2} h_{\mu\nu}(x_0,x_1,k)\, .
\label{ondasplanas2}
\end{equation}
The arbitrary gauge function $\xi$ can also be decomposed in modes. The mode with vector $\vec{k}$ writes
\begin{equation}
\xi_{\mu} (x_0,x_1,x_2,x_3) = N e^{ik x_2} \xi_{\mu}(x_0,x_1,k)\,.
\label{gauge2}
\end{equation}
Therefore, it can be easily observed that
\begin{equation}
h'_{\mu\nu}=\begin{bmatrix}{h_{00} + 2 \dot{\xi_{0}}}&{h_{01}+\dot{\xi_{1}}+\partial_1 \xi_0}&{h_{02}+\dot{\xi_{2}}+ik\xi_0}&{h_{03}+\dot{\xi_{3}}}\\{h_{01}+\dot{\xi_{1}}+\partial_1 \xi_0}&{h_{11}+2\partial_1\xi_1}&{h_{12}+\partial_1\xi_2+ik\xi_1}&{h_{13}+\partial_1 \xi_3 }\\{h_{02}+\dot{\xi_{2}}+ik\xi_0}&{h_{12}+\partial_1\xi_2+ik\xi_1}&{h_{22}+2ik\xi_2}&{h_{23}+ik\xi_3}\\{h_{03}+\dot{\xi_{3}}}&{h_{13}+\partial_1 \xi_3}&{h_{23}+ik\xi_3}&{h_{33}}\end{bmatrix}\,,
\label{matrixh}
\end{equation}
making it clear that the components $h'_{02}$, $h'_{20}$, $h'_{12}$, $h'_{21}$, $h'_{22}$, $h'_{23}$, and $h'_{32}$ can be fixed to zero if we use all the gauge freedom available. Now, all the components of $h'_{\mu\nu}$ with an index in the direction of $\hat{k}$ are fixed to zero, allowing us to write the following component of the Riemann tensor for each mode as
\begin{equation}
2R_{2\mu 2\nu} = h_{\nu 2},_{\mu 2} + h_{2\mu},_{2\nu} - h_{\nu\mu},_{22} - h_{22},_{\mu\nu} = - h_{\nu\mu},_{22} = k^2 h_{\nu\mu}\, .
\label{RiemannGauge}
\end{equation}
That is, under this particular choice of gauge, it is possible to write the field $h_{\nu\mu}$ in terms of the Riemann tensor in a local way in the $x_0$ and $x_1$ coordinates. This means that the algebra of the gauge fixed degrees of freedom in $h_{\nu\mu}$ is the same as the algebra of the curvature tensor in between the planes.
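Explicitly, inverting (\ref{RiemannGauge}) for each mode with $k\neq0$ gives
\begin{equation}
h_{\mu\nu}(x_0,x_1,k)=\frac{2}{k^2}\,R_{2\mu2\nu}(x_0,x_1,k)\,,
\end{equation}
the analogue for the graviton of the Maxwell relation (\ref{FmunuPP}).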
\subsection{Lagrangian for each momentum}
The Lagrangian decomposes into independent modes for the different plane waves. Using the expansion (\ref{ondasplanas2}) and the gauge condition presented in the previous section, the Lagrangian density for the $(1+1)$ dimensional theory of the mode $k$ can be expressed as
\bea
\mathcal{L}_k &=& \dot{h}_{13}\dot{h}_{13}^\dag - \frac{1}{2}(\dot{h}_{11}\dot{h}_{33}^\dag + \dot{h}_{33}\dot{h}_{11}^\dag) -k^2{h_{13}}{h_{13}}^\dag + \frac{k^2}{2}({h_{11}}{h_{33}}^\dag + {h_{33}}{h_{11}}^\dag) \\
&&- \frac{k^2}{2}h_{00}({h_{11}}^\dag+{h_{33}}^\dag) - \frac{k^2}{2}({h_{11}}+{h_{33}})h_{00}^\dag-h_{01}\partial_1\dot{h}_{33}^\dag -\partial_1\dot{h}_{33}h_{01}^\dag +h_{03}\partial_1\dot{h}_{13}^\dag \nonumber \\
&+&\partial_1\dot{h}_{13}h_{03}^\dag +k^2 h_{03}h_{03}^\dag +k^2 h_{01}h_{01}^\dag + \partial_1 h_{03} \partial_1 h_{03}^\dag - \frac{1}{2}\partial_1 h_{33} \partial_1 h_{00}^\dag - \frac{1}{2}\partial_1 h_{00} \partial_1 h_{33}^\dag \,.\nonumber
\label{lag4}
\eea
This Lagrangian contains two sets of independent fields and the problem can be split into two modes that will be treated separately. The first one contains the field $h_{13}$ and the Lagrange multiplier $h_{03}$
\begin{equation}
\mathcal{L}_I = \dot{h}_{13}\dot{h}_{13}^\dag -k^2{h_{13}}{h_{13}}^\dag
-\partial_1 h_{03}\dot{h}_{13}^\dag -\dot{h}_{13}\partial_1 h_{03}^\dag +k^2 h_{03}h_{03}^\dag + \partial_1 h_{03} \partial_1 h_{03}^\dag\,,
\label{lag6}
\end{equation}
and the second one containing the fields $h_{11}$ and $h_{33}$ and the multipliers $h_{01}$ and $h_{00}$
\bea
\mathcal{L}_{II} &=& - \frac{1}{2}(\dot{h}_{11}\dot{h}_{33}^\dag + \dot{h}_{33}\dot{h}_{11}^\dag) + \frac{k^2}{2}({h_{11}}{h_{33}}^\dag + {h_{33}}{h_{11}}^\dag) + \partial_1 h_{01}\dot{h}_{33}^\dag +\dot{h}_{33}\partial_1 h_{01}^\dag \\
&&- \frac{k^2}{2}h_{00}({h_{11}}^\dag+{h_{33}}^\dag) - \frac{k^2}{2}({h_{11}}+{h_{33}})h_{00}^\dag +k^2 h_{01}h_{01}^\dag - \frac{1}{2}\partial_1 h_{33} \partial_1 h_{00}^\dag - \frac{1}{2}\partial_1 h_{00} \partial_1 h_{33}^\dag\,. \nonumber
\label{lag7}
\eea
Therefore, the total Lagrangian is given by sum over modes
\begin{equation}
L = \sum_k \int_0^\infty dx_1 \left( \mathcal{L}_I + \mathcal{L}_{II} \right)\,.
\label{lag8}
\end{equation}
\begin{comment}
\begin{figure} [h]
\includegraphics[scale=0.4]{modes.jpg}
\centering
\caption{Graphical representation of the modes I and II constructed from $\mathcal{L}_k$}
\end{figure}
\end{comment}
\subsection{Hamiltonian of the mode I}
The momenta $\pi_{13}$, $\pi_{13}^\dag$ corresponding to the Lagrangian (\ref{lag6}) are
\begin{equation}
\pi_{13}=\frac{\partial \mathcal{L}_I}{\partial \dot{h}_{13}}= \dot{h}_{13}^\dag - \partial_1 h_{03}^\dag \quad , \quad \pi_{13}^\dag =\frac{\partial \mathcal{L}_I}{\partial \dot{h}_{13}^\dag}=\dot{h}_{13}- \partial_1 h_{03} \, ,
\label{mom}
\end{equation}
and the corresponding Hamiltonian is
\bea
\mathcal{H}_I &=& \pi_{13}\dot{h}_{13} + \pi_{13}^\dag\dot{h}_{13}^\dag - \mathcal{L}_I =\nonumber \\
&=& \pi_{13}\pi_{13}^\dag + k^2{h_{13}}{h_{13}}^\dag
- h_{03} \partial_1 \pi_{13} - h_{03}^\dag \partial_1 \pi_{13}^\dag - k^2 h_{03}h_{03}^\dag \, .
\label{ham1}
\eea
A constraint equation can be derived by computing the equation of motion of $h_{03}$
\begin{equation}
h_{03} = -\frac{\partial_1 \pi_{13}^\dag}{k^2} \quad , \quad h_{03}^\dag = -\frac{\partial_1 \pi_{13}}{k^2} \, ,
\label{constrains}
\end{equation}
which can be replaced in (\ref{ham1}) to obtain
\begin{equation}
\mathcal{H}_I = \pi_{13}\pi_{13}^\dag + k^2{h_{13}}{h_{13}}^\dag + \frac{\partial_1 \pi_{13}\partial_1 \pi_{13}^\dag}{k^2} \, .
\label{ham2}
\end{equation}
In order to rewrite the Hamiltonian (\ref{ham2}) as the one associated with a complex scalar (for each pair $\vec{k},-\vec{k}$), it is convenient to define
\begin{equation}
\phi_{I} = \frac{\pi_{13}}{|k|} \quad , \quad P_{I} = -|k|h_{13} \,,
\label{replace1}
\end{equation}
\begin{equation}
\phi_{I}^\dag = \frac{\pi_{13}^\dag}{|k|} \quad , \quad P_{I}^\dag = -|k|h_{13}^\dag\,.
\label{replace2}
\end{equation}
In this way (\ref{ham2}) gets
\begin{equation}
\mathcal{H}_I = P_{I}P_{I}^\dag + k^2{\phi_{I}}{\phi_{I}}^\dag + {\partial_1 \phi_{I}\partial_1 \phi_{I}^\dag} \, .
\label{ham3}
\end{equation}
Moreover, from the canonical quantization of the field ${h_{13}}$ in (\ref{ham1}), it is clear that the replacements (\ref{replace1}) and (\ref{replace2}) give $P_{I}$ and $\phi_I$ as canonically conjugate variables.
\subsection{Hamiltonian of the mode II}
Analyzing the dynamics of the fields $h_{00}$ and $h_{00}^\dag$ in the Lagrangian (\ref{lag7}), it is evident that both play the role of Lagrange multipliers, giving rise to the constraints
\begin{equation}
\partial_1\partial_1 h_{33} = k^2({h_{11}}+{h_{33}}) \, , \quad
\partial_1\partial_1 h_{33}^\dag = k^2({h_{11}}^\dag+{h_{33}}^\dag) \, .
\label{cons1}
\end{equation}
If we replace (\ref{cons1}) in (\ref{lag7}) it is possible to eliminate the field $h_{11}$ from the Lagrangian, obtaining
\bea
\mathcal{L}_{II} &=& \dot{h}_{33}\dot{h}_{33}^\dag - k^2 {h_{33}}{h_{33}}^\dag + \frac{\partial_1\dot{h}_{33}\partial_1\dot{h}_{33}^\dag}{k^2} \nonumber \\
&&- \partial_1{h_{33}}\partial_1{h_{33}}^\dag - h_{01}\partial_1\dot{h}_{33}^\dag - \partial_1\dot{h}_{33} h_{01}^\dag +k^2 h_{01}h_{01}^\dag \, .
\label{lag111}
\eea
The substitution produced a higher derivative term $k^{-2}\partial_1\dot{h}_{33}\partial_1\dot{h}_{33}^\dag$. However, this problem disappears when we use the constraints associated with the fields $h_{01}$ and $h_{01}^\dag$, which also act as Lagrange multipliers. Indeed, we obtain
\begin{equation}
h_{01} = \frac{\partial_1\dot{h}_{33}}{k^2} \quad , \quad
h_{01}^\dag = \frac{\partial_1\dot{h}_{33}^\dag}{k^2} \, .
\label{cons21}
\end{equation}
Applying (\ref{cons21}) in (\ref{lag111}) the Lagrangian gets reduced to
\begin{equation}
\mathcal{L}_{II} = \dot{h}_{33}\dot{h}_{33}^\dag - k^2 {h_{33}}{h_{33}}^\dag - \partial_1{h_{33}}\partial_1{h_{33}}^\dag
\label{lag13} \, .
\end{equation}
The canonical momenta associated to the complex field variables ${h_{33}}$ and ${h_{33}}^\dag$ are
\begin{equation}
\pi_{33}=\frac{\partial\mathcal{L}_{II}}{\partial \dot{h}_{33}} = \dot{h}_{33}^\dag \quad , \quad \pi_{33}^\dag=\frac{\partial\mathcal{L}_{II}}{\partial \dot{h}_{33}^\dag} =\dot{h}_{33} \, ,
\label{mom5}
\end{equation}
and the corresponding Hamiltonian gets the same form as (\ref{ham3}),
\begin{equation}
\mathcal{H}_{II} = P_{II}P_{II}^\dag + k^2{\phi_{II}}{\phi_{II}}^\dag + {\partial_1 \phi_{II}\partial_1 \phi_{II}^\dag} \, ,
\label{ham6}
\end{equation}
where we have introduced the trivial notation change
\bea
\phi_{II} &=& {h_{33}} \quad , \quad \,\, P_{II} = \pi_{33}\,,
\label{replace3}\\
\phi_{II}^\dag &=& {h_{33}}^\dag \quad , \quad P_{II}^\dag = \pi_{33}^\dag \,.
\label{replace4}
\eea
These variables also obey canonical commutation relations.
\subsection{Entanglement Entropy}
The Hamiltonians for the two modes (\ref{ham3}) and (\ref{ham6}) are equivalent to the ones of a dimensionally reduced scalar field, and to the two modes (\ref{217}) for the Maxwell field. Therefore, we are allowed to conclude that the EE of linearized gravitons for the region enclosed between two parallel planes is equivalent to the one of two scalar fields or one Maxwell field. The universal coefficients will be the same in the three cases. We get again
\begin{equation}
S=c \frac{A}{\epsilon^2}- 2 \,k_s \frac{A}{L^2}\,,
\end{equation}
with $k_s$ given by (\ref{ks}) \cite{Casini:2005zv}.
\section{Entanglement entropy of linearized gravitons in a sphere}
\label{gravitonsphere}
In this section we treat the case of gravitons inside a sphere. We first introduce the tensor spherical harmonics, which we use to decompose $h_{\mu\nu}$ in spherical coordinates. We also decompose the gauge transformations and choose a generic gauge adapted to the spherical symmetry that depends on three arbitrary constants. Then we expand the Lagrangian in terms of the gauge fixed field to get two independent radial modes for each $l,m$. The gauge choice is further refined to allow the simplification of the mode Hamiltonians and to ensure locality in the radial direction in the relation between the gauge fixed field and the curvature tensor. We get a system of modes that are equivalent to the scalar spherical modes except that the $l=0,1$ modes are absent. Finally, we compute the entanglement entropy.
\subsection{Tensor spherical harmonics}
The tensor spherical harmonics are a further generalization of the concept of scalar and vector spherical harmonics that can be used as a basis for the space of symmetric tensors (of dimension six). An arbitrary symmetric tensor field $X$ can be expanded in polar coordinates as follows
\begin{equation}
X= \sum_{Jslm} X_{lm}^{Js}(r) T_{lm}^{Js}(\theta,\phi)\,,\,\, l=0,1,...,\infty\,,\,\, m=0,\pm 1,..., \pm l \, , \,\, Js=0l,0t,1e,1m,2e,2m\, ,
\label{tensorexpand}
\end{equation}
where the tensor spherical harmonics $T_{lm}^{Js}$ are given by (see for example \cite{Compere2018,Thorne1980})
\bea
T_{lm}^{0l}&=& \hat{r} \otimes \hat{r} Y_{lm} \, , \,\,\,\qquad\qquad\qquad\qquad T_{lm}^{0t}= \frac{1}{\sqrt{2}} \left(\delta - \hat{r} \otimes \hat{r} \right) Y_{lm} \, ,\nonumber \\
T_{lm}^{1e}&=& \sqrt{\frac{2}{l(l+1)}}r \left[\hat{r} \otimes \nabla Y_{lm} \right]^S \, , \,\,\quad \,\,\, T_{lm}^{1m}= \sqrt{\frac{2}{l(l+1)}} \left[\hat{r} \otimes \overline{r} \cross \nabla Y_{lm} \right]^S\, , \label{22} \\
T_{lm}^{2e}&=& \sqrt{2\frac{(l-2)!}{(l+2)!}} \left[r^2 \nabla\nabla Y_{lm} \right]^{STT} \, , \,\,\,\, T_{lm}^{2m}= \sqrt{2\frac{(l-2)!}{(l+2)!}} \left[r \nabla \left(\overline{r} \cross \nabla Y_{lm}\right) \right]^{STT} \, . \nonumber
\eea
The spherical harmonics of spin $J=0$ are defined for $l\geq 0$, the ones related to spin $J=1$ for $l\geq 1$, and the ones of spin $J=2$ for $l\geq 2$. In the notation of the equations (\ref{22}) the symbol $\delta$ means the identity tensor $\delta_{ij}$. Additionally, the superscript $S$ means taking the symmetric part, and $TT$ the traceless part transverse to $\hat{r}$. For an arbitrary tensor $X_{ij}$ the latter is given by the following expression
\begin{equation}
X_{ij}^{TT}=\left(\delta_{ik} - \hat{r}_i\hat{r}_k \right)\left(\delta_{jn} - \hat{r}_j\hat{r}_n \right)X_{kn}-\frac{1}{2}\left(\delta_{ij} - \hat{r}_i\hat{r}_j \right)\left[\left(\delta_{kn} - \hat{r}_k\hat{r}_n \right)X_{nk}\right] \, .
\label{trasversetraceless}
\end{equation}
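As a check that (\ref{trasversetraceless}) is indeed transverse and traceless, contracting with $\hat{r}_i$ and with $\delta_{ij}$, and using $\hat{r}_i\hat{r}_i=1$ and $\delta_{ij}\left(\delta_{ij}-\hat{r}_i\hat{r}_j\right)=2$, one finds
\begin{equation}
\hat{r}_i X_{ij}^{TT}=0\,,\qquad \delta_{ij}X_{ij}^{TT}=\left(\delta_{kn}-\hat{r}_k\hat{r}_n\right)X_{kn}-\left(\delta_{kn}-\hat{r}_k\hat{r}_n\right)X_{nk}=0\,,
\end{equation}
the last equality holding since the transverse projector is symmetric.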
It will be useful to have a relation between tensor and vector spherical harmonics. This relation can be expressed as
\bea
T_{lm}^{0l}&=& \left[ \hat{r} \otimes \overline{Y}_{lm}^r\right]^{S} \quad , \qquad \,\, T_{lm}^{0t} = \frac{1}{\sqrt{2}} \left(\delta Y_{lm} - \hat{r} \otimes \overline{Y}_{lm}^r \right) \, , \nonumber \\
T_{lm}^{1e}&=& \sqrt{2} \left[\hat{r} \otimes \overline{Y}_{lm}^e \right]^S \quad , \quad T_{lm}^{1m}= \sqrt{2}\left[\hat{r} \otimes \overline{Y}_{lm}^m \right]^S \, , \nonumber \\
T_{lm}^{2e} &=& \sqrt{\frac{2}{(l-1)(l+2)}} \left\{ \left[r \nabla \overline{Y}_{lm}^e \right]^{S} + \frac{1}{\sqrt{2}} \at{1e} + \sqrt{\frac{l(l+1)}{2}} \at{0t} \right\} \,, \label{33} \\
T_{lm}^{2m}&=& \sqrt{\frac{2}{(l-1)(l+2)}} \left\{ \left[r \nabla \overline{Y}_{lm}^m \right]^{S} + \frac{1}{\sqrt{2}} \at{1m} \right\} \, . \nonumber
\eea
Further properties of the tensor spherical harmonics are listed in appendix \ref{apb}.
\subsection{Decomposition of the spin 2 field in spherical harmonics}
To make the process of computing the EE easier, it will be useful to decompose the field $h_{\mu\nu}$ and the arbitrary gauge function $\xi_\mu$ in different bases adapted to the spherical symmetry. For this purpose we introduce the notation
\begin{equation}
h_T=\begin{bmatrix}{h_{11}}&{h_{12}}&{h_{13}}\\{h_{21}}&{h_{22}}&{h_{23}}\\{h_{31}}&{h_{32}}&{h_{33}}\end{bmatrix}\, \, , \, \, h_V=\begin{bmatrix}{h_{01}}\\{h_{02}}\\{h_{03}}\end{bmatrix} \, \, , \, \, h_S={h}_{00} \, \, , \, \, \xi_V=\begin{bmatrix}{\xi_1}\\{\xi_2}\\{\xi_3}\end{bmatrix} \, \, , \, \, \xi_S=\xi_0 \, .
\label{descomponemeelespacio}
\end{equation}
Firstly, we will work with the space-like part of the problem by expanding $h_T$ in tensor spherical harmonics and $\xi_V$ in vector spherical harmonics. Then we will study the time-like part by using vector spherical harmonics for $h_V$ and scalar spherical harmonics for $\xi_S$ and $h_S$.\footnote{See \cite{Compere2018,ReggeWheeler1957} for a different but somewhat analogous treatment of gravitons in spherical coordinates.}
\subsection{Gauge fixing for the space-like components}
As just mentioned, $h_T$ and $\xi_V$ will be expanded using tensor and vector spherical harmonics respectively, in the following way
\begin{equation}
h_{T}=\sum_{Jslm} h_{lm}^{Js}(t,r) T_{lm}^{Js}(\theta,\varphi) \quad , \quad \xi_{V}=\sum_{slm} \xi_{lm}^{s}(t,r) \overline{Y}_{lm}^{s}(\theta,\varphi) \, .
\label{hijexpand}
\end{equation}
On the other hand, the gauge freedom of linear gravity can be expressed in this notation as
\begin{equation}
h'_T= h_T + \nabla \xi_V + \left[\nabla \xi_V\right]^T = h_T + 2 \left[\nabla \xi_V\right]^S \, .
\label{ge1}
\end{equation}
The combination of (\ref{hijexpand}) and (\ref{ge1}) gives
\begin{equation}
h'_T = \sum_{Jslm} h_{lm}^{Js} T_{lm}^{Js} + 2 \sum_{slm} \left[\xi_{lm}^{s} \nabla \overline{Y}_{lm}^{s} + \overline{Y}_{lm}^{s} \otimes \partial_r \xi_{lm}^{s}
\hat{r}\right]^S
\label{ge4} \, .
\end{equation}
By computing $\xi_{lm}^{s} \nabla \overline{Y}_{lm}^{s} + \overline{Y}_{lm}^{s} \otimes \partial_r \xi_{lm}^{s}\hat{r}$ using the properties of vector and tensor spherical harmonics (appendices \ref{apa} and \ref{apb}), for $s=r,\, e,\, m$ separately, and then adding up these contributions we get
\bea
&& h'_T = \sum_{lm} \left( h_{lm}^{0l}+2\partial_r \xi_{lm}^{r} \right) T^{0l}_{lm} + \left( h_{lm}^{0t} + \frac{2\sqrt{2}}{r} \xi_{lm}^r - \frac{\sqrt{2l(l+1)}}{r} \xi_{lm}^e\right) T_{lm}^{0t} \nonumber\\
&&+\left( h_{lm}^{1e} +\frac{\sqrt{2l(l+1)}}{r}\xi_{lm}^{r} + \sqrt{2}\partial_r \xi_{lm}^{e} -\frac{\sqrt{2}}{r}\xi_{lm}^{e}\right) T^{1e}_{lm} + \left( h_{lm}^{1m} + \sqrt{2}\partial_r \xi_{lm}^{m} - \frac{\sqrt{2}}{r}\xi_{lm}^{m} \right) T^{1m}_{lm} \nonumber\\
&&+ \left( h_{lm}^{2e} + \frac{\sqrt{2(l-1)(l+2)}}{r}\xi_{lm}^{e}\right) T^{2e}_{lm} + \left( h_{lm}^{2m} + \frac{\sqrt{2(l-1)(l+2)}}{r}\xi_{lm}^{m} \right) T^{2m}_{lm} \, .
\label{ge13}
\eea
This particular case differs from the ones studied earlier because there are many reasonable choices of gauge fixing for the spherical waves, but not all of them allow us to calculate the EE corresponding to the spherical boundary, or to decouple the two dynamical modes for each $lm$. More specifically, it can be seen that
\begin{itemize}
\item Fixing $ \xi_r $ allows us to cancel the components that are parallel to $ \at{0t} $ or $ \at{1e} $ or to a linear combination of them.
\item Fixing $ \xi_e $ allows us to cancel the components that are parallel to $ \at{0t} $ or $ \at{2e} $ or to a linear combination of them.
\item Fixing $ \xi_m $ allows us to cancel the components that are parallel to $ \at{2m} $.
\end{itemize}
\newpage
For now, we will use the freedom related to $\xi_m$ to cancel the spin $2$ ``magnetic'' components, meaning that we take ${h'}_{lm}^{2m}=0$ for all $l$ and $m$. Understanding the gauge fixing of $\xi_r$ and $\xi_e$ that is the correct one for our purposes is not simple at this stage. Because of that, we choose to set to zero just some arbitrary linear combination of $\at{0t}$, $\at{1e}$ and $\at{2e}$, to be further determined in what follows. There remains a single degree of freedom, which we call $\h{te}$, associated with a linear combination of these tensors given by some undetermined coefficients. More
formally, we fix the gauge such that
\begin{equation}
h_T = \sum_{lm} {h}^{0l}_{lm} \at{0l} + {h}^{te}_{lm} \left(\alpha \at{0t} +\beta \at{1e} +\gamma \at{2e} \right) + {h}^{1m}_{lm} \at{1m}\,,
\label{ge15}
\end{equation}
where $\alpha$, $\beta$ and $\gamma$ are constants.
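Note that this parametrization is consistent with a simple counting of degrees of freedom: for each $(l,m)$ the symmetric tensor $h_T$ has six components (the six harmonics in (\ref{tensorexpand})), and the three gauge functions $\xi^{r}_{lm}$, $\xi^{e}_{lm}$ and $\xi^{m}_{lm}$ can remove three of them, leaving exactly the three radial functions ${h}^{0l}_{lm}$, ${h}^{te}_{lm}$ and ${h}^{1m}_{lm}$ appearing in (\ref{ge15}).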
\subsection{Gauge fixing for the time-like components}
In order to fix the gauge of the time-like part we will write the vector $h_V$ and the scalar $\xi_S$ as
\begin{equation}
h_V=\sum_{slm} h_{lm}^{0s}(t,r) \overline{Y}_{lm}^{s}(\theta,\varphi)
\quad,\quad \xi_{S}=\sum_{lm} \xi^0_{lm}(t,r) {Y}_{lm}(\theta,\varphi) \, .
\label{xioexpand}
\end{equation}
For each component of $h_V$ we have ${h'}_{0i}=h_{0i}+\partial_0 \xi_i + \partial_i \xi_0$ or more conveniently ${h'}_V=h_V + {\dot{\xi}}_V + \nabla \xi_S $. By replacing with (\ref{xioexpand}) we get
\begin{equation}
{h'}_V=\sum_{lm} \left( h_{lm}^{0r} + \dot{\xi}^{r}_{lm} + \partial_r \xi^0_{lm} \right) \overline{Y}_{lm}^{r} + \left( h_{lm}^{0e} + \dot{\xi}^{e}_{lm} + \frac{\xi^0_{lm}}{r} \right) \overline{Y}_{lm}^{e} + \left( h_{lm}^{0m} + \dot{\xi}^{m}_{lm}\right) \overline{Y}_{lm}^{m} \, .
\label{gt3}
\end{equation}
Thus, in analogy with the case of the Maxwell field, we can fix $\xi_0$ in such a way that $ {h'}_{lm}^{0e}$ vanishes for each $lm$, obtaining the expansion
\begin{equation}
{h}_V=\sum_{lm} {h}_{lm}^{0r} \overline{Y}_{lm}^{r} + {h}_{lm}^{0m} \overline{Y}_{lm}^{m}\, .
\label{gt4}
\end{equation}
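Explicitly, imposing ${h'}_{lm}^{0e}=0$ in (\ref{gt3}) determines the remaining gauge function completely,
\begin{equation}
\xi^0_{lm}(t,r)=-r\left( h_{lm}^{0e}+\dot{\xi}^{e}_{lm}\right)\,,
\end{equation}
where $\xi^{e}_{lm}$ was already fixed by the space-like gauge conditions of the previous subsections.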
\subsection{Lagrangian for each angular momentum}
The starting point is the Lagrangian (\ref{lag1}). Using the decomposition of the field $h_{\mu\nu}$ given in (\ref{descomponemeelespacio}) in terms of spatial and temporal components we obtain
\bea
\mathcal{L} &=& \left. \frac{1}{2}\left(\dot{h}_T\cdot\cdot\dot{h}_T-\traza{\dot{h}_T}\traza{\dot{h}_T}\right)+\frac{1}{2}\left(\nabla^2 h_T \cdot\cdot h_T + \nabla \traza{h_T} \cdot \nabla \traza{h_T}\right) \right. \nonumber \\
&&+ \left. \left(\nabla \cdot h_T\right) \cdot \left[\left(\nabla \cdot h_T\right)- \nabla \traza{h_T} \right]+ \nabla h_S \cdot \left[ (\nabla \cdot h_T) - \nabla \traza{h_T} \right] \right. \\
&&- \left. 2\dot{h}_V\cdot\left[ (\nabla \cdot h_T) - \nabla \traza{h_T} \right]-(\nabla \cdot h_V) \cdot (\nabla \cdot h_V)-\nabla^2 h_V \cdot h_V \right. \,.\nonumber
\label{lag2ss2}
\eea
In this matricial notation a single dot means the contraction of a one index for each tensor and two dots the contraction of the two sets of indices of the two symmetric tensors involved in the product.
The full Lagrangian is given by
\begin{equation}
L=\int_{\mathbb{R}^3} d^3 \overline{x}\, \mathcal{L} = \int_0^\infty dr \, r^2 \left(\int d\Omega \, \mathcal{L} \right) \, .
\label{GravSllagtot}
\end{equation}
Replacing the expressions (\ref{ge15}) and (\ref{gt4}) in (\ref{lag2ss2}), and taking into account the properties of spherical harmonics (appendices \ref{apa} and \ref{apb}), it turns out that we can rewrite the Lagrangian in terms of independent modes for each $l$ and $m$,
\begin{equation}
L=\sum_{lm} \int_0^\infty dr \, \left(\mathcal{L}^I_{lm} + \mathcal{L}^{II}_{lm} \right) \, ,
\end{equation}
where the $\mathcal{L}_{lm}^I$ contains the variables $\h{1m}$ and $\h{0m}$ and $\mathcal{L}_{lm}^{II} $ involves the fields $\h{0l}$ and $\h{te}$ together with the Lagrange multipliers $\h{0r}$ and $\h{00}$.
The Lagrangians for the modes are independent of $m$, and it is clear that we will have $(2l+1)$ equal contributions for each $l$. Accordingly we will suppress the index $m$. After a long but straightforward calculation using the properties listed in appendices \ref{apa} and \ref{apb}, the Lagrangian corresponding to the mode $I$ (and $l\ge 2$) gets
\bea
\mathcal{L}_{l}^I &=& \frac{r^2}{2} \dot{h}^{1m}_l\dot{h}^{1m}_l -\frac{(l-1)(l+2)}{2}\h{1m}\h{1m} +r^2\drh{0m}\drh{0m} \nonumber \\
&&+ l(l+1)\h{0m}\h{0m} +\sqrt{2}\dot{h}^{1m}_l\left( r\h{0m} - r^2\drh{0m}\right) \,,
\label{GravSmodI}
\eea
and the one associated with mode $II$ is
\bea
\mathcal{L}_{l}^{II} &=& \frac{r^2}{2} \left(\beta^2-\alpha^2+\gamma^2\right)\dot{h}^{te}_l\dot{h}^{te}_l-\sqrt{2}r^2\alpha \dot{h}^{0l}_l\dot{h}^{te}_l+ \frac{r^2}{2}\left(\alpha^2-\gamma^2\right)\drh{te}\drh{te} \nonumber \\
&+&\sqrt{2}\alpha r \h{te}\drh{0l} +\h{0l}\h{0l}+\left(\beta^2-\frac{\sqrt{l(l+1)}}{2}\alpha\beta-\frac{\sqrt{(l-1)(l+2)}}{2}\beta\gamma\right) \h{te}\h{te}\nonumber \\
&+&\sqrt{2}\left( \frac{l(l+1)}{2}\alpha - \sqrt{l(l+1)}\beta + \frac{\sqrt{(l-1)l(l+1)(l+2)}}{2}\gamma \right)\h{0l}\h{te}+l(l+1)\h{0r}\h{0r} \nonumber \\
&+&\h{0r}\left[ 4r\dot{h}^{0l}_l -2\sqrt{2}\alpha r^2 \partial_r \dot{h}^{te}_l - \sqrt{2}\left(2 \alpha + \sqrt{l(l+1)}\beta\right)r \dot{h}^{te}_l\right]+\h{00} \left[-2r\drh{0l} \right. \nonumber \\
&-&\left. (l(l+1)+2)\h{0l} +\sqrt{2}\alpha r^2 \partial_r\drh{te}+\sqrt{2}\left(3\alpha+\sqrt{l(l+1)}\beta \right) r \drh{te}\right. \nonumber \\
&+& \left. \frac{1}{\sqrt{2}}\left(-(l-1)(l+2)\alpha+ 4 \sqrt{l(l+1)}\beta-\sqrt{(l-1)l(l+1)(l+2)}\gamma\right)\h{te} \right] \, .
\label{GravSmodII}
\eea
In the same way as for the case of parallel planes, we will study the modes $I$ and $II$ separately trying to reduce them to scalar fields for $l\geq 2$. Then we will present the particular cases $l=0$ and $l=1$.
\begin{comment}
\begin{figure} [h]
\includegraphics[scale=0.45]{modeslm.jpg}
\centering
\caption{Graphical representation of the modes I and II represented in the Lagrangian}
\end{figure}
\end{comment}
\subsection{Hamiltonian of mode I for \texorpdfstring{$l\geq 2$}{Lg}}
From equation (\ref{GravSmodI}) it can be clearly seen that $\h{0m}$ has no dynamics; thus we get the following constraint
\begin{equation}
-2r^2\partial_r\drh{0m}-4r\drh{0m}+2l(l+1)\h{0m}+\sqrt{2}r^2\dot{\drh{1m}}+3\sqrt{2}r\dot{h}^{1m}_l=0\,.
\label{GravSnoloc}
\end{equation}
In analogy with the parallel-planes case, equation (\ref{GravSnoloc}) cannot be solved algebraically, but the constraint can be implemented by first computing the Hamiltonian. The momenta are given by
\begin{equation}
\pi_{l}^{1m}= \frac{\partial \mathcal{L}^I_{l}}{\partial \dot{h}^{1m}_l } = r^2 \dot{h}^{1m}_l + \sqrt{2}\left( r\h{0m} - r^2\drh{0m}\right) \, .
\label{smod11}
\end{equation}
From equations (\ref{GravSmodI}) and (\ref{smod11}) we compute
\bea
\mathcal{H}^I_{l} &=& \pi_{l}^{1m}\dot{h}^{1m}_l-\mathcal{L}^I_{l} =\frac{\pi_{l}^{1m}\pi_{l}^{1m}}{2r^2} + \frac{(l-1)(l+2)}{2} \h{1m}\h{1m} \nonumber \\
&-& (l-1)(l+2)\h{0m}\h{0m}-\sqrt{2}\h{0m}\left(\partial_r \pi_{l}^{1m} + \frac{\pi_{l}^{1m}}{r} \right) \, .
\label{smod14}
\eea
Now, by working with $\h{0m}$ as a Lagrange multiplier in (\ref{smod14}) the following constraint appears
\begin{equation}
-2(l-1)(l+2)\h{0m}-\sqrt{2}\left(\partial_r \pi_{l}^{1m} + \frac{\pi_{l}^{1m}}{r} \right)=0
\label{smod15} \, .
\end{equation}
Replacing (\ref{smod15}) in (\ref{smod14}) and integrating by parts in $r$ gives, for $l\geq 2$,
\begin{equation}
\mathcal{H}^I_{l} = \frac{l(l+1)}{2r^2}\frac{\pi_{l}^{1m}\pi_{l}^{1m}}{(l-1)(l+2)} + \frac{1}{2}\frac{\partial_r\pi_{l}^{1m}\partial_r\pi_{l}^{1m}}{(l-1)(l+2)} + \frac{1}{2}(l-1)(l+2)\h{1m}\h{1m} \,,
\label{smod17}
\end{equation}
and by redefining the variables
\begin{equation}
\phi^I_{l}=\frac{\pi_{l}^{1m}}{\sqrt{(l-1)(l+2)}}\,, \qquad P^I_{l}=-\sqrt{(l-1)(l+2)}\h{1m} \, ,
\label{smod18}
\end{equation}
we reduce (\ref{smod17}) to the Hamiltonian of a free scalar in the sphere
\begin{equation}
\mathcal{H}^I_{l} = \frac{1}{2}\left(P^I_{l}P^I_{l} + \partial_r\phi^I_{l}\partial_r\phi^I_{l} + \frac{l(l+1)}{r^2}\phi^I_{l}\phi^I_{l} \right)\,.
\label{GravSSI}
\end{equation}
The canonical commutation relations
\begin{equation}
\left[P^I_{l}(t,r), \phi^I_{l}(t,r')\right]=i \delta(r-r') \, ,
\label{GravScomI}
\end{equation}
follow from
\begin{equation}
\left[\pi^{1m}_{l}(t,r), h^{1m}_{l}(t,r')\right]=i \delta(r-r')\, .
\end{equation}
\subsection{Hamiltonian of mode II for \texorpdfstring{$l\geq 2$}{Lg}}
For the mode II we have the Lagrangian (\ref{GravSmodII}), where working out the equations of motion of $\h{00}$ yields the constraint
\bea
&&-2r\drh{0l} -(l(l+1)+2)\h{0l} +\sqrt{2}\left(3\alpha+\sqrt{l(l+1)}\beta \right) r \drh{te}+\sqrt{2}\alpha r^2 \partial_r\drh{te} \nonumber \\
&&+ \frac{1}{\sqrt{2}}\left(-(l-1)(l+2)\alpha+ 4 \sqrt{l(l+1)}\beta-\sqrt{(l-1)l(l+1)(l+2)}\gamma\right)\h{te} =0 \, .
\label{GravSvinc}
\eea
Taking into account that (\ref{GravSvinc}) gives rise to non-local terms (which cannot be eliminated by the same means used for mode I), we are led to propose a particular gauge fixing such that
\begin{equation}
\h{0l}=a \h{te} + br\drh{te}\,,
\label{GravSreplace}
\end{equation}
with $a$ and $b$ constants that will be fixed to satisfy (\ref{GravSvinc}). Indeed, by replacing (\ref{GravSreplace}) in (\ref{GravSvinc}) we get
\bea
&&\sqrt{2}\left(\alpha - \sqrt{2} b \right) r^2 \partial_r \drh{te} + \left(3\sqrt{2}\alpha + \sqrt{2l(l+1)}\beta -(l(l+1)+4)b-2a\right)r\drh{te} \nonumber \\
&&\frac{1}{\sqrt{2}}\left(-(l-1)(l+2)\alpha + 4\sqrt{l(l+1)} \beta - \sqrt{(l-1)l(l+1)(l+2)}\gamma \right.\nonumber\\
&&\hspace{8cm}\left. -a \sqrt{2}(l(l+1)+2) \right)\h{te} = 0\,.
\eea
It is possible to solve for $a$, $b$ and $\alpha$ in terms of $\beta$ and $\gamma$ in such a way that all the terms vanish separately. We get
\begin{equation}
a=\sqrt{\frac{2}{l(l+1)}} \beta - \sqrt{\frac{(l-1)(l+2)}{2l(l+1)}} \gamma\, ,
\label{afix}
\end{equation}
\begin{equation}
b=\sqrt{\frac{2}{l(l+1)}} \beta + \sqrt{\frac{2}{(l-1)l(l+1)(l+2)}} \gamma\, ,
\label{bfix}
\end{equation}
\begin{equation}
\alpha=\frac{2}{\sqrt{l(l+1)}} \beta + \frac{2}{\sqrt{(l-1)l(l+1)(l+2)}} \gamma\, .
\label{alphafix}
\end{equation}
Eq. (\ref{alphafix}) then selects a particular gauge choice for achieving this simplification.
Replacing (\ref{GravSreplace}), (\ref{afix}), (\ref{bfix}) and (\ref{alphafix}) in (\ref{GravSmodII}) and working with $\h{0r}$ as a Lagrange multiplier allows us to obtain the following simple Lagrangian
\begin{equation}
\mathcal{L}^{II}_{l}=\frac{\gamma^2}{2}\left[\dot{h}^{te}_l\dot{h}^{te}_l - \drh{te}\drh{te} - l(l+1)\h{te}\h{te} \right] \, .
\label{GravSmod2lag}
\end{equation}
The corresponding Hamiltonian is
\begin{equation}
\mathcal{H}^{II}_{l}= \pi^{te}_{l} \dot{h}^{te}_l - \mathcal{L}^{II}_{l} = \frac{1}{2} \left[ \frac{\pi^{te}_{l}\pi^{te}_{l}}{\gamma^2r^2} + \gamma^2 r^2 \drh{te}\drh{te} + \gamma^2 l(l+1) \h{te}\h{te} \right]\,,
\end{equation}
with the canonical commutation relations
\begin{equation}
\left[\pi^{te}_{l}(t,r),h^{te}_{l}(t,r')\right]= i \delta(r-r') \, .
\end{equation}
Finally, by making the identifications
\begin{equation}
\phi^{II}_{l}=\gamma r \h{te} \quad , \quad P^{II}_{l}=\frac{\pi_{l}^{te}}{r\gamma}\,,
\end{equation}
the Hamiltonian of the scalar field modes is recovered in the form
\begin{equation}
\mathcal{H}^{II}_{l}= \frac{1}{2} \left[ P^{II}_{l}P^{II}_{l} + \partial_r \phi^{II}_{l}\partial_r \phi^{II}_{l} + \frac{l(l+1)}{r^2} \phi^{II}_{l}\phi^{II}_{l} \right]\,,
\label{GravSSII}
\end{equation}
associated with the commutation relations
\begin{equation}
\left[P^{II}_{l}(t,r),\phi^{II}_{l}(t,r')\right]= i \delta(r-r')\, .
\label{GravScomII}
\end{equation}
\subsection{Analysis of the mode \texorpdfstring{$l=0$}{Lg}}
For the case $l=0$ the tensor spherical harmonics of spin $J=1$ and $J=2$ do not exist and the Lagrangian (\ref{GravSllagtot}) reduces to
\bea
\mathcal{L}_{l=0}= \mathcal{L}_{l=0}^I + \mathcal{L}_{l=0}^{II}= -\frac{r^2}{2} \alpha^2\dot{h}^{te}_0\dot{h}^{te}_0-\sqrt{2}r^2\alpha \dot{h}^{0l}_0\dot{h}^{te}_0+ \frac{r^2}{2}\alpha^2 \drho{te}\drho{te} \nonumber \\
+\sqrt{2}\alpha r \ho{te}\drho{0l} +\ho{0l}\ho{0l}+2\sqrt{2}\ho{0r}\left[\sqrt{2}r\dot{h}^{0l}_0 -\alpha r^2 \partial_r \dot{h}^{te}_0 - \alpha r \dot{h}^{te}_0\right] \label{GravSlag0} \\
+\sqrt{2}\ho{00} \left[-\sqrt{2}r\drho{0l} -\sqrt{2}\ho{0l} +\alpha r^2 \partial_r\drho{te}+3\alpha r \drho{te}+ \alpha \ho{te} \right] \, . \nonumber
\eea
The equation of motion of $\ho{00}$ produces the constraint
\begin{equation}
-\sqrt{2}r\drho{0l} -\sqrt{2}\ho{0l} +\alpha r^2 \partial_r\drho{te}+3\alpha r \drho{te}+ \alpha \ho{te} =0 \, .
\label{GravSvinc0}
\end{equation}
By proposing the equivalent of (\ref{GravSreplace}) and replacing it in (\ref{GravSvinc0}), we get that the constants $a$ and $b$ must be $a=b=\alpha/\sqrt{2}$ without the need of fixing $\alpha$; in other words,
\begin{equation}
\ho{0l}= \frac{\alpha}{\sqrt{2}}\left(\ho{te} +r \drho{te}\right) \quad \forall\alpha \, .
\label{GravSreplace0}
\end{equation}
On the other hand, taking $\ho{0r}$ as a Lagrange multiplier gives
\begin{equation}
\sqrt{2}r\dot{h}^{0l}_0 -\alpha r^2 \partial_r \dot{h}^{te}_0 - \alpha r \dot{h}^{te}_0=0\,.
\label{GravSvinc02}
\end{equation}
The equations (\ref{GravSreplace0}) and (\ref{GravSvinc02})
are clearly consistent with each other. Replacing both of them in (\ref{GravSlag0}) yields $\mathcal{L}_{l=0}=0$, allowing us to conclude that the $l=0$ mode makes no contribution to the EE for any choice of gauge.
\subsection{Analysis of the mode \texorpdfstring{$l=1$}{Lg}}
For the case $l=1$ the tensor spherical harmonics of spin $J=0$ and $J=1$ are well defined but the ones corresponding to $J=2$ do not exist. Hence the Lagrangian for the mode $I$ now reads
\begin{equation}
\mathcal{L}_{l=1}^I = \frac{r^2}{2} \dot{h}^{1m}_1\dot{h}^{1m}_1 +r^2\drhu{0m}\drhu{0m}+ 2\hu{0m}\hu{0m} +\sqrt{2}\dot{h}^{1m}_1\left( r\hu{0m} - r^2\drhu{0m}\right)\,.
\label{GravSmodI1}
\end{equation}
In an analogous way to the case $l\geq 2$, we obtain $\pi_{1}^{1m} = r^2 \dot{h}^{1m}_1 + \sqrt{2}\left( r\hu{0m} - r^2\drhu{0m}\right) $ and the Hamiltonian can be expressed as
\begin{equation}
\mathcal{H}^I_{l=1} = \frac{\pi_{1}^{1m}\pi_{1}^{1m}}{2r^2}-\sqrt{2}\hu{0m}\left(\partial_r \pi_{1}^{1m} + \frac{\pi_{1}^{1m}}{r} \right) \, .
\label{GravShamI0}
\end{equation}
Working with $\hu{0m}$ as a multiplier gives
\begin{equation}
\mathcal{H}^I_{l=1} = \frac{\pi_{1}^{1m}\pi_{1}^{1m}}{2r^2}\,, \,\,\,\pi_{1}^{1m}= -r \partial_r \pi_{1}^{1m} \, ,
\label{GravShamI02}
\end{equation}
which implies that the mode $I$ will not contribute to the EE for $l=1$.
Moreover, for $l=1$ the Lagrangian of mode $II$ follows from (\ref{GravSmodII}) as
\bea
\mathcal{L}_{l=1}^{II} &=& \frac{r^2}{2} \left(\beta^2-\alpha^2\right)\dot{h}^{te}_1\dot{h}^{te}_1-\sqrt{2}r^2\alpha \dot{h}^{0l}_1\dot{h}^{te}_1+ \frac{r^2}{2}\alpha^2\drhu{te}\drhu{te} \nonumber \\
&+&\sqrt{2}\alpha r \hu{te}\drhu{0l} +\hu{0l}\hu{0l}+\left(\beta^2-\frac{\alpha\beta}{\sqrt{2}} \right) \hu{te}\hu{te}+\left( \sqrt{2}\alpha - 2\beta \right)\hu{0l}\hu{te} \nonumber \\
&+&2\hu{0r}\hu{0r} +2\hu{0r}\left[ 2r\dot{h}^{0l}_1 -\sqrt{2}\alpha r^2 \partial_r \dot{h}^{te}_1 - \left(\sqrt{2} \alpha +\beta\right)r \dot{h}^{te}_1\right] \nonumber \\
&+&\hu{00} \left[-2r\drhu{0l} -4\hu{0l} +\sqrt{2}\alpha r^2 \partial_r\drhu{te}+\left(3\sqrt{2}\alpha+2\beta \right) r \drhu{te}+4\beta \hu{te} \right] \, .
\label{GravSmodII1}
\eea
Thus, $\hu{00}$ yields the constraint
\begin{equation}
-2r\drhu{0l} -4\hu{0l} +\sqrt{2}\alpha r^2 \partial_r\drhu{te}+\left(3\sqrt{2}\alpha+2\beta \right) r \drhu{te}+4\beta \hu{te} =0\,.
\label{GravSvincII1}
\end{equation}
In this calculation we also propose the locality relation (\ref{GravSreplace}); replacing it in (\ref{GravSvincII1}) we obtain that, for every choice of gauge,
\begin{equation}
\hu{0l}= \frac{\alpha}{\sqrt{2}} \hu{te} + r \beta \drhu{te}\quad \forall \, \alpha,\, \beta \, .
\label{GravSreplaceII1}
\end{equation}
Finally, using (\ref{GravSreplaceII1}) in (\ref{GravSmodII1}) produces $\mathcal{L}_{l=1}^{II}=0 $. Thus, there is no contribution of the mode $II$ for $l=1$.
\subsection{Analysis of the gauge fixing}
We have already restricted the gauge choice with the relation (\ref{alphafix}), which allows us to write the dynamics of the two modes in the same fashion as that of the scalar modes. Now we analyze whether the field $h_{\mu\nu}$ or, more conveniently, the resulting degrees of freedom associated with each mode, $\h{1m}$ and $\h{te}$, can be written in terms of gauge invariant operators inside the sphere. For this purpose, we appeal to the expression (\ref{RDD}) of the gauge invariant curvature tensor.
Using computer-based algebraic manipulation we obtain that the mode $I$ field given by $\h{1m}$ can be rewritten in terms of the ``electric-radial-electric-magnetic'' contraction of the Riemann tensor
\begin{equation}
R_{erem}^{lm}=e^\mu r^\nu e^\rho m^\sigma R_{\mu\nu\rho\sigma}^{lm} = F_{lm}(\theta,\varphi) \frac{\h{1m}(t,r)}{r^2} \,,
\label{erem}
\end{equation}
where $F_{lm}(\theta,\varphi) $ is a function of the angles $\theta$ and $\varphi$ for each $l$ and $m$. Specifically, for $m=0$ it is valid that
\begin{equation}
F_{l0}(\theta)=\frac{\pi^{\frac{5}{2}}l\sqrt{\Gamma^3(l+1)\Gamma(l)}}{16\Gamma^2(l+2)}\pl{0}[\pl{1}]^3\left(4\pl{2}\cot{\theta}+\pl{3}\right)\,,
\end{equation}
where $\pl{m}$ are the associated Legendre polynomials. The important point in this expression is that the relation between $\h{1m}(t,r)$ and the curvature does not involve radial derivatives; such derivatives would make the algebra generated by this field non-local with respect to the one of gauge invariant operators in the sphere.
For the mode $II$, under the partial gauge choice (\ref{alphafix}), we can further set $\alpha=0$ or equivalently
\begin{equation}
\gamma=-\frac{\beta}{\sqrt{(l-1)(l+2)}} \, ,\label{gabe}
\end{equation}
to obtain locality with respect to the curvature tensor.
With this choice (\ref{GravSreplace}) reduces to an algebraic relation (without any derivatives) between the fields $\h{0l}$ and $\h{te}$ given by
\begin{equation}
\h{0l}=\sqrt{\frac{(l-1)(l+2)}{2}}\beta\h{te} \, .
\end{equation}
From this relation it follows that the remaining field $\h{te}$ can be computed from the ``electric-magnetic-electric-magnetic'' contraction of the Riemann tensor in a local way in $t,r$ as
\begin{equation}
R_{emem}^{lm}=e^\mu m^\nu e^\rho m^\sigma R_{\mu\nu\rho\sigma}^{lm} = G_{lm}(\theta,\varphi) \frac{\h{te}(t,r)}{r^2}\,,
\label{emem}
\end{equation}
where $G_{lm}(\theta,\varphi)$ is another function of the angles $\theta$, $\varphi$ for each $l$ and $m$. For $m=0$ it reads
\bea
G_{l0}(\theta)&=&\frac{\pi^{\frac{5}{2}}\beta\sqrt{l(l+2)\Gamma(l)}}{16(l+1)^2\sqrt{\Gamma(l+3)}}[\pl{1}]^4\left(4l(l+1)\pl{0}\right. \nonumber \\
&&+\left. 2(l(l+1)+2)\pl{1}\cot{\theta}+(l(l+1)+2)\pl{2}\right)\,.
\eea
Therefore, we conclude that, by taking $\alpha=0$, and eq. (\ref{gabe}) for the gauge fixing, the gauge fixed field $h_{\mu\nu}$ inside the sphere generates the same algebra as the gauge invariant operators. This algebra is equivalent to the one of the modes of two scalar fields except for the $l=0,1$ modes which are absent for the helicity $2$ theory.
\subsection{Entanglement entropy and logarithmic coefficient}
To sum up, the EE associated with linearized gravitons in a sphere of radius $R$ is equivalent to the one corresponding to two scalar fields without contributions of the $l=0$ and $l=1$ angular momentum modes (or a Maxwell field without the $l=1$ modes).
As recalled in Section \ref{maxwellsphere}, the entanglement entropy of a scalar in a sphere has a universal logarithmic term $-1/90 \log(R/\epsilon)$, and the mode $l=0$ for the scalar corresponds to a massless $d=2$ scalar field in the $r>0$ half-line with entropy given by $1/6 \log(R/\epsilon)$. To obtain the universal logarithmic term for gravitons we just need the logarithmic contribution of the $l=1$ mode for the scalar.
This mode is a $d=2$ field in the half-line $r>0$ with Hamiltonian
\begin{equation}
\mathcal{H}= \frac{1}{2} \left[ P^2 + (\partial_r \phi)^2 + \frac{2}{r^2} \phi^2 \right]\,.
\end{equation}
This model is scale invariant, but in contrast with the $d=2$ scalar field it contains a potential term $2/r^2\, \phi^2$. We have to compute the entanglement entropy in an interval $r\in (0,R)$. The ultraviolet divergent piece of the EE comes from entanglement in high energy fluctuations around the boundary $r=R$. For these high energy fluctuations the effect of the potential can be neglected, and then we must have a divergent piece that is the same as for the usual scalar field, $S\sim -1/6 \,\log(\epsilon)$. As the model does not contain any dimensionful scale, by dimensional reasons we obtain
\begin{equation}
S= \frac{1}{6} \,\log(R/\epsilon)+ \textrm{const}\,.
\end{equation}
We have checked this numerically on the lattice to excellent (five-digit) precision.
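For reference, the lattice check can be sketched with the standard correlator method for Gaussian states; the minimal script below is our own illustration (unit lattice spacing, Dirichlet condition at the origin), not the original computation.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def interval_entropy(N=1000, n=100, l=1):
    # Discretized H = 1/2 sum_j [pi_j^2 + (phi_{j+1}-phi_j)^2
    #                            + l(l+1)/j^2 phi_j^2]
    j = np.arange(1, N + 1)
    K = np.diag(2.0 + l * (l + 1) / j**2)
    K -= np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    K12 = sqrtm(K).real
    X = 0.5 * np.linalg.inv(K12)    # <phi_i phi_j> in the ground state
    P = 0.5 * K12                   # <pi_i pi_j> in the ground state
    nu = np.sqrt(np.linalg.eigvals(X[:n, :n] @ P[:n, :n]).real)
    nu = nu[nu > 0.5 + 1e-10]       # symplectic eigenvalues above 1/2
    return np.sum((nu + .5) * np.log(nu + .5)
                  - (nu - .5) * np.log(nu - .5))

# Fitting interval_entropy(N, n, l=1) against log(n) for 1 << n << N
# should display the 1/6 coefficient quoted above.
\end{verbatim}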
Hence, as for the $l=0$ mode, we get a $1/6$ coefficient for the logarithmic term of the $l=1$ modes. Consequently, summing up, the logarithmic coefficient for the graviton in the sphere is given by twice the coefficient of the scalar, subtracting two times the $l=0$ mode contribution and $2 (2 l+1)=6$ times the $l=1$ mode contribution, obtaining
\begin{equation}
2\times \left(-\frac{1}{90}-\frac{1}{6}-3 \times \frac{1}{6} \right)=-\frac{61}{45}\,.
\end{equation}
As it seems to be the rule, the magnitude of the logarithmic coefficient increases with spin, being larger for helicity $2$ than for Maxwell and scalar fields. The entropy on the sphere then reads
\begin{equation}
S=c \frac{A}{\epsilon^2}-\frac{61}{45}\, \log(R/\epsilon)\,.
\end{equation}
\section{Discussion}
\label{dis}
We have computed the EE for free gravitons in flat space for a region between parallel planes and for the sphere. For the wall we find a universal coefficient that coincides with the one of two scalar fields. For the sphere the logarithmic term is given by $-61/45$, which is equivalent to two scalar fields where the $l=0$ and $l=1$ modes are missing. These results refer to clear physical quantities. First, our real time approach allows us to clarify that these are entropies of gauge invariant operator algebras of the theory inside the regions. Second, the meaning of these universal terms for the continuum model follows from the fact that they coincide with the ones obtained using the mutual information. We can write a regularized entropy as \cite{Casini:2015woa},
\begin{equation}
S_\epsilon(A)\equiv \frac{1}{2}\, I_\epsilon(A_+,A_-)\,.
\end{equation}
In this formula one computes the mutual information between two regions $A_+$ and $A_-$ covering most of the inside and outside parts of the boundary of $A$ respectively, but symmetrically separated from the boundary by a distance $\epsilon/2$. This can be thought of as a form of point-splitting regularization of the entropy. The mutual information for disjoint regions is completely unambiguous in QFT, and thus so is $S_\epsilon(A)$. In particular, mutual information is unaffected by details of the algebra definition such as center terms (or edge modes). In the present case our results for the entropy are indeed equivalent to $S_\epsilon(A)$. This is the case for the full scalar field EE \cite{Casini2016}, and this identification also holds for the $l=0,1$ modes. These latter one-dimensional fields have a mutual information that diverges as $-1/3 \log(\epsilon)$ as the boundaries of $A_+$ and $A_-$ approach each other. This holds for the free scalar, and this UV result cannot change due to the potential or the boundary condition at the origin.\footnote{There is however a subleading $-1/2\log(\log(R/\epsilon))$ term in the mutual information for the $l=0$ mode that is not present in the entropy (with the usual lattice regularization) \cite{Casini2016}. This comes from superselection sectors for the $d=2$ scalar \cite{ss1,esecal}.}
There are other results in the literature concerning the logarithmic coefficient due to gravitons, especially in black hole backgrounds (see for example \cite{Fursaev:1996uz,Solodukhin:2011gn,Sen:2012dw,Solodukhin:2015hma}; see also \cite{vassi} and references therein for gravitons in de Sitter space). There is the general expectation that the logarithmic coefficient for the sphere should be proportional to the $A$ anomaly\footnote{For black hole backgrounds another contribution is expected, proportional to the $c$ anomaly coefficient.} \cite{scalar,scalar2}. The free graviton does not have a symmetric gauge invariant stress tensor due to the Weinberg-Witten theorem \cite{ww}, and then the definition of the $A$ anomaly is uncertain.\footnote{We thank a communication by Sergey Solodukhin regarding anomalies for the graviton.} For a Maxwell field there is a mismatch between the logarithmic term in the entanglement entropy and the $A$ anomaly, which is solved by coupling the theory to (heavy) charges. In the present case a clarification of what is the right coefficient for interacting gravity seems to be further away, since any interactions would take us away from the QFT setting, thus raising the problems of operator algebra localization. Eternal black holes seem to be a more natural setup in gravity than the sphere, since they are related to a partition of the asymptotic space in two. In this same sense, there are also indications that in full quantum gravity a boundary separating localized degrees of freedom should be an extremal surface \cite{Camps:2018wjf,camps2}. This is of course the case of the entanglement wedge in holographic EE, but not of the sphere in Minkowski space.
A natural conjecture that suggests itself from our results for the Maxwell field and the graviton is that on the sphere the EE of higher helicity $h>2$ fields should be equivalent to the one of two scalar fields where the $l=0,\cdots, h-1$ modes are subtracted. For the same reasons discussed in the previous section these modes have an EE given by
\begin{equation}
S= \frac{1}{6} \,\log(R/\epsilon)+ f(l)\,,
\end{equation}
where $f(l)$ is a function of the angular momentum. Hence, using $\sum_{l=0}^{h-1} (2 l+1)=h^2$, we would have a logarithmic coefficient\footnote{After this paper appeared in the arXiv database, Dowker noted that this same result would follow from thermodynamics in de Sitter space \cite{dodo}. He also obtains the result for fermion fields of different helicity.}
\begin{equation}
-2 \left(\frac{1}{90}+ \frac{1}{6} \sum_{l=0}^{h-1} (2 l+1)\right)=-\frac{1+15 h^2}{45}\,.
\end{equation}
Another interesting problem is how to fix the gauge for the graviton so that $h^{\mu\nu}$ inside a region of arbitrary shape is given in terms of the gauge invariant operators localized in the same region. We hope to come back to these problems in the future.
\section*{Acknowledgments}
We thank discussions with Pablo Bueno, Joan Camps and Marina Huerta.
This work was partially supported by CONICET, CNEA
and Universidad Nacional de Cuyo, Argentina. The work of H. C. is partially supported by an It From Qubit grant by the Simons foundation.
\section{Introduction}
\label{intro}
Fashion is one of the most glamorous industries of modern society, making great contributions to the global economy. With the development of image processing and information retrieval techniques, some investigation has recently been conducted in this domain, including fashion design~\cite{rostamzadeh2018fashion,ma2017towards}, fashion product recommendation~\cite{han2017learning,liu2012hi,hu2015collaborative,li2017mining}, and conversational fashion image retrieval~\cite{guo2018dialog,guo2019fashion,zhang2020reward}.
\begin{figure}[]
\centering
\includegraphics[width=\linewidth]{data.png}
\caption{The top part is an example of three fashion product images. Each image has some fashion attributes. The bottom part is a sample session of multiturn conversational fashion image retrieval with natural language feedback.}
\label{dataset}
\end{figure}
Building an interactive conversational fashion image retrieval system based on user feedback has drawn increasing research interest in the past years. One important task in interactive conversational fashion image retrieval is target image selection, which aims to find the best-matched fashion image, called the target image, from a set of candidate fashion product images via interactions involving intermediate retrieved images, called reference images, and user natural language feedback texts. Consider a collection of fashion product images as shown in the top part of Figure \ref{dataset}; each image is associated with some fashion attributes. Suppose that a user has an information need for a fashion product. After an initial interaction with the system, a fashion product image is retrieved and presented to the user. Based on this intermediate reference image, the user typically wishes to refine the retrieval by providing natural language feedback texts, which describe the relative difference between the current retrieved reference image and the desired one. Such a process is defined as a turn. If the user is not satisfied with the retrieved image, more turns are conducted until the desired product is retrieved. This multiturn process, consisting of several reference images, feedback texts, as well as the final target image, is called a session.
Studies on multimodal feature composition have shown great promise in single-turn conversational image retrieval. Treating the query as a composition of an image and a text, these methods tackle the task by combining the visual and language representations~\cite{noh2016image,718510,vo2019composing,anwaar2021compositional}.
These works, though having reasonable performance, are limited to single-turn feedback and cannot handle the multiturn conversational image retrieval task in our setting.
Recently some multiturn conversational image retrieval methods have been investigated; however, their architectures and techniques are quite simple. ~\cite{kovashka2012whittlesearch} proposes a unique mode of feedback for image search, where users are allowed to give property-related binary feedback attempting to match their mental model of the target image(s). Others try to retrieve the target image based on multiple relevance levels~\cite{datta2008image} or relative attributes~\cite{kovashka2012whittlesearch}. ~\cite{liao2018knowledge} proposes a knowledge-aware multimodal dialogue model which gives special consideration to the semantics and domain knowledge revealed in visual content. ~\cite{guo2018dialog} first introduces a deep learning based approach to interactive image search which enables users to provide feedback via natural language. Based on this work, ~\cite{zhang2019text} proposes a novel constraint augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time. These works have several drawbacks: 1) Simply conducting attribute matching is far from understanding users' needs and thus leads to poor performance. The existence of synonyms or metaphors makes it even harder to retrieve the target image on the semantic level. 2) The neural network models employed are not effective. Neural models which have gained a lot of research interest, such as the attention mechanism~\cite{bahdanau2016neural} and the Transformer~\cite{vaswani2017attention}, are not leveraged. 3) Fashion attribute information associated with products, such as fabric, shape, etc., is not fully used in these models, although these attributes show great potential in related tasks such as fashion image modeling and style prediction.
We propose a framework that can effectively handle conversational fashion image retrieval with multiturn natural language feedback texts, as described above.
One characteristic is that it searches for candidate images based on exploitation of the encoded reference image and feedback text information together with the conversation history via a novel neural framework. It facilitates better predictions based on the information from all previous turns within the concerned session. Another unique component is a comparative analysis module which establishes a relationship between the differential representation derived from the reference image and the candidate image, and the feedback texts contributing to a matching score for the candidate image.
To leverage the fashion attribute information of candidate images, a mutual attention mechanism is designed containing both the attention from the candidate image to the feedback texts and the other way round.
The former attention helps to obtain a flexible feedback text representation according to each candidate image fashion attribute, forming one matching vector for each fashion attribute. The latter one aims to adjust the corresponding weights with respect to the matching vectors obtained in the first stage, which contribute to the final matching score.
Since there is no existing suitable fashion product image dataset that can appropriately capture user search scenario in multiturn settings, we derive a large-scale dataset based on an existing dataset which supports conversational fashion image retrieval with a single-turn natural language feedback. We integrate multiple single-turn data followed by additional manual efforts on a scrutinizing and consistency verification process ensuring that a multiturn session can consistently capture a particular user's fashion product search need.
To sum up, the main contributions of this paper are listed as follows:
\begin{itemize}
\item Our model interacts with the dialog history from previous turns via a novel neural framework. It also establishes a relationship between the differential representation derived from the reference image and candidate image, and the feedback text contributing to a matching score for the candidate image.
\item Fashion attribute information of candidate images is leveraged via a mutual attention consisting of attention from both candidate image to feedback texts of each turn and the other way round.
\item We derive a new conversational fashion product image retrieval dataset supporting multiturn settings from an existing dataset.
\item Our model outperforms all existing state-of-the-art methods in the experiment.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[height=75mm,width=157mm]{Overall.png}
\caption{The overall architecture of our framework. It consists of three modules, namely, composite representation module, comparative analysis module, and fashion attribute module. The input of the framework is a conversation context composed of several turns of a session. The output is a matching score corresponding to the candidate image.}
\label{fig:overall}
\end{figure*}
\section{Related Work}
\subsection{Conversational Image Retrieval}
Recent developments in computer vision and natural language processing methods have led to considerable interest in image retrieval related tasks, including image captioning~\cite{rennie2017self,vinyals2015show}, visual question-answering (VQA)~\cite{antol2015vqa,das2018embodied,goyal2017making}, and cross-modal image retrieval~\cite{vo2019composing,noh2016image,han2017automatic}. Among these tasks, some research focuses on the topic of image retrieval with natural language feedback. Aiming at selecting the desired image according to natural language feedback, efforts are made by incorporating users' feedback into the reference image and retrieving the image having the highest similarity score. Performance on this task has been enhanced in many works. For instance, some methods use a predefined set of attribute values to facilitate product retrieval applications~\cite{han2017automatic}. Some seek to fuse the image \& text features, producing a more precise representation of the image-text pair, ranging from simple techniques (e.g. concatenation, simple feed-forward networks) to advanced techniques such as parameter hashing~\cite{noh2016image}, a composition classifier~\cite{718510,anwaar2021compositional}, or residual connections~\cite{vo2019composing}. Some methods improve the performance by adding more features such as text-only, image-only, and attribute-only features~\cite{shin2020fashioniq,li2019designovels}. Notably, ~\cite{yu2020curlingnet} achieves quite good performance in single-turn conversational image retrieval by adding a correction module which takes the difference between the reference and target image embeddings into account.
\subsection{Fashion Search and Recommendation}
When making fashion search decisions, people usually show different preferences for product attributes (e.g. a dress with short sleeves and floral prints), which can correspond to the fashion attributes the target image contains (e.g. sleeves, prints, etc). The fashion recommendation task aims to choose the best-matched product from a large number of fashion products satisfying personalized demands. Along this line, some studies have been proposed to improve the performance of fashion recommendation. In order to model visual characteristics and user preferences, some methods utilize pre-trained CNNs to generate the image representation~\cite{mcauley2015image,he2015vbpr,he2016ups,wu2019hierarchical}. Some methods attempt to get a better understanding of products by leveraging aesthetics and style features~\cite{liu2017deepstyle,yu2018aesthetic}, while some methods provide recommendations with the purpose of explaining the recommendation reason through intuitive fashion attribute semantic highlights in a personalized manner~\cite{hou2019explainable}.
Analyzing fashion attributes is essential in fashion retrieval. Fashion attributes, which include texture, fabric, style, are used to drive the learning of retrieval representations~\cite{al2017fashion,hsiao2017learning,hsiao2018creating}. Typically, fashion attributes extracted from the side information are always informative. In some works, attribute-guided learning is a key factor for retrieval accuracy improvement~\cite{wu2017image,yao2017boosting,you2016image}. Relative attributes, first introduced by ~\cite{parikh2011relative}, can be seen of a supplement of user feedback. Following this concept, a system named 'WhittleSearch' is proposed for fashion image retrieval~\cite{kovashka2012whittlesearch}. When a user states a query, the system calculates the relative strength of attributes to provide the retrieval result. ~\cite{kovashka2013attribute} leverages attributes for guiding relevance feedback information in image search. ~\cite{kovashka2017attributes} aims to discover semantic visual attributes to assist downstream tasks such as image retrieval.
\section{Our Framework}
\subsection{Problem Definition}
As described in the Introduction section, our goal is to find the most relevant image, called the target image, from a collection of candidate images satisfying the user's information need. Typically, each candidate image is associated with some fashion attributes. In each session, the user refines the retrieval result by providing natural language feedback texts. The initial retrieved image is treated as the first reference image, denoted as $p_1$, and the first natural language feedback text is denoted as $t_1$. A dialog context composed of $n$ turns of a session is represented as $(p_1,t_1,...,p_n,t_n)$, where $p_i$ denotes the $i$-th reference image and $t_i$ represents the corresponding feedback text. In each turn $i$, the feedback text $t_i$ describes the relative difference between the current reference image $p_i$ and the desired image. The aim is to retrieve the target image $trg$ via computing a matching score with each candidate image in the fashion dataset.
The training dataset consisting of $N$ sessions is denoted as $D = \{[ref,trg]_i\}_{i=1}^{N}$, where the reference information $ref = (p_1,t_1,\ldots,p_{n_i},t_{n_i})$. The $i$-th session contains $n_i$ turns and each turn is represented as $(p_k,t_k)_{k=1}^{n_i}$, where ${p_k}$ denotes a reference image and ${t_k}$ denotes a user feedback text.
\subsection{Framework Overview}
Figure \ref{fig:overall} depicts the overview of our proposed framework, which consists of three modules, namely, composite representation module, comparative analysis module, and fashion attribute module. The composite representation module aims to extract and integrate the reference image and feedback text features of each turn and form a composite feature representation. Then it makes an association between the composite feature and the candidate image representation. Specifically, the image representation is extracted using residual networks. The natural language feedback text of each turn is embedded using pre-trained word embeddings. The embedding of each text is then fed into a Transformer-based self-attention module. By making each sentence attend to itself, dependencies of different levels are captured. The image and text representations of each turn are composed into one representation and fed into a recurrent network following the turn order, with a pooling layer. Finally a partial matching score for the candidate image, which can be treated as a matching score from the perspective of composite features, is obtained.
The comparative analysis module comes up with a differential representation in order to compare the difference between the reference image and the candidate image. The representation is acquired by feeding the reference image representation, the candidate image representation, as well as the difference of them into a fully connected layer. It then establishes a relationship between the differential representation and the feedback text by matching the representation with the feedback representation at each turn. After that, the matched vectors of each turn are recorded via a recurrent network with a pooling layer contributing to a partial matching score for the candidate image.
The fashion attribute module exploits the attribute information of the candidate image and calculates the mutual attention between candidate image and feedback texts. We first embed the fashion attribute texts using the same pre-trained word embedding method as above. Then a recurrent network is applied for the feedback text embedding at each turn. A mutual attention matching method is designed to handle the output of the pooling layer and the candidate image embedding. In the first attention stage, the feedback texts of all turns are embedded according to different fashion attributes forming one matching vector result for each attribute. In the second attention stage, the corresponding weights are learned with respect to the matching vectors obtained in the first stage. As a result, a partial matching score due to fashion attributes is obtained from the attentive sum of the weighted vectors.
In addition, in order to get a more precise text representation, we employ a self-attention mechanism. By making the textual feedback at each turn attend to itself, intra word-level dependencies are captured. This helps to learn hierarchical representations such as word-level, phrase-level as well as sentence-level representations.
\subsection{Preprocessing and Encoding}
\label{Preprocessing}
\subsubsection{Image Transformation}
Before encoding the images into vectors, we first apply some image transformation techniques to the product images. Image transformation aims to make some desired information more obvious or explicit and to augment the original data. These techniques include random horizontal flip, random rotation, random translation and random scale, which horizontally flip, rotate, translate, and resize the given image randomly with a given probability.
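A minimal sketch of such a pipeline using torchvision is given below; the probabilities and parameter ranges are illustrative assumptions, not the tuned values.
\begin{verbatim}
from torchvision import transforms

# Illustrative augmentation pipeline; parameter values are assumptions.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=15,            # random rotation
                            translate=(0.1, 0.1),  # random translation
                            scale=(0.9, 1.1)),     # random scale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
\end{verbatim}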
\subsubsection{Image Encoder}
\label{image encoder}
After the image transformation, we encode the image representation using ResNet-101 and ResNet-152. We share the parameters for image representation between reference and candidate images. The encoder embeds the $i$-th image from a multiturn session $p_{i}$ into a vector representation $x_{i}^p = ImgEnc(p_{i}) \in \mathbb{R}^{d_p}$. The image encoder is trained from scratch without pretraining on any external data.
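A possible sketch of the shared encoder is shown below: we strip the classification head of a torchvision ResNet and project the pooled feature to dimension $d_p$; the projection layer and dimension are our assumptions.
\begin{verbatim}
import torch.nn as nn
from torchvision import models

class ImgEnc(nn.Module):
    def __init__(self, d_p=512):
        super().__init__()
        resnet = models.resnet152(pretrained=False)   # trained from scratch
        # Drop the final classification layer, keep the pooled feature.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.proj = nn.Linear(resnet.fc.in_features, d_p)

    def forward(self, img):                    # img: (B, 3, H, W)
        feat = self.backbone(img).flatten(1)   # (B, 2048) pooled feature
        return self.proj(feat)                 # (B, d_p)
\end{verbatim}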
\subsubsection{Spell Checker}
Since the dataset contains some misspelled words (e.g. strip$\to$stripe, colorfull$\to$colorful), we correct the spelling using the {\itshape pyspellchecker}\footnote{https://pypi.org/project/pyspellchecker/} python package if the word cannot be found in the vocabulary. We further manually collect some commonly misspelled fashion-related words to minimize residual misspellings. In the experiment, the collection contains 855 fashion-related misspelled words.
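A sketch of this correction step follows; the domain word list below is a hypothetical excerpt of the 855 collected words.
\begin{verbatim}
from spellchecker import SpellChecker

spell = SpellChecker()
# Hypothetical excerpt of the manually collected fashion vocabulary.
spell.word_frequency.load_words(["toptee", "sleeveless", "colorblock"])

def correct(tokens, vocab):
    # Only correct tokens that are absent from the model vocabulary.
    return [w if w in vocab else (spell.correction(w) or w)
            for w in tokens]
\end{verbatim}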
\subsubsection{Text Encoder}
\label{text encoder}
To encode the textual feedback of each turn, the texts are firstly tokenized, and the word embeddings are initialized with GloVe~\cite{pennington2014glove}. For example, the $i$-th feedback of one session $t_i$, after being embedded into a vector, is represented as $e_{i}^t\in\mathbb{R}^{l\times{d_t}}$, where $l$ is the number of words in the sentence and $d_t$ is the embedding dimension. The word embeddings are then fed into a self-attention stack encoder. This stack has three inputs: the query sentence $Q\in\mathbb{R}^{l_Q\times{d_t}}$, the key sentence $K\in\mathbb{R}^{l_K\times{d_t}}$, and the value sentence $V\in\mathbb{R}^{l_V\times{d_t}}$. In our case, all of them are equal to the input embedding $e_{i}^t$.
Specifically, the self-attention encoder is made up of several attention blocks. These blocks have the same structure and are stacked together. Each block takes the output of the former block as input.
Inside the block, each word in the query sentence attends to the words in the key sentence via Scaled Dot-Product Attention. The block then applies the resulting weights to the value input and calculates the weighted sum. Finally it adds this vector to the query sentence and feeds the result into a fully connected network~\cite{vaswani2017attention}.
Our feedback text encoding result can be represented as the pooling result of the hierarchical self-attention stack. We use average pooling, which takes the average of the embeddings of different granularities, as the result.
\begin{equation}
x_i^t = TxtEnc(t_i) = Pooling(e_{i,j}^t)_{j=1}^{n_s}\in\mathbb{R}^{d_t}
\end{equation}
\begin{equation}
e_{i,j}^t = SelfAttention(e_{i,j-1}^t,e_{i,j-1}^t,e_{i,j-1}^t)
\end{equation}
\begin{equation}
e_{i,0}^t = e_i^t
\end{equation}
where $n_s$ is the number of self-attention blocks, $x_i^t$ represents the encoding result of the feedback text $t_i$, $e_{i,j}$ is the embedding output vector at the $j$-th attention block.
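A PyTorch sketch of this stacked encoder is given below; the head count and the feed-forward design are our assumptions, since the equations above only fix the overall structure.
\begin{verbatim}
import torch
import torch.nn as nn

class SelfAttnStack(nn.Module):
    def __init__(self, d_t=300, n_blocks=3, n_heads=4):
        super().__init__()
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(d_t, n_heads, batch_first=True)
            for _ in range(n_blocks))
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(d_t, d_t), nn.ReLU(),
                          nn.Linear(d_t, d_t))
            for _ in range(n_blocks))

    def forward(self, e):                # e: (B, l, d_t) word embeddings
        levels = []
        for attn, ffn in zip(self.attn, self.ffn):
            a, _ = attn(e, e, e)         # Q = K = V = previous level
            e = ffn(e + a)               # residual + feed-forward
            levels.append(e)
        # average over blocks and over words -> x_i^t of shape (B, d_t)
        return torch.stack(levels).mean(dim=0).mean(dim=1)
\end{verbatim}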
\subsection{Composite Representation Module}
\label{composite representation module}
\subsubsection{Multimodal composer}
At the $i$-th turn, the multimodal composer composes the reference image embedding $x_i^p$ and the feedback text $x_i^t$ into a joint semantic representation $x_i^c$. Here we use ComposeAE, which is an effective and state-of-the-art method for combining image and text through autoencoding~\cite{anwaar2021compositional}. It takes the image and text embeddings as input, and outputs the composed feature. This method shows promising results on image retrieval tasks where texts are used to express the difference between reference and target images. The composition process can be formulated as follows:
\begin{equation}
x_i^c = ComposeAE(x_i^p,x_i^t)
\end{equation}
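ComposeAE itself composes the two modalities through an autoencoder with a rotation in complex space; we refer the reader to the original paper and released code for the details. Purely to make the interface concrete, a simplified gated-fusion stand-in (emphatically not the actual ComposeAE) could look like:
\begin{verbatim}
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Simplified stand-in with the same interface as ComposeAE."""
    def __init__(self, d_p=512, d_t=300):
        super().__init__()
        self.txt2img = nn.Linear(d_t, d_p)
        self.gate = nn.Sequential(nn.Linear(2 * d_p, d_p), nn.Sigmoid())

    def forward(self, x_p, x_t):         # (B, d_p), (B, d_t)
        t = self.txt2img(x_t)
        g = self.gate(torch.cat([x_p, t], dim=-1))
        return g * x_p + (1 - g) * t     # composed feature x_i^c
\end{verbatim}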
\subsubsection{Multiturn Analyzer}
\label{Multiturn Analyzer}
The multiturn analyzer is used to aggregate the encoded multimodal representation with the conversation context history from previous turns. In order to better memorize the history information and make decisions based on all turns, a Gated Recurrent Unit ({\bfseries GRU}) network with a pooling layer is employed~\cite{chung2014empirical}. Each time the composed feature is generated, it is then fed into the network following chronological order. We take the output of the pooling layer as the final representation of all multiturn reference image \& text information. The forward formulas of the multiturn analyzer are:
\begin{equation}
h_i^c = GRU(h_{i-1}^c,x_i^c)
\end{equation}
\begin{equation}
x^{ref}_{CR} = W_3h_i^c+b_3
\end{equation}
where $h_i^c$ is the hidden multimodal representation at time step $i$, $x^{ref}_{CR}$ is the final representation of all multiturn information, and $W_3$, $b_3$ are the parameters to optimize.
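A sketch of the analyzer is shown below; whether the pooling precedes or follows the linear map $W_3$ is our reading of the two equations above.
\begin{verbatim}
import torch.nn as nn

class MultiturnAnalyzer(nn.Module):
    def __init__(self, d_p=512):
        super().__init__()
        self.gru = nn.GRU(d_p, d_p, batch_first=True)
        self.fc = nn.Linear(d_p, d_p)        # plays the role of W_3, b_3

    def forward(self, x_c):                  # x_c: (B, n_turns, d_p)
        h, _ = self.gru(x_c)                 # hidden states of all turns
        return self.fc(h.mean(dim=1))        # pooled session feature
\end{verbatim}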
\subsubsection{Candidate Generator}
Given the final representation of all multiturn reference information $x^{ref}_{CR}$, we aim to search the target image from a set of candidate fashion images. Precisely, we calculate the cosine similarity between the reference embedding $x^{ref}_{CR}\in \mathbb{R}^{d_p}$ and the candidate image embedding $x^{cand} \in \mathbb{R}^{d_p}$, which can be obtained from the image encoder described in Section \ref{image encoder}. The cosine similarity then serves as the partial matching score, denoted as $S_1(ref,cand)$, derived from the composition representation module.
\subsection{Comparative Analysis Module}
\subsubsection{Differential Representation}
Aiming at deriving the differential representation between the encoded candidate and reference image information, this module first takes the difference between the reference image and the candidate image as input. Let $x_i^{a}$ denote the differential representation; it is formed by concatenating three vectors:
\begin{equation}
x_i^{diff}= FC([x^{cand}\odot{x_i^p};x^{cand}])-FC([x^{cand}\odot{x_i^p};{x_i^p}])
\end{equation}
\begin{equation}
x^{a}_i = FC([x^{cand};x_i^p;x_i^{diff}])
\end{equation}
where $x_i^p$ is the reference image representation at $i$-th turn, $x^{cand}$ is the candidate image representation, $FC$ is a fully connected layer.
\subsubsection{Text Matching}
In order to detect the relationship between the differential representation mentioned above and the feedback text, at each turn, we match the differential representation $x_i^a$ with the feedback text representation $x_i^t$ via element-wise multiplication. The matched vectors of each turn are later fed into a GRU network:
\begin{equation}
x_i^m = x_i^{a}*{x}_i^t
\end{equation}
\begin{equation}
h_i^m = GRU(h_{i-1}^m,x_i^m)
\end{equation}
where $x_i^t$ denotes the text encoding at $i$-th turn, $h_i^m$ is the $i$-th hidden layer in the GRU network.
After that, an average pooling layer is employed to aggregate the information of all GRU cells. Finally, the output of the pooling layer is fed into a fully connected layer with ReLU activation, and the output serves as the partial matching score $S_2(ref,cand)$:
\begin{equation}
S_2(ref,cand) = g([\bar{h}_i^m]_{i=1}^{n})
\end{equation}
where $[\bar{h}_i^m]_{i=1}^{n}$ is the average pooling output of all hidden layers, and $g(\cdot)$ denotes a fully connected layer with ReLU activation.
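A sketch of the whole module follows; we assume the text encoding has been projected to the image dimension $d$ so that the element-wise match is well defined.
\begin{verbatim}
import torch
import torch.nn as nn

class ComparativeAnalysis(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.fc_pair = nn.Linear(2 * d, d)   # shared FC of the diff terms
        self.fc_diff = nn.Linear(3 * d, d)   # FC forming x_i^a
        self.gru = nn.GRU(d, d, batch_first=True)
        self.score = nn.Sequential(nn.Linear(d, 1), nn.ReLU())

    def forward(self, x_p, x_t, x_cand):
        # x_p, x_t: (B, n_turns, d); x_cand: (B, d)
        c = x_cand.unsqueeze(1).expand_as(x_p)
        diff = (self.fc_pair(torch.cat([c * x_p, c], -1))
                - self.fc_pair(torch.cat([c * x_p, x_p], -1)))
        a = self.fc_diff(torch.cat([c, x_p, diff], -1))  # x_i^a
        h, _ = self.gru(a * x_t)             # element-wise match, GRU
        return self.score(h.mean(dim=1)).squeeze(-1)     # S_2
\end{verbatim}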
\subsection{Fashion Attribute Module}
\subsubsection{Image Fashion Attribute Embedding}
Recall that each candidate image has five fashion attributes, namely, texture, fabric, shape, part, and style. Each fashion attribute consists of some words describing the corresponding aspect of it.
To encode the candidate images, we also use GloVe (same as in Section \ref{Preprocessing}) to embed each fashion attribute into a $d$ dimension vector. For a candidate image, the embedding is denoted as $(a_{txt},a_{fab},a_{shp},a_{prt},a_{stl})$. Each part of the embedding represents the embedding of one fashion attribute. It is worth noting that some fashion attributes may contain more than one word (e.g. part: long sleeve, button front). We first acquire the embedding of each word, then calculate the average of them as the final fashion attribute embedding.
\subsubsection{Feedback Encoder}
The feedback text is initialized using the GloVe embedding matrix. The embedding of the feedback text at the $i$-th turn is denoted as $e_i^t$, as mentioned in Section \ref{text encoder}. After that, each feedback is converted into a $d_t$ dimension vector.
Similar as mentioned in Section \ref{Multiturn Analyzer}, the feedback text information of each turn is recorded chronologically. A bidirectional GRU network is leveraged and takes the embedding of feedback at each turn as input, which can be denoted as:
\begin{equation}
h_i^{attr}= BiGRU(h_{i-1}^{attr},h_{i+1}^{attr},e_i^{t})
\end{equation}
where $h_{i-1}^{attr}$ is the forward hidden state carrying the history of the previous turns, $h_{i+1}^{attr}$ is the backward hidden state carrying information from the subsequent turns, and $e_i^{t}$ is the text representation at the $i$-th turn.
\begin{figure}
\centering
\includegraphics[width=\linewidth,height=68mm]{CrossAttention.png}
\caption{The mutual attention between the feedback text and the candidate image.}
\label{fig:my_label}
\end{figure}
\subsubsection{Candidate-to-feedback Attention}
The key element of the fashion attribute module is the mutual attention mechanism. It is composed of mutual attention from the candidate image to the feedback text and the other way round.
The candidate-to-feedback attention mainly calculates the attentive embedding of the feedback text according to each candidate image attribute. When we wish to determine whether a candidate image is suitable or not, we first take a look at one of its fashion attributes, fabric for example, then we reread the multiturn feedback to find out which part of the feedback dialog context should be focused on. Based on this strategy, each candidate fashion attribute should focus on different parts of the feedback dialog texts. Weights are introduced measuring the relevance between each image fashion attribute and the feedback text at each time step. The following formulas are designed for calculating the weights:
\begin{equation}
\alpha_{mj} = \frac{exp(w_{mj})}{\sum_{k=1}^{n}exp(w_{mk})}
\end{equation}
\begin{equation}
w_{mj} = f(W_4^T[a_m;h_j^{attr}]+b_4)
\end{equation}
where $w_{mj}$ denotes the attention weight from the $j$-th turn feedback text hidden layer to the $m$-th fashion attribute in the candidate image. Note that $a_m\in
\{a_{txt},a_{fab},a_{shp},a_{prt},a_{stl}\}$. $W_4,b_4$ are the parameters to optimize, $f(\cdot)$ is an non-linear activation function.
According to each image fashion attribute $a_m$, the attention weights are then employed to calculate the weighted sum of the hidden representation in the feedback aspect. The detailed formula is shown as follows:
\begin{equation}
x_m^{attr}=\sum_{j=1}^n\alpha_{mj}h_j^{attr}
\end{equation}
where $x_m^{attr}$ is the attentive feedback representation according to the attribute $a_m$.
Having the feedback representation $x_m^{attr}$, we can calculate the similarity between each attentive feedback text representation and candidate attribute representation $a_m$:
\begin{equation}
S_{attr}(x_m^{attr},a_m) = cosine(x_m^{attr},a_m)
\end{equation}
\subsubsection{Feedback-to-candidate Attention}
Intuitively, for the same feedback information, different weights should be assigned to different image fashion attributes. That is, for the five similarity scores mentioned above, we compute the weight of each one to form the partial similarity score. This leads to the idea of feedback-to-candidate attention, which means that the weights are trained between the feedback representation mentioned in the last section and the candidate attribute representation:
\begin{equation}
S_3(ref,cand) = \sum\beta_{a_m}S_{attr}(x_m^{attr},a_m)
\end{equation}
\begin{equation}
\beta_{a_m} = \frac{exp(w_{a_m})}{\sum_{a_k\in\{a_{txt},a_{fab},a_{shp},a_{prt},a_{stl}\}}exp(w_{a_k})}
\end{equation}
\begin{equation}
w_{a_m} = f(W_5^T[\bar{h}^{attr};a_m]+b_5)
\end{equation}
where $\bar{h}^{attr}$ is the average pooling result of the GRU multiturn analyzer hidden states, and $\beta_{a_m}$ is the correlation weight of the feedback-to-candidate aspect, indicating which candidate attribute should be more focused on.
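A sketch of the two attention stages together is given below; we assume the BiGRU state size equals the attribute embedding dimension $d$ so that the cosine similarity is well defined, and $\tanh$ stands in for the unspecified non-linearity $f$.
\begin{verbatim}
import torch
import torch.nn as nn

class MutualAttention(nn.Module):
    def __init__(self, d=300):
        super().__init__()
        self.w4 = nn.Linear(2 * d, 1)    # candidate-to-feedback scores
        self.w5 = nn.Linear(2 * d, 1)    # feedback-to-candidate scores

    def forward(self, h, attrs):
        # h: (B, n_turns, d) BiGRU states; attrs: (B, 5, d)
        n, m = h.size(1), attrs.size(1)
        pair = torch.cat([attrs.unsqueeze(2).expand(-1, -1, n, -1),
                          h.unsqueeze(1).expand(-1, m, -1, -1)], -1)
        alpha = torch.softmax(torch.tanh(self.w4(pair)).squeeze(-1), -1)
        x_attr = torch.einsum('bmn,bnd->bmd', alpha, h)  # stage 1
        s_attr = torch.cosine_similarity(x_attr, attrs, dim=-1)
        h_bar = h.mean(dim=1, keepdim=True).expand_as(attrs)
        beta = torch.softmax(torch.tanh(
            self.w5(torch.cat([h_bar, attrs], -1))).squeeze(-1), -1)
        return (beta * s_attr).sum(dim=-1)               # S_3, stage 2
\end{verbatim}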
\subsection{Partial Matching Score Combination}
Given the three partial matching scores $S_1(ref,cand)$, $S_2(ref,cand)$, $S_3(ref,cand)$ mentioned above, the final matching score is computed as the weighted sum of them:
\begin{equation}
S(ref,cand) = w_1S_1(ref,cand)+w_2S_2(ref,cand)+w_3S_3(ref,cand)
\end{equation}
where $w_1$, $w_2$, $w_3$ are the parameters that need to be optimized.
\subsection{Training}
During training, we use the max-margin triple loss as the loss function:
\begin{equation}
L = \max(0, margin - S(ref,trg) + S(ref,trg'))
\end{equation}
where $margin$ is a hyperparameter specifying the required gap between positive and negative examples, $trg$ is the true positive target image, and $trg'$ is a false negative target image.
It is worth noting that instead of explicitly building a set of negative samples, we use the batch-hard triplet loss: for each anchor, we take the hardest positive and the hardest negative within a batch of embeddings to form a triplet.
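Under this in-batch sampling each anchor has a single positive on the diagonal of the score matrix, so the batch-hard variant can be sketched as follows (the margin value is an assumption):
\begin{verbatim}
import torch

def batch_hard_triplet_loss(scores, margin=0.2):
    # scores: (B, B) similarity matrix; scores[i, i] = S(ref_i, trg_i).
    pos = scores.diag()
    mask = torch.eye(scores.size(0), device=scores.device).bool()
    hardest_neg = scores.masked_fill(mask, float('-inf')).max(dim=1).values
    return torch.relu(margin - pos + hardest_neg).mean()
\end{verbatim}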
We train three modules separately and record the best score matrix of each module. Then, an iterative process is utilized to ensemble the modules. In this process, the previous best score matrix serves as the new candidate in the next iteration. We use {\itshape hyperopt}\footnote{https://github.com/hyperopt/hyperopt} Bayesian optimization to handle the process, which aims to find the optimal weights between partial matching scores and maximize the overall score.
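The weight search over the three partial score matrices could be sketched as follows, where recall\_at\_k is a hypothetical helper evaluating a validation split, and the search space and budget are assumptions:
\begin{verbatim}
from hyperopt import fmin, hp, tpe

# S1, S2, S3: validation score matrices from the three trained modules.
space = {'w1': hp.uniform('w1', 0, 1),
         'w2': hp.uniform('w2', 0, 1),
         'w3': hp.uniform('w3', 0, 1)}

def objective(w):
    S = w['w1'] * S1 + w['w2'] * S2 + w['w3'] * S3
    return -recall_at_k(S, targets, k=5)   # hyperopt minimizes

best = fmin(objective, space, algo=tpe.suggest, max_evals=200)
\end{verbatim}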
\section{Experiment}
\subsection{Dataset}
Since there is no existing suitable fashion product dataset which can appropriately capture user modeling in multiturn settings, we derive a large-scale multiturn fashion dataset based on the existing FashionIQ dataset which originally only supports single-turn feedback~\cite{guo2019fashion}. Every fashion product image in the dataset has five fashion attributes, namely, texture, fabric, shape, part, and style. Each fashion attribute includes some words describing the corresponding aspect of the image.
The original FashionIQ dataset contains single-turn sessions, and each session is represented by a triplet having the form of (reference image, feedback text, target image). Many reference and target images in the FashionIQ dataset are highly relevant and have duplications. To derive multiturn sessions, we concatenate single-turn sessions in FashionIQ by matching the target image of one triplet with the reference image of another triplet in an automatic manner. For example, the original triplets (img1, txt1, img2), (img2, txt2, img3), (img3, txt3, img4) in the FashionIQ dataset can be concatenated into a session having the form of (img1, txt1, img2, txt2, img3, txt3, img4). This process can derive a large number of multiturn sessions. A session with $n$ turns implies that the session has $n-1$ feedback texts and $n$ images, which are composed of $n-1$ reference images and 1 target image.
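The automatic concatenation step can be sketched as follows (a minimal chaining procedure of our own; the actual pipeline additionally applies the manual filtering described next):
\begin{verbatim}
from collections import defaultdict

def chain_sessions(triplets, max_imgs=5):
    # triplets: list of (ref_img, feedback_text, trg_img) tuples.
    by_ref = defaultdict(list)
    for ref, txt, trg in triplets:
        by_ref[ref].append((txt, trg))
    sessions = []

    def extend(session, last_img, n_imgs):
        if n_imgs >= 3:              # keep sessions of 3..max_imgs images
            sessions.append(list(session))
        if n_imgs == max_imgs:
            return
        for txt, trg in by_ref[last_img]:
            extend(session + [txt, trg], trg, n_imgs + 1)

    for ref, txt, trg in triplets:
        extend([ref, txt, trg], trg, 2)
    return sessions
\end{verbatim}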
However, not all the sessions obtained are reasonable. Therefore, we manually select and filter out the problematic sessions, which belong to several kinds of cases. The first kind refers to duplicates. For example, the current reference image is exactly the same as the next image. The second kind refers to inconsistency. For example, associated with a particular reference image, the feedback is 'white color'. Then a white colored reference image is retrieved. The user continues to write the feedback 'shorter sleeve'. The next reference image retrieved has shorter sleeves than the last one, but is not white colored, which is obviously unreasonable. The third kind refers to conflicts. For example, associated with the first image, the feedback is 'long sleeves'. After the second image is retrieved, the user gives another feedback 'sleeveless'. The last kind refers to circular sessions. For example, the first reference image is followed by the feedback 'longer sleeves'. After the second reference image is retrieved, the user gives the feedback 'shorter sleeves', and the system retrieves a reference image identical to the first one.
Moreover, although a fashion attribute dataset is contained in the FashionIQ dataset, its size is limited and it cannot cover every target image. We expand the fashion attribute dataset to make sure every target image is associated with some fashion attributes, in the same way as in ~\cite{guo2019fashion}. Specifically, image fashion attributes were extracted from the product titles, the product summaries, and detailed product descriptions on product websites.
\setlength{\belowcaptionskip}{+0.3cm}
\begin{table}
\begin{tabular}{cp{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\toprule
Category&Sessions with 3 turns&Sessions with 4 turns&Sessions with 5 turns&Total Sessions&Total Images\\
\midrule
Dress &3264&763 &311&4338& 5115\\
Shirt &2821 &687 &167&3675&4130\\
Toptee &2565 &766 &162&3493&4410\\
\midrule
Total & 8650 & 2216 & 640&11506&13655\\
\bottomrule
\end{tabular}
\caption{Detailed Information of Our Dataset}
\label{tab:data}
\end{table}
As a result, we gather 11506 multiturn sessions. Table \ref{tab:data} shows the detailed information of our derived dataset. Furthermore, the multiturn sessions are grouped into three categories, namely, dress, toptee, and shirt. Each category includes sessions ranging from 3 to 5 turns. The average length of the feedback text in each turn is 5.02 words.
\begin{table*}
\begin{tabular}{cccc|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{c}{Overall}&\multicolumn{3}{c}{Dress} & \multicolumn{3}{c}{Shirt}&\multicolumn{3}{c}{Toptee}\\
\cline{2-4}\cline{5-7} \cline{8-10}\cline{11-13}
&R@5& R@8 & MRR & R@5 & R@8 & MRR & R@5&R@8&MRR&R@5&R@8&MRR \\
\midrule
Text-only &7.8&12.3&6.9 &7.4&10.8&5.4 &7.5&12.5&6.2&8.2&13.4&6.3 \\
Image-only &10.7&14.5&6.9&10.2&15.0&6.3&9.3&15.7&5.5&9.1&13.9&7.6\\
Attribute-only &9.8&12.8&7.7&11.1&13.9&8.2&9.7&11.5&6.8&8.9&11.8&6.3\\
\midrule
TIRG ~\cite{vo2019composing}&12.4 &15.9&11.5&12.5&14.2&11.6&13.8&16.7&12.9&12.0&15.6&10.9\\
RITC ~\cite{shin2020fashioniq}&13.2&18.7&12.4&11.8&18.8&10.2&14.0&20.6&12.2&13.2&18.9&11.7 \\
ComposeAE ~\cite{anwaar2021compositional}&19.2&25.7&13.4&18.5&26.4&14.4&19.8&25.2&14.5&19.2&26.6&14.8 \\
CCNet ~\cite{yu2020curlingnet}&13.5&15.5&12.2&12.7&17.2&10.5&15.2&18.5&13.3&13.6&16.2&12.1\\
AUS ~\cite{guo2019fashion} &11.3&16.2&9.2&13.4&15.3&10.5&14.7&16.6&11.3&12.4&13.3&10.6\\
Dialog Manager ~\cite{guo2018dialog}& 13.1&15.2&11.6&12.7&16.7&10.8&13.9&17.7&11.6&11.6&15.8&10.3\\
\midrule
Ours &30.3&33.4&26.5&29.8&33.5&25.6&30.5&34.1&27.4&29.4&33.6&26.1\\
w/o CA &25.5&29.2&21.3&24.1&28.4&18.5&25.6&28.6&22.1&24.5&25.2&21.7\\
w/o FA &21.3&28.8&17.6&20.5&29.4&15.0&22.9&29.0&18.3&22.0&28.5&17.9\\
\bottomrule
\end{tabular}
\caption{Experimental results on Ours and some comparison methods. The metrics are in percentage. CA denotes the comparative analysis module. FA denotes the fashion attribute module.}
\label{tab:experiments}
\end{table*}
\subsection{Experimental Setting}
We prepare 5 runs for the experiments. For each run, we randomly choose 70\% of the multiturn sessions from the dataset as the training set and 30\% as the testing set. We conduct our experiments using PyTorch. We use ResNet-152 as our image encoder without pretraining it on any external data. As for the text encoder, we use 3 stacks of self-attention blocks. By default, training is run for 150 epochs with an initial learning rate of 0.0001. We will release the code and the dataset to the public.
We use similar evaluation metrics as in previous work~\cite{guo2019fashion}. Each model outputs the K best-matched products with the highest output scores for a given session. Since each session in our dataset corresponds to one true positive image, namely the target image, we calculate the recall rate of the target image among the K selected ones as the main evaluation metric, denoted recall at K (R@K): the proportion of sessions whose target image is found in the top-K retrieval results. In our experiments, K is set to 5 and 8. We also use MRR (mean reciprocal rank), the average of the multiplicative inverse of the rank of the target image over all retrieval results, as another evaluation metric.
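Both metrics can be computed in a few lines; the sketch below is self-contained and assumes \texttt{rankings} holds, for each test session, the 1-based rank of the target image in the retrieved list.
\begin{verbatim}
# Minimal sketch of the evaluation metrics described above.
def recall_at_k(rankings, k):
    return sum(r <= k for r in rankings) / len(rankings)

def mean_reciprocal_rank(rankings):
    return sum(1.0 / r for r in rankings) / len(rankings)

rankings = [1, 3, 9, 2, 6]               # toy example
print(recall_at_k(rankings, 5))          # 0.6
print(mean_reciprocal_rank(rankings))    # ~0.42
\end{verbatim}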
\subsection{Comparison Models}
In order to evaluate the effectiveness of our proposed model, we compare it with several baselines and existing state-of-the-art models. Since most existing models are originally designed for the single-turn setting, for a fair comparison we extend them by adding a recurrent network that aggregates the encoded results with the dialog context from previous turns, so that they can handle the multiturn setting.
\paragraph{Text-Only} This model ignores the reference images and retrieves the target image according to the user feedback only. The texts are encoded by an LSTM-GRU network and the images are encoded by ResNet-152.
\paragraph{Image-Only} The image-only model only utilizes the reference images in the session. It exploits the visual similarity between candidate and reference images.
\paragraph{Attribute-Only} The attribute-only model treats every image as a combination of fashion attributes. Each fashion attribute is encoded by LSTM-GRU and is concatenated to form the image representation.
\paragraph{TIRG} A recent method based on concatenation of visual and textual features with an additional gating connection to pass the image features directly to the learned joint feature space ~\cite{vo2019composing}.
\paragraph{RITC} It was first proposed in~\cite{shin2020fashioniq}. The residual text and image composer learns the residual between the features of the target and reference images.
\paragraph{ComposeAE} A state-of-the-art model proposed by~\cite{anwaar2021compositional}. It is an autoencoder based model which aims to learn the composition of the image and text query for retrieving images.
\paragraph{Attribute-aware User Simulator (AUS)} It is a model where attribute features are incorporated to augment image representation~\cite{guo2019fashion}.
\paragraph{Correction Network (CCNet)} It is a model that finds the difference between reference and target images and checks its validity with a relative caption~\cite{yu2020curlingnet}.
\paragraph{Dialog Manager} It was first proposed in~\cite{guo2018dialog} as a framework in which a user interacts with a retrieval agent via iterative dialog turns. The texts are encoded by a simple LSTM network, and the images are encoded by ResNet. It composes the image and text features by summing them up and feeds them into a memory network.
\subsection{Experiment Results}
Table \ref{tab:experiments} shows the experimental results of our model and all comparison models. Our model, denoted as Ours, achieves the best R@5, R@8 and MRR scores on the whole dataset as well as on every category.
For most of the models, the R@5, R@8 and MRR rates on the shirt category are higher than on the other two categories. One reason is that the appearance of different shirts is less diverse, which helps the models better capture the differences between products. Another reason is that the fashion attribute information of shirts is richer than that of the other two categories. We also conduct an ablation study. Without the comparative analysis module, the performance of our model decreases by 4.8\% in R@5, 4.2\% in R@8 and 5.2\% in MRR on average. This result shows that the difference between candidate and reference images can be exploited and is reflected in the feedback text of each turn. Taking the fashion attribute module into account, the performance increases by 9.0\% in R@5, 4.6\% in R@8 and 8.9\% in MRR on average, which verifies the effectiveness of fashion attributes for improving retrieval results. Compared with the image-only and text-only models, the models utilizing a multimodal feature composer (Dialog Manager, RITC, TIRG, ComposeAE, CCNet, AUS) perform better, which verifies the necessity of composing the image and text at each turn into one representation. Compared with the models without attention (Text-only, Image-only, Attribute-only, TIRG, ComposeAE, CCNet, AUS, Dialog Manager), attentive models outperform them on the whole, which demonstrates the power of the attention mechanism in multiturn image retrieval.
\begin{figure}[b]
\centering
\subfigure[Ours]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.25]{ex1.png}
\label{sub:ours}
\end{minipage}%
}%
\subfigure[ComposeAE]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.25]{ex2.png}
\label{sub:com}
\end{minipage}%
}%
\subfigure[Attribute-only]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.25]{ex4.png}
\label{sub:att}
\end{minipage}
}%
\subfigure[Text-only]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.25]{ex3.png}
\label{sub:txt}
\end{minipage}
}%
\centering
\caption{The average R@5 performance concerning different lengths of turns on different models.}
\label{all}
\end{figure}
\subsection{Further Analysis}
We study how our model performs under different turn lengths, and conduct a similar investigation for three baselines. Recall that our dataset contains sessions ranging from 3 to 5 turns; Figure \ref{all} shows the analysis of the R@5 rate with respect to different turn lengths.
For the attribute-only model, according to Figure \ref{sub:att}, the longer the session is, the better the performance, because the fashion attribute information is insufficient in short sessions. For the text-only model, according to Figure \ref{sub:txt}, the performance on 5-turn sessions is worse than on the other two types. On one hand, the number of 5-turn sessions is much smaller than that of the other two types; on the other hand, without image information for reference, it is rather hard for the model to understand the target image the user wishes to retrieve, especially in long conversations.
\begin{figure*}[t]
\centering
\includegraphics[width=170mm,height=45mm]{Case5.png}
\caption{Some case studies.}
\label{fig:case study}
\end{figure*}
According to Figure \ref{sub:com}, on all categories, the 4-turn sessions have the best performance in the ComposeAE model, while the 3-turn sessions have the worst. In our model, the performances on 4- and 5-turn sessions are similar. The 4- and 5-turn sessions perform better in the two models because they carry richer information: through interacting with the system, more hints are provided in the user feedback, which implies that if the session is too short, the information provided in each turn may not be rich enough to accurately retrieve the target image. However, for the ComposeAE model, if the session is too long, it may give rise to the problem of information loss, as information from earlier turns is easily forgotten. This problem is tackled by the fashion attribute module in our model, where the mutual attention between the candidate image and the feedback captures the information of each turn according to the image attributes, so the information buried in long sessions can be fully utilized.
\subsection{Case Study} We collect some cases to compare our retrieval results with those of a comparison model, namely Dialog Manager, as shown in Figure \ref{fig:case study}. Five products are retrieved by our model and by Dialog Manager, respectively. It can be observed that the images retrieved by our model satisfy the user's need, which shows the effectiveness of the proposed model. Specifically, the desired target images in the first and third cases are ranked first among all candidate images, while they do not appear in the top-5 list of the comparison model. In the second case, the target image is ranked third by our model but ninth by the comparison model.
Interestingly, general requirements such as 'blue colored' and 'lighter' can be recognized by both models, whereas detailed requirements such as 'has front bow', 'numbered slots' and 'black spots' are challenging. One reason for the efficacy of our model is its utilization of the fashion attribute module: clues about details such as 'bow', 'number print' and 'dots' can often be found in the image fashion attributes, thus facilitating the interaction between the target image and the user feedback text. Another reason is the adoption of the self-attention feedback encoder. By making the feedback text attend to itself, richer information of different granularities is learned.
Compared with Dialog Manager, our model is also better at capturing information from previous turns. In the first case, the feedback 'is shorter' seems to be forgotten in the comparison model's results: the retrieved images are not shorter than the first target image. In the third case, 'sleeveless' is a very important clue for retrieving target images, yet not all images in Dialog Manager's top-5 list are sleeveless. One reason our model performs better is that its fashion attribute module, containing mutual attention, connects the information from previous turns to the present task. Furthermore, owing to the comparative analysis module, the retrieval results of our model are more similar to the reference images. When many candidate images fulfill the feedback requirements, as in the second case, choosing the one that is more similar to the reference image produces higher-quality results.
\section{Conclusions} In this paper, we investigate multiturn conversational fashion image retrieval with natural language feedback and propose a new framework that can effectively handle the task. Our model searches for target images based on the aggregation of the encoded reference image and text information with the conversation history via a novel neural framework, and utilizes fashion attributes to improve the performance. We also derive a dataset suitable for multiturn conversational fashion image retrieval. Empirical results demonstrate the effectiveness of our model: it outperforms all the baselines and state-of-the-art models. In future work, we wish to extend the approach to other fashion items with more holistic fashion attributes and explore its usability in real-world fashion domain applications.
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}
Penetration Testing (PT) is the process of simulating the attack behavior of malicious hackers, launching authorized attacks on computer systems and networks to discover security vulnerabilities that can be exploited~\cite{2}\cite{1}. Numerous PT methods have been proposed for assessing the security of network systems, such as host operating systems \cite{os}, database systems \cite{ds} and network equipment systems \cite{firewall1}\cite{firewall2}. According to the testing techniques used, PT methods can be roughly categorized as manual PT, automatic PT and intelligent PT.
Manual PT depends on security audit experts applying software exploits, such as running code that triggers a system vulnerability and then taking further actions such as executing payloads after a successful exploit~\cite{aduit}. Generally speaking, experts take full advantage of penetration tools (e.g., Nessus \cite{nessus}, Nexpose \cite{nexpose}) combined with manual operations to accomplish the PT process. However, these tools rely on the professional knowledge of security experts to make decisions, so they usually cannot update attack strategies accordingly when the environment changes. Additionally, as the complexity of systems grows, the task of manually assessing security becomes a considerable labor- and time-consuming burden. Consequently, automatic PT methods have emerged to alleviate this problem to a certain extent.
Existing approaches to automatic PT include those mapping the results of vulnerability scanners to the corresponding penetration tools and those describing the PT process as an attack graph to solve the penetration path planning problem. The former integrates vulnerability scanning, penetration attacking, payload choosing and other modules into a framework to realize PT automatically, e.g., Metasploit \cite{3}\cite{metasploit} and Core Impact \cite{coreimpact}. However, some critical penetration steps, such as selecting proper exploit modules and payloads, still require manual participation. In other words, this approach still relies on attack rules carefully set in advance by programmers and security experts \cite{4}; thus, it cannot learn to deal with dynamic and uncertain penetration environments or develop new attack strategies. The latter approach is mainly based on attack graphs \cite{graph}, which show the attack sequences an attacker may launch and their effects. In the graph, the attacker can use the relationship between already-grasped vulnerabilities and exploits to choose an attack path. Nevertheless, since attack graphs demand complete system knowledge, they are difficult to apply to realistic dynamic penetration environments. To address this problem, Artificial Intelligence (AI) based PT methods have come into being.
According to the AI techniques adopted, AI-based intelligent PT methods can be mainly grouped into PT based on traditional Reinforcement Learning (RL) and PT based on Deep Reinforcement Learning (DRL) \cite{DRL}. In PT based on traditional RL, such as modeling the attack process as a Partially Observable Markov Decision Process (POMDP) \cite{POMDP2}\cite{POMDP1}, the attacker gradually observes and models the computer configuration as the attack progresses. This works well against a single host in practice, but due to the POMDP's high computational complexity, it is not scalable to large networks \cite{POMDP3}. The Q-learning algorithm has also been applied to the Capture The Flag (CTF) competition \cite{CTF} and to penetration path optimization \cite{pathfind}. However, these methods are also challenged in real-world scenarios since they are hard to scale to large networks; moreover, the RL agent needs to comprehend the complete network topology in advance, which is a strong assumption in real environments. Intelligent PT based on DRL uses a neural network as the function approximator \cite{DRL} to conduct PT. It easily encounters a high-dimensional discrete action space when applied to actual scenarios and complex network systems, which makes the DRL model tough to train and converge in the PT process. Taking the value-based Deep Q-Network (DQN) algorithm \cite{DQN} as an example, the limitation of using DQN for automated PT lies in the complexity of the action space across different penetration scenarios. DQN evaluates the values of all output actions and selects the largest as the best option over the entire action space; if the action space is large, multiple actions may end up with the same value \cite{growingactionspace}. PT is similar: the state and action spaces expand exponentially with the number of hosts in different subnets. Consequently, applying existing DRL algorithms to automate PT is difficult and unstable. Even in a relatively small network scene, they still face the challenge of state and action spaces exploding to hundreds or thousands of dimensions \cite{explode}.
In summary, the existing PT methods are still challenged in three aspects: (1) Security experts play a significant role in the PT process, but over-reliance on their manual operations and decisions increases the labor cost of PT;
(2) When using a DRL algorithm to automate PT, the large state space and high-dimensional discrete action space lead to convergence difficulties during PT training; (3) Most intelligent PT methods are verified in virtual network environments instead of real penetration scenarios.
To address these challenges, we introduce, for the first time, automatically-collected expert knowledge for agents to perform autonomous PT, proposing a general intelligent PT framework that incorporates Generative Adversarial Imitation Learning, denoted GAIL-PT. We combine GAIL with RL / DRL to automate the PT process, modeling the penetration attacker as an agent. Specifically, since imitation learning has achieved great success in speeding up the convergence of reinforcement learning with a previously learned strategy, we adopt GAIL to exploit accumulated expert knowledge for RL / DRL based PT. The agent then continuously interacts with the penetration environment under the guidance of the expert knowledge, and the accumulated penetration experience assists the agent in formulating new attack strategies with a higher penetration success rate, adaptive to dynamic environments. Finally, in both small-scale and large-scale network scenes with high-dimensional discrete action spaces, GAIL-PT shows state-of-the-art (SOTA) penetration performance compared with baselines and can also be verified in practical scenarios.
The main contributions of this paper are as follows:
\begin{itemize}
\item Aiming at the problem that most PT methods over-rely on security experts' manual operations and decisions, we construct an expert knowledge base that stores state-action pairs as experiences in different penetration scenarios, so that PT experts can take part in decision-making at a lower cost.
\item To solve the convergence difficulty of PT training and improve penetration performance, we apply the GAIL network to PT based on RL or DRL for the first time. Guided by the expert knowledge, the agent is trained through the GAIL network to perform actions infinitely close to those in the expert knowledge base, making the PT training process more stable and efficient.
\item We conduct extensive experiments not only on the real target Metasploitable2 and small-scale networks with or without a honeypot, but also on large-scale networks. The experimental results indicate that GAIL-PT achieves SOTA penetration performance in actual or simulated network scenes. Additional experiments also verify that the proposed GAIL-PT is a general framework suitable for different reinforcement learning methods, i.e., DRL and RL.
\end{itemize}
The organization of this paper is as follows. Section~\ref{RWs} provides an overview of penetration testing, penetration testing based on reinforcement learning and imitation learning. Section~\ref{Preliminaries} gives an overview of the reinforcement learning models we apply. Section~\ref{Method} outlines our method and demonstrates its details. Section~\ref{Exp} describes the experimental setup, then evaluates and discusses the results. Finally, Section~\ref{Conclusion and future work} summarizes the main findings and limitations of our work.
\section{Related Work\label{RWs}}
In this section, we mainly present penetration testing, the current research status of penetration testing based on reinforcement learning and mainstream imitation learning methods.
\subsection{Penetration testing}
PT is a security exercise designed to evaluate a system's overall security by launching authorized simulated cyberattacks on the computer system. Manual PT relies on the use of penetration tools. For instance, Nessus \cite{nessus} and Nmap \cite{nmap} use a target list to perform network vulnerability testing one by one without checking for real-time environment changes; they neither make any decisions automatically nor work independently. Core Impact \cite{coreimpact}, one of the best performing exploitation tools, lets security experts establish an attack plan automatically. However, its limitation lies in generating attack plans for the target environment before conducting PT; consequently, Core Impact cannot adapt dynamically to the environment either. Another prevalent PT tool is Rapid7 Nexpose \cite{nexpose}, which uses the vulnerability development and penetration tool Metasploit \cite{metasploit} for vulnerability scanning and penetration. Although it achieves flexibility by distributing multiple network scan engines, Nexpose is unable to learn attack policies. In short, most well-known commercial penetration testing tools show many advantages but rely highly on the expertise and decision-making of PT experts. These tools suffer from poor adaptability: once the environment changes, the attack strategy cannot be updated accordingly.
In addition to manual PT methods, automated PT methods have been further explored. Numerous automated PT methods define PT as a path planning problem over constructed attack graphs \cite{BID6}. Cynthia and Swiler \cite{graph} first proposed an attack-graph-based method to analyze a system's vulnerability. The system divides the common attack database into atomic steps, distinct network configurations, topology details and attacker configuration files. The nodes and edges in the attack graph describe the different attack stages, and probabilities of success are allocated to the edges, so different attack graph algorithms can be used to identify the attack path with the highest probability of success. Building on this attack graph method, Kyle et al. \cite{BID8} created the NetSPA attack graph system; its innovation lies in allowing network defenders to assess security threats and select complementary strategies. NetSPA analyzes multiple targets in minutes using firewall rules and vulnerability scans, drastically reducing attack graph building time. Considering the reliability of attack paths, Xue Qiu et al. \cite{BID9} proposed an algorithm that automatically generates attack graphs, using the Common Vulnerability Scoring System (CVSS) \cite{cvss} for the first time to improve the trustworthiness of attack paths and eventually optimizing the network topology. As a result, once an attack graph is generated, the network topology is also optimized. However, most of these methods simply output corresponding action sequences to handle static environments. In other words, they can provide corresponding steps or guidelines for PT afterwards, but cannot perform PT dynamically and interactively in real PT scenarios. Table~\ref{tabel:PT drawbacks} summarizes the manual and automatic PT tools introduced above.
\begin{table}[!htp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\footnotesize
\centering
\caption{The details of manual and automatic PT tools}
\label{tabel:PT drawbacks}
\resizebox{1.00\linewidth}{!}
{\begin{tabular}{cccccc}
\toprule \hline
\textbf{Category} & \textbf{Tool} &\textbf{Benefits} &\textbf{Drawbacks} &\textbf{Scenes} &\textbf{Integration}\\
\hline
\multirow{3}{*}{Manual PT}
&{Nessus}\cite{nessus} & \multirow{3}*{{Vulnerability scanning precisely }} & \multirow{3}*{{Cannot learn attack strategies}} &\multirow{3}*{{Single server \& Network topology}} &\multirow{3}*{\tabincell{c}{Low}} \\
\cline{2-2}
&{Nmap}\cite{nmap}&&&& \\
\cline{2-2}
&{Nexpose}\cite{nexpose}&&& & \\
\hline
\multirow{3}*{Automatic PT}
& {Metasploit}\cite{metasploit} &{{Semi-automated PT process}}&{\tabincell{c}{Highly rely on the expertise}}&{\tabincell{c}{Single server}}&{\tabincell{c}{High}} \\
\cline{2-6}
&{Core impact}\cite{coreimpact} & {\tabincell{c}{ Generate attack plans automatically}} &{\tabincell{c}{Inability to update dynamically}} &{\tabincell{c}{Single server \& Network topology}} &{\tabincell{c}{High}} \\
\cline{2-6}
&{Attack graph}\cite{graph} &{\tabincell{c}{Attack sequence and effect are clear}} &{\tabincell{c}{Cannot interact dynamically }} &{\tabincell{c}{Network topology}} &{\tabincell{c}{Middle}} \\
\hline
\end{tabular}}
\end{table}
\subsection{Penetration testing based on reinforcement learning}
In recent years, RL \cite{RL} has achieved remarkable success in games, e.g., AlphaGo \cite{Alphago}, OpenAI Five \cite{openai5} and AlphaStar \cite{Alphastar}; in some games, RL-based agents have surpassed human players. Like a game, PT is a dynamic decision-making process based on observing the environment. The agent in RL-based PT is trained to observe and explore the dynamic network environment, learning optimal policies through trial and error \cite{trialanderror}. Intuitively, RL is thus an excellent candidate for an effective tool for penetration attacks in network security. Schwartz and Kurniawati \cite{explode}, Hu et al. \cite{autoPTDQN}, and Zennaro and Erdodi \cite{modelptql} all describe PT as a POMDP, using RL to model the penetration environment and learn penetration strategies. However, the penetration agent merely observes the compromised host and the subnet; it does not observe the connection information of the global network. In other words, the agent learns the optimal strategy to execute penetration attacks through continuous trial-and-error learning. These studies show that RL can be applied to relatively small action spaces or CTF challenges.
Considering the observation state, the PT environment differs from RL game environments because the action space of a network penetration environment is discrete and high-dimensional, and extending DRL to networks with larger action spaces remains a long-term challenge \cite{explode}. Dulac-Arnold et al. \cite{largespace} proposed a framework called Wolpertinger, which uses the Actor-Critic (AC) framework to learn strategies in large action spaces with sub-linear complexity. It can be unstable in sparse-reward environments, since the gradient cannot be back-propagated to train the actor network. To address the issue of sparse rewards in large action spaces, Zhou et al. \cite{zhouimDQN} proposed an improved DQN algorithm, NDSPI-DQN, to optimize the penetration path. It effectively reduces the agent's action space by decoupling attack vectors, but the method is only suitable for simulation scenarios. Tran et al. \cite{HA-DRL} proposed automated PT based on deep hierarchical reinforcement learning, denoted HA-DRL, which uses an algebraic action decomposition approach to deal with the sizeable discrete action space of PT simulators. However, the action space grows exponentially with the complexity of the network, so the training costs are relatively high. In addition, Bland et al. \cite{bland2020}, Elderman et al. \cite{elderman2017} and He et al. \cite{he2016} applied multi-agent reinforcement learning to automated penetration testing in network security simulation scenarios and proposed learning algorithms for the optimal strategy. These methods achieve satisfying performance in simulated scenarios at the cost of high training complexity.
\subsection{Imitation learning}
Imitation learning \cite{im1} refers to learning automatically from examples provided by experts. The model is trained so that the trajectory distribution of the strategy it generates fits the trajectory distribution of the input. According to the way strategies are optimized, imitation learning methods are roughly divided into Behavioral Cloning (BC) \cite{im10}\cite{im9}, Inverse Reinforcement Learning (IRL) \cite{im11} and Generative Adversarial Imitation Learning (GAIL) \cite{GAIL}.
BC \cite{im9} directly learns the optimal action in the sampled states from the expert's empirical data without constructing a reward function, and predicts the corresponding optimal action in new states after learning. Nevertheless, BC is prone to cascading errors. Building on BC, IRL finds a reward function that explains given optimal strategies or trajectories. To improve the performance of IRL, Wang et al. \cite{im13} proposed Imitation Learning via Inverse Reinforcement Learning (IRL-IL), which solves for the optimal strategy through RL, indirectly restores the expert strategy, and plans over the long term. Consequently, it solves the cascading error problem of BC, leading to stronger generalization and robustness. However, the linear reward function of most IRL-IL methods imposes certain restrictions \cite{im15}: the representation ability of the reward function is insufficient, and the penalty term cannot assign larger reward values to the expert strategy as much as possible; moreover, the iterative solution of the RL sub-process requires substantial computing resources \cite{im16}. To make up for these limitations, Ho et al. \cite{GAIL} combined the Generative Adversarial Network (GAN) \cite{GAN} with IRL-IL and proposed GAIL. The expert data are compared with the data generated by the agent network, and the model is optimized adversarially so that the agent learns a strategy approaching the expert's, alleviating the training problems of IRL-IL. The algorithm achieves considerable performance improvements over current model-free methods when imitating complex behaviors in large-scale, high-dimensional environments \cite{maxentropy}.
\section{Preliminaries\label{Preliminaries}}
This section briefly introduces reinforcement learning and the Q-learning algorithm, as well as DRL algorithms based on different policies, i.e., Asynchronous Advantage Actor Critic (A3C) and Distributed Proximal Policy Optimization (DPPO).
\subsection{Reinforcement learning}
RL is a machine learning method based on sequential interaction between an agent and an environment. RL is usually modeled as a Markov process to solve sequential decision-making problems, represented by a four-tuple $<S, A, P, R>$ containing the state space $S$, action space $A$, state transition probability $P$ and reward function $R$. Through continuous interaction with the environment, the agent takes an action $a_{t}$ in the current state $s_{t}$ according to the learned policy $\pi$. Simultaneously, the environment feeds back to the agent a scalar reward $r\left(s_{t}, a_{t}\right)$ to evaluate the quality of the action, and then transfers to the next state according to the state transition probability $P\left(s_{t+1} \mid s_{t}, a_{t}\right)$.
The objective of RL is to maximize the cumulative reward over time $t$, denoted by the reward function:
\begin{equation}
R_{t}=\sum_{k=0}^{\infty} \gamma^{k} r_{t+k}
\label{equ:1}
\end{equation}
where $\gamma \in[0,1]$ is the discount factor used to weigh current rewards against future rewards: the smaller the value, the more the agent focuses on immediate rewards; conversely, the larger the value, the more the agent concentrates on long-term future returns.
In this paper, we consider the value-based RL algorithm Q-learning \cite{DRL}, and the policy-based multi-threaded DRL algorithms A3C \cite{ACA3C} and DPPO \cite{snoop33}.
\subsubsection{Q-learning}
Q-learning is a representative value-based model-free algorithm. $Q$ denotes the quality function $Q(s, a)$ of the policy $\pi$, i.e., the expected return when executing action $a$ in state $s$ at a given moment.
The algorithm associates all states $s$ with actions $a$ to form a $Q$ table storing the $Q$ value of each state-action pair $(s, a)$, and generates the optimal policy $\pi^{*}$ by selecting the action with the highest $Q$ value in each state. The update rule is:
\begin{equation}
Q(s,a)\leftarrow Q(s, a)+\alpha[r+\gamma \max _{a^{\prime}}Q(s^{\prime},a^{\prime})-Q(s,a)]
\label{equ:3}
\end{equation}
where $s$ and $a$ are the current state and action, respectively; $s^{\prime}$ is the next state resulting from action $a$, and $a^{\prime}$ is a possible action in state $s^{\prime}$; $r$ is the immediate reward, and $\alpha$ and $\gamma$ denote the learning rate and discount factor, respectively.
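For reference, a tabular implementation of this update rule takes only a few lines; the hyperparameter values below are illustrative, not those used in our experiments.
\begin{verbatim}
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def select_action(state, actions):
    if random.random() < epsilon:      # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
\end{verbatim}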
\subsubsection{Asynchronous advantage actor critic}
Asynchronous Advantage Actor Critic (A3C) \cite{ACA3C} is a policy-gradient-based DRL algorithm. A3C adopts a multi-thread mechanism and uses the Actor-Critic (AC) network structure in both the chief network and the thread networks. AC is divided into an Actor network $\pi^{{\theta}^\prime}(a|s)$ and a Critic network $V^{{\mu}^\prime}(s)$. The corresponding policy $\pi(a|s;{\theta}^\prime)$ is obtained by inputting the current state $s$; it indicates the probability of selecting action $a$ given state $s$ and the Actor network parameters $\theta^\prime$.
In A3C, an advantage function $A(s,t)$ is constructed from the output of the value function $V(s|{\mu}^\prime)$ to evaluate the adopted policy, where ${\mu}^\prime$ denotes the Critic network parameters. With $N$-step sampling, the advantage function is:
\begin{equation}
A(s,t)=r_t+\gamma r_{t+1} + \ldots + \gamma^{n-1} r_{t+n-1}+\gamma^n V(s^\prime)-V(s) = R(t)-V(s)
\label{equ:4}
\end{equation}
where $n$ is the number of steps and $\gamma$ is the discount factor. $R(t)$ represents the discounted return at the current time $t$, and $V(s)$ represents the value of the current state $s$. The gradients $d\theta$ and $d\mu$ are then accumulated from the Actor network (with respect to $\theta^\prime$) and from the squared advantage in the Critic network (with respect to $\mu^\prime$), respectively:
\begin{equation}
d\theta \leftarrow d\theta + \nabla_{{\theta}^\prime} \log
\pi(a|s;\theta^\prime)A(s|{{\mu} ^\prime})
\label{equ:5}
\end{equation}
\begin{equation}
d\mu \leftarrow d\mu + \partial A(s|{{\mu}^\prime})^2 / \partial {{\mu}^\prime}
\label{equ:6}
\end{equation}
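A compact way to realize the $N$-step advantage above is to accumulate the discounted return backwards from the bootstrap value; the sketch below omits network and threading details and uses illustrative names.
\begin{verbatim}
# Sketch of the N-step advantage computation used in A3C-style updates.
# rewards: r_t .. r_{t+n-1}; values: V(s_t) .. V(s_{t+n-1});
# bootstrap: V(s') for the state after the last step.
def n_step_advantages(rewards, values, bootstrap, gamma=0.99):
    advantages = []
    ret = bootstrap
    for r, v in zip(reversed(rewards), reversed(values)):
        ret = r + gamma * ret        # R(t) = r_t + gamma * R(t+1)
        advantages.append(ret - v)   # A(s,t) = R(t) - V(s)
    return list(reversed(advantages))
\end{verbatim}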
\subsubsection{Distributed proximal policy optimization}
Similar to A3C, the Distributed Proximal Policy Optimization (DPPO) \cite{snoop33} algorithm essentially trains the agent with the Proximal Policy Optimization (PPO) algorithm in a multi-thread, distributed manner. PPO is based on policy gradients and likewise uses a neural network with parameters $\theta$ to directly approximate the policy $\pi_\theta (a|s)$; it is also implemented in an AC framework. PPO uses an advantage estimation function with reduced variance to stabilize the policy gradient while learning the approximate policy $\pi_\theta (a|s)$. The most significant difference from the AC algorithm is that PPO is implemented with three networks: one Critic network and two Actor networks (the old and new Actor networks), whose primary function is to limit the update step size of the Actor network.
At each training step, PPO collects experience by executing the present policy for a set number of time steps, calculates the empirical returns and advantages, then uses batch learning to optimize a clipped surrogate objective $L^{clip}$ that restrains how much the updated strategy may differ from the older strategy.
\begin{equation}
L^{clip}(\theta)=\mathbb{E}_t [\min(r_t(\theta)\hat{A}_t,\,\mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_t)]
\label{equ:9}
\end{equation}
where $\theta$ is the neural network parameter approximating policy $\pi$, $\hat{A}_t$ is the estimate of the advantage function at time $t$, and $\epsilon$ is a hyperparameter restricting the range of $r_t(\theta)$, the probability ratio of the new policy to the old policy:
\begin{equation}
r_t(\theta)=\frac{\pi_\theta(a_t|s_t)}{\pi_{old}(a_t|s_t)}
\label{equ:10}
\end{equation}
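In code, the clipped surrogate objective reduces to a few tensor operations; the PyTorch sketch below is illustrative (names and shapes are ours, not from any particular implementation).
\begin{verbatim}
import torch

# Sketch of the PPO clipped surrogate loss; eps is the clip range epsilon.
def ppo_clip_loss(log_prob_new, log_prob_old, advantages, eps=0.2):
    ratio = torch.exp(log_prob_new - log_prob_old)  # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Maximizing the clipped objective = minimizing its negative mean.
    return -torch.min(unclipped, clipped).mean()
\end{verbatim}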
In addition, DPPO and A3C differ in the update mechanism between the chief network and the threads. The chief network of A3C does not participate in training: it only collects the trained parameters of each thread, weighs them and sends them back, so the child threads hold the new parameters while the chief holds the old ones. On the contrary, the chief network of DPPO is trained first and sends its trained parameters to each thread; the processes of parameter transfer and parameter update then proceed continuously, in which the threads always train with the old strategy while the main thread trains the new policy.
\section{Methodology\label{Method}}
In this section, the method of automated penetration testing based on GAIL is introduced. First, state-action pairs are automatically collected to construct an expert knowledge base when the pre-trained DRL / RL model executes a successful post-exploit. Second, the expert knowledge and the state-action pairs generated online by the different DRL / RL models are input into the discriminator of GAIL for training. At last, the discriminator's output reward is applied to guide the agent to perform actions with a higher penetration success rate, improving PT performance. The overall framework of the proposed GAIL-PT is illustrated in Fig.~\ref{fig:1}, including three stages: \textcircled{1} construction of the penetration expert knowledge base; \textcircled{2} GAIL training; \textcircled{3} automated penetration testing based on GAIL.
\subsection{Construction of penetration experts knowledge base}
Expert sample data must be acquired in advance when conducting GAIL-based PT to train agents in different PT environments. Unlike general game scenes, e.g., Gym and Atari \cite{DQN}, there are no published expert samples for agent training in PT environments. Therefore, to the best of our knowledge, we are the first to construct a penetration expert knowledge base for PT.
There are several ways to obtain PT expert knowledge samples, such as saving penetration rules defined by network security experts; however, collecting these rules manually is usually difficult and expensive. Therefore, when using a DRL or RL model to perform PT, we automatically collect the state-action pairs of the agent when it successfully executes a post-exploit, and save them into the expert knowledge base constructed online. At present, DeepExploit \cite{DeepExploit} is a better way to exploit the single target host Metasploitable2 \cite{metasploitable2} than manually using Metasploit. Using DeepExploit to perform automated PT on an authentic single target host yields three different outcomes: unsuccessful exploit, successful exploit and successful post-exploit.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.77\linewidth]{method.jpg}
\caption{Overall framework of GAIL-PT. Stage \textcircled{1} Construction of the penetration expert knowledge base: collect PT expert samples in different penetration scenarios, storing the state-action pairs produced when the RL / DRL model executes a successful exploit. Stage \textcircled{2} GAIL training: input both the expert samples and the state-action pairs generated online by the RL / DRL model into the discriminator of GAIL for training.
Stage \textcircled{3} Automated penetration testing based on GAIL: the RL / DRL agent applies the new action from the well-trained discriminator to automate the PT process and execute more effective penetration attacks.}
\label{fig:1}
\end{figure*}
As shown in Fig.~\ref{fig:2}, when the Kali server initiates a penetration attack on the target host, if stage 1 executes successfully, the target server establishes a session connection with the Kali server (stage 2), which then uses the payload to execute related operations on the target host to gain root privilege. We call this process of privilege promotion post-exploit, which corresponds to stage 3. If stages 1, 2 and 3 all execute successfully, the Kali server has successfully exploited the target host and performed the subsequent privilege promotion operations via the payload, meaning the post-exploit is also successful. We set different reward values for the mixed penetration results and set the highest value for a successful post-exploit. Therefore, during PT, the reward value obtained by the agent can be used to confirm the penetration result at that moment. We collect the state-action pairs with the highest reward value as the expert sample data and store them in the expert knowledge base as one-to-one correspondences between states and actions.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.7\linewidth]{postexploit.jpg}
\caption{Process of executing integral penetration attack. Stage \textcircled{1}: Kali server launches penetration attack toward Target host. Stage \textcircled{2}: Target host establishes session and feedback shell with Kail server after successful exploit. Stage \textcircled{3}: Kali server uses the payload remotely to the target Server for executing related operations to promote the privileges and obtain its root authority.}
\label{fig:2}
\end{figure*}
As in the single-target-host scenario, collecting expert sample data in the network scenario also means storing the state-action pairs with the highest reward values when the target sensitive hosts are exploited successfully at the lowest cost.
Generally speaking, constructing the penetration expert knowledge base is similar for exploiting a single target host and exploiting a network. Specifically, exploiting a host means performing related operations on the target host, such as executing the payload, to finally gain control of it. In the network scenario, after gaining control of one host, the attacker can use the compromised host as a stepping stone, following the network connection relationships, to carry out lateral movements and obtain control of the final sensitive host. In other words, exploiting a network amounts to finding an optimal attack path, which shows the exploit relationships among hosts in different subnets. It is worth mentioning that the most significant distinction between our approach and the attack graph lies in that our method does not need to know the network topology information in advance but finds an optimal attack path through continuous trial-and-error training.
For better understanding, an illustrative example is shown in Fig.~\ref{fig:3}. There are various combinations of state-action pairs due to the diversity of vulnerabilities when exploiting the real target host Metasploitable2. The five-dimensional state represents the operating system, port service, product version, protocol and exploit module type. The action corresponds to the index of a payload in Metasploit. Take the first row of the single-target-host data as an example: 0.875 means the operating system is Linux, the port service is SSH (Secure Shell), the product version is 0.0, the protocol is TCP (Transmission Control Protocol), and an exploit module type of 0 means choosing the first item of the exploit list. There are 593 (0 $\sim$ 592) payloads in Metasploit, so action 53 means choosing the 53rd payload to execute the exploit operation.
For expert sample data in the network scenario, however, the state represents the configuration details and vulnerability information of all hosts observed by the agent in the given network environment, while the action denotes the vulnerability scanning, exploit and privilege promotion operations the agent executes to reach the target host. Take the state and action of the small-scale scenario as an example. The state vector consists of a vector per host in each subnet, starting with the first host. At the beginning of each host's vector, the host location is represented by a 10-bit one-hot encoding. The numbers that follow (0 means false and 1 means true) successively represent compromised, reachable, discovered, value (the specific value of the host), discovery value, access, os (operating system; 1 means Windows, otherwise Linux), exploitable services and privilege promotion processes. The additional final row of the state vector represents action success, connection error, permission error and undefined error, likewise encoded with 1 or 0 for true or false.
There are eight hosts in the small-scale network scenario (shown in Fig.\ref{fig:smallscale}), with two types of operating system (Windows and Linux), three types of exploitable service (SSH, HTTP (Hyper Text Transfer Protocol) and FTP (File Transfer Protocol)), two types of privilege promotion process (Tomcat and daclsvc) and two types of firewall restrictiveness (HTTP and SSH). Consequently, the actions comprise 72 (8$\times$9) operation combinations. The 16th action means executing privilege promotion via the Tomcat process on the sensitive host in subnet 2, which is critical for exploiting the small network scenario. In short, the state-action pair corresponding to the best penetration attack path is unique and deterministic, so the expert sample data are relatively fixed; afterwards, we only performed simple, repeated expansions of the sample size.
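For concreteness, one host's portion of such a state vector could be assembled as below. This is a hedged sketch of the encoding described above: the field order follows our description, not NASim's source, and all names are illustrative.
\begin{verbatim}
# Hypothetical encoding of one host's portion of the state vector.
def encode_host(index, num_hosts=10, *, compromised, reachable, discovered,
                value, discovery_value, access, os_windows,
                service_flags, process_flags):
    one_hot = [1 if i == index else 0 for i in range(num_hosts)]
    flags = [int(compromised), int(reachable), int(discovered),
             value, discovery_value, access, int(os_windows)]
    return one_hot + flags + list(service_flags) + list(process_flags)

# Example: compromised Linux host 0 exposing SSH (of SSH/HTTP/FTP),
# with neither privilege promotion process (Tomcat, daclsvc) available.
vec = encode_host(0, compromised=True, reachable=True, discovered=True,
                  value=0, discovery_value=0, access=1, os_windows=False,
                  service_flags=[1, 0, 0], process_flags=[0, 0])
\end{verbatim}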
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.68\linewidth]{PTexpertsample.jpg}
\caption{Examples of PT expert knowledge base.}
\label{fig:3}
\end{figure*}
\subsection{GAIL training}
After collecting the PT expert sample data, this section introduces training the GAIL network.
Our proposed GAIL contains three different neural networks: Policy Neural Network (Actor), Value Neural Network (Critic) and Discriminator Network (Discriminator). The Actor consists of a four-layer network. The hidden layers comprise three dense layers with output dimensions of 50, 100 and 200 connected through the ReLU activation function. The last is the output layer with a Softmax activation function, and the output dimension corresponds to the number of actions.
The network structure of the Critic is similar to that of the Actor; the only difference is that the last output layer is linear with dimension one. The Discriminator, also known as the Reward Neural Network, is a four-layer network with three hidden layers of the same structure as the Actor's; its output layer is activated by Sigmoid and outputs a probability value used as a discounted reward.
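The three networks described above translate directly into PyTorch modules; the sketch below follows the stated layer sizes (50, 100, 200), while everything else is illustrative.
\begin{verbatim}
import torch.nn as nn

def mlp_body(in_dim):
    # Three dense hidden layers (50, 100, 200) with ReLU, as described above.
    return nn.Sequential(nn.Linear(in_dim, 50), nn.ReLU(),
                         nn.Linear(50, 100), nn.ReLU(),
                         nn.Linear(100, 200), nn.ReLU())

class Actor(nn.Module):              # outputs an action distribution
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(mlp_body(state_dim),
                                 nn.Linear(200, n_actions),
                                 nn.Softmax(dim=-1))
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):             # outputs a scalar state value
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(mlp_body(state_dim), nn.Linear(200, 1))
    def forward(self, s):
        return self.net(s)

class Discriminator(nn.Module):      # outputs a probability used as reward
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(mlp_body(state_dim + action_dim),
                                 nn.Linear(200, 1), nn.Sigmoid())
    def forward(self, sa):
        return self.net(sa)
\end{verbatim}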
The process of GAIL training is exhibited in Fig.~\ref{fig:4}. The Actor, Critic and Discriminator are trained alternately. The inputs of the Actor and Critic are both the currently observed state; the Actor outputs the corresponding action and the Critic outputs the value of the current state. The inputs of the Discriminator are the state-action pairs of the agent and of the PT experts (the expert pairs are used only for training), and its output is the discounted reward value.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.72\linewidth]{GAILtraining.jpg}
\caption{The process of GAIL training. Stage \textcircled{1}: feed the state-action pairs from the PT expert knowledge base and those generated by the online model into the Discriminator for joint training. Stage \textcircled{2}: train the Discriminator by maximizing the reward given to the standard actions from the PT expert samples and minimizing the reward given to the actions output by the agent. Stage \textcircled{3}: subtract the value output by the DRL model from the discounted reward output by the Discriminator to obtain the advantage function. Stage \textcircled{4}: use the advantage function to update the DRL model while guiding the agent to output actions with a higher probability of successful penetration.}
\label{fig:4}
\end{figure*}
The main steps of GAIL training are as follows: (1) feed the state-action pairs from the PT expert knowledge base and those generated online by the DRL / RL model into the Discriminator for joint training; (2) train the Discriminator by maximizing the reward assigned to the standard actions from the PT expert samples and minimizing the reward assigned to the actions output by the agent; (3) use the Discriminator's output to replace the original model's reward function and guide the training direction; (4) drive the predicted distribution output by the DRL model to be as close as possible to the trajectories in the PT expert knowledge base.
During the process of imitating the PT expert strategy, the Actor network replaces the generator $G$; its output action is paired with the state and fed into the Discriminator to be compared with the expert data. The output of the Discriminator $D:S \times A\rightarrow(0,1)$ is taken as the reward signal guiding strategy learning in imitation learning. Regarding the training of the policy and reward function as a game analogous to that between $G$ and $D$ in a Generative Adversarial Network (GAN), the objective function $L_{GAIL}(\pi,D)$ is as follows:
\begin{equation}
\min_{\pi}\max_D L_{GAIL}(\pi,D)=\mathbb{E}_{\pi}[\log D(s,a)]+\mathbb{E}_{\pi_E}[\log(1-D(s,a))]
\label{equ:11}
\end{equation}
Here $\pi$ represents the policy to be learned and $\pi_{E}$ the expert strategy. The first term $\log D(s, a)$ denotes the Discriminator's judgment of the data generated by the agent, and the second term $\log(1-D(s, a))$ its judgment of the expert data. Through this maximum-minimum game, $G$ and $D$ are optimized alternately to train the Actor and Discriminator networks. During training, the total loss is obtained by adding the expert loss and the agent loss, and the loss function is minimized by gradient descent to update the Discriminator and Actor network parameters in reverse:
\begin{equation}
Loss =\mathbb{E}_{\pi}[\log D_{\omega}(s,a)]+\mathbb{E}_{\pi_E}[\log(1-D_{\omega}(s,a))]-\lambda H(\pi_{\theta})
\label{equ:12}
\end{equation}
where $H(\pi_{\theta})\overset{\Delta}{=} \mathbb{E}_{\pi}[-\log \pi_\theta(a|s)]$ represents the entropy of the imitation policy, acting as a policy regularization term in the loss function controlled by a constant $\lambda(\lambda \ge 0)$; $\omega$ and $\theta$ denote the parameters of the Discriminator and Actor networks, respectively.
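One Discriminator update then reduces to a binary cross-entropy step on mixed batches; the sketch below continues the module definitions above and labels agent pairs 1 and expert pairs 0, matching the convention of the objective (the helper name and optimizer settings are illustrative).
\begin{verbatim}
import torch
import torch.nn.functional as F

# Hypothetical single discriminator update on concatenated (s, a) batches.
def discriminator_step(disc, optimizer, agent_sa, expert_sa):
    d_agent = disc(agent_sa)    # D should rise toward 1 on agent data
    d_expert = disc(expert_sa)  # and fall toward 0 on expert data
    loss = F.binary_cross_entropy(d_agent, torch.ones_like(d_agent)) \
         + F.binary_cross_entropy(d_expert, torch.zeros_like(d_expert))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}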
\subsection{Automated penetration testing based on GAIL}
After GAIL training is completed, automated penetration testing can be performed based on GAIL. This section narrates that process; the training framework is shown in Fig.~\ref{fig:1}.
\subsubsection{Automated penetration testing based on GAIL combined with DRL}
In the actual scenario of exploiting the single target Metasploitable2, we first define the PT process as an MDP. The state $s$ of the agent is defined by the information obtained by Nmap through port scanning; the actions $a$ selected by the agent correspond to the payload sequence list in the Metasploit Framework (MSF); and the reward $r$ is set according to the penetration result of the applied payload. We train the agent with four different DRL models: A3C, DPPO, A3C-GAIL and DPPO-GAIL, all using a multi-threaded mechanism to speed up the agent's learning.
Each of the A3C and DPPO models consists of a four-layer neural network: an input layer, three fully connected hidden layers and an output layer. The state inputs of the model, i.e., operating system, port service, product version, protocol and exploit module type, are all obtained by scanning the target with the vulnerability scanning tool Nmap. The action output of the model corresponds to the probability distribution over the payloads in Metasploit and the state value. The Actor network uses Softmax as the activation function and outputs the probability distribution $P(a)$ over actions, while the Critic network uses a linear activation and outputs the state value $V(s)$ of the current state.
We introduce a discriminator that participates in the training of the Actor and Critic networks. The training process of the DRL model combined with GAIL differs from that of the plain A3C and DPPO models: the difference between the value function and the discounted reward output by the discriminator is used as the advantage function to guide the training of the policy network, while the output of the GAIL model stays consistent with the original model.
The DRL model is regarded as an attacker in the learning process. According to the input vulnerability information, the agent chooses an effective payload to exploit the target host until it further obtains root authority.
Therefore, obtaining the root authority of the target host is the sign of a successful exploit; more precisely, of a successful post-exploit once the privilege promotion operation has been done, after which the agent is rewarded. The value of the reward $r$ depends on whether the PT attack is effective.
As shown in Fig.~\ref{fig:2}, if the agent launches a penetration attack from the Kali server on the target host, establishes a session connection with the Kali server and then executes the payload to gain root authority of the target host, the post-exploit succeeds and the agent obtains the highest reward.
On the contrary, if the agent only initiates the penetration attack without establishing a connection with the Kali server, it is given a lower reward; and if the agent fails to exploit the target host, its reward is negative. The specific reward settings are as follows:
\begin{itemize}
\item The post-exploit is successful, $r$=$100$;
\item The exploit is successful, but the root authority for target host is not obtained, $r$=$1$;
\item The exploit is failure, $r$=$-1$;
\item For other operations, regardless of whether the exploitation is successful or not, $r$=$-1$.
\end{itemize}
A negative reward $r$=$-1$ punishes the agent; since the agent always tries to maximize its reward, repeated punishment drives it to reach the penetration goal as soon as possible. This corresponds to a real attacker detecting vulnerabilities and exploiting them successfully. In addition, as soon as the agent receives a reward, the current round of training ends.
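These rules can be encoded in a single dispatch function; the outcome names below are our own shorthand.
\begin{verbatim}
# Hypothetical mapping from a penetration outcome to the scalar reward above.
def pt_reward(outcome):
    rewards = {
        "post_exploit_success": 100,  # root authority obtained
        "exploit_success": 1,         # session established, no root authority
        "exploit_failure": -1,
    }
    return rewards.get(outcome, -1)   # any other operation is penalized
\end{verbatim}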
Details regarding the automated PT via GAIL combined with DPPO are presented in \textbf{Algorithm 1}.
\begin{algorithm}
\caption{DPPO-GAIL}
\begin{algorithmic}[1] %
\Require Expert PT trajectories $\hat{\tau}_{GAIL}\sim{\pi_{E}}$, five-dimensional state $s$ comprising operating system type, opening-port service name, product version, penetration module type, and key information of the exploiting target type, obtained by scanning Metasploitable2 with Nmap; iterations $i=200000$.
\Ensure Action $a$, a payload whose distribution is infinitely close to the expert samples $a_E$.
\State Initialize DPPO actor-net parameters $\theta_0$ and GAIL discriminator-net parameters $\omega_0$.
\For{$i = 0,1,2,\ldots$}
\State Generate state-action pairs from DPPO model online and store them as $\pi_t$.
\State Sample online trajectories $\hat{\tau}_{i}\sim{\pi_{\theta_i}}$.
\State Update the discriminator-net parameters from $\omega_{i}$ to $\omega_{i+1}$ with the gradient:
\State $G_d = \hat{\mathbb{E}}_{\pi_{GAIL}}[\nabla_{\omega}\log D_{\omega}(s,a)]+\hat{\mathbb{E}}_{\pi_t}[\nabla_{\omega}\log(1-D_{\omega}(s,a))]$.
\State From the DPPO actor-net, take a policy step from $\theta_i$ to $\theta_{i+1}$,
\State using the TRPO rule with the cost function $\log(D_{\omega_{i+1}}(s,a))$.
\State Specifically, DPPO takes a $L^{clip}$ natural gradient step to update the old and new policies with \eqref{equ:9}, and:
\State $\hat{\mathbb{E}}_{\pi_{GAIL}}[\nabla_{\theta}\log \pi_{\theta}(a|s)Q(s,a)-\lambda\nabla_{\theta}H(\pi_{\theta})]$,
\State where $Q(\bar{s}, \bar{a})=\hat{\mathbb{E}}_{\pi_{GAIL}}\left[\log \left(D_{\omega_{i+1}}(s, a)\right) \mid s_{0}=\bar{s}, a_{0}=\bar{a}\right]$.
\EndFor
\end{algorithmic}
\end{algorithm}
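To make the discriminator step $G_d$ in Algorithm 1 concrete, the following is a minimal PyTorch sketch of one discriminator update. The network architecture, dimensions and names are illustrative assumptions; the update maximizes $\hat{\mathbb{E}}_{\pi_{GAIL}}[\log D]+\hat{\mathbb{E}}_{\pi_t}[\log(1-D)]$ by minimizing the equivalent binary cross-entropy loss.
\begin{verbatim}
import torch
import torch.nn as nn

state_dim, action_dim = 5, 128      # illustrative sizes

# D maps a concatenated (state, action) pair to the probability
# that the pair comes from the expert.
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                     nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCELoss()

def discriminator_step(expert_sa, policy_sa):
    """expert_sa, policy_sa: (batch, state_dim + action_dim) tensors."""
    d_expert, d_policy = disc(expert_sa), disc(policy_sa)
    # max E_expert[log D] + E_policy[log(1 - D)]  <=>  min the BCE below
    loss = bce(d_expert, torch.ones_like(d_expert)) \
         + bce(d_policy, torch.zeros_like(d_policy))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}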
\subsubsection{Automated penetration testing based on GAIL combined with RL}
When conducting penetration testing in the network attack simulator NASim \cite{nasim}, we again model the PT process as an MDP. The state is defined as the configuration details and vulnerability information of all hosts observed by the agent in the given network environment; the larger the number of hosts, the larger the state space. An action is a manipulation of vulnerability scanning, exploit, or privilege promotion executed by the agent on its way to the target host. We define an action as an attack vector $\langle m,c\rangle$, meaning the agent performs manipulation $m$ on host $c$. In each training step, the size of the agent's action space is $P\times Q$, where $P$ is the number of hosts in the network and $Q$ is the number of manipulations that can be performed to exploit the final target host. After the agent executes an operation, the environment feeds back a corresponding reward value, computed as in \eqref{equ:13} from the value of the host and the cost of the exploit, privilege promotion, or scanning operation on that host.
\begin{equation}
R=\sum_{c\in C} \operatorname{value}(c) - \sum_{m\in M} \operatorname{cost}(m)
\label{equ:13}
\end{equation}
We use the RL algorithm Q-learning as a benchmark to automatically generate penetration paths and to optimize the path-finding process. During training, the agent selects representative vulnerabilities according to the service in order to compute exploitation probabilities. In Equation \eqref{equ:13}, $C$ is the set of hosts in the network that the agent can exploit, and $M$ is the set of manipulations the agent can adopt. The attacker aims to maximize the cumulative reward in \eqref{equ:14} and explore the optimal penetration path, exploiting the most valuable sensitive target hosts with as few operations as possible.
\begin{equation}
\max_\pi{\mathbb{E}}\left[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t,s_{t+1})\,\middle|\,\pi\right]
\label{equ:14}
\end{equation}
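A minimal sketch of the reward in \eqref{equ:13} and the discounted return in \eqref{equ:14} follows; the dictionaries \texttt{value} and \texttt{cost} are illustrative stand-ins for the NASim scenario description.
\begin{verbatim}
def step_reward(compromised_hosts, actions_taken, value, cost):
    """Eq. (13): summed host value minus summed action cost."""
    return sum(value[c] for c in compromised_hosts) \
         - sum(cost[m] for m in actions_taken)

def discounted_return(rewards, gamma=0.99):
    """Eq. (14), truncated to the observed episode."""
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g
\end{verbatim}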
Subsequently, we combined GAIL with the Q-learning algorithm. The Q-learning-GAIL algorithm is applied to three different scenarios, a small-scale network (with or without a honeypot) and a large-scale network, to train the agent to optimize penetration path-finding. Q-learning differs from the DRL algorithms: its optimization continuously minimizes the TD error, i.e., the difference between the target value $Q_{tar}$ and the estimated value $Q_{eva}$. When computing the current value $Q_{eva}$, the immediate reward $r$ is summed with the discounted maximum target value $Q_{tarnex}$ of the next state. Therefore, to improve the optimization effect, we superimpose the instant reward $r$ and the reward $r_d$ output by the discriminator, which guides the learning of the imitation policy so that the action distribution of the RL model is as close to the PT expert samples as possible.
Details regarding the automated PT with GAIL combined with Q-learning are presented in \textbf{Algorithm 2}.
\begin{algorithm}
\caption{Q-learning-GAIL }
\begin{algorithmic}[1] %
\Require Expert PT trajectories $\hat{\tau}_{GAIL}\sim{\pi_{E}}$, states $s$ including configuration knowledge and vulnerability information of all hosts, iterations $i=20000$.
\Ensure Actions $a$, exploits or privilege escalations whose distribution is infinitely close to the expert samples $a_E$.
\State Initialize $Q(s,a)$ arbitrarily and GAIL discriminator-net parameters $\omega_0$.
\For{$i = 0,1,2,\ldots$}
\State Generate state-action pairs from RL model online and store them as $\pi_t$.
\State Sample online trajectories $\hat{\tau}_{i}\sim{\pi_{\theta_i}}$; initialize $s$.
\State Update the discriminator-net parameters from $\omega_{i}$ to $\omega_{i+1}$ with the gradient:
\State $G_d$ = $\hat{\mathbb{E}}_{\pi_{GAIL}}[\nabla_{\omega}\log D_{\omega}(s,a)]+\hat{\mathbb{E}}_{\pi_t}[\nabla_{\omega}\log(1-D_{\omega}(s,a))]$.
\State Choose $a$ from $s$ using the policy derived from $Q$ ($\epsilon$-greedy),
\State using the TRPO rule with the cost function $\log(D_{\omega_{i+1}}(s,a))$.
\State Take action $a$, observe $r$, $\overline{s}$.
\State From discriminator-net get discounted reward $r_d$.
\State Update $Q(s,a) \leftarrow Q(s,a) + \alpha[r+r_d+\gamma\max_{\overline{a}} Q(\overline{s},\overline{a})-Q(s,a)]$, $s \leftarrow \overline{s}$.
\EndFor
\end{algorithmic}
\end{algorithm}
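A minimal tabular sketch of the update line in Algorithm 2 follows; the discriminator reward \texttt{r\_d} is assumed to be supplied by the GAIL discriminator, and all names and hyper-parameter values are illustrative.
\begin{verbatim}
from collections import defaultdict

Q = defaultdict(float)        # Q[(s, a)] -> estimated value
alpha, gamma = 0.1, 0.99      # illustrative hyper-parameters

def q_gail_update(s, a, r, r_d, s_next, actions):
    """One Q-learning-GAIL step: the TD target is augmented with
    the discriminator reward r_d (Algorithm 2, update line)."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    td_target = r + r_d + gamma * best_next
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return s_next              # caller sets s <- s_next
\end{verbatim}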
\subsection{Convergence analysis of GAIL}
Before GAIL \cite{GAIL} was proposed, Ho et al. \cite{im15} first proposed the inverse reinforcement imitation learning (IRL-IL) framework:
\begin{equation}
\max_{\pi}\min_r\mathbb{E}_{\pi}[r(s,a)]-\mathbb{E}_{\pi_E}[r(s,a)]+\psi(r)
\label{equ:16}
\end{equation}
where $r(s, a)$ is the reward function, i.e., the reward $r$ corresponding to the state-action pair $(s, a)$; ${\pi_E}$ is the expert strategy; $\pi$ is the strategy to be learned; $\mathbb{E}_{\pi}[\,\cdot\,]$ denotes the expectation under the strategy; and $\psi(r)$ is a penalty term on the reward function. Based on this framework, GAIL gives a particular but more reasonable form of the penalty term:
\begin{equation}
\psi_{G A I L}(r) \triangleq\left\{\begin{array}{cc}
\mathbb{E}_{\pi_{E}}[g(r(s, a))] & \text { if } r>0 \\
+\infty & \text { otherwise }
\end{array}~g(x)=\left\{\begin{array}{cc}
x+\log \left(1-e^{-x}\right) & \text { if } x>0 \\
+\infty & \text { otherwise }
\end{array}\right.\right.
\end{equation}
A valuable feature of the penalty term $\psi_{GAIL}(r)$ is that it encourages the new reward function $g(x)$ to assign a considerably larger reward to the expert strategy ${\pi_E}$; its special feature is that the reward function satisfies the specific form:
\begin{equation}
r(s,a)=-\log D(s,a)
\label{equ:17}
\end{equation}
where $D(s,a)$ is the probability assigned by the Discriminator $D$ that the pair $(s, a)$ was produced by the expert strategy. This form connects IRL-IL with Generative Adversarial Networks (GANs): the Actor network, which takes states as input and generates a policy, plays the role of the Generator $G$; the reward function, which takes state-action pairs as input and outputs discounted rewards, plays the role of the Discriminator $D$. Optimizing the policy under the current reward function is analogous to training the Generator, and optimizing the reward function is analogous to training the Discriminator.
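A minimal sketch of \eqref{equ:17}, turning the discriminator output into a policy reward, could look as follows; \texttt{disc} is assumed to output $D(s,a)\in(0,1)$, and the small \texttt{eps} is a numerical-stability assumption of ours.
\begin{verbatim}
import torch

def gail_reward(disc, sa, eps=1e-8):
    """Eq. (17): r(s, a) = -log D(s, a), detached from the graph."""
    with torch.no_grad():
        d = disc(sa)
    return -torch.log(d + eps)
\end{verbatim}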
Therefore, the core of GAIL is to minimize the Jensen-Shannon divergence between the state-action sample distribution $\rho_{\pi}(s, a)$ generated by the strategy $\pi$ and the expert sample distribution $\rho_{\pi_E}(s, a)$:
\begin{equation}
\psi^*_{GAIL}=\sup_{D\in(0,1)^{S\times A}} \mathbb{E}_{\pi}[\log D(s,a)]+\mathbb{E}_{\pi_E}[\log(1-D(s,a))].
\label{equ:18}
\end{equation}
Equation \eqref{equ:18} is proportional to the optimal negative log loss of the binary classification problem of distinguishing between the two kinds of state-action pairs. It turns out that this optimal loss is, up to a constant shift and scale, the Jensen-Shannon divergence, which is also a squared metric between the normalized occupancy distributions:
\begin{equation}
D_{JS}(\overline{\rho}_{\pi},\overline{\rho}_{\pi_E}) \triangleq D_{KL}\left(\overline{\rho}_{\pi}\,\Vert\,(\overline{\rho}_{\pi}+\overline{\rho}_{\pi_E})/2\right)+D_{KL}\left(\overline{\rho}_{\pi_E}\,\Vert\,(\overline{\rho}_{\pi}+\overline{\rho}_{\pi_E})/2\right)
\label{equ:19}
\end{equation}
where $\overline{\rho}_{\pi}=(1-\gamma)\rho_{\pi}$ and $\overline{\rho}_{\pi_E}= (1-\gamma)\rho_{\pi_E}$. Treating the causal entropy $H(\pi)$ as a policy regularizer controlled by $\lambda$ ($\lambda\geq 0$), and dropping the $1-\gamma$ normalization of the occupancy measure to make the formulation clearer, we obtain a new imitation learning algorithm:
\begin{equation}
\mathop {\min }_{\pi}\psi^*_{GAIL}({\rho}_\pi-\rho_{\pi_E})-\lambda H(\pi)=D_{JS}({\rho}_\pi,\rho_{\pi_E})-\lambda H(\pi)
\label{equ:20}
\end{equation}
We thus obtain a strategy whose occupancy measure minimizes the Jensen-Shannon divergence between the generated samples and the expert samples. Equation \eqref{equ:20} minimizes an exact metric between occupancy measures, so unlike linear apprenticeship learning algorithms it can imitate expert strategies exactly. It also shows that, by minimizing the difference between ${\rho}_{\pi}(s, a)$ and ${\rho}_{\pi_E}(s, a)$, the state-action pairs generated by the current strategy $\pi$ eventually fit the sample distribution of the expert strategy $\pi_E$, so GAIL converges.
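As a numerical illustration of \eqref{equ:19}, the following sketch computes the divergence between two discrete occupancy distributions represented as normalized histograms over state-action pairs; it mirrors the paper's form of the sum, without the factor $1/2$ used in some definitions of the Jensen-Shannon divergence.
\begin{verbatim}
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def js_divergence(rho_pi, rho_e):
    """Eq. (19): KL(rho_pi || m) + KL(rho_e || m), m the midpoint."""
    m = 0.5 * (np.asarray(rho_pi, float) + np.asarray(rho_e, float))
    return kl(rho_pi, m) + kl(rho_e, m)
\end{verbatim}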
\subsection{Analysis of algorithm complexity}
In the realistic scenario of exploiting Metasploitable2, the automated PT process is based on A3C-GAIL and DPPO-GAIL. For these DRL algorithms, the complexity is related to the number of training episodes $n_m$ and the maximum number of steps $n_t$ per episode. In the A3C algorithm, the thread networks and the chief network adopt the AC structure with a single nested for-loop; the GAIL network nests one more for-loop, and because of GAIL the Actor and the Critic are trained in an alternating fashion. After combining with GAIL, the time complexity of A3C-GAIL is $O(n_mn_t)$. The DPPO algorithm uses the new and old policy update mechanism; the training program of its chief network contains three for-loop statements, two of which are nested side by side inside another for-loop, one used for model training and the other for action selection. Combined with GAIL, the time complexity of DPPO-GAIL is likewise $O(n_mn_t)$.
The complexity of the Q-learning algorithm used to automatically find and optimize penetration paths in the network scenarios is relatively small, since it contains only one for-loop. Consequently, even after nesting with GAIL, the time complexity of Q-learning-GAIL stays at $O(n)$.
\section{Experiments\label{Exp}}
The experiments we conducted are divided into two parts. First, we test the penetration performance of the GAIL network combined with the DRL algorithms, A3C-GAIL and DPPO-GAIL, for automated PT in the real Metasploitable2 scenario. Then, considering the complexity of real penetration environments, we employ the network attack simulator NASim to find the optimal penetration path automatically. To test the effectiveness of the RL model, we combine the GAIL network with the Q-learning algorithm and evaluate the penetration performance of Q-learning-GAIL in a small-scale network (with or without a honeypot \cite{honeypot1} \cite{honeypot2}) and in a large-scale network.
All experiments were performed on a single server with an Intel i7-7700K CPU running at 4.20GHz, 64 GB DDR4 memory, 4 TB HDD and 2 TITAN Xp 12 GB GPU cards, running Ubuntu 16.04; we employ Python 3.6 with tensorflow-2.1.0 for exploiting Metasploitable2 and Python 3.7 with torch-1.5.0 for NASim.
\subsection{Research questions}
The experiment will be analyzed based on different experimental scenarios around the following four research questions (RQs):
\textbf{RQ1:} Can expanding expert knowledge samples improve penetration performance in exploiting Metasploitable2? What is the effect of the different amounts?
\textbf{RQ2:} How efficient is GAIL's penetration when exploiting Metasploitable2? Can GAIL achieve the SOTA performance?
\textbf{RQ3:} Does expanding the expert sample amount still improve the penetration performance in different network scenarios?
\textbf{RQ4:} Can GAIL also achieve the SOTA penetration performance in different network scenarios?
\subsection{Single host scenario}
\subsubsection{Explanation for Metasploitable2 }
We used Metasploitable2 as the authentic single target host. Metasploitable2 runs a deliberately vulnerable Ubuntu operating system designed as a security tool for testing and demonstrating common vulnerabilities and attacks; its primary purpose is to serve as a target for penetration attacks launched by MSF. The target host opens 23 ports, and vulnerability information for some of these open ports is shown in Table~\ref{tabel:metasploitable2}, including high-risk ports such as 21, 23 and 8180. It contains many unpatched high-risk vulnerabilities, including Samba, MSRPC, Shell and command injection vulnerabilities; some services are exposed to the outside environment, and the database allows external connections. The user passwords in the system are weak, and it is equipped with web vulnerability drill platforms such as DVWA and Mutillidae.
\begin{table}[!htp]
\footnotesize
\centering
\caption{Part of the opened port information of Metasploitable2}
\label{tabel:metasploitable2}
\resizebox{0.3\linewidth}{!}{
\begin{tabular}{lll}
\toprule \hline
\textbf{Port} & \textbf{State} &\textbf{Service} \\ \hline
21/tcp & open & FTP \\
22/tcp & open & SSH \\
23/tcp & open & Telnet \\
\dots & \dots &\dots \\
8180/tcp & open & HTTP \\
\hline \bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Experiment setting}
In order to evaluate the penetration performance of different algorithms in exploiting the target Metasploitable2, we first choose DeepExploit, which automates PT based on the A3C algorithm, as the benchmark. Second, since DPPO differs from A3C only in the update mechanism between the thread and chief networks, we also explore the feasibility of using the distributed algorithm DPPO to automate the PT process and test its performance on the same target. Finally, we combine the GAIL network with the A3C and DPPO algorithms, apply the resulting A3C-GAIL and DPPO-GAIL algorithms to automate the PT process, exploit the target host Metasploitable2, and test their penetration performance.
This section uses the above four algorithm models to automate PT; the ultimate goal is to execute the post-exploit successfully. All four algorithms use 20 threads for 200,000 episodes of training and the same hyper-parameter settings; details are shown in Table~\ref{tabel:hyper-parameter}.
\begin{table}[!htp]
\footnotesize
\centering
\caption{Hyper-parameter values of four algorithms.}
\label{tabel:hyper-parameter}
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{ll}
\toprule \hline
\textbf{Hyper-parameter} & \textbf{Value} \\ \hline
Max steps per episode & 30 \\
Max update step size per episode & 5 \\
Learning rate & 0.0001 \\
Greedy rate &0.8 \\
Entropy coefficient &0.01 \\
Loss coefficient of Critic-net &0.05 \\
\hline \bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Evaluation metrics}
In the single target host scenario, we apply the following five indicators, covering both DRL performance and penetration performance, to analyze the answers to \emph{\textbf{RQ1}} and \emph{\textbf{RQ2}}:
\begin{itemize}
\item \textbf{Total reward.} For the training of DRL models, the reward value most intuitively reflects the learning quality of the agent, so we apply the total reward as the first indicator. The total reward is summed in every episode, which can be used to evaluate the overall performance of the algorithm;
\item \textbf{Loss value.} For the training of DRL models, we can judge the stability of the model training by the loss value, so we apply the loss value as the second indicator. The loss value is calculated when the chief and child network parameters are updated, which can also be used to evaluate the overall performance of the algorithm;
\item \textbf{Post-exploit count.} In the process of exploiting Metasploitable2, a post-exploit means the agent exploited a vulnerable service on an open port and successfully executed the payload to gain privilege on the target host. The more successful post-exploits, the more effective the PT; this is the most intuitive reflection of penetration performance;
\item \textbf{Time cost.} In the process of exploiting Metasploitable2, the time cost is the time spent on 200,000 episodes of training when using the DRL models to exploit the target host. It reflects the cost we have to pay: the lower the time cost, the better the penetration performance;
\item \textbf{Time cost under exploiting limited vulnerabilities.} Given the limitations of applying MSF to the target host Metasploitable2, whether MSF is used manually or automatically, only the vulnerabilities of 11 types of services can be exploited: VNC (Virtual Network Computing), Telnet, SSH, RPC (Remote Procedure Call), ProFTPd, PostgreSQL, PHP (Hypertext Preprocessor), MySQL, IRC (Internet Relay Chat), HTTP (Hypertext Transfer Protocol) and Apache. Therefore, we compare the time needed to exploit these vulnerabilities and use it as the last indicator of penetration performance.
\end{itemize}
\subsubsection{Research question 1}
\begin{center}
\fcolorbox{black}{gray!20}{\parbox{0.97\linewidth}
{
Can expanding expert knowledge samples improve penetration performance in exploiting Metasploitable2? What is the effect of the different amounts?
}
}
\end{center}
We answer this question by analyzing the experimental results with two indicators: post-exploit count and time cost.
This section uses DeepExploit as the benchmark and builds an expert knowledge base by collecting expert state-action pairs from successful post-exploits. We tested the post-exploit count and time cost of applying A3C-GAIL to perform automated PT with five different expert sample amounts, 10,000, 25,000, 40,000, 55,000 and 70,000 pairs, over the same number of training episodes, using the two evaluation indicators to survey the impact of expanding expert samples on penetration performance.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.66\linewidth]{Amount1128.png}
\caption{The impact of different expert sample sizes on penetration performance in exploiting the target host Metasploitable2 }
\label{fig:amount}
\end{figure*}
It can be observed from the experimental results in Fig.~\ref{fig:amount} that, weighing the count of successful post-exploits and the time cost together, as the expert sample size increases from 10,000 to 55,000 pairs the improvement in penetration performance correlates positively with the sample size. When the expert sample size is expanded to 70,000 pairs, the count of successful post-exploits increases only slightly while the time cost grows. Therefore, we determined that a sample size of around 55,000 pairs achieves the better penetration performance. We also use the DPPO-GAIL model with this expert sample size to automate PT in the subsequent experiments; the final expert sample size is 57,296 pairs.
\vspace{-0.3cm}
\begin{center}
\fcolorbox{black}{white!20}{\parbox{0.97\linewidth}
{
\emph{\textbf{Answer to RQ1}}:
Penetration performance improves as the amount of expert samples is expanded up to about 55,000 pairs, but tends to saturate once the expert samples exceed this range.
}
}
\end{center}
\subsubsection{Research question 2}
\begin{center}
\fcolorbox{black}{gray!20}{\parbox{0.97\linewidth}
{
How efficient is GAIL's penetration when exploiting Metasploitable2? Can GAIL achieve the SOTA performance?
}
}
\end{center}
We answer this question by analyzing the experimental results with the five indicators proposed above: total reward, loss value, post-exploit count, time cost, and time cost under exploiting limited vulnerabilities.
\textbf{Total rewards.} As shown in Fig.~\ref{fig:R&L}(a), at the beginning of training the total reward obtained by the agent is relatively low and the loss function has not yet converged. As the agent keeps learning, by about 150,000 training rounds the reward functions of all four algorithms show a trend of convergence. Although introducing the GAIL mechanism does not speed up the convergence of the reward function of the A3C and DPPO models, the total reward values of A3C-GAIL and DPPO-GAIL are higher than those of the original DRL models; in particular, the total reward of A3C-GAIL is 243\% higher than that of DeepExploit.
\begin{figure}[t]
\centering
\subfigure[Comparison of reward]{
\includegraphics[width=0.48\linewidth]{reward1128.png}
}
\subfigure[Comparison of loss value]{
\includegraphics[width=0.48\linewidth]{loss1128.png}
}
\vspace{-0.4cm}
\caption{Different algorithms compared the reward value (left) and loss value (right). The training episodes of the four algorithms are all 200,000, they are all trained five times, and we present the best results of three of them in the figure. The shaded part of the left figure represents the fluctuation of the reward value in each episode, and the solid line represents the most stable training result. The loss value on the right figure corresponds to the situation when the reward value is the most stable.}
\label{fig:R&L}
\vspace{-0.3cm}
\end{figure}
\textbf{Loss value.} Over many experiments, we found that the loss value of the benchmark DeepExploit explodes in the later stage of training, which causes the model to fail to converge. Therefore, we modify the loss function of the original DeepExploit algorithm and use the modified DeepExploit as the new benchmark against the other three algorithms. Since the multi-threaded training mechanism only computes the model's loss when parameters are passed between the thread and chief networks, the horizontal axis does not correspond directly to the number of training episodes. From the convergence of the loss functions of the four algorithm models in Fig.~\ref{fig:R&L}(b), we can see that after updating the DRL models with the GAIL mechanism 10,000 times, the fluctuation of the loss value is more stable than in the original DRL models; in particular, the contrast in loss fluctuation between A3C-GAIL and A3C is more evident than between DPPO-GAIL and DPPO.
\textbf{Post-exploit count \& Time cost.} In addition to analyzing the total reward and loss value from the perspective of the deep reinforcement learning model, we also compare penetration performance from the perspective of PT itself with two indicators: the count of successful post-exploits and the time cost of completing 200,000 episodes of PT training. As shown in Fig.~\ref{fig:20w}, after replacing the new and old policies and changing the thread and chief network parameter update mechanism, the post-exploit count of the DPPO model is 276 higher than that of DeepExploit while the time cost increases by only 311 seconds, which indicates that applying DPPO to automate PT is effective and that its penetration performance is better than DeepExploit's.
For A3C-GAIL and DPPO-GAIL, we combined the GAIL network and employed the resulting models to perform intelligent PT. The post-exploit count of A3C-GAIL is the highest among the four algorithm models, but so is its time cost: it achieves only 260 more post-exploits than DPPO-GAIL while its time cost is twice as high, which explains why the total reward of A3C-GAIL in Fig.~\ref{fig:R&L} is higher than that of DPPO-GAIL. Since we only counted successful post-exploits but gave positive rewards to exploits that did not establish a session with the target host, A3C-GAIL may have performed many additional exploitations in the extra time that nevertheless failed at the post-exploit stage. Therefore, considering both post-exploit count and time cost, we conclude that DPPO-GAIL performs intelligent PT more efficiently.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.64\linewidth]{pt20w.png}
\caption{Comparison of post-exploit count and time cost of different algorithms in exploiting the target host Metasploitable2 }
\label{fig:20w}
\end{figure*}
\textbf{Time cost under exploiting limited vulnerabilities.} In order to verify the better performance of the DPPO-GAIL model for intelligent PT, we conducted another set of comparative experiments. Since Metasploit can only exploit the vulnerabilities of 11 service types on the target host Metasploitable2, we measured the time each of the four algorithms takes to exploit these 11 different vulnerabilities as the last evaluation indicator. As Fig.~\ref{fig:pt11} shows, under the guidance of expert samples and in combination with the GAIL network, DPPO-GAIL has the lowest time cost, followed by A3C-GAIL. Moreover, the time costs of DPPO-GAIL, A3C-GAIL and DPPO are reduced by 55.7\%, 46.1\% and 17.1\% compared with DeepExploit, respectively.
Taking all five evaluation indicators into account, under the guidance of expert sample knowledge and in combination with the GAIL network, A3C-GAIL and DPPO-GAIL improve on DeepExploit in total reward and in the count of successful post-exploits when intelligently exploiting the real target Metasploitable2: the total reward increases by 253\% and 160\%, respectively, and the post-exploit count by 75.7\% and 46.5\%. However, the time cost of A3C-GAIL is twice that of DPPO-GAIL. Moreover, when exploiting the 11 different types of service vulnerabilities, the time costs of the two algorithms are reduced by 46.1\% and 55.7\%, respectively.
\vspace{0.1cm}
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.64\linewidth]{pt11.png}
\caption{Comparison of the time cost of exploiting 11 different types of service vulnerabilities in the target host Metasploitable2}
\label{fig:pt11}
\end{figure*}
\vspace{-0.4cm}
\begin{center}
\fcolorbox{black}{white!20}{\parbox{0.97\linewidth}
{
\emph{\textbf{Answer to RQ2}}:
The penetration performance of A3C-GAIL and DPPO-GAIL is better than that of A3C and DPPO. Besides, the DPPO-GAIL can execute more post-exploits successfully, and exploit more vulnerabilities of different service types in less time to achieve the SOTA performance.
}
}
\end{center}
\subsection{Network scenario}
In order to verify the effectiveness of GAIL-based automated PT in networks of different scales, we conducted a second set of experiments on three network scenarios in the high-fidelity network simulator NASim: a small-scale network with and without a honeypot, and a large-scale network. The primary purpose in these scenarios is to generate an optimal penetration path in a trial-and-error training manner: the agent exploits one host in a subnet and gains control of it, then uses the compromised host as a stepping stone, following the network connection relationships, to carry out several lateral movements until it controls the final sensitive host.
\subsubsection{Experiment settings for different network scenarios}
At first, we automate PT in the small-scale network scenario without a honeypot, shown in Fig.~\ref{fig:smallscale}. The network contains 4 subnets and 8 hosts; the sensitive hosts are (2, 0) and (4, 0), which are the target hosts of the PT. Firewalls block communication between subnets with restricted services, and penetrating from one subnet to another incurs a specific cost each time.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.68\linewidth]{smallscale.jpg}
\includegraphics[width=0.31\linewidth]{legend.jpg}
\caption{Layout of the small-scale network. A honeypot is located at (3,2); in the scenario without a honeypot, the host at (3,2) is regarded as a normal host.}
\label{fig:smallscale}
\end{figure*}
The hosts within a subnet can communicate with each other. The configuration information of each host is shown in Table~\ref{tabel:Configuration list}, including each host's virtual address, operating system, service information, process information and value. In order to simulate the behavior of an actual attacker, we assume that the agent cannot directly obtain the network topology or host configuration information. Therefore, in addition to performing exploit (Exploit) and privilege promotion (Promotion) actions, the agent can also use scanning (Scan) to obtain relevant information about the hosts and the target network.
For each host, the agent can choose the actions shown in Table~\ref{tabel:action list}. We choose processes and services that attackers often use during the PT process within a subnet to stand in for network security vulnerabilities, setting the probabilities to `0.9' for SSH and HTTP and `0.6' for FTP according to their CVSS scores, and to `1' for privilege promotion and scanning. For example, the agent can obtain user permission on a host by exploiting the vulnerability in the FTP service and then perform the Daclsvc privilege promotion action to obtain root permission, thereby realizing a penetration attack on the host.
\begin{table}[!htp]
\footnotesize
\centering
\caption{Configuration list}
\label{tabel:Configuration list}
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{lllll}
\toprule \hline
\textbf{Address} & \textbf{Operation System} &\textbf{Host-value} &\textbf{Service} &\textbf{Process} \\
\hline
(1,0) & Linux & 0 & HTTP & Tomcat \\
(2,0) & Linux & 100 & SSH,HTTP & / \\
(3,0) & Windows & 0 & HTTP & / \\
(3,1) & Windows & 0 & FTP,HTTP & Daclsvc \\
(3,2) & Windows & 0 & FTP,HTTP & Daclsvc \\
(3,3) & Windows & 0 & FTP & / \\
(3,4) & Windows & 0 & FTP & Daclsvc \\
(4,0) & Linux & 100 & SSH,HTTP & Tomcat \\
\hline \bottomrule
\end{tabular}
}
\end{table}
Then, we conduct the same experiments in the small-scale network scenario with a honeypot, also shown in Fig.~\ref{fig:smallscale}, optimizing the penetration path-finding strategy through the RL algorithm to realize an intelligent PT process. The honeypot is located at (3,2); in the small-scale scenario without a honeypot, the host at (3,2) is regarded as a normal host. We set the value of the honeypot host to -100 and assume the agent knows the address of the honeypot in the subnet, so that it can learn to bypass the honeypot while exploiting.
\begin{table}[!htp]
\footnotesize
\centering
\caption{Agent action list}
\label{tabel:action list}
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{llllll}
\toprule \hline
\textbf{Name} & \textbf{Type} &\textbf{Operation System} &\textbf{Cost} &\textbf{Prob} &\textbf{Access}\\
\hline
SSH-Exp & Exploit & Linux & 3 & 0.9 &User \\
FTP-Exp & Exploit & Windows & 1 &0.6 &User \\
HTTP-Exp & Exploit & None & 2 & 0.9 &User \\
Tomcat & Promotion & Linux & 1 & 1 &Root \\
Daclsvc & Promotion & Windows & 1 & 1 &Root\\
Service-Scan & Scan & / & 1 & 1 &/\\
Os-Scan & Scan & / & 1 & 1 &/\\
Subnet-Scan & Scan & / & 1 & 1 &/\\
Process-Scan & Scan & / & 1 & 1 &/\\
\hline \bottomrule
\end{tabular}
}
\end{table}
At last, we conducted the same PT experiment in a large-scale simulated network scenario with 23 hosts. The large-scale scenario is more complicated than the small-scale one: as shown in Fig.~\ref{fig:largescale}, it has 8 subnets and a total of 23 hosts. There is one sensitive host each in subnet 2 and subnet 7, namely (2, 0) and (7, 0), which are the target hosts of the PT. The hosts in the remaining subnets are all normal hosts; each host runs various services and communicates with the others. Only subnet 1 can communicate directly with the external network; the remaining subnets communicate within the internal network, subnet 5 can only communicate directly with subnet 3, and subnet 6 only with subnet 4. For example, if the attacker arrives at (5,0) but his goal is to exploit the sensitive host (7,0), he has to return to subnet 3, perform lateral movements and other operations to exploit a host in subnet 4, and then reach (7,0) by the same kind of operations. This significantly increases the cost of the attack, so our purpose is to find the optimal penetration path at the lowest cost.
Besides, there are three operating systems types: Windows, Linux and Unix; seven different exploiting services: SSH, FTP, HTTP, RPC, PHP, Samba, SMTP (Simple Mail Transfer Protocol), and three privilege promotion process: Tomcat, Daclsvc, and Schtask in the large-scale network scenario. We set the probabilities of `0.9' for SSH, HTTP; set `0.6' for FTP, RPC, PHP and SMTP; set `0.3' for Samba according to CVSS score respectively, and set the probability to `1' for privilege promotion and scanning. The penetration mechanism of this scene is similar to the small-scale network.
However, due to the increase in exploitable services and hosts, the input state in the large-scale scenario expands to 768 dimensions and the output action to 322 dimensions, making this a high-dimensional input-output setting. Because of this higher complexity, optimizing the penetration path with reinforcement learning suffers from an overly large state space and sparse rewards. Therefore, we also introduce expert sample knowledge into this scenario and combine the GAIL network when training the agent to find and optimize the penetration path.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.77\linewidth]{largescale.jpg}
\caption{Layout of the large-scale network. The sensitive file server and sensitive host are located at (2,0) and (7,0), respectively.}
\label{fig:largescale}
\end{figure*}
The reward value of successfully exploiting the sensitive host is 100 in the small scale network with or without the honeypot scenario illustrated in Fig.~\ref{fig:smallscale}, but the reward value of falling into the honeypot is set to -100.
Likewise, in the large-scale network shown in Fig.~\ref{fig:largescale}, we set the reward for successfully exploiting each of the two sensitive hosts to 100. Agents can only execute exploits between connected subnets or hosts in the network. Each round for the agent ends in one of the following two situations:
\begin{itemize}
\item Obtaining root permissions for all sensitive hosts.
\item Training steps in each round have reached the maximum value.
\end{itemize}
The agent tries to obtain the maximum cumulative reward through continuous trial and error, thereby learning the optimal penetration path for exploiting the target sensitive hosts. In the small-scale network with a honeypot, since the reward for falling into the honeypot is negative, the agent learns to bypass the honeypot during the PT process and finally learns the optimal penetration path that exploits the sensitive host in fewer steps.
We first use the Q-learning algorithm to perform automated PT on the small-scale, small-scale-with-honeypot and large-scale networks. Second, we introduce expert knowledge on top of the benchmark Q-learning algorithm and combine the GAIL network with Q-learning. Finally, we compare the penetration performance of Q-learning-GAIL and Q-learning in finding the optimal penetration path. We set the number of training rounds to 20,000 for each scenario, record the steps and cost needed to generate the penetration path in each round, and adopt the average values as the experimental results. We apply the two RL algorithms, Q-learning and Q-learning-GAIL, to automate PT in the three scenarios and evaluate their performance.
\subsubsection{Evaluation metrics}
In the training phase, we calculate the average values every 1,000 rounds and apply the following three indicators to analyze the answers to \emph{\textbf{RQ3}} and \emph{\textbf{RQ4}}:
\begin{itemize}
\item \textbf{Average reward.} In the training process, the average reward per round is calculated at fixed intervals of rounds; it is used to evaluate the overall performance of the algorithm;
\item \textbf{Average steps.} In the training process, the average round length is calculated at fixed intervals of rounds; it represents the time cost required to generate the penetration path and is also used to evaluate the overall performance of the algorithm;
\item \textbf{Probability of invading honeypot.} In the PT process, the average probability of invading the honeypot is calculated at fixed intervals of rounds to evaluate the effectiveness of the honeypot deployment.
\end{itemize}
\subsubsection{Research question 3}
\begin{center}
\fcolorbox{black}{gray!20}{\parbox{0.97\linewidth}
{
Does expanding the expert sample amount still improve the penetration performance in different network scenarios?
}
}
\end{center}
We answer this question by analyzing the experimental results with one indicator: training rounds.
When optimizing the penetration path in a network scenario, the optimal path is definite, so no state corresponds to multiple optimal penetration paths; once the agent finds the best penetration path, states and actions are in a one-to-one relationship. Moreover, there is no notion of the complexity of expert knowledge rules here, so we only consider the impact of the expert sample amount on penetration performance.
We evaluated the impact of different expert sample amounts on the improvement of penetration performance in three different network scenarios separately and used the training rounds required to find the optimal penetration path as the indicator. It can be seen from Table~\ref{tabel:virtualamonut} that the sample size of expert knowledge has a specific impact on the training effect of optimizing the penetration path.
Without introducing expert sample knowledge, the training rounds required to find the optimal penetration path in the small-scale network, small-scale with honeypot network scenario, and large-scale network scenario are 4000, 5000, and 3000, respectively. When the sample amount expands to 5000 pairs, the rounds for generating the optimal penetration path are 3000, 3500, and 2000, respectively, reduced by 1000, 1500, and 1000 rounds in turn.
Beyond this point, further expanding the expert knowledge sample size did not significantly accelerate generation of the optimal penetration path. Therefore, we introduced 5,000 pairs of expert knowledge samples in all three scenario experiments that follow.
\begin{table}[!htp]
\footnotesize
\centering
\caption{Training rounds for different expert sample amount to find the \\optimal penetration path in different network scenarios}
\label{tabel:virtualamonut}
\resizebox{0.70\linewidth}{!}{
\begin{tabular}{lllll}
\toprule \hline
\textbf{\diagbox{Network scene}{Expert sample amount}} & \textbf{0} &\textbf{2500} &\textbf{5000} &\textbf{7500} \\
\hline
Small-scale & 4000 & 3500 & \textbf{3000} & 3000 \\
Small-scale with honeypot & 5000 & {4000} & \textbf{3500} &3500 \\
Large-scale & 3000 & 3000 & \textbf{2000} & 2000 \\
\hline \bottomrule
\end{tabular}
}
\end{table}
\vspace{-0.3cm}
\begin{center}
\fcolorbox{black}{white!20}{\parbox{0.97\linewidth}
{
\emph{\textbf{Answer to RQ3}}:
Penetration performance improves as the expert samples are expanded up to 5,000 pairs, but does not improve significantly once the expert samples exceed this range.
}
}
\end{center}
\subsubsection{Research question 4}
\begin{center}
\fcolorbox{black}{gray!20}{\parbox{0.97\linewidth}
{
Can GAIL also achieve the SOTA penetration performance in different network scenarios?
}
}
\end{center}
We answer this question across the three network scenarios using three indicators, average reward, average steps and probability of invading the honeypot, through the following experimental results.
\textbf{Small-scale network}
As the comparison of average reward and average steps in Fig.~\ref{fig:smallandhoney} shows, introducing expert knowledge samples and combining the GAIL mechanism accelerates the convergence of the reward function and of the training steps, and ultimately improves penetration performance to a certain extent. In the small-scale network scenario without a honeypot, when the Q-learning-GAIL algorithm is used to optimize the penetration path, the average reward begins to converge at around 2,000 rounds and the average training steps show the same trend; both converge about 1,000 rounds faster than with Q-learning.
In the small-scale scenario with a honeypot, the average reward per round is higher than with Q-learning, and the fluctuation of the reward before convergence is much gentler. In terms of training steps, the average round steps of the Q-learning-GAIL algorithm converge at about 3,000 rounds, while Q-learning converges at about 5,000 rounds; the average reward shows the same trend, and both converge about 2,000 rounds faster than Q-learning in finding the optimal penetration path.
In short, introducing expert knowledge samples speeds up finding the optimal penetration path in the small-scale network. Because the complexity of the small network is relatively low compared to the large-scale one, introducing the GAIL mechanism does not reduce the average steps per round, which eventually stabilize at 9 steps for the optimal penetration path; on the other hand, training cost is still reduced, and in the honeypot scenario the reduction in training cost is twice that of the scenario without a honeypot.
\begin{figure}[t]
\centering
\subfigure[Comparison of average reward]{
\includegraphics[width=0.48\linewidth]{smallhoney1129.png}
}
\subfigure[Comparison of average steps]{
\includegraphics[width=0.48\linewidth]{smallhoneysteps1129.png}
}
\vspace{-0.4cm}
\caption{Comparison of average reward (left) and average steps (right) in small-scale network with or without honeypot scenarios by Q-learning and Q-learning-GAIL with 20000 rounds training.}
\vspace{-0.3cm}
\label{fig:smallandhoney}
\end{figure}
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.5\linewidth]{pro.png}
\caption{Probability comparison of invading the honeypot }
\label{fig:pro}
\end{figure*}
\textbf{Small-scale network with honeypot} In addition, this section comparatively analyzes the honeypot intrusion probability in the small-scale network with a honeypot. Intuitively, this indicator reflects the effectiveness of deploying honeypots, i.e., whether the agent still falls into the honeypot after being trained with expert knowledge and GAIL network guidance. The agent's learning goal is to exploit the sensitive hosts, not to invade the honeypot, and we assign a negative reward for invading the honeypot (3, 2). Therefore, the lower the probability of falling into the honeypot, the better the expert samples guide the agent's training, which also indicates that the agent's action distribution is very close to the expert samples.
Fig.~\ref{fig:pro} shows intuitively that the probability of the agent trained by Q-learning-GAIL falling into the honeypot is much lower than that of the benchmark Q-learning in the first 3,000 rounds. At the beginning of training, the probability of falling into the honeypot is 100\% with Q-learning, while with Q-learning-GAIL it drops directly to 0. Although over the next 3,000 rounds the probability under GAIL guidance fluctuates slightly, across the entire 20,000 rounds the probability of falling into the honeypot with Q-learning-GAIL is lower than with Q-learning in every round, which indicates that the GAIL mechanism improves penetration performance particularly markedly in network scenarios with a honeypot.
\textbf{Large-scale network} Compared with the small-scale network, it is not difficult to see that when Q-learning-GAIL is applied to determine the optimal penetration path, the improvement in PT performance is more evident in the large-scale network, as shown in Fig.~\ref{fig:largeresults}.
In the early stage of training, the agent is still exploring: the average reward and average steps of the benchmark Q-learning fluctuate heavily during the first 3,000 rounds. Under the guidance of expert knowledge, however, the Q-learning-GAIL algorithm is relatively stable even in the early stage of training for optimal penetration path-finding: the average reward is positive in the first 3,000 rounds, and the average number of exploration steps stays within 50. After 2,000 rounds, both the average reward and the average steps show a convergence trend. Moreover, the final average steps stabilize at 15, while those of Q-learning are 17-18, an improvement of 2-3 steps in the final rounds.
\begin{figure}[t]
\centering
\subfigure[Comparison of average reward]{
\includegraphics[width=0.48\linewidth]{largerewards1129.png}
}
\subfigure[Comparison of average steps]{
\includegraphics[width=0.48\linewidth]{largestep1129.png}
}
\vspace{-0.4cm}
\caption{Comparison of average reward (left) and average steps (right) in large-scale network scenarios}
\vspace{-0.3cm}
\label{fig:largeresults}
\end{figure}
In short, by introducing expert knowledge and combining the GAIL network with Q-learning to optimize the penetration path, Q-learning-GAIL shows more stable penetration performance and achieves SOTA penetration performance with less time and cost.
\vspace{-0.3cm}
\begin{center}
\fcolorbox{black}{white!20}{\parbox{0.97\linewidth}
{
\emph{\textbf{Answer to RQ4}}:
Whether in a small-scale network (with or without a honeypot) or in a large-scale network, the penetration performance of Q-learning-GAIL reaches the SOTA. The improvement is especially apparent in the small-scale network with a honeypot and in the large-scale network.
}
}
\end{center}
\section{Conclusion and future work\label{Conclusion and future work}}
This paper proposed a method to automate the penetration testing process based on GAIL and evaluated its effectiveness and performance in exploiting a single host and three different networks. The imitation learning agent is modeled as a penetration attacker in both actual and simulated scenarios. In the experiments, we compared our methods, A3C-GAIL, DPPO-GAIL and Q-learning-GAIL, with DeepExploit, DPPO and Q-learning in exploiting the single host and the networks, and the results testified to the performance of our methods. Beyond A3C, DPPO and Q-learning, GAIL can also be combined with other DRL/RL algorithms; indeed, our original intention is to propose a general penetration testing framework, GAIL-PT, that helps combine GAIL with different DRL/RL algorithms to automate the penetration testing process in different scenarios. To the best of our knowledge, this is the first study applying GAIL to penetration testing. We publish the code on GitHub so that interested scholars can conduct related research.
Regarding the limitations of the proposed method, the complexity of A3C-GAIL and DPPO-GAIL is relatively high. Moreover, although the network simulator NASim provides accurate vulnerability services and specific penetration operations in network scenarios, differences remain between NASim and real networks, and our method has not yet been validated in real complex networks. Future work can therefore proceed in two directions: reducing the algorithms' complexity and applying GAIL-PT to exploit real complex networks.
\section{Acknowledgment}
This research was supported by the National Natural Science Foundation of China (No. 62072406), the National Key Laboratory of Science and Technology on Information System Security (No. 61421110502), the National Key R\&D Projects of China (No. 2018AAA0100801),
the Key R\&D Projects in Zhejiang Province (No. 2021C01117), the 2020 Industrial Internet Innovation Development Project (No.TC200H01V), and ``Ten Thousand Talents Program" in Zhejiang Province (No. 2020R52011).
\section{Introduction}
Jod\'ar and Cort\'es \cite{jc00} introduced the concept of a fundamental set of solutions for matrix differential equations of the type
\begin{align}
X'' = f_1(z) X' + f_2(z) X f_3(z) + X' f_4(z),
\end{align}
where $f_i$ are matrix valued functions of a complex variable $z$. A closed-form general solution of such a bilateral-type matrix differential equation is determined in terms of the Gauss hypergeometric matrix function.
In this paper, we give the systems of partial matrix differential equations of bilateral type in the form
\begin{align}
U_{x_ix_i}&= \sum_{
\substack{m,\,l=1\\ m<l}
}^{n} \ f_{iml} (x_1, \dots, x_n)\, U_{x_m x_l} + \sum_{j=1}^{n} \ \left(U_{x_j}\, g_{ij} (x_1, \dots, x_n) + h_{ij}(x_1, \dots, x_n) \, U_{x_j}\right) \nonumber\\
& \quad + f(x_1, \dots, x_n) U g(x_1, \dots, x_n), \ 1\le i, j \le n,\label{e1.2}
\end{align}
where $U_{x_ix_i} = \frac{\partial^2 U}{\partial x_i^2}$, $U_{x_m x_l} = \frac{\partial^2U}{\partial x_m \partial x_l}$, $U_{x_j} = \frac{\partial U}{\partial x_j}$ and $f_{iml}$, $g_{ij}$, $h_{ij}$, $f$ and $g$ are matrix valued functions of the complex variables $x_1,\dots, x_n$.
We show that the Lauricella matrix functions of $n$-variables satisfy the systems of bilateral type matrix differential equation of the form \eqref{e1.2}. The region of convergence and integral representation for Lauricella matrix functions of $n$-variables are also determined. The particular case $n=3$ leading to fourteen Lauricella matrix functions of three variables has been discussed in detail.
The sectionwise treatment is as follows:
In Section~2, we give some basic definitions from special matrix function theory that are needed in the sequel. In Section~3, we find the convergence conditions
and the system of partial matrix differential equations of bilateral type satisfied by the generalized ($n$-variable) Lauricella matrix functions. The region of convergence and an integral representation satisfied by the generalized ($n$-variable) Lauricella matrix functions are also given there. In Section~4, we give the complete list of fourteen Lauricella matrix functions of three variables and the matrix analogues of the Srivastava triple hypergeometric functions. Their regions of convergence, integral representations
and the system of matrix differential equations of bilateral type satisfied by them are also given.
Our notations are standard. For details, see \cite{ds1,ds4,jjc98a,jjc98b, jc00}.
\section{Preliminaries}
Let the spectrum of a matrix $A$ in $\mathbb{C}^{r\times r}$, denoted by $\sigma(A)$, be the set of all eigenvalues of $A$ and let $\alpha(A) = \max \{\,\Re(z) \mid z \in \sigma(A)\,\}$ and $\beta(A) = \min \{\,\Re(z) \mid z \in \sigma(A)\,\}$. Then for a positive stable matrix $A \in \mathbb{C}^{r \times r}$, that is $\beta(A) > 0$, the gamma matrix function is defined as \cite{jjc98a}
\[ \Gamma(A) = \int_{0}^{\infty} e^{-t} \, t^{A-I}\, dt
\]
and the reciprocal gamma matrix function is defined as \cite{jjc98a}
\begin{equation}
\Gamma^{-1}(A)= A(A+I)\dots (A+(n-1)I)\Gamma^{-1}(A+nI) , \ n\geq 1.\label{eq.07}
\end{equation}
The Pochhammer symbol for $A\in \mathbb{C}^{r\times r}$ is given by \cite{jjc98b}
\begin{equation}
(A)_n = \begin{cases}
I, & \text{if $n = 0$,}\\
A(A+I) \dots (A+(n-1)I), & \text{if $n\geq 1$}.
\end{cases}\label{c1eq.09}
\end{equation}
This gives
\begin{equation}
(A)_n = \Gamma^{-1}(A) \ \Gamma (A+nI), \qquad n\geq 1.\label{c1eq.010}
\end{equation}
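As a minimal numerical illustration of the product definition \eqref{c1eq.09}, the following sketch computes $(A)_n$ for a square numpy array; the helper name is our own.
\begin{verbatim}
import numpy as np

def pochhammer(A, n):
    """Matrix Pochhammer symbol (A)_n = A (A+I) ... (A+(n-1)I)."""
    r = A.shape[0]
    P = np.eye(r)
    for k in range(n):
        P = P @ (A + k * np.eye(r))
    return P
\end{verbatim}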
If $A \in \mathbb{C}^{r\times r}$ is a positive stable matrix and $n\geq 1$ is an integer, then the gamma matrix function can also be defined in the form of a limit as \cite{jjc98a}
\begin{equation}
\Gamma (A) = \lim_{n \to \infty} (n-1)! \, (A)_n^{-1} \, n^A. \label{eq10}
\end{equation}
If $A$ and $B$ are positive stable matrices in $\mathbb{C}^{r \times r}$, then the beta matrix function is defined as \cite{jjc98a}
\begin{equation}
\mathfrak{B}(A,B) =\int_{0}^{1} t^{A-I} \, (1-t)^{B-I} dt.\label{c1eq11}
\end{equation}
Furthermore, if $AB = BA$, then the beta matrix function can be written in terms of gamma matrix function as \cite{jjc98a}
\begin{equation}
\mathfrak{B}(A,B) = \Gamma(A)\,\Gamma(B)\,\Gamma^{-1} (A+B).
\end{equation}
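For a quick numerical check of the integral definition \eqref{c1eq11}, one can approximate the beta matrix function with a midpoint rule, using $t^{A-I}=\exp\bigl((A-I)\log t\bigr)$. The step count below is an illustrative assumption, and the midpoint nodes avoid the endpoint singularities.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def beta_matrix(A, B, steps=20000):
    """Midpoint-rule sketch of B(A, B) for positive stable A, B."""
    r = A.shape[0]
    I = np.eye(r)
    acc = np.zeros((r, r))
    for k in range(steps):
        t = (k + 0.5) / steps
        acc += expm((A - I) * np.log(t)) @ expm((B - I) * np.log(1.0 - t))
    return acc / steps
\end{verbatim}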
Using the Schur decomposition of $A$, it follows that \cite{gl,vl}
\begin{equation}
\Vert e^{tA}\Vert \leq e^{t\alpha(A)} \sum_{k=0}^{r-1}\frac{(\Vert A\Vert r^{1/2} t)^k}{k!}, \ \ t\geq 0.\label{eq09}
\end{equation}
We shall use the notation $\Gamma \left(\begin{array}{c}
A_1, \dots, A_p \\
B_1, \dots, B_q
\end{array}\right)$ for $\Gamma (A_1) \cdots \Gamma (A_p) \, \Gamma ^{-1}(B_1) \cdots \Gamma ^{-1} (B_q)$.
\section{Generalized Lauricella matrix functions}
The four Appell matrix functions of two variables \cite{al,ds5} can be generalized to the following matrix functions of $n$-variables.
\begin{align}
&F_{\mathcal{A}}[A, B_1, \dots, B_n; C_1, \dots, C_n; x_1, \dots, x_n]\nonumber\\
& = \sum_{m_1, \dots, m_n = 0}^{\infty} (A)_{m_1 + \cdots + m_n} (B_1)_{m_1} \cdots (B_n)_{m_n} (C_1)_{m_1}^{-1} \cdots (C_n)_{m_n}^{-1} \, \frac{x_1^{m_1} \cdots x_n^{m_n}}{m_1! \cdots m_n!};\label{eq2.1}
\end{align}
\begin{align}
&F_{\mathcal{B}}[A_1, \dots, A_n, B_1, \dots, B_n; C; x_1, \dots, x_n]\nonumber\\
& = \sum_{m_1, \dots, m_n = 0}^{\infty} (A_1)_{m_1} \cdots (A_n)_{m_n} (B_1)_{m_1} \cdots (B_n)_{m_n} (C)^{-1}_{m_1 + \cdots + m_n} \, \frac{x_1^{m_1} \cdots x_n^{m_n}}{m_1! \cdots m_n!};\label{eq2.2}
\end{align}
\begin{align}
&F_{\mathcal{C}}[A, B; C_1, \dots, C_n; x_1, \dots, x_n]\nonumber\\
& = \sum_{m_1, \dots, m_n = 0}^{\infty} (A)_{m_1 + \cdots + m_n} (B)_{m_1 + \cdots + m_n} (C_1)_{m_1}^{-1} \cdots (C_n)_{m_n}^{-1} \, \frac{x_1^{m_1} \cdots x_n^{m_n}}{m_1! \cdots m_n!};\label{2.3}
\end{align}
\begin{align}
&F_{\mathcal{D}}[A, B_1, \dots, B_n; C; x_1, \dots, x_n]\nonumber\\
& = \sum_{m_1, \dots, m_n = 0}^{\infty} (A)_{m_1 + \cdots + m_n} (B_1)_{m_1} \cdots (B_n)_{m_n} (C)^{-1}_{m_1 + \cdots + m_n} \, \frac{x_1^{m_1} \cdots x_n^{m_n}}{m_1! \cdots m_n!},\label{2.4}
\end{align}
where $A$, $A_1$, $\dots$, $A_n$, $B$, $B_1$, $\dots$, $B_n$, $C$, $C_1$, $\dots$, $C_n$ are matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C_i + kI$, $1 \leq i \leq n$ are invertible for all integers $k \geq 0$ and $x_1, \dots, x_n$ are complex variables.
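As a truncated numerical sketch of the series \eqref{eq2.1}, the following computes $F_{\mathcal{A}}$ for small matrices, reusing the \texttt{pochhammer()} helper from Section~2; the truncation level is an illustrative assumption, and the matrix factors are multiplied in the order of the definition.
\begin{verbatim}
import numpy as np
from itertools import product
from math import factorial

def lauricella_FA(A, Bs, Cs, xs, terms=10):
    """Partial sum of F_A over multi-indices with each m_i < terms."""
    r = A.shape[0]
    S = np.zeros((r, r), dtype=complex)
    for m in product(range(terms), repeat=len(xs)):
        T = pochhammer(A, sum(m))
        for Bi, mi in zip(Bs, m):
            T = T @ pochhammer(Bi, mi)
        for Ci, mi in zip(Cs, m):
            T = T @ np.linalg.inv(pochhammer(Ci, mi))
        coef = 1.0
        for xi, mi in zip(xs, m):
            coef *= xi ** mi / factorial(mi)
        S = S + coef * T
    return S
\end{verbatim}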
We give the system of partial matrix differential equations of bilateral type satisfied by generalized Lauricella matrix functions under certain conditions. The following theorem gives the system of partial matrix differential equations of bilateral type satisfied by the matrix function $F_{\mathcal{A}}$ defined in \eqref{eq2.1}.
\begin{theorem}\label{3.2.1}
Let $A, B_i, C_i$ be matrices in $\mathbb{C}^{r \times r}$ such that $B_iB_j = B_jB_i$, $C_iB_j = B_jC_i$ and $C_iC_j = C_jC_i$, for each $i,j=1,\dots,n$. Then the matrix function $F_{\mathcal{A}}$ satisfies the following system of partial matrix differential equations of bilateral type
\begin{align}
& x_i(1-x_i) U_{x_ix_i} - x_ix_1U_{x_ix_1} - \cdots - x_ix_{i-1} U_{x_i x_{i-1}} - x_ix_{i+1} U_{x_i x_{i+1}} - \cdots - x_ix_{n} U_{x_i x_{n}} \nonumber\\
& \quad - x_i(A+I) U_{x_i} - ({x_{1} U_{ x_{1}} + \cdots + x_{n} U_{ x_{n}}})B_i + U_{x_i} C_i - AUB_i = 0, \ i = 1, \dots, n.
\end{align}
\end{theorem}
\begin{proof}
Let
\begin{align}
U &= F_{\mathcal{A}}(A, B_1, \dots, B_n; C_1, \dots, C_n; x_1, \dots, x_n) = \sum_{m_1, \dots, m_n = 0}^{\infty} \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}.
\end{align}
Then, we have
\begin{align}
&x_i(1-x_i) U_{x_ix_i} - x_ix_1U_{x_ix_1} - \cdots - x_ix_{i-1} U_{x_i x_{i-1}} - x_ix_{i+1} U_{x_i x_{i+1}} - \cdots - x_ix_{n} U_{x_i x_{n}}\nonumber\\
& = \sum_{m_1, \dots, m_n = 0}^{\infty} (A+(m_1 + \cdots + m_n)I) \mathcal{A}_{m_1, \dots, m_n} m_i (B_i + m_i I) (C_i + m_i I)^{-1}\nonumber\\
& \quad \times x_1^{m_1} \cdots x_n^{m_n} - \sum_{m_1, \dots, m_n = 0}^{\infty} m_i (m_i - 1) \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n} \nonumber\\
& \quad - \sum_{m_1, \dots, m_n = 0}^{\infty} (m_i m_1 + \cdots+ m_i m_{i-1} + m_i m_{i+1} + \cdots + m_i m_n)\mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}\nonumber\\
& = A \sum_{m_1, \dots, m_n = 0}^{\infty} \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n} B_i - \sum_{m_1, \dots, m_n = 0}^{\infty} (A+(m_1 + \cdots + m_n)I) \mathcal{A}_{m_1, \dots, m_n}\nonumber\\
& \quad \times x_1^{m_1} \cdots x_n^{m_n} C_i (B_i + m_i I) (C_i + m_i I)^{-1} + (A+ I) \sum_{m_1, \dots, m_n = 0}^{\infty} \mathcal{A}_{m_1, \dots, m_n} \nonumber\\
& \quad \times m_i x_1^{m_1} \cdots x_n^{m_n} + \sum_{m_1, \dots, m_n = 0}^{\infty} (m_i m_1 + \cdots + m_i m_n)\mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n} B_i\nonumber\\
& = x_i(A+I) U_{x_i} + ({x_{1} U_{ x_{1}} + \cdots + x_{n} U_{ x_{n}}}) B_i - U_{x_i} C_i + AUB_i.
\end{align}
This completes the proof.
\end{proof}
In the next three theorems, we give the systems of partial matrix differential equations of bilateral type satisfied by the remaining three generalized Lauricella matrix functions. The proofs are similar to that of Theorem~\ref{3.2.1} and are omitted.
\begin{theorem}
Let $A_i, B_i, C$ be matrices in $\mathbb{C}^{r \times r}$ such that $A_iA_j = A_jA_i$, $B_iB_j = B_jB_i$ and $CB_j = B_jC$, for each $i,j=1,\dots,n$. Then the matrix function $F_{\mathcal{B}}$ satisfies the following system of partial matrix differential equations of bilateral type
\begin{align}
& x_i(1-x_i) U_{x_ix_i} + x_1U_{x_ix_1} + \cdots + x_{i-1} U_{x_i x_{i-1}} + x_{i+1} U_{x_i x_{i+1}} + \cdots + x_{n} U_{x_i x_{n}}\nonumber\\
& \quad - x_i(A_i+I) U_{x_i} - x_{i} U_{ x_{i}} B_i + U_{x_i} C - A_iUB_i = 0, \ i = 1, \dots, n.
\end{align}
\end{theorem}
\begin{theorem}
Let $A, B, C_i$ be matrices in $\mathbb{C}^{r \times r}$ such that $C_iC_j = C_jC_i$ and $C_jB = BC_j$, for each $i,j=1,\dots,n$. Then the matrix function $F_{\mathcal{C}}$ satisfies the following system of partial matrix differential equations of bilateral type
\begin{align}
&x_i(1-x_i) U_{x_ix_i} - x_1^2 U_{x_1x_1} - \cdots - x_{i-1}^2 U_{x_{i-1}x_{i-1}} - x_{i+1}^2 U_{x_{i+1}x_{i+1}} - \cdots - x_{n}^2 U_{x_{n}x_{n}} - 2x_1x_2 \nonumber\\
& \quad \times U_{x_1x_2} - \cdots - 2x_rx_sU_{x_rx_s} - \cdots - 2x_{n-1}x_nU_{x_{n-1}x_n} - (A+I) (x_1U_{x_1} + \cdots + x_nU_{x_n}) \nonumber\\
&\quad + U_{x_i} C_i - (x_1U_{x_1} + \cdots + x_nU_{x_n}) B - AUB = 0, \ (r\ne s, \ r,s,i = 1, \dots, n).
\end{align}
\end{theorem}
\begin{theorem}
Let $A, B_i, C$ be matrices in $\mathbb{C}^{r \times r}$ such that $B_iB_j = B_jB_i$ and $CB_j = B_jC$, for each $i,j=1,\dots,n$. Then the matrix function $F_{\mathcal{D}}$ satisfies the following system of partial matrix differential equations of bilateral type
\begin{align}
&x_i(1-x_i) U_{x_ix_i} + x_1(1-x_i) U_{x_ix_1} + \cdots + x_{i-1}(1-x_i) U_{x_ix_{i-1}} + x_{i+1}(1-x_i) U_{x_ix_{i+1}} \nonumber\\
& \quad + \cdots + x_{n}(1-x_i) U_{x_ix_{n}} - (A+I) x_iU_{x_i} + U_{x_i} C - (x_1U_{x_1} + \cdots + x_{n} U_{x_{n}}) B_i\nonumber\\
&\quad - AUB_i = 0, \ i = 1, \dots, n.
\end{align}
\end{theorem}
\subsection{Region of convergence}
We give here the regions of convergence of the four generalized Lauricella matrix functions.
We give below the proof for the matrix function $F_{\mathcal{A}}$ and present the remaining results without proof.
\begin{theorem}\label{t3.5}
Let $A,B_1,\dots,B_n,C_1,\dots,C_n$ be positive stable matrices in $\mathbb{C}^{r \times r}$ such that $\alpha(A) < 1, \alpha(B_1) < \beta(C_1), \dots, \alpha(B_n) < \beta(C_n)$. Then the series $F_{\mathcal{A}}$ defined in \eqref{eq2.1} converges absolutely for $\vert x_1\vert + \cdots + \vert x_n\vert < 1$.
\end{theorem}
\begin{proof}
Let $\mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}$ denote the general term of the series $F_{\mathcal{A}}$. Then
using \eqref{eq10},
we get
\begin{align}
\Vert \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}\Vert & \le N \left\Vert (m_1 + \cdots + m_n)^A\right\Vert \prod_{i=1}^{n} \left(\left\Vert (m_i)^{B_i}\right\Vert \left\Vert (m_i)^{-C_i}\right\Vert\right)\nonumber\\
& \quad \times \frac{(m_1 + \cdots + m_n - 1)!}{m_1! \cdots m_n!} \, \vert x_1\vert^{m_1} \cdots \vert x_n\vert^{m_n}, \label{2.7}
\end{align}
where $ N = \Vert \Gamma^{-1}(A) \Vert \Vert \Gamma^{-1}(B_1)\Vert \cdots \Vert \Gamma^{-1}(B_n)\Vert \Vert \Gamma(C_1) \Vert \cdots \Vert \Gamma(C_n) \Vert$. The Schur decomposition \eqref{eq09}
yields
\begin{align}
&\left\Vert (m_1 + \cdots + m_n)^A\right\Vert \prod_{i=1}^{n} \left(\left\Vert (m_i)^{B_i}\right\Vert \left\Vert (m_i)^{-C_i}\right\Vert\right) \le S \ (m_1 + \cdots + m_n)^{\alpha(A)} \prod_{i=1}^{n} (m_i)^{\alpha(B_i) - \beta(C_i)},\label{2.8}
\end{align}
where
\begin{align}
S & = \sum_{j= 0}^{r-1} \frac{1}{j!} \left(\Vert A\Vert r^{1/2} \ln (m_1 + \cdots + m_n)\right)^j \prod_{i=1}^{n} \left(\sum_{j=0}^{r-1} \frac{1}{j!} \left(\max\{\Vert B_i\Vert, \Vert C_i\Vert\} r^{1/2} \ln m_i\right)^j \right)^2.
\end{align}
Using Equation \eqref{2.8} in \eqref{2.7}, we get
\begin{align}
&\Vert \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}\Vert\nonumber\\
& \le N \ S \ (m_1 + \cdots + m_n)^{\alpha(A) - 1} \prod_{i=1}^{n} (m_i)^{\alpha(B_i) - \beta(C_i)} (\vert x_1\vert + \cdots + \vert x_n\vert)^{m_1 + \cdots + m_n}.
\end{align}
Hence, for $\alpha(A) < 1$, $\alpha(B_1) < \beta(C_1), \dots, \alpha(B_n) < \beta(C_n)$ and $\vert x_1\vert + \cdots + \vert x_n\vert < 1$, the right-hand side is the general term of a convergent multiple series, so that $\Vert \mathcal{A}_{m_1, \dots, m_n} x_1^{m_1} \cdots x_n^{m_n}\Vert \rightarrow 0$ as $m_1, \dots, m_n \rightarrow \infty$ and, by comparison, the series $F_{\mathcal{A}}$ converges absolutely.
\end{proof}
We remark that the region of convergence of the matrix function $F_{\mathcal{A}}$ obtained above is the
same as in \cite{ds4}, where all the matrices considered were commuting.
The next three theorems give the regions of convergence for the generalized Lauricella matrix functions $F_{\mathcal{B}},\ F_{\mathcal{C}}$ and $F_{\mathcal{D}}$. The proofs are similar to that of Theorem~\ref{t3.5} and are omitted.
\begin{theorem}
Let $A_1$, $\dots$, $A_n$, $B_1$, $\dots$, $B_n$ and $C$ be positive stable matrices in $\mathbb{C}^{r\times r}$ such that $\alpha(A_1) + \alpha(B_1) < 2, \, \dots, \, \alpha(A_n) + \alpha(B_n) < 2, \ \beta(C) > 1$. Then the series $F_{\mathcal{B}}$ defined in \eqref{eq2.2} converges absolutely for $\vert x_1\vert, \dots, \vert x_n\vert < 1$.
\end{theorem}
\begin{theorem}
Let $A$, $B$, $C_1, \dots, C_n$ be positive stable matrices in $\mathbb{C}^{r\times r}$ such that $\alpha(A) + \alpha(B) < 2, \, \beta(C_1) > 1, \dots, \beta(C_n) > 1$.
Then the series $F_{\mathcal{C}}$ defined in \eqref{2.3} converges absolutely for $\sqrt{\vert x_1\vert} + \cdots + \sqrt{\vert x_n\vert} < 1$.
\end{theorem}
\begin{theorem}
Let $A$, $B_1, \dots, B_n, C \in \mathbb{C}^{r\times r}$ be positive stable matrices such that $\alpha(A) < \beta(C)$, $\alpha(B_1) < 1, \, \dots, \, \alpha(B_n) < 1$.
Then the series $F_{\mathcal{D}}$ defined in \eqref{2.4} converges absolutely for $\vert x_1\vert$, $\dots, \vert x_n\vert < 1$.
\end{theorem}
\subsection{Integral representations}
\begin{theorem}
Let $A, B_i, C_i$ be matrices in $\mathbb{C}^{r \times r}$ such that $B_iB_j = B_jB_i$, $C_iB_j = B_jC_i$, $C_iC_j = C_jC_i$ and $B_i$, $C_i$, $C_i-B_i$ are positive stable for each $i,j=1,\dots,n$. Then for $\vert x_1\vert \leq r_1, \dots, \vert x_n\vert \leq r_n$, $r_1+\cdots + r_n < 1$, the series $F_{\mathcal{A}}$ defined in \eqref{eq2.1} can be put in the integral form as
\begin{align}
&F_{\mathcal{A}}[A, B_1, \dots, B_n; C_1, \dots, C_n; x_1, \dots, x_n]\nonumber\\
& = \underbrace{\int_{0}^{1}\cdots \int_{0}^{1}}_{\mbox{$n$-fold}} (1-(u_1x_1+\cdots+u_nx_n))^{-A} \ \prod_{i=1}^{n} u_i^{B_i-I} (1-u_i)^{C_i-B_i-I} \nonumber\\
& \quad \times du_1\cdots du_n \ \Gamma \left(\begin{array}{c}
C_1, \dots, C_n\\
B_1, \dots, B_n, C_1-B_1, \dots, C_n-B_n
\end{array}\right).
\end{align}
\end{theorem}
\begin{theorem}\label{t2}
Let $A_i, B_i, C$ be matrices in $\mathbb{C}^{r \times r}$ such that $B_iB_j = B_jB_i$, $CB_j = B_jC$ for each $i,j=1,\dots,n$ and $B_1, \dots, B_n$, $C$ and $C-(B_1+\dots+B_n)$ are positive stable. Then for $\vert x_1\vert, \dots, \vert x_n\vert <1$, the series $F_{\mathcal{B}}$ defined in \eqref{eq2.2}, can be put in the integral form as
\begin{align}
&F_{\mathcal{B}}[A_1, \dots, A_n, B_1, \dots, B_n; C; x_1, \dots, x_n]\nonumber\\
& = \underbrace{\idotsint}_{\mbox{$n$-fold}} (1-u_1x_1)^{-A_1} \cdots (1-u_nx_n)^{-A_n} \, u_1^{B_1-I} \cdots u_n^{B_n-I} (1-(u_1+\cdots+u_n))^{C-(B_1+\cdots+B_n)-I}\nonumber\\
&\quad \times du_1\cdots du_n \ \Gamma \left(\begin{array}{c}
C\\
B_1, \dots, B_n, C-(B_1+\cdots+B_n)
\end{array}\right), \ u_1\geq 0, \dots, u_n\geq 0, \ u_1 + \cdots + u_n \leq 1.
\end{align}
\end{theorem}
\begin{theorem}
Let $C$ be a matrix in $\mathbb{C}^{r\times r}$ such that $CB_i = B_iC$, $AC = CA$, for each $i$ and let $A$, $C$ and $C-A$ be positive stable. Then for $\vert x_1\vert, \dots, \vert x_n\vert < 1$, the series $F_{\mathcal{D}}$ defined in \eqref{2.4} can be put in the integral form as
\begin{align}
&F_{\mathcal{D}}[A, B_1, \dots, B_n; C; x_1, \dots, x_n]\nonumber\\
& = \Gamma \left(\begin{array}{c}
C\\
A, C-A
\end{array}\right)\int_{0}^{1} u^{A-I} (1-u)^{C-A-I} (1-ux_1)^{-B_1} \cdots (1-ux_n)^{-B_n} du.
\end{align}
\end{theorem}
\begin{theorem}\label{t3}
Let $A, B_i, C$ be matrices in $\mathbb{C}^{r \times r}$ such that $B_iB_j = B_jB_i$ and $CB_j = B_jC$, for each $i,j=1,\dots,n$ and let $B_1, \dots, B_n$, $C$, $C-(B_1+\cdots+B_n)$ be positive stable. Then for $\vert x_1\vert \leq r_1, \dots, \vert x_n\vert \leq r_n$, $r_1+\cdots + r_n < 1$, the series $F_{\mathcal{D}}$ defined in \eqref{2.4} can be put in the integral form as
\begin{align}
&F_{\mathcal{D}}[A, B_1, \dots, B_n; C; x_1, \dots, x_n]\nonumber\\
& = \underbrace{\int \cdots \int}_{\mbox{$n$-fold}} (1-(u_1x_1+\cdots+u_nx_n))^{-A} \, u_1^{B_1-I} \cdots u_n^{B_n-I} (1-(u_1+\cdots+u_n))^{C-(B_1+\cdots+B_n)-I} \nonumber\\
&\quad \times du_1\cdots du_n \ \Gamma \left(\begin{array}{c}
C\\
B_1, \dots, B_n, C-(B_1+\cdots+B_n)
\end{array}\right), \ u_1\geq 0, \dots, u_n\geq 0, \ u_1 + \cdots + u_n \leq 1.
\end{align}
\end{theorem}
The proofs of these results are similar to the corresponding proofs of the integral representations of the Appell matrix functions $F_2$, $F_3$ and $F_1$, respectively \cite{ds1}. As such, the proofs are omitted. We remark that the following lemma is used in the proofs of Theorems~\ref{t2} and \ref{t3}.
\begin{lemma}\cite{ds4}\label{t1.1}
Let $A_1, \dots, A_n$, $C$ be commuting matrices in $\mathbb {C}^{r \times r}$ such that $A_1, \dots, A_n$, $C, A_1 + \cdots + A_n + C$ are positive stable and $A_1 + \cdots + A_n + C + kI$ is invertible for all integers $k\geq 0$. Then
\begin{align}
\underbrace{\idotsint}_{\mbox{$n$-fold}}\, & u_1^{A_1-I} \cdots \ u_n^{A_n-I} \ (1-u_1 - \cdots - u_n)^{C-I} \, du_1 \cdots du_n = \Gamma \left(\begin{array}{c}
A_1, \dots, A_n, C\\
A_1 + \cdots + A_n + C
\end{array}\right)\nonumber\\
&u_1\geq 0, \dots, u_n\geq 0, u_1 + \cdots + u_n \leq 1.
\end{align}
\end{lemma}
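As a quick plausibility check of Lemma~\ref{t1.1} (our own numerical illustration, not used in the proofs), one can estimate the integral by Monte Carlo in the scalar case $r=1$, $n=2$, where $u^{A-I}$ reduces to an ordinary power:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
a1, a2, c = 1.3, 0.8, 1.1            # scalar (1 x 1) A_1, A_2, C
uvw = rng.dirichlet((1.0, 1.0, 1.0), size=1_000_000)
u, v, w = uvw[:, 0], uvw[:, 1], uvw[:, 2]   # uniform on the 2-simplex
# integral = (area of simplex) * (mean of integrand) = 0.5 * mean
est = 0.5 * np.mean(u**(a1 - 1) * v**(a2 - 1) * w**(c - 1))
ref = gamma(a1) * gamma(a2) * gamma(c) / gamma(a1 + a2 + c)
print(est, ref)                      # the two values agree to a few digits
\end{verbatim}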
We have not given an integral representation for the generalized Lauricella matrix function $F_{\mathcal{C}}$ because the matrix function $F_{\mathcal{C}}$ does not admit a simple integral form in the arguments $x_1, \dots, x_n$.
\section{Lauricella matrix functions}
There are fourteen Lauricella matrix functions of three variables denoted by $F_1, \dots, F_{14}$. Out of these, $F_1$, $F_2$, $F_5$ and $F_9$ are particular cases of generalized Lauricella matrix functions $F_{\mathcal{A}}$, $F_{\mathcal{B}}$, $F_{\mathcal{C}}$ and $F_{\mathcal{D}}$ respectively for $n = 3$.
We give below the definitions of the remaining ten Lauricella matrix functions, \emph{viz.}, $F_3$, $F_4$, $F_6$, $F_7$, $F_8$, $F_{10}, F_{11}, F_{12}$, $F_{13}, F_{14}$ and discuss their regions of convergence.
Let $A_i$, $B_i$ and $C_i$, $1\le i\le 3$, be matrices in $\mathbb{C}^{r \times r}$ such that each $C_i+kI$ is invertible for all integers $k \geq 0$. Then the Lauricella matrix functions are defined by
\begin{align}
&F_{3}(A_1, A_2, A_2, B_1, B_2, B_1; C_1, C_2, C_3; x, y, z) \nonumber\\
& =\sum_{m, n, p =0}^{\infty} (A_1)_{m} \, (A_2)_{n+p} \, (B_1)_{m+p} \, (B_2)_n \, (C_1)_m^{-1} \, (C_2)^{-1}_n \, (C_3)_p^{-1} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.2}
\end{align}
\begin{align}
&F_{4}(A_1, A_1, A_1, B_1, B_2, B_2; C_1, C_2, C_3; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m+n+p} \, (B_1)_{m} \, (B_2)_{n+p} \, (C_1)_m^{-1} \, (C_2)^{-1}_n \, (C_3)_p^{-1} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.3}
\end{align}
\begin{align}
&F_{6}(A_1, A_2, A_3, B_1, B_2, B_1; C_1, C_2, C_2; x, y, z) \nonumber\\
& =\sum_{m, n, p =0}^{\infty} (A_1)_{m} \, (A_2)_{n} \, (A_3)_{p} \, (B_1)_{m+p} \, (B_2)_n \, (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.4}
\end{align}
\begin{align}
&F_{7}(A_1, A_2, A_2, B_1, B_2, B_3; C_1, C_1, C_1; x, y, z) \nonumber\\
& =\sum_{m, n, p =0}^{\infty} (A_1)_{m} \, (A_2)_{n+p} \, (B_1)_{m} \, (B_2)_n \, (B_3)_{p} \, (C_1)_{m+n+p}^{-1} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.5}
\end{align}
\begin{align}
&F_{8}(A_1, A_1, A_1, B_1, B_2, B_3; C_1, C_2, C_2; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m+n+p} \, (B_1)_{m} \, (B_2)_n \, (B_3)_p \, (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.6}
\end{align}
\begin{align}
&F_{10}(A_1, A_2, A_1, B_1, B_2, B_1; C_1, C_2, C_2; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m+p} \, (A_2)_n \, (B_1)_{m+p} \, (B_2)_n (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.7}
\end{align}
\begin{align}
&F_{11}(A_1, A_2, A_2, B_1, B_2, B_1; C_1, C_2, C_2; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m} \, (A_2)_{n+p} \, (B_1)_{m+p} \, (B_2)_n (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.8}
\end{align}
\begin{align}
&F_{12}(A_1, A_2, A_1, B_1, B_1, B_2; C_1, C_2, C_2; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m+p} \, (A_2)_{n} \, (B_1)_{m+n} \, (B_2)_p \, (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.9}
\end{align}
\begin{align}
&F_{13}(A_1, A_2, A_2, B_1, B_2, B_1; C_1, C_1, C_1; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m} \, (A_2)_{n+p} \, (B_1)_{m+p} \, (B_2)_n \, (C_1)^{-1}_{m+n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!};\label{eq3.10}
\end{align}
\begin{align}
&F_{14}(A_1, A_1, A_1, B_1, B_2, B_1; C_1, C_2, C_2; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A_1)_{m+n+p} \, (B_1)_{m+p} \, (B_2)_n \, (C_1)_m^{-1} \, (C_2)^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!}.\label{eq3.11}
\end{align}
The matrix analogues of the three triple hypergeometric functions of Srivastava \cite{hm84} are given by
\begin{align}
&H_{\mathcal{A}}(A, B, B'; C, C'; x, y, z) = \sum_{m, n, p =0}^{\infty} (A)_{m+p} \, (B)_{m+n} \, (B')_{n+p} \, (C)_m^{-1} \, (C')^{-1}_{n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!},\label{eq3.12}
\\[5pt]
&H_{\mathcal{B}}(A, B, B'; C, C', C''; x, y, z) \nonumber\\
& = \sum_{m, n, p =0}^{\infty} (A)_{m+p} \, (B)_{m+n} \, (B')_{n+p} \, (C)_m^{-1} \, (C')^{-1}_{n} \, (C'')_{p}^{-1} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!},\label{eq3.13}
\\[5pt]
&H_{\mathcal{C}}(A, B, B'; C; x, y, z) = \sum_{m, n, p =0}^{\infty} (A)_{m+p} \, (B)_{m+n} \, (B')_{n+p} \, (C)^{-1}_{m+n+p} \, \frac{x^m \, y^n \, z^p}{m! \, n! \, p!},\label{eq3.14}
\end{align}
where $A$, $B$, $B'$, $C$, $C'$ and $C''$ are matrices in $\mathbb{C}^{r\times r}$ such that $C+kI$, $C'+kI$ and $C''+kI$ are invertible for all integers $k\geq 0$.
We remark that the regions of convergence of Lauricella matrix functions and Srivastava's triple hypergeometric matrix functions defined above remain the same as in \cite{ds4}, with proofs modified as illustrated in Theorem~\ref{t3.5}. As such, these results are omitted.
The following theorem gives the region of convergence of Lauricella matrix function $F_3$.
\begin{theorem}
Let $A_1$, $A_2$, $B_1$, $B_2$, $C_1$, $C_2$ and $C_3$ be positive stable matrices in $\mathbb{C}^{r \times r}$ such that $\alpha(A_1) < \beta(C_1), \ \alpha(A_2) < 1, \ \alpha(B_1) < 1, \ \alpha(B_2) < \beta(C_2), \ \beta(C_3) > 1$. Then the series $F_3$ defined in \eqref{eq3.2} converges absolutely for $\vert x\vert < r, \ \vert y \vert < s, \ \vert z\vert < t, \ (1-r)(1-s) = t$.
\end{theorem}
\begin{proof}
Let $\mathcal{A}_{m,n,p} x^{m} y^n z^p$ denote the general term of the series $F_{3}$. Then
\begin{align}
&\Vert \mathcal{A}_{m,n,p} x^{m} y^n z^p\Vert \nonumber\\
& \le \left\Vert (A_1)_{m} \right\Vert \left\Vert (A_2)_{n+p} \right\Vert \Vert (B_1)_{m+p} \Vert \left\Vert (B_2)_{n} \right\Vert \Vert (C_1)_{m}^{-1} \Vert \Vert(C_2)_{n}^{-1}\Vert \Vert(C_3)_{p}^{-1}\Vert \left \vert \frac{x^{m} y^n z^p}{m! n! p!}\right\vert\nonumber\\
& \le \left\Vert \frac{(A_1)_{m} \,m^{A_1}\, m^{-A_1} \,(m-1)!}{(m-1)!} \right\Vert \left\Vert\frac{(A_2)_{n+p} \,(n+p)^{A_2} \,(n+p)^{-A_2} \,(n+p-1)!}{(n+p-1)!} \right\Vert\nonumber\\
&\quad \times \left\Vert\frac{(B_1)_{m+p} \,(m+p)^{B_1} \,(m+p)^{-B_1} \,(m+p-1)!}{(m+p-1)!} \right\Vert \left\Vert \frac{(B_2)_{n} \,n^{B_2}\, n^{-B_2}\, (n-1)!}{(n-1)!} \right\Vert\nonumber\\
&\quad \times \left\Vert \frac{(C_1)^{-1}_{m} \,m^{C_1}\, m^{-C_1}\, (m-1)!}{(m-1)!} \right\Vert \left\Vert \frac{(C_2)^{-1}_{n} \,n^{C_2}\, n^{-C_2}\, (n-1)!}{(n-1)!} \right\Vert \left\Vert \frac{(C_3)^{-1}_{p} \, p^{C_3}\, p^{-C_3}\, (p-1)!}{(p-1)!} \right\Vert \nonumber\\
& \quad \times \frac{\vert x\vert^{m} \vert y\vert^n \vert z\vert^p}{m! n! p!}.\label{4.16}
\end{align}
Using $\Gamma (A) = \lim_{n \to \infty} (n-1)! \, (A)_n^{-1} \, n^A$, \cite{jjc98a}, we get
\begin{align}
\Vert \mathcal{A}_{m,n,p} x^{m} y^n z^p\Vert & \le N \Vert m^{A_1}\Vert \Vert (n+p)^{A_2}\Vert \Vert (m+p)^{B_1} \Vert \Vert n^{B_2}\Vert \Vert m^{-C_1}\Vert \Vert n^{-C_2}\Vert \Vert p^{-C_3}\Vert\nonumber\\
& \quad \times \frac{(n+p-1)! \, (m+p-1)!}{m! \, n! \, p! \, (p-1)!} \, \vert x\vert^m \, \vert y\vert^n \, \vert z\vert^p,\label{4.17}
\end{align}
where $ N = \Vert \Gamma^{-1}(A_1) \Vert \Vert \Gamma^{-1}(A_2) \Vert\Vert \Gamma^{-1}(B_1)\Vert \Vert \Gamma^{-1}(B_2)\Vert \Vert \Gamma(C_1) \Vert \Vert \Gamma(C_2) \Vert \Vert \Gamma(C_3) \Vert$. The Schur decomposition, \cite{gl,vl}, yields
\begin{align}
&\Vert m^{A_1}\Vert \Vert (n+p)^{A_2}\Vert \Vert (m+p)^{B_1} \Vert \Vert n^{B_2}\Vert \Vert m^{-C_1}\Vert \Vert n^{-C_2}\Vert \Vert p^{-C_3}\Vert\nonumber\\
& \le S \,m^{\alpha(A_1) - \beta(C_1)} \,n^{\alpha(B_2) - \beta(C_2)} \,p^{-\beta(C_3)}\, (m+p)^{\alpha(B_1)} \,(n+p)^{\alpha(A_2)},\label{4.18}
\end{align}
where
\begin{align}
S & = \left(\sum_{j= 0}^{r-1} \frac{1}{j!} \left(\max\{\Vert A_1\Vert, \Vert C_1\Vert\} r^{1/2} \ln m\right)^j\right)^2 \left(\sum_{j=0}^{r-1} \frac{1}{j!} \left(\max\{\Vert B_2\Vert, \Vert C_2\Vert\} r^{1/2} \ln n\right)^j \right)^2 \nonumber\\
& \quad \times \sum_{j= 0}^{r-1} \frac{1}{j!} \left(\Vert C_3\Vert r^{1/2} \ln p\right)^j \sum_{j= 0}^{r-1} \frac{1}{j!} \left(\Vert A_2\Vert r^{1/2} \ln (n+p)\right)^j \sum_{j= 0}^{r-1} \frac{1}{j!} \left(\Vert B_1\Vert r^{1/2} \ln (m+p)\right)^j.
\end{align}
Equations \eqref{4.17} and \eqref{4.18} give
\begin{align}
\Vert \mathcal{A}_{m,n,p} x^{m} y^n z^p\Vert & \le N \ S \ m^{\alpha(A_1) - \beta(C_1)} \,n^{\alpha(B_2) - \beta(C_2)} \,p^{1-\beta(C_3)}\, (m+p)^{\alpha(B_1) -1} \nonumber\\
& \quad \times (n+p)^{\alpha(A_2) - 1} \, \frac{(n+p)! \, (m+p)!}{m! \, n! \, p! \, p!} \, \vert x\vert^m \, \vert y\vert^n \, \vert z\vert^p.
\end{align}
Hence, for $\alpha(A_1) < \beta(C_1), \ \alpha(A_2) < 1, \ \alpha(B_1) < 1, \ \alpha(B_2) < \beta(C_2), \ \beta(C_3) > 1$ and $\vert x\vert < r, \ \vert y \vert < s, \ \vert z\vert < t, \ (1-r)(1-s) = t$, the right-hand side is the general term of a convergent series, so that $\Vert \mathcal{A}_{m,n,p} x^m y^n z^p\Vert \rightarrow 0$ as $ m, n, p \rightarrow \infty$ and, by comparison, the series $F_3$ converges absolutely.
\end{proof}
We now find the system of partial matrix differential equations of bilateral type obeyed by the Lauricella matrix function $F_3$.
\begin{theorem}\label{th5.2}
Let $A_1$, $A_2$, $B_1$, $B_2$, $C_1$, $C_2$, $C_3$ be matrices in $\mathbb{C}^{r \times r}$ such that $A_1A_2 = A_2A_1$, $B_1B_2 = B_2B_1$, $B_iC_j = C_jB_i$ and $C_iC_j = C_jC_i$, for each $i,j=1,2,3$. Then the system of partial matrix differential equations of bilateral type satisfied by the matrix function $F_3$ defined in \eqref{eq3.2} is
\begin{align}
&x(1-x)U_{xx} - xzU_{xz} + U_x(C_1 - (B_1+I)x) - A_1(xU_x + zU_z) - A_1UB_1 = 0,\label{4.21}
\\[5pt]
&y(1-y)U_{yy} - yzU_{yz} + U_y(C_2 - (B_2+I)y) - zU_zB_2 - yA_2U_y - A_2UB_2 = 0,\label{4.22}
\\[5pt]
&z(1-z)U_{zz} - (xyU_{xy} + yzU_{yz} + xzU_{xz}) + U_z(C_3-(B_1+I)z) -yU_yB_1\nonumber
\\[5pt]
&- A_2(xU_x + zU_z) - A_2UB_1 = 0.\label{4.23}
\end{align}
\end{theorem}
\begin{proof}
Let $U = F_{3}(A_1, A_2, A_2, B_1, B_2, B_1; C_1, C_2, C_3; x, y, z) = \sum_{m,n, p=0}^{\infty} U_{m,n, p} \, x^m y^n z^p$. Since $U(x, y, z)$ converges absolutely for $\alpha(A_1) < \beta(C_1), \ \alpha(A_2) < 1, \ \alpha(B_1) < 1, \ \alpha(B_2) < \beta(C_2), \ \beta(C_3) > 1$ and $\vert x\vert < r, \ \vert y \vert < s, \ \vert z\vert < t, \ (1-r)(1-s) = t$, so it is termwise differentiable in this domain and
\begin{align}
&U_x = \sum_{m=1,n, p=0}^{\infty} m\, U_{m,n,p} \, x^{m-1} y^n z^p,\ U_y = \sum_{m=0,n=1, p=0}^{\infty} n\, U_{m,n,p} \, x^m y^{n-1} z^p, \nonumber\\
& U_z = \sum_{m=0,n=0, p=1}^{\infty} p\, U_{m,n,p} \, x^m y^{n} z^{p-1}, \ U_{xx} = \sum_{m=2,n,p=0}^{\infty}m\, (m-1)\, U_{m,n,p} \, x^{m-2} y^n z^p,\nonumber\\
& U_{xy} = \sum_{m=1,n = 1,p=0}^{\infty} m\,n\, U_{m,n,p} \, x^{m-1} y^{n-1} z^{p}, \ U_{xz} = \sum_{m=1,n = 0,p=1}^{\infty} m\,p\, U_{m,n,p} \, x^{m-1} y^{n} z^{p-1},\nonumber\\
& U_{yy} = \sum_{m=0,n=2,p=0}^{\infty}n\, (n-1)\, U_{m,n,p} \, x^{m} y^{n-2} z^p, \ U_{yz} = \sum_{m=0,n=1,p=1}^{\infty}n\, p\, U_{m,n,p} \, x^{m} y^{n-1} z^{p-1},\nonumber\\
& U_{zz} = \sum_{m=0,n=0,p=2}^{\infty}p\, (p-1)\, U_{m,n,p} \, x^{m} y^{n} z^{p-2}.\label{e3.17}
\end{align}
This gives
\begin{align}
&x(1-x)U_{xx} - xzU_{xz}\nonumber\\
& = \sum_{m,n, p=0}^{\infty} m (m+1) U_{m+1, n, p}\, x^m y^n z^p - \sum_{m,n,p=0}^{\infty} m(m-1) U_{m,n,p}\, x^m y^n z^p\nonumber\\
& \quad - \sum_{m,n,p=0}^{\infty} m p \, U_{m,n,p}\, x^m y^n z^p\nonumber\\
& = \sum_{m,n, p=0}^{\infty} (A_1+mI) U_{m,n,p} (B_1+(m+p)I) x^m y^n z^p - \sum_{m,n,p=0}^{\infty} m(m-1) U_{m,n,p}\, x^m y^n z^p \nonumber\\
& \quad - \sum_{m,n,p=0}^{\infty} (A_1+mI) U_{m,n,p} (B_1+(m+p)I) (C_1+mI)^{-1} \, C_1 x^m y^n z^p\nonumber\\
&\quad - \sum_{m,n,p=0}^{\infty} m p \,U_{m,n,p}\, x^m y^n z^p\nonumber\\
& = \sum_{m,n,p=0}^{\infty} (m+p) A_1 U_{m,n,p} \, x^m y^n z^p + \sum_{m,n,p=0}^{\infty} m \, U_{m,n,p}\, x^m y^n z^p (B_1+I) \nonumber\\
& \quad - \sum_{m,n,p=0}^{\infty} (A_1+mI) U_{m,n,p} (B_1 +(m+p)I) (C_1+mI)^{-1} \, C_1 x^m y^n z^p \nonumber\\
& \quad + \sum_{m,n,p=0}^{\infty} A_1 U_{m,n,p} x^m y^n z^p B_1\nonumber\\
& = - U_x(C_1 - (B_1+I)x) + A_1(xU_x + zU_z) + A_1UB_1,
\end{align}
completing the proof of Equation \eqref{4.21}. Similarly, one can show that the matrix function $F_3$ satisfies the bilateral type matrix differential equations \eqref{4.22} and \eqref{4.23}.
\end{proof}
The bilateral type partial matrix differential equations obeyed by the remaining Lauricella matrix functions and Srivastava's triple hypergeometric matrix functions are tabulated in Table~1.
{\renewcommand\arraystretch{1.5}
\begin{longtable}{|l|l|c|}
\hline
Functions & Systems of Matrix Differential equations & Conditions on Matrices\\
\hline
$F_4$ & $\begin{array}{c}
x(1-x) U_{xx} -(xyU_{xy} + xzU_{xz}) - (yU_y + zU_z) B_1\\
+ U_x (C_1-(B_1+I)x) - xA_1 U_x - A_1 U B_1 = 0,\\
y(1-y) U_{yy} -(xyU_{xy} + xzU_{xz} + 2yz U_{yz} + z^2 U_{zz}) \\
- xU_x B_2 + U_y (C_2-(B_2+I)y) - A_1 (yU_y + zU_z) \\
-zU_z (B_2 + I)- A_1 U B_2 = 0,\\
z(1-z) U_{zz} -(xyU_{xy} + xzU_{xz} + 2yz U_{yz} + y^2 U_{yy}) \\
- xU_x B_2 + U_z (C_3-(B_2+I)z) - A_1 (yU_y + zU_z) \\
-yU_y (B_2 + I)- A_1 U B_2 = 0;
\end{array}$ & $\begin{array}{c}
B_1B_2 = B_2B_1,\\
C_iC_j = C_jC_i,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_6$ & $\begin{array}{c}
x(1-x) U_{xx} - xz U_{xz} + U_x (C_1 - (B_1 + I)x) \\
- A_1(xU_x + zU_z) - A_1 U B_1 = 0,\\
y(1-y) U_{yy} + z U_{yz} + U_y (C_2 - (B_2 + I)y) \\
- yA_2 U_y - A_2 U B_2 = 0,\\
z(1-z) U_{zz} - xz U_{xz} + y U_{yz} + U_z (C_2 - (B_1 + I)z) \\
- A_3(xU_x + zU_z) - A_3 U B_1 = 0;
\end{array}$ & $\begin{array}{c}
A_i A_j = A_j A_i\\
B_1B_2 = B_2B_1\\
C_iC_j = C_jC_i,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_7$ & $\begin{array}{c}
x(1-x)U_{xx} + y U_{xy} + z U_{xz} + U_x (C_1 - (B_1 + I)x)\\
- xA_1 U_x - A_1 U B_1 = 0,\\
y(1-y)U_{yy} + x U_{xy} + z U_{yz} - yz U_{yz} - zU_z B_2 \\
+ U_y (C_1 - (B_2 + I)y) - yA_2 U_y - A_2 U B_2 = 0,\\
z(1-z)U_{zz} + x U_{xz} + y U_{yz} - yz U_{yz} - yU_y B_3 \\
+ U_z (C_1 - (B_3 + I)z) - zA_2 U_z - A_2 U B_3 = 0;
\end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_iB_j = B_jB_i,\\
B_i C_1 = C_1 B_i
\end{array}$\\
\hline
$F_8$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - xz U_{xz} - x A_1U_x + \\
U_x (C_1 - (B_1 + I)x) - (yU_y + zU_z) B_1 - A_1UB_1 = 0,\\
y(1-y)U_{yy} - xyU_{xy} - yz U_{yz} + zU_{yz} - yA_1U_y\\
- (xU_x + zU_z) B_2 + U_y (C_2 - (B_2 + I)y) - A_1UB_2 = 0,\\
z(1-z)U_{zz} - xzU_{xz} - yz U_{yz} + yU_{yz} - zA_1U_z\\
- (xU_x + yU_y) B_3 + U_z (C_2 - (B_3 + I)z) - A_1UB_3 = 0;
\end{array}$ & $\begin{array}{c}
B_iB_j = B_jB_i,\\
C_1C_2 = C_2C_1,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_{10}$ & $\begin{array}{c}
x(1-x)U_{xx} - 2xzU_{xz} - z^2 U_{zz} - zU_z(B_1 + I)\\
+ U_x (C_1 - (B_1 + I)x) - A_1(xU_x + zU_z)
- A_1 UB_1 = 0,\\
y(1-y)U_{yy} + zU_{yz} + U_y (C_2 - (B_2 + I)y)\\
-yA_2U_y - A_2 UB_2 = 0,\\
z(1-z)U_{zz} - 2xzU_{xz} + y U_{yz} - x^2U_{xx} + U_z C_2 \\
- (xU_x + zU_z)(B_1 + I)
- A_1(xU_x + zU_z) - A_1 UB_1 = 0;
\end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_1B_2 = B_2B_1,\\
C_1C_2 = C_2C_1,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_{11}$ & $\begin{array}{c}
x(1-x)U_{xx} - xzU_{xz} + U_x (C_1 - (B_1 + I)x)\\
- A_1(xU_x + zU_z) - A_1 UB_1 = 0,\\
y(1-y)U_{yy} - yzU_{yz} + z U_{yz} - zU_zB_2\\
+ U_y (C_2 - (B_2 + I)y) - yA_2U_y - A_2 UB_2 = 0,\\
z(1-z)U_{zz} - xyU_{xy} - yzU_{yz}- xzU_{xz} + yU_{yz} - y U_y B_1\\
+ U_z (C_2 - (B_1 + I)z) - A_2(xU_x + zU_z)
- A_2 UB_1 = 0;
\end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_1B_2 = B_2B_1,\\
C_1C_2 = C_2C_1,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_{12}$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - yzU_{yz}- xzU_{xz} - z U_z B_1\\
+ U_x (C_1 - (B_1 + I)x) - A_1(xU_x + yU_y)
- A_1 UB_1 = 0,\\
y(1-y)U_{yy} - xyU_{xy} + zU_{yz} + U_y (C_2 - (B_1 + I)y)\\
- A_2(xU_x + yU_y) - A_2 UB_1 = 0,\\
z(1-z)U_{zz} - xzU_{xz} + yU_{yz} + U_z (C_2 - (B_2 + I)z)\\
-xU_x B_2 - zA_1U_z - A_1 UB_2 = 0;
\end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_1B_2 = B_2B_1,\\
C_1C_2 = C_2C_1,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$F_{13}$ & $\begin{array}{c}
x(1-x)U_{xx} + yU_{xy} + zU_{xz} - xzU_{xz}\\
+ U_x (C_1 - (B_1 + I)x) - A_1(xU_x + zU_z)
- A_1 UB_1 = 0,\\
y(1-y)U_{yy} + xU_{xy} + zU_{yz} - yzU_{yz} - zU_z B_2\\
+ U_y (C_1 - (B_2 + I)y) - yA_2U_y - A_2 UB_2 = 0,\\
z(1-z)U_{zz} + xU_{xz} + yU_{yz} - xyU_{xy} - yzU_{yz}\\
- xzU_{xz} + U_z (C_1 - (B_1 + I)z) -yU_y B_1\\
- A_2(xU_x + zU_z) - A_2 UB_1 = 0;
\end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_1B_2 = B_2B_1,\\
B_i C_1 = C_1 B_i
\end{array}$\\
\hline
$F_{14}$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - yzU_{yz} - 2xzU_{xz} - z^2U_{zz}\\
+ U_x (C_1 - (B_1 + I)x) - zU_z - (yU_y + zU_z)B_1 \\
- A_1(xU_x + zU_z) - A_1 UB_1 = 0,\\
y(1-y)U_{yy} - xyU_{xy} + zU_{yz} - yzU_{yz} - zU_z B_2\\
- xU_xB_2 + U_y (C_2 - (B_2 + I)y) - yA_1U_y
- A_1 UB_2 = 0,\\
z(1-z)U_{zz} -2xzU_{xz} + yU_{yz} - xyU_{xy} - yzU_{yz}\\
+ U_z (C_2 - (B_1 + I)z) - x^2 U_{xx} - xU_x(B_1+I)\\
-yU_y B_1 - A_1(xU_x + zU_z) - A_1 UB_1 = 0;
\end{array}$ & $\begin{array}{c}
B_1B_2 = B_2B_1,\\
C_1C_2 = C_2C_1,\\
B_i C_j = C_j B_i
\end{array}$\\
\hline
$H_{\mathcal{A}}$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - yzU_{yz} - xzU_{xz} - zU_zB \\
+ U_x (C - (B + I)x) - A(yU_y + xU_x) - AUB = 0, \\
y(1-y)U_{yy} - xyU_{xy} -xz U_{xz} - yzU_{yz} + zU_{yz} - xU_xB'\\
+ U_y (C' - (B' + I)y) - (yU_y + zU_z)B
- UBB' = 0,\\
z(1-z)U_{zz} -xzU_{xz} - xyU_{xy} - yzU_{yz} + yU_{yz} - xU_{x} B'\\
+ U_z (C' - (B' + I)z) - A(yU_y + zU_z)
- A UB' = 0;
\end{array}$ & $\begin{array}{c}
B, B', C, C'\\
\text{ are commuting }\\
\end{array}$\\
\hline
$H_{\mathcal{B}}$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - yzU_{yz} - xzU_{xz} - zU_zB \\
+ U_x (C - (B + I)x) - A(yU_y + xU_x) - AUB = 0, \\
y(1-y)U_{yy} - xyU_{xy} -xz U_{xz} - yzU_{yz} - xU_xB'\\
+ U_y (C' - (B' + I)y) - (yU_y + zU_z)B - UBB' = 0,\\
z(1-z)U_{zz} -xzU_{xz} - xyU_{xy} - yzU_{yz} - xU_{x} B'\\
+ U_z (C'' - (B' + I)z) - A(yU_y + zU_z) - A UB' = 0;
\end{array}$ & $\begin{array}{c}
B, B', C, C', \text{ and }\\
C'' \text{ are commuting }\\
\end{array}$\\
\hline
$H_{\mathcal{C}}$ & $\begin{array}{c}
x(1-x)U_{xx} - xyU_{xy} - yzU_{yz} - xzU_{xz}\\
+ yU_{xy}+ zU_{xz} -zU_zB + U_x (C - (B + I)x) \\
- A(yU_y + xU_x) - AUB = 0, \\
y(1-y)U_{yy} - xyU_{xy} -xz U_{xz} - yzU_{yz}\\
+ xU_{xy}+ zU_{yz} - xU_xB' + U_y (C - (B' + I)y) \\
- (yU_y + zU_z)B - UBB' = 0,\\
z(1-z)U_{zz} -xzU_{xz} - xyU_{xy} - yzU_{yz}\\
+ xU_{xz}+ yU_{yz} + U_z (C - (B' + I)z) - xU_{x} B'\\
- A(yU_y + zU_z) - A UB' = 0.
\end{array}$ & $\begin{array}{c}
B, B' \text{ and } C\\
\text{are commuting }\\
\end{array}$\\
\hline
\caption{Systems of partial matrix differential equations of bilateral type satisfied by Lauricella and Srivastava matrix functions}
\end{longtable}}
\begin{example} {\rm We show here that the conditions on the matrices given in the third column of Table~1 are necessary for systems of matrix differential equations of bilateral type to hold.
Consider the differential equation of the bilateral type
\begin{align}
&x(1-x)U_{xx} - xzU_{xz} + U_x(C_1 - (B_1+I)x) - A_1(xU_x + zU_z) - A_1UB_1 = 0\label{e3.19}
\end{align}
satisfied by the Lauricella matrix function $F_3$. Let $U = F_{3}(A_1, A_2, A_2, B_1, B_2, B_1; C_1, C_2, C_3; x, y, z) = \sum_{m,n, p=0}^{\infty} U_{m,n, p} \, x^m y^n z^p$. Then, using the partial derivatives \eqref{e3.17}, we have
\begin{align}
&x(1-x)U_{xx} - xzU_{xz} + U_x C_1\nonumber\\
& = \sum_{m,n, p=0}^{\infty} (m+1) U_{m+1,n, p} \, x^m y^n z^p (C_1 + mI) - \sum_{m,n, p=0}^{\infty} m(m-1) U_{m,n, p} \, x^m y^n z^p\nonumber\\
& \quad - \sum_{m,n, p=0}^{\infty} m\,p\, U_{m,n, p} \, x^m y^n z^p
\end{align}
and
\begin{align}
& x U_x (B_1+I) + A_1(xU_x + zU_z) + A_1UB_1\nonumber\\
& = \sum_{m,n, p=0}^{\infty} m U_{m,n, p} \, x^m y^n z^p (B_1 + I) + \sum_{m,n, p=0}^{\infty} A_1 (m+p) U_{m,n, p} \, x^m y^n z^p\nonumber\\
& \quad + \sum_{m,n, p=0}^{\infty} A_1 U_{m,n, p} \, x^m y^n z^p B_1.
\end{align}
In particular for $m=1$ and $n=p=0$, we get
\begin{align}
&x(1-x)U_{xx} - xzU_{xz} + U_x C_1 = A_1 (A_1 + I) B_1 (B_1 + I) C_1^{-1} x\label{e3.22}
\end{align}
and
\begin{align}
& x U_x (B_1+I) + A_1(xU_x + zU_z) + A_1UB_1 = A_1 (A_1 + I) B_1 C_1^{-1} x (B_1 + I).\label{e3.23}
\end{align}
From \eqref{e3.22} and \eqref{e3.23}, one can conclude that the equation \eqref{e3.19} will hold if
\begin{align}
A_1 (A_1 + I) B_1 (B_1 + I) C_1^{-1} x = A_1 (A_1 + I) B_1 C_1^{-1} x (B_1 + I).
\end{align}
This gives $B_1 C_1 = C_1 B_1$. Similarly, at $m = 0, n = 1, p = 0$, we have
\begin{align}
&x(1-x)U_{xx} - xzU_{xz} + U_x C_1 = A_1 A_2 B_1 B_2 C_1^{-1} C_2^{-1} C_1 y\label{e4.22}
\end{align}
and
\begin{align}
& x U_x (B_1+I) + A_1(xU_x + zU_z) + A_1UB_1 = A_1 A_2 B_2 C_2^{-1} yB_1,\label{e4.23}
\end{align}
which gives $B_1 B_2 = B_2 B_1$, $C_1 C_2 = C_2 C_1$ and $B_1 C_2 = C_2 B_1$. The remaining conditions on the matrices can be obtained by changing the particular values of $m$, $n$ and $p$.
}
\end{example}
In Table~2, we give the integral representations of Lauricella matrix functions of three variables and Srivastava's triple hypergeometric matrix functions.
\begin{longtable}{|l|l |c|}
\hline
Functions & Integral Representations & Conditions on Matrices\\
\hline
$F_6$ & $\begin{array}{c}\\
\Gamma{\left(\begin{array}{c}
C_1, C_2\\
A_1, A_2, A_3, C_1 - A_1, C_2-A_2-A_3
\end{array}\right)} \iiint u^{A_1-I}\\
\times v^{A_2-I} w^{A_3-I} (1-u)^{C_1-A_1-I} (1-v-w)^{C_2-A_2-A_3-I}\\
\times (1-vy)^{-B_2} (1-ux-wz)^{-B_1} \ du \, dv\, dw, \, 0\leq u \leq 1,\\
\, v \geq 0, \, w\geq 0, \ v+w \leq 1, r + t < 1, \ \vert x\vert
\leq r, \vert z\vert \leq t,\\
\beta(A_1) >0, \beta(A_2)>0, \beta(A_3)>0, \beta(C_1)>0, \\
\beta(C_2)>0, \beta(C_1-A_1)>0, \beta(C_2-A_2-A_3)>0.
\\
\end{array}$ & $\begin{array}{c}
A_iA_j = A_jA_i,\\
B_iC_j = C_jB_i,\\
C_1C_2 = C_2C_1,\\
A_i C_j = C_j A_i
\end{array} $ \\
\hline
$F_7$ & $\begin{array}{c}\\
\iiint (1-ux)^{-A_1} (1-vy-wz)^{-A_2} u^{B_1-I} v^{B_2-I} w^{B_3-I}\\
\times (1-u-v-w)^{C_1-B_1-B_2-B_3-I} du \, dv \, dw\\
\times \Gamma{\left(\begin{array}{c}
C_1\\
B_1, B_2, B_3, C_1 - B_1-B_2-B_3
\end{array}\right)}, u\geq 0,\\
\, v\geq 0, \, w\geq 0, \, u + v+ w \leq 1, \, s+t < 1, \, \vert y\vert \leq s,\\
\vert z\vert \leq t, \beta(B_1) >0, \beta(B_2)>0, \beta(B_3)>0, \beta(C_1)>0, \\
\beta(C_1-B_1-B_2-B_3)>0;\\
\end{array}$ & $\begin{array}{c}
B_iB_j = B_jB_i,\\
B_iC_1 = C_1B_i
\end{array}$\\
\hline
$F_8$ & $\begin{array}{c}\\
\iiint (1-ux-vy-wz)^{-A_1} u^{B_1-I} v^{B_2-I} w^{B_3 - I}\\
\times (1-u)^{C_1 - B_1 - I} (1-v-w)^{C_2-B_2-B_3-I} du\,dv \, dw,\\
\times \Gamma{\left(\begin{array}{c}
C_1, C_2\\
B_1, B_2, B_3, C_1 - B_1, C_2-B_2-B_3
\end{array}\right)}, 0 \leq u \leq 1,\\
v \geq 0, \ w \geq 0, \ v + w \leq 1, \ r + s + t < 1, \, \vert x\vert \leq r, \vert y\vert \leq s, \\ \vert z\vert \leq t, \beta(B_1) >0, \beta(B_2)>0, \beta(B_3)>0, \beta(C_1)>0,\\
\beta(C_2)>0, \beta(C_1-B_1)>0, \beta(C_2-B_2-B_3)>0.
\end{array}$ & $\begin{array}{c}
B_iB_j = B_jB_i,\\
C_1C_2 = C_2C_1,\\
B_iC_j = C_jB_i
\end{array}$\\
\hline
$F_{11}$ & $\begin{array}{c}\\
\Gamma{\left(\begin{array}{c}
C_1, C_2\\
A_1, A_2, C_1 - A_1, C_2-A_2
\end{array}\right)} \int_{0}^{1} \int_{0}^{1} u^{A_1-I} v^{A_2-I}\\
\times (1-u)^{C_1-A_1-I}(1-v)^{C_2-A_2-I}(1-ux-vz)^{-B_1} \\
\times (1-vy)^{-B_2} \,du \, dv, \ r+t < 1, \ \vert x\vert \leq r, \vert z\vert \leq t,\\
\beta(A_1)>0, \beta(A_2)>0, \beta(C_1)>0, \beta(C_2)>0,\\
\beta(C_1-A_1)>0, \beta(C_2-A_2)>0.
\\ \end{array}$ & $\begin{array}{c}
A_1A_2 = A_2A_1,\\
B_iC_j = C_jB_i,\\
C_1C_2 = C_2C_1,\\
A_i C_j = C_j A_i
\end{array} $ \\
\hline
$F_{12}$ & $\begin{array}{c}\\
\iiint (1-vy)^{A_1} (1-ux-vy-wz+vwyz)^{-A_1} u^{B_1-I} v^{A_2-I} \\
\times w^{B_2-I} (1-u)^{C_1-B_1-I} (1-v-w)^{C_2-A_2-B_2-I} (1-vy)^{-B_1} \\
\times \, du\, dv\, dw \Gamma{\left(\begin{array}{c}
C_1, C_2\\
A_2, B_1, B_2, C_1 - B_1, C_2-A_2-B_2
\end{array}\right)},\\
0 \leq u \leq 1, v \geq 0, w \geq 0, v+w \leq 1, r+s+t < 1+st, \\
\vert x\vert \leq r, \vert y\vert \leq s, \vert z\vert \leq t, \beta(A_2) >0, \beta(B_1)>0,\\
\beta(B_2)>0, \beta(C_1)>0, \beta(C_2) >0, \beta(C_1-B_1)>0, \\
\beta(C_2- A_2-B_2)>0.
\\ \end{array}$ & $\begin{array}{c}
C_1, C_2, B_1, B_2 \text{ and } \\
A_2 \, \text { are commuting }
\end{array}$\\
\hline
$F_{13}$ & $\begin{array}{c}\\
\iint (1-ux)^{-A_1} (1-vy - uz)^{-A_2} u^{B_1-I} v^{B_2-I}\\
\times (1-u-v)^{C_1-B_1-B_2-I} \, du \, dv\\
\Gamma{\left(\begin{array}{c}
C_1\\
B_1, B_2, C_1 - B_1-B_2
\end{array}\right)}, u\geq 0, \, v\geq 0, \\
u + v \leq 1, \ s + t < 1, \ \vert y\vert \leq s, \vert z\vert \leq t, \beta(B_1) >0,\\
\beta(B_2)>0, \beta(C_1)>0, \beta(C_1-B_1-B_2)>0.
\\ \end{array}$ & $\begin{array}{c}
C_1, B_1 \text{ and } B_2\\
\text { are commuting }
\end{array} $\\
\hline
$H_{\mathcal{A}}$ & $\begin{array}{c}\\
\int_{0}^{1} \int_{0}^{1} (1-ux-vy-vz+v^2yz)^{-A} (1-vy)^A u^{B-I} \\
\times v^{B'-I} (1-u)^{C-B-I} (1-v)^{C'-B'-I} (1-vy)^{-B}\\
du \, dv \Gamma{\left(\begin{array}{c}
C, C'\\
B, B', C - B, C'-B'
\end{array}\right)}, r+s+t < 1+st,\\
\ \vert x\vert \leq r, \vert y\vert \leq s, \vert z\vert \leq t, \beta(B) >0, \beta(B')>0, \\
\beta(C)>0, \beta(C')>0, \beta(C-B)>0, \beta(C'-B')>0.
\\ \end{array}$ & $\begin{array}{c}
B, B', C \text{ and } C'\\
\text { are commuting }
\end{array} $\\
\hline
$H_{\mathcal{B}}$ & $\begin{array}{c}\\
\Gamma^{-1}(A) \Gamma^{-1} (B) \Gamma^{-1}(B') \int_{0}^{\infty}\int_{0}^{\infty} \int_{0}^{\infty} e^{-(u+v+w)} v^{A-I}\\
\times u^{B-I} w^{B'-I} {}_0F_1(-;C;xuv) {} \ _0F_1(-;C';yuw) {} \\
_0F_1(-;C'';zvw) \ du \ dv \ dw, \vert x\vert \leq r, \vert y\vert \leq s, \vert z\vert \leq t,\\
\max\{r, s, t\}<1, \beta(A)>0, \beta(B)>0, \beta(B')>0.\\
\end{array}$ & $\begin{array}{c}
B, B' \text{ commutes }\\
\text { with } A
\end{array} $\\
\hline
$H_{\mathcal{C}}$ & $\begin{array}{c}\\
\Gamma{\left(\begin{array}{c}
C\\
A, B, C-A-B
\end{array}\right)}\int_{0}^{1} \int_{0}^{1} u^{A-I} v^{B-I} \\
(1-u)^{C-A-I} (1-v)^{C-A-B-I} (1-ux)^{-B} (1-ux)^{B'}\\
(1-ux-vy-wz+uvy-zxu^2)^{-B'} du \, dv,\\
\ r+s+t+rt < 1+s, \ \vert x\vert \leq r, \vert y\vert \leq s, \vert z\vert \leq t,\\
\beta(A)>0, \beta(B)>0, \beta(C)>0, \beta(C-A-B)>0.
\\ \end{array}$ & $\begin{array}{c}
A, B, C \text{ commutes }\\
\text { and } B'C = CB'
\end{array} $\\
\hline
\caption{Integral Representations of Lauricella and Srivastava Matrix Functions}
\end{longtable}
\section{Introduction}
\subsection{Asymptotic behaviour of learning rates in Armijo's condition}
Fix a constant $0<\alpha <1$. For a $C^1$ function $f:\mathbb{R}^k\rightarrow \mathbb{R}$, a point $x$ and a positive number $\delta >0$, we say that Armijo's condition is satisfied if $f(x-\delta \nabla f(x))-f(x)\leq -\alpha \delta ||\nabla f(x)||^2$. We say that a sequence $\{x_n\}$ satisfies Armijo's condition \cite{armijo} if $x_{n+1}=x_n-\delta _n\nabla f(x_n)$ for some positive number $\delta _n$ for which Armijo's condition is satisfied.
In Backtracking GD, one fixes a countable set $\Delta$ of positive numbers converging to $0$, starts from a random initial point $x_0$ and defines $x_{n+1}=x_n-\delta (x_n)\nabla f(x_n)$, where $\delta (x_n)\in \Delta $ is the largest number for which Armijo's condition is satisfied. The convergence guarantee for Backtracking GD and its modifications is currently the best among all iterative methods \cite{truong-nguyen}, with associated Python code for experiments on CIFAR datasets \cite{mbtoptimizer}. A popular choice for the set $\Delta $ is as follows: we choose $0<\beta <1$ and $\delta _0>0$ and define $\Delta =\{\beta ^n\delta _0:~n=0,1,2,\ldots \}$.
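For concreteness, the whole procedure can be sketched in a few lines of Python (a minimal illustration under the above choice of $\Delta$, assuming NumPy and a user-supplied gradient; it is not the code of \cite{mbtoptimizer}):
\begin{verbatim}
import numpy as np

def backtracking_gd(f, grad, x0, alpha=0.5, beta=0.7, delta0=1.0,
                    max_iter=10**6, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        delta = delta0
        # largest delta in {beta^n delta0} satisfying Armijo's condition
        while f(x - delta * g) - f(x) > -alpha * delta * np.dot(g, g):
            delta *= beta
        x = x - delta * g
    return x
\end{verbatim}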
A drawback in Backtracking GD is that the learning rates are bounded from above by $\max \Delta $. If one could allow learning rates in Backtracking GD to be unbounded, then the convergence could be faster and could avoid bad critical points. To this end, the first author defined in \cite{truong} the Unbounded Backtracking GD procedure, where now learning rates $\delta _n$ are not bounded by $\max \Delta$ but are allowed to grow provided $\lim _{n\rightarrow\infty}\delta _n||\nabla f(x_n)||=0$. Under this condition, one obtains the same convergence guarantee as in Backtracking GD.
If the sequence $\{x_n\}$ satisfies Armijo's condition and converges, then the above condition $\lim _{n\rightarrow\infty}\delta _n||\nabla f(x_n)||=0$ is satisfied.
The goal of numerical optimisation is to guarantee convergence to local minima, and hence at least to critical points of $f$.
Recall that a critical point $x_{\infty}$ of $f$ is non-degenerate if $f$ is $C^2$ near $x_{\infty}$ and the Hessian $\nabla ^2f(x_{\infty})$ is invertible. Note that non-degenerate critical points are ``generic'', in the sense that a randomly chosen function $f$ will have all its critical points non-degenerate (for example, Morse functions). The above discussion motivates us to investigate the question: Can we allow the sequence $\delta _n$ to grow to infinity while having the sequence $\{x_n\}$ converge to a non-degenerate critical point? A bit surprisingly, the answer is No, as seen from the next result.
\begin{theorem} Assume that the sequence $\{x_n\}$ satisfies Armijo's condition and converges to a non-degenerate critical point $x_{\infty}$. To avoid triviality, we assume moreover that $\nabla f(x_n)\not= 0$ for all $n$. Then for every $\epsilon >0$, there is $n_{\epsilon} $ so that for all $n\geq n_{\epsilon}$ we have
\begin{eqnarray*}
\alpha \delta _n\leq \frac{1}{2}(||\nabla ^2f(x_{\infty})||+\epsilon ) \times (||\nabla ^2f(x_{\infty})^{-1}||+\epsilon )^2.
\end{eqnarray*}
\label{TheoremLearningRateRestriction}\end{theorem}
\begin{proof}
Fix $\epsilon >0$. We have that $\{f(x_n)\}$ decreases to $f(x_{\infty})$. Hence, by Armijo's condition we have
\begin{eqnarray*}
0\leq f(x_{n+1})-f(x_{\infty})\leq f(x_n)-f(x_{\infty}) -\alpha \delta _n||\nabla f(x_n)||^2,
\end{eqnarray*}
for all $n$. Therefore, for all $n$, we have $\alpha \delta _n||\nabla f(x_n)||^2 \leq f(x_n)-f(x_{\infty})$.
By Taylor's expansion for $f$ near $x_{\infty}$, using that $f$ is $C^2$ and noting that $\nabla f(x_{\infty})=0$, we have (here $o(.)$ is the small-O notation)
\begin{eqnarray*}
f(x_n)-f(x_{\infty}) &=&\frac{1}{2}\langle \nabla ^2f(x_{\infty})(x_n-x_{\infty}),x_n-x_{\infty}\rangle +o(||x_n-x_{\infty}||^2)\\
&\leq& \frac{1}{2}||\nabla ^2f(x_{\infty})||\times ||x_n-x_{\infty}||^2+o(||x_n-x_{\infty}||^2).
\end{eqnarray*}
Hence, if $n$ is large enough, then $f(x_n)-f(x_{\infty})\leq \frac{1}{2}(||\nabla ^2f(x_{\infty})||+\epsilon )\times ||x_n-x_{\infty}||^2$.
By Taylor's expansion for $\nabla f$ near $x_{\infty}$, using again that $f$ is $C^2$ and noting that $\nabla f(x_{\infty})=0$, we have
\begin{eqnarray*}
\nabla f(x_n)=\nabla ^2f(x_{\infty})(x_n-x_{\infty})+o(||x_n-x_{\infty}||).
\end{eqnarray*}
Hence, multiplying both sides with $\nabla ^2f(x_{\infty})^{-1}$, when $n$ is large enough, we get $||x_n-x_{\infty}||\leq (||\nabla ^2f(x_{\infty})^{-1}||+\epsilon ) ||\nabla f(x_n)||$.
Putting together all the above estimates and cancelling the term $||\nabla f(x_n)||^2$ at the end, we obtain finally:
\begin{eqnarray*}
\alpha \delta _n\leq \frac{1}{2}(||\nabla ^2f(x_{\infty})||+\epsilon ) \times (||\nabla ^2f(x_{\infty})^{-1}||+\epsilon )^2,
\end{eqnarray*}
for large enough values of $n$, as wanted.
\end{proof}
This result says roughly that, in the case of convergence to a non-degenerate critical point, the performance of Unbounded Backtracking GD and of the usual Backtracking GD are similar. On the other hand, in the case of convergence to a degenerate critical point, the performance of the two algorithms can be sharply different. Below are some experimental results illustrating that both scenarios do happen in reality.
The setups are as follows. We choose $\alpha =0.5$ for Armijo's condition.
For the usual Backtracking GD, we choose $\beta =0.7$ and $\delta _0=1$.
For Unbounded Backtracking GD, we choose $\beta =0.7$ and $\delta _0=1$ as in the usual Backtracking GD. We choose the function $h(t)=\delta _0$ if $t>1$, and $h(t)=\delta _0/\sqrt{t}$ if $t\leq 1$. For the readers' convenience, we recall here the update rule for Unbounded Backtracking GD \cite{truong}: At step $n$, we start with $\delta =\delta _0$. If $\delta $ does not satisfy Armijo's condition, then we repeatedly replace $\delta $ by $\delta \beta$ until Armijo's condition is satisfied; in this case we proceed as in the usual Backtracking GD. On the other hand, if $\delta $ does satisfy Armijo's condition, then we repeatedly replace it by $\delta /\beta $ as long as both Armijo's condition and $\delta \leq h(||\nabla f(x_n)||)$ remain satisfied. We choose $\delta _n$ to be the final value of $\delta$, and update $x_{n+1}=x_n-\delta _n\nabla f(x_n)$. A sketch of one step of this rule is given below.
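The following Python fragment is a minimal sketch of one such step (our own illustration, under the parameter choices above):
\begin{verbatim}
import numpy as np

def h(t, delta0=1.0):
    return delta0 if t > 1 else delta0 / np.sqrt(t)

def unbounded_step(f, grad, x, alpha=0.5, beta=0.7, delta0=1.0):
    g = grad(x)                          # assumed nonzero at this step
    armijo = lambda d: f(x - d * g) - f(x) <= -alpha * d * np.dot(g, g)
    delta = delta0
    if not armijo(delta):
        while not armijo(delta):         # usual backtracking branch
            delta *= beta
    else:
        # grow while both Armijo's condition and the bound by h hold
        while armijo(delta / beta) and delta / beta <= h(np.linalg.norm(g)):
            delta /= beta
    return x - delta * g
\end{verbatim}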
We stop when either the iteration number reaches $10^6$ or the norm of the gradient at the current point is $\leq 10^{-10}$.
{\bf Example 1:} We look at the function $f(x,y)=x^3 \sin(1/x) +y^3\sin(1/y)$ and start from the initial point $z_0=(4,-5)$. After 10 steps, both algorithms Backtracking GD and Unbounded Backtracking GD arrive at the same point $(0.09325947,-0.09325947)$, which is very close to a non-degenerate local minimum of the function.
{\bf Example 2:} We look at the function $f(x,y)=x^4+y^4$ and start from the initial point $z_0=(0.1,15)$. This function has a degenerate global minimum at $(0,0)$. After $10^6$ steps, Backtracking GD arrives at the point $(0.00111797,0.00111802)$ with learning rate $1$. On the other hand, after only 89 steps, Unbounded Backtracking GD already arrives at a better point $(0.00025327, 0.00025327)$ with learning rate $90544.63441298596$, much bigger than $1$.
Finally, we present a heuristic argument showing that Armijo's condition and the backtracking manner of choosing learning rates could prevent a pathological scenario not covered by the convergence result in \cite{truong}. More precisely, we use the following update rule: it is the update rule for the discrete version of Unbounded Backtracking GD mentioned above, except that we do not constrain $\delta $ by any function $h(||\nabla f(z_n)||)$. The pathological scenario is that the constructed sequence $\{z_n\}$ contains both a bounded and an unbounded subsequence, and the bounded subsequence converges to a critical point $z_{\infty}$. Since, as mentioned, modifications of Backtracking GD in \cite{truong2, truong3} can avoid saddle points, we expect that with the above update rule the sequence $\{z_n\}$ can also avoid saddle points. Then the point $z_{\infty}$ is expected to be a local minimum, and we expect a small open neighbourhood $U$ of $z_{\infty}$ for which $\min _{z\in \partial U}f(z)>f(z_{\infty})$. Now, the backtracking manner of choosing learning rates is expected to have the following effect: if $z\in U$ is very close to $z_{\infty}$, then the choice of $\delta (z)$ - since it is increased by at most a factor $1/\beta$ at a time and must keep the value of the function from increasing - will not be enough to allow the resulting point $z-\delta (z)\nabla f(z)$ to escape $U$. (Since $||\nabla f(z_n)||$ is very small, if $\delta '$ is the largest positive number so that $z_n-\delta '\nabla f(z_n)$ stays in $U$, then the next candidate value $\delta '/\beta$ is expected to bring $z_n-\delta '\nabla f(z_n)/\beta $ close to $\partial U$, which would force $f(z_n-\delta '\nabla f(z_n)/\beta )>f(z_n)$ - a condition prohibited by Armijo's condition.) Therefore, we expect that if there is a subsequence $\{z_{n_j}\}$ converging to $z_{\infty}$, then the whole sequence $\{z_n\}$ must be bounded, and the above pathological scenario cannot happen. It would be good if this heuristic argument could be made rigorous, at least for $C^2$ cost functions.
\subsection{Backtracking GD has correct units}
In \cite{zeiler}, where he introduced Adadelta, Zeiler gives an interesting interpretation of whether a numerical method is ``right'' or not, based on the idea of ``correct units''. Here we show that Backtracking GD has the correct unit, which lends more support to why it is effective. The argument is of course non-rigorous, but we hope that this explanation can be amusing and can encourage more interest in using Backtracking GD in practical applications, in particular in Deep Learning.
The idea is as follows. If we have an equality $LHS=RHS$, then whenever the LHS has a certain unit, so does the RHS. For example, in the formula for velocity $v=x/t$, if the unit of $x$ is m and the unit of $t$ is s, then the unit of $v$ must be $m/s$. Likewise, in numerical methods, if we define $x_{n+1}=x_n+\xi _n$, then the unit of $\xi _n$ must equal that of $x_n$ and $x_{n+1}$.
To make the presentation simple, we choose dimension $k=1$, and hence our map $f$ is from $\mathbb{R}$ to $\mathbb{R}$. In this case, we can write $f'(x)$ for $\nabla f(x)$. For an object $z$, we write $Unit(z)$ for its unit. Our convention is that if a constant $\alpha$ is not bound in any relation (equality, inequality and so on), then it is unitless.
By definition
\begin{eqnarray*}
f'(x)=\frac{\Delta f}{\Delta x},
\end{eqnarray*}
where $\Delta$ is the difference, and for any object $z$ we have $Unit(\Delta z)=Unit (z)$. Therefore, we obtain $Unit(f')=Unit(f)/Unit(x)$.
Similarly, $f''(x)=\Delta f'/\Delta x$ implies that $Unit(f'')=Unit(f')/Unit(x)=Unit(f)/Unit(x)^2$.
Zeiler analysed the unit correctness of some common gradient descent methods appearing before Adadelta: Standard GD, Momentum, Adagrad and Newton's method. Here we repeat the analysis for Standard GD and Newton's.
For Standard GD, the update rule is $x_{n+1}=x_n-\delta _0f'(x_n)$. Since $\delta _0$ is an unbound constant, it is unitless. Hence, we have a mismatch, because $Unit(f')=Unit(f)/Unit(x)\not= Unit(x)$ in general. This can be interpreted in that Standard GD is not the ``right'' method for a general $C^1$ function. Similarly, Zeiler showed that Momentum and Adagrad do not have correct units.
For Newton's method, the update rule is $x_{n+1}-x_n=-f'(x_n)/f''(x_n)$. Here we have unit correctness because the unit of the RHS is
\begin{eqnarray*}
Unit(f'/f'')=[Unit(f)/Unit(x)]/[Unit(f)/Unit(x)^2]=Unit(x),
\end{eqnarray*}
which is the same as that of the LHS. One weak point of Newton's method is, however, that it is not guaranteed to be a descent method, that is, there is no guarantee that $f(x_{n+1})\leq f(x_n)$ for all $n$.
Zeiler designed his algorithm Adadelta as a way to make Adagrad have the correct unit. However, again this method is not guaranteed to be a descent method.
Now we show that Backtracking GD has the correct unit. Indeed, we choose $\delta (x_n)$ as the largest $\delta$ among $\{\beta ^n\delta _0: ~n=0,1,2,\ldots\}$ for which Armijo's condition holds:
\begin{eqnarray*}
f(x-\delta f'(x))-f(x)\leq -\alpha \delta |f'(x)|^2.
\end{eqnarray*}
Since $x-\delta f'(x)$ appears as an argument for the function $f$, we must have $Unit(\delta f'(x))=Unit(x)$, which implies that
\begin{eqnarray*}
Unit(\delta )=Unit(x)/Unit( f'(x))=Unit(x)/[Unit(f)/Unit(x)]=Unit(x)^2/Unit(f).
\end{eqnarray*}
For Armijo's condition to have the correct unit, the necessary and sufficient condition is then that $\alpha$ is unitless. Likewise, we check that $\beta$ is unitless, and $Unit(\delta _0)=Unit(\delta (x))=Unit(x)^2/Unit(f)$.
Similarly, we can now check that in case $\nabla f$ is Lipschitz continuous with Lipschitz constant $L$, the Standard GD update with learning rate $\delta _0\sim 1/L$ has the correct unit. To see this, we first observe that since the constant $L$ is bound in the inequality $|f'(x)-f'(y)|\leq L|x-y|$, it follows that
\begin{eqnarray*}
Unit(L)=Unit(f')/Unit(x)=Unit(f)/Unit(x)^2,
\end{eqnarray*}
and hence the update rule $\Delta x_n=-\delta _0f'(x_n)$ has the correct unit. We can see this fact also by observing that in this case Standard GD is a special case of Backtracking GD, and hence also has the correct unit. For example, if we choose the learning rate to be too much bigger than $1/L$, then the sequence may diverge to $\infty$, that is, the update rule is not ``right''. If we instead choose the learning rate to be too much smaller than $1/L$, then convergence can be guaranteed, but the limit point may not be a critical point of $f$.
On the other hand, for Diminishing GD, where we pre-choose a sequence $\delta _n$ so that $\lim _{n\rightarrow\infty}\delta _n=0$ and $\sum _{n}\delta _n=\infty$, independently of the function $f$, it is clear that the $\delta _n$'s are unitless. Hence the update rule for Diminishing GD does not have the ``correct unit''.
\subsection{Acknowledgments} We thank the anonymous commenters whose remarks inspired our study in Section 0.1. The first author is supported by Young Research Talents grant 300814 from the Research Council of Norway.
\section{Introduction}
\label{sec:introduction}
Text summarization is an NLP task with many real-world applications. The ever-increasing amount of unstructured information in text form calls for methods to automatically extract the relevant information from documents and present it in condensed form. Within the field of summarization, different paradigms are recognised along two dimensions: extractive vs.~abstractive, and single-document vs.~multi-document. In extractive summarization, the sentences or words which carry the most important information are extracted from the text and directly presented as the summary. Abstractive summarization methods paraphrase the text and, by changing it, aim to generate more flexible and consistent summaries. Furthermore, single-document summarization works on single documents, while multi-document summarization deals with multiple documents at once and produces a single summary. In this paper, we concentrate on single-document abstractive summarization. Most recent abstractive models utilize the neural network-based sequence-to-sequence approach. During training, such models learn the conditional probability of a summary given the input sequence by minimizing a loss function (typically cross-entropy). Most approaches are based on the encoder-decoder framework, where the encoder encodes the input sequence into a vector representation and the decoder produces a new summary given the draft summary (the part of the summary generated during previous iterations). The last layer of the decoder, the generator, maps hidden states to token probabilities. We use a state-of-the-art Transformer for sequence-to-sequence tasks, which is built primarily on the attention mechanism \cite{Vaswani2017}.
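As a hedged illustration of this training objective (our own sketch; the model interface shown is an assumption, not a reference implementation), a teacher-forced cross-entropy step in PyTorch can be written as:
\begin{verbatim}
import torch.nn.functional as F

def seq2seq_loss(model, src, tgt, pad_id=0):
    # src: (batch, src_len); tgt: (batch, tgt_len) gold summary token ids
    logits = model(src, tgt[:, :-1])     # predict token t from tokens < t
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tgt[:, 1:].reshape(-1), ignore_index=pad_id)
\end{verbatim}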
We attempt to improve the performance of abstractive text summarization by improving the language encoding capabilities of the model. Recent results have shown that the main contribution of the Transformer is its multi-layer architecture, allowing Self-Attention to be replaced with some other technique without a significant drop in performance \cite{domhan-2018-much,wu2018pay}. Following this strategy, we develop a model that introduces convolution into the vanilla Self-Attention, allowing it to better encode the local dependencies between tokens.
To overcome the data sparsity problem, we use a pre-trained language model for the encoding part of the encoder-decoder setup, which creates a contextualized representation of the input sequence. Specifically, we use BERT due to its bi-directional context conditioning, multilingualism and state-of-the-art scores on many other tasks \cite{devlin2018}. Furthermore, we propose a new method which allows applying BERT to longer texts. The main contributions of this paper are: (1) Designing two new abstractive text summarization models based on the ideas of conditioning on a pre-trained language model and applying convolutional self-attention at the bottom layers of the encoder. (2) Proposing a method of encoding the input sequence in windows, which alleviates BERT's input limitations\footnote{BERT can process sequences with a maximum of 512 tokens.} and allows the processing of longer input texts. (3) Evaluating the performance of our models on English and German by conducting an ablation study on the CNN/Daily Mail and SwissText datasets and comparing them with other state-of-the-art methods.
\section{Related Work}
\label{sec:related}
\subsection{Pre-trained Language Models}
\label{sec:langmodels}
Traditionally, non-contextualized embedding vectors were used for pre-training neural-based NLP models \cite{mikolov2013distributed,pennington2014glove}. Recently, pre-trained language models exploiting contextualized embeddings, such as ELMo, GPT-2, BERT and XLNet raised the bar in many NLP tasks \cite{Peters:2018,Radford2019LanguageMA,devlin2018,yang2019xlnet}. Recent attempts to use these models for text summarization demonstrated their suitability by achieving new state-of-the-art results \cite{zhang2019pretrainingbased,liu2019finetune,Liu_2019}.
\subsection{Neural Abstractive Text Summarization}
\label{sec:relatedneural}
The neural approach toward abstractive summarization was largely adopted by state-of-the-art models \cite{shi2018neural}. A significant contribution was the Pointer-Generator Network \cite{See2017}. It uses a special layer on top of the decoder network to be able to both generate tokens from the dictionary and extract them from the input text. It uses the coverage vector mechanism to pay less attention to tokens already covered by previous iterations. An example of earlier work adapting Reinforcement Learning (RL) is described by \newcite{paulus2017deep}. The pure RL model achieved high ROUGE-1 and ROUGE-L scores but produced unreadable summaries. Its combination with typical cross-entropy optimization achieved high scores while eliminating the readability problem. \newcite{Liu2018}, to the best of our knowledge, were the first to use the Transformer model for summarization. It was only used in the decoder, on top of the extraction model, with various attention compression techniques to increase the size of the input sequence. \newcite{zhang2019pretrainingbased} incorporate BERT into the Transformer-based model. They use a two-stage procedure exploiting the mask learning strategy. Others attempt to improve their abstractive summarization models by incorporating an extractive model. For example, \newcite{li-etal-2018-guiding} use the Key information guide network to guide the summary generation process. In Bottom-up summarization \cite{gehrmann2018bottom}, the extractive model is used to increase the precision of the Pointer-Generator mechanism. Another strand of research adapts existing models to cope with long text. \newcite{Cohan2018ADA} present the Discourse-Aware Attention model, which introduces hierarchy in the attention mechanism via calculating an additional attention vector over the sections of the input text. \newcite{s2019extractive} showed that a language model trained on the combination of the original text, extractive summaries generated by the model and the golden summary can achieve results comparable to standard encoder-decoder based summarization models.
\section{Approach}
\label{sec:approach}
Our text summarization model is based on the Transformer architecture, adopting the original model of \newcite{Vaswani2017}. On top of the decoder, we use a Pointer-Generator (Formula~\ref{copy}) to increase the extractive capabilities of the network (we later refer to this architecture as CopyTransformer).
\begin{equation}
p(w) = p_{gen} P_{copy}(w) + (1-p_{gen}) P_{softmax}(w) ,
\label{copy}
\end{equation}
where $P_{copy}(w)$ is the probability of copying a specific word $w$ from the source document, $P_{softmax}(w)$ is the probability of generating a word calculated by the abstractive summarization model and $p_{gen}$ is the probability of copying instead of generating.
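To make this mixing concrete, the following minimal sketch (in Python/PyTorch; the tensor names and shapes are our own illustration, not the actual OpenNMT implementation) scatters the attention weights onto the vocabulary axis to form $P_{copy}(w)$ and then mixes the two distributions:
\begin{verbatim}
import torch

def pointer_generator(p_gen, attn, src_ids, vocab_logits):
    # p_gen:        (batch,)          probability of copying
    # attn:         (batch, src_len)  attention over source positions
    # src_ids:      (batch, src_len)  vocabulary ids of source tokens
    # vocab_logits: (batch, vocab)    decoder output logits
    p_softmax = torch.softmax(vocab_logits, dim=-1)
    # P_copy: scatter the attention mass onto vocabulary entries.
    p_copy = torch.zeros_like(p_softmax)
    p_copy.scatter_add_(1, src_ids, attn)
    p_gen = p_gen.unsqueeze(1)
    return p_gen * p_copy + (1.0 - p_gen) * p_softmax
\end{verbatim}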
\begin{figure}[t]
\centering
\includegraphics[width=.8\columnwidth]{img/summmodel.png}
\caption{Model overview}\label{fig:summmodel}
\end{figure}
\subsection{Convolutional Self-Attention}
\label{sec:localatten}
The Transformer, like any other self-attention network, has a hierarchical multi-layer architecture. Many experiments have shown that this architecture tends to learn lexical information in the first layers, sentence-level patterns in the middle and semantics in the upper layers \cite{raganato-tiedemann-2018-analysis,Tenney_2019}.
The disadvantage of plain self-attention is that it treats all tokens as equally important during the attention operation, whereas syntactic information is mostly concentrated in certain local areas. This problem is usually referred to as the problem of locality modeling. As syntactic information can help in identifying more important words or phrases, it could be beneficial to focus attention on these regions.
A successful approach to the locality modeling task is the so-called convolutional (local) self-attention network \cite{localattention}. Essentially, the problem is dealt with by applying a 1-dimensional convolution to the self-attention operation in the network's lower layers. This strengthens dependencies among neighboring elements and makes the model distance-aware when it searches for low-level patterns in a sequence. In other words, it restricts the attention scope to a window of neighboring elements. The 1D convolution applied to attention is illustrated in Formulas~\ref{eqn-1D_key}, \ref{eqn-1D_value} and~\ref{eq:conc}.
\begin{equation}
\widehat{\mathbf{K}}^h = \{\mathbf{k}^h_{i-\frac{M}{2}}, \dots, \mathbf{k}^h_i, \dots, \mathbf{k}^h_{i+\frac{M}{2}} \} ,
\label{eqn-1D_key}
\end{equation}
\begin{equation}
\widehat{\mathbf{V}}^h = \{\mathbf{v}^h_{i-\frac{M}{2}}, \dots, \mathbf{v}^h_i, \dots, \mathbf{v}^h_{i+\frac{M}{2}} \} ,
\label{eqn-1D_value}
\end{equation}
\begin{equation}
\mathbf{o}^h_i = \textsc{Att}(\mathbf{q}^h_i, \widehat{\mathbf{K}}^h) \widehat{\mathbf{V}}^h ,
\label{eq:conc}
\end{equation}
where $\mathbf{q}^h_i$ is the query at position $i$ and $M+1$ ($M \le I$) is the size of its attention region centered at position $i$.
\\
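For illustration, a minimal single-head sketch of this windowed attention (our own simplified Python implementation; the window is clipped at the sequence borders):
\begin{verbatim}
import torch

def local_attention_1d(q, k, v, M):
    # q, k, v: (seq_len, d) queries, keys, values of one head;
    # position i attends to [i - M/2, i + M/2] only.
    seq_len, d = q.shape
    half = M // 2
    out = torch.empty_like(q)
    for i in range(seq_len):
        lo, hi = max(0, i - half), min(seq_len, i + half + 1)
        scores = (k[lo:hi] @ q[i]) / d ** 0.5      # (window,)
        out[i] = torch.softmax(scores, dim=0) @ v[lo:hi]
    return out
\end{verbatim}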
The convolution can be extended to the 2-dimensional case by taking into account interactions between features learned by the different attention heads of the Transformer. In the original Transformer, each head independently models a distinct set of linguistic properties and dependencies among tokens \cite{raganato-tiedemann-2018-analysis}. By applying a 2-dimensional convolution, where the second dimension is the index of the attention head, we explicitly allow each head to interact with the features learned for its adjacent sub-spaces. A shortcoming of the original implementation is that the first and the last heads do not interact, as they are not assumed to be adjacent. Thus, we assume that by treating the heads' sub-spaces periodically, i.e., applying circular convolution along the head dimension, we can increase the model's effectiveness. In Section~\ref{sec:experiments}, we evaluate both the original version and our modification.
\begin{equation}
\widetilde{\mathbf{K}}^h = \bigcup [\widehat{\mathbf{K}}^{h-\frac{N}{2}}, \dots, \widehat{\mathbf{K}}^{h}, \dots, \widehat{\mathbf{K}}^{h+\frac{N}{2}}] ,
\end{equation}
\begin{equation}
\widetilde{\mathbf{V}}^h = \bigcup [\widehat{\mathbf{V}}^{h-\frac{N}{2}}, \dots, \widehat{\mathbf{V}}^{h}, \dots, \widehat{\mathbf{V}}^{h+\frac{N}{2}}] ,
\end{equation}
\begin{equation}
\mathbf{o}^h_i = \textsc{Att}(\mathbf{q}^h_i, \widetilde{\mathbf{K}}^h) \widetilde{\mathbf{V}}^h ,
\end{equation}
where $N+1$ ($N \le H$) is the size of the window region over heads and $\bigcup$ stands for the union of keys $\widehat{\mathbf{K}}^h$ and values $\widehat{\mathbf{V}}^h$ from different subspaces.
\\
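The circular variant over the head dimension can be sketched as follows (again our own simplified illustration; head indices wrap around modulo the number of heads $H$, so the first and last heads become adjacent):
\begin{verbatim}
import torch

def circular_head_union(per_head, h, N):
    # per_head: list of H tensors; per_head[j] holds the windowed
    #           keys K_hat (or values V_hat) of head j.
    # h:        index of the current head
    # N:        even window parameter over the head dimension
    H = len(per_head)
    half = N // 2
    idx = [(h + j) % H for j in range(-half, half + 1)]
    return torch.cat([per_head[j] for j in idx], dim=0)
\end{verbatim}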
Convolutional self-attention has been shown to be very effective in Machine Translation and several other NLP tasks. However, to our knowledge, it has never been applied to the text summarization problem. For the experiments reported in this paper, we created our own implementation of local attention and of the convolutional self-attention network (Transformer), supporting both the 1D and 2D modes with the kernel sizes as system parameters. As in \newcite{localattention}, we incorporate convolutional self-attention into the Transformer encoder by placing it in the position of the self-attention in the lower layers. In Section~\ref{sec:experiments}, we show that the low-level modeling capabilities of our encoder provide a strong boost to the model's prediction accuracy in the text summarization task.
\subsection{BERT-Conditioned Encoder}
\label{sec:knowledgetransferencoder}
The main task of the encoder is to remember all the semantic and syntactic information from the input text that the decoder needs to generate the output. Knowledge transfer from a language model should improve the encoder's ability to remember the important information due to the much larger corpus used in the pre-training phase compared to the corpus used for text summarization training. We thus condition our encoder on the BERT language model.
For the encoder conditioning, we used the most straightforward strategy recommended for BERT-based models: placing the pre-trained language model in the encoder as an embedding layer. This makes the embeddings of the system context-dependent. We decided not to fine-tune BERT for the sake of memory and time economy. Instead, we follow the general recommendation of concatenating the hidden states of the last four layers of BERT into a 3072-dimensional embedding vector \cite{devlin2018}. We use two variations of the BERT-based encoder. The first model uses only BERT to encode the input sequence and the second model feeds BERT's generated embeddings into the vanilla Transformer encoder.
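A minimal sketch of this embedding strategy, using the HuggingFace \texttt{transformers} library for illustration (the library and model name are our assumptions for the sketch; our actual system is built on OpenNMT, see Section~\ref{sec:experiments}):
\begin{verbatim}
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased",
                                 output_hidden_states=True)

def contextual_embeddings(text):
    # Concatenate the hidden states of the last four BERT layers
    # into one 3072-dimensional vector per token (4 x 768).
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden_states = bert(**inputs).hidden_states
    return torch.cat(hidden_states[-4:], dim=-1)
\end{verbatim}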
\subsection{BERT-Windowing}
\label{sec:windowing}
One of the key features of our approach is its ability to overcome the length limitations of BERT, allowing it to deal with longer documents. BERT's maximum supported sequence length is 512 tokens\footnote{These are not tokens in the traditional sense, but so-called WordPiece tokens, see \newcite{devlin2018}.}, which is smaller than the average size of texts used in most summarization datasets. Our method relies on the well-known technique of windowing which, to our knowledge, has not been used before either in BERT-based models or in abstractive text summarization research (Figure~\ref{fig:windowing}). We apply BERT to windows of the text with a fixed stride and generate $N$ matrices, each matrix embedding one window. Then we combine them by reversing the split: the vectors at overlapping positions are averaged (by summing and dividing by the number of overlapping vectors). As a result, we obtain a matrix of embeddings with the shape of the hidden size times the length of the text. The drawback of this approach is that we reduce the size of the context, as each resulting vector is calculated based on at most twice the window size in tokens. Besides, splitting the text into equal-size windows harms the consistency of the input, as some sentences will be split in an arbitrary manner between two adjacent windows. Despite this drawback, we assume that this procedure will nevertheless improve the accuracy of the encoder trained on the non-truncated texts. We set the window size to the maximum of 512 tokens and the stride to 256. We consider this stride size optimal due to the trade-off between the average context size and the computational requirements of the model (number of windows). This ensures that every token has a context of 768 tokens, except for the 256 initial and final tokens, which only have a 512-token context.
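A minimal sketch of the procedure (our own simplified illustration; \texttt{encode} stands for any fixed-length BERT encoder returning one embedding vector per input position):
\begin{verbatim}
import torch

def window_encode(encode, input_ids,
                  window=512, stride=256, hidden=768):
    # Encode a long sequence in overlapping windows; vectors at
    # overlapping positions are summed and divided by their counts.
    n = input_ids.size(0)
    out = torch.zeros(n, hidden)
    counts = torch.zeros(n, 1)
    starts = list(range(0, max(n - window, 0) + 1, stride))
    if starts[-1] + window < n:   # make sure the tail is covered
        starts.append(n - window)
    for s in starts:
        e = min(s + window, n)
        out[s:e] += encode(input_ids[s:e])   # (e - s, hidden)
        counts[s:e] += 1
    return out / counts
\end{verbatim}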
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{img/windowing.png}
\caption{Integration of BERT-generated contextual representations from two windows}\label{fig:windowing}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.45]{img/Integration.png}\\
\caption{Two different approaches for the integration of the BERT-conditioning with Convolutional Self-Attention}
\label{fig:integration}
\end{figure*}
\subsection{BERT-Conditioned Decoder\label{sec:knowledgetransferdecoder}}
In the decoder, pre-training is applied in a similar way. The main difference is that instead of the final output of BERT we use only its word embedding matrix (without positional embeddings). The reason is that in the decoder the generated probability distribution is conditioned on an incomplete text (the previous summary draft output), while BERT implicitly assumes consistent and complete input \cite{zhang2019pretrainingbased}. As context-independent embeddings are not sufficient to represent the minimal set of features needed for a meaningful prediction, the custom Transformer decoder is always stacked on top of BERT.
Our whole BERT-based model is similar to One-Stage BERT \cite{zhang2019pretrainingbased} and BertSumAbs \cite{Liu_2019} but differs in the usage of the last four hidden states of BERT to create the contextualized representation, in the presence of a Pointer-Generator and in the capability to process long texts. In Figure~\ref{fig:summmodel} we show the schema of the basic model with the BERT-conditioned convolutional self-attention encoder and BERT-conditioned decoder.
\subsection{Integration of BERT and Convolutional Self-Attention}
\label{sec:encoderintegratin}
We evaluated two different ways of integrating the BERT-conditioning with the convolutional self-attention of the model's encoder (Figure~\ref{fig:integration}).
\paragraph{Stacking} This approach comprises feeding the BERT-generated embeddings to the convolutional self-attention Transformer encoder. A potential problem with this approach is that convolutional self-attention is assumed to be beneficial when applied in the lower layers, as its locality modeling feature should help in modeling local dependencies (e.\,g., syntax). At the same time, BERT is a hierarchical model where the last layers target high-level patterns in the sequences (e.\,g., semantics). We assume that applying a network that detects low-level patterns to BERT's output can undermine its generalization abilities.
\paragraph{Concatenation} Because of the considerations raised above, we also develop a second approach which we call Concatenation. We split the convolutional self-attention Transformer encoder into two networks, where the first one uses only convolutional self-attention and the second only the original self-attention (identical to the Transformer encoder). Then we feed the original sequences into BERT and into the convolutional self-attention network in parallel. The resulting embedding vectors are concatenated and fed into the Transformer encoder. In this way, we model the locality at the lower layers of the encoder at the cost of a smaller depth of the network (assuming the same number of layers).
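A schematic sketch of the Concatenation strategy (all sub-modules are placeholders for the components described above; note that the model dimension of the final encoder must be divisible by the number of heads):
\begin{verbatim}
import torch
import torch.nn as nn

class ConcatenationEncoder(nn.Module):
    def __init__(self, bert_embed, conv_attention,
                 d_bert, d_conv, n_layers=2, n_heads=8):
        super().__init__()
        self.bert_embed = bert_embed          # frozen BERT embeddings
        self.conv_attention = conv_attention  # conv. self-attention
        layer = nn.TransformerEncoderLayer(
            d_model=d_bert + d_conv, nhead=n_heads)
        self.encoder = nn.TransformerEncoder(layer,
                                             num_layers=n_layers)

    def forward(self, x):
        # BERT and the convolutional network run in parallel on the
        # raw tokens; their outputs are concatenated and fed into a
        # plain Transformer encoder.
        joint = torch.cat([self.bert_embed(x),
                           self.conv_attention(x)], dim=-1)
        return self.encoder(joint)
\end{verbatim}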
\begin{table}[t]
\centering\small
\begin{tabular}{@{}lccc@{}} \toprule
Method & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
CopyTransformer & 31.95 & 14.49 & 30.02 \\ \midrule
+ 1D conv. & 32.62 & 14.99 & 30.74\\
+ 2D conv. & \textbf{32.72} & \textbf{15.12} & \textbf{30.85}\\
+ 2D Circular conv. & 32.68 & 15.01 & 30.76 \\ \bottomrule
\end{tabular}
\caption{Ablation study of model with Convolutional Self-Attention on the CNN/Daily Mail dataset (kernel sizes are 11 and 3)}
\label{tab:conv}
\end{table}
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{img/convtransformer.png}
\caption{Effect of the window size on ROUGE-1}
\label{fig:window}
\end{figure}
\section{Datasets}
\label{datests}
We aim to develop a system that works in a language-independent way. It assumes that either the upstream components are available in the respective language, or they are themselves language-independent, such as the multilingual version of BERT. Since most summarization datasets are in English, however, we use English for the evaluation and additionally include German to check whether our model can be applied to another language.
\begin{table*}[t]
\centering\small
\begin{tabular}{@{}lrrr@{}} \toprule
Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
Transformer & 24.82 & 6.27 & 22.99 \\
CopyTransformer & 31.95 & 14.49 & 30.02\\
Bert Encoder + Transformer Decoder & 31.3 & 13.37 & 29.46\\
Bert-transformer Encoder + Transformer Decoder & 32.5 & 14.68 & 30.68\\
Bert-transformer Encoder + Bert-transformer Decoder & \textbf{33.23} & \textbf{14.99} & \textbf{31.26}\\ \midrule
Transformer (full text) & 23.18 & 5.15 & 21.48\\
Bert-transformer Encoder + Transformer Decoder (full text) & \textbf{31.51} & \textbf{14.1} & \textbf{29.77}\\ \bottomrule
\end{tabular}
\caption{Ablation study of the BERT-based model on truncated and original CNN/Daily Mail dataset}
\label{tab:bert}
\end{table*}
\begin{table*}[t]
\centering\small
\begin{tabular}{@{}lrrr@{}} \toprule
Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
Transformer & 36.40 & 20.69 & 34.14\\
CopyTransformer & 39.44 & 25.11 & 37.16\\
Bert-transformer Encoder + Transformer Decoder & \textbf{44.01} & \textbf{29.60} & \textbf{41.65}\\
Bert-transformer Encoder + Bert-transformer Decoder & 43.22 & 29.01 & 40.84\\ \midrule
Transformer (full text) & 34.76 & 18.65 & 32.61\\
Bert-transformer Encoder + Transformer Decoder (full text) & \textbf{45} & \textbf{30.49} & \textbf{42.64}\\ \bottomrule
\end{tabular}
\caption{Ablation study of the BERT-based model on the truncated and original SwissText dataset}
\label{tab:bertswiss}
\end{table*}
\subsection{CNN/Daily Mail}
\label{sec:cnndm}
Our experiments are mainly conducted on the CNN/Daily Mail dataset \cite{Hermann2015,Nallapati2016}. It contains a collection of news articles paired with multi-sentence summaries published on the CNN and Daily Mail websites. This dataset is the de facto standard for training summarization models. We use the non-anonymized data as was used for training of the most recent state-of-the-art models (e.\,g., \newcite{See2017}). The raw dataset consists of separate text files each representing a single article or a summary. We use the data in its preprocessed version as provided by \newcite{gehrmann2018bottom}. It has 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs.
To align the data with the vocabulary of BERT we tokenized it using the BPE-based WordPiece tokenizer \cite{devlin2018}. As all samples in BERT's training data are prepended with the special token ``[CLS]'', we follow this and add it to every source text in our dataset. In the resulting dataset, the average lengths of an article and a summary are 895 and 63 tokens, respectively. In most of our experiments, we use the clipped version of the training and validation datasets with each article truncated to 512 tokens. In the experiments on BERT windowing, we use the full-text version.
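The preprocessing step can be sketched as follows (the concrete tokenizer class from the HuggingFace \texttt{transformers} library is our assumption for the illustration):
\begin{verbatim}
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def preprocess(article, max_len=512):
    # Prepend "[CLS]", as in BERT's training data, then truncate.
    tokens = ["[CLS]"] + tokenizer.tokenize(article)
    return tokens[:max_len]
\end{verbatim}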
\subsection{SwissText Dataset}
\label{sec:gerwiki}
To evaluate the efficiency of the model in a multi-lingual, multi-domain environment we conduct a series of experiments on the German SwissText dataset. This dataset was created for the 1st German Text Summarization Challenge at the 4th Swiss Text Analytics Conference -- SwissText 2019 \cite{Swisstext}. It was designed to explore different ideas and solutions regarding abstractive summarization of German texts. To the best of our knowledge, it is the first long document summarization dataset in the German language that is publicly available. The data was extracted from the German Wikipedia and represents mostly biographical articles and definitions of various concepts.
The dataset was tokenized by the multilingual WordPiece tokenizer \cite{devlin2018} and preprocessed in the same way as the CNN/Daily Mail dataset. It was split into training, validation and testing sets containing 90,000, 5,000 and 5,000 samples, respectively. The average length of a source sequence is 918 tokens, which makes this dataset suitable for our experiments on windowing.
\begin{table*}[b]
\centering\small
\begin{tabular}{@{}llrrr@{}} \toprule
Method of Integration & Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
\multirow{2}{*}{Stacking} & BERT+CopyTransformer & 35.28 & \textbf{17.12} & 33.31\\
& BERT+Convolutional CopyTransformer & \textbf{35.4} & 16.82 & \textbf{33.31}\\ \midrule
\multirow{2}{*}{Concatenation} & BERT+CopyTransformer & 34.82 & 16.46 & 32.79 \\
& BERT+Convolutional CopyTransformer & \textbf{35.26} & \textbf{16.79} & \textbf{33.22}\\ \bottomrule
\end{tabular}
\caption{Different strategies for integrating language models with convolutional Self-Attention (CNN/Daily Mail dataset)}
\label{tab:integration}
\end{table*}
\section{Experiments}
\label{sec:experiments}
Our system is built on the OpenNMT library. For training, we use cross-entropy loss and the Adam optimizer with the Noam decay method \cite{kingma2014adam}. Regularization is done via dropout and label smoothing. For evaluation, we calculate F1-scores for ROUGE using the files2rouge library. The ROUGE evaluation is performed on the sequences of WordPiece tokens.
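For reference, a sketch of the Noam decay schedule used together with Adam (the standard schedule of \newcite{Vaswani2017}; the warm-up and scale values shown are illustrative, not our exact settings):
\begin{verbatim}
def noam_lr(step, d_model=512, warmup=8000, factor=2.0):
    # Linear warm-up followed by inverse square-root decay.
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5,
                                          step * warmup ** -1.5)
\end{verbatim}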
\subsection{Locality Modeling}
\label{sec:localmodel}
To evaluate the effect of convolution on self-attention we introduce it in the first layer of the encoder. We use the same kernel sizes as in \newcite{localattention}. In these experiments, to accelerate the training process, we use a small model with a hidden size of 256, four self-attention heads and three layers in the encoder and decoder. All models are trained for 90,000 training steps with the Coverage Penalty. As a baseline, we use our implementation of CopyTransformer. In contrast to \newcite{See2017}, we do not re-use the attention layer for the decoder but train a new Pointer-Generator layer from scratch.
The results are presented in Table~\ref{tab:conv}. We see that both convolutions over tokens and over attention heads improve the ROUGE scores. Standard convolution outperformed circular convolution on ROUGE-1, ROUGE-2 and ROUGE-L by 0.06, 0.13 and 0.09 percent, respectively.
We also investigated the effect of the window size of the 1-dimensional convolution on ROUGE scores (Figure~\ref{fig:window}). In contrast to findings in Machine Translation, we found that size 13 returns the best result for the summarization task.
\subsection{BERT Conditioning}
\label{sec:bertcond}
To find the optimal architecture of the BERT-based abstractive summarizer we conducted an ablation study (Table~\ref{tab:bert}). All hyperparameters were set equal to the ones in the convolutional self-attention experiments. On the CNN/Daily Mail dataset we test three different models: BERT encoder + Transformer decoder, BERT-Transformer encoder + Transformer decoder and BERT-Transformer encoder + BERT-Transformer decoder. The version of BERT used in the experiments is BERT-Base. As the baseline, we use the Transformer without Pointer-Generator. From the results, we observe that BERT improves the efficiency of the model when it is used in both encoder and decoder. Besides, BERT in the encoder is more effective when it is used to produce embeddings for the standard Transformer encoder than when it is used solely as an encoder. Even without a Pointer-Generator, our model outperformed the CopyTransformer baseline by 1.28, 0.5 and 1.24 on ROUGE-1, ROUGE-2 and ROUGE-L.
To evaluate our BERT-windowing method we conducted experiments on the full texts. Our approach outperforms the baseline, which shows that the method can be successfully applied to texts longer than 512 tokens. The final performance of this model is still lower than that of the model trained on the truncated texts, but as the same pattern can be observed for the baselines, we assume this relates to a characteristic of the dataset: important information tends to appear in the first sentences of a text.
On the SwissText data we use the multilingual version of BERT-Base. We evaluated two models, both with a BERT-Transformer encoder, one with a Transformer and one with a BERT-Transformer decoder (Table~\ref{tab:bertswiss}). The introduction of BERT into the Transformer increased the ROUGE-1, ROUGE-2 and ROUGE-L scores by 7.21, 8.91 and 7.51 percent, respectively. At the same time, the usage of BERT in the decoder decreased the overall score. We assume the reason is that in multilingual BERT, due to its language-independence, the embedding matrix outputs less precise contextualized representations, which undermines their benefit for the summarization task.
On the non-truncated texts, usage of the BERT-Transformer encoder increased the ROUGE scores by 10.23, 11.84 and 10.03 percent. Furthermore, it yields higher scores than the same model on the truncated texts.
This demonstrates the usability of BERT-windowing for this particular dataset. We assume that the difference in performance between the CNN/Daily Mail and SwissText datasets reflects the difference in the distribution of useful information within the texts: in the SwissText dataset, it is spread more uniformly than in the CNN/Daily Mail dataset. We conducted a small experiment comparing the average ROUGE score between a gold summary and the head and the tail of a document (taking the first or last \textit{n} sentences, where \textit{n} corresponds to the length of the gold summary) on both datasets. The difference between taking the head and the tail on the SwissText dataset (ROUGE-L of 34.79 vs. 20.15, respectively) was much smaller than on CNN/Daily Mail (ROUGE-L of 16.95 vs.~12.27, respectively), which confirms our hypothesis.
\subsection{Integration Strategies}
\label{sec:integration}
To evaluate the integration strategies, we trained two models with the respective BERT-based baselines. Both models have in their encoder two Transformer layers and one Convolutional Transformer layer placed on top of BERT or in parallel, respectively (Table~\ref{tab:integration}).
The method of stacking does not provide any significant improvement. With the introduction of convolutional self-attention, only ROUGE-1 increased, by 0.12 percent, while ROUGE-2 dropped by 0.3 and ROUGE-L remained the same. Considering that in many domains ROUGE-2 correlates best with human assessment (see Section~\ref{sec:evaluation}), we dismiss this method. With the concatenation strategy, convolution is shown to be much more efficient, increasing the ROUGE scores by 0.44, 0.33 and 0.43 percent. This confirms our hypothesis that locality modeling is most efficient when applied at the bottom, on the non-contextualized word representations. Unfortunately, this model failed to outperform the stacking baseline. We conclude that the concatenation architecture undermines the performance of the Transformer model, and that convolutional self-attention is not beneficial when used together with pre-trained language models. Hence, we decided to train our two final models separately.
\begin{table*}[t]
\centering\small
\begin{tabular}{@{}llll@{}} \toprule
Method & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
BiLSTM + Pointer-Generator + Coverage \cite{See2017} & 39.53 & 17.28 & 36.38 \\
ML + Intra-Attention \cite{paulus2017deep} & 38.30 & 14.81 & 35.49 \\
CopyTransformer \cite{gehrmann2018bottom} & 39.25 & 17.54 & 36.45 \\
Bottom-Up Summarization \cite{gehrmann2018bottom} & 41.22 & 18.68 & 38.34 \\
One-Stage BERT \cite{zhang2019pretrainingbased} & 39.50 & 17.87 & 36.65 \\
Two-Stage BERT \cite{zhang2019pretrainingbased} & 41.38 & 19.34 & 38.37\\
ML + Intra-Attention + RL \cite{paulus2017deep} & 39.87 & 15.82 & 36.90 \\
Key information guide network \cite{li-etal-2018-guiding} & 38.95 & 17.12 & 35.68 \\
Sentence Rewriting \cite{chen-bansal-2018-fast} & 40.88 & 17.80 & 38.54 \\
BertSumAbs \cite{Liu_2019} & \textbf{41.72} & \textbf{19.39} & \textbf{38.76}\\ \midrule
CopyTransformer (our implementation) & 38.73 & 17.28 & 35.85\\
Convolutional CopyTransformer & 38.98 & 17.69 & 35.97\\
BERT+CopyTransformer (enc., dec.) & \textbf{40} & \textbf{18.42} & \textbf{37.15}\\ \bottomrule
\end{tabular}
\caption{ROUGE scores for various models on the CNN/Daily Mail test set. The first section shows different state-of-the-art models, the second section presents our models and baseline.}
\label{tab:abs}
\end{table*}
\begin{table*}[t]
\centering\small
\begin{tabular}{@{}llll@{}} \toprule
Method & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \midrule
CopyTransformer (our implementation) & 39.5 & 22.36 & 36.97\\
Convolutional CopyTransformer & 40.54 & 23.62 & 38.06\\
BERT+CopyTransformer (enc.) & \textbf{42.61} & \textbf{25.25} & \textbf{39.85}\\ \bottomrule
\end{tabular}
\caption{ROUGE scores for our models on the SwissText test set}
\label{tab:absswiss}
\end{table*}
\subsection{Model Comparison}
\label{sec:finalmodel}
For the final comparison of our model to other state-of-the-art methods we conducted experiments on the CNN/Daily Mail dataset. We set the hidden size to 512, the number of Transformer layers in the encoder and decoder to six and the number of self-attention heads to eight. Hence, our baseline is smaller than the original CopyTransformer \cite{gehrmann2018bottom}, which may be the reason why it performs slightly worse (Table~\ref{tab:abs}). BERT-conditioning was used in both the encoder and decoder. The sizes of the convolution kernels are set to 13 and 3. The networks were trained for 200,000 training steps on a single NVIDIA GeForce GTX 1080 Ti. The summaries were generated via the beam search algorithm with the beam size set to four. Finally, the generated summaries were detokenized back to sequences of words separated by spaces.
For the BERT-based model, we set the minimum length of a generated summary to 55 as we found that without such restriction the model was prone to generate shorter sequences than in the test dataset. The model outperformed the baseline by 1.27 on ROUGE-1, 1.14 on ROUGE-2 and 1.3 on ROUGE-L. This is better than the scores of One-Stage BERT but still worse than the two-stage and BertSumAbs models.
For the convolutional CopyTransformer we use convolutional self-attention in the first three layers of the encoder. It increased ROUGE-1, ROUGE-2 and ROUGE-L by 0.25, 0.41 and 0.12.
Furthermore, we present the first publicly available benchmark for the SwissText dataset (Table~\ref{tab:absswiss}).\footnote{For comparability with our other model we include results for the bigger BERT+CopyTransformer model. At the same time, we found that the smaller model without the copy mechanism achieved higher scores with 45.12 ROUGE-1, 28.38 ROUGE-2 and 42.99 ROUGE-L. This needs to be explored in future work.} All parameters are equal to the CNN/Daily Mail baseline. BERT-conditioning was used only in the encoder. The networks were trained on the truncated texts for 90,000 training steps. From the results we see that the convolutional CopyTransformer is considerably more effective than on the CNN/Daily Mail dataset, outperforming the baseline by 1.04 percent on ROUGE-1, 1.26 on ROUGE-2 and 1.09 on ROUGE-L. The BERT-based model achieved the highest scores.
\section{Qualitative Analysis}
\begin{figure*}[t]
\small
\noindent\fbox{%
\parbox{484pt}{%
\textbf{Gold summary:} researchers are developing a computer that can write weather forecasts . it takes meteorological data and writes a report designed to mimic a human . this process is known as ` natural language generation ' - lrb - nlg - rrb - . a prototype system will be tested on the bbc website later this year .
}%
}
\noindent\fbox{%
\parbox{484pt}{%
\textbf{Transformer:} researchers from london and edinburgh have developed a computer that can \colorbox{GreenYellow}{collateological} information . these computer - generated weather updates are being tested by scientists at heriot - watt university and university college london . if the project is successful , a prototype system will be tested by generating local weather reports on the bbc ' s website . currently , the bbc website features 10 reports written by meteorologists .
}%
}
\noindent\fbox{%
\parbox{484pt}{%
\textbf{Convolutional Transformer:} researchers from london and edinburgh have developed a computer that can collate \colorbox{GreenYellow}{meterological} information and then produce forecasts as if they were written by a human . it uses a process known as ` natural language generation ' - lrb - nlg - rrb - . these computer - generated weather updates are being tested by scientists at heriot - watt university and university college london . if the project is successful , a prototype system will be tested by generating local weather reports on the bbc ' s website .
}%
}
\noindent\fbox{%
\parbox{484pt}{%
\textbf{BERT-Transformer:} researchers from london and edinburgh have developed a computer that can collate meteorological information and produce forecasts as if they were written by a human . \colorbox{SkyBlue}{using met office data , it uses a process} known as ` natural language generation ' - lrb - nlg - rrb - . if the project is successful , a prototype system will be tested by generating local weather reports on the bbc ' s website .
}%
}
\caption{Comparison of the output of models on an example from the CNN/Daily Mail test set. Surface realisation mistakes are highlighted in green and a typical abstractive feature, illustrating the re-arranging of a sentence, is highlighted in blue.\label{box:texts}}
\end{figure*}
\label{sec:testsamples}
As ROUGE evaluation is not always a valid method for quality assessment, we perceive the need for an additional, manual evaluation. The best solution would be to conduct a fine-grained study of the models' outputs by manually ranking them in terms of semantic coherence, grammaticality, etc. However, due to the time-consuming nature of such an evaluation, we resorted to a qualitative analysis comparing several summaries generated by the different models. Figure~\ref{box:texts} includes the reference summary and those generated by the different models. Comparing the first sentence, we see that the vanilla Transformer model performed worst, copying only part of the original sentence and omitting some characters in the word ``meteorological''. The model with convolution copied the whole sentence but still made a spelling error. Finally, only the BERT-based model succeeded in generating the right token, ``meteorological''. Also, we see that while the BERT-based model's summary conveys the same meaning as the gold summary, the convolutional Transformer generates one, and the Transformer two, sentences that are not present in the gold summary. Overall, on the given example all models provide a summary of extractive nature and only the BERT-based model shows some level of abstractiveness, merging parts of two sentences into a single one (in the second sentence of its summary). This is far from the gold summary, where every sentence in some way paraphrases the original text. Hence, on this particular example, our models demonstrate some explicit improvements. Still, abstractive summarization remains challenging. The paraphrasing capabilities of all state-of-the-art systems are low and the models are not guaranteed to produce summaries which follow the initial order of the sequence of events.
\section{Discussion: Summarization Evaluation}
\label{sec:evaluation}
ROUGE \cite{lin-2004-rouge} is the most widely adopted metric used for evaluating automatic text summarization approaches. The evaluation is made through comparison of a set of system-generated candidate summaries with a gold standard summary. The availability of the corresponding software and its performance contributed to its popularity \cite{cohan2016revisiting}. Despite its adoption in many studies, the metric has faced some key criticisms.
The main criticism of ROUGE is that it does not take into account the meaning expressed in the sequences. The metric was developed based on the assumption that a high quality generated candidate summary should share many words with a single human-made gold standard summary. This assumption may be very relevant to extractive, but not to abstractive summarization, where different terminology and paraphrasing can be used to express the same meaning \cite{cohan2016revisiting}. This results in the metric assigning low scores to any summary not matching the gold standard on the surface level. This also allows cheating the metric by generating ungrammatical and nonsensical summaries having very high ROUGE scores. \newcite{SJOBERGH20071500} show how this can be achieved by choosing the most frequent bigrams from the input document.
ROUGE adoption relies on its correlation with human assessment. In the first research on the DUC and TDT-3 datasets containing news articles, ROUGE indeed showed a high correlation with the human judgments \cite{lin-2004-rouge,dorr-etal-2005-methodology}. However, more recent research questions the suitability of ROUGE for various settings. \newcite{conroy-dang-2008-mind} show that on DUC data the linguistic and responsiveness scores of some systems do not correspond to the high ROUGE scores. \newcite{cohan2016revisiting} demonstrate that for summarization of scientific texts, ROUGE-1 and ROUGE-L have very low correlations with the gold summaries. ROUGE-N correlates better but is still far from the ideal case. This follows the result of \newcite{murray}, showing that the unigram match between the candidate summary and gold summary is not an accurate metric to assess quality.
Another problem is that the credibility of ROUGE was demonstrated for the systems which operated in the low-scoring range. \newcite{peyrard-2019-studying} show that different summarization evaluation metrics correlate differently with human judgements for the higher-scoring range in which state-of-the-art systems now operate. Furthermore, improvements measured with one metric do not necessarily lead to improvements when using others.
This concern led to the development of new evaluation metrics. \newcite{peyrard-2019-simple} define metrics for important concepts with regard to summarization: Redundancy, Relevance, and Informativeness, in line with Shannon's entropy. From these definitions they formulate a metric of Importance which correlates better with human judgments. \newcite{clark-etal-2019-sentence} propose the metric of Sentence Mover's Similarity which operates on the semantic level and also correlates better with human evaluation. A summarization model trained via Reinforcement Learning with this metric as reward achieved higher scores in both human and ROUGE-based evaluation.
Despite these drawbacks, the broad adoption of ROUGE makes it the only way to compare the efficiency of our model with other state-of-the-art models. The evaluation of our system on the SwissText dataset confirms that its efficiency (in terms of ROUGE) is not restricted to CNN/Daily Mail data only.
\section{Conclusion}
\label{sec:conclusions}
We present a new abstractive text summarization model which combines convolutional self-attention with BERT conditioning. We compare the performance of our system to baselines and to competing systems on the CNN/Daily Mail dataset for English and report an improvement over state-of-the-art results using ROUGE scores. To establish the suitability of our model for languages other than English and domains other than that of the CNN/Daily Mail dataset, we apply it to the German SwissText dataset and present scores on this setup. A key contribution of our model is the ability to deal with texts longer than BERT's window size, which is limited to 512 WordPiece tokens. We present a cascading approach, evaluate it on texts longer than this window size and demonstrate its performance when dealing with longer input texts.
The source code of our system is publicly available.\footnote{\url{https://github.com/axenov/BERT-Summ-OpenNMT}} A functional service based on the model is currently being integrated, as a summarization service, in the platforms Lynx \cite{morenoschneider2020j}, QURATOR \cite{rehm2020d} and European Language Grid \cite{rehm2020m}.
\section*{Acknowledgements}
The work presented in this paper has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no.~780602 (Lynx) and from the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Wachs\-tums\-kern no.~03WKDA1A).
\section{Bibliographical References}
\label{main:ref}
\bibliographystyle{./lrec}
\section{Introduction}\label{s1}\setcounter{equation}{0}
Volterra integral equations (VIEs) are of fundamental importance in the mathematical modelling of many scientific, economic, physical, chemical and biological phenomena \cite{corduneanu1991integral, jerri1999introduction}. Taking into account the fact that a general initial value problem can be rewritten as a Volterra integral equation, and also due to the basic role of VIEs in the study of evolutionary processes, Volterra equations have gained much popularity in the fields of functional and numerical analysis, and many theoretical and numerical efforts have been devoted to studying their solutions and properties (see e.g. \cite{brunner, hack}).
In recent years, integral equation models have also found their way onto Wall Street, and some practical financial problems, mainly from the field of financial option pricing and hedging, are now reformulated as Volterra integral equations (see e.g. \cite{chen, evans, patrik2, keller, shev2} and the many references therein).
This line of research started with the pioneering contributions of Kim \cite{kim}, Jacka \cite{jacka} and Carr et al. \cite{carr1}, who derived nonlinear integral representations for the ``early exercise premium'' where the underlying asset follows a geometric Brownian motion. Soon after, a number of numerical methods for American option pricing using these integral representations were proposed by Broadie and Detemple \cite{broadie1996american}, Huang et al. \cite{huang1996pricing}, Ju \cite{ju}, AitSahlia and Lai \cite{lai2}, and Kallast and Kivinukk \cite{kallast}, among others.
Based on the fact that integral operators have a smoothing character and could potentially increase the regularity properties of their input functions, the methodology of transforming partial differential equations into equivalent integral equations, known in the literature as ``Boundary Integral Equation Method'' has been a widely developed field within the scientific computing community \cite{stas}. In the context of Black-Scholes partial differential equation (PDE) considered as a parabolic free boundary problem, such an approach has been employed successfully based on different transformation techniques (e.g. Fourier, Laplace, Mellin, etc.) to arrive at a variety of integral equation formulations of the problem \cite{MR2087015}.
Although some studies in the finance literature have criticized the use of integral equation methods in option pricing\footnote{Due mainly to their low speed and high computational costs.}, in recent years this point of view has changed and recent research has shown a promising speed-accuracy performance for the integral equation approach \cite{andersen}. This has led some researchers to put forth their efforts to explore and extend these integral representations with the hope to make them a method of choice in real-time computing frameworks.
It is worthwhile to mention that the widespread appearance of IEs in finance will potentially open new avenues in the study of some integral equation families which have previously been studied only in restricted senses (e.g. non-standard Volterra integral equations \cite{brunner, guan}). Moreover, there are also incentives to invent new tools and techniques in this rapidly developing field of study to accommodate the arising problems and challenges.
These integral equation representations are usually derived from the Black-Scholes partial differential equation, starting from different departure points, employing a wide range of transforms and resulting in a variety of forms with different characteristics. It is therefore helpful to have a comprehensive categorization and a coherent presentation of these various forms in order to gain some insight into their behaviors. This will also be of help when we try to extend these techniques to other asset price dynamics and option payoff structures.
Recently, Chiarella and his coworkers \cite{chiarella2014numerical, chia} have provided a survey on integral representations of the optimal exercise boundary, arising from the American option pricing problem. As a first contribution of this kind, their work could be extended to include more recent developments in the field, as well as some less well-known representations in a unified manner. In this respect, the first part of this paper is concerned with a comprehensive review of the existing approaches in the literature for driving the integral equation representations of the early exercise boundary. We also present some general considerations concerning the existence and uniqueness issue for these integral equations.
Among the existing integral equation reformulations of the early exercise boundary, Kim's representation \cite{kim} is of particular interest, partly due to the financial interpretation of each term in the equation. This has resulted in the development of several approximation techniques in the finance literature to solve this equation \cite{lai2, ju, kallast}. Much of the numerical research in this area is based on direct discretization of the integral terms, called, in the integral equations literature, the Nystr\"{o}m \cite{atkinson} or quadrature method \cite{hack}. However, there is still much room for improving the performance of numerical approaches based on interpolatory quadrature rules to solve the problem at hand.
Taking into account the fact that the early exercise boundary has some kind of singularity near expiry (see e.g. \cite{evans, stamicar}), and noting that this knowledge must be incorporated into the design of the numerical scheme, we consider here a one-dimensional reformulation of Kim's integral equation proposed by Hou et al. \cite{little} and employ a generalization of the Nystr\"{o}m method, called the product integration method \cite{atkinson}, specifically designed to tackle this singular behavior. More precisely, we employ an approximation of the kernel based on linear barycentric rational interpolation to manage the weakly singular nature of the integral equation \cite{cuminato, orsi, hoog}.
In this respect, after a brief review of the existing numerical approaches utilized for the approximation of the early exercise boundary, we provide theoretical and numerical evidence that the product integration method based on linear barycentric rational interpolation is an efficient way to approximate the solution. In the sequel, the integral representation of the American option price and its numerical approximation will be considered and an upper bound for the incurred error will be given.
The structure of the paper is as follows. After presenting a survey of existing techniques to arrive at integral equation representations for the early exercise boundary in Section \ref{option}, we introduce the product integration method based on barycentric rational interpolation to approximate the free boundary as well as a convergence analysis of the numerical method in Section \ref{NMEEB}. We then employ the corresponding barycentric rational quadrature to find the price of the option from its integral representation in Section \ref{thistable}. We have performed some numerical experiments in Section \ref{NE} to confirm the theoretical findings of the paper and also a detailed comparison is made between the presented method and some competing approaches. Section \ref{conclusion} concludes the paper by pointing out to some research questions worthy of consideration in the future.
\section{From Option Valuation to Integral Equations}\label{option}
In this and the following sections, we assume that the asset price process, $\{S(t),t\geq 0\}$, follows a lognormal diffusion of the form
\[dS(t) = (r-\delta)S(t)dt + \sigma S(t)dW(t),\]
in which $\{W(t),t\geq 0\}$ is the standard Wiener process, $r$ is the constant interest rate, $\sigma$ is the constant volatility and $\delta$ is the continuous proportional dividend yield.
Our aim here is to give a brief overview of different methods to derive integral equations describing the early exercise boundary of an American call or put option. For this purpose, we start from the famous Black-Scholes PDE of the form
\begin{gather}\label{pde}
\frac{\partial V}{\partial t}+\frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}V}{\partial S^{2}} +(r-\delta)S\frac{\partial V}{\partial S}-rV = 0,
\end{gather}
in which $V(t, S)$ describes the price of an option at time $t$, when the underlying security price is equal to $S = S(t)$.
The associated boundary and initial conditions in the case of an American put option with $V(t, S)\equiv P(t, S)$ are of the form:
\begin{align}\label{callcon}
P(t, S)&=K - S, \quad \text{for} \quad S = \mathcal{B}(t), \quad 0\leq t <T, \\\label{smoothcal}
\frac{\partial P}{\partial S}(t, S)&=-1, \quad \text{for} \quad S = \mathcal{B}(t), \quad 0\leq t <T,\\
P(T, S)& = \max\{0, K-S \}, \quad
\lim_{S \rightarrow \infty} P(t, S) = 0,
\end{align}
and the corresponding conditions for an American call with $V(t, S)\equiv C(t, S)$ could be written as:
\begin{align}\label{putcon}
C(t, S)&=S - K, \quad \text{on} \quad S = \mathcal{B}(t), \quad 0\leq t <T, \\\label{smoothput1}
\frac{\partial C}{\partial S}(t, S)&=1, \quad \text{on} \quad S = \mathcal{B}(t), \quad 0\leq t <T,\\
C(T, S)& = \max\{0, S - K \}, \quad \lim_{S \rightarrow 0} C(t, S) = 0.\label{tah}
\end{align}
In the above expressions, $K$ is the exercise price of the option, $T$ is the expiry and $\mathcal{B}(t)$ is a free boundary corresponding to the ``optimal exercise price'' or the ``early exercise boundary'', to be determined alongside the option price\footnote{As a time-dependent function, $\mathcal{B}(t)$ could be utilized for dividing the hold and exercise regions of the option.}.
In recent years, there have been many efforts to find these unknowns by different analytical and numerical approaches. Among the semi-analytical techniques, one could mention the quadratic approximation method of Barone-Adesi and Whaley \cite{barron}, two-point and three-point maximum methods of Bunch and Johnson \cite{bunch1992simple} and the lower and upper bound approximation methods of Broadie and Detemple \cite{broadie1996american}. From a numerical discretization point of view, the finite difference \cite{duffy}, finite element \cite{achdou} and spectral methods \cite{chen2012new} could also be mentioned.
As an alternative, and to obtain an expression for the solution of the PDE, we can apply a wide range of transform techniques available in the literature \cite{stamicar, shev2} to reduce the problem dimension. Roughly speaking, transform methods convert the PDE into one or more ordinary differential equations which, after being solved and inverse-transformed, yield an expression for the price. The next natural step is to use the smooth pasting condition (\ref{smoothcal}) or (\ref{smoothput1}) to arrive at a nonlinear integral equation for the early exercise boundary.
Among other approaches to represent the solution of the pricing equation, we could also mention the Green's function method \cite{evans} and optimal stopping representation \cite{peskir}. In the following, we give a brief outline of these approaches towards tackling the pricing problem:
\begin{description}
\item [(Complete and Incomplete) Fourier Transform Approach] \label{A}
Applying the Fourier transform to equation (\ref{pde}) leads to a nonlinear integral equation for the free boundary, $\mathcal{B}(t)$, defined recursively and described in detail for the zero-dividend case in \cite{stamicar} and for the non-zero dividend case in \cite{shev2}. In both cases, the obtained integral equations are of non-standard Volterra type (see the Appendix \ref{forieh} for more details).
\item [Laplace Transform Approach]
Utilizing the Laplace transform on equation (\ref{pde}) results in an integral equation for the location of the free boundary, $ \mathcal{B}(t) $ \cite{gada1}. In this case, the nonlinear integral equation is of Fredholm type with an unbounded domain of integration (for more details see the Appendix \ref{laplas}).
\item [Mellin Transform Approach]
Using the Mellin transform technique and employing the convolution property of it (see e.g. \cite{patrik2}), we obtain a class of nonlinear Volterra integral equations of the second kind \cite{brunner}. As it is shown in \cite{panini}, this kind of Volterra integral equation is equivalent to the one obtained from the optimal stopping approach (see the Appendix \ref{melina}).
\item [Green's Function Approach]
Some researchers in the field have adopted the method of Green's functions or fundamental solutions \cite{stack} for solving Eq. (\ref{pde})
which will result in a family of integral and integro-differential equations of Volterra type \cite{chen, evans, keller} (see the Appendix \ref{green1}).
\item[Optimal Stopping Approach] Employing the risk-neutral valuation approach of Cox and Ross \cite{cox}, Kim \cite{kim} obtained an integral equation for the early exercise boundary of an American option as the continuous limit of the valuation formulas that allow early exercise at a finite number of points in time (see the Appendix \ref{stop}). He also obtained an integral representation for the value of the option based on the critical stock price. It should be noticed that Jamshidian \cite{jam} obtained the same representation as Kim \cite{chia, jam} via the Duhamel principle. Furthermore, for the general discrete dividend case, an integral equation for the early exercise boundary of American options is studied extensively in \cite{vellek1, vellek2}.
\end{description}
In Tables \ref{table:1} and \ref{table:2}, we have outlined all of the above forms and also the integral equation classes (\ref{kimnondiv}) and (\ref{kimdiv}) which will be introduced in the sequel. The above approaches provide a variety of integral equations each with specific flavors. Among them, we only mention the following:
\begin{itemize}
\item[$\bullet$] Weakly singular IEs (see Eq. (\ref{aa1})),
\item[$\bullet$] Recursive nonlinear IEs (see Eq. (\ref{Aa2})),
\item[$\bullet$] Urysohn IEs of the first kind (see Eq. (\ref{bb1})),
\item[$\bullet$] Delayed Volterra IEs (see Eq. (\ref{2.4})),
\item[$\bullet$] Fully nonlinear weakly singular Volterra IEs (see Eq. (\ref{shevon})).
\end{itemize}
As a natural question, one could ask whether and how these integral equations are interrelated. Although this question is unanswered in the general case, the relation between Kim's representation and the one obtained from the Mellin transform approach presented by (\ref{melin}) has been studied in \cite{patrik2}. Also, it is worth mentioning that Kim's representation for the price can also be obtained using the Fourier transform (see \cite{underwood2002integral} for more details).
Recently, Alobaidi \textit{et al.} have shown that the integral equations obtained from the Mellin and Laplace transforms are equivalent to the one derived from the Green's function approach \cite{alobaidi2014integral}.
\begin{table}[ht!]
\centering
\begin{tabular}{ |p{3.2cm}|p{8.3cm}|p{2cm}| }
\multicolumn{3}{c}{} \\
\hline
\center{Approach} & \center{Equation} &\begin{center} \small{IE Kind}\end{center} \\
\hline
\begin{center}Fourier Transform \cite{chia, chiarella2014numerical, stamicar, shev2}\end{center}& \begin{center}$ u(t)=g(t,u(t) )+\int_{0}^{t}k(t,s,u(t),u(s))\mathrm{d}s $\end{center} & \begin{center} \small{Nonlinear Weakly Singular Volterra} \end{center}\\
\hline
\begin{center}Laplace Transform \cite{knessl, gada1}\end{center}& \begin{center}$ g(t)=\int_{a}^{b}k(t,s,u(s))\mathrm{d}s $\end{center} \begin{center}$ g(t)=\int_{0}^{\infty}k(t,s,u(s))\mathrm{d}s $\end{center}
& \begin{center} \small{First-kind nonlinear Fredholm \& Weakly Singular Fredholm} \end{center} \\
\hline
\begin{center}Mellin Transform \cite{patrik2, panini}\end{center} & \begin{center}$ u(t)=g(t,u(t) )+\int_{t}^{b}k(t,s,u(t),u(s))\mathrm{d}s $\end{center} & \begin{center} \small{Nonlinear Weakly Singular Volterra} \end{center} \\
\hline
\begin{center}Green's Function \cite{chen, chen1, chia, chiarella2014numerical, evans, kim, keller}\end{center}& \begin{center}$ u(t)=g(t,u(t) )+\int_{t}^{b}k(t,s,u(t),u(s),u'(s))\mathrm{d}s $\end{center}
\begin{center} $ u(t)=g(t,u(t) )
+\int_{0}^{t}k(t,s,u(t),u(s))\mathrm{d}s$ \end{center} & \begin{center} \small{Nonlinear Weakly Singular Volterra Integral and Integro-differential} \end{center} \\
\hline
\begin{center}Optimal Stopping \cite{kim, peskir}\end{center}&
\begin{center} $ u(t)=g(t,u(t) )
+\int_{0}^{t}k(t,s,u(t),u(s))\mathrm{d}s$ \end{center} & \begin{center} \small{Nonlinear Weakly Singular Volterra} \end{center} \\
\hline
\end{tabular}
\caption{Integral equation types arising from the American option pricing problem.}
\label{table:1}
\end{table}
\begin{table}[ht!]
\centering
\begin{tabular}{ |p{3.2cm}|p{8.3cm}|p{2cm}| }
\multicolumn{3}{c}{} \\
\hline
\center{Approach} & \center{Equation} & \begin{center}\small{IE Kind}\end{center} \\
\hline
\begin{center}Hou et al.'s \cite{little}\end{center} & \begin{center} $ g(t,u(t))=\int_{0}^{t}k(t,s,u(t),u(t-s))\mathrm{d}s $
\end{center} & \begin{center} \small{Nonlinear Weakly Singular Volterra }\end{center}\\
\hline
\begin{center}Kim's (2013) \cite{ kim2}\end{center} & \begin{center} $ u(t)=g(t,u(t) )
+\int_{0}^{t}\frac{1}{\sqrt{t-s}}k(t,s,u(t),u(s))\mathrm{d}s $ \end{center}
& \begin{center}\small{Nonlinear Weakly Singular Volterra}\end{center}\\
\hline
\end{tabular}
\caption{One Dimensional Integral Equations}
\label{table:2}
\end{table}
\subsection{Kim's Integral Equation Representation} Among the above-mentioned ways to arrive at integral equation representations, Kim's approach, belonging to the optimal stopping category, is an elegant way to characterize the behavior of the early exercise boundary in the American option pricing literature \cite{chia, chiarella2014numerical, kim}. In this approach, it can be shown (see e.g. \cite{chen1, kim, kim2}) that the early exercise boundary $\mathcal{B}(t)$ of an American put option satisfies a weakly singular Volterra integral equation of the form
\begin{align}\label{kim}
K-\mathcal{B}(t)=p^{E}(t,\mathcal{B}(t)) +& \int_{0}^{t}[rKe^{-r(t-s)}\aleph(-d_{2}(\mathcal{B}(t),t-s,\mathcal{B}(s)))\\\notag
-&\delta \mathcal{B}(t) e^{-\delta (t-s)}\aleph(-d_{1}(\mathcal{B}(t),t-s,\mathcal{B}(s)))]\mathrm{d}s,
\end{align}
in which $ p^{E}(t, S) $ represents the price of an otherwise equivalent European counterpart given by
\begin{equation}\label{IEs}
p^{E}(t, S)= Ke^{-rt}\aleph(- d_{2}(S,t,K))-Se^{-\delta t}\aleph(-d_{1}(S,t,K)),
\end{equation}
and $\aleph(.) $ is the standard cumulative normal distribution function. Furthermore, the functions $ d_{1}(x,t,y) $ and $ d_{2}(x,t,y) $ are defined respectively by
\begin{equation}\label{d1d2}
d_{1}(x,t,y) =\frac{\log(\frac{x}{y})+ (r-\delta + \frac{\sigma^{2}}{2})t}{\sigma\sqrt{t}}, \quad d_{2}(x,t,y) = d_{1}(x,t,y) - \sigma \sqrt{t}.
\end{equation}
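For later reference, the quantities in (\ref{IEs}) and (\ref{d1d2}) can be evaluated directly; a minimal Python sketch (using \texttt{scipy} for the cumulative normal $\aleph$):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def d1(x, t, y, r, delta, sigma):
    return (np.log(x / y)
            + (r - delta + 0.5 * sigma ** 2) * t) \
           / (sigma * np.sqrt(t))

def d2(x, t, y, r, delta, sigma):
    return d1(x, t, y, r, delta, sigma) - sigma * np.sqrt(t)

def put_european(t, S, K, r, delta, sigma):
    # Black-Scholes price of the European put p^E(t, S).
    return (K * np.exp(-r * t)
            * norm.cdf(-d2(S, t, K, r, delta, sigma))
            - S * np.exp(-delta * t)
            * norm.cdf(-d1(S, t, K, r, delta, sigma)))
\end{verbatim}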
In this case, the price of the American option, represented by $P(t,S)$, could be recovered from $\mathcal{B}(t)$ by the expression
\begin{align}\label{price}
P(t,S)=p^{E}(t,S) +& \int_{0}^{t}rK e^{-r(t-\xi)}\aleph (-d_{2}(S, t-\xi, \mathcal{B}(\xi)))\mathrm{d}\xi\\
- &\int_{0}^{t} \delta S e^{-\delta (t- \xi)} \aleph (-d_{1}(S, t- \xi, \mathcal{B}(\xi))) \mathrm{d}\xi,\notag
\end{align}
which is known in the literature as the ``early exercise premium representation'' (see \cite{kim} for more details).
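Given an approximation of the boundary $\mathcal{B}$ on a time grid, the premium term in (\ref{price}) can be approximated by a simple quadrature rule. The following sketch (reusing \texttt{d1}, \texttt{d2} and \texttt{put\_european} from the sketch above) uses the midpoint rule, which avoids evaluating the integrand at the singular endpoint $\xi = t$; it is for illustration only, as the barycentric rational quadrature of Section \ref{thistable} is better suited to this problem:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def american_put(t, S, B, K, r, delta, sigma, n=200):
    # B is a callable approximation xi -> B(xi) of the boundary.
    h = t / n
    xi = (np.arange(n) + 0.5) * h        # midpoints of [0, t]
    tau = t - xi
    f = (r * K * np.exp(-r * tau)
         * norm.cdf(-d2(S, tau, B(xi), r, delta, sigma))
         - delta * S * np.exp(-delta * tau)
         * norm.cdf(-d1(S, tau, B(xi), r, delta, sigma)))
    return put_european(t, S, K, r, delta, sigma) + h * np.sum(f)
\end{verbatim}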
Due to the appearance of the cumulative normal distribution term, $\aleph(.)$ in (\ref{kim}) and noting that it has an integral representation, we are faced with a two-dimensional integral equation. This will make the problem hard from a numerical point of view. Some researchers have tried to reduce this two-dimensional equation into a one-dimensional expression to improve the numerical and analytic tractability of the integral representation which is the subject of the following subsection.
\subsection{Converting Kim's Representation into a One-Dimensional Form}
Hou et al. \cite{little} have proposed a technique to reduce (\ref{kim}) to a one-dimensional integral equation. Their method is based on
replacing the term $\mathcal{B}(t)$ in (\ref{kim}) by $\epsilon\mathcal{B}(t)$, differentiating with respect to $\epsilon$ and taking the limit as $\epsilon$ tends to zero. In this way, one obtains the equation
\begin{align}\label{2.4}
\mathcal{B}(t) \Big\{ &\sigma e^{- \delta t -\frac{1}{2}d_{1}(\mathcal{B}(t), t, K)^{2}}+ \delta \sqrt{2\pi t }\Big\}
= K r \sqrt{2 \pi t}\notag \\
&+ \delta \mathcal{B}(t)\sqrt{t}\int_{0}^{t}e^{-\delta s-\frac{1}{2}d_{1}(\mathcal{B}(t), s, \mathcal{B}(t-s))^{2}}\Big( \frac{d_{2}(\mathcal{B}(t), s, \mathcal{B}(t-s))^{2}}{s} \Big)\mathrm{d}s\notag\\
&-Kr\sqrt{t}\int_{0}^{t}e^{-r s-\frac{1}{2}d_{2}( \mathcal{B}(t), s, \mathcal{B}(t-s))^{2}}\Big( \frac{d_{1}( \mathcal{B}(t), s, \mathcal{B}(t-s))^{2}}{s} \Big)\mathrm{d}s,
\end{align}
which has the general form represented in the first row of Table \ref{table:2}. It should also be noted that the numerical solution of equation (\ref{2.4}) is considered in \cite{little}.
Using similar ideas, Kim \textit{et al.} \cite{kim2} obtain an integral equation in the zero-dividend case of the form
\begin{align}\label{kimnondiv}
\mathcal{B}(t) \aleph(d_{1}&(\mathcal{B}(t),t, K))+\mathcal{B}(t) \frac{1}{\sigma\sqrt{2\pi t}}K\exp\Big( -\frac{1}{2}d_{1}(\mathcal{B}(t),t, K)^{2}\Big) \notag\\
&= \frac{1}{\sigma \sqrt{2\pi t}}K\exp\Big( -\Big[rt + \frac{1}{2} d_{2}(\mathcal{B}(t),t, K)^{2}\Big]\Big)\notag\\
&+ rK\int_{0}^{t}\frac{1}{\sigma\sqrt{2\pi (t-\xi)}} \exp\Big(-\Big(r(t-\xi)+ \frac{1}{2}d_{2}(\mathcal{B}(t),t-\xi, \mathcal{B}(\xi) )^{2}\Big)\Big)\mathrm{d}\xi,
\end{align}
and the nonlinear integral equation
\begin{align}\label{kimdiv}
-\mathcal{B}(t)\exp(-\delta t) \aleph &\Big(d_{1}(\mathcal{B}(t),t, K)\Big) +\frac{K}{\sigma \sqrt{2\pi t}} \exp \Big(- \Big(rt + \frac{1}{2}d_{2}(\mathcal{B}(t),t, K)^{2}\Big) \Big)\notag\\
& -\frac{\mathcal{B}(t)}{\sigma \sqrt{2\pi t}} \exp \Big(-\Big (\delta t + \frac{1}{2}d_{1}(\mathcal{B}(t),t, K)^{2}\Big) \Big)\notag\\
&+\int_{0}^{t}\frac{1}{\sigma \sqrt{2\pi (t-\xi)}}\Big[ rK \exp \Big(- r(t-\xi) - \frac{1}{2} d_{2}(\mathcal{B}(t),t-\xi, \mathcal{B}(\xi))^{2}\Big)\notag\\
&-\delta \mathcal{B}(t) \exp \Big(- \delta(t-\xi) - \frac{1}{2} d_{1}(\mathcal{B}(t),t-\xi, \mathcal{B}(\xi))^{2}\Big)\Big] \mathrm{d}\xi \notag \\
&- \delta \int_{0}^{t} \mathcal{B}(t) \exp(-\delta (t - \xi)) \aleph \Big(d_{1} \Big(\mathcal{B}(t), t-\xi, \mathcal{B}(\xi)\Big)\Big)\mathrm{d}\xi = 0,
\end{align}
in the dividend-paying case.
\begin{remark}
Consider the integral equation
\begin{equation}\label{asli}
u(t)=g(t,u(t))+\int_{0}^{t}\frac{1}{(t-s)^{\alpha}}k_{1}(t,s,u(t),u(s))\mathrm{d}s+\int_{0}^{t}k_{2}(t,s,u(t),u(s))\mathrm{d}s,
\end{equation}
with $0\leq\alpha <1$, where the forcing function $ g $ and the kernels $k_{1}$ and $k_{2}$ are given and $ u(t) $ is an unknown function to be determined. It is easily seen that equations (\ref{kim}), (\ref{kimnondiv}) and (\ref{kimdiv}) are all of this general form for suitable
$k_{1},k_{2}$ and $\alpha$.
\end{remark}
\subsection{Existence and Uniqueness Issue}
In recent years, a number of researchers have dealt with the existence and uniqueness issue for the free boundary problem resulting from the American option and its early exercise boundary, based primarily on fixed point theorems and also on a probabilistic approach \cite{chen,jacka,peskir,myneni}.
Chen and Chadam \cite{chen} prove the existence and uniqueness for the pricing problem in free boundary form (\ref{pde})-(\ref{callcon}) via the Schauder fixed point theorem and also some comparison theorems. They prove the existence and uniqueness of the pair $(P, \mathcal{B})$ as well as the continuity and monotonicity of $\mathcal{B}(t)$. On the other hand, Jacka \cite{jacka} using a probabilistic approach has proved that the early exercise boundary is unique under a condition which will cause some difficulties in the numerical calculation procedure. Myneni \cite{myneni} stated in his paper that ``the uniqueness and regularity of the stopping boundary from this integral equation remain open''. Peskir \cite{peskir} employed a change-of-variables formula with local time on curves to prove, in a nine-step process, the uniqueness of the solution for the equation
\begin{eqnarray*}\label{peskireq}
K-\mathcal{B}(t)=&e^{-r(T-t)}\int_{0}^{K}\aleph\Big(\dfrac{1}{\sigma
\sqrt{T-t}}\Big(\log\Big(\dfrac{K-s}{\mathcal{B}(t)}\Big)-(r-\frac{\sigma^{2}}{2})(T-t)\Big)\Big)\mathrm{d}s\\\notag
&+rK\int_{0}^{T-t}e^{-rs}\aleph\Big(\dfrac{1}{\sigma \sqrt{s}}\Big(\log\Big(\dfrac{\mathcal{B}(t+s)}{\mathcal{B}(t)}\Big)-(r-\dfrac{\sigma^{2}}{2})s\Big)\Big)\mathrm{d}s,
\end{eqnarray*}
which is a Volterra nonlinear integral equation describing the early exercise boundary.
It must be stressed here that research on the existence and uniqueness theorems for the early exercise boundary from the integral equations point of view is an ongoing issue which is of independent interest in the field. In fact, by imposing more restrictive conditions on the forcing function and the kernel, one obtains the required result using classical fixed point theorems (for more details on the case $\alpha =0$ see e.g. \cite{nedaiasl2017numerical}) but proving a theorem with minimal conditions compatible with the structure of the integral equations will require the extension of some advanced techniques in the theory of integral equations.
\section{Numerical Methods for the Early Exercise Boundary}\label{NMEEB}
Since a closed-form analytical solution of Eq. (\ref{kim}) is not available in general, the need to numerically approximate the early exercise boundary, and also the price of the option, arises naturally.
Generally, the numerical methods used for solving the integral equations describing the early exercise boundary in the American option pricing literature could be classified into three main categories:
\begin{description}
\item[Direct Quadrature Methods:] This family of methods could be considered the oldest approximation scheme for integral equations; it approximates the integral terms by numerical quadrature rules such as the trapezoidal, midpoint and Simpson rules for equidistant meshes, or Gaussian-type quadrature rules \cite{hack}. In the special case of integral equations arising from American option pricing, we can construct a system of nonlinear equations with solution $\mathcal{B}(t_{i})$ (for given $t_{i}$'s, $i=1,2,\cdots,n$), solve these equations, and finally arrive at a global solution using the theory of polynomial or rational interpolation. The first idea of this kind is due to Huang \textit{et al.} \cite{huang1996pricing}; it was later pursued by Kallast and Kivinukk \cite{kallast}, who focus on Kim's integral representation for the early exercise boundary and apply a suitable quadrature rule based on Sullivan's idea \cite{sullivan}, accompanied by the Newton-Raphson method, in order to obtain a fast numerical approach. Heider \cite{heider} has also employed an integral transform to propose a Nystr\"{o}m-type discretization for Kim's integral equation.
\end{description}
\begin{description}
\item[Successive Iteration Methods:] In this method, we construct a recursive sequence of the form $\mathcal{B}^{(k+1)} = F (\mathcal{B}^{(k)}), k=0,1,2,\cdots$, in which $F$ is a fully nonlinear integral operator with fixed point $\mathcal{B}$. For Kim's one-dimensional integral equation, the method of fixed point iteration with the Gauss-Kronrod rule has been used by Kim \cite{kim2}. Recently, a modified Newton iterative solution that operates in parallel was obtained for the approximation of the early exercise boundary by Cortazar et al. in \cite{cortazar}. This approach has also been employed for other nonlinear integral equations in the corresponding literature \cite{lauko, shev2}.
\end{description}
\begin{description}
\item[Collocation-Based Methods:] Classically, collocation discretization, which is based on interpolatory projection of $C(X)$ (the space of continuous functions on $X$) onto a finite-dimensional subspace, is widely used in the numerical solution of integral and differential equations. Noting that the $\mathcal{B}(t)$ term in the integral equation for the early exercise boundary appears inside the logarithm in (\ref{d1d2}), it is suitable to define an approximation by multi-piece exponential functions (see Ju \cite{ju} for more details). Aitsahlia \cite{lai2} replaced the piecewise exponential functions with linear splines to improve Ju's approach and to gain further accuracy and speed. Recently, a polynomial spectral collocation method for computing American call and put option prices has been considered in \cite{andersen}, based on Kim's integral equation.
\end{description}
The key point in the numerical treatment of the nonlinear integral equations discussed above is their weakly singular character and the resulting singular behavior of the exercise boundary. In this respect, we propose a product integration method, belonging to the first category of numerical schemes, based on rational barycentric interpolation to overcome this difficulty in the discretization of the integral terms.
\subsection{Product Integration Method}\label{produc}
Nystr\"{o}m method is one of the popular ways for numerical solution of integral equations \cite{atkinson}. It should be noticed that in the literature of Volterra integral equations, this method is called quadrature method \cite{hack}, however both of them use the same plan to approximate the solution. Product integration method is a kind of Nystr\"{o}m method which is utilized to numerically solve weakly singular integral equations. In this method, the smooth part of the kernel is interpolated in order to manage the weak singularity of the kernel \cite{cuminato, orsi, hoog}.
In order to present the principles underlying the method, we first choose $n+1$ distinct points, $\{ t_{i}\}_{i=0}^{n}$ in the interval $[0, T]$ and then collocate (\ref{asli}) at these nodes to obtain
\begin{equation}\label{collocation}
{u}(t_{i}) = g(t_{i}, {u}(t_{i})) + \int_{0}^{t_{i}}\frac{1}{\sqrt{t_{i}-s}}k_{1}(t_{i}, s, {u}(t_{i}), {u}(s))\mathrm{d}s+\int_{0}^{t_{i}}k_{2}(t_{i},s ,{u}(t_{i}),{u}(s))\mathrm{d}s,
\end{equation}
for $i=0,1,\cdots, n$. Since the second integral term has a smooth kernel, it is approximated by a direct quadrature rule. Moreover,
interpolation of the smooth part of the first kernel, $k_{1}$, is used to cope with the weakly singular term (see e.g. \cite{orsi, hack, orsi1996product}).
In this respect, we project the functions \[K_{i}(s, {u}(s)) : = k_{1}(t_{i}, s, {u}(t_{i}), {u}(s)), \quad i=0,1, \cdots, n, \] into the space $V_{n}= \textmd{Span}\{\mathcal{L}_{j}(s)\}_{j=0}^{n}$
for appropriate basis functions $\mathcal{L}_{j}(s), j=0,1,\cdots,n$ to obtain
\begin{equation*}
(\mathcal{P}_{n}K_{i})(s) = \sum_{j=0}^{n}K_{i}(t_{j}, {u}(t_{j}))\mathcal{L}_{j}(s).
\end{equation*}
Now, the above approximation of the kernel is substituted into (\ref{collocation}) and the following system of equations is obtained
\begin{equation}\label{dis}
u_{i} = g(t_{i}, u_{i}) + \underbrace{\sum_{j=0}^{i} w_{i,j} k_{1}(t_{i}, t_{j}, {u}_{i}, {u}_{j})}_{\text{product integration}}+ \underbrace{\sum_{j=0}^{i} \omega_{j} k_{2}(t_{i}, t_{j}, {u}_{i}, {u}_{j})}_{\text{direct quadrature}}, \quad i=0, 1, \cdots, n,
\end{equation}
where $w_{i, j} = \int_{0}^{t_{i}} \frac{\mathcal{L}_{j}(s)}{\sqrt{t_{i}-s}}\mathrm{d}s$ and $\omega_{j}$'s are the quadrature weights.
In practice, the weights $w_{i, j}$ and $\omega_{j}$ should be computed numerically by an efficient quadrature rule with rapid convergence, such as the Gauss-Legendre or Clenshaw-Curtis methods.
Once the weights are obtained, the approximate solutions $u_{i} \approx {u}(t_{i})$ are computed as the solution of the nonlinear system of equations (\ref{dis}).
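To make the construction concrete, the following Python sketch (a minimal prototype of ours, not the implementation used for the experiments below) computes the product-integration weights $w_{i,j}$ for a generic basis supplied as a function \texttt{basis(j, s)}---for instance one of the barycentric bases of the next subsection---and assembles the system (\ref{dis}) for a simplified model with $k_{2}=0$. The substitution $s=t_{i}-v^{2}$ removes the square-root singularity from the weight integrals, which are then evaluated by adaptive quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def product_weights(nodes, basis):
    # w_{i,j} = int_0^{t_i} L_j(s)/sqrt(t_i - s) ds; with s = t_i - v^2
    # the integrand becomes the smooth function 2*L_j(t_i - v^2)
    n = len(nodes)
    w = np.zeros((n, n))
    for i in range(1, n):              # row i = 0 is empty since t_0 = 0
        ti = nodes[i]
        for j in range(n):
            f = lambda v, j=j: 2.0 * basis(j, ti - v**2)
            w[i, j], _ = quad(f, 0.0, np.sqrt(ti), limit=200)
    return w

def solve_system(nodes, g, k1, w):
    # residual of Eq. (dis) with k_2 = 0:
    #   u_i - g(t_i, u_i) - sum_{j<=i} w_{i,j} k_1(t_i, t_j, u_i, u_j) = 0
    n = len(nodes)
    def F(u):
        res = np.empty(n)
        for i in range(n):
            s = sum(w[i, j] * k1(nodes[i], nodes[j], u[i], u[j])
                    for j in range(i + 1))
            res[i] = u[i] - g(nodes[i], u[i]) - s
        return res
    u0 = np.array([g(t, 0.0) for t in nodes])   # crude initial guess
    return fsolve(F, u0)
\end{verbatim}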
In the implementation of the product integration method, there are two crucial points which should be taken into account: the first is the choice of a finite-dimensional subspace $V_{n}$ of $C([0,T])$, and the second is the numerical quadrature used in the discretization of the integrals. A natural and readily available choice for these purposes is to interpolate the kernel with Lagrange polynomials and to use interpolatory quadrature rules. It is now a well-known fact that the barycentric form of interpolation is a viable variant of Lagrange's classic polynomial interpolation which has desirable features such as stability and computational speed \cite{kleinthesis, bary}. In the sequel, we present a brief overview of barycentric interpolation and quadrature methods.
\subsection{Barycentric Interpolation}\label{baryin}
Let $\{t_{i}\}_{i=0}^{n}$ be a set of strictly ordered equidistant nodes in $[0, T]$ with a fixed grid spacing $h$. The barycentric interpolation of the data values $ \{(t_{i}, f(t_{i}))\}_{i=0}^{n} $ can be written as
\begin{equation}\label{barycent}
({\mathcal{P}}_{n}f)(t)=\frac{\sum_{i=0}^{n}\frac{\beta_{i}}{t-t_{i}}f(t_{i})}{\sum_{i=0}^{n}\frac{\beta_{i}}{t-t_{i}}}
= \sum_{i=0}^{n}f(t_{i})\mathcal{L}_{i}(t),
\end{equation}
in which \begin{equation}\label{li}
\mathcal{L}_{i}(t) =\frac{\frac{\beta_{i}}{t-t_{i}}}{\sum_{j=0}^{n}\frac{\beta_{j}}{t-t_{j}}},\quad i=0,1,\cdots, n.\end{equation}
In the case of Lagrange interpolation, the weights $ \beta_{i} $ are given by
\begin{equation}\label{weibary}
\beta_{i}=\frac{1}{\Pi_{i\neq j}(t_{i} - t_{j})}, \quad i=0,1, \cdots, n, \end{equation}
but if we choose other weights in the above expression, then the resulting function $({\mathcal{P}}_{n}f)(t)$ still interpolates the data $f$, even though in general it is no longer a polynomial \cite{bary}.
Among the most important alternative options for the $\beta_{i}$'s, we could mention Berrut's weights, given by
\begin{equation}\label{ber}
\beta_{i}=(-1)^{i}, \quad i=0,1, \ldots, n,
\end{equation}
in which $n$ is an odd number \cite{kai112}. It can be shown (see e.g. \cite{bary}) that with the above weights the resulting interpolant is a rational function with no poles in the interval of interpolation, and that the order of convergence is $\mathcal{O}(\frac{1}{n})$.
Investigations in this area, aimed at obtaining weights which produce interpolants $ \mathcal{P}_{n}f $ with no poles and good approximation properties, have led to the family of linear barycentric rational interpolants introduced by Floater and Hormann \cite{kai}.
For a fixed integer $ 0\leq d \leq n $, let the polynomials $ \{p_{i}(t) \}_{i=0}^{n-d} $ interpolate $ f $ at the nodes
$ \{ t_{i}, \ldots, t_{i+d}\} $. Then we can write
\begin{equation}\label{rational}
(\mathcal{P}_{n}f)(t)=\frac{\sum_{i=0}^{n-d} \lambda_{i}(t)p_{i}(t)}{\sum_{i=0}^{n-d} \lambda_{i}(t)},
\end{equation}
where
\begin{equation*}
\lambda_{i}(t)= \frac{(-1)^{i}}{(t-t_{i})\ldots (t-t_{i+d}) }.
\end{equation*}
Eq. (\ref{rational}) can be rewritten in the barycentric form (\ref{barycent}) with the weights
\begin{equation}\label{baryrashi}
\beta_{i}=(-1)^{i-d}\sum_{j\in J_{i}} \binom {d} {i-j},
\end{equation}
where $ J_{i} $ is defined as
\[ J_{i}=\{ j : \max (0, i-d) \leq j \leq \min (i, n-d) \}.\]
Rational barycentric interpolation with the weights (\ref{baryrashi}) has a superior advantage compared to other forms of barycentric interpolation, as the following theorem shows:
\begin{theorem}\label{inregh} (Floater and Hormann, \cite{kai}) Suppose that $d\geq 1$ and $f \in C^{d+2}([0, T])$. If $n-d$ is odd, then
\[
\Vert f - \mathcal{P}_{n}f \Vert_{\infty} \leq h^{d+1} T\frac{\Vert f^{(d+2)}\Vert_{\infty}}{d+2},
\]
if $n-d$ is even, then
\[
\Vert f - \mathcal{P}_{n}f \Vert_{\infty} \leq h^{d+1}\Big( T\frac{\Vert f^{(d+2)}\Vert_{\infty}}{d+2} + \frac{\Vert f^{(d+1)}\Vert_{\infty}}{d+1}\Big).
\]
\end{theorem}
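As an illustration, the weights (\ref{baryrashi}) and the barycentric evaluation (\ref{barycent}) can be realized as follows (a Python sketch with helper names of our own choosing; for $d=2$ and $n=4$ it reproduces the familiar weight pattern $1,-3,4,-3,1$).
\begin{verbatim}
import numpy as np
from math import comb

def fh_weights(n, d):
    # Floater-Hormann weights beta_i for equidistant nodes t_0,...,t_n
    beta = np.zeros(n + 1)
    for i in range(n + 1):
        s = sum(comb(d, i - j)
                for j in range(max(0, i - d), min(i, n - d) + 1))
        beta[i] = (-1.0) ** (i - d) * s
    return beta

def bary_eval(t, nodes, values, beta):
    # evaluate the rational interpolant (P_n f)(t) in barycentric form
    diff = t - nodes
    k = int(np.argmin(np.abs(diff)))
    if abs(diff[k]) < 1e-14:           # t coincides with a node
        return values[k]
    q = beta / diff
    return (q @ values) / q.sum()
\end{verbatim}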
\subsection{Barycentric Rational Quadrature}\label{quada123}
In this subsection, the interpolatory quadrature rule induced by the rational interpolation is introduced.
Barycentric quadrature and its features have been studied extensively in \cite{kleinthesis, bary}.
The linear interpolant \[ (\mathcal{P}_{n}f)(t)=\sum_{i=0}^{n}f(t_{i})\mathcal{L}_{i}(t), \]
naturally leads to the following classical quadrature formula
\begin{equation}\label{2.10}
{Q}_{n}[f] = \sum_{i=0}^{n}\omega_{i,n}f(t_{i}),
\end{equation}
where the corresponding quadrature weights, $ \omega_{i,n} $, are defined by
\begin{equation}\label{quadwei}\omega_{i,n}=\int_{0}^{T}\mathcal{L}_{i}(t)\mathrm{d}t, \quad i=0, \ldots, n.\end{equation}
The stability condition of the quadrature method is given by (see \cite{hack2})
\[\sup \Big\{ \sum_{i=0}^{n}\vert \omega_{i,n} \vert, n\in \Bbb N\Big\} < \infty.\]
It follows from (\ref{quadwei}) that
\begin{equation}\label{123}
\sum_{i=0}^{n}\vert \omega_{i,n}\vert \leq \int_{0}^{T}\sum_{i=0}^{n}\bigg\vert \dfrac{\frac{\beta_{i}}{t-t_{i}}}{\sum_{j=0}^{n}\frac{\beta_{j}}{t-t_{j}}} \bigg{\vert}\mathrm{d}t = \int_{0}^{T} \Lambda_{n}(t)\mathrm{d}t,
\end{equation}
where the function
$\Lambda_{n}(t)=\sum_{i=0}^{n}\vert \mathcal{L}_{i}(t) \vert$
is the \textit{Lebesgue function} and
\begin{equation}\label{leb}\Lambda_{n}=\sup_{t\in [0,T]}
\Lambda_{n}(t),\end{equation}
is the \textit{Lebesgue constant} \cite{bary}.
By this relation, the following upper bound could be obtained for (\ref{123})
\begin{equation}
\sum_{i=0}^{n}\vert \omega_{i,n}\vert \leq T\Lambda_{n},
\end{equation}
so the stability of the direct quadrature method depends on the stability of the interpolation process.
The Lebesgue constant for Lagrange interpolation at equidistant nodes grows exponentially,
\[\Lambda_{n}\approx \frac{2^{n+1}}{n\log(n)}, \quad n \rightarrow \infty,\]
as presented in \cite{bary}.
It has been shown that the corresponding constant for the family of Floater-Hormann interpolants with $d\geq 1$ grows only logarithmically, as demonstrated by the following theorem:
\begin{theorem}(Bos et al., \cite{bos})
The Lebesgue constant associated with rational interpolation at equidistant nodes with basis functions (\ref{li}) associated with coefficients
(\ref{baryrashi}) satisfies
\[\Lambda_{n} \leq 2^{d-1}\Big(2+\log(n)\Big).\]
\end{theorem}
The following theorem gives an upper bound for the linear barycentric rational quadrature.
\begin{theorem}\label{quad}(Klein, \cite[Theorem 4.1]{kleinthesis})
Suppose $n$ and $d$ with $d\leq n$ are positive integers, $ f \in C^{d+2}[a, b] $ and $ \mathcal{P}_{n}f $ is the rational interpolant with parameter $d$ given by (\ref{rational}). Let the quadrature weights (\ref{quadwei}) be approximated by a quadrature rule which converges at least at the rate $ \mathcal{O}(h^{d+1}) $ and has degree of precision at least $d+1$. Then
\begin{equation}
\Big\vert \int_{a}^{b}f(t)\mathrm{d}t - \sum_{i=0}^{n}\omega_{i,n}f_{i} \Big\vert \leq C h^{d+1},
\end{equation}
where $ C $ is a constant depending on $ d $, derivatives of $ f $ and the length of the interval.
\end{theorem}
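For illustration, the weights (\ref{quadwei}) can be generated directly from the basis functions (\ref{li}); the sketch below simply integrates each basis function adaptively (our prototype choice; Klein \cite{kleinthesis} describes more efficient direct constructions satisfying the hypotheses of the theorem). The helper \texttt{bary\_basis} can also be passed to the weight routine \texttt{product\_weights} of Subsection \ref{produc}.
\begin{verbatim}
from scipy.integrate import quad

def bary_basis(i, t, nodes, beta):
    # basis function L_i(t) of Eq. (li), guarded at the nodes
    diff = t - nodes
    k = int(np.argmin(np.abs(diff)))
    if abs(diff[k]) < 1e-14:
        return 1.0 if k == i else 0.0
    q = beta / diff
    return q[i] / q.sum()

def quad_weights(nodes, beta, T):
    # omega_{i,n} = int_0^T L_i(t) dt
    return np.array([quad(lambda t, i=i: bary_basis(i, t, nodes, beta),
                          0.0, T, limit=200)[0]
                     for i in range(len(nodes))])

# sanity checks: the L_i form a partition of unity, so sum(omega) = T, and
# quad_weights(nodes, fh_weights(n, d), T) @ np.exp(nodes) approximates
# the integral of exp over [0, T], i.e. e^T - 1
\end{verbatim}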
\subsection{Approximation of the Early Exercise Boundary}
In this subsection, we first review some regularity properties of the early exercise boundary and then, based on the previous tools, we discretize the nonlinear integral equation to obtain an approximation of $ \mathcal{B}(t) $. Finally, an error analysis for the proposed method is presented.
\begin{theorem}(Karatzas \textit{et al. }\cite{karatzas1998methods})\label{reg}
Let $ \mathcal{B}(t) $ be the early exercise boundary of the American put option. Then it is a continuously differentiable function on $ (0,T] $ and
\begin{equation}\label{w}
\begin{split}
\lim_{t\rightarrow 0^{+}} \mathcal{B}(t)& = \mathcal{B}(0^{+}) = K, \quad \delta \leq r,\\
\lim_{t\rightarrow 0^{+}} \mathcal{B}(t)& = \mathcal{B}(0^{+}) =\Big(\frac{r}{\delta}\Big) K, \quad \delta > r.
\end{split}
\end{equation}
\end{theorem}
We now discretize Eq. (\ref{kimnondiv}) using the product integration method to arrive at
\begin{equation}\label{diskimnondiv}
\begin{split}
\mathcal{B}_{i}\aleph(d_{1}(\mathcal{B}_{i},t_{i}, K)) +& \mathcal{B}_{i} \frac{1}{\sigma\sqrt{2\pi t_{i}}}K\exp\Big( -\frac{1}{2}d_{1}(\mathcal{B}_{i},t_{i}, K)^{2}\Big) \\
= &\frac{1}{\sigma \sqrt{2\pi t_{i}}}K\exp\Big(-\Big[rt_{i} + \frac{1}{2} d_{2}(\mathcal{B}_{i},t_{i}, K)^{2}\Big]\Big)\\
&+ \frac{rK}{\sigma\sqrt{2\pi}}\sum_{j=0}^{i}w_{i,j} \exp\Big(-(r(t_{i}-t_{j})+\frac{1}{2}d_{2}(\mathcal{B}_{i}, t_{i}-t_{j}, \mathcal{B}_{j})^{2})\Big), \end{split}
\end{equation}
in which $w_{i,j} = \int_{0}^{t_{i}} \frac{\mathcal{L}_{j}(s)}{\sqrt{t_{i}-s}}\mathrm{d}s$ and $\mathcal{L}_{j}(s)$ is defined as in (\ref{li}) with the coefficients (\ref{ber}) or (\ref{baryrashi}). A similar expression could be obtained for Eq. (\ref{kimdiv}) by the product integration and direct quadrature methods:
\begin{eqnarray}\label{diskimdiv}
-\mathcal{B}_{i}\exp(-\delta t_{i}) \aleph \Big(d_{1}(\mathcal{B}_{i},t_{i}, K)\Big) +\frac{K}{\sigma \sqrt{2\pi t_{i}}} \exp \Big(- (rt_{i} + \frac{1}{2}d_{2}(\mathcal{B}_{i}, t_{i}, K)^{2}) \Big)\\\notag
-\frac{\mathcal{B}_{i}}{\sigma \sqrt{2\pi t_{i}}} \exp \Big(- (\delta t_{i} + \frac{1}{2}d_{1}(\mathcal{B}_{i}, t_{i}, K)^{2}) \Big)\\\notag
+
\frac{1}{\sigma \sqrt{2\pi}}\sum_{j=0}^{i} w_{i,j}\Big[ rK \exp \Big(- r(t_{i}-t_{j}) - \frac{1}{2} d_{2}(\mathcal{B}_{i}, t_{i}-t_{j}, \mathcal{B}_{j})^{2}\Big)\\\notag
-\delta \mathcal{B}_{i} \exp \Big(- \delta(t_{i} - t_{j}) - \frac{1}{2} d_{1}(\mathcal{B}_{i}, t_{i}-t_{j}, \mathcal{B}_{j})^{2}\Big)\Big]\\\notag
- \delta \mathcal{B}_{i} \sum_{j=0}^{i}\omega_{j}\exp \Big(-\delta (t_{i} - t_{j})\Big) \aleph \Big(d_{1} (\mathcal{B}_{i}, t_{i}- t_{j}, \mathcal{B}_{j})\Big) = 0.
\end{eqnarray}
As soon as $\mathcal{B}_{i}$'s are obtained from the above equations, we could employ the barycentric rational interpolation to obtain a continuous approximating function
\begin{equation}\label{ghe}
\mathcal{B}_{n}(t) = \sum_{i=0}^{n}\mathcal{B}_{i}\mathcal{L}_{i}(t).\end{equation}
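For concreteness, a possible transcription of the scheme (\ref{diskimnondiv}) reads as follows (an illustrative Python sketch reusing the helpers \texttt{d1}, \texttt{d2} and \texttt{product\_weights} introduced earlier). Owing to the lower-triangular Volterra structure of the sum, the nodes can be processed sequentially, each step requiring only a scalar root-find; the diagonal term is handled through the limiting value $d_{2}(\mathcal{B}_{i},0,\mathcal{B}_{i})=0$, and $\mathcal{B}_{0}=K$ is taken from Theorem \ref{reg} (case $\delta \leq r$).
\begin{verbatim}
from scipy.optimize import fsolve

def d2_safe(x, t, y, r, delta, sigma, eps=1e-12):
    # d_2 with its limiting value 0 on the diagonal (t -> 0, x = y)
    return 0.0 if t < eps else d2(x, t, y, r, delta, sigma)

def boundary_nondiv(nodes, w, K, r, sigma):
    # march through Eq. (diskimnondiv) node by node (delta = 0)
    n = len(nodes) - 1
    B = np.empty(n + 1)
    B[0] = K                                 # B(0+) = K for delta <= r
    c = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for i in range(1, n + 1):
        ti = nodes[i]
        def F(Bi):
            lhs = (Bi * norm.cdf(d1(Bi, ti, K, r, 0.0, sigma))
                   + Bi * c / np.sqrt(ti) * K
                     * np.exp(-0.5 * d1(Bi, ti, K, r, 0.0, sigma)**2))
            rhs = c / np.sqrt(ti) * K * np.exp(-(r * ti
                   + 0.5 * d2(Bi, ti, K, r, 0.0, sigma)**2))
            for j in range(i + 1):
                Bj = Bi if j == i else B[j]
                rhs += r * K * c * w[i, j] * np.exp(-(r * (ti - nodes[j])
                       + 0.5 * d2_safe(Bi, ti - nodes[j], Bj,
                                       r, 0.0, sigma)**2))
            return lhs - rhs
        B[i] = fsolve(F, B[i - 1])[0]        # previous node as initial guess
    return B
\end{verbatim}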
In the following, we give an error bound for the discretization process (\ref{diskimdiv}). It must be mentioned that a similar result could be obtained for Eq. (\ref{diskimnondiv}).
\begin{lemma}\label{bn}
Let $\mathcal{B}(t)$ be the exact solution of Eq. (\ref{kimdiv}) and $\mathcal{B}_{n}(t)$ be given by (\ref{ghe}). Then there exists a positive constant $C$ independent of $n$ such that
\begin{equation*}
\Vert \mathcal{B} - \mathcal{B}_{n} \Vert_{\infty} \leq C\log(n)h^{d+1}.
\end{equation*}
\end{lemma}
\begin{proof}
Using the triangle inequality, we arrive at
\begin{equation}\label{3.19}
\begin{split}
\Vert \mathcal{B} - \mathcal{B}_{n} \Vert_{\infty} & =\Vert \mathcal{B} - \mathcal{P}_{n} \mathcal{B}+ \mathcal{P}_{n} \mathcal{B} - \mathcal{B}_{n} \Vert_{\infty} \\
& \leq \Vert \mathcal{B} - \mathcal{P}_{n} \mathcal{B} \Vert_{\infty} + \Vert \mathcal{P}_{n} \mathcal{B}-\mathcal{B}_{n} \Vert_{\infty},
\end{split}
\end{equation}
in which $\mathcal{P}_{n}$ is the interpolation operator defined in (\ref{barycent}). The first term on the right-hand side of (\ref{3.19}) is the interpolation error, which by Theorem \ref{inregh} converges at the rate $\mathcal{O}(h^{d+1})$. The second term can be bounded for each $t \in (0,T]$ as
\begin{equation}
\begin{split}
\vert (\mathcal{P}_{n}\mathcal{B})(t) - \mathcal{B}_{n}(t)\vert &= \Big \vert \sum_{i=0}^{n} \mathcal{L}_{i}(t) (\mathcal{B}(t_{i}) -\mathcal{B}_{i}) \Big \vert \\
& \leq \sum_{i=0}^{n} \vert \mathcal{L}_{i}(t) \vert \vert \mathcal{B}(t_{i}) - \mathcal{B}_{i}\vert,\\
\end{split}
\end{equation}
and so
\begin{equation}\label{uppad}
\Vert \mathcal{P}_{n} \mathcal{B}-\mathcal{B}_{n} \Vert_{\infty} \leq \Lambda_{n} \max_{i} \vert\mathcal{B}(t_{i}) - \mathcal{B}_{i}\vert.\end{equation}
Notice that if we collocate Eq. (\ref{kimdiv}) at the grid points, $\mathcal{B}(t_{i})$ is the exact solution of the resulting equation. Based on this fact and using Eq. (\ref{diskimdiv}), we see that the upper bound for $ \max_{i} \vert\mathcal{B}(t_{i}) - \mathcal{B}_{i}\vert$ depends on the interpolation and numerical quadrature errors. Due to the smoothness of the functions $\exp(\cdot)$ and $\aleph(\cdot)$ inside the equation, and using Theorem \ref{inregh}, an error of order $\mathcal{O}(h^{d+1})$ is achieved in the collocation procedure. On the other hand, the Lebesgue constant $\Lambda_{n}$ is bounded by $2^{d-1}(2+\log(n))$, so the final result follows from Theorem \ref{quad}.
\end{proof}
\section{Approximation of the American Option Price}\label{thistable}
In this section, the pricing of an American put option is considered. Note that the price of the corresponding American call can be found by put-call symmetry \cite{andersen}. It is easily seen that as soon as the early exercise boundary is determined, the option price can be obtained by applying an appropriate quadrature rule to the integral terms in Eq. (\ref{price}).
For this purpose, and due to the complexity of the kernel, we utilize the quadrature method introduced in Subsection \ref{quada123} to approximate the price. In the remainder of this section, we analyze the approximation order of the proposed quadrature method in Theorem \ref{errores}.
Before that, we introduce the notations $P_{n}(t, S)$ and $\tilde{P}_{n}(t, S)$, defined respectively by
\begin{equation}\label{PN}
\begin{split}
P_{n}(t, S)=~&p^{E}(t, S) + \int_{0}^{t}rK e^{-r(t-\xi)}\aleph (-d_{2}(S, t -\xi, \mathcal{B}_{n}(\xi)))\mathrm{d}\xi \\
& - \int_{0}^{t} \delta S e^{-\delta (t- \xi)} \aleph (-d_{1}(S, t- \xi, \mathcal{B}_{n}(\xi))) \mathrm{d}\xi,
\end{split}
\end{equation}
\begin{equation}\label{PN2}
\begin{split}\tilde{P}_{n}(t, S) = ~&p^{E}(t, S) + \sum_{i=0}^{n}\omega_{i,n}\, rK e^{-r(t - t_{i})}\aleph \Big(-d_{2}(S, t - t_{i}, \mathcal{B}_{n}(t_{i}))\Big) \\
& - \sum_{i=0}^{n} \omega_{i,n}\, \delta S e^{-\delta (t- t_{i})} \aleph \Big(-d_{1}(S, t - t_{i}, \mathcal{B}_{n}(t_{i}))\Big).\end{split}
\end{equation}
In both formulae, $\mathcal{B}_{n}(\xi)$ is the approximant of the early exercise boundary obtained in (\ref{ghe}), and $\omega_{i,n}$ are the quadrature weights (\ref{quadwei}).
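A direct transcription of (\ref{PN2}) then reads as follows (again an illustrative sketch, reusing \texttt{d1}, \texttt{d2} and \texttt{put\_european} from above; we evaluate at $t=t_{n}=T$, so that all nodes lie in $[0,t]$, and we skip the degenerate node $t_{i}=t$, whose contribution essentially vanishes for $S>\mathcal{B}(t)$).
\begin{verbatim}
def put_american(S, nodes, B, omega, K, r, delta, sigma):
    # approximate price P~_n(T,S) of Eq. (PN2), with boundary values B_i
    # at the nodes and quadrature weights omega_{i,n} as in (quadwei)
    t = nodes[-1]
    premium = 0.0
    for ti, wi, Bi in zip(nodes, omega, B):
        if t - ti <= 0.0:                    # skip the degenerate node
            continue
        premium += wi * (r * K * np.exp(-r * (t - ti))
                         * norm.cdf(-d2(S, t - ti, Bi, r, delta, sigma))
                         - delta * S * np.exp(-delta * (t - ti))
                         * norm.cdf(-d1(S, t - ti, Bi, r, delta, sigma)))
    return put_european(t, S, K, r, delta, sigma) + premium
\end{verbatim}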
Let us consider the price representation (\ref{price}) as a nonlinear operator
\begin{equation}\label{nonop}
\begin{split}
P : C\big( (0, \infty) \big) \rightarrow & C \left((0, T]\times (0, \infty)\right) \\
\mathcal{B}\mapsto & P(\mathcal{B}) = P(t,S).
\end{split}
\end{equation}
In the following lemma, the Fr\'{e}chet derivative of this nonlinear operator is given explicitly.
\begin{lemma} (Heider, \cite{heider2007condition})
The Fr\'{e}chet derivative of the nonlinear operator (\ref{nonop}) at $\mathcal{B}(t)$ is given by
\begin{equation}
\begin{split}
(P'(\mathcal{B})h) (t,S) =& \frac{rK}{\sigma \sqrt{2\pi}}\int_{0}^{t} \frac{e^{-r (t-\xi)}}{\mathcal{B}(\xi) \sqrt{t-\xi}} e^{- \frac{d_{2}(S, t-\xi, \mathcal{B}(\xi))^{2}}{2}} h(\xi) \mathrm{d}\xi \\
- & \frac{\delta S}{\sigma \sqrt{2\pi}}\int_{0}^{t} \frac{e^{-\delta (t-\xi)}}{\mathcal{B}(\xi) \sqrt{t-\xi}} e^{- \frac{d_{1}(S, t-\xi, \mathcal{B}(\xi))^{2}}{2}} h(\xi) \mathrm{d}\xi.
\end{split}
\end{equation}
\end{lemma}
\begin{theorem}\label{errores}
Let $P(t, S)$ be the price of an American put option with the parameters defined in Section \ref{option}. Furthermore, assume that $\mathcal{B}(t)$ denotes its early exercise boundary function. Let also $\tilde{P}_{n}(t, S)$ be the approximation of $P(t, S)$ defined in (\ref{PN2}).
Then we have
\[\vert P(t, S) - \tilde{P}_{n}(t, S) \vert \leq \frac{\theta -1}{\sigma \theta \sqrt{2}} \left( \frac{\sqrt{\delta}S}{K} + \sqrt{r}\right)
C\log(n)h^{d+1}, \]
where
\[ \theta = \frac{- (r-\delta- \frac{1}{2}\sigma^{2}) - \sqrt{(r-\delta- \frac{1}{2}\sigma^{2})^{2} + 2 \sigma^{2}r}}{ \sigma^{2}}. \]
\end{theorem}
\begin{proof}
The triangle inequality gives
\[
\vert P(t, S) - \tilde{P}_{n}(t, S) \vert \leq \vert P(t, S) - P_{n}(t, S) \vert + \vert P_{n}(t, S) - \tilde{P}_{n}(t, S) \vert.
\]
Now, applying the mean value theorem for operators (see e.g. Proposition 5.3.11 in \cite{atkinson}) to the first term, we obtain
\begin{equation}\label{frechetterm}
\vert P(t, S) - P_{n}(t, S) \vert\leq \sup_{0 \leq \lambda \leq 1} \Vert P'((1- \lambda)\mathcal{B} + \lambda \mathcal{B}_{n}) \Vert_{\infty} \Vert \mathcal{B} - \mathcal{B}_{n} \Vert_{\infty},\end{equation}
in which $P'$ is the Fr\'{e}chet derivative given in the preceding lemma. It can easily be verified that for $a> 0$ we have
\begin{equation}\label{1e} a \int_{0}^{t} \frac{e^{-a(t- \xi)}}{\sqrt{t -\xi}}\mathrm{d}\xi = \sqrt{a \pi} \erf (\sqrt{a t}).\end{equation}
Furthermore, the monotonicity of $\mathcal{B}$, Theorem \ref{reg} and the bound (\ref{3e}) below give
\begin{equation}\label{2e} (1-\lambda)\mathcal{B}(\xi) + \lambda \mathcal{B}_{n}(\xi) \geq \frac{\theta K}{\theta -1}. \end{equation}
Moreover, it could be shown (see e.g. \cite{kim}) that
\begin{equation}\label{3e}\frac{\theta K}{\theta -1 } \leq \mathcal{B}(t) \leq \mathcal{B}(0^{+}). \end{equation}
So, by the relations (\ref{1e})--(\ref{3e}), the supremum term in (\ref{frechetterm}) can be bounded by $\frac{\theta -1}{\sigma \theta \sqrt{2}} \left( \frac{\sqrt{\delta}S}{K} + \sqrt{r}\right)$ (for more details see Proposition 3.1 in \cite{heider2007condition}).
Also, an upper bound for $\vert P_{n}(t, S) - \tilde{P}_{n}(t, S) \vert$ can be obtained by comparing Eqs. (\ref{PN}) and (\ref{PN2}) and applying Theorem \ref{quad}. The final result now follows from Lemma \ref{bn}.
\end{proof}
\section{Numerical Experiments}\label{NE}
In this section, we give some numerical evidence concerning the accuracy and the rate of convergence of the method presented in this paper. In this respect, we compute the early exercise boundary as well as the option price for a set of test problems chosen from the literature (see e.g. \cite{ju, kallast}). We also compare our results with a number of alternative approaches, some based on integral equation representations and others belonging to the semi-analytical family of methods.
In the remainder, we denote by FH($d$) the product integration method based on linear barycentric rational interpolation using Floater-Hormann weights of degree $d$ (introduced in Subsection \ref{baryin}).
The combination of Berrut and Floater-Hormann weights (see respectively (\ref{ber}) and (\ref{baryrashi})) is used to compute the early exercise boundary of an American put option which is denoted by BFH($d$) in the sequel.
In order to solve the system of equations (\ref{dis}), a natural idea is to utilize the Newton method, which is a popular choice\footnote{By using the \texttt{fsolve} command in the {MATLAB}$^{\circledR}$ environment.} in the corresponding literature \cite{brunner}. However, due to the complexity of the kernel and forcing functions, such a nonlinear scheme may lead to a potentially time-consuming procedure involving sequential iterative linearization.
In this respect, along with the Newton iteration, we also propose a hybrid ``Newton-interpolation'' scheme which solves the system of equations by the Newton method on a small number of grid points and then interpolates the results linearly between the nodes.
More precisely, we distribute $m-2$ points in each interval $[t_{i}, t_{i+1}]$ and recover $\{\mathcal{B}(t_{i,j})\}_{j=2}^{m-1}$ by linear interpolation of the coarse-grid solution of (\ref{dis}). This approach, combined with the Berrut-Floater-Hormann and Floater-Hormann schemes, is denoted by BFH($d$, $m$) and FH($d$, $m$), respectively, in the remainder. In this case, the total number of grid points is $N= n+(n-1)(m-2)$.
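In outline, the hybrid scheme amounts to the following sketch (illustrative Python; \texttt{coarse\_solver} stands for any coarse-grid Newton solve of (\ref{dis}), e.g. the \texttt{solve\_system} prototype of Subsection \ref{produc}):
\begin{verbatim}
def newton_interpolation(coarse_nodes, coarse_solver, m):
    # Newton solve on the coarse grid, then m-2 linearly interpolated
    # values in each [t_i, t_{i+1}]; N = n + (n-1)(m-2) points in total
    B_coarse = coarse_solver(coarse_nodes)
    fine = [coarse_nodes[0]]
    for a, b in zip(coarse_nodes[:-1], coarse_nodes[1:]):
        fine.extend(np.linspace(a, b, m)[1:])
    fine = np.array(fine)
    return fine, np.interp(fine, coarse_nodes, B_coarse)
\end{verbatim}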
The proposed algorithms are implemented in {MATLAB}$^{\circledR}$ on a PC with 4.00 GHz Intel$^{\circledR}$ Core\textsuperscript{TM} i7 dual processor with 16 GB RAM.
We report our results for the early exercise boundary $\mathcal{B}(t)$ and for the American put value $P(T,S)$ with the parameter set $( K, T, r, \sigma) = (100, 3, 0.08, 0.2)$ and with the dividend yields
$\delta \in \{0, 0.04, 0.08, 0.12\}$
for $n=64$ and $d=3$ in Figures \ref{boundary} and \ref{putvalue}, respectively. In Figure \ref{putvalue}, the dotted lines show the exact put values obtained from the binomial tree model (BIN) with $n = 10,000$ time steps which will be used as the benchmarks in each case.
We have also prepared Table \ref{numericalresults}, which shows the absolute errors of the results and a comparison between the studied test cases. This table confirms that BFH($2$) gives a better result in comparison with the other reported cases. It must be noticed that the KJK column, which utilizes a fixed point method, and the BFH($2$) method are both based on the approximation of the same integral equation.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10.01cm, height=5cm]{Ju.eps}
\caption{The early exercise boundary of an American put obtained from FH($3$) method for $n=64$ and $\delta \in \{0, 0.04, 0.08, 0.12\}.$}
\label{boundary}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10.01cm, height=5cm]{Pfinal.eps}
\caption{The put value $P(T, S)$ for $S= 120$, $n=64$ and $\delta \in \{0, 0.04, 0.08, 0.12\}$. }
\label{putvalue}
\end{center}
\end{figure}
\begin{table}[ht!]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l c | l l l l l l l l}
$S$ & BIN & GJ4 & MGJ2 & LUBA &EXP3 & KJK &KK &BFH($2$)\\[0.5ex]
\hline
&$22.2050$&$22.2079$ & $22.7106$ & $22.1985$ & $22.2084$ & $22.1942$ & $22.1900$&$22.2048$ \\[-1ex]
\raisebox{1.5ex}{$80$}
&- &$2.9$\text{e}$-03$ & $5.1$\text{e}$-01$ & $6.5$\text{e}$-03$ & $3.4$\text{e}$-03$& $1.1$\text{e}$-02$ & $1.5$\text{e}$-02$& $2.0$\text{e}$-04$ \\[1ex]
\hline
&$16.2071$&$16.1639$ & $16.5205$ & $16.1986$ & $16.2106$ & $16.1999$ & $16.1960$&$16.2068$ \\[-1ex]
\raisebox{1.5ex}{$90$}
&-& $4.3$\text{e}$-02$ & $3.6$\text{e}$-01$ & $5.9$\text{e}$-02$ & $7.2$\text{e}$-03$& $1.1$\text{e}$-02$ & $3.9$\text{e}$-03$& $1.1$\text{e}$-04$ \\[1ex]
\hline
&$11.7037$ &$11.7053$ & $11.8106$ & $11.6988$ & $11.7066$ & $11.6991$ & $11.6958$&$11.7037$ \\[-1ex]
\raisebox{1.5ex}{$100$}
& -&$1.6$\text{e}$-03$ & $1.1$\text{e}$-01$ & $4.9$\text{e}$-03$ & $2.9$\text{e}$-03$& $4.9$\text{e}$-03$ & $7.9$\text{e}$-03$& $1.0$\text{e}$-05$ \\[1ex]
\hline
&$8.3671$ &$8.3886$ & $8.4072$ & $8.3630$ & $8.3695$ & $8.3638$ & $8.3613$&$8.3669$ \\[-1ex]
\raisebox{1.5ex}{$110$}
&-& $2.1$\text{e}$-02$ & $4.0$\text{e}$-02$ & $4.1$\text{e}$-03$ & $2.4$\text{e}$-03$& $3.3$\text{e}$-03$ & $5.8$\text{e}$-03$& $2.0$\text{e}$-04$ \\[1ex]
\hline
&$5.9299$ &$5.9435$ & $5.9310$ & $5.9261$ & $5.9323$ & $5.9278$ & $5.9258$&$5.9298$ \\[-1ex]
\raisebox{1.5ex}{$120$}
&-& $1.4$\text{e}$-02$ & $1.1$\text{e}$-03$ & $3.8$\text{e}$-03$ & $2.4$\text{e}$-03$& $2.1$\text{e}$-03$ & $4.1$\text{e}$-03$& $1.0$\text{e}$-04$ \\[1ex]
\end{tabular}}
\caption{
Estimated $3$-year put option values by BFH($2$) for $K = 100 $ and $S$ as listed in the first column of the table. The parameter
set used is $r = \delta = 0.08$, $\sigma = 0.2$ and $n=32$. The other columns are respectively
BIN: the binomial tree model with $n=10000$ time steps;
GJ4: the four-point extrapolation scheme of Geske and Johnson \cite{geske1984american};
MGJ2: the modified two-point Geske and Johnson method of Bunch and Johnson \cite{bunch1992simple};
LUBA: the lower and upper bound approximation of Broadie and Detemple \cite{broadie1996american};
EXP3: the multi-piece exponential functions method of Ju \cite{ju} using the three-point Richardson extrapolation;
KJK: the iteration method of Kim \textit{et al.} \cite{kim2};
KK: the trapezoidal formulas approximations of Kallast and Kivinukk accompanied by the Newton-Raphson iteration \cite{kallast}.}
\label{numericalresults}
\end{table}
In order to gain some insight into the efficiency of the FH($d$), BFH($d$), FH($d$, $m$) and BFH($d$, $m$) methods, we report work-precision diagrams for the proposed methods in Figures \ref{fig:test 1}-\ref{modifiedcom}. As expected, using more nodes requires more time to obtain the approximate solution, with a different rate in each case. Figure \ref{fig:test 1} shows that increasing the number of grid points reduces the absolute error, which confirms the results obtained in Section \ref{thistable}. The same conclusion holds in Figure \ref{fig:test 2}, which shows the computed results for the FH($d$, $m$) method. Furthermore, Figure \ref{modifiedcom} gives clear evidence for choosing the new strategy in the numerical solution of the nonlinear system of equations (\ref{dis}): there is a meaningful difference in computing times when the ``Newton-interpolation'' scheme is used.
In summary, we conclude this section by noting that if speed of computation is the main criterion in choosing a specific pricing framework, the BFH($d$, $m$) method can be used, since it also provides an acceptable error both in the free boundary and in the price.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.08\textwidth, height=0.2\textheight]{image1}
\label{fig:sub51}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.08\textwidth, height=0.2\textheight]{image2}
\label{fig:sub52}
\end{subfigure}
\caption{Work-precision diagrams for the Berrut-Floater-Hormann (BFH) method.}
\label{fig:test 1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.07\textwidth, height=0.2\textheight]{image3}
\caption{}
\label{fig:sub31}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.07\textwidth, height=0.2\textheight]{image4}
\caption{}
\label{fig:sub32}
\end{subfigure}
\caption{Work-precision diagrams for the Floater-Hormann (FH) method.}
\label{fig:test 2}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=10.01cm, height=5cm]{image5.eps}
\caption{Comparison of the Berrut-Floater-Hormann method with the Floater-Hormann method.}
\label{test222}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=10.01cm, height=5cm]{image6.eps}
\caption{Comparison of the Newton-interpolation scheme with the Floater-Hormann method.}
\label{modifiedcom}
\end{center}
\end{figure}
\section{Conclusion and Further Remarks}\label{conclusion}
In this paper, some integral equation representations describing the early exercise boundary of an American option were considered. We also reviewed some numerical approaches employed in the current literature to solve for the early exercise boundary based on these integral equation classes.
The existence and uniqueness issue was discussed for some classes of these integral equations, and the discussion could be extended to other classes. We also discussed the problem of equivalence between these integral equation representations. Employing a revised form of Kim's integral representation of the free boundary, and because of the weakly singular behavior of the kernel, a product integration method based on barycentric rational
quadrature was proposed to compute the American put price. We also provided a theoretical analysis of the proposed method as well as some
numerical evidence concerning the accuracy and efficiency of this framework. This work could be extended by studying the numerical stability of the scheme, as well as by extending the framework to the numerical study of other integral equation classes, especially those leading to Urysohn-type first kind integral equations defined on an unbounded domain. Extension to integral equations arising from more complicated dynamics, such as jump-diffusions, will also be worthy of investigation.
\begin{appendices}
\section{Fourier Transform Approach}\label{forieh}
In the literature, the Fourier transform has been used in both complete and incomplete forms to reformulate the option pricing problem.
\subsection{McKean's Approach}
Let us define the Fourier and incomplete Fourier transforms of $V(t, S)$ as
\begin{equation}\mathcal{F}\{V(t, S) \} = \int_{-\infty}^{\infty}e^{\mathrm{i}\omega S} V(t, S) \mathrm{d}S, \quad \mathcal{F}_{b}\{V(t, S)\} = \int_{b}^{\infty}e^{\mathrm{i}\omega S} V(t, S) \mathrm{d}S, \end{equation}
for $b<S<\infty$. Chiarella et al. \cite{chiarella2014numerical, chia}, inspired by McKean's work \cite{kean}, derived a fully nonlinear Volterra integro-differential equation by applying the change of variable $S = e^{x}$ together with the incomplete Fourier transform to Eq. (\ref{pde}), as follows
\begin{equation}\label{aa1}
\begin{split}
\frac{v(\ln \mathcal{B}(t))}{2} = & \frac{e^{-rt}}{\sigma \sqrt{2 \pi t}}\int_{-\infty}^{\ln \mathcal{B}(0^{+})} e^{-\frac{(\ln \mathcal{B}(t) -u-k\tau)^{2}}{2 \sigma^{2}\tau}} v(u)\mathrm{d}u \\
& + \int_{0}^{t} \frac{e^{-r(t-s)}}{\sigma \sqrt{ 2 \pi (t-s)}} \left[ e^{-h(\ln \mathcal{B}(t), t, s)} Q(\ln \mathcal{B}(t),t, s) \right] \mathrm{d}s.
\end{split}
\end{equation}
In the above formula, we have used the notations
\[ v(x)\equiv \max \{ e^{x} -K, 0 \}, \]
\[ h(x, t, s) = \frac{(x - \ln \mathcal{B}(s) + k(t-s))^{2}}{2 \sigma^{2}(t-s)},\]
\[ Q(x,t, s) = \frac{\sigma^{2} v' (\ln \mathcal{B}(s))}{2} + \left(
\frac{\mathcal{B}'(s)}{\mathcal{B}(s)} + \frac{1}{2} \left[ k - \frac{x- \ln \mathcal{B}(s)}{t-s} \right] \right) v(\ln \mathcal{B}(s)),
\]
and
\[k = r - \delta - \frac{1}{2} \sigma^{2}.\]
\subsection{Chadam-Stamicar-\v{S}ev\v{c}ovi\v{c}'s Approach}
\v{S}ev\v{c}ovi\v{c} \cite{shev2} and Stamicar et al. \cite{stamicar} have utilized the Fourier sine and cosine transforms defined as
\begin{align*}
\mathcal{F}_{s}\{ V(t, S)\} = &\int_{0}^{\infty}V(t, S)\sin(\omega S)\mathrm{d}S,\\
\mathcal{F}_{c}\{ V(t, S)\} = &\int_{0}^{\infty}V(t, S)\cos(\omega S)\mathrm{d}S,\end{align*}
to find the
American option price. In the zero-dividend case, they proved that the early exercise boundary of an American put satisfies the integral equation defined recursively as follows
\begin{equation}\label{Aa2}
\begin{split}
\eta (t) = & - \sqrt{- \ln \left[ \sqrt{\pi} \sqrt{t} \exp \left( \frac{2r}{\sigma^{2}}\right) \left(1- \frac{F(t)}{\sqrt{\pi}} \right) \right]},\\
g(t, \theta) = & \frac{1}{\cos \theta} \left[ \eta (t) -\sin \theta \eta (t \sin^{2}\theta) \right],\\
F(t) = & 2\int_{0}^{\frac{\pi}{2}} \exp \left( - \frac{2r}{\sigma^{2}} t \cos^{2} \theta - g^{2}(t, \theta) \right) \Big\{ \sqrt{t} \sin \theta + g(t, \theta)\tan \theta\Big\} \mathrm{d}\theta,
\end{split}
\end{equation}
and the early exercise boundary is obtained by the formula \[\mathcal{B}(t) = K \exp \left( - \left( \frac{2r}{\sigma^{2}}-1\right) t \right) \exp \left( 2 \sqrt{t} \eta (t)\right).\]
By the change of variable $s= t \sin^{2} \theta $, the following fully nonlinear weakly singular Volterra integral equation is obtained
\begin{equation} \label{shevin}F (t) = \int_{0}^{t}\exp \left( - \frac{2r}{\sigma^{2}} (t-s) - \frac{(\sqrt{t}\eta (t) - \sqrt{s}\eta (s))^{2}}{t-s} \right) \Big\{ 1+ \frac{\sqrt{t}\eta (t) - \sqrt{s}\eta (s)}{t-s} \Big\} \frac{\mathrm{d}s}{\sqrt{t-s}}.\end{equation}
For the dividend-paying case, they have extended this approach (written here for a call option) and have shown that the early exercise boundary satisfies
\begin{equation}\label{shevon}
\begin{split}
\mathcal{B}(t) = & \frac{r K}{\delta} \Big( 1+ \frac{\sigma}{r \sqrt{2\pi t}}
\exp \Big( -rt - \frac{(A(t,s)+ \ln(\frac{r}{\delta}))^{2}}{2 \sigma^{2}t} \Big)\Big) \\
& + \frac{1}{\sqrt{2 \pi}}\int_{0}^{t} \Big[ \sigma + \frac{1}{\sigma}
(1 - \frac{\delta \mathcal{B}(s)}{r K})\frac{A(t,s)}{t-s} \Big] \frac{\exp \Big( -r (t-s) - \frac{A(t,s)^{2}}{2 \sigma^{2}(t-s)}\Big)}{\sqrt{t-s}}\mathrm{d}s,
\end{split}
\end{equation}
where the function $A$ is defined as
\[A(t,s) = \ln \frac{\mathcal{B}(t)}{\mathcal{B}(s)} + \left( r- \delta - \frac{\sigma^{2}}{2}\right) \left( t-s\right). \]
\section{Laplace Transform Approach}\label{laplas}
As is usual in the literature on partial differential equations, this transformation can be used to reduce the dimension of the equation. This idea has been used by some researchers to find an appropriate solution of the free boundary problem by reducing it to an integral equation, as reviewed in the following.
\subsection{Knessl's Approach} Knessl \cite{knessl} uses the idea of ``moving reference frame'' to convert the free boundary problem (\ref{pde})-(\ref{tah}) into a fixed boundary value problem.
By introducing new variables, he converts Eq. (\ref{pde}) into a PDE with constant coefficients
\begin{align}
p_{t}&=p_{xx}+(\rho -1)p_{x}, \quad x>b(t), \quad t>0,\\
b(0)&=0,\\
p(0,x)&=e^{x}-1, \quad x\geq 0,\\
p(t,b(t))&=e^{\rho t}-1, \quad p_{x}(t,b(t))=0, \quad t>0.
\end{align}
Then, introducing the new variable $y=x-b(t)$, the free boundary problem is converted into a fixed boundary value problem given as
\begin{align}
p_{t}&=p_{yy}+\Big[\rho-1+b'(t)\Big]p_{y}, \quad y>0, \quad t>0,\\
p(0,y)&=e^{y}-1, \quad y\geq0,\\
p(t,0)&=e^{\rho t}-1, \quad p_{y}(t,0)=0, \quad t>0.
\end{align}
Applying the Laplace transform
\[\mathcal{L}\{p(t,y)\} = \int_{0}^{\infty} p(t,y)e^{-sy}\mathrm{d}y,\]
to this PDE leads to the following nonlinear integral equation for $b(t)$:
\begin{equation}\label{bb1}
\frac{1}{s-1} = \frac{2r}{\sigma^{2}}\int_{0}^{\infty} \exp \Big(\frac{2r}{\sigma^{2}} t -s {b}(t) -s \big(s + \frac{2r}{\sigma^{2}} -1\big)t \Big) \mathrm{d}t, \quad \Re (s) > 1, \end{equation}
and finally the early exercise boundary is obtained as $\mathcal{B}(t)=K e^{b(t)}$. It is seen that the above equation is a Fredholm integral equation of the first kind.
\subsection{Mallier-Alobaidi's Approach}
The Laplace transform in time is applied to Eq. (\ref{pde}) with the conditions (\ref{smoothput1})-(\ref{tah}); in order to tackle the difficulty that the Black-Scholes-Merton PDE holds only up to the free boundary, Mallier and Alobaidi utilize an incomplete Laplace transform and obtain an integral equation for the early exercise boundary.
To introduce this approach, we define the notations
\[
S_{0} = \frac{Kr}{\delta}, \quad \alpha^{+}= \frac{1}{2\sigma^{2}}\left[ \sigma^{2} -2(r-\delta) + \sqrt{4 \delta^{2} - 8 \delta r + 4 \delta \sigma^{2} + 4 r^{2} + 4\sigma^{2}r + \sigma^{4} } \right].
\]
Let $S^{*} = \frac{K}{1-\frac{1}{\alpha^{+}}}$. It can be shown that for $r>\delta>0$, the early exercise boundary of the American call satisfies the following equation
\begin{equation}\label{l1}
\begin{split}
\int_{S_{0}}^{S^{*}} S^{\frac{-1}{2 \sigma^{2}}\left( 2 \delta -2r + 3\sigma^{2}-\lambda(p)\right) } F(S)\mathrm{d}S& = \frac{1}{4}e^{pT}K^{\frac{-1}{2\sigma^{2}}\left( 2 \delta -2r -3 \sigma^{2}-\lambda(p) \right) }\\
& \times \left[ 1 - (\frac{r}{\delta})^{\frac{-1}{2\sigma^{2}}\left( 2 \delta -2r - \sigma^{2}-\lambda(p) \right) } \right] \\
& \times \left[ \frac{2\delta -2r - \sigma^{2}+ \lambda(p)}{p+ \delta} - \frac{2\delta -2r + \sigma^{2}+ \lambda(p)}{p+ r} \right],
\end{split}
\end{equation}
where
\[ \lambda (p) = \sqrt{ 4 \delta^{2} - 8 \delta r + 4 \delta \sigma^{2} + 4 r^{2} + 4\sigma^{2}r + \sigma^{4} + 8\sigma^{2} p},\]
and
\[ F(S) = (S - K)e^{p T_{f}(S)} - \left[ (r-\delta)(K-S)S - \sigma^{2}S^{2} \right] T'_{f}(S) -\frac{1}{2}\sigma^{2}S^{2}(S-K)T''_{f}(S). \]
In the above equation, $T_{f}(S)$ is the early exercise boundary in the Laplace space \cite{gada1}. Furthermore, it can be proved that the early exercise boundary of the American put solves the equation
\begin{equation}\label{l2}
\int_{S^{*}}^{K} S^{-\frac{1}{2\sigma^{2}}\left[ 2\delta -2r +3 \sigma^{2} + \lambda(p)\right]} F(S) \mathrm{d}S = 0. \end{equation}
Both of equations, (\ref{l1}) and (\ref{l2}) could be categorized as the Urysohn integral equations of the first kind.
\section{Mellin Transform Approach}\label{melina}
\subsection{Mellin Transform}The Mellin transform of $V(t,S)$ defined by
\[\mathcal{M}\{V(t, S)\} = \int_{0}^{\infty} V(t, S)S^{\omega -1}\mathrm{d}S,\]
is applied to Eq. (\ref{pde}) with the conditions (\ref{putcon})-(\ref{tah}) to obtain the following inhomogeneous ordinary differential equation
\[
\frac{d\widehat{P}}{dt} + \Big( \frac{\sigma^2}{2}(\omega^{2}+\omega)-r\omega -r \Big)\widehat{P} = \frac{-rK}{\omega}(\mathcal{B}(t))^{\omega}.
\]
Solving this ODE gives
\begin{align*}
\widehat{P}(t, \omega) =A(\omega)e^{-\frac{1}{2}\sigma^{2}Q(\omega)t} +\frac{rK}{\omega}\int_{t}^{T}(\mathcal{B}(s))^{\omega}e^{\frac{1}{2}\sigma^{2}Q(\omega)(s-t)}\mathrm{d}s,
\end{align*}
where $Q(\omega) = \omega^{2} + \omega \left( 1- \frac{2(r-\delta)}{\sigma^{2}}\right) - \frac{2r}{\sigma^{2}}$.
Finally using the inversion of the Mellin transform, we arrive at the following representation for the put price
\begin{align*}
P(t,S) =& \frac{1}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+\mathrm{i} \infty} \widehat{\theta}(\omega) e^{\frac{1}{2}\sigma^{2} Q(\omega)(T-t)} S^{-\omega}\mathrm{d}\omega \\
&+ \frac{rK}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+\mathrm{i}\infty} S^{-\omega} \int_{t}^{T}\frac{(\mathcal{B}(s))^{\omega}}{\omega}e^{\frac{1}{2}\sigma^{2}Q(\omega)(s-t)}\mathrm{d}s\mathrm{d}\omega.
\end{align*}
The above approach has been studied in \cite{frontczak2008pricing, panini} and it gives the following fully nonlinear Volterra integral equation for the early exercise boundary
\begin{equation}\label{melin}
\begin{split}
\mathcal{B}(t) - K = p(t,\mathcal{B}(t)) & + \frac{1}{2\pi \mathrm{i}}\int_{c- \mathrm{i} \infty}^{c+\mathrm{i} \infty}\int_{t}^{T} \frac{r K}{\omega} \left( \frac{\mathcal{B}(t)}{\mathcal{B}(s)} \right) ^{-\omega} e^{\frac{1}{2} \sigma^{2}Q(\omega)(s-t)} \mathrm{d}s \mathrm{d}\omega\\
& - \frac{1}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+\mathrm{i} \infty}\int_{t}^{T} \frac{r \mathcal{B}(t)}{\omega +1} \left( \frac{\mathcal{B}(t)}{\mathcal{B}(s)} \right) ^{-\omega} e^{\frac{1}{2} \sigma^{2}Q(\omega)(s-t)} \mathrm{d}s \mathrm{d}\omega.
\end{split}
\end{equation}
It could be shown that Eq. (\ref{melin}) is equivalent to Eq. (\ref{kim}) via the convolution property of the Mellin transform (for more details see \cite{frontczak2008pricing}).
\subsection{Modified Mellin Transform} Let $C^{E}(t,S)$ denote the European call option price. Since $C^{E}(t,S)= \mathcal{O}(1)$ for $S\rightarrow 0^{+}$ and $C^{E}(t,S)= \mathcal{O}(S)$ as $S\rightarrow\infty$, Frontczak and Sch\"{o}bel \cite{patrik2} proposed a modified Mellin transform defined by
\[\mathcal{M}(C^{E}(t,S), -\omega):=\int_{0}^{\infty}C^{E}(t,S) S^{-(\omega +1)}\mathrm{d}S. \]
They have shown that the price of an European call option is given by
\[C^{E}(t,S)=\frac{1}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+ \mathrm{i} \infty}K^{-\omega +1}\Big(\frac{1}{\omega-1}-\frac{1}{\omega}\Big)e^{\frac{1}{2}\sigma^{2}Q(\omega)(T-t)}S^{\omega}\mathrm{d}\omega,\]
which is equivalent to the Black-Scholes-Merton formula for the European call price. They also showed that the price of an American call can be obtained from
\begin{align}
C^{A}(t,S) =& C^{E}(t,S)+ \frac{1}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+\mathrm{i} \infty}\int_{t}^{T} \frac{\delta \mathcal{B}(s)}{\omega -1}\Big(\frac{S}{\mathcal{B}(s)}\Big)^{\omega}e^{\frac{1}{2}\sigma^{2}Q(\omega)(s-t)}\mathrm{d}s\mathrm{d}\omega\\
&-\frac{1}{2\pi \mathrm{i}}\int_{c-\mathrm{i} \infty}^{c+\mathrm{i} \infty}\int_{t}^{T} \frac{rK}{\omega}\Big( \frac{S}{\mathcal{B}(s)}\Big)^{\omega} e^{\frac{1}{2}\sigma^{2}Q(\omega)(s-t)}\mathrm{d}s\mathrm{d}\omega,\notag
\end{align}
which is equivalent to Eq. (\ref{price}).
\section{Green's Function Approach}\label{green1}
\subsection{Zero-Dividend Case}
We consider Eq. (\ref{pde}) with the initial and boundary conditions (\ref{putcon}) and rewrite it in dimensionless form with the variables:
\[ \rho = \frac{2r}{\sigma^{2}}, \quad S=Ke^{x}, \quad t=T-\frac{2}{\sigma^{2}}\tau, \quad b(t) = \log\Big[\frac{\mathcal{B}(t)}{K}\Big], \quad P(t,S)= Kp(x,\tau).\]
The solution of the reformulated PDE, which is given by Green's identity, is as follows
\begin{equation}
p(\tau, x) = \int_{-\infty}^{0}(1-e^{y})\Gamma(\tau, x-y)\mathrm{d}y + \rho \int_{0}^{\tau}\int_{-\infty}^{b(\tau - s)}\Gamma(s, x-y)\mathrm{d}y\mathrm{d}s,
\end{equation}
where
\[\Gamma (\tau, x) = \frac{1}{\sqrt{4 \pi \tau}}e^{-\frac{[x+(\rho - 1)\tau]^{2}}{4 \tau}- \rho \tau}. \]
The above expression for the price solves Eq. (\ref{pde}), while the early exercise boundary satisfies the following integral and integro-differential equations
\begin{equation}
\begin{split}
\int_{0}^{\tau}\Gamma (s, b(\tau))\mathrm{d}s =& \, \rho \int_{0}^{\tau}\int_{b(\tau -s)}^{0} \Gamma (s, b(\tau)-y)\mathrm{d}y\mathrm{d}s,\\
\int_{0}^{\tau} \Gamma_{x}(s, b(\tau)) + \rho \Gamma (s, b(\tau))\mathrm{d}s =& \,\rho \int_{0}^{\tau} \Gamma (s, b(\tau) - b(\tau -s))\mathrm{d}s,\\
\Gamma ( \tau, b(\tau)) = &\, - \rho \int_{0}^{\tau} \Gamma(s,b(\tau) - b(\tau -s)) b'(\tau -s )\mathrm{d}s,\\
\Gamma ( \tau, b(\tau)) = & \, \frac{\rho}{2} + \rho \int_{0}^{\tau} \Gamma_{x}(s,b(\tau) - b(\tau -s)) - \Gamma (s,b(\tau) - b(\tau -s)) \mathrm{d}s,\\
b'(\tau) = &\, - \frac{2 \Gamma_{x}( \tau, b(\tau))}{\rho} - 2 \int_{0}^{\tau} \Gamma_{x} (s,b(\tau) - b(\tau -s)) b'(\tau -s)\mathrm{d}s.
\end{split}
\end{equation}
\subsection{Non-Zero Dividend Case}
The above discussion can be extended to the non-zero dividend case. Let us introduce the Green's function for (\ref{pde})
\[ G(x, \tau; \xi,s) = \Gamma(x-\xi, \tau-s)e^{\rho(\tau-s)}. \]
For the case of $\delta >0 $, the price of the American put option is given by
\begin{eqnarray}
P(\tau, x) & = & \int_{0}^{\infty} (e^{\xi} - 1)\Gamma(x-\xi, \tau) e^{\rho \tau} \mathrm{d}\xi \\\notag
& & + \int_{0}^{\tau}\int_{b(s)}^{\infty} ( e^{\xi} - \rho) e^{\rho (\tau-s)} \Gamma(x-\xi, \tau-s) \mathrm{d}\xi \mathrm{d}s, \\\notag
& =& I^{(1)}(\tau, x) + I^{(2)}(\tau, x),
\end{eqnarray}
(for more details see \cite{evans}). The payoff condition implies $P_{\tau}( \tau, b(\tau)) = 0$, which gives
\begin{equation}\label{gens}
\frac{\partial I^{(1)}}{\partial \tau} [\tau, b(\tau)] = - \lim_{x\rightarrow b(\tau)} \frac{\partial I^{(2)}}{\partial \tau} [\tau,x].
\end{equation}
Eq. (\ref{gens}) is a nonlinear weakly singular Volterra integral equation.
The early exercise boundary introduced in (\ref{pde}) can be obtained from $\mathcal{B}(t) = K e^{b(t)}$.
\section{Optimal Stopping Approach} \label{stop}
It is a well-known fact that in a complete market, using arbitrage arguments, one can use the existence of a unique equivalent martingale measure $Q$ to derive a unique price for both European and American option contracts. Among the early contributions to this field of research, one could mention \cite{ben, karat, myneni}, in which the authors show that the price of an American put option can be represented as the supremum of the expected discounted payoff over all admissible stopping times, $\tau$, of the form
\begin{equation}\label{optimal}
V(t,x) = \sup_{0\leq \tau \leq T-t} E_{t,x}\left( e^{-r\tau}\left( K- X_{t+\tau}\right)^{+} \right),
\end{equation}
where $E_{t,x}[\cdot] = E_{Q}[\cdot|X_{t} = x]$.
In (\ref{optimal}), the stochastic process $X=(X_{t+s})_{s\geq 0}$ satisfies the geometric Brownian motion differential equation of the form
\[ d X_{t+s} = r X_{t+s} ds + \sigma X_{t+s}d B_{s}, \quad X_{t}= x,\]
with the exact solution
\[ X_{t+s} = x \exp \left( \sigma B_{s} + (r-\frac{\sigma^{2}}{2})s \right), \]
in which $B = (B_{s})_{s\geq 0}$ denotes the standard Brownian motion process starting at zero and $ (x, t) \in (0, T]\times \Bbb{R} $ is given beforehand.
Under some regularity conditions on $V$, applying It\^{o}'s formula to $e^{-rs}V(t+s, X_{t+s})$ and taking the $P_{t,x}$-expectation on both sides of the resulting identity, we obtain ``the early exercise premium representation'' of the form
\begin{equation}
\begin{split}
V(t,x) = e^{-r(T-t)}E_{t,x}(G(X_{T})) + rK \int_{0}^{T-t}e^{-ru} P_{t,x}(X_{t+u} \leq \mathcal{B}(t+u))\mathrm{d}u,
\end{split}
\end{equation}
where $G(x) = (K-x)^{+}$ (for more details see \cite{peskir}).
Applying the accompanying conditions (\ref{putcon}), one obtains the integral equation
\begin{dmath*}
K-\mathcal{B}(t)=e^{-r(T-t)}\int_{0}^{K}\aleph\Big(\dfrac{1}{\sigma \sqrt{T-t}}(\log(\dfrac{K-s}{\mathcal{B}(t)})-(r-\frac{\sigma^{2}}{2})(T-t))\Big)\mathrm{d}s+rK\int_{0}^{T-t}e^{-rs}\aleph\Big(\dfrac{1}{\sigma \sqrt{s}}(\log(\dfrac{\mathcal{B}(t+s)}{\mathcal{B}(t)})-(r-\dfrac{\sigma^{2}}{2})s)\Big)\mathrm{d}s,
\end{dmath*}
in the non-dividend-paying case. Furthermore, when the option pays dividends, Kim \cite{kim} has employed the risk-neutral valuation framework of Cox and Ross \cite{cox} to obtain the nonlinear integral equation (\ref{kim}).
\end{appendices}
\bibliographystyle{plain}
For several years now, elaborating upon an idea proposed in
\cite{ZHSL}, we have been pursuing the problem of deriving
(hypothetically exact) formulas for the proportion of states of
qubit-qubit \cite{avron} and qubit-qutrit \cite{sudarshan}
systems that are {\it separable}
(classically-correlated) in nature
\cite{slaterHall,slaterA,slaterC,slaterOptics,slaterJGP,slaterPRA,pbsCanosa,slaterPRA2}.
Of course, any such proportions will critically depend upon the measure
that is placed upon
the quantum
systems. In particular, we have---in analogy to (classical) Bayesian
analyses, in which the {\it volume element} of the {\it Fisher information}
metric for a parameterized family of probability distributions
is utilized as a measure (``Jeffreys' prior'')
\cite{kass}---principally
employed the volume elements of the well-studied (Euclidean, flat)
Hilbert-Schmidt (HS) and
Bures ({\it minimal} monotone or symmetric-logarithmic-derivative [SLD])
metrics (as well as a number of other
[non-minimal] {\it monotone} metrics \cite{slaterJGP}).
\.Zyczkowski and Sommers \cite{szHS,szBures}
have, using methods of random matrix theory \cite{random}
(in particular, the Laguerre ensemble), obtained formulas,
general for all $n$, for the
HS and Bures {\it total} volumes (and hyperareas) of $n \times n$
(real and complex)
quantum systems. Up to normalization factors, the HS total volume
formulas were also found by Andai \cite{andai}, in a rather different
analytical framework, using a number of
(spherical and beta) integral identities
and positivity (Sylvester) conditions. (He also obtained
formulas---general for any monotone metric [including the Bures]---for the
volume of {\it one}-qubit [$n=2$] states \cite[sec. 4]{andai}.)
Additionally, Andai did specifically study
the HS {\it quaternionic} case. He derived the
HS total volume for $n \times n$ quaternionic systems \cite[p. 13646]{andai},
\begin{equation} \label{andaiQuatVol0}
V_{quat}^{HS} = \frac{(2 n-2)!
\pi^{n^2-n}}{(2 n^2-n-1)!} \Pi_{i=1}^{n-2} (2 i)!,
\end{equation}
giving us for the two-qubit ($n=4$) case that will be our specific
initial interest here,
the 27-dimensional volume,
\begin{equation} \label{andaiQuatVol}
\frac{\pi ^{12}}{7776000} \cdot \frac{1}{40518448303132800} =
\frac{\pi ^{12}}{315071454005160652800000}
\approx 2.93352 \cdot 10^{-18}.
\end{equation}
(In the analytical setting employed by \.Zyczkowski and Sommers \cite{szHS},
this volume would appear as
$2^{12}$ times as large \cite[p. 13647]{andai}.)
If one then possessed a
companion volume formula for the {\it separable}
subset, one could immediately compute
the HS two-qubit quaternionic separability {\it probability}
($P^{HS}_{quat}$) by taking
the ratio of the two volumes.
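(As a quick numerical sanity check, of ours, formula (\ref{andaiQuatVol0}) specializes at $n=4$ to the value quoted in (\ref{andaiQuatVol}); the function name below is our own.)
\begin{verbatim}
# Python sketch (ours): numerical check of Andai's quaternionic HS volume
# formula, specialized to the two-qubit case n = 4.
from math import factorial, pi

def andai_quat_volume(n):
    prod = 1
    for i in range(1, n - 1):                  # Pi_{i=1}^{n-2} (2i)!
        prod *= factorial(2 * i)
    return factorial(2 * n - 2) * pi ** (n * n - n) * prod \
        / factorial(2 * n * n - n - 1)

print(andai_quat_volume(4))                    # ~2.93352e-18
print(pi ** 12 / 315071454005160652800000)     # same value, from the text
\end{verbatim}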
In fact, following a convenient paradigm we have developed, and will employ
several times below, in varying contexts, we will
compute $P^{HS}_{quat}$ as the product ($R_1 R_2$) of two {\it ratios},
$R_1$ and $R_2$. The first (24-dimensional) factor on the left-hand side of
(\ref{andaiQuatVol})
will serve
as the denominator of $R_1$ and the second (3-dimensional)
factor, as the denominator of $R_2$. The determinations
of the {\it numerators}
of such pairs of complementary ratios will
constitute, in essence, our (initial) principal computational
challenges.
\subsection{Bloore parameterization of density matrices} \label{secBloore}
One analytical approach to the separable volume/probability
question that has
recently proved to be productive \cite{slater833}---particularly, in
the case of the Hilbert-Schmidt (HS) metric (cf. \cite{slaterDyson})---makes
fundamental use of a (quite elementary)
form of density matrix parameterization first
proposed by Bloore \cite{bloore}. This methodology
can be seen to be strongly related
to the very common and long-standing use of {\it correlation matrices}
in statistics and its many fields of application \cite{joe,kurowicka,kurowicka2}.
(Correlation matrices can be obtained by standardizing {\it covariance}
matrices. Density matrices have been viewed as covariance matrices of
multivariate normal [Gaussian] distributions \cite{guiasu}. Covariance
matrices for certain observables
have been used to study the separability of finite-dimensional
quantum systems \cite{guhne}. The possible
states of polarization of a two-photon system are describable by six
Stokes parameters and a $3 \times 3$ ``polarization correlation'' matrix
\cite{vanik}.)
In the Bloore (off-diagonal scaling)
parameterization, one simply represents an off-diagonal {\it ij}-entry
of a density matrix $\rho$, as $\rho_{ij} = \sqrt{\rho_{ii} \rho_{jj}} w_{ij}$,
where $w_{ij}$ might be real, complex or quaternionic
\cite{asher2,adler,batle2} in nature.
The particular attraction of the Bloore scheme, in terms of the
separability problem in which we are interested, is that one can
(in the two-qubit case) implement
the well-known Peres-Horodecki separability (positive-partial-transpose)
test \cite{asher,michal}
using only the ratio,
\begin{equation} \label{firstratio}
\mu =\sqrt{\nu} = \sqrt{\frac{\rho_{11} \rho_{44}}{\rho_{22}
\rho_{33}}},
\end{equation}
rather than the four (three independent)
diagonal entries of $\rho$ individually \cite[eq. (7)]{slaterPRA2}
\cite[eq. (5)]{slater833}.
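As a concrete illustration, the following minimal sketch (ours; the diagonal entries and the single Bloore entry are purely illustrative) builds a real two-qubit density matrix in the Bloore parameterization and applies the positive-partial-transpose test.
\begin{verbatim}
# Python sketch (ours): Bloore parameterization and the Peres-Horodecki
# test for a real two-qubit density matrix; the entries are illustrative.
import numpy as np

def bloore_density(diag, w):
    d = np.sqrt(np.asarray(diag))       # rho_ij = sqrt(rho_ii rho_jj) w_ij
    return np.outer(d, d) * np.asarray(w)

def partial_transpose(rho):
    # transpose the second-qubit index of the 4x4 = (2x2)x(2x2) matrix
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def is_ppt(rho, tol=1e-12):
    return np.linalg.eigvalsh(partial_transpose(rho)).min() >= -tol

diag = [0.4, 0.25, 0.25, 0.1]
w = np.eye(4)
w[0, 3] = w[3, 0] = 0.3                 # one real Bloore entry
mu = np.sqrt(diag[0] * diag[3] / (diag[1] * diag[2]))
print(mu, is_ppt(bloore_density(diag, w)))
\end{verbatim}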
Utilizing the Bloore parameterization, we have, accordingly, been able to reduce the problem
of computing the desired HS volumes of two-qubit separable states
to the computations
of {\it one}-dimensional integrals (\ref{Vsmall})
over $\mu \in [0,\infty]$. The
associated integrands are the
{\it products} of
{\it two}
functions, one a readily determined jacobian function
$\mathcal{J}(\mu)$ (corresponding, first,
to
the transformation to the Bloore variables $w_{ij}$
and, then, to $\mu$)
and the other,
the more problematical (what we have termed)
{\it separability function} $\mathcal{S}^{HS}(\mu)$
\cite[eqs. (8), (9)]{slaterPRA2}.
In the qubit-{\it qutrit} case [sec.~\ref{QubQut}], {\it two}
ratios,
\begin{equation} \label{tworatios}
\nu_{1}= \frac{\rho_{11} \rho_{55}}{\rho_{22} \rho_{44}}, \hspace{.2in}
\nu_{2}= \frac{\rho_{22} \rho_{66}}{\rho_{33} \rho_{55}},
\end{equation}
are required to express the separability conditions (choosing to compute
the partial transpose by transposing four $3 \times 3$ blocks, rather than
nine $2 \times 2$ blocks of the $6 \times 6$ density matrices),
but analytically the corresponding HS separability
functions also appear to be {\it univariate} in nature, being
simply functions of either $\nu_1$ or of $\nu_2$ singly, or
the product \cite[sec. III]{slater833},
\begin{equation}
\eta = \nu_1 \nu_2 =\frac{\rho_{11} \rho_{66}}{\rho_{33} \rho_{44}}.
\end{equation}
\subsection{Euler-angle parameterization of density matrices}
Here, one can again divide the set of parameters into two groups, in a
natural manner (that is,
the diagonal and off-diagonal parameters in the Bloore
framework). Now, the two sets are composed of the eigenvalues of $\rho$ and
of the Euler angles parameterizing the associated unitary matrix of
eigenvectors \cite{sudarshan,tbs}.
With both forms of parameterizations we have discussed,
one can obtain the total volume of $n \times n$ quantum
systems as the {\it product} of integrals over the two complementary sets
\cite{szHS,szBures}.
But this direct approach
no longer holds in terms of computing the separable volume.
So, we have evolved the following general strategy
\cite{slaterPRA2,slater833}.
We integrate over the larger set (off-diagonal or Euler-angle
parameters), {\it while} enforcing separability conditions, leaving us
with {\it separability functions} that are functions of only the {\it smaller}
set of parameters (diagonal entries or eigenvalues). Doing so, of course,
substantially reduces the dimensionality of the problem.
We are, then, left with such separability functions and the
ensuing task of
appropriately integrating these functions
over the remaining parameters (diagonal
entries or eigenvalues), so as to obtain the requisite {\it separable
volumes}.
\subsection{Immediately preceding studies}
In our extensive numerical (quasi-Monte Carlo integration)
investigation \cite{slaterPRA2} of the 9-dimensional and 15-dimensional
convex sets of real and complex $4 \times 4$ density matrices,
we had formulated ans{\"a}tze for the two associated separability
functions ($\mathcal{S}^{HS}_{real}(\mu)$ and $\mathcal{S}^{HS}_{complex}(\mu)$),
proposing that
they were
proportional to certain (independent)
{\it incomplete beta functions} \cite{handbook},
\begin{equation}
B_{\mu^2}(a,b) =\int_{0}^{\mu^2} \omega^{a-1} (1-\omega)^{b-1} d \omega,
\end{equation}
for particular values of $a$ and $b$.
However, in
the subsequent study
\cite{slater833},
we were led to somewhat
modify these ans{\"a}tze, in light of multitudinous
exact {\it lower}-dimensional results obtained there. Since
these further results clearly manifested patterns fully consistent with
the {\it Dyson index} (``repulsion exponent'')
pattern ($\beta =1, 2, 4$)
of random matrix theory \cite{dyson}, we proposed
that, in the (full 9-dimensional) real case,
the separability function was proportional to a specific
incomplete
beta
function ($a=\frac{1}{2},b=2$),
\begin{equation}
\mathcal{S}^{HS}_{real}(\mu) \propto B_{\mu^2}(\frac{1}{2},2)
\equiv \frac{2}{3} (3 -\mu^2) \mu
= \frac{2|\rho|^{\frac{1}{2}}}
{3 \rho_{22} \rho_{33}} (3 \rho_{22} \rho_{33}- \rho_{11} \rho_{44})
\end{equation}
and in the complex case, proportional, not just
to an independent function, but
simply to the {\it square} of
$\mathcal{S}^{HS}_{real}(\mu)$.
(These proposals are strongly consistent
\cite[Fig. 4]{slater833} with the numerical
results generated in \cite{slaterPRA2}.) This
chain of reasoning, then,
immediately suggests the further proposition
that the separability function in
the {\it quaternionic} case is exactly proportional to the {\it fourth}
power of that for the real case (and, obviously, the square of that for
the complex case). It is that specific proposition we will, first,
seek to evaluate here.
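(As an elementary check, of ours, of the incomplete-beta closed form quoted above:)
\begin{verbatim}
# Python sketch (ours): verify B_{mu^2}(1/2, 2) = (2/3)(3 - mu^2) mu;
# scipy's betainc is the regularized function, so multiply by B(a, b).
from scipy.special import betainc, beta as beta_fn

for mu in (0.2, 0.5, 0.9):
    lhs = betainc(0.5, 2.0, mu ** 2) * beta_fn(0.5, 2.0)
    rhs = (2.0 / 3.0) * (3.0 - mu ** 2) * mu
    print(mu, lhs, rhs)                 # the two columns agree
\end{verbatim}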
\subsection{Objectives}
We seek below (sec.~\ref{secQuat}) to further test the validity of our
Dyson-index ansatz, first advanced in \cite{slater833},
as well as possibly develop an enlarged perspective
on the still not yet fully
resolved problem of the two-qubit
HS separability probabilities
in all three (real, complex and quaternionic) cases.
(In \cite{slater833}, we proposed, combining numerical and theoretical
arguments--not fully rising to the level of a formal
demonstration--that in the real two-qubit case, the HS separability probability
is $\frac{8}{17}$, and in the complex
two-qubit case, $\frac{8}{33}$.) A supplementary treatment of
the {\it truncated} quaternionic scenario ($\beta=3$) is presented in
sec.~\ref{secTrunc}.
We investigate related separability-function questions
in the qubit-{\it qutrit} framework, again making use of
the Hilbert-Schmidt metric (sec.~\ref{QubQut}), and,
also in the two-qubit setting, employing the Bures (minimal monotone) metric
(sec.~\ref{secBures}).
Since it becomes more problematical to obtain separability functions
in the Bures case, we explore--as originally proposed in \cite{slaterJPAreject}--the use of the $SU(4)$ Euler-angle parameterization of Tilma, Byrd and
Sudarshan \cite{tbs} for similarly-minded purposes (sec.~\ref{secEuler}).
\section{Bloore-parameterization separability functions} \label{Bp}
\subsection{Quaternionic two-qubit Hilbert-Schmidt analysis}
\label{secQuat}
Due to the ``curse of dimensionality'' \cite{bellman,kuosloan},
we must anticipate that for the
same number of sample ("low-discrepancy" Tezuka-Faure (TF)
\cite{giray1,tezuka})
points generated in the quasi-Monte
Carlo integration
procedure employed in \cite{slaterPRA2} and here, our numerical estimates
of the quaternionic separability function will be less precise than
the estimates were for
the complex, and {\it a fortiori}, real cases.
(An interesting, sophisticated alternative
approach to computing the Euclidean volume
of {\it convex} bodies involves a variant of
{\it simulated annealing} \cite{lovasz} (cf. \cite{dyer}), and allows one---unlike the Tezuka-Faure approach we have so far employed---to
establish confidence intervals for estimates.)
Our first extensive numerical
analysis here involved the generation of sixty-four million 24-dimensional
Tezuka-Faure points, all situated in
the 24-dimensional unit hypercube $[0,1]^{24}$.
(The three independent
diagonal entries of the density matrix $\rho$---being incorporated
into the jacobian $\mathcal{J}(\mu)$---are
irrelevant at this stage of the calculations
of $\mathcal{S}^{HS}_{quat}(\mu)$.
The 24 [off-diagonal] Bloore variables had been linearly
transformed so that each ranged over the unit interval [0,1].
The computations were done over several weeks, using compiled Mathematica
code, on a MacMini workstation.)
Of the sixty-four million sample points
generated, 7,583,161, approximately 12$\%$,
corresponded to possible
$4 \times 4$ quaternionic density matrices---satisfying nonnegativity
requirements. For each of these feasible points, we evaluated whether or not
the Peres-Horodecki positive-partial-transpose separability test was
satisfied for 2,001 equally-spaced values of $\mu \in [0,1]$.
Here, we encounter another computational ``curse'',
in addition to that already
mentioned pertaining to the high-dimensionality of our problem,
and also the infeasibility
of most ($88\%$) of the sampled Tezuka-Faure points. In the standard
manner \cite[eq. (5.1.4)]{random}
\cite[p. 495]{adler} \cite[eq. (17)]{slaterJMP1996}
\cite[sec. II]{JIANG},
making use of
the Pauli matrices, we
transform the $4 \times 4$ {\it quaternionic} density matrices---and their
partial transposes---into
$8 \times 8$ density matrices with [only] complex entries.
Therefore, given a feasible 24-dimensional point, we have to check
for each of the 2,001 values of $\mu$, an $8 \times 8$
matrix for nonnegativity, rather than a $4 \times 4$ one, as was done in
both the real and complex two-qubit cases. In all three of these cases,
we found that it would be incorrect
to simply assume---which would, of course, speed computations---that
if the separability test is passed for a certain $\mu_{0}$,
it will also be passed for all $\mu$ lying between $\mu_{0}$ and 1.
This phenomenon reflects the intricate (quartic
{\it both} in $\mu$ and in the Bloore variables $w_{ij}$'s,
in the real and complex cases) nature of the
polynomial separability constraints
\cite[eq. (7)]{slaterPRA2} \cite[eq. (5)]{slater833}.
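For reference, a schematic sketch of ours of the quaternionic-to-complex reduction used in these nonnegativity checks follows; the $2 \times 2$ block embedding is the standard symplectic one, the scan over the 2,001 values of $\mu$ is omitted, and the sample matrix below is illustrative.
\begin{verbatim}
# Python sketch (ours): embed a 4x4 quaternionic matrix, with entries
# a + b i + c j + d k stored as (a, b, c, d), into an 8x8 complex matrix,
# whose nonnegativity is then tested exactly as described in the text.
import numpy as np

def quat_block(a, b, c, d):
    return np.array([[a + 1j * b,  c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def embed(Q):
    M = np.zeros((8, 8), dtype=complex)
    for i in range(4):
        for j in range(4):
            M[2 * i:2 * i + 2, 2 * j:2 * j + 2] = quat_block(*Q[i][j])
    return M

def is_psd(M, tol=1e-12):
    return np.linalg.eigvalsh(M).min() >= -tol

# Illustrative diagonal (hence real) quaternionic density matrix:
Q = [[(0.25 if i == j else 0.0, 0.0, 0.0, 0.0) for j in range(4)]
     for i in range(4)]
print(is_psd(embed(Q)))                 # True
\end{verbatim}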
\subsubsection{Estimated separability function and probability}
In Fig.~\ref{fig:quatsepfunct} we show the estimate we, thus, were able
to obtain
of the two-qubit quaternionic separability function
$\mathcal{S}^{HS}_{quat}(\mu)$, in its normalized form.
(Around $\mu=1$, one must have the evident symmetrical relation
$\mathcal{S}^{HS}(\mu) = \mathcal{S}^{HS}(\frac{1}{\mu})$.) Accompanying our estimate
in the plot is
the (well-fitting) hypothetical true
form (according with our Dyson-index ansatz
\cite{slater833}) of the HS two-qubit separability function,
that is, the {\it fourth} power, $\Big(\frac{1}{2} (3 -\mu^2) \mu\Big)^4$, of the
normalized form of $\mathcal{S}^{HS}_{real}(\mu)$.
\begin{figure}
\includegraphics{quatsepfunct.jpg}
\caption{\label{fig:quatsepfunct}Estimate---based on 64,000,000 sampled
24-dimensional points---of the normalized form
of the two-qubit {\it quaternionic}
separability function $S^{HS}_{quat}(\mu)$, along with its (well-fitting)
hypothetical true form,
the {\it fourth} power of the normalized
form of $\mathcal{S}^{HS}_{real}(\mu)$, that is,
$\Big(\frac{1}{2} (3 -\mu^2) \mu\Big)^4$}
\end{figure}
For the specific, important value of
$\mu=1$--implying that $\rho_{11}
\rho_{44}=\rho_{22} \rho_{33}$--the ratio ($R_{1}$) of the 24-dimensional
HS measure ($m_{sep} = R^{numer}_{1}$)
assigned in our
estimation procedure to separable
density matrices to the (known) total 24-dimensional
HS measure ($m_{tot} =R^{denom}_{1}$)
allotted to all (separable and
nonseparable) density matrices is $R_{1} = 0.123328$.
The exact value of $m_{sep}$ is, of course, to begin here, unknown, being
a principal desideratum of our investigation. On the other hand,
we can directly deduce that
$m_{tot} = R_{1}^{denom} =
\frac{\pi ^{12}}{7776000} \approx 0.118862$---our sample
estimate being 0.115845---by dividing the two-qubit HS quaternionic
27-dimensional volume
(\ref{andaiQuatVol}) obtained by Andai \cite{andai} by
\begin{equation} \label{rationalfraction}
R_{2}^{denom} =2 \int_{0}^{1} \mathcal{J}_{quat}(\mu) d \mu =
\frac{\Gamma \left(\frac{3 \beta }{2}+1\right)^4}{\Gamma
(6 \beta +4)} =
\frac{1}{40518448303132800} \approx 2.46801 \cdot 10^{-17},
\hspace{.1in} \beta = 4.
\end{equation}
Here, $\mathcal{J}_{quat}(\mu)$ is the quaternionic jacobian function
(Fig.~\ref{fig:quatjacobian}),
obtained by transforming the
quaternionic Bloore
jacobian $\Big(\rho_{11} \rho_{22} \rho_{33}
(1-\rho_{11} -\rho_{22} -\rho_{33})\Big)^\frac{3 \beta}{2}$, $\beta=4$,
to the $\mu$ variable by replacing, say $\rho_{33}$ by $\mu$,
and integrating out $\rho_{11}$ and $\rho_{22}$.
(We had presented plots of $\mathcal{J}_{real}(\mu)$
and $\mathcal{J}_{complex}(\mu)$
in \cite[Figs. 1, 2]{slaterPRA2}, and observed
apparently highly oscillatory behavior in both functions in
the vicinity of $\mu=1$.
However, a referee of \cite{slater833} informed us that this was simply
an artifact of using standard machine precision, and that with
sufficiently enhanced
precision [only recently available for plotting purposes in Mathematica
6.0]--as now employed in Fig.~\ref{fig:quatjacobian}--the
oscillations could be seen to be, in fact, illusory.)
\begin{figure}[!tbp]
\includegraphics{quatjacobian.jpg}
\caption{\label{fig:quatjacobian}The univariate quaternionic jacobian function
$\mathcal{J}_{quat}(\mu)$}
\end{figure}
We theoretically can obtain the two-qubit
quaternionic {\it separability}
probability $P^{HS}_{sep/quat}$ by multiplying the true value
(which we do not beforehand know, but seek) of the ratio
$R_{1}$ by a second (known, computable) ratio $R_{2}$.
The denominator of $R_{2}$ has already been given
(\ref{rationalfraction}).
The {\it numerator} of $R_{2}$ is the specific value
\begin{equation} \label{R2numerator}
R_{2}^{numer} = 2 \int_{0}^{1} \mathcal{J}_{quat}(\mu)
\Big(\frac{1}{2} (3 -\mu^2) \mu\Big)^4 d \mu = \frac{5989}{358347086242825680000}
\approx 1.67128 \cdot 10^{-17},
\end{equation}
where, to obtain the integrand,
we have multiplied (in line with our basic
[Bloore-parameterization] approach to the separability
probability question)
the quaternionic jacobian function (Fig.~\ref{fig:quatjacobian})
by the
(normalized) putative
form of the two-qubit quaternionic separability function.
(Note the use of the $\beta=4$ exponent.)
The counterpart of $R^{numer}_{2}$ in the 9-dimensional {\it real} case is
$\frac{1}{151200}$ and in the 15-dimensional {\it complex} case,
$\frac{71}{99891792000}$. We further note, regarding the last
denominator, that
\begin{equation}
99891792000 = \left(
\begin{array}{c}
\text{11} \\
\text{2}
\end{array}
\right) \frac{\Gamma{(16)}}{\Gamma{(7)}}
\end{equation}
is the coefficient
of $\mu^2$ in $11 !
L_{11}^{4}(\mu)$ and $\frac{151200}{2}= 75600$ plays the exact same role
in $6! L_{6}^{4}(\mu)$, where $L_{m}^{4}(\mu)$ is a generalized ($a=4$)
Laguerre polynomial (see sequences A062260 and A062140
in {\it The On-Line Encyclopedia of Integer
Sequences}). (Also, as regards the denominator of (\ref{R2numerator}),
$\frac{358347086242825680000}{3587352665} = 99891792000$.)
(\.Zyczkowski and Sommers had
made use of the Laguerre ensemble in deriving
the HS and Bures volumes and hyperareas of $n$-level quantum systems
\cite{szHS,szBures}. Generalized (associated/Sonine) Laguerre
polynomials [``Laguerre functions''] have been employed
in another important quantum-information context, in
proofs of Page's conjecture on the average entropy of a subsystem
\cite{sanchez,sen}.)
We, thus, have, for our two-qubit quaternionic case, that
\begin{equation} \label{ratio2}
R_{2} = \frac{R_{2}^{numer}}{R_{2}^{denom}} =
\frac{125769}{185725} \approx 0.677179.
\end{equation}
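(This exact rational value can be confirmed with integer arithmetic; a sketch of ours:)
\begin{verbatim}
# Python sketch (ours): exact check that R2 reduces to 125769/185725,
# using Gamma(7)^4 / Gamma(28) = 720**4 / 27! for the denominator.
from fractions import Fraction
from math import factorial

r2_denom = Fraction(720 ** 4, factorial(27))
r2_numer = Fraction(5989, 358347086242825680000)
r2 = r2_numer / r2_denom
print(r2, float(r2))                    # 125769/185725 ~ 0.677179
\end{verbatim}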
(The {\it real} counterpart of $R_{2}$ is
$\frac{1024}{135 \pi ^2} \approx 0.76854$, and the
{\it complex} one, $\frac{71}{99} \approx 0.717172$.
Additionally, we computed that the
corresponding ``truncated'' quaternionic \cite{pfaff}
ratio---when {\it one} of the four quaternionic parameters is set to
zero, that is the Dyson-index
case $\beta=3$--- is $\frac{726923214848}{106376244975
\pi ^2} \approx 0.692379$. Thus, we see that these four
important ratios $R_{2}(\beta)$
monotonically
decrease as $\beta$ increases, and also, significantly, that the two
ratios for odd values of $\beta$
differ qualitatively---both having $\pi^2$ in their denominators---from
those two for even $\beta$.)
Our quasi-Monte Carlo (preliminary)
estimate of the two-qubit quaternionic separability
{\it probability} is, then,
\begin{equation} \label{r1r2}
P_{sep/quat}^{HS} \approx R_{1} R_{2} =0.0813594.
\end{equation}
Multiplying the total volume of the 27-dimensional convex set of
two-qubit quaternionic states, given in the framework
of Andai \cite{andai} by (\ref{andaiQuatVol}), by this result
(\ref{r1r2}), we
obtain the two-qubit quaternionic separable volume
estimate $V^{HS}_{sep/quat} \approx 2.38775 \cdot 10^{-19}$.
Our 24-dimensional quasi-Monte Carlo integration procedure
leads to a derived estimate of (the total 27-dimensional volume)
$V^{HS}_{quat}$, that was
somewhat smaller, $2.85906
\cdot 10^{-18}$, than the actual value $2.93352 \cdot 10^{-18}$
given by (\ref{andaiQuatVol}).
Although rather satisfying, this
was sufficiently imprecise to discourage us
from further attempting to ``guestimate'' the
(all-important) constant ($R_{1}$) by which to multiply
the putative normalized form, $(\frac{1}{2} (3-\mu^2) \mu)^4$,
of the quaternionic separability function in (\ref{R2numerator})
in order to yield
the true separable volume.
In our previous study \cite[sec. IX.A]{slater833}, we presented certain plausibility
arguments to the effect that
the corresponding $R_{1}$
constant in the 9-dimensional real case was
$\frac{135 \pi^2}{2176} = (\frac{20 \pi^4}{17})/(\frac{512 \pi^2}{27})$, and
$\frac{24}{71} =(\frac{256 \pi^6}{639})/(\frac{32 \pi^6}{27})$ in the 15-dimensional complex case.
(This leads---multiplying by the corresponding $R_{2}$'s,
$\frac{1024}{135 \pi^2}$ and $\frac{71}{99}$---to
separability probabilities of $\frac{8}{17}$ and
$\frac{8}{33}$, respectively.)
\subsubsection{Supplementary estimation of $R_{1}$ constant} \label{supp1}
In light of such imprecision, in our initial estimates,
we undertook a supplementary
analysis, in which, instead of examining each feasible 24-dimensional
TF point
for 2,001 possible values of $\mu$, with respect to separability or not,
we simply used $\mu=1$. This, of course,
allows us to significantly increase the number of
points generated from the 64,000,000 so far employed.
We, thus, generated 1,360,000,000 points, finding
that we obtained a remarkably good fit to the important ratio
$R_{1}$ of the 24-dimensional measure ($m_{sep}$),
at $\mu=1$, assigned to the separable
two-qubit quaternionic density matrices to the (known)
measure
($m_{tot}=\frac{\pi^{12}}{7776000}$) by setting $R_{1} =(\frac{24}{71})^2
\approx 0.114263$ (our sample estimate of this quantity
being the very close 0.114262). This is
{\it exactly} the square of the corresponding ratio $\frac{24}{71}$ we had
conjectured (based on extensive numerical and theoretical evidence) for
the full (15-dimensional) complex two-qubit case
in \cite{slater833}.
\subsubsection{Conjectured complex and quaternionic
separability functions and probabilities}
Under this hypothesis on $R_{1}$ for $\beta=4$, we have the ensuing
string of relationships
\begin{equation} \label{bigfish}
\mathcal{S}^{HS}_{quat}(\mu) = \Big(\frac{24}{71}\Big)^2 \Big(\frac{1}{2} (3 -\mu^2) \mu\Big)^4
= \Big(\frac{6}{71}\Big)^2 \Big( (3 -\mu^2) \mu\Big)^4
=\Big( \mathcal{S}^{HS}_{complex}(\mu) \Big)^2,
\end{equation}
with (as already advanced in \cite{slater833}),
\begin{equation}
\mathcal{S}^{HS}_{complex}(\mu) =
\frac{24}{71} \Big(\frac{1}{2}
(3-\mu^2) \mu\Big)^2= \frac{6}{71} \Big((3-\mu^2) \mu\Big)^2.
\end{equation}
Then, using our knowledge of the complementary ratio $R_{2}$, given in
(\ref{ratio2}), we obtain the desired exact result,
\begin{equation} \label{HSquat}
P^{HS}_{sep/quat} = R_{1} R_{2} = \frac{72442944}{936239725}
=\frac{2^6 \cdot 3^3 \cdot 7 \cdot 53 \cdot 113}{5^2 \cdot 17 \cdot 19 \cdot 23 \cdot 71^2}
\approx 0.0773765,
\end{equation}
(the complex counterpart being $\frac{8}{33}$),
as well as---in the framework of Andai \cite{andai}---that
\begin{equation}
V^{HS}_{sep/quat} =\frac{5989 \pi ^{12}}{24386773433626137413880000000}
\approx 2.26986 \cdot 10^{-19}.
\end{equation}
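(The product $R_{1} R_{2}$ can likewise be checked exactly; a sketch of ours:)
\begin{verbatim}
# Python sketch (ours): exact product R1 * R2 = (24/71)^2 * 125769/185725.
from fractions import Fraction
p = Fraction(24, 71) ** 2 * Fraction(125769, 185725)
print(p, float(p))                      # 72442944/936239725 ~ 0.0773765
\end{verbatim}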
\subsection{{\it Truncated} quaternionic analysis ($\beta=3$)} \label{secTrunc}
For possible further insight into the HS two-qubit separability
probability question,
we undertook a parallel quasi-Monte Carlo (Tezuka-Faure) integration
(setting $\mu=1$) for
the truncated quaternionic case ($\beta=3$), in which one of the four
quaternionic parameters is set to zero. Although there was
no corresponding formula
for the HS total volume for this scenario given in \cite{andai},
upon request, A. Andai
kindly derived the result
\begin{equation} \label{puzzling}
V^{HS}_{trunc} = \frac{\pi ^{10}}{384458588946432000}
\approx 2.43584 \cdot 10^{-13}.
\end{equation}
In fact, Andai was able to derive {\it one} simple
overall comprehensive formula,
\begin{equation}
V^{HS}_{n,\beta}= \frac{\pi^{\frac{\beta n (n-1)}{4}}}{\Gamma (\beta \frac{n (n-1)}{2} +n)}
\Pi_{i=1}^{n-1} \Gamma(\frac{i \beta}{2}+1)
\end{equation}
yielding
the total HS volumes for all $n \times n$ systems and Dyson indices $\beta$.
Let us, further, note that Andai obtains
the result (\ref{puzzling}) as the product of three factors,
$V^{HS}_{trunc} =\pi_1 \pi_2 \pi_3$, where
\begin{equation} \label{3factors}
\pi_1 = \frac{128 \pi ^8}{105}; \hspace{.1in} \pi_2 =\frac{128}{893025};
\hspace{.1in}
\pi_3=\frac{189 \pi ^2}{12696335643836416}.
\end{equation}
Now, we will simply {\it assume}---in line with our basic Dyson-index
ansatz, substantially supported in
\cite{slater833} and above---that
the corresponding separability function is of the form
\begin{equation}
\mathcal{S}^{HS}_{trunc}(\mu) \propto
( (3 -\mu^2) \mu)^\beta, \hspace{.15in} \beta=3.
\end{equation}
(Of course, one should ideally {\it test} this
specific application of the ansatz too, perhaps using the
same quasi-Monte Carlo method we have
applied to the $\beta =4$ instance above [Fig.~\ref{fig:quatsepfunct}].)
We were somewhat perplexed, however, by the results of our quasi-Monte Carlo
integration procedure, conducted in the 18-dimensional space of
off-diagonal entries of the truncated quaterionic density matrix
$\rho$. Though, we
anticipated (from our previous
extensive numerical experience here and elsewhere) that
the estimate of the associated 18-dimensional volume would be, at least,
within a few percentage points of $\pi_1 \pi_2 =
\frac{16384 \pi ^8}{93767625} \approx
1.65793$, our actual
estimate was, in fact,
close to 0.967 (the value 1, thus, falling within the possible margin of error).
Assuming the correctness of the analysis of Andai, which we have no other
reason to doubt, the only possible explanations seemed to be that we had
committed some programming error (which
we were unable to discern) or that we had
some conceptual misunderstanding regarding the analysis of truncated
quaternions. (Let us note that we do convert the $4 \times 4$ density matrix
to $8 \times 8$ [complex] form
\cite[p. 495]{adler} \cite[eq. (17)]{slaterJMP1996}
\cite[sec. II]{JIANG}, while it appears that Andai does not
directly employ such a transformation in his derivations.)
In any case, we did
devote considerable computing time to the $\beta=3$ problem
(generating 1,180,000,000 18-dimensional Tezuka-Faure
points), with the hope being
that if we were in some way in error, the error would be
an {\it unbiased}
one, and
that the all-important {\it ratio} of separable to total volume
would be unaffected.
Proceeding thusly, our best estimate
({\it not} making use of the Andai result (\ref{puzzling}) for the present)
of the HS separability probability
was 0.193006. One interesting possible candidate exact value
is, then, $\frac{128}{663} = \frac{2^7}{3 \cdot 13 \cdot 17}
\approx 0.193062$. (Note the presence of 128 in the numerators, also,
of both factors
$\pi_1$ and $\pi_2$, given in (\ref{3factors}).)
This would give us a
counterpart [$\beta=3$] value for the ratio $R_{1}$ of
$\frac{160446825 \pi ^2}{5679087616} \approx 0.278838$.
In \cite{slater833}, we had asserted that, in the other
odd $\beta=1$ case, the counterpart of $R_{1}$ was
$\frac{135 \pi ^2}{2176} \approx 0.612315$. (Multiplying this by
$\frac{1024}{135 \pi^2}$ gave us the
conjectured HS {\it real} two-qubit separability probability
of $\frac{8}{17}$.)
So, let us say that although we believe we have successfully
resolved---though still not having formal proofs---the
two-qubit Hilbert-Schmidt
separability probability question for the $\beta =2$ and 4
(complex and quaternionic) cases, the odd ($\beta =1, 3$) cases, in
particular $\beta =3$, appear at this point to be
more problematical.
\subsection{Real and complex Qubit-{\it Qutrit} Hilbert-Schmidt Analyses}
\label{QubQut}
For qubit-qutrit systems, we have previously reported
\cite[eq. (44)]{slater833},
following the lines of our (Bloore-parameterization-based)
two-qubit analyses,
that rather than the use of one ratio variable $\mu$, in implementing the
Peres-Horodecki positive-partial-transpose test for
separability, it is necessary
to employ two (corresponding specifically here to the case where
the partial transpose is implemented by transposing the four $3 \times 3$
blocks of the $6 \times 6$ density matrix $\rho$ in place) variables,
already
presented in
(\ref{tworatios}).
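As a concrete illustration (ours; the diagonal entries below are hypothetical), the block-transposition convention and the two ratios can be coded directly.
\begin{verbatim}
# Python sketch (ours): partial transpose of a 6x6 qubit-qutrit state by
# transposing its four 3x3 blocks in place, and the ratios nu_1, nu_2.
import numpy as np

def partial_transpose_6(rho):
    pt = rho.copy()
    for i in (0, 3):
        for j in (0, 3):
            pt[i:i + 3, j:j + 3] = rho[i:i + 3, j:j + 3].T
    return pt

d = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])  # illustrative diagonal
nu1 = d[0] * d[4] / (d[1] * d[3])
nu2 = d[1] * d[5] / (d[2] * d[4])
print(nu1, nu2, nu1 * nu2)                          # eta = nu1 * nu2
\end{verbatim}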
Once again, employing the Tezuka-Faure quasi-Monte Carlo methodology, we
generated 133,545 30-dimensional and 1,950,000 20-dimensional {\it feasible}
points, corresponding now
to the off-diagonal Bloore parameters of $6 \times 6$
complex and real
density matrices, respectively.
(Each analysis
was run on a MacMini workstation
for a number of weeks.)
The much larger number of feasible Tezuka-Faure points generated in the
real case was primarily due to our reparameterization in that case
of the Bloore off-diagonal entries (essentially correlations)
in terms of {\it partial} correlations
\cite{joe,kurowicka,kurowicka2} (cf. \cite{budden}).
This allowed us to somewhat mitigate the computational ``curse''
of high dimensionality, in that {\it each} sampled point now corresponds to a
density matrix and {\it none} (theoretically, at least)
has to be discarded. (H. Joe has demonstrated that it is
possible to also implement this approach in the complex case,
but the programming
challenges for us were substantially
greater, so we have not yet pursued such
a course.) Of the 2,250,000 20-dimensional
points, 38,622 were discarded because certain numerical difficulties
(mainly convergence problems) arose in transforming to
the partial correlations. The 133,545 feasible 30-dimensional points
were drawn from a much larger pool of 430,000,000 sampled points.
For each feasible sampled point we tested whether the
associated
(real or complex)
$6 \times 6$ density matrix was separable or not (that is,
whether or not it passed the Peres-Horodecki test) for all possible
pairs of $\nu_{1}$ and $\nu_{2}$ ranging from 0 to 1 in
increments of $\frac{1}{100}$--that is, $101^2=10,201$ Peres-Horodecki positive-partial-transpose tests were performed for {\it each}
feasible sampled Tezuka-Faure point.
We present the two estimated bivariate separability functions in Fig.~\ref{fig:QubQut} and Fig.~\ref{fig:QubQut2} (cf. \cite[Figs. 3, 5]{slater833}).
\begin{figure}
\includegraphics{QbQtReal.jpg}
\caption{\label{fig:QubQut}Interpolated estimate
over the unit square of the real qubit-qutrit
separability function
$S_{real/qub-qut}^{HS}(\nu_{1},\nu_{2})$, based on 2,211,378
20-dimensional Tezuka-Faure points. For {\it each} of these points,
10,201 associated
$6 \times 6$ density matrices, parameterized by $\nu_1 \in [0,1]$
and $\nu_2 \in [0,1]$, were
tested for separability.}
\end{figure}
\begin{figure}
\includegraphics{QbQtComplex.jpg}
\caption{\label{fig:QubQut2}Interpolated estimate
over the unit square of the complex qubit-qutrit
separability function $S_{complex/qub-qut}^{HS}(\nu_{1},\nu_{2})$, based on 133,545
{\it feasible}
30-dimensional Tezuka-Faure points. For each point, 10,201 associated
$6 \times 6$ density matrices, parameterized by $\nu_1 \in [0,1]$ and
$\nu_2\in [0,1]$, were
tested for separability.}
\end{figure}
In Fig.~\ref{fig:Comparison} we present a test of our Dyson-index
HS
separability-function ansatz by subtracting from Fig.~\ref{fig:QubQut2}
the {\it square} of the function in Fig.~\ref{fig:QubQut},
which has been normalized
so that its value at $\nu_{1}=1,\nu_{2}=1$ equals that of the raw,
unadjusted complex separability function.
\begin{figure}
\includegraphics{SepFunctDiff.jpg}
\caption{\label{fig:Comparison}The complex
qubit-qutrit separability function shown in Fig.~\ref{fig:QubQut2} minus
the {\it square} of the real qubit-qutrit separability
function (Fig.~\ref{fig:QubQut}), the latter function normalized so
that the value in the plot at $\nu_1=1,\nu_2=1$ is 0. Note, importantly,
the greatly-reduced $z$-axis scale {\it vis-{\`a}-vis} that of
Fig.~\ref{fig:QubQut2}}
\end{figure}
One should, of course, note the greatly-reduced $z$-axis scale from
Fig.~\ref{fig:QubQut2}, indicating close adherence to the HS Dyson-index
separability-function ansatz, which it has been a principal goal
of this study to test.
Now, in Fig.~\ref{fig:Transect} we plot two very closely-fitting curves.
One is the {\it complex}
separability function holding $\nu_1=\nu_2$, and the other,
the {\it square} of the
{\it real} separability function also holding $\nu_1=\nu_2$, but normalized
to equal the first (complex) function at the point
$(1,1)$. This is also compelling
evidence for the validity of the HS Dyson-index ansatz.
\begin{figure}
\includegraphics{Transect.jpg}
\caption{\label{fig:Transect}The complex qubit-qutrit separability function
(Fig.~\ref{fig:QubQut2}) and the normalized square of the real function
(Fig.~\ref{fig:QubQut}), holding $\nu_1=\nu_2$. By construction, the two
curves are equal at $\nu_1=\nu_2=1$. The
observed closeness of the two curves
would be suggested by the HS Dyson-index ansatz}
\end{figure}
Also, in \cite{slater833}, we indicated that it strongly appeared that though
{\it two} ratio variables, $\nu_1$ and $\nu_2$,
given in (\ref{tworatios}), are {\it ab initio}
necessary in the qubit-qutrit analysis, it seems that upon further analysis
they coalesce
into a product
\begin{equation}
\eta =\nu_1 \nu_2 = \frac{\rho_{11} \rho_{66}}{\rho_{33} \rho_{44}},
\end{equation}
and the
separability function problem becomes actually simply univariate
in nature, rather than bivariate. This aspect needs, of course, to be more
closely evaluated in light of our new numerical results.
In fact, one candidate HS separability function of such a {\it univariate}
nature which can be seen to fit
our estimated functions (Figs.~\ref{fig:QubQut}
and \ref{fig:QubQut2})
both very well (when appropriately normalized and/or squared) is
(Fig.~\ref{fig:NewFit})
\begin{equation} \label{newcandidate}
\mathcal{S}^{HS}_{real/qub-qut}(\nu_1 \nu_2) \propto
1-\left(1-\nu _1 \nu _2\right)^{\frac{5}{2}} = 1-(1-\eta)^{\frac{5}{2}}
= \frac{5}{2} B_{\eta}(1,\frac{5}{2}).
\end{equation}
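(The last equality is elementary; a quadrature check of ours:)
\begin{verbatim}
# Python sketch (ours): check (5/2) B_eta(1, 5/2) = 1 - (1 - eta)^{5/2}
# by direct quadrature of the incomplete beta integral.
from scipy.integrate import quad

for eta in (0.1, 0.5, 0.9):
    b, _ = quad(lambda w: (1.0 - w) ** 1.5, 0.0, eta)   # B_eta(1, 5/2)
    print(eta, 2.5 * b, 1.0 - (1.0 - eta) ** 2.5)
\end{verbatim}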
In Fig.~\ref{fig:NewFit} we show the fit of this function to the
estimated qubit-qutrit real separability function (Fig.~\ref{fig:QubQut}).
\begin{figure}
\includegraphics{QubQutFit.jpg}
\caption{\label{fig:NewFit}
The estimated qubit-qutrit real separability function (Fig.~\ref{fig:QubQut})
minus the candidate function (\ref{newcandidate}), the latter function being
scaled so the plotted value at $(\nu_1,\nu_2)=(1,1)$ is 0.}
\end{figure}
In Fig.~\ref{fig:LS1} we show the sum-of-squares of the fit of the
one-parameter family of functions $1-(1- \nu_1 \nu_2)^{\gamma}$ to the
normalized estimated real qubit-qutrit separability function
(Fig.~\ref{fig:QubQut}).
(For $\gamma=\frac{5}{2}$ we obtain (\ref{newcandidate}). We observe that
the minimum of the curve lies in the neighborhood of
$\gamma=\frac{5}{2}$.)
\begin{figure}
\includegraphics{LeastSquares1.jpg}
\caption{\label{fig:LS1}The sum-of-squares (SS) of the fit of the
one-parameter family of functions $1-(1- \nu_1 \nu_2)^{\gamma}$ to the
normalized estimated real qubit-qutrit separability function
(Fig.~\ref{fig:QubQut}). For $\gamma=2.5$, we obtain the
candidate separability function (\ref{newcandidate})}
\end{figure}
In Fig.~\ref{fig:LS2} we show the sum-of-squares of the fit of the
{\it two}-parameter family of functions
$1-(1- (\nu_1 \nu_2)^{\theta})^{\gamma}$ to the
normalized estimated real qubit-qutrit separability function
(Fig.~\ref{fig:QubQut}).
(For $\gamma=\frac{5}{2},\theta=1$
we obtain (\ref{newcandidate}). We observe that
the minimum of the sum-of-squares surface lies in the neighborhood of
$\gamma=\frac{5}{2},\theta=1$.)
\begin{figure}
\includegraphics{LeastSquares2.jpg}
\caption{\label{fig:LS2}The sum-of-squares (SS) of the fit of the
{\it two}-parameter family of functions
$1-(1- (\nu_1 \nu_2)^{\theta})^{\gamma}$ to the
normalized estimated real qubit-qutrit separability function
(Fig.~\ref{fig:QubQut}). For $\gamma=2.5,\theta=1$, we obtain the
candidate separability function (\ref{newcandidate})}
\end{figure}
In the framework of Andai \cite{andai}, the total volume of
the 25-dimensional convex set of real $6 \times 6$ density matrices
can be represented as the product
\begin{equation}
V^{HS}_{qub-qut/real} = \frac{8192 \pi ^6}{253125}
\cdot \frac{25 \pi ^3}{1399771004732964864}
= \frac{\pi ^9}{1730063650258944000}
\approx 1.72301 \cdot 10^{-14}
\end{equation}
and the total volume of
the 35-dimensional convex set of complex $6 \times 6$ density matrices
as the product
\begin{equation}
V^{HS}_{qub-qut/complex} = \frac{\pi ^{15}}{86400000} \cdot
\frac{1}{3460550346681745424512204800}
\end{equation}
\begin{displaymath}
= \frac{\pi ^{15}}{298991549953302804677854494720000000}
\approx 9.58494 \cdot 10^{-29}.
\end{displaymath}
In both of these products, the first (20- or 30-dimensional)
factor serves as the denominator
of the ratio $R_{1}$ and the second (5-dimensional)
factor, as the denominator of the
ratio $R_{2}$. (The numerator of $R_{1}$ is the 20- or 30-dimensional mass
assigned to the {\it separable} density matrices, and the numerator
of $R_{2}$ is the integral of the product
of the corresponding suitably normalized separability
function and Bloore jacobian over the 5-dimensional unit simplex.)
Under the assumption of the correctness of (\ref{newcandidate}),
we find that the qubit-qutrit counterparts of the important constants
$R_{2}$ employed above in the two-qubit case ((\ref{ratio2}) and
immediately below there) are
\begin{equation}
R_{2_{qub-qut}}(\beta=1) = 1-\frac{4194304}{4849845 \pi }
\approx 0.724715,
\end{equation}
\begin{equation}
R_{2_{qub-qut}}(\beta=2) = \frac{-44632342463+68578836480 \log (2)}{4190140110}
\approx 0.692789,
\end{equation}
\begin{equation}
R_{2_{qub-qut}}(\beta=4) =
\end{equation}
\begin{displaymath}
\frac{192210846322598002116984324520591-277301145703236210250598232096768
\log (2)}{501570554133080277487570824}
\end{displaymath}
\begin{displaymath}
\approx 0.675902,
\end{displaymath}
for the real, complex and quaternionic cases, respectively. Again, we
note a monotonic decrease as the Dyson index $\beta$ increases.
Also, for the truncated quaternionic ($\beta=3$) case, we found
\begin{equation}
R_{2_{qub-qut}}(\beta=3) =
\end{equation}
\begin{displaymath}
-\frac{967504709}{552123}-\frac{18446744073709551616
(-67294453713397888+5638997741091 \pi
)}{71729672378917671400466262753675 \pi ^2}
\end{displaymath}
\begin{displaymath}
\approx 0.681261.
\end{displaymath}
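(These closed forms are readily evaluated numerically; a sketch of ours for the first two constants:)
\begin{verbatim}
# Python sketch (ours): decimal values of the first two constants above,
# illustrating the monotonic decrease in the Dyson index beta.
from math import pi, log

r2_beta1 = 1 - 4194304 / (4849845 * pi)
r2_beta2 = (-44632342463 + 68578836480 * log(2)) / 4190140110
print(r2_beta1, r2_beta2)               # ~0.724715 > ~0.692789
\end{verbatim}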
Our sample estimates of the complementary $R_{1}$ constants were 0.226468
in the real case, and 0.047679 in the complex case.
(We note that these estimates should be
{\it independent} of the choice
(correctness) of separability functions (\ref{newcandidate})
and, in the complex case, the square of (\ref{newcandidate}).)
Forming the
products $R_{1} R_{2}$ based on these estimates, we obtain an
estimated real separability probability of 0.164125 and complex separability
probability of 0.0330446.
Our numerical analyses have been concerned only with pairs of values
of $\nu_1$ and $\nu_2$ lying within the (bounded) unit square
$[0,1] \times [0,1]$. We have made the implicit assumption that
the substance of our analyses would not be altered/biased if we were able to
incorporate all possible pairs lying within the {\it unbounded} quadrant
$[0,\infty] \times [0,\infty]$. (Clearly, for points $\nu_1>1,\nu_2>1$,
simply
by symmetry considerations, we immediately
expect the separability function to be
proportional to $1-(1-\frac{1}{\eta})^{\frac{5}{2}}$. For pairs of points
$(\nu_1,\nu_2)$
for which one member is greater than 1 and the other less than one,
we expect the form the separability function takes to depend on whether
or not $\eta= \nu_1 \nu_2 >1$.)
\subsubsection{Relations to previous qubit-qutrit analyses
\cite{slaterPRA2,slater833}}
In \cite{slaterPRA}, we
had undertaken
a large-scale numerical
(again, Tezuka-Faure quasi-Monte Carlo
integration) analysis of the separable volumes of
the 35-dimensional convex set of {\it complex} qubit-qutrit systems, endowed with the Hilbert-Schmidt, as well as a number of monotone (including the
Bures) metrics. The estimate we obtained there of the HS separability
probability was 0.0266891. As we pointed out in our subsequent study
\cite[p. 14305]{slater833}, this is remarkably close to
$\frac{32}{1199} \approx 0.0266889$.
In the subsequent study
\cite{slater833}, which was chiefly devoted to the case of
two-qubit systems, we had included
supplementary analyses of the real and complex
qubit-qutrit systems. But there we had only employed--in the interest of
alacrity--Monte Carlo (random number), rather than
(``lower-discrepancy'')
quasi-Monte Carlo methods. So, the results presented in this paper,
we believe
should be more accurate and informative. (Also, rather than sampling grids
for $\nu_1,\nu_2$ of size $101 \times 101$, we had employed
grids of sizes $50 \times 50$ in the real case, and $20 \times 20$ in the complex case.) In \cite[sec.~10]{slater833}, we had
put forth the tentative hypothesis that the corresponding real
qubit-qutrit separability function was simply proportional to
$\sqrt{\eta} = \sqrt{\nu_1 \nu_2}$ (having a somewhat similar profile to
our present candidate
(\ref{newcandidate}), but definitely providing an inferior fit to our
numerical results here).
\subsection{Bures two-qubit separability functions and probabilities} \label{secBures}
Now, in our present study,
we shall somewhat parallel the sequential approach of
\.Zyczkowski and Sommers in that they, first, computed the {\it
total} volume of
(separable {\it and} nonseparable) $n \times n$ density matrices in terms of
the (flat or Euclidean)
Hilbert-Schmidt metric
\cite{szHS} \cite[secs. 9.6-9.6, 14.3]{ingemarkarol}, and then,
using the
fundamentally important
Bures (minimal monotone) metric \cite[sec. 14.4]{ingemarkarol}
\cite{szBures}.
(In particular, they employed the Laguerre ensemble of random matrix theory
\cite{random}
in both sets of computations (cf. \cite{andai}). The Bures and HS metrics
were compared by Hall \cite{hall}, who concluded that the Bures induced
the ``minimal-knowledge ensemble'' (cf. \cite{slatersrednicki}), also
noting that in the single-qubit case, the Bures metric ``may be recognized
as the spatial part of the Robertson-Walker metric in general relativity''.)
That is, we will seek now
to extend the form of analysis applied in the Hilbert-Schmidt context in
\cite{slater833} to the Bures setting.
\subsubsection{Review of earlier parallel Hilbert-Schmidt findings}
To begin, let us review the most elementary findings reported in
\cite[sec. II.A.1]{slater833}.
The simplest (four-parameter) scenario studied there posits a
$4 \times 4$ density matrix $\rho$ with
fully general diagonal entries ($\rho_{11}, \rho_{22}, \rho_{33},
\rho_{44} = 1 -\rho_{11}-\rho_{22}-\rho_{33}$) and only one pair of real off-diagonal non-zero entries,
$\rho_{23}=\rho_{32}$. The HS separability function
for that scenario was found to take
the form
\cite[eq. (20)]{slater833},
\begin{equation} \label{equationA}
\mathcal{S}^{HS}_{[(2,3)]}(\mu) =
\begin{cases}
2 \mu & 0\leq \mu \leq 1 \\
2 & \mu >1
\end{cases},
\end{equation}
where we primarily employ
the variable $\mu=
\sqrt{\frac{\rho_{11} \rho_{44}}{\rho_{22} \rho_{33}}}$, rather than
$\nu=\mu^2$, as in \cite{slater833,slaterPRA2}.
Allowing the 23- and 32-entries to be complex conjugates of one another,
we further found for the corresponding
separability function \cite[eq. (22)]{slater833}---where the
wide tilde over an ${i,j}$ pair will
throughout indicate a complex entry (described by
{\it two} parameters)---
\begin{equation} \label{equationB}
\mathcal{S}^{HS}_{[\widetilde{(2,3)}]}(\mu)=
\Big(\frac{\sqrt{\pi}}{2} \mathcal{S}^{HS}_{[(2,3)]}(\mu)\Big)^2 =
\begin{cases}
\pi \mu^2 & 0\leq \mu \leq 1 \\
\pi & \mu >1
\end{cases}.
\end{equation}
Further, permitting
the 23- and 32-entries to be {\it quaternionic} conjugates of one another
\cite{adler,asorey}, the corresponding
separability function \cite[eq. (24)]{slater833}---where the
wide hat over an ${i,j}$ pair will throughout indicate a quaternionic
entry (described by
{\it four} parameters)---took the form
\begin{equation} \label{equationQuat}
\mathcal{S}^{HS}_{[\widehat{(2,3)}]}(\mu)=
\Big(\frac{1}{\sqrt{2}} \mathcal{S}^{HS}_{[\widetilde{(2,3)}]}(\mu)\Big)^2 =
\frac{1}{2} \Big(\frac{\sqrt{\pi}}{2} \mathcal{S}^{HS}_{[(2,3)]}(\mu)\Big)^4=
\begin{cases}
\frac{\pi^2 \mu^4}{2} & 0\leq \mu \leq 1 \\
\frac{\pi^2}{2} & \mu >1
\end{cases}.
\end{equation}
So, the real (\ref{equationA}), complex (\ref{equationB}),
and quaternionic (\ref{equationQuat})
HS separability functions accord {\it perfectly}
with the Dyson index sequence $\beta= 1, 2, 4 $ of random matrix theory
\cite{dyson}. ``The value of $\beta$ is given by the number of independent
degrees of freedom per matrix element and is determined by the antiunitary
symmetries \ldots It is a concept that originated in Random Matrix Theory
and is important for the Cartan classification of symmetric spaces''
\cite[p. 480]{kogut}. The Dyson index corresponds to the ``multiplicity of
ordinary roots'', in the terminology of symmetric spaces \cite[Table 2]{caselle}.
However, we remain unaware of any specific line of argument using random
matrix theory \cite{random} that can be used to formally confirm
the HS separability function Dyson-index-sequence
phenomena we have noted above and observed in
\cite{slater833}. (The basic difficulty/novelty
appears to be that the separability
aspect of the problem introduces a totally new set of complicated
constraints---{\it quartic} (biquadratic) in $\mu$
\cite[eq. (5)]{slater833} \cite[eq. (7)]{slaterPRA2}---that the multivariate integration must respect
\cite[sec. I.C]{slater833}.)
As a new exercise here, unreported in \cite{slater833},
we found that
setting any single one of the four components of the quaternionic
entry, $x_{23} +{\bf{i}} y_{23} +{\bf{j}} u_{23} +{\bf{k}}
v_{23}$, in the
scenario just described, to zero, yields the
(truncated quaternionic) separability function,
\begin{equation} \label{missing1}
\mathcal{S}^{HS}_{[\hat{(2,3)}]} =
\begin{cases}
\frac{4 \pi \mu ^3}{3} & 0\leq \mu \leq 1 \\
\frac{4 \pi }{3} & \mu >1
\end{cases},
\end{equation}
consistent, at least, in terms
of the exponent of $\mu$, with the Dyson-index
pattern previously observed.
Continuing the analysis in \cite{slater833}, we computed
the integrals
\begin{equation} \label{Vsmall}
V^{HS}_{sep/scenario}= \int_{0}^{\infty} \mathcal{S}^{HS}_{scenario}(\mu)
\mathcal{J}^{HS}_{scenario}(\mu)
d \mu,
\end{equation}
of the products of
these separability functions with the corresponding (univariate)
marginal jacobian functions
(which are obtained by integration over
diagonal parameters only and {\it not} any of
the off-diagonal $x_{ij}$'s and
$y_{ij}$'s) for the reparameterization of $\rho$ using the Bloore variables
\cite[eq. (17)]{slater833}. This
yielded the HS scenario-specific {\it separable} volumes
$V^{HS}_{sep/scenario}$. The ratios of such separable volumes to
the HS total volumes
\begin{equation} \label{Vbig}
V^{HS}_{tot/scenario}= c_{scenario}^{HS}
\int_{0}^{\infty} \mathcal{J}_{scenario}^{HS}(\mu) d \mu,
\end{equation}
where $c^{HS}_{scenario}$ is a scenario-specific constant, gave
us in \cite{slater833} (invariably, it seems, exact) separability {\it probabilities}. (For the three scenarios listed above, these probabilities were,
respectively, $\frac{3 \pi}{16}, \frac{1}{3}$ and $\frac{1}{10}$.)
Based on the numerous scenario-specific
analyses in \cite{slater833}, we are led to believe
that the real, complex and quaternionic separability functions conform to
the Dyson-index pattern for general scenarios, when
the Hilbert-Schmidt measure has been employed. This apparent adherence
was of central importance
in arriving at the assertions in \cite[secs.~IX.A.1 and
IX.A.2]{slater833} that the HS separability probabilities of generic
(9-dimensional] real)
and (15-dimensional)
complex two-qubit states
are $\frac{8}{17}$ and
$\frac{8}{33}$, respectively.
There we had posited---using mutually supporting numerical and theoretical
arguments---that \cite[eq. (102)]{slater833}
\begin{equation}
\mathcal{S}^{HS}_{real}(\mu) \propto \frac{1}{2} (3- \mu^2) \mu,
\end{equation}
and, further pursuing our basic Dyson-index ansatz (fitting our
numerical simulation extremely well \cite[Fig. 4]{slater833}), that
$(\mathcal{S}^{HS}_{real}(\mu))^2 \propto \mathcal{S}^{HS}_{complex}(\mu)$.
(Also, in the first part of the analyses above, we presented numerical
evidence
that $(\mathcal{S}^{HS}_{real}(\mu))^4
\propto \mathcal{S}^{HS}_{quat}(\mu)$, and made this relation
more precise (\ref{bigfish}).)
\subsubsection{Four-parameter density-matrix scenarios}
Now, employing
formulas (13) and (14)
of Dittmann \cite{explicit} for the {\it Bures} metric---which
avoid the possibly problematical need for diagonalization
of $\rho$---we were able to find the
Bures volume elements for the same three
basic (one pair of free off-diagonal entries)
scenarios. We obtained for the real case,
\begin{equation} \label{V23real}
dV^{Bures}_{[(2,3)]} = \frac{\sqrt{\rho _{11}} \sqrt{1-\rho _{11}-\rho _{22}}
\sqrt{\rho _{22}}}{4 \sqrt{1-x_{23}^2} \left(\rho _{22}
\mu ^2+\rho _{11}\right) \sqrt{\mu ^2 \rho
_{22}^2+\left(1-\rho _{11}\right) \rho _{11}}} d \rho_{11} d\rho_{22} d x_{23} d \mu,
\end{equation}
for the complex case,
\begin{equation} \label{V23complex}
dV^{Bures}_{[\widetilde{(2,3)}]}= \frac{\rho _{11} \rho _{22} \left(\rho _{11}+\rho
_{22}-1\right)}{4 \sqrt{1-y_{23}^2-x_{23}^2}
\left(\rho _{22} \mu ^2+\rho _{11}\right) \left(-\rho
_{11}^2+\rho _{11}+\mu ^2 \rho _{22}^2\right)}
d \rho_{11} d\rho_{22} d x_{23} d y_{23} d \mu,
\end{equation}
and for the quaternionic case,
\begin{equation} \label{V23quaternionic}
dV^{Bures}_{[\widehat{(2,3)}]}= \frac{A}{B}
d \rho_{11} d\rho_{22} d x_{23} d y_{23} d u_{23} d v_{23} d \mu,
\end{equation}
where
\begin{displaymath}
A= -\rho _{11}^2 \rho _{22}^2 \left(\rho _{11}+\rho
_{22}-1\right)^2,
\end{displaymath}
and
\begin{displaymath}
B=4 \sqrt{1-u_{23}^2-v_{23}^2-x_{23}^2-y_{23}^2} \left(\rho
_{22} \mu ^2+\rho _{11}\right) \left(-\rho _{11}^2+\rho
_{11}+\mu ^2 \rho _{22}^2\right)^2.
\end{displaymath}
In analyzing the
quaternionic case, we transformed---using standard
procedures \cite[p. 495]{adler} \cite[eq. (17)]{slaterJMP1996}---the
corresponding $4 \times 4$ density matrix into an $8 \times 8$ density matrix with (only) complex entries. To
this, we found it most convenient to apply---since its
eigenvalues and eigenvectors
could be explicitly computed---the basic formula of H\"ubner
\cite{hubner} \cite[p. 2664]{dittmann} for the Bures metric.
Integrating these three volume elements over all the
(four, five or seven)
variables, while enforcing the nonnegative definiteness
requirement for $\rho$, we derived
the Bures
{\it total} (separable {\it and} nonseparable) volumes for the three
scenarios---$V^{Bures}_{tot/[(2,3)]} =\frac{\pi^2}{12}
\approx 0.822467$,
$V^{Bures}_{tot/[\widetilde{(2,3)}]}=\frac{\pi^3}{64} \approx 0.484473$, and
$V^{Bures}_{tot/[\widehat{(2,3)}]}=\frac{\pi^4}{768} \approx 0.126835$.
We note importantly that
the Bures volume elements ((\ref{V23real}), (\ref{V23complex}),
(\ref{V23quaternionic})), in these three cases, can be
{\it factored} into products of functions of
the {\it off-diagonal} Bloore
variables, $u_{23}, v_{23}, x_{23}$ and $y_{23}$,
and functions of the {\it diagonal}
variables, $\rho_{11}, \rho_{22}$ and $\mu$. Now, we will integrate
(one may transform to polar and spherical coordinates, as appropriate)
just the
factors ---$\frac{1}{\sqrt{1-x_{23}^2}}$,
$\frac{1}{\sqrt{1-x_{23}^2 - y_{23}^2}}$
and $\frac{1}{\sqrt{1-u_{23}^2-v_{23}^2-x_{23}^2-y_{23}^2}}$---involving
the off-diagonal variable(s) over
those variables. In doing this, we will further enforce
(using the recently-incorporated
integration-over-implicitly-defined-regions feature of Mathematica)
the Peres-Horodecki positive-partial-transpose-criterion
\cite{asher,michal,bruss},
expressible as
\begin{equation}
\mu^2 -x_{23}^2 \geq 0
\end{equation}
in the real case,
\begin{equation}
\mu^2 -x_{23}^2 -y_{23}^2 \geq 0,
\end{equation}
in the complex case, and
\begin{equation}
\mu^2 -x_{23}^2 -y_{23}^2 - u_{23}^2 -v_{23}^2 \geq 0,
\end{equation}
in the quaternionic case. (None of the individual diagonal $\rho_{ii}$'s
appears explicitly in these constraints, due to an attractive property of the
Bloore [correlation coefficient/off-diagonal scaling]
parameterization. Replacing $\mu^2$ in these three constraints by
simply unity, we obtain the non-negative definiteness constraints on $\rho$ itself, which we also obviously must enforce.)
Performing the indicated three integrations, we obtain
the {\it Bures} separability functions,
\begin{equation} \label{Bures1}
\mathcal{S}^{Bures}_{[(2,3)]}(\mu) =
\begin{cases}
\pi & \mu \geq 1 \\
2 \sin ^{-1}(\mu ) & 0 < \mu <1
\end{cases},
\end{equation}
\begin{equation} \label{Bures2}
\mathcal{S}^{Bures}_{[\widetilde{(2,3)}]}(\mu) =
\begin{cases}
2 \pi & \mu \geq 1 \\
2 \pi \left(1-\sqrt{1-\mu ^2}\right) & 0<\mu <1
\end{cases},
\end{equation}
and
\begin{equation} \label{Bures3}
\mathcal{S}^{Bures}_{[\widehat{(2,3)}]}(\mu) =
\begin{cases}
\frac{4 \pi ^2}{3} & \mu >1 \\
\frac{2}{3} \pi ^2 \left(-\sqrt{1-\mu ^2} \mu ^2-2
\sqrt{1-\mu ^2}+2\right) & 0 <\mu <1
\end{cases}.
\end{equation}
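These functions follow from elementary polar and spherical integrations; for instance, the complex case (\ref{Bures2}) can be reproduced by quadrature, as in the following sketch of ours.
\begin{verbatim}
# Python sketch (ours): recover the complex-case function (Bures2) by polar
# integration of (1 - x^2 - y^2)^(-1/2) over the disk x^2 + y^2 <= mu^2.
from math import pi, sqrt
from scipy.integrate import quad

for mu in (0.3, 0.7, 0.95):
    val, _ = quad(lambda r: 2.0 * pi * r / sqrt(1.0 - r * r), 0.0, mu)
    print(mu, val, 2.0 * pi * (1.0 - sqrt(1.0 - mu * mu)))
\end{verbatim}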
Then, utilizing these three separability functions---that is,
integrating the products of the functions and the
corresponding remaining
{\it diagonal}-variable factors
in the Bures volume elements ((\ref{V23real}), (\ref{V23complex}), (\ref{V23quaternionic}))
over the $\mu, \rho_{11}$ and $\rho_{22}$ variables---we obtain
{\it separable} volumes of $V^{Bures}_{sep/[(2,3)]}= 0.3658435525$ and
\begin{equation}
V^{Bures}_{sep/[\widetilde{(2,3)}]}=
V^{Bures}_{tot/[\widetilde{(2,3)}]} - \frac{1}{32} \pi ^2 (3 - 2 C) =
\frac{1}{64} \pi ^2 (4 C-6 +\pi )
\approx 0.124211
\end{equation}
and consequent
separability {\it probabilities}, respectively,
of 0.4448124200 and (our only {\it exact} Bures separability probability
result in this study (cf. \cite{slaterC})),
\begin{equation} \label{exactprob}
P^{Bures}_{sep/[\widetilde{(2,3)}]} = \frac{4 C-6+\pi }{\pi }
\approx 0.256384,
\end{equation}
where $C \approx 0.915966$ is Catalan's constant (cf. \cite{collins}).
(This constant appears
commonly in estimates of
combinatorial functions and in certain classes of sums and definite
integrals \cite[sec.~1.7]{finch}.
The ratio $\frac{C}{\pi}$--as well as
having an interesting series expansion \cite[p. 54]{finch}--occurs in
exact solutions to the dimer problem of
statistical mechanics \cite[p. 54]{finch} \cite{temperley,gagun}).
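(A quick numerical confirmation, of ours, of (\ref{exactprob}):)
\begin{verbatim}
# Python sketch (ours): evaluate (4 C - 6 + pi)/pi with mpmath's built-in
# Catalan constant.
from mpmath import catalan, pi
print((4 * catalan - 6 + pi) / pi)      # ~0.256384
\end{verbatim}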
Further, for the quaternionic case,
$V^{Bures}_{sep/\widehat{[(2,3)]}} \approx 0.012954754466$, and
$P^{Bures}_{sep/\widehat{[(2,3)]}} \approx 0.10213883862$.
(The corresponding HS separability probability was also of the
same relatively
small magnitude, that is, $\frac{1}{10}$ \cite[sec.~II.A.3]{slater833}.
We have computed the various Bures separable volumes and probabilities
to high numerical accuracy, hoping that such accuracy may be useful
in searches for possible further exact formulas for them.)
So, the normalized---to equal 1 at $\mu=1$---forms of these three
separability
functions are $\frac{\mathcal{S}^{Bures}_{[(2,3)]}(\mu)}{\pi}$,
$\frac{\mathcal{S}^{Bures}_{[\widetilde{(2,3)}]}(\mu)}{2 \pi}$ and
$\frac{3 \mathcal{S}^{Bures}_{[\widehat{(2,3)}]}(\mu)}{4 \pi^2}$.
In Fig.~\ref{fig:functs}, we plot---motivated by the appearance of the
Dyson indices in the analyses of
\cite{slater833}---the {\it fourth} power of the first (real) of these
three normalized functions together with the {\it square} of the
second (complex) function and the (untransformed) third
(quaternionic) function itself.
\begin{figure}
\includegraphics{BuresSepPlot.jpg}
\caption{\label{fig:functs} Joint plot of the
normalized Bures {\it quaternionic}
separability function
$\frac{3 \mathcal{S}^{Bures}_{[\widehat{(2,3)}]}(\mu)}{4 \pi^2}$,
the {\it square} of
the normalized Bures {\it complex} separability function
$\frac{\mathcal{S}^{Bures}_{[\widetilde{(2,3)}]}(\mu)}{2 \pi}$,
and the {\it fourth} power of the normalized Bures {\it real} separability
function $\frac{\mathcal{S}^{Bures}_{[(2,3)]}(\mu)}{\pi}$. The order of
dominance of the three curves is the same as the order in which they have been
mentioned.}
\end{figure}
We find a very close,
\begin{equation}
\Big(\frac{\mathcal{S}^{Bures}_{[(2,3)]}(\mu)}{\pi}\Big)^4 \approx \Big(\frac{\mathcal{S}^{Bures}_{[\widetilde{(2,3)}]}(\mu)}{2 \pi}\Big)^2
\approx \frac{3 \mathcal{S}^{Bures}_{[\widehat{(2,3)}]}(\mu)}{4 \pi^2},
\end{equation}
but now {\it not} exact fit, as we did find
in \cite{slater833} for their (also normalized)
Hilbert-Schmidt
counterparts $\frac{\mathcal{S}^{HS}_{[(2,3)]}(\mu)}{2}$,
$\frac{\mathcal{S}^{HS}_{[\widetilde{(2,3)}]}(\mu)}{\pi}$
and $\frac{2 \mathcal{S}^{HS}_{[\widehat{(2,3)}]}(\mu)}{\pi^2}$ ((\ref{equationA}), (\ref{equationB}), (\ref{equationQuat})).
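The closeness (and inexactness) of this fit can be probed directly; the
following brief script (an illustration of ours) tabulates the three
transformed normalized functions over $(0,1)$:
\begin{verbatim}
import numpy as np

mu = np.linspace(0.05, 0.95, 10)
real4 = (2.0/np.pi*np.arcsin(mu))**4                   # (S_real/pi)^4
cplx2 = (1.0 - np.sqrt(1.0 - mu**2))**2                # (S_cplx/(2 pi))^2
quat = 0.5*(2.0 - np.sqrt(1.0 - mu**2)*(mu**2 + 2.0))  # 3 S_quat/(4 pi^2)
for row in zip(mu, real4, cplx2, quat):
    print("mu=%.2f  %.5f  %.5f  %.5f" % row)
\end{verbatim}
The three columns cluster tightly on the scale of Fig.~\ref{fig:functs},
while their differences are manifestly nonzero.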
As an additional exercise (cf. (\ref{missing1})),
we have computed the Bures separability function
in the (truncated quaternionic)
case that a single one of the four components of
the (2,3)-quaternionic entry is set to zero.
Then, we have (falling into the same tight cluster in Fig.~\ref{fig:functs},
when the $\frac{4}{3}$-power of its
normalized form is plotted)
\begin{equation} \label{missing2}
\mathcal{S}^{Bures}_{[\hat{(2,3)}]}(\mu)=
\begin{cases}
\frac{1}{8} \pi ^2 \left(4-\sqrt{2} \log \left(3+2
\sqrt{2}\right)\right) & \mu >1 \\
\frac{1}{4} \pi \left(\mu \sqrt{1-\mu ^2}-\sin
^{-1}(\mu )\right) \left(\sqrt{2} \log \left(3+2
\sqrt{2}\right)-4\right) & 0 < \mu <1
\end{cases}.
\end{equation}
We have been able, further, using the formulas of Dittmann \cite{explicit},
to compute the Bures volume elements for the
corresponding (five-dimensional) real and
(seven-dimensional) complex scenarios, in which {\it both} the $\{2,3\}$ and $\{1,2\}$ entries are allowed to
freely vary. But these volume elements do not appear, now,
to fully factorize into
products of functions
(as is the case for (\ref{V23real}) and (\ref{V23complex}))
involving just $\rho_{11}, \rho_{22}, \mu$ and just the off-diagonal
variables $x_{ij}$'s and $y_{ij}$'s. The requisite integrations are, then,
more problematical, and we have not been able to obtain an explicit
univariate separability function of $\mu$.
For instance, in this regard, we have for the indicated five-dimensional real
scenario that
\begin{equation}
dV^{Bures}_{[(1,2),(2,3)]}=
\frac{1}{4} \sqrt{\frac{A}{B C (D +E)}} d \rho_{11} d\rho_{22}
d x_{12} d x_{23} d \mu,
\end{equation}
where
\begin{equation}
A=-\rho _{11}^2 \rho _{22}^2 \left(\rho _{11}+\rho
_{22}-1\right) \left(\left(\mu ^2-1\right) \rho
_{22}+1\right),
\end{equation}
\begin{equation}
B= \left(\rho _{22} \mu ^2+\rho _{11}\right)^2, C=x_{12}^2+x_{23}^2-1,
\end{equation}
\begin{equation}
D= \left(\rho _{11}+\rho _{22}\right) \left(x_{12}^2 \rho
_{22} \left(\rho _{22} \mu ^2+\rho
_{11}\right)^2-\left(\left(\mu ^2-1\right) \rho
_{22}+1\right) \left(-\rho _{11}^2+\rho _{11}+\mu ^2
\rho _{22}^2\right)\right)
\end{equation}
and
\begin{equation}
E=-x_{23}^2 \rho _{22} \left(\rho _{11}+\rho _{22}-1\right)
\left(-\rho _{11}^2+\rho _{11}+\mu ^2 \rho
_{22}^2\right).
\end{equation}
So, no desired factorization is apparent.
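One mechanical way to probe for such a factorization (a check of ours, using
computer algebra) is to examine the mixed term $D+E$ symbolically; no
splitting into a diagonal-variable factor times an off-diagonal-variable
factor emerges:
\begin{verbatim}
import sympy as sp

r11, r22, mu, x12, x23 = sp.symbols('rho11 rho22 mu x12 x23',
                                    positive=True)
D = (r11 + r22)*(x12**2*r22*(r22*mu**2 + r11)**2
    - ((mu**2 - 1)*r22 + 1)*(-r11**2 + r11 + mu**2*r22**2))
E = -x23**2*r22*(r11 + r22 - 1)*(-r11**2 + r11 + mu**2*r22**2)
# the factored form still mixes x12, x23 with the diagonal
# variables, so D + E admits no decomposition of the desired type
print(sp.factor(D + E))
\end{verbatim}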
\subsubsection{Five-parameter density-matrix scenarios}
However, the computational situation greatly improves if we let the (1,4)
and (2,3)-entries be the two free ones. (These entries are the specific ones
that are interchanged under
the operation of partial transposition, so there is a greater
evident symmetry in such a scenario.)
Then, we found that the three Bures
volume elements all do factorize into products of functions of
off-diagonal entries and functions of diagonal entries. We have
\begin{equation}
dV^{Bures}_{[(1,4),(2,3)]} =
\frac{1}{8} \sqrt{-\frac{1}{\left(x_{14}^2-1\right)
\left(x_{23}^2-1\right) \left(\rho _{22}+\rho
_{33}-1\right) \left(\rho _{22}+\rho _{33}\right)}} d \rho_{11}
d \rho_{22} d \rho_{33} d x_{14} d x_{23},
\end{equation}
where simply for succinctness, we now show the volume elements before
replacing the $\rho_{33}$ variable by $\mu$.
(We note that the expression for
$dV^{Bures}_{[(1,4),(2,3)]}$ is independent of
$\rho_{11}$.)
For the corresponding complex scenario,
\begin{equation}
dV^{Bures}_{[\widetilde{(1,4)},\widetilde{(2,3)}]} =
\frac{1}{8} \sqrt{\frac{F}{G}} d \rho_{11}
d \rho_{22} d \rho_{33} d r_{14} d r_{23} d \theta_{14} d \theta_{23},
\end{equation}
where
\begin{equation}
F=-r_{14}^2 r_{23}^2 \rho _{11} \rho _{22} \rho _{33}
\left(\rho _{11}+\rho _{22}+\rho _{33}-1\right),
\end{equation}
and
\begin{equation}
G=\left(r_{14}^2-1\right) \left(r_{23}^2-1\right) \left(\rho
_{22}+\rho _{33}-1\right)^2 \left(\rho _{22}+\rho
_{33}\right)^2,
\end{equation}
and we have now further shifted to polar coordinates,
$x_{ij} + {\bf{i}} y_{ij} = r_{ij} (\cos{\theta_{ij}} + {\bf{i}}
\sin{\theta_{ij}})$.
For the quaternionic scenario, we have
(using two sets of hyperspherical coordinates $(r_{14}, \theta_{14}^{(1)}, \theta_{14}^{(2)},\theta_{14}^{(3)})$ and $(r_{23}, \theta_{23}^{(1)}, \theta_{23}^{(2)}, \theta_{23}^{(3)})$),
\begin{equation}
dV^{Bures}_{[\widehat{(1,4)},\widehat{(2,3)}]} = \frac{1}{8} \sqrt{\frac{\tilde{F}}{\tilde{G}}} d \rho_{11}
d \rho_{22} d \rho_{33} d r_{14} d r_{23} d \theta_{14}^{(1)} d \theta_{14}^{(2)} d \theta_{14}^{(3)} d \theta_{23}^{(1)} d \theta_{23}^{(2)} d \theta_{23}^{(3)},
\end{equation}
where
\begin{equation}
\tilde{F}=\sin ^2\left(\theta _{14}^{(1)}\right) \sin \left(\theta _{14}^{(2)}\right)
\sin ^2\left(\theta_{23}^{(1)}\right) \sin \left(\theta_{23}^{(2)}\right)
r_{14}^3 r_{23}^3 \rho _{11}^{3/2} \rho _{22}^{3/2}
\left(-\rho _{11}-\rho _{22}-\rho _{33}+1\right)^{3/2}
\rho _{33}^{3/2}
\end{equation}
and
\begin{equation}
\tilde{G}=\sqrt{1-r_{14}^2} \sqrt{1-r_{23}^2} \left(\rho _{22}+\rho
_{33}-1\right)^2 \left(\rho _{22}+\rho _{33}\right)^2.
\end{equation}
The total Bures volume for the first (real) of these three scenarios is
$V^{Bures}_{tot/[(1,4),(2,3)]} = \frac{\pi^3}{64} \approx 0.484473$, for the second (complex)
scenario, $V^{Bures}_{tot/[\widetilde{(1,4)},\widetilde{(2,3)}]} =
\frac{\pi^4}{192} \approx 0.507339$, and for the third (quaternionic),
$V^{Bures}_{tot/[\widehat{(1,4)},\widehat{(2,3)}]} =
\frac{\pi^6}{245760} \approx 0.0039119$.
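The first of these values can be verified by hand, since the volume element
is independent of $\rho_{11}$: each $x$-integral contributes
$\int_{-1}^{1}(1-x^2)^{-1/2}dx=\pi$, the $\rho_{11}$-integration over
$(0,1-s)$ contributes $1-s$, and the pairs $(\rho_{22},\rho_{33})$ with
fixed sum $s=\rho_{22}+\rho_{33}$ contribute $s \, ds$, so that
\begin{equation*}
V^{Bures}_{tot/[(1,4),(2,3)]} = \frac{\pi^2}{8}\int_{0}^{1}\frac{(1-s)\, s \, ds}{\sqrt{s(1-s)}} = \frac{\pi^2}{8}\int_{0}^{1}\sqrt{s(1-s)} \, ds = \frac{\pi^2}{8}\cdot\frac{\pi}{8}=\frac{\pi^3}{64}.
\end{equation*}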
In the
two corresponding
Hilbert-Schmidt (real and complex) analyses
we have previously reported, we had the results \cite[eq. (28)]{slater833},
\begin{equation} \label{suggestion}
\mathcal{S}^{HS}_{[(1,4),(2,3)]}(\mu) =
\begin{cases}
4 \mu & 0\leq \mu \leq 1 \\
\frac{4}{\mu} & \mu >1
\end{cases}.
\end{equation}
and \cite[eq. (34)]{slater833}
\begin{equation} \label{secondmixed}
\mathcal{S}^{HS}_{[\widetilde{(1,4)},\widetilde{(2,3)}]}(\mu) =
\begin{cases}
\pi ^2 \mu^2 & 0\leq \mu \leq 1 \\
\frac{\pi ^2}{\mu^2 } & \mu >1
\end{cases},
\end{equation}
thus, exhibiting the indicated exact (Dyson-index sequence)
proportionality relation.
We now found, for the two Bures analogs, that
\begin{equation} \label{suggestion2}
\mathcal{S}^{Bures}_{[(1,4),(2,3)]}(\mu) =
\begin{cases}
\pi ^2 & \mu =1 \\
2 \pi \csc ^{-1}(\mu ) & \mu >1 \\
2 \pi \sin ^{-1}(\mu ) & 0<\mu <1
\end{cases},
\end{equation}
\begin{equation} \label{secondmixed2}
\mathcal{S}^{Bures}_{[\widetilde{(1,4)},\widetilde{(2,3)}]}(\mu) =
\begin{cases}
16 \pi^2 & \mu =1 \\
16 \pi^2 \left(1-\frac{\sqrt{\mu ^2-1}}{\mu }\right) & \mu >1
\\
16 \pi^2 \left(1-\sqrt{1-\mu ^2}\right) & 0<\mu <1
\end{cases},
\end{equation}
and, further still, for the quaternionic scenario,
\begin{equation} \label{thirdmixed2}
\mathcal{S}^{Bures}_{[\widehat{(1,4)},\widehat{(2,3)}]}(\mu) =
\begin{cases}
\frac{16 \pi ^4}{9} & \mu =1 \\
-\frac{8 \pi ^4 \left(2 \left(\sqrt{\mu ^2-1}-\mu \right)
\mu ^2+\sqrt{\mu ^2-1}\right)}{9 \mu ^3} & \mu >1 \\
\frac{8}{9} \pi ^4 \left(-\sqrt{1-\mu ^2} \mu ^2-2
\sqrt{1-\mu ^2}+2\right) & 0<\mu <1
\end{cases}.
\end{equation}
Employing these several results, we obtained that
$V^{Bures}_{sep/[(1,4),(2,3)]} \approx 0.1473885131$,
$V^{Bures}_{sep/[\widetilde{(1,4)},\widetilde{(2,3)}]}
\approx 0.096915844$, and
$V^{Bures}_{sep/[\widehat{(1,4)},\widehat{(2,3)}]}
\approx 0.000471134100$
giving us real,
complex and quaternionic separability probabilities of 0.3042243652,
0.19102778 and 0.120436049.
We see that for values of $\mu \in [0,1]$, the {\it
normalized} forms of these
three Bures separability functions are {\it identical} to the three
obtained above ((\ref{Bures1}), (\ref{Bures2}), (\ref{Bures3}))
for the corresponding {\it single}-nonzero-entry scenarios.
While those earlier functions were all constant for $\mu>1$, we now have
symmetrical behavior about $\mu=1$ in the form, $\mathcal{S}^{Bures}_{scenario}(\mu) =
\mathcal{S}^{Bures}_{scenario}(\frac{1}{\mu})$.
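For instance, in the complex case the symmetry is immediate, since for
$\mu>1$,
\begin{equation*}
\mathcal{S}^{Bures}_{[\widetilde{(1,4)},\widetilde{(2,3)}]}\Big(\frac{1}{\mu}\Big)=16 \pi^2\Big(1-\sqrt{1-\mu^{-2}}\Big)=16 \pi^2\Big(1-\frac{\sqrt{\mu^{2}-1}}{\mu}\Big)=\mathcal{S}^{Bures}_{[\widetilde{(1,4)},\widetilde{(2,3)}]}(\mu).
\end{equation*}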
In Fig.~\ref{fig:functs2}, we show the analogous plot to Fig.~\ref{fig:functs},
using the normalized (to equal 1 at $\mu=1$)
forms of the three additional Bures separability functions
((\ref{suggestion2}), (\ref{secondmixed2}), (\ref{thirdmixed2})).
We again, of course, observe a very close fit to the type of proportionality relations
{\it exactly} observed in the Hilbert-Schmidt case
((\ref{suggestion}), (\ref{secondmixed})).
\begin{figure}
\includegraphics{BuresSepPlot2.jpg}
\caption{\label{fig:functs2} Joint plot of the
normalized Bures {\it quaternionic}
separability function
$\frac{9 \mathcal{S}^{Bures}_{[\widehat{(1,4)},\widehat{(2,3)}]}(\mu)}{4}$,
the {\it square} of
the normalized Bures {\it complex} separability function
$\frac{\mathcal{S}^{Bures}_{[\widetilde{(1,4)},\widetilde{(2,3)}]}(\mu)}{16 \pi^2}$,
and the {\it fourth} power of the normalized Bures {\it real} separability
function $\frac{\mathcal{S}^{Bures}_{[(1,4),(2,3)]}(\mu)}{\pi^2}$.
Over the interval
$\mu \in [0,1]$, the three functions are identical---with the same
order of dominance---to those in
Fig.~\ref{fig:functs}.}
\end{figure}
We were, further, able to compute the Bures volume element for the
{\it three}-nonzero-entries
complex scenario $[\widetilde{(1,2)},\widetilde{(1,4)},
\widetilde{(2,3)}]$, but it was considerably more complicated in form than
those reported above, so no
additional analytical progress seemed possible.
\subsubsection{Additional analyses}
Regarding the possible computation of Bures separability functions for the
{\it totality} of
9-dimensional real and 15-dimensional complex two-qubit states, we have found,
preliminarily,
that the corresponding metric tensors (using the Bloore parameterization
[sec.~ \ref{secBloore}])
decompose into $3 \times 3$ and
$6 \times 6$, and $3 \times 3$ and $12 \times 12$ blocks, respectively.
The $3 \times 3$ blocks themselves
are identical in the two cases, and of precisely
the (simple diagonal) form (if we employ hyperspherical coordinates) that
Akhtarshenas found for the Bures metric using the coset
parameterization \cite[eq. (23)]{iran2}. These $3 \times 3$
blocks, thus, depend only upon the
diagonal entries (while in \cite{iran2}, the dependence, quite
differently, was upon the
eigenvalues). It appears,
though, that the determinants---for which we presently lack
succinct formulas---of the complementary
$6 \times 6$ and $12 \times 12$ blocks do depend upon all, diagonal and
non-diagonal, parameters, rendering further analytical progress, along
the lines pursued with substantial success
for the Hilbert-Schmidt metric, for these
scenarios rather problematical.
\subsubsection{Discussion}
The close proximity observed above between
certain two-qubit separability results for the
Hilbert-Schmidt and Bures metrics is perhaps somewhat similar in nature/explanation to a form of high similarity also observed in our previous analysis
\cite{slaterPRA}. There, large scale numerical (quasi-Monte Carlo)
analyses strongly suggested that the ratio of Hilbert-Schmidt separability
probabilities of generic (rank-6)
qubit-qutrit states ($6 \times 6$ density matrices)
to the separability probabilities
of generically minimally degenerate (boundary/rank-5)
qubit-qutrit states was equal to 2. (This has since been formally
confirmed and generalized---in terms of
positive-partial-transpose-ratios---to arbitrary bipartite systems
by Szarek, Bengtsson and
{\.Z}yczkowski in \cite{sbz}.
They found that the set of positive-partial-transpose states
is ``pyramid decomposable'' and, hence, is a body of constant height.)
Parallel numerical ratio estimates
also obtained in \cite{slaterPRA} based on
the Bures (and a number of other monotone) metrics were also surprisingly close to 2 (1.94334 in the Bures case \cite[Table IX]{slaterPRA}).
However, no exact value for the Bures qubit-qutrit ratio has yet been
established, and our separability function
results above, might be taken to suggest that
the actual Bures ratio is not exactly equal to 2, but only quite close to it.
(Possibly, in these regards,
the Bures metric might profitably be considered as some
perturbation of the flat Euclidean metric (cf. \cite{gross}).)
Further study of the forms the Bures separability functions take
for qubit-qubit and qubit-qutrit scenarios is, of course, possible,
with the hope that one can gain
as much insight into the nature of Bures separability probabilities as
has been obtained by examining the analogous
Hilbert-Schmidt separability functions \cite{slater833}.
(In \cite{slaterJGP}, we had formulated, based on extensive numerical
evidence, conjectures---involving the silver mean, $\sqrt{2}-1$---for the
Bures [and other monotone metric]
separability probabilities of the 15-dimensional convex set of
[complex] qubit-qubit states, which we would further aspire to test.
One may also consider the use of monotone metrics other than the
{\it minimal} Bures one
\cite{andai}---such as the Kubo-Mori and Wigner-Yanase.)
The analytical challenges to further progress,
however, in light of the apparent non-factorizability of volume elements
into diagonal and off-diagonal terms,
appear quite formidable.
\section{Euler-angle-parameterization separability functions} \label{secEuler}
In the previous section (sec.~\ref{secBures}),
we found that the Bloore parameterization (sec.~\ref{secBloore}),
markedly
useful in determining separable volumes based on the (non-monotone)
Hilbert-Schmidt metric, is less immediately fruitful when the Bures
(and presumably other monotone) metrics are employed.
In light of this development,
it appeared to be of interest to see if some
other parameterizations might prove
amenable to such type of ``separability function'' analyses.
In particular, we will examine here the use for such purposes--as
earlier suggested
in \cite{slaterJPAreject}--of the $SU(4)$
Euler-angle parameterization of the 15-dimensional
complex set of $4 \times 4$ density
matrices developed by Tilma, Byrd and Sudarshan \cite{tbs}.
(We will closely follow the notation and terminology of \cite{tbs}. In
\cite{slaterJPAreject}, we simply attempted to fit symmetric
polynomials \cite{ig} to yield previously-conjectured separable
volumes, and did not initiate any
independent quasi-Monte Carlo estimation procedures, as we will
immediately below.)
\subsection{Complex two-qubit case}
The fifteen parameters, then,
employed will be twelve Euler angles ($\alpha_{i}$'s)
and three independent eigenvalues ($\lambda_{1}, \lambda_2, \lambda_3$).
The {\it total} (separable {\it and} nonseparable) volume is simply
(for all metrics of interest)
the {\it product} of integrals over these two sets of parameters
\cite{szHS,szBures}.
Now, to study the separable-volume question,
in complete analogy to our methodology
above, we would like to integrate over the twelve Euler angles
(rather than the off-diagonal Bloore parameters, as before), while
enforcing the positivity of the partial transpose, required for
separability. We were, fortunately, able to perform such enforcement
in the Bloore-parameterization case employing
only a {\it single} diagonally-related
parameter ($\mu$), given in (\ref{firstratio}). Such a
reduction in the number of relevant parameters,
however, does not seem
possible in the Euler-angle
parameterization case. So, the analogue of the separability
function we will obtain here will be a {\it trivariate} function (of the
three eigenvalues). Hopefully, we will be able to determine an exact
functional form for it, and then utilize it in further integrations
to obtain separable volumes, in terms of both
monotone and non-monotone metrics. (Also, the question of whether counterpart
Euler-angle separability
functions in the real and quaternionic cases adhere to
some form of Dyson-index-sequence
behavior certainly merits attention.)
\subsection{{\it Trivariate}
separability function for {\it volume} computation}
Now, we made use of a sequence of 1,900,000
12-dimensional Tezuka-Faure points (twelve being the number of
Euler angles over which we will integrate).
For each such point, we let the associated three (free) eigenvalues
each take on
all possible values from $\frac{1}{40}$ to 1, in steps of $\frac{1}{40}$. Of
course, the possible triads of free eigenvalues are constrained by the
requirement that they not sum to more than 1. There were 9,880 such possible triads.
For each such triple--holding the Euler angles constant--we
evaluated whether the associated
$4 \times 4$ density matrix was separable or not.
We, then, interpolated the results to obtain functions defined over the
three-dimensional hypercube $[0,1]^3$.
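In outline, the estimation loop is quite simple. The following sketch (ours,
and only schematic) conveys the logic, with a Haar-random unitary standing
in, purely for illustration, for the unitary built from each 12-dimensional
Tezuka-Faure point via the Euler-angle parameterization of \cite{tbs}, and
with the Peres-Horodecki positive-partial-transpose test (necessary and
sufficient for two qubits) deciding separability:
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

def is_separable(rho):
    # Peres-Horodecki: a two-qubit state is separable iff its
    # partial transpose is positive semidefinite
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt)[0] >= 0

grid = np.arange(1, 41) / 40.0
triads = [(a, b, c) for a in grid for b in grid for c in grid
          if a + b + c <= 1.0 + 1e-12]   # the 9,880 admissible triads
counts = np.zeros(len(triads))
n_pts = 1000  # 1,900,000 Tezuka-Faure points in the actual computation
for _ in range(n_pts):
    U = unitary_group.rvs(4)  # stand-in for the Euler-angle unitary
    for k, (a, b, c) in enumerate(triads):
        lam = np.array([a, b, c, 1.0 - a - b - c])
        rho = (U * lam) @ U.conj().T      # U diag(lam) U^dagger
        counts[k] += is_separable(rho)
sep_estimate = counts / n_pts  # then interpolated over [0,1]^3
\end{verbatim}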
In Fig.~\ref{fig:TBS2}, we show a two-dimensional marginal section
(over $\lambda_1, \lambda_2$, say) of this function
(obtained by summing over the values
of $\lambda_3$) of the
estimated three-dimensional separability function.
We know from the work of Pittenger and Rubin \cite[Cor. 4.2]{pittenger}, for
example, that,
for the specific case of two-qubits, any density matrix all of the
four eigenvalues
of which are greater than $\frac{7}{30} \approx 0.2333$ {\it must}
be separable.
Therefore, there certainly does exist
a neighborhood of the fully-mixed state (having its
four eigenvalues equal to
$\frac{1}{4}$) that is
composed of only
separable states.
This is reflected in the plateau present in Fig.~\ref{fig:TBS2}.
\begin{figure}
\includegraphics{TBS2.jpg}
\caption{\label{fig:TBS2}Two-dimensional marginal
section of the estimated
three-dimensional separability function based on the Euler-angle
parameterization for the 15-dimensional convex set of complex
$4 \times 4$ density matrices. Note the mesa/plateau shape, indicative of
the fully separable neighborhood of the fully-mixed state}
\end{figure}
In Fig.~\ref{fig:TBS1} we, additionally, display a one-dimensional
marginal section (over $\lambda_1$), obtained by summing
over both
$\lambda_2$ and $\lambda_3$, of the
estimated three-dimensional separability function. (The curve now appears
unimodal rather than flat at its maximum.)
\begin{figure}
\includegraphics{TBS1.jpg}
\caption{\label{fig:TBS1}One-dimensional marginal section of the estimated
three-dimensional separability function based on the Euler-angle
parameterization for the 15-dimensional convex set of complex
$4 \times 4$ density matrices}
\end{figure}
Employing the trivariate separability function obtained by interpolation
from the data we have generated here, we were able to obtain an estimate of
0.242021 for the HS separability probability. (From our extensive
Bloore-parameterization analyses, as previously noted
we believe its true value is
$\frac{8}{33} \approx
0.242424$.)
We were essentially just
as readily able to obtain an estimate of the Bures
(minimal monotone) separability probability (or any of the other monotone
metrics--Kubo-Mori, Wigner-Yanase,\ldots \cite{petzsudar},
it appears) of 0.0734223, while in
\cite{slaterJGP}, this had been conjectured to equal
$\frac{1680 \left(-1+\sqrt{2}\right)}{\pi ^8} \approx 0.0733389$.
(In the HS and Bures computations reported here and in the next section,
we perform numerical integrations over the simplices of eigenvalues,
in which the integrands are the products of our interpolated separability
functions and the appropriate
scenario-specific volume elements indicated in the
twin Sommers-\.Zyczkowski 2003 papers \cite{szHS,szBures}.)
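A Monte Carlo variant of that final step, in the complex Hilbert-Schmidt
case, where the eigenvalue density is proportional to the squared
Vandermonde determinant \cite{szHS}, might read as follows (the callable
sep_interp, denoting the interpolated trivariate separability function, is a
hypothetical name):
\begin{verbatim}
import numpy as np

def hs_weight(lam):
    # complex HS eigenvalue density: proportional to the squared
    # Vandermonde determinant of the four eigenvalues
    return np.prod([(lam[i] - lam[j])**2
                    for i in range(4) for j in range(i + 1, 4)])

def hs_sep_probability(sep_interp, n=200000, seed=1):
    rng = np.random.default_rng(seed)
    num = den = 0.0
    for _ in range(n):
        lam = rng.dirichlet(np.ones(4))  # uniform point on the simplex
        w = hs_weight(lam)
        num += w * sep_interp(lam[:3])
        den += w
    return num / den  # separable fraction of the total HS volume
\end{verbatim}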
Of course, now the research agenda should turn to the issue of finding a
possibly exact formula (undoubtedly {\it symmetric} in the three eigenvalues
\cite[secs. V and VI]{slaterJPAreject})
for this three-dimensional Euler-angle-based
separability function, and for other qubit-qubit and
qubit-qutrit scenarios.
In Fig.~\ref{fig:SepFunctMassTri} we show, based on the 9,880 points
sampled for each 12-dimensional TF-point,
the estimated value of the separability function for that point {\it
paired} with
the Euclidean distance of the vector of eigenvalues
($\lambda_1,\lambda_2,\lambda_3,1-\lambda_1-\lambda_2-\lambda_3$)
for that point from
the vector of eigenvalues
($\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$,
corresponding to the fully mixed state.
\begin{figure}
\includegraphics{SepFunctMassTri.jpg}
\caption{\label{fig:SepFunctMassTri}The trivariate Euler-angle
complex qubit-qubit
separability function paired with the
{\it Euclidean distance}
of the corresponding vector of eigenvalues
to the vector ($\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$,
corresponding to the fully mixed state}
\end{figure}
\subsection{{\it Bivariate} separability function for {\it area} computation}
\label{secBivariate}
Now we repeat the procedures described immediately before, except for
the {\it a priori} setting of {\it one}
of the three free eigenvalues to zero, so the
associated density matrices must lie on the 14-dimensional boundary
of the 15-dimensional convex set of two-qubit complex states.
(The analysis was conducted independently of that pertaining to the volume,
and now we were able to employ a much larger number--23,500,000--of TF-points.)
The resulting separability function is now bivariate, lending itself
immediately to graphic display.
In Fig.~\ref{fig:TBSarea2} we show this
function, and in Fig.~\ref{fig:TBSarea1}, its one-dimensional section
over $\lambda_1$.
\begin{figure}
\includegraphics{TBSarea2.jpg}
\caption{\label{fig:TBSarea2}Bivariate Euler-angle
separability function for the 14-dimensional boundary hyperarea
of the 15-dimensional convex set of complex
$4 \times 4$ density matrices}
\end{figure}
\begin{figure}
\includegraphics{TBSarea1.jpg}
\caption{\label{fig:TBSarea1}One-dimensional marginal section
of the estimated
two-dimensional Euler-angle-based separability function for the
14-dimensional hyperarea of the 15-dimensional convex set of complex
$4 \times 4$ density matrices}
\end{figure}
Employing the bivariate separability function (Fig.~\ref{fig:TBSarea2})
obtained by interpolation
from the data (23,500,000 TF-points)
we generated, we were able to obtain an estimate of
0.12119 for the HS separability probability, which, from our
complementary Bloore analyses, together with the (``one-half'')
Theorem 2 of
\cite{sbz}, we believe to be exactly $\frac{4}{33} \approx
0.121212$. (The proximity of our estimate to this value clearly serves
to further fortify our conjecture that the HS separability probability of
generic complex two-qubit states is $\frac{8}{33}$.)
Additionally, our estimate of the associated {\it Bures}
separability probability was 0.0396214, {\it approximately} one-half that of
the corresponding probability for the non-degenerate
complex two-qubit states
\cite{slaterPRA}.
In Fig.~\ref{fig:SepFunctMassBi} we show, based on 780 points sampled
for each 12-dimensional TF-point,
the estimated value of the bivariate separability function for that
point paired with the Euclidean distance of the vector of eigenvalues
for that
point from the vector of eigenvalues
($\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$
corresponding to the most fully mixed boundary state.
\begin{figure}
\includegraphics{SepFunctMassBi.jpg}
\caption{\label{fig:SepFunctMassBi}Paired values of the estimated
bivariate Euler-angle separability function (Fig.~\ref{fig:TBSarea2})
and the
{\it Euclidean distance} of the associated
vector of eigenvalues to the vector ($\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$
corresponding to the most fully mixed boundary state}
\end{figure}
\subsection{Participation ratios}
The {\it participation ratio} of a state $\rho$ is defined
as \cite[eq. (17)]{ZHSL} \cite[eq. (15.61)]{ingemarkarol}
\cite{batlecasas1,batleplastino1}
\begin{equation}
R(\rho)= \frac{1}{\mbox{tr} \rho^2} .
\end{equation}
For $R(\rho)>3$, a two-qubit state {\it must} be separable.
For convenience, we will also employ the variable
\begin{equation}
S(\rho)=\frac{3}{2} (1-\frac{1}{R(\rho)}),
\end{equation}
which varies over the interval [0,1] for states {\it outside} the separable
ball.
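In code (a two-line illustration of ours), with rho a $4 \times 4$ density
matrix stored as a numpy array:
\begin{verbatim}
import numpy as np

def participation_ratio(rho):
    # R(rho) = 1/tr(rho^2); R > 3 guarantees two-qubit separability
    return 1.0 / np.real(np.trace(rho @ rho))

def S_of_rho(rho):
    # S = (3/2)(1 - 1/R) maps R in [1, 3] onto [0, 1]
    return 1.5 * (1.0 - 1.0 / participation_ratio(rho))
\end{verbatim}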
In Fig.~\ref{fig:participationRatio} we show a plot--having set {\it one}
of the four eigenvalues to zero--of
the sixth-power
of this ratio. We note a close similarity in its shape to the
estimated bivariate
separability function displayed in
Fig.~\ref{fig:TBSarea2}.
\begin{figure}
\includegraphics{participationRatio.jpg}
\caption{\label{fig:participationRatio}The sixth-power,
$R(\rho)^{6}$, of the participation ratio
for minimally degenerate two-qubit states
with one eigenvalue equal to zero. Note the similarity to Fig.~\ref{fig:TBSarea2}}
\end{figure}
Although we cannot similarly visually display
the trivariate separability function, we have found
that it is closely fit--outside the separable ball ($R(\rho)>3$)
\cite[Fig. 15.7]{ingemarkarol}--by the fourth-power
of the participation ratio.
In Fig.~\ref{fig:partTri}, we show the trivariate separability function
{\it vs.} $R(\rho)^{4}$. (For $R(\rho)^{4}>
81$,
only separable states are encountered).
\begin{figure}
\includegraphics{partTri.jpg}
\caption{\label{fig:partTri}The (Euler-angle)
trivariate separability function
for generic (15-dimensional) complex two-qubit states plotted
against the fourth-power
of the participation ratio for the 9,880 points
sampled}
\end{figure}
In Fig.~\ref{fig:partBi}, we show the comparable
bivariate separability function
{\it vs.} $R(\rho)^{6}$.
\begin{figure}
\includegraphics{partBi.jpg}
\caption{\label{fig:partBi}The (Euler-angle)
bivariate separability function
for generic (14-dimensional)
minimally degenerate complex two-qubit states plotted
against the sixth-power
of the participation ratio for the 780 points
sampled}
\end{figure}
As an exercise (not directly tied to our quasi-Monte Carlo computations),
we assumed that the trivariate Euler-angle separability functions are all
proportional (outside the separable ball, $R(\rho) >3$)
to some powers of $R(\rho)$, for each of the generic real,
complex, truncated quaternionic and quaternionic two-qubit states.
Then, we found those powers which fit our conjectured values
(discussed in sec.~\ref{Bp})
for the associated Hilbert-Schmidt two-qubit separability probabilities.
The powers we found were 1.36743 ($\beta=1$), 2.36904 ($\beta=2$)
and 4.0632 ($\beta=4$). We observe here a
rough approximation to Dyson-index behavior, but can speculate that
when and if the true forms of the Euler-angle separability functions are found, such behavior will be strictly adhered to. (In fact, one research strategy
might be to seek functions that fully conform to this principle,
while fitting the conjectured HS separability probabilities.
Also, below we will find closer adherence when we switch from the use
of the participation ratio to a simple linear transform of the
Verstraete-Audenaert-De Moor-function \cite{ver} (cf. \cite{roland}), which
provides an improved bound on separability.)
When we similarly
sought to fit our prediction of $\frac{4}{33}$ (that is, one-half of
$\frac{8}{33}$ by the results of \cite{sbz}) for the HS
separability probability of generic minimally degenerate complex two-qubit
states to a bivariate function proportional to a power of
the participation ratio, we obtained a power of 6.11646, according rather
well with Fig.~\ref{fig:partBi}.
In terms of the Hilbert-Schmidt metric, the lower bound on the
complex two-qubit separability
probability provided by the separable ball ($R(\rho)>3$)
is rather small, that is
$\frac{35 \pi}{23328 \sqrt{3}} \approx 0.00272132$, while relying upon
the improved inequality reported in \cite{ver},
\begin{equation} \label{VADbound}
VAD(\rho)=
\lambda_{1}-\lambda_3 -2 \sqrt{\lambda_2 \lambda_4} <0,\hspace{.3in}
(\lambda_1>\lambda_2 >\lambda_3 >\lambda_4),
\end{equation}
it is 0.00365406. These figures are both considerably smaller
than the comparable ones (0.3023 and 0.3270)
given in \cite{ver} using (apparently)
the measure ({\it uniform} on the simplex of eigenvalues) first employed in
\cite{ZHSL}.
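The bound itself is easily evaluated numerically; a minimal implementation
(ours) of the eigenvalue functional in (\ref{VADbound}) is
\begin{verbatim}
import numpy as np

def vad(rho):
    # ordered eigenvalues l1 >= l2 >= l3 >= l4
    lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
    # VAD(rho) < 0 suffices for separability of a two-qubit state
    return lam[0] - lam[2] - 2.0 * np.sqrt(lam[1] * lam[3])
\end{verbatim}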
\subsection{Verstraete-Audenaert-De Moor function}
If we switch from the participation ratio
$R(\rho)$ to a simple linear transformation of the
Verstraete-Audenaert-De Moor function, that is,
$1-VAD(\rho)$ (which varies over [0,1] for states outside the separable
VAD set), in seeking to fit the trivariate
separability function to our conjectured Hilbert-Schmidt two-qubit
separability
probabilities (sec.~\ref{Bp}), we find that
in the generic complex ($\beta=2$) case,
$\Big(1-VAD(\rho)\Big)^{3.15448}$ gives the best fit (for $VAD(\rho)>0$).
Then, closely consistent with Dyson-index behavior, we obtained
$\Big(1-VAD(\rho)\Big)^{1.53785}$ as the best fit in the generic real ($\beta=1$)
scenario. (The VAD-bound (\ref{VADbound})
provides us with no useful information if we set $\lambda_4=0$, so no
relevance to the minimally degenerate two-qubit scenario is apparent.)
\subsection{Beta function fits to Euler-angle separability functions}
We can fit within $0.4\%$
our conjectured values of $\frac{4}{33}$ and $\frac{4}{17}$ for
the Hilbert-Schmidt separability probabilities of the complex and real
minimally degenerate two-qubit states, respectively, by assuming--in line with the Dyson-index ansatz-- that
the Euler-angle separability function in the {\it real} case is a
{\it regularized beta function} (incomplete beta function ratio)
\cite[p. 11]{handbook} of the form
$I_{S(\rho)^2}(58,22)$ (Fig.~\ref{fig:BetaFit1}),
\begin{figure}
\includegraphics{BetaFit1.jpg}
\caption{\label{fig:BetaFit1}Incomplete beta function estimate of the
{\it bivariate} Euler-angle separability function that closely
reproduces the conjectured Hilbert-Schmidt
{\it minimally degenerate}
two-qubit real, complex and quaternionic
separability probabilities}
\end{figure}
{\it and} the Euler-angle separability function
(Fig.~\ref{fig:TBSarea2})
in the {\it complex} case, the {\it square} of that function.
(Continuing along such lines, if we employ the fourth-power of the function,
our estimate of the associated quaternionic separability probability is some
$91.45\%$ of the conjectured value of $\frac{36221472}{936239725}
\approx 0.0386882$.)
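The fitted family is available in standard libraries; for instance (our
illustration, via scipy's regularized incomplete beta function):
\begin{verbatim}
import numpy as np
from scipy.special import betainc

def sep_fun_real(s):
    # I_{s^2}(58, 22), with s = S(rho) from the participation ratio
    return betainc(58.0, 22.0, s**2)

s = np.linspace(0.0, 1.0, 6)
print(sep_fun_real(s))     # real (beta = 1) fit
print(sep_fun_real(s)**2)  # complex (beta = 2): its square
print(sep_fun_real(s)**4)  # quaternionic (beta = 4): its fourth power
\end{verbatim}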
Similarly, for the generic {\it nondegenerate} complex and real
two-qubit states, we can achieve fits within $0.7\%$ to {\it both}
the conjectured
HS separability probabilities of $\frac{8}{33}$ and $\frac{8}{17}$,
respectively, by taking in the real case the Euler-angle separability
function to be $I_{\big(1-VAD(\rho)\big)^2}(24,28)$
(Fig.~\ref{fig:BetaFit2}) and its square in the
complex case. (Use of its fourth power to estimate the
HS {\it quaternionic} two-qubit separability probability yielded a result
0.795969 as large as the value,
$\frac{72442944}{936239725}
\approx 0.0773765$, conjectured above (\ref{HSquat}).)
\begin{figure}
\includegraphics{BetaFit2.jpg}
\caption{\label{fig:BetaFit2}Incomplete beta function estimate of the
{\it trivariate} Euler-angle separability function that closely
reproduces the conjectured Hilbert-Schmidt
{\it nondegenerate}
two-qubit real, complex and quaternionic separability probabilities}
\end{figure}
\section{Summary}
We have extended the findings and analyses of our two recent studies
\cite{slaterPRA2} and \cite{slater833} by, first,
obtaining numerical estimates of the separability function based on the
(Euclidean, flat) Hilbert-Schmidt (HS)
metric for the 27-dimensional convex set of
quaternionic two-qubit systems (sec.~\ref{secQuat}).
The estimated function closely conformed to
our previously-formulated Dyson-index ($\beta = 1, 2, 4$)
ansatz, dictating that
the quaternionic ($\beta=4$) separability function should be
exactly proportional to the square of the separability function
for the
15-dimensional convex set of
two-qubit complex ($\beta=2$)
systems, as well as the fourth power of the
separability function for the 9-dimensional convex set of
two-qubit real ($\beta=1$) systems.
In particular, these additional analyses led us specifically to aver that
\begin{equation}
\mathcal{S}^{HS}_{quat}(\mu) = \Big(\frac{6}{71}\Big)^2 \Big((3-\mu^2) \mu\Big)^4
= \Big(\mathcal{S}^{HS}_{complex}(\mu)\Big)^2, \qquad 0 \leq \mu \leq 1.
\end{equation} Here,
$\mu =\sqrt{\frac{\rho_{11} \rho_{44}}{\rho_{22}
\rho_{33}}}$,
where $\rho$ denotes a $4 \times 4$ two-qubit density matrix.
We have, thus, been able to supplement ({\it and} fortify)
our previous assertion that the HS separability
probability of the two-qubit complex
states is $\frac{8}{33} \approx 0.242424$, claiming that its quaternionic
counterpart is $\frac{72442944}{936239725} \approx 0.0773765$.
We have also commented on and analyzed the odd $\beta=1$ and $\beta=3$ cases
(sec.~\ref{secTrunc}), which still remain somewhat problematical.
Further, we found (sec.~\ref{QubQut})
strong evidence of adherence to the Dyson-index
ansatz for the 25-dimensional real and 35-dimensional complex
qubit-{\it qutrit} systems
with real HS separability function being proportional to a function of
the form,
$1-(1-\nu_1 \nu_2)^{{\frac{5}{2}}}$, with $\nu_1,\nu_2$ defined in
(\ref{tworatios}).
Subject to the validity of this separability function,
we have obtained the corresponding $R_{2}$ constants
($\beta=1,\ldots,4)$ and estimated the complementary $R_{1}$ constants
(the products $R_{1} R_{2}$ giving throughout, in our
fundamental paradigm,
the desired separability probabilities).
Then (sec.~\ref{secBures}),
we determined that in terms of the
Bures (minimal monotone) metric--for certain, basic simple scenarios
(in which the diagonal entries of $\rho$ are
unrestricted, and one or two off-diagonal
[real, complex or quaternionic] pairs of entries are nonzero)--that the
Dyson-index power relations no longer strictly hold, but
come remarkably close to doing so.
Finally (sec.~\ref{secEuler}), we examined the possibility of defining
``separability functions'' using the Euler-angle parameterization
of Tilma, Byrd and Sudarshan \cite{tbs},
rather than the Bloore (correlation/off-diagonal scaling)
framework \cite{bloore}. Although now we are, {\it prima
facie}, faced with a trivariate (in the complex two-qubit case)
function, rather than a univariate one,
we do not encounter the problem of having to
determine an overall normalization
factor for the separability function, since it is known that any density
matrix with all eigenvalues equal to one another must be separable.
It also appears that this simplifying feature further extends to the case
where minimally degenerate (boundary) complex qubit-qubit
states are considered (sec.~\ref{secBivariate}), with any
such state having its three non-zero eigenvalues all equal to $\frac{1}{3}$
(lying on the boundary of the separable ball $R(\rho)=3$)
being necessarily separable.
Use of the estimated
Euler-angle trivariate separability function lent still further
(numerical) support
to the $\frac{8}{33}$ conjecture \cite{slater833}
for the HS separability probability (and the associated $\frac{4}{33}$
conjecture for the hyperarea of the minimally degenerate two-qubit states)
and the $\frac{1680 (\sqrt{2}-1)}{\pi^8}$ ``silver mean'' conjecture
\cite{slaterJGP} for
the Bures separability probability
of generic complex two-qubit states.
\begin{acknowledgments}
I would like to express appreciation to the Kavli Institute for Theoretical
Physics (KITP)
for computational support in this research. Also, K.~{\.Z}yczkowski alerted me
to the relevance of \cite{ver}, and M. Trott assisted with certain
computations.
\end{acknowledgments}
|
1,116,691,500,361 | arxiv | \section{Coarse structures}
Following \cite{b10}, we say that a family $\mathcal{E}$ of subsets of $X\times X$ is a {\it coarse structure} on a set $X$ if
\begin{itemize}
\item{} each $\varepsilon \in \mathcal{E}$ contains the diagonal $\vartriangle _{X}$, $\vartriangle _{X}= \{(x,x) : x \in X\}$ ; \vskip 5pt
\item{} if $\varepsilon, \delta\in\mathcal{E}$ then $\varepsilon \circ\delta\in\mathcal{E}$ and $\varepsilon^{-1}\in\mathcal{E}$ where $\varepsilon \circ\delta = \{(x, y): \exists z ((x,z)\in\varepsilon, (z,y)\in\delta)\}, $ $ \ \varepsilon^{-1}= \{(y,x): (x,y)\in\varepsilon\}$;
\item{} if $\varepsilon\in\mathcal{E}$ and $\bigtriangleup_{X}\subseteq \varepsilon^{\prime}\subseteq\varepsilon$ then $\varepsilon^{\prime}\in\mathcal{E}$.
\end{itemize}
\vskip 5pt
Each $\varepsilon\in\mathcal{E}$ is called an {\it entourage} of the diagonal.
A subset $\mathcal{E}^{\prime}\subseteq\mathcal{E}$ is called a {\it base} for $\mathcal{E}$ if, for
every $\varepsilon\in\mathcal{E}$ there exists $\varepsilon^{\prime}\in\mathcal{E}^{\prime}$ such that $\varepsilon\subseteq\varepsilon^{\prime}$.
The pair $(X, \mathcal{E})$ is called a {\it coarse space}. For
$x\in X$ and $\varepsilon\in\mathcal{E}$, we denote
$B(x, \varepsilon)= \{y\in X: (x,y)\in\varepsilon \}$
and say that
$B(x,\varepsilon)$ is a {\it ball of radius $\varepsilon$ around $x$.}
We note that a coarse space can be considered as
an asymptotic counterpart of a uniform topological space and could
be defined in terms of balls, see
\cite{b7}, \cite{b9}.
In this case a coarse space is called a {\it ballean}.
A coarse space $(X,\mathcal{E})$ is called {\it connected} if, for any $x, y \in X$,
there exists $\varepsilon\in\mathcal{E}$ such that $y\in B(x,\varepsilon)$.
A subset $Y$ of $X$ is called {\it bounded} if there exist $x\in X$ and $\varepsilon\in\mathcal{E}$
such that $Y\subseteq B(x, \varepsilon)$. The coarse
structure
$\mathcal{E}=\{\varepsilon\in X\times X: \bigtriangleup_{X}\subseteq\varepsilon\}$ is
the unique coarse structure such that $(X,\mathcal{E})$ is connected and bounded.
In what follows, all coarse spaces under consideration are supposed to be {\bf connected}.
Give a coarse space $(X, \mathcal{E})$, each subset $Y \subseteq X$ has the natural coarse structure
$\mathcal{E}|_{Y}= \{\varepsilon\cap(Y\times Y): \varepsilon\in\mathcal{E} \}$, $(Y, \mathcal{E}|_{Y})$
is called a {\it subspace} of $(X, \mathcal{E})$.
A subset $Y$ of $X$ is called {\it large} (or {\it coarsely dense}) if there exists $\varepsilon\in \mathcal{E}$ such that $X= B(Y, \varepsilon)$ where $B(Y, \varepsilon)=\cup_{y\in Y} B(y, \varepsilon)$.
Let $(X, \mathcal{E})$, $(X^{\prime}, \mathcal{E}^{\prime})$ be coarse spaces. A mapping
$f: X\longrightarrow X^{\prime}$
is called {\it coarse} (or {\it bornologous} in terminology of \cite{b8}) if, for every
$\varepsilon\in\mathcal{E}$,
there exists $\varepsilon^{\prime}\in\mathcal{E}^{\prime}$ such that, for every $x\in X$, we have
$f(B(x,\varepsilon))\subseteq B(f(x),\varepsilon^{\prime})$.
If $f$ is surjective and coarse then $(X^{\prime}, \mathcal{E}^{\prime})$
is called a {\it coarse image} of $(X, \mathcal{E})$.
If $f$ is a bijection such that $f$ and $f^{-1}$ are coarse mappings then $f$ is called an
{\it asymorphism}.
The coarse spaces
$(X, \mathcal{E})$, $(X^{\prime}, \mathcal{E}^{\prime})$
are called {\it coarsely equivalent} if there exist large subsets
$Y\subseteq X$, $Y^{\prime}\subseteq X$ such that
$(Y, \mathcal{E}|_{Y})$
and $(Y^{\prime}, \mathcal{E}^{\prime}|_{Y^{\prime}})$
are asymorphic.
To conclude the coarse vocabulary, we take a family
$\{(X_{\alpha}, \mathcal{E}_{\alpha}) : \alpha< \kappa\}$
of coarse spaces and define the
{\it product}
$P_{\alpha< \kappa}(X_{\alpha}, \mathcal{E}_{\alpha})$
as the set $P_{\alpha< \kappa} X_{\alpha}$
endowed with the coarse structure with the base
$P_{\alpha< \kappa} \mathcal{E}_{\alpha}$.
If $\varepsilon_{\alpha}\in\mathcal{E}_{\alpha}$, $\alpha<\kappa$
and $x,y\in P_{\alpha<\kappa}X_{\alpha}$,
$x=(x_{\alpha})_{\alpha<\kappa}$, $y=(y_{\alpha})_{\alpha<\kappa}$
then $(x,y)\in (\varepsilon_{\alpha})_{\alpha<\kappa}$
if and only if $(x_{\alpha}, y_{\alpha})\in\varepsilon_{\alpha}$
for every $\alpha<\kappa$.
\section{Coarse groups }
Let $G$ be a group with the identity $e$.
For a cardinal $\kappa$, $[G]^{<\kappa}$ denotes the set $\{Y\subseteq G: |Y|<\kappa\}$.
A family $\mathcal{I}$ of subsets of $G$ is called a {\it group ideal} if $\mathcal{I}$ is closed under formation of subsets and finite unions,
$[G]^{<\omega}\subseteq\mathcal{I}$
and $AB^{-1}\in\mathcal{I}$
for all $A,B\in \mathcal{I}$.
A group ideal $\mathcal{I}$ is called {\it invariant} if
$\cup _{g\in G} \ \ g^{-1}A g\in\mathcal{I}$
for each $A\in \mathcal{I}$.
For example, $[G]^{<\kappa}$
is a group ideal for any infinite cardinal $\kappa$.
If $\kappa>|G|$ we get the ideal $\mathcal{P}_{G}$ of all subsets of $G$.
We note also that $[G]^{<\omega}$ is invariant if and only if the set
$\{x^{-1}gx: x\in G\}$
is finite for each $g\in G$. By \cite{b6}, for every countable group $G$, there are $2^{2^{\omega}}$ distinct group ideals on $G$.
Let $X$ be a $G$-space with the action $G\times X\longrightarrow X$, $(g,x)\longmapsto gx$.
We assume that $G$ acts on $X$ transitively, take a group ideal $\mathcal{I}$ on $G$ and consider the coarse structure $\mathcal{E}(G, \mathcal{I}, X)$ on $X$ with the base $\{\varepsilon_{A}: A\in\mathcal{I}, \ e\in A\}$, $ \ \varepsilon_{A}=\{(x, gx): x\in X, g\in A\} $.
Then $(x,y)\in \varepsilon_{A}$ if and only if $yx ^{-1} \in A$, so
$B(x,\varepsilon_{A})=Ax$, where $Ax=\{gx: g\in A\}$.
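For instance (a standard illustration): if $G=X=\mathbb{Z}$ acts on itself by translations and $\mathcal{I}=[\mathbb{Z}]^{<\omega}$ then, for finite $A\subseteq\mathbb{Z}$ with $0\in A$, $B(x,\varepsilon_{A})=A+x$, and $\mathcal{E}(\mathbb{Z}, [\mathbb{Z}]^{<\omega}, \mathbb{Z})$ coincides with the coarse structure defined by the metric $d(x,y)=|x-y|$ (see Remark 2 below).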
By \cite[Theorem 1]{b5}, for every coarse structure $\mathcal{E}$ on $X$,
there exist a group $G$ of permutations of $X$ and a group ideal $\mathcal{I}$ on $G$ such that
$\mathcal{E}= \mathcal{E} (G, \mathcal{I}, X)$.
Now let $X=G$ and $G$ acts on $X$ by the left shifts.
We denote
$\mathcal{E}_{\mathcal{I}}= \mathcal{E} (G, \mathcal{I}, G)$.
Thus, every group ideal $\mathcal{I}$ on $G$ turns $G$ into the coarse space
$(G, \mathcal{E}_{\mathcal{I}})$.
We note that a subset $A$ of $G$ is bounded in $(G, \mathcal{E}_{\mathcal{I}})$ if and only if $A\in\mathcal{I}$.
For finitely generated groups, right coarse groups $(G, \mathcal{E}_{[G]^{<\omega}})$ in metric form play a great part in
{\it Geometric Group Theory}, see \cite[Chapter 4]{b2}.
A group $G$ endowed with a coarse structure $\mathcal{E}$ is called {\it left (right) coarse group} if, for every $\varepsilon\in \mathcal{E}$, there exists $\varepsilon^{\prime}\in \mathcal{E}$
such that $g B(x,\varepsilon)\subseteq B(gx,\varepsilon^{\prime})$ (respectively, $B(x,\varepsilon)g \subseteq B(xg,\varepsilon^{\prime})$)
for all $x, g\in G$.
A group $G$ endowed with a coarse structure $\mathcal{E}$ is called a
{\it coarse group} if the group multiplication $(G, \mathcal{E})\times (G, \mathcal{E})\longrightarrow (G, \mathcal{E}),$ $ \ (x,y)\longmapsto xy$ and the inversion
$(G, \mathcal{E})\longrightarrow (G, \mathcal{E}), $ $ \ x\longmapsto x^{-1}$
are coarse mappings. In this case, $\mathcal{E}$ is called a group coarse structure.
For proofs of the following two statements see \cite{b8} or \cite[Section 6]{b9}.
\vskip 6pt
{\bf Proposition 1. } {\it
A group $G$ endowed with a coarse structure $\mathcal{E}$ is a right coarse group if and only if there exists a group ideal $\mathcal{I}$ on $G$ such that $\mathcal{E}=\mathcal{E}_{\mathcal{I}}$. }
\vskip 6pt
{\bf Proposition 2. } {\it
For a group $G$ endowed with a coarse structure $\mathcal{E}$, the following conditions are equivalent:
\vskip 5pt
(i) $(G,\mathcal{E})$ is a coarse group;
\vskip 5pt
(ii) $(G,\mathcal{E})$ is left and right coarse group;
\vskip 5pt
(iii) there exists an invariant group ideal $\mathcal{I}$ on $G$ such that $\mathcal{E}=\mathcal{E}_{\mathcal{I}}$.
}
\vskip 7pt
{\bf Proposition 3. } {\it
Every group coarse structure $\mathcal{E}$ on a subgroup $H$ of an Abelian group $G$ can be extended to a group coarse structure $\mathcal{E}^{\prime}$ on $G$.\vskip 5pt
Proof.}
We take a group ideal $\mathcal{I}$ on $H$ such that $\mathcal{E}=\mathcal{E}_{\mathcal{I}}$, denote by
$\mathcal{I}^{\prime}$ the group ideal on $G$ with the base $\{A+B:
A\in [G] ^{<\omega}, \ B\in \mathcal{I}\}$ and put $\mathcal{E}^{\prime}=\mathcal{E}_{\mathcal{I}^{\prime}}$.
$ \ \ \ \Box$
\vskip 7pt
{\bf Example 1.} We construct a group $G$ with a normal Abelian
subgroup $H$
of index $|G: H|=2$ such that some group
coarse structure $\mathcal{E}$ on $H$ can not be extended to a right
group coarse structure on $G$. Let $H=\otimes_{n\in\mathbb{Z}} C_{n}$,
$C_{n}\simeq\mathbb{Z}_{2}$.
Every element $a\in H$ can be written as
$a=(a_{n})_{n\in \mathbb{Z}}$ with $a_{n}\in C_{n}$
and $a_{n} =0 $ for all but finitely many $n$. We define an automorphism $\varphi$ of
order 2 of $H$ by $\varphi(a_{n})_{n\in \mathbb{Z}}= (c_{n})_{n\in \mathbb{Z}}$, $c_{n}= a_{-n}$
for each $n\in \mathbb{Z}$.
We put $<\varphi>= \{\varphi, id\}$ and consider the
semidirect product
$G= H \ \ \leftthreetimes <\varphi>$.
If $(h_{1}, \varphi_{1}), (h_{2}, \varphi_{2})\in G$
then $(h_{1}, \varphi_{1}) (h_{2}, \varphi_{2})= (h_{1}+ \varphi_{1}(h_{2}), \ \varphi_{1} \varphi_{2}) $.
For each $m\in\mathbb{Z}$
we set $H_{m}=\otimes_{n\geq m} C_{n}$.
Then the family
$\{H_{m}: m\in \mathbb{Z}\}$
is a base for some group ideal $\mathcal{I}$ on $H$.
We put $\mathcal{E}=\mathcal{E}_{\mathcal{I}}$ and take an arbitrary invariant
group ideal $\mathcal{J}$ on $G$ such that $\mathcal{I} \subset \mathcal{J}$.
Since $\varphi H_{0}\varphi + H_{0}=H$,
we see that $H\in \mathcal{J}$.
It follows that the coarse structure $\mathcal{E}_{\mathcal{J}}|_{H}$
is bounded so $\mathcal{E}_{\mathcal{J}}|_{H}\neq\mathcal{E}$.
\vskip 7pt
{\bf Example 2. }
Let $G$ be an infinite group with only
two classes of conjugate elements, see \cite{b3}.
Then there is only one group coarse structure $\mathcal{E}$ on $G$,
namely $\mathcal{E}= \mathcal{E} _{\mathcal{P}(G)}$.
\section{Free coarse groups}
A class $\mathfrak{M}$ of groups is called a {\it variety} if $\mathfrak{M}$ is
closed under formation of subgroups, homomorphic images and products.
We assume that $\mathfrak{M}$ is non-trivial (i.e.
there exists $G\in \mathfrak{M}$ such that $|G|>1$) and recall that
the {\it free group} $F_{\mathfrak{M}} (X)$ is defined by the following
conditions: $F_{\mathfrak{M}} (X)\in\mathfrak{M}$, $X\subset F_{\mathfrak{M}} (X)$,
$X$ generates $F_{\mathfrak{M}} (X)$
and every mapping $X \longrightarrow G$, $G\in \mathfrak{M}$, can be extended to a homomorphism
$F_{\mathfrak{M}} (X)\longrightarrow G$.
Let $(X, \mathcal{E})$ be a coarse space. We assume that
$(F_{\mathfrak{M}} (X), \mathcal{E}^{\prime})$ is a coarse group
such that $(X, \mathcal{E})$
is a subspace of
$(F_{\mathfrak{M}} (X), \mathcal{E}^{\prime})$
and every coarse mapping
$(X, \mathcal{E})\longrightarrow (G, \mathcal{E}^{\prime\prime})$,
where $G\in \mathfrak{M}$ and $(G, \mathcal{E^{\prime\prime}})$
is a coarse group, can be extended to a coarse homomorphism
$(F_{\mathfrak{M}} (X), \mathcal{E}^{\prime})\longrightarrow (G, \mathcal{E^{\prime\prime}})$.
We observe that this $\mathcal{E}^{\prime}$ is unique, denote
$F_{\mathfrak{M}} (X, \mathcal{E})=(F_{\mathfrak{M}} (X), \mathcal{E}^{\prime})$
and say that $F_{\mathfrak{M}} (X, \mathcal{E})$
is a {\it free coarse group} over $(X, \mathcal{E})$ in the variety $\mathfrak{M}$.
Our goal is to prove the existence of $F_{\mathfrak{M}} (X, \mathcal{E})$
for every coarse space $(X, \mathcal{E})$ and every non-trivial variety $\mathfrak{M}$.
\vskip 7pt
{\bf Lemma 1. } {\it Let $(X, \mathcal{E})$ be a coarse space. If there is a group coarse structure
$\mathcal{E}^{\prime}$ on $F_{\mathfrak{M}} (X)$
such that $\mathcal{E}^{\prime}|_{X}= \mathcal{E}$
then there exists $F_{\mathfrak{M}} (X, \mathcal{E})$ .
\vskip 6pt
Proof.} We denote $\mathfrak{F}=\{\mathcal{T}: \mathcal{T}$ is a group coarse structure on
$F_{\mathfrak{M}} (X)$
such that
$\mathcal{T}|_{X}=\mathcal{E}\}$.
By the assumption, $\mathcal{E}^{\prime}\in\mathfrak{F}$.
We set $\mathcal{T}^{\prime}= \cap\mathfrak{F}$
and note that $\mathcal{T}^{\prime}$
is a group coarse structure and
$\mathcal{T}^{\prime}|_{X}=\mathcal{E}$.
Let $G\in\mathfrak{M}$, $(G, \mathcal{E}^{\prime\prime})$
be a coarse group,
$f: (X, \mathcal{E})\longrightarrow (G, \mathcal{E}^{\prime\prime})$
be a coarse mapping. We extend $f$ to homomorphism
$f: F_{\mathfrak{M}} (X)\longrightarrow G$.
Then the coarse structure on $ F_{\mathfrak{M}} (X)$
with the base
$\{f^{-1}(\varepsilon^{\prime\prime}): \varepsilon^{\prime\prime}\in\mathcal{E}^{\prime\prime} \}$
is in $\mathfrak{F}$.
It follows that the homomorphism
$f: (F_{\mathfrak{M}} (X), \mathcal{T}^{\prime}) \longrightarrow (G, \mathcal{E}^{\prime\prime})$
is coarse. Hence,
$ (F_{\mathfrak{M}} (X), \mathcal{T}^{\prime}) = F_{\mathfrak{M}}
(X, \mathcal{E})$.$ \ \ \ \Box$
\vskip 7pt
{\bf Lemma 2.} {\it
For every coarse space $(X, \mathcal{E})$ and every non-trivial variety $\mathfrak{M}$ of groups, there exists a group coarse structure $\mathcal{E}^{\prime}$ on $F_{\mathfrak{M}} (X)$ such that $\mathcal{E}^{\prime}|_{X} = \mathcal{E}$.
\vskip 6pt
Proof.}
For some prime number $p$, $\mathfrak{M}$ contains the variety
$\mathcal{A}_{p}$ of all Abelian groups of period $p$.
We prove the lemma first for $\mathcal{A}_{p}$ and then for $\mathfrak{M}$.
We take the free group $A(X)$ over $X$ in $\mathcal{A}_{p}$.
Every non-zero element $a\in A(X)$ has the unique (up to permutations of items) representation
$$(\ast) \ m_{1}x_{1} + m_{2}x_{2} + \ldots + m_{k}x_{k}, \ \ x_{i}\in X, \ m_{i}\in \mathbb{Z}_{p}\setminus \{0\}, \ \ i\in\{1, \ldots , k\}.$$
For every $\varepsilon\in \mathcal{E}$ with $\varepsilon= \varepsilon^{-1}$,
we denote
$Y_{\varepsilon}=\{x-y: x, y \in X, (x,y)\in\varepsilon\}$
and by $Y_{n,\varepsilon}$ the sum of $n$ copies of $Y_{\varepsilon}$.
We take $z\in X$ and consider the ideal $\mathcal{I}$ on $A(X)$ with the base
$$Y_{n,\varepsilon} + \{0, z, 2z, \ldots , (p-1)z\}, \ \ \varepsilon\in\mathcal{E}, \ \varepsilon=\varepsilon^{-1}, \ n<\omega.$$
We note that $Y_{n,\varepsilon} - Y_{n^{\prime},\varepsilon^{\prime}} \subseteq Y_{n+n^{\prime},\varepsilon\circ\varepsilon^{\prime}} $.
It follows that $B-C\in\mathcal{I}$
for all $B, C\in\mathcal{I}$.
To show that $[A(X)]^{<\omega}\subseteq\mathcal{I}$,
we take $x\in X$ and find $\varepsilon\in\mathcal{E}$ such that
$(x,z)\in \varepsilon$.
Then $x-z\in Y_{\varepsilon}$
and $x\in Y_{\varepsilon} + z$.
Hence, $\mathcal{I}$ is a group ideal.
We put $\mathcal{E}^{\prime}=\mathcal{E}_{\mathcal{I}}$ and show that $\mathcal{E}^{\prime}|_{X}=\mathcal{E}$.
If $\varepsilon\in\mathcal{E}$, $\varepsilon=\varepsilon^{-1}$ and $(x,y)\in\varepsilon$
then $x-y\in Y_{\varepsilon}$ so $\mathcal{E}\subseteq\mathcal{E}^{\prime}$. To prove the inverse inclusion, we take
$Y_{n,\varepsilon}+\{0, z, \ldots , (p-1)z\}$,
assume that $x-y\in Y_{n,\varepsilon}+ \{0, z, \ldots , (p-1)z\}$
and consider two cases.
\vskip 7pt
{\it Case:} $x-y\in Y_{n,\varepsilon}+ iz,$ $ \ i\neq 0$.
We denote by $H$ the subgroup of all $a\in A(X)$ such that
$m_{1} + \ldots + m_{k} \equiv 0 \pmod{p}$ in the canonical representation $( \ast)$.
Then $x-y\in H$, $Y_{n,\varepsilon}\subseteq H $ but $iz\notin H$ so this case is impossible.
\vskip 7pt
{\it Case:} $x-y\in Y_{n,\varepsilon}.$
We show that $(x,y)\in\varepsilon^{n}$.
We write $x-y$ as $(x_{1}- y_{1})+ \ldots + (x_{n}- y_{n})$ with $x_{i},y_{i}\in X$ and $(x_{i},y_{i})\in\varepsilon$,
so that each $x_{i}-y_{i}\in Y_{\varepsilon}$.
Assume that there exists $k\in\{1, \ldots, n-1\}$
such that
$$\{x_{1},y_{1}, \ldots , x_{k},y_{k}\}\cap \{x_{k+1},y_{k+1}, \ldots , x_{n},y_{n}\}=\emptyset . $$
Then either
$(x_{1}- y_{1})+ \ldots + (x_{k}- y_{k})=0$, or
$(x_{k+1}- y_{k+1})+ \ldots + (x_{n}- y_{n})=0$.
Otherwise, $x-y$ in the representation $( \ast)$ has more
than two items. It follows that there is a representation
$$x-y=(x_{1}^{\prime}- y_{1}^{\prime})+ \ldots + (x_{k}^{\prime}- y_{k}^{\prime}), \ \ x_{i}^{\prime}, y_{i}^{\prime}\in Y_{\varepsilon}, \ \ i\in\{1, \ldots, k\}, \ \ k\leq n $$
such that
$\{x_{i+1}^{\prime}, y_{i+1}^{\prime}\}\cap \{x_{1}^{\prime}, y_{1}^{\prime}, \ldots , x_{i}^{\prime}, y_{i}^{\prime}\}\neq\emptyset$
for each $i\in\{1,\ldots, k-1\}$.
If $(x^{\prime}, y^{\prime})\in\varepsilon^{i}$ for all $x^{\prime}, y^{\prime}\in \{x_{1}^{\prime}, y_{1}^{\prime}, \ldots, x_{i}^{\prime}, y_{i}^{\prime} \}$
then $(x^{\prime}, y^{\prime})\in\varepsilon^{i+1}$
for all
$x^{\prime}, y^{\prime}\in \{x_{1}^{\prime}, y_{1}^{\prime},
\ldots, x_{i+1}^{\prime}, y_{i+1}^{\prime} \}$.
After $k$ steps, we get $(x,y) \in \varepsilon^{k}$ so $(x,y)\in \varepsilon^{n}$.
To conclude the proof, we extend the mapping $id: X\longrightarrow X$ to homomorphism
$f: F_{\mathfrak{M}}(X)\longrightarrow A(X)$.
Then $\{f^{-1}(Y): Y\in\mathcal{I}\}$
is a base for some invariant group ideal $\mathcal{J}$ on $F_{\mathfrak{M}}(X)$.
Then $(F_{\mathfrak{M}}(X), \mathcal{E}_{\mathcal{J}})$
is a coarse group. Since $f|_{X}= id$,
we have $\mathcal{E}_{\mathcal{J}}|_{X}= \mathcal{E}_{\mathcal{I}}|_{X}= \mathcal{E}$.
$ \ \ \ \Box$
\vskip 10pt
{\bf Theorem.} {\it
For every coarse space $(X, \mathcal{E})$ and every non-trivial variety $\mathfrak{M}$ of groups, there exists the free coarse group $F_{\mathfrak{M}}(X, \mathcal{E})$.
\vskip 6pt
Proof.} Apply Lemma 2 and Lemma 1.$ \ \ \ \Box$
\vskip 10pt
{\bf Remark 1.}
To describe the coarse structure $\mathcal{E}^{\ast}$ of $ F_{\mathfrak{M}} (X, \mathcal{E})$ explicitly,
for every $\varepsilon\in \mathcal{E}$, we put $\mathcal{D}_{\varepsilon}=\{xy^{-1}: x,y\in X, \ \ (x,y)\in\varepsilon\}$, take $z\in X$ and denote by $P _{n,\varepsilon}$ the product of $n$ copies of the set
$$\bigcup_{g\in F_{\mathfrak{M}}(X)} \ \ g^{-1} (\mathcal{D}_{\varepsilon} \bigcup \mathcal{D}_{\varepsilon} \ z) \ g .$$
Then
$\{P_{n,\varepsilon}: \varepsilon\in\mathcal{E}, n<\omega\}$
is a base for some invariant group ideal $\mathcal{I}^{\ast}$ on $F_{\mathfrak{M}} (X)$.
Each subset $A\in \mathcal{I}^{\ast}$ is bounded in $F_{\mathfrak{M}}(X, \mathcal{E})$ so
$\mathcal{E} _{\mathcal{I}^{\ast}} \subseteq \mathcal{E}^{\ast}$. To see that
$\mathcal{E}^{\ast} \subseteq \mathcal{E} _{\mathcal{I}^{\ast}} $,
the reader can repeat the arguments concluding the proof of Lemma 2.
Hence,
$\mathcal{E}^{\ast} = \mathcal{E} _{\mathcal{I}^{\ast}}$.
\vskip 10pt
{\bf Remark 2.} Each metric space $(X, d)$
defines the coarse structure
$\mathcal{E}_{d}$ on $X$ with the base
$\{(x,y): d(x,y)<n\}$, $n<\omega$.
By \cite[Theorem 2.1.1]{b9}, a coarse structure $\mathcal{E}$ is metrizable if and only if $\mathcal{E}$ has a countable base. If $\mathcal{E}$ is metrizable then, in view of Remark 1, the coarse structure of
$F _{\mathfrak{M}} (X, \mathcal{E})$ is metrizable.
\vskip 10pt
{\bf Remark 3.} If the coarse spaces $(X, \mathcal{E}), (X, \mathcal{E}^{\prime})$ are asymorphic
then evidently $F _{\mathfrak{M}} (X, \mathcal{E})$, $F _{\mathfrak{M}} (X^{\prime}, \mathcal{E}^{\prime})$
are asymorphic but this is not true with coarse equivalences in place of asymorphisms.
Let $\mathfrak{M}= \mathcal{A} _{p}$ and let $X$ be an infinite set endowed
with the bounded coarse structure $\mathcal{E}$.
We take
$X^{\prime}, |X^{\prime}|=1$ and denote by $\mathcal{E}^{\prime}$ the unique coarse structure on $X^{\prime}$.
Clearly,
$(X, \mathcal{E})$ and $(X^{\prime}, \mathcal{E}^{\prime})$
are coarsely equivalent, and
$F _{\mathfrak{M}} (X^{\prime}, \mathcal{E}^{\prime})$
is a cyclic group of order $p$ with bounded coarse structure.
To see that
$F _{\mathfrak{M}} (X, \mathcal{E})$ is unbounded,
we take the subset $Y _{n,\varepsilon}$ (see proof of Lemma 2)
and note that the length of any element from $Y _{n,\varepsilon}$ in representation $(\ast)$
does not exceed $2n$, but $F _{\mathfrak{M}} (X)$ has elements of any length.
\vskip 10pt
{\bf Remark 4.}
Let $X$ be a Tikhonov space with distinguished point $x_{0}$.
In \cite{b1} M. I. Graev defined a group topology on
$F(X\setminus \{x_{0}\} )$ such that $X$ is a closed subset of
$F(X\setminus \{x_{0}\} )$, $x_{0}= e$, and every continuous mapping $f: X\longrightarrow G $ with
$f(x_{0})=e$, where $G$ is a topological group, can be extended to a continuous homomorphism
$F(X\setminus \{x_{0}\} )\longrightarrow G$.
Let $(X, \mathcal{E})$ be a coarse space with distinguished point
$x_{0}$, $Y= X\setminus\{x_{0}\}$, $\mathcal{E}^{\prime}= \mathcal{E} \mid _{Y}$.
We take the free coarse group
$F(Y, \mathcal{E}^{\prime})$
and note that $\{e\}\cup Y$ is asymorphic to $(X, \mathcal{E})$ via the mapping
$h(y)=y$, $y\in Y$ and $h(e)=x_{0}$.
Hence, it does not make sense to define the coarse counterparts of the Graev free topological groups.
\section{Introduction}
The discovery of Weyl semimetals and topological semimetals in general~\cite{xu2015discovery,yang2015weyl,lv2015experimental,xu2015discoveryNbAs,armitage2018weyl,lv2021experimental,hasan2021weyl,yan2017topological,bansil2016colloquium} has prompted research into
new phenomena and exciting applications in various areas of condensed matter physics.
These include usage as
far-infrared and terahertz detectors~\cite{wu2017giant}, magnetoresistive memory devices~\cite{han2021current,de2021gigantic}, photovoltaic devices~\cite{osterhoudt2019colossal}, and as interconnects in next-generation integrated circuits (ICs)~\cite{chen2020topological,han20211d,gall2021materials,lanzillo2022size}.
A Weyl semimetal can be formed by breaking either time-reversal or inversion symmetry in a crystal with 3D Dirac cones, leading to pairs of band crossing points called Weyl nodes.
The surface Brillouin zone of a Weyl semimetal hosts projections of such Weyl-node pairs connected through a series of topologically protected Fermi-arc surface states~\cite{lv2015experimental}.
Substantial recent research efforts have targeted first-principles prediction of new topological semimetals, material syntheses, and confirmation of nontrivial band structures and Fermi-arc surface states using angle-resolved photoemission spectroscopy (ARPES)~\cite{bansil2016colloquium,armitage2018weyl,lv2021experimental,hasan2021weyl,yan2017topological}.
These materials have been shown to exhibit novel transport, optical and magnetic phenomena~\cite{nagaosa2020transport,hu2019review,wang2017quantum,gorbar2018anomalous}, including chiral anomalies~\cite{ong2021experimental}, a nonlinear Hall effect~\cite{sodemann2015quantum,ma2019observation,kang2019nonlinear}, a quantized circular photogalvanic effect~\cite{de2017quantized,rees2020helicity,ni2021giant} and giant second-harmonic generation~\cite{wu2017giant,patankar2018resonance}.
Like those of topological insulators, the surface states of topological semimetals have received considerable attention; if topologically protected, they could potentially support high surface conduction.
Previous theoretical work has argued that the Fermi-arc states in a toy-model Weyl semimetal contribute the same order of magnitude as the bulk states to total conduction~\cite{breitkreiz2019large} and could be highly disorder tolerant when the Fermi arcs are nearly straight~\cite{resta2018high}.
However, other studies have shown that the transport due to Fermi arcs is dissipative due to a strong hybridization of surface and bulk states, which leads to scattering between surface and bulk states~\cite{gorbar2016origin,wilson2018surface}.
Since these studies relied primarily on highly-simplified Hamiltonians and analytical models, a comprehensive study of transport fully accounting for the electronic structure at dimensions relevant to future device applications is now necessary.
In this work, we pursue a fundamental understanding of the electron transport properties of Weyl semimetals at the nanoscale and evaluate their potential as high-conductivity future interconnect metals.
In modern-day ICs, the devices patterned on a silicon substrate are linked to form a circuit using Cu nanowires called interconnects.
The resistivity of Cu increases dramatically with decreasing size due to enhanced scattering of electrons from surfaces, defects, and grain boundaries~\cite{fuchs1938math,sondheimer1952adv,mayadas1969electrical,mayadas1970electrical}.
Such an increase in resistivity can increase the signal delay and energy consumption by $\sim 40\times$, a major bottleneck in the semiconductor industry~\cite{gall2021materials,gall2020search}.
The search to replace Cu has expanded from elemental metals to intermetallics~\cite{soulie2021aluminide,chen2018nial,chen2021interdiffusion,zhang2022resistivity}, metallic carbides and nitrides such as MAX phases~\cite{sankaran2020ab,zhang2021resistivity}, directional conductors~\cite{DirectionalConductors}, and topological materials~\cite{chen2020topological,han20211d,han2022topological,lien2022unconventional,lanzillo2022size}.
In a recent breakthrough, Zhang et al. \cite{zhang2019ultrahigh} showed experimentally that the electrical resistivity of nanobelts of [001] oriented NbAs, a Weyl semimetal, becomes an order of magnitude lower than the bulk single-crystal resistivity.
In some nanobelt samples, the resistivity can even be lower than the bulk resistivity of Cu.
Such an anomalous reduction was attributed to transport via the disorder-tolerant Fermi-arc surface states in NbAs. %
Furthermore, using first-principles calculations, Chen et al.~\cite{chen2020topological} predicted that thin films of a prototypical chiral topological semimetal CoSi can exhibit conduction dominated by Fermi-arc surface states, leading to a resistance-area ($RA$) product that decreases with decreasing thickness, in stark contrast to Cu and other conventional metal films.
Despite the promising trend of decreasing $RA$ product with dimensions, CoSi is still at a disadvantage compared to Cu because of the low density of states at the Fermi level and a significantly higher bulk resistivity.
Hence, we need semimetals with larger numbers of topologically-protected surface states~\cite{chen2020topological,lanzillo2022size}.
The aforementioned Weyl semimetal NbAs is one such candidate with 12 pairs of Weyl nodes.
In this work, we use first-principles quantum transport calculations to predict the $RA$ product scaling of (001) NbAs thin films with and without surface defects.
We find that the $RA$ product decreases with decreasing film thickness for both pristine and defect-laden films, as previously shown for CoSi~\cite{chen2020topological}.
However, NbAs does not exhibit the protection of surface transport against line defects that was shown for CoSi, a protection that arises from the chiral nature of the CoSi surface states.
Our calculations illustrate that the observed $RA$ scaling in NbAs is due to the large number of surface states that account for at least $50\%$ of conduction for films thinner than $\sim7$ nm.
The contribution of the Nb-terminated surfaces in (001) NbAs films is roughly 3 times that of the As-terminated surfaces.
Lastly, we show that surface-mediated conduction and favorable $RA$ scaling with thickness survives in the presence of minor surface disorder.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{Figure1.pdf}
\caption{(a) The tetragonal unit cell of NbAs comprises 8 atomic layers such that each Nb and As atom has a coordination number of 6. The crystal has time-reversal symmetry but lacks space-inversion symmetry. When a bulk NbAs crystal is cleaved along the (001) plane, it produces a Nb-terminated surface (top) and an As-terminated surface (bottom). (b) DFT bandstructures for 16AL and 40AL (001) films of NbAs with colors representing the contribution of the bulk (gray), Nb-terminated (red) and As-terminated (blue) surfaces to the electronic states. Increasing the thickness of the slabs increases the number of bulk bands at the Fermi level, though the surface bands remain largely unchanged. }
\label{fig:intro}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{FS.pdf}
\caption{Isoenergy surfaces for a relaxed 56AL (001) slab of NbAs (thickness $\sim 80.24 \textup{~\AA}$) at energies $\varepsilon = 120, 80, 40$ and $0$~meV below the neutral Fermi level $\varepsilon_F$.
The colors represent the contribution of the bulk (gray), Nb-terminated surface (red) and As-terminated surface (blue) to the electronic states.
The Nb-terminated surface contributes many more states, which extend throughout the Brillouin zone, compared to the fewer As-terminated surface states, which are localized in $\mathbf{k}$-space.
The Fermi arc states agree with ARPES measurements best for $\varepsilon - \varepsilon_F = -80$~meV.}
\label{fig:FS}
\end{figure*}
\section{Results and Discussion}
\textbf{Bandstructure and Fermi surface:} Figure~\ref{fig:intro}(b) shows the first-principles-computed bandstructures of 16 atomic-layer (AL) ($\sim 21.43 \textup{~\AA}$) and 40 AL ($\sim 56.71 \textup{~\AA}$) (001) slabs of NbAs. The colors represent the contribution of spatial regions to each electronic state: bulk in gray, Nb-terminated surface in red and As-terminated surface in blue. Increasing the thickness of the slabs (16 AL $\rightarrow$ 40 AL) increases the number of bulk bands but the surface bands remain largely unchanged. Note that the (001) surface of NbAs reduces the $C_{4}$ rotational symmetry of the bulk to $C_{2}$~\cite{xu2015discovery,sun2015topological}. As a result, both the Nb-terminated (red) and As-terminated (blue) surface bands differ between the $\Gamma$-X and $\Gamma$-Y high-symmetry $\mathbf{k}$-point paths. The bulk (gray) bands, which dominate the Y-$\Gamma$-X path, however, are mostly symmetric about $\Gamma$. At the Fermi level, the Nb-terminated surface bands are hole-like along X-$\Gamma$-Y and electron-like along M-Y-$\Gamma$-X. These results agree with previous DFT bandstructure calculations for NbAs films~\cite{sun2015topological}.
Next, we analyze the Fermi surfaces of (001) NbAs slabs to get insight into its electronic bandstructure.
Since electronic states at the Fermi level dominate conduction, we aim to find the chemical potential at which the DFT-predicted isoenergy surfaces agree the best with ARPES data~\cite{xu2015discoveryNbAs} to use for subsequent non-equilibrium Green's function (NEGF) calculations.
Figure~\ref{fig:FS} shows the isoenergy surfaces for a 56 AL slab computed using the Wannierized electronic states ($\mathbf{k}$-point grid: $512\times512$) at different energy levels $\varepsilon$ near the neutral Fermi level $\varepsilon_F$, with $\varepsilon - \varepsilon_F \in \{-120, -80, -40, 0\}$~meV.
As described in the Methods section, these isoenergy surfaces have been resolved by contributions of the bulk (gray), Nb-terminated (red) and As-terminated (blue) surfaces.
Comparing the top and bottom rows, we find that a disproportionate number of states belong to the Nb-terminated surface.
Sun et al.~\cite{sun2015topological} noted that strong hybridization between the surface and bulk states, and between trivial Fermi surfaces and arcs, makes it difficult to isolate the topological Fermi-arc states.
However, careful analysis of spin textures~\cite{sun2015topological} and ARPES measurements~\cite{xu2015discoveryNbAs} has indicated that the outer arc of the spoon-shaped features along $\Gamma$-X and $\Gamma$-Y are the Fermi arcs.
These arcs are not clearly visible at the DFT-predicted Fermi level $\varepsilon_F$, but become clearer with decreasing $\varepsilon$ and achieve good agreement with the ARPES measurements of the As-terminated surface at $\varepsilon - \varepsilon_F = -80$~meV. Hence, we shift the Fermi level down by 80 meV from the original DFT-computed value for all transport predictions reported below.
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{conductance.pdf}
\caption{Ballistic conductance as a function of thickness for pristine (001) NbAs films.
The total conductance increases linearly due to the linear increase in bulk conductance, while the conductance contribution due to surface states remains constant.
The dashed lines are linear fits to the computed data points for 16, 24, 32 and 40 AL slabs.
Surface conduction dominates over bulk conduction for slabs thinner than 6.8 nm, reaching $76\%$ of the total conduction for 2-nm-thick slabs.}
\label{fig:conductance_contrib}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{Figure_pristine.pdf}
\caption{(a) Transmission as a function of thickness for pristine (001) slabs of NbAs.
The non-zero intercept corresponds to surface conduction.
(b) Normalized resistance-area $RA$ product for transport along [100] direction of NbAs (001) slabs, compared against CoSi and Cu from Ref.~\citenum{chen2020topological}.
The dashed line is the fit of Equation~\ref{eqn:RA_fit} to the computed conductance data for NbAs.
Topological semimetals NbAs and CoSi show a promising trend of decreasing $RA$ product with decreasing thickness due to the significant contribution of surface states to total conductance for thin films of both materials. }
\label{fig:RA}
\end{figure}
\textbf{Ballistic conductance scaling:} We first compute the ballistic conductance of pristine films of NbAs as a function of thickness using
\begin{align}
G = \frac{e^2}{2}\int_{BZ} \sum_b g_s\frac{d\vec{k}}{(2\pi)^3} f'_0(\varepsilon_{\vec{k}b})|v^x_{\vec{k}b}|
\label{eqn:ballistic}
\end{align}
where $\varepsilon_{\vec{k}b}$ and $\vec{v}_{\vec{k}b}$ are the electronic energies and velocities of band $b$ and wavevector $\vec{k}$ in the Brillouin zone (BZ), and $g_s$ is the spin degeneracy factor ($g_s=1$ for the $\varepsilon_{\vec{k}b}$ vs. $\mathbf{k}$ relations with spin-orbit coupling). The derivative of the Fermi-Dirac occupations $f'_0(\varepsilon_{\vec{k}b})$ limits the contributions of electronic states to within a few $k_BT$ around the Fermi level $\varepsilon_F$ (see the SI for a detailed derivation of Equation~\ref{eqn:ballistic}).
We evaluate the above expression in JDFTx~\cite{JDFTx} for room temperature ($k_BT=0.026$ eV) using a Monte Carlo sampling of 250,000 $\mathbf{k}$-points in the BZ.
We also decompose the total conductance $G$ into contributions from the bulk, Nb-terminated and As-terminated surfaces by weighing each electronic state in the integrand of Eq.~\ref{eqn:ballistic} with a slab weight function described in the Methods section. The bounding box or slab used to define the spatial region for the surface states is shown in Figure~\ref{fig:conductance_contrib}.
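To illustrate the workflow, a minimal Monte Carlo sketch of Equation~\ref{eqn:ballistic} is given below. It is not the JDFTx implementation: a toy cosine dispersion stands in for the Wannier-interpolated bands, and units and prefactors are left schematic.
\begin{verbatim}
import numpy as np

kT = 0.026  # eV, room temperature

def dfermi(E, mu):
    # -df0/dE for Fermi-Dirac occupations, peaked within ~kT of mu
    x = np.clip((E - mu) / kT, -40.0, 40.0)
    return np.exp(x) / (kT * (1.0 + np.exp(x))**2)

def toy_bands(k):
    # stand-in dispersion; replace with Wannier interpolation of DFT bands
    E  = -np.cos(2*np.pi*k[0]) - np.cos(2*np.pi*k[1]) - np.cos(2*np.pi*k[2])
    vx = 2*np.pi*np.sin(2*np.pi*k[0])
    return np.array([E]), np.array([vx])

def ballistic_G(bands, mu, n_samples=250_000, seed=0):
    # Monte Carlo BZ average of sum_b (-f0') |v^x|, in units of e^2/2
    rng, acc = np.random.default_rng(seed), 0.0
    for _ in range(n_samples):
        E, vx = bands(rng.uniform(-0.5, 0.5, size=3))
        acc += np.sum(dfermi(E, mu) * np.abs(vx))
    return 0.5 * acc / n_samples

print(ballistic_G(toy_bands, mu=0.0, n_samples=20_000))
\end{verbatim}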
We see that the total ballistic conductance per unit length ($G/L$) increases linearly with thickness (Figure~\ref{fig:conductance_contrib}).
The decomposition of the total conductance into bulk and surface contributions shows that the Nb- and As-terminated surface-state contributions remain constant with thickness.
The bulk conductance contribution decreases linearly with decreasing film thickness and extrapolates to nearly zero at zero thickness. Hence, the total conductance $G$ can be expressed as
\begin{align}
G = g\sub{bulk} t
+ G\super{Nb}\sub{surf}+G\super{As}\sub{surf}
\label{eqn:g_fit}
\end{align}
where $t$ is the thickness of the film, $g\sub{bulk}$ is the slope of the linear fit to the bulk conductance, and $G\super{Nb}\sub{surf}$ and $G\super{As}\sub{surf}$ are the conductances due to the Nb- and As-terminated surfaces respectively. For a 16 AL ($\sim$ 2.1 nm) slab, the surface states and bulk account for $76.3\%$ and $23.7\%$ of the total ballistic conductance respectively.
Such large surface state contributions to conductance have been observed for other topological semimetals as well, e.g. $\sim 90~\%$ surface-state contribution in 2.7-nm-thick CoSi~\cite{lien2022unconventional}.
As we increase the thickness to 40 AL ($\sim$ 5.7 nm), the bulk conductance contribution for NbAs increases to $45.4\%$ while the surface contribution reduces to $54.6\%$.
Extrapolation of the linear fits to bulk and total surface conductance ($G\super{Nb}\sub{surf}+G\super{As}\sub{surf}$) reveals that the crossover point where surface and bulk conductance become equal is at around 6.8 nm which corresponds to a relaxed 48 AL slab.
We also find that due to the larger number of states at the Fermi level, the ballistic conductance of NbAs (001) films is larger than that of CoSi (See Figure S7).
Specifically, for a 2.5-nm-thick slab, the conductance of NbAs is around $57\%$ higher than that of CoSi.
Importantly, the Nb-terminated surface contributes almost 3 times as much as the As-terminated surface to ballistic conductance, i.e., $G\super{Nb}\sub{surf} \approx 3G\super{As}\sub{surf}$.
This is in line with the Nb-terminated surface states vastly outnumbering the As-terminated surface states in the surface-resolved Fermi surfaces shown in Figure~\ref{fig:FS}.
Note that shifting the boundary of the bounding box/slab further into the slab (Figure~\ref{fig:conductance_contrib}) would count more electronic states as surface states, folding part of the bulk conductance into the surface contribution.
While the definition of these boundaries is arbitrary, we have chosen them at the largest value for which $G^{\mathrm{Nb/As}}\sub{surf}$ remains thickness independent, in order to capture as much of the surface-state contribution as possible without including the bulk.
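For reference, the decomposition of Equation~\ref{eqn:g_fit} and the resulting surface/bulk crossover thickness follow from a simple linear fit; the sketch below uses made-up conductance values in place of our computed 16--40 AL data, chosen only to reproduce the $\sim6.8$~nm crossover quoted above.
\begin{verbatim}
import numpy as np

t = np.array([2.1, 3.3, 4.5, 5.7])       # film thickness, nm
G = np.array([4.45, 5.05, 5.65, 6.25])   # total conductance (placeholder)

g_bulk, G_surf = np.polyfit(t, G, 1)     # slope = g_bulk, intercept = G_surf
t_cross = G_surf / g_bulk                # thickness where surface = bulk
print(f"g_bulk={g_bulk:.2f}/nm, G_surf={G_surf:.2f}, "
      f"crossover={t_cross:.1f} nm")
\end{verbatim}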
\begin{figure*}[t!]
\centering
\includegraphics[width=1.\textwidth]{defects_24AL.pdf}
\caption{(a-f) Illustration of the different surface line-defect configurations studied in our NEGF calculations.
(g) Transmission and (h) normalized resistance-area $RA$ product for [100] transport in NbAs(001) with defects.
Line defects reduce the net transmission, but the intercept remains nonzero and the corresponding $RA$ remains below the bulk value, indicating that surface conduction persists.
Only when the notch is made deep enough, as for the 20-atom defect, does the $RA$ product begin to increase with decreasing thickness, owing to a significant reduction in the transmission.}
\label{fig:defects}
\end{figure*}
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{k_resolved_24_40.pdf}
\caption{Momentum $\mathbf{k}$-resolved transmission for (a) 24 AL and (b) 40 AL NbAs(001) slabs with different defects.
Increasing the thickness of the slabs increases the transmission (higher peaks) owing to the larger number of bulk states.
Except for the 1-atom (As) defect configuration, line defects reduce the transmission at every $\mathbf{k}$-point, indicating the absence of protection.
For the 1-atom defect, where an atom is removed from the As-terminated surface, transmission is not affected in regions of the Brillouin zone which do not have the localized As-surface states.}
\label{fig:k_resolved}
\end{figure}
\textbf{Resistance-area product scaling:} We next analyze the resistance-area ($RA$) product scaling for NbAs films with and without (pristine) surface defects and compare the results with those of Cu (a conventional metal) and CoSi (a chiral multifermion semimetal) (Figure~\ref{fig:RA}(b)).
The resistance $R$ of these films has been calculated from the transmission $T$ at the Fermi level $\varepsilon_F$ using
\begin{align}
R = \frac{1}{G_0 T(\varepsilon=\varepsilon_F)}
\end{align}
Here, $G_0$ is the quantum of conductance $e^2/h$. The transmission is computed using the NEGF method~\cite{datta2005quantum}, where we employ Wannier tight-binding Hamiltonians constructed using DFT as described in the Methods section.
Previous first-principles NEGF calculations have shown that the $RA$ product of slabs $(RA)\sub{slab}$ for pristine Cu is mostly independent of slab thickness~\cite{timoshevskii2008influence, chen2020topological,lien2022unconventional}, because bulk states dominate conduction.
A similar trend has also been observed for MoP, a topological metal, where most of the electronic states at the Fermi level are bulk states~\cite{han2022topological}.
Hence, for such materials, conductance $G(=1/R)$ decreases linearly with decreasing thickness or cross-sectional area $A$, making the $RA$ product constant.
Consequently, independent of film thickness, the normalized $RA$ product $(RA)\sub{slab}/(RA)\sub{bulk} \approx 1$.
In contrast, NEGF calculations of pristine films of NbAs show that $(RA)\sub{slab}$ decreases with decreasing film thickness and is always less than $(RA)\sub{bulk}$, similar to previous reports for CoSi films~\cite{chen2020topological, lien2022unconventional}.
This can be explained by extending Equation~\ref{eqn:g_fit} to calculate $(RA)\sub{slab}/(RA)\sub{bulk}$
\begin{align}
(RA)\sub{slab}/(RA)\sub{bulk}\approx \frac{1}{1+\alpha/t}
\label{eqn:RA_fit}
\end{align}
where $\alpha = (G\super{Nb}\sub{surf}+G\super{As}\sub{surf})/g\sub{bulk}$. (See SI for derivation.)
Equation~\ref{eqn:RA_fit} predicts that $(RA)\sub{slab} < (RA)\sub{bulk}$ in pristine slabs of any finite thickness $t$, as long as there is some surface contribution, $\alpha > 0$.
When the surface conductance is negligible ($G\sub{surf}\rightarrow 0$ and hence $\alpha\to 0$), the normalized $RA \to 1$ for all thicknesses, exactly as observed for conventional metals such as Cu.
Figure~\ref{fig:RA}(b) shows an excellent fit of the computed $(RA)\sub{slab}/(RA)\sub{bulk}$ with Equation~\ref{eqn:RA_fit}, establishing the validity of the simple model of additive surface and bulk conductance for Weyl semimetals.
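The one-parameter fit shown by the dashed line in Figure~\ref{fig:RA}(b) can be reproduced as sketched below; the thickness and normalized $RA$ arrays are placeholders consistent with $\alpha \approx 6.8$~nm, not the computed values themselves.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ra_model(t, alpha):
    # Eq. (RA_fit): normalized RA product, 1 / (1 + alpha/t)
    return 1.0 / (1.0 + alpha / t)

t  = np.array([2.1, 3.3, 4.5, 5.7])       # thickness, nm (placeholder)
ra = np.array([0.24, 0.33, 0.40, 0.46])   # (RA)_slab/(RA)_bulk (placeholder)

(alpha,), _ = curve_fit(ra_model, t, ra, p0=[1.0])
print(f"alpha = G_surf/g_bulk ~ {alpha:.2f} nm")
\end{verbatim}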
We now investigate the effect of notches or surface line-defects on the ballistic conductance of NbAs films.
We study six different types of defects as shown in Figure~\ref{fig:defects}(a-f).
The calculated transmission and the resultant normalized $RA$ product are shown in Figures~\ref{fig:defects}(g) and (h) respectively.
As expected, the transmission for pristine films increases linearly with thickness, which corresponds to the increasing number of bulk conducting channels/bands at the Fermi level.
We perform a linear fit ($y = mx+b$) for the thickness-dependent transmission data for all defect types.
(The parameters slope $m$, intercept $b$ and $R^2$ have been provided in Table S1.)
Removing an As atom from the As-terminated (top) surface leads to a relatively small drop in the transmission for all the slabs, such that the intercept of the linear fit drops only slightly, $\Delta b \sim -0.6$.
However, removing a Nb atom from the Nb-terminated (bottom) surface causes an almost 4 times larger reduction ($\Delta b \sim -2.2$) in the transmission.
For the third case (Figure~\ref{fig:defects}(c)), where we remove an atom each from the top and bottom surface, the transmission reduces by $\sim 2.8$, which is equal to the sum of the above two reductions.
As we increase the depth of the `notch' on the surfaces (Figure~\ref{fig:defects}(d-f)), the net transmission continues to diminish.
We note that the total transmission extrapolated to zero thickness remains finite in films with single-atom, 2-atom, and 6-atom line-defects, indicative of the survival of surface-state conduction in films with sufficiently small disorder.
With the deep 12-atom and 20-atom defects, the transmission reduction levels off for the thinnest films, and the $RA$ product turns upward with decreasing thickness, as shown in Figure~\ref{fig:defects}(h).
This is similar to the resistivity scaling trend reported previously for CoSi films with high surface defect densities~\cite{lien2022unconventional}.
To further understand the above observations, we analyze the $\mathbf{k}$-resolved transmission for two representative cases of 24AL and 40AL slabs.
Figure~\ref{fig:k_resolved} shows the transmission plotted against $k_y$, the in-plane momentum normal to the transport direction.
Since the transmission for pristine films essentially represents the number of states at the Fermi level, the values are integers for any $k_y$.
As the thickness increases, we see an increase in the peak heights around $k_y\sim0$, $k_y\sim \pm0.45\pi/a$, and $k_y\sim \pm\pi/a$, corresponding to the increasing number of bulk states around those points in the Brillouin zone (see Figure~S6 in the SI).
In general, defects reduce the transmission, though by varying degree as noted in Figure~\ref{fig:defects}(g).
The 1-atom defect on the As-terminated surface negligibly changes the transmission for $k_y\in[-0.9\pi/a,-0.47\pi/a]$ and $k_y\in[0.47\pi/a,0.9\pi/a]$, because there are no As-terminated surface states in that region (Figure~S6).
Consequently, the localization of As-terminated surface states in the $\mathbf{k}$-space leads to the small overall change in transmission.
Most of the surface states at the Fermi level that contribute to conduction exist on the Nb-terminated surface, as shown previously in Figure~\ref{fig:conductance_contrib}.
Correspondingly, a defect on the Nb-terminated surface considerably reduces the transmission throughout $k_y$, since the Nb-terminated states extend throughout the projected 2D Brillouin Zone.
The 2-atom defect, removing one Nb and one As atom on each surface, reduces the transmission by roughly the sum of the previous two cases.
We find the transmission reduces further for every point along $k_y$ for 6-atom, 12-atom and 20-atom defects.
Using the net transmission calculated above, we plot the normalized $RA$ product in Figure~\ref{fig:defects}(h) for the various defect configurations.
Since the transmission $T$ continues to exhibit a roughly linear dependence on thickness $t$ for the cases with defects, we could employ a model similar to Equation~\ref{eqn:RA_fit} to fit the computed data.
Specifically, we write $G = G_0T(\varepsilon_F) = G_0 (mt+b)$.
Comparing to Equation~\ref{eqn:RA_fit}, we note that $\alpha = b/m$.
We find that the normalized $RA$ product for the first four defect types exhibits a trend similar to the pristine case, i.e., $(RA)\sub{slab}/(RA)\sub{bulk}$ decreases with decreasing thickness. Since the net transmission does not change significantly for the 1-atom defect on the As-terminated surface, its normalized $RA$ curve (blue) is very close to the defect-free case (black) in Figure~\ref{fig:defects}(h).
Increasing the size of the defects makes the $RA$ vs. $t$ curves flatter: the normalized $RA$ product becomes thickness independent and approaches 1 as the surface-state conduction gradually diminishes, manifested in the transmission intercept $b$ approaching zero.
Thus, in the limit of sufficiently strong surface disorder, the Weyl semimetal behaves more like a conventional metal in this respect.
For example, the 12-atom and 20-atom defect configurations begin to suppress the transmission of the bulk states besides significantly reducing surface conduction, which makes the $RA$ product of the slab greater than that of the ideal bulk.
Importantly, for very thin films, the 12-atom and 20-atom defect configurations are large enough to significantly perturb the electronic states in the region near the defect.
In our current approach, however, the tight-binding models are based on the ground state of the pristine films, and the couplings linked to the removed atom are deleted to mimic the defect.
Therefore, the calculated transmissions at 16 AL and below for these two defect configurations are likely to be less reliable than the remaining cases.
Although more accurate results can be obtained using self-consistent DFT and NEGF in QuantumATK~\cite{smidstrup2020quantumatk}, it can be computationally expensive and potentially prohibitive for large-thickness structures with spin-orbit coupling, as studied here.
Nevertheless, the qualitative trend discussed above that the transmission levels off in the ultra-thin film limit is also demonstrated by the fully self-consistent DFT with NEGF calculations using QuantumATK~\cite{smidstrup2020quantumatk} (Figure~S1).
Finally, we compare the conductance scaling of NbAs films with that of CoSi.
Both materials show decreasing $RA$ with reduced thickness in pristine films, owing to the dominance of surface conduction over bulk conduction at the nanometer scale.
However, CoSi is a chiral semimetal with forward- and backward-moving surface carriers from the Fermi-arc states of the same transverse momentum spatially separated on opposite surfaces of a CoSi thin film~\cite{chen2020topological}.
Consequently, line defects which preserve the transverse momentum cannot backscatter these states into each other and the transmission of the CoSi Fermi arc states is robust against such defects~\cite{chen2020topological}.
In contrast, the forward- and backward-moving surface carriers with the same transverse momentum coexist on both surfaces of NbAs, and thus can intermix (See Figure S8).
Therefore, transmission of the NbAs Fermi-arc states is much more susceptible to defect scattering.
As shown in Figure~\ref{fig:k_resolved}, a line-defect can reduce the transmission for all $\mathbf{k}$-points in the Brillouin zone, contrary to that in the CoSi films.
This explains the substantial reduction of total transmission in NbAs films with single-atom defects (see Figure~\ref{fig:defects}). Experimentally, the 2-3 orders of magnitude increase in resistivity observed in $\sim$600 nm diameter NbAs nanowires~\cite{xu2015discoveryNbAs} when compared to the micron-size wide nanobelts also demonstrates the sensitivity of the NbAs surface states to defect scattering at the boundaries.
Therefore, for materials with optimal disorder-tolerant surface-state conductivity, future work should explore chiral topological semimetals with multiple pairs of Weyl nodes.
\section{Conclusions}
In summary, we performed first-principles NEGF calculations to understand the mechanism of electron transport in thin films of a representative Weyl semimetal, NbAs.
The resistance-area $RA$ product in pristine NbAs films decreases with thickness at the nanometer scale, in contrast to a nearly constant $RA$ product in ideal Cu films.
This anomalous scaling is the manifestation of the numerous surface states in the bandstructure of NbAs. The surface states account for over 70$\%$ of the conductance for 2.1-nm-thick (relaxed 16 AL) films and $\sim$ 50$\%$ for 6.8-nm-thick (relaxed 48 AL) films; furthermore, contribution from the Nb-terminated surface states is almost 3 times that of the As-terminated-surface states.
The decreasing $RA$ with reducing dimensions persists even with surface defects, as long as the degree of disorder is moderate.
This contrasts with the ever-increasing $RA$ at reduced dimensions in conventional metals like Cu when disorder is present, and highlights the promise of the Weyl semimetal NbAs, and topological semimetals in general, for integrated circuits.
Finally, analyses of electron transmission in $\mathbf{k}$-space show that electron transport in NbAs is not immune to defect scattering because forward- and backward-moving states coexist on the same surface, in contrast to the protected chiral surface transport in CoSi thin films.
The comparison between the two material systems calls for the search for \emph{chiral} topological semimetals with large numbers of Fermi arcs for low-resistance nanoscale interconnects.
\section{Methods}
We use open-source plane-wave DFT software JDFTx~\cite{JDFTx} for the generation of self-consistent relaxed crystal structures, electron bandstructures and Wannier tight-binding models.
We use the fully-relativistic optimized norm-conserving Vanderbilt pseudopotentials (ONCVPSP)~\cite{hamann2013optimized} as distributed by the open-source PSEUDODOJO library~\cite{van2018pseudodojo} to include spin-orbit coupling self-consistently.
These DFT calculations are performed using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) to the exchange-correlation functional~\cite{PBE} at a plane-wave cutoff of 40 Hartrees and a charge density cutoff of 200 Hartrees.
For the first-principles study of NbAs slabs, we construct films of (001) orientation to allow direct comparison of our computed Fermi surfaces with the available ARPES results, which have been experimentally measured for the cleaved (001) surfaces~\cite{xu2015discovery,sun2015topological}.
These slabs have tetragonal unit cells, and are constructed with a vacuum spacing of $12\textup{~\AA}$ thickness along the $c-$direction, employing Coulomb truncation to eliminate long-range interactions between periodic images along this direction~\cite{sundararaman2013regularization, ismail2006truncation, rozzi2006exact}.
Cleaving the crystal along the (001) direction leads to two asymmetric surfaces with Nb and As terminations respectively (Figure~\ref{fig:intro}(a)), which produces an overall dipole moment in the unit cell.
Figure~S2 shows that the Coulomb truncation scheme accounts for this dipole correctly and produces zero electric field in the vacuum region away from both surfaces.
With the computational setup described above, we first perform an optimization of the ionic positions and lattice parameters of the body-centered tetragonal unit cell of bulk NbAs (space group $I4_1md$).
The initial crystal structure was obtained from the Materials Project database~\cite{Jain2013}.
The relaxation yields lattice constants of $a = b = 3.46 \textup{~\AA}$ and $c = 11.80 \textup{~\AA}$ which are within $\sim$ 1\% of the XRD measured values of $a = 3.45 \textup{~\AA}$ and $c = 11.68 \textup{~\AA}$ (Figure~\ref{fig:intro}(a))~\cite{xu2015discovery,boller1963transposition}.
Starting from a single-unit-cell thick slab, we then construct films with seven different thicknesses in steps of 1 unit cell.
Hence, the thickness of our films varies from 1 unit cell (8 atomic layers, AL) to 7 unit cells (56 AL).
Previous first-principles calculations for NbAs have found no noticeable change in the band structure and Fermi surfaces for slabs larger than 7 unit cells in thickness~\cite{sun2015topological}. The DFT calculations for the bulk and slabs are performed using \textbf{k}-point meshes of $8\times8\times2$ and $8\times8\times1$ respectively, and Fermi smearing with width 0.01~Hartrees.
Keeping the in-plane lattice constants of the slabs fixed ($a=b$), we optimize the ionic positions using self-consistent DFT for subsequent calculations of electronic bands, Fermi surface and electron transport properties.
We then construct a tight-binding model using a maximally-localized Wannier function basis set~\cite{WannMarzari} in JDFTx.
Figure~S3 shows the contribution of $s-$, $p-$ and $d-$orbitals of Nb and As atoms to each band for a 16AL slab.
The electron bands in the energy range $\pm 6.5$ eV around the Fermi level are mostly composed of the $d-$ and $p-$orbitals of Nb and As atoms respectively.
Hence, we choose a basis set of 10 $d-$orbitals per Nb atom and 6 $p-$orbitals per As atom in the unit cell as the initial guesses.
We construct maximally-localized Wannier functions for our \emph{ab initio} tight-binding model that reproduces the DFT bands in the energy window from $\sim 7.3$~eV below to $\sim 2.9$~eV above $\varepsilon_F$, as shown in Figure~S4.
To pinpoint the surface and bulk contributions to the band structure, Fermi surfaces and conductance in the Wannier basis, we compute the weight of each Wannier-interpolated electronic state in the surface regions.
Specifically, we define functions $w^X(z)$ for $X$ = Nb and As, which are 1 within the dashed rectangles shown in the bottom panel of Fig.~\ref{fig:conductance_contrib}, and 0 outside it.
We then compute the matrix elements $w^X_{\vec{k}ab} \equiv \int_\Omega \mathrm{d}\vec{r}\, \psi_{\vec{k}a}^\ast(\vec{r}) \bar{w}^X(z) \psi_{\vec{k}b}(\vec{r})$,
where $\bar{w}^X(z)$ is $w^X(z)$ smoothed by convolution with a Gaussian of width 1 bohr.
Finally, we interpolate $w^X_{\vec{k}ab}$ using the Wannier representation in exactly the same way as the Hamiltonian and momentum matrix elements described in detail elsewhere~\cite{habib2018hot, kumar2022plasmonic}.
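As a concrete sketch (our own minimal code, not the production implementation), the smoothed window $\bar{w}^X(z)$ can be built by convolving the boxcar with a 1-bohr Gaussian on a real-space grid; the bounding-box limits below are arbitrary illustration values.
\begin{verbatim}
import numpy as np

z  = np.linspace(0.0, 80.0, 2001)            # slab coordinate, bohr
dz = z[1] - z[0]
w  = ((z > 5.0) & (z < 15.0)).astype(float)  # boxcar over the bounding box

sigma = 1.0                                   # Gaussian width, 1 bohr
x = np.arange(-5.0*sigma, 5.0*sigma + dz, dz)
g = np.exp(-0.5 * (x / sigma)**2)
g /= g.sum()                                  # normalize the kernel
w_bar = np.convolve(w, g, mode="same")        # smoothed window wbar(z)
\end{verbatim}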
Using the tight-binding models created above, we employ the non-equilibrium Green's function (NEGF) method to compute the electron transport properties of the films~\cite{datta2005quantum}. For the slab of NbAs, we consider transport along the [100] direction and calculate the total transmission as
\begin{align}
T(E) = \int d k_y T(k_y,E)
\label{eqn:T_slab}
\end{align}
where $T(k_y,E)=\mathrm{Tr}(\Gamma_L G^R \Gamma_R G^A)$ is the $k_y$-resolved transmission and $k_y$ is the in-plane direction.
Here, $G^R(k_y,E)=[E+i\eta-H_{C,k_y}-\Sigma(k_y,E)]^{-1}$ is the retarded Green's function, $H_{C,k_y}$ is the tight-binding Hamiltonian of the channel, and $\Sigma(k_y,E)=\Sigma_L(k_y,E)+\Sigma_R(k_y,E)$ is the self-energy of the left $(L)$ and right $(R)$ contacts. $G^A(k_y,E)$ is the advanced Green's function, and $\Gamma_\alpha=i(\Sigma_\alpha-\Sigma_\alpha^\dagger)$ is the broadening due to contact $\alpha$ $(\alpha=L,R)$. The contact self-energies are computed numerically using the Sancho-Rubio method~\cite{sancho1984quick}. For the bulk of NbAs, the transmission can similarly be written as
\begin{align}
T(E) = \int d k_y d k_z T(k_y,k_z,E)
\label{eqn:T_bulk}
\end{align}
where $k_z$ is the out-of-plane direction for the bulk. We use $k$-point samplings of $400$ points along $k_y$ for the slab and $800\times800$ points in the $(k_y, k_z)$ plane for the bulk transport calculations.
The Hamiltonian of the channel $H_C$ is constructed from the slab tight-binding model.
In order to introduce surface defect configurations in the channel, we remove the orbitals of the deleted atoms entirely from the channel Hamiltonian $H_C$.
Figure~S5 shows a schematic view of the structure used in the NEGF calculation for a 24AL slab of NbAs with the 12-atom defect configuration.
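To make the formalism concrete, the sketch below evaluates the transmission for a toy one-orbital chain using an iterative Sancho-Rubio solver for the lead self-energies and the trace formula above. It is schematic only: variable names are ours, and a nearest-neighbor toy Hamiltonian replaces the Wannier model.
\begin{verbatim}
import numpy as np

def sancho_rubio(E, h00, h01, eta=1e-6, tol=1e-10, maxiter=200):
    # surface Green's function of a semi-infinite lead (Sancho-Rubio)
    I = np.eye(h00.shape[0], dtype=complex)
    eps_s = h00.astype(complex).copy()
    eps   = h00.astype(complex).copy()
    alpha = h01.astype(complex).copy()
    beta  = h01.conj().T.copy()
    for _ in range(maxiter):
        g = np.linalg.inv((E + 1j*eta) * I - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps   = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha, beta = alpha @ g @ alpha, beta @ g @ beta
        if np.abs(alpha).max() < tol:
            break
    return np.linalg.inv((E + 1j*eta) * I - eps_s)

def transmission(E, h00, h01, eta=1e-6):
    # T = Tr[Gamma_L G^R Gamma_R G^A] for a one-principal-layer channel
    I = np.eye(h00.shape[0], dtype=complex)
    gL = sancho_rubio(E, h00, h01.conj().T, eta)  # lead extending to -x
    gR = sancho_rubio(E, h00, h01, eta)           # lead extending to +x
    SigL = h01.conj().T @ gL @ h01
    SigR = h01 @ gR @ h01.conj().T
    GR = np.linalg.inv((E + 1j*eta) * I - h00 - SigL - SigR)
    GammaL = 1j * (SigL - SigL.conj().T)
    GammaR = 1j * (SigR - SigR.conj().T)
    return np.trace(GammaL @ GR @ GammaR @ GR.conj().T).real

# toy check: single-orbital chain with hopping -1 gives T ~ 1 at mid-band
h00 = np.zeros((1, 1)); h01 = -np.ones((1, 1))
print(transmission(0.0, h00, h01))
\end{verbatim}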
\section{Acknowledgements}
S.K. and R.S. acknowledge funding from the Semiconductor Research Corporation under Task No. 2966.002.
Calculations were carried out at the Center for Computational Innovations at Rensselaer Polytechnic Institute. The work at the National University of Singapore was supported by MOE-2017-T2-2-114, MOE-2019-T2-2-215, and FRC-A-8000194-01-00. T.-R.C. was supported by 2030 Cross-Generation Young Scholars Program from the National Science and Technology Council (NSTC) in Taiwan (MOST111-2628-M-006-003-MY3), National Cheng Kung University, Taiwan, and the National Center for Theoretical Sciences, Taiwan. We gratefully acknowledge the helpful discussions with Daniel Gall (RPI), Utkarsh Bajpai (IBM) and Vijay Narayanan (IBM).
\section{Introduction} \label{s:intro}
The population of observed supernovae (SNe) is growing swiftly as high-cadence surveys fill regions of observational phase space that were previously much less accessible.
Among the peculiar objects found is a class of rapidly fading supernovae (RFSNe) with peak luminosities ranging widely from sub-luminous to brighter than ``normal" SNe. Well-known single objects include SN\,2002bj\, \citep{poznanski10}, SN\,2010X\, \citep{kasliwal10}, and SN\,2015U\, \citep{shivvers16}, but studies of the larger population have also emerged \citep[e.g.,][]{drout14, arcavi16}. The progenitor systems and explosion mechanisms of these events remain in dispute.
RFSNe exist in what is currently the shortest-timescale region of optical observational parameter space, with rise and decline times lasting days to weeks.
If these transients are interpreted as powered by centrally concentrated radioactive $^{56}\rm{Ni}$, the total ejected mass must be small
($\sim 0.1~\rm{M}_\odot$, assuming a constant opacity) so as to produce a short effective diffusion time.
Several theoretical models may produce such ejecta, for example the thermonuclear detonation of a helium shell atop a white dwarf \citep[a ``point Ia supernova",][]{bildsten07, shen10}, the explosion of a highly stripped massive star \citep{tauris15}, or a core collapse supernova experiencing heavy fallback \citep{moriya10}.
However, low-mass $^{56}\rm{Ni}$ powered models likely cannot explain many of the RFSNe. The light curves of many observed events show no noticeable late-time ``tail" indicating a continuing input of decay energy (although incomplete trapping of the radioactive $\gamma$-rays could perhaps explain this behavior). Moreover, some objects, such as SN\,2002bj\, and SN\,2015U\,, are so bright that simple analytic estimates lead to the unphysical inference that the mass of $^{56}\rm{Ni}$ must be larger than the total ejecta mass.
For such reasons, \citet{drout14} conclude that many of the RFSNe are likely powered by shock energy rather than radioactivity.
Previous modeling by \cite{kleiser14} has shown that some RFSNe like SN2010X could be explained by the explosion of a hydrogen-poor star with a relatively large radius ($\sim 20~R_\odot$). The ejected mass of radioactive isotopes was assumed to be small, such that the luminosity was powered by diffusion of the shock-deposited energy. The model light curves declined rapidly due to recombination in the cooling ejecta (composed of helium or carbon/oxygen), which reduced the opacity and led to a rapid depletion of the thermal energy, similar to the end of the plateau in
Type~IIP SNe. Dim transients of this sort had been studied in the SN Ib models of \citet{dessart11}.
To produce a bright RFSN from shock cooling requires a progenitor star with a radius much greater than the few ${\rm R}_\odot$ found in
stellar evolution models of hydrogen-stripped stars \cite{crowther07}.
\cite{kleiser14} suggested that the effective presupernova stellar radius may be increased by envelope inflation or mass loss
just prior to explosion. Strong mass-loss episodes could arise due to binary interaction \citep{chevalier12} or dynamics driven by nuclear burning \citep{quataert12, smith16}. Indeed, the spectra of Type Ibn SNe \citep[e.g.][and citations therein]{pastorello15,pastorello16} and of SN\,2015U\,\ provide direct evidence for a hydrogen-poor circumstellar medium (CSM) around some massive star explosions.
In this paper, we pursue the shock cooling model for RFSN by carrying out a parameter study of the dynamics and shock cooling light curves of supernova exploding into an extended, hydrogen poor CSM.
In \S \ref{s:analytics}, we provide simple analytic scalings for how the interaction dynamics and resulting light curve should depend on physical parameters such as the mass and radius of the CSM shell. In \S \ref{s:methods}, we describe a pipeline to model the 1D hydrodynamics of the interaction and the subsequent light curves. In \S \ref{s:results}, we show how different shell parameters affect the dynamics and the possibility of fallback. We present light curves for nickel-free and nickel-rich ejecta profiles, and we explore how Rayleigh-Taylor mixing may affect the results. Finally, \S \ref{s:discussion} contains discussion of our results and their implications for our understanding of RFSNe and the possible outcomes of stellar evolution that could produce such peculiar objects.
\section{Analytics} \label{s:analytics}
We first present simple analytic scalings that can be used to estimate the properties of interacting SNe. As an idealized model, we consider the case of homologously
expanding SN ejecta running into a stationary CSM shell or wind. Although the interaction with the CSM will generally occur before the stellar ejecta has had time to establish homology, our hydrodynamical models (see \S \ref{s:results}) indicate that the post-shock velocity structure of the exploded star is approximately linear in radius. We therefore assume the ejecta velocity at radius $r$ and time $t$ is $v = r/t$ and describe the ejecta structure with a broken power law profile \citep{chevalier89} in which the density in the outer layers (above a transition velocity $v_t$) is
\begin{equation}
\rho_{\rm ej} \propto \frac{M_{\rm ej}}{v_t^3 t^3} \biggl(\frac{r}{v_t t}\biggr)^{-n}\,\,,
\end{equation}
where $v_t \propto (E_{\rm exp}/M_{\rm ej})^{1/2}$, and $M_{\rm ej}$ is the ejecta mass and $E_{\rm exp}$ the energy of the explosion.
Interaction with the (nearly) stationary CSM will decelerate the ejecta and convert its kinetic energy into thermal energy. By conservation of momentum, the mass of ejecta that can be significantly decelerated in the interaction is of order the total mass of the CSM.
For the power-law density profile, the ejecta mass
above some velocity coordinate $v_0 > v_t$ is
\begin{equation}
M(v_0) = \int_{v_0}^{\infty} 4 \pi r^2 \rho_{\rm ej}(r) dr \propto \frac{4 \pi }{n-3} M_{\rm ej} \left(\frac{ v_0}{v_t} \right)^{3-n}\,\,,
\end{equation}
which assumes $n > 3$. Setting $M(v_0) \sim M_{\rm CSM}$ (where $M_{\rm CSM}$ is the total CSM mass) implies
that the velocity coordinate above which the ejecta is slowed by the interaction is
\begin{equation*}
v_0 \propto v_t \left( \frac{M_{\rm ej}}{M_{\rm CSM}} \right)^\frac{1}{n-3}\,\,.
\end{equation*}
The ejecta kinetic energy contained in the layers above $v_0$ is
\begin{eqnarray}
{\rm KE}(v_0) &=& \int_{v_0}^{\infty} \frac{1}{2} \rho_{\rm ej} v^2 4 \pi r^2 dr
\propto
M_{\rm ej} v_t^2 \left( \frac{v_0}{v_t} \right)^{5-n}
\end{eqnarray}
which suggests that the energy thermalized in the interaction should scale as
\begin{equation}
E_{\rm th,0} \propto {\rm KE}(v_0) \propto M_{\rm ej}v_t^2\biggl( \frac{M_{\rm CSM}}{M_{\rm ej}} \biggr)^\frac{n-5}{n-3}\,\,.
\label{eq:Eth0}
\end{equation}
For $n=8$, for example, the energy thermalized scales as $(M_{\rm CSM}/M_{\rm ej})^{3/5}$.
The thermalization of the ejecta kinetic energy will occur over the timescale for the ejecta to accelerate the CSM.
To estimate the interaction timescale we follow the self-similar arguments of \citet{chevalier92} and assume that the CSM has a power-law density structure of the form
\begin{equation}
\rho_{\rm CSM}(r) \propto \frac{M_{\rm CSM}}{R_{\rm CSM}^3} \biggl(\frac{r}{R_{\rm CSM}}\biggr)^{-s}\,\,,
\end{equation}
where $R_{\rm CSM}$ is the outer radius of CSM and $s < 3$.
In a self-similar interaction, the ejecta and CSM densities maintain a constant ratio at the contact discontinuity,
$\rho_{\rm ej}(r_c)/\rho_{\rm CSM}(r_c) = C$, with $C$ a constant.
This implies that $r_c$, the radius of the contact discontinuity between the ejecta and CSM, evolves as \citep{chevalier92}
\begin{equation}
r_c(t) = t^{\frac{n-3}{n-s}} \biggl[ \frac{M_{\rm ej}}{M_{\rm CSM}} \frac{R_{\rm CSM}^{3-s}}{ C v_t^{3-n}}\biggr]^{\frac{1}{n-s}}\,\,.
\end{equation}
Setting $r_c(t) \approx R_{\rm CSM}$ gives an estimate of the time $t_{\rm bo}$ at which the forward shock from the interaction will break out of the CSM \citep{harris16}:
\begin{equation}
t_{\rm bo} \approx \frac{R_{\rm CSM}}{v_t} \biggl(\frac{C M_{\rm CSM}}{M_{\rm ej}}\biggr)^{\frac{1}{n-3}}\,\,.
\end{equation}
The total amount of ejecta kinetic energy thermalized will rise until $t \approx t_{\rm bo}$, then decline as the interaction abates and the system adiabatically expands.
Because the pressure is radiation dominated (adiabatic index $\gamma=4/3$), the thermal energy after expansion to a radius $R(t)$ is
\begin{equation}
E_{\rm th}(t) = E_{\rm th,0}\frac{R_{\rm CSM}}{R(t)} \propto E_{\rm th,0} \left( \frac{t_{\rm bo}}{t} \right)\,\,,
\end{equation}
where $R(t)$ is the radius of the expanding, post-interaction ejecta, and the last equation assumes homologous expansion, $R(t) \sim t$, following the breakout. The thermal energy at time $t$ is then
\begin{equation}
E_{\rm th}(t) \propto R_{\rm CSM} M_{\rm ej}^{1/2}E_{\rm exp}^{1/2}\biggl( \frac{M_{\rm CSM}}{M_{\rm ej}} \biggr)^\frac{n-4}{n-3} t^{-1}.
\end{equation}
For the case of $n = 8$, for example, which will approximate the post-shock density structure of our hydrodynamical models, we have
\begin{equation}
E_{\rm th}(t) \propto R_{\rm CSM} E_{\rm exp}^{1/2}M_{\rm CSM}^{4/5} M_{\rm ej}^{-4/5} t^{-1}.
\label{eq:Etht}
\end{equation}
We will show using hydrodynamical models in \S\ref{s:param} that Equation~\ref{eq:Etht} accurately predicts how the thermal energy content depends on the CSM and ejecta properties. The derivation assumes $M_{\rm CSM} \lesssim M_{\rm ej}$.
The light curves arising from the interaction are the result of the diffusion of thermal radiation from the shocked region. The opacity $\kappa$ is usually dominated by electron scattering and is constant in ionized regions, but will drop sharply to near zero once
the temperature drops below the recombination temperature $T_I$. Scaling
relations for the duration and peak luminosity of thermal supernovae, including the effects of recombination, have been determined by \citet{popov93} and verified numerically by \citet{kasen09}:
\begin{equation}
t_\mathrm{sn} \propto E_{\rm th,0}^{-1/6}M_{\rm diff}^{1/2}R_0^{1/6}\kappa^{1/6}T_I^{-2/3},
\label{eq:t_sn_1}
\end{equation}
\begin{equation}
L_\mathrm{sn} \propto E_{\rm th,0}^{5/6}M_{\rm diff}^{-1/2} R_{0}^{2/3} \kappa^{-1/3}T_I^{4/3}\,\,,
\label{eq:L_sn_1}
\end{equation}
where $M_{\rm diff}$ is the effective amount of mass the photons must diffuse through. We take this to be some combination of $M_{\rm ej}$ and $M_{\rm CSM}$, depending on the distribution of thermal energy among the relative masses. Taking $R_0 = R_{\rm CSM}$ and using our Equation~\ref{eq:Eth0} for $E_{\rm th,0}$ gives
\begin{equation}
t_\mathrm{sn} \propto E_{\rm exp}^{-1/6}
\biggl(\frac{M_{\rm CSM}}{M_{\rm ej}}\biggr)^{\frac{-(n-5)}{6(n-3)}}
M_{\rm diff}^{1/2}
R_{\rm CSM}^{1/6}
\kappa^{1/6}T_I^{-2/3}\,\,,
\label{eq:t_sn}
\end{equation}
\begin{equation}
L_\mathrm{sn} \propto E_{\rm exp}^{5/6}
\biggl(\frac{M_{\rm CSM}}{M_{\rm ej}}\biggr)^{\frac{5(n-5)}{6(n-3)}}
M_{\rm diff}^{-1/2}
R_{\rm CSM}^{2/3}
\kappa^{-1/3}T_I^{4/3}\,\,.
\label{eq:L_sn}
\end{equation}
For the purposes of easy comparison to numerical data, we would like to devise simple power laws to describe the dependency of $L_{\rm sn}$ and $t_{\rm sn}$ on the parameters. This is complicated by the $M_{\rm diff}$ factor, but there are limits we can consider. First it is necessary to recognize that the masses change the light curve in two opposing ways: increasing $\frac{M_{\rm CSM}}{M_{\rm ej}}$ increases the amount of available thermal energy to power the light curve, which would increase the peak luminosity and decrease the timescale, according to Equations \ref{eq:t_sn_1} and \ref{eq:L_sn_1}. Meanwhile, the diffusion mass $M_{\rm diff}$ also slows the diffusion of photons out of the ejecta more as it increases, lowering the peak luminosity and increasing the timescale.
In the cases presented here, we hold $M_{\rm ej}$ fixed. One limit is to imagine that the circumstellar mass is small compared to the ejecta mass, so the dependence on $M_{\rm diff}$ goes away. Then the equations become
\begin{equation}
t_\mathrm{sn} \propto E_{\rm exp}^{-1/6}
M_{\rm CSM}^{\frac{-(n-5)}{6(n-3)}}
R_{\rm CSM}^{1/6}
\kappa^{1/6}T_I^{-2/3}\,\,,
\label{eq:t_sn_a}
\end{equation}
\begin{equation}
L_\mathrm{sn} \propto E_{\rm exp}^{5/6}
M_{\rm CSM}^{\frac{5(n-5)}{6(n-3)}}
R_{\rm CSM}^{2/3}
\kappa^{-1/3}T_I^{4/3}\,\,.
\label{eq:L_sn_a}
\end{equation}
In the case of $n = 8$, we then have $t_{\rm sn} \propto M_{\rm CSM}^{-1/10}$ and $L_{\rm sn} \propto M_{\rm CSM}^{1/2}$. If $n=6$, $t_{\rm sn} \propto M_{\rm CSM}^{-1/18}$ and $L_{\rm sn} \propto M_{\rm CSM}^{5/18}$.
This limit essentially assumes the increase in circumstellar mass does not contribute significantly to inhibiting the travel of photons out of the ejecta. Alternatively, we can imagine that the CSM makes up the bulk of the mass available, or that the total mass scales roughly as the CSM mass. In this case, $M_{\rm diff} \propto M_{\rm CSM}$, so
\begin{equation}
t_\mathrm{sn} \propto E_{\rm exp}^{-1/6}
M_{\rm CSM}^{\frac{-(n-5)}{6(n-3)} + \frac{1}{2}}
R_{\rm CSM}^{1/6}
\kappa^{1/6}T_I^{-2/3}\,\,,
\label{eq:t_sn_b}
\end{equation}
\begin{equation}
L_\mathrm{sn} \propto E_{\rm exp}^{5/6}
M_{\rm CSM}^{\frac{5(n-5)}{6(n-3)} - \frac{1}{2}}
R_{\rm CSM}^{2/3}
\kappa^{-1/3}T_I^{4/3}\,\,.
\label{eq:L_sn_b}
\end{equation}
For $n=8$, $t_{\rm sn} \propto M_{\rm CSM}^{2/5}$ and $L_{\rm sn} \propto M_{\rm CSM}^{0}$. For $n=6$, $t_{\rm sn} \propto M_{\rm CSM}^{4/9}$ and $L_{\rm sn} \propto M_{\rm CSM}^{-2/9}$. We will find in \S \ref{s:results} that this last case with $n=6$ appears to fit our numerical results for the light curves most closely.
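The $M_{\rm CSM}$ exponents quoted above follow from simple arithmetic on Equations~\ref{eq:t_sn_a}--\ref{eq:L_sn_b}; a short script to tabulate them for arbitrary $n$ (a sketch for checking the algebra, nothing more) is:
\begin{verbatim}
from fractions import Fraction

def exponents(n, csm_dominated):
    # power of M_CSM in t_sn and L_sn for the two M_diff limits
    t_exp = Fraction(-(n - 5), 6 * (n - 3))
    L_exp = Fraction(5 * (n - 5), 6 * (n - 3))
    if csm_dominated:                 # M_diff ~ M_CSM limit
        return t_exp + Fraction(1, 2), L_exp - Fraction(1, 2)
    return t_exp, L_exp               # M_CSM << M_ej limit

for n in (6, 8):
    for lim in (False, True):
        t_exp, L_exp = exponents(n, lim)
        print(f"n={n}, CSM-dominated={lim}: "
              f"t_sn ~ M_CSM^{t_exp}, L_sn ~ M_CSM^{L_exp}")
\end{verbatim}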
\section{Methods } \label{s:methods}
We adopt a spherically symmetric framework to model the light curves of hydrogen-poor stars exploding into an extended CSM. We use the MESA stellar evolution code to model massive stars that have lost their hydrogen envelopes due to heavy mass loss. At the point of core collapse, we add to the
MESA models a parameterized external shell or wind of mass $M_{\rm CSM}$. We map this progenitor
structure into a 1D hydrodynamics code and explode it by depositing a central bomb of thermal energy. Once the ejecta have neared homologous
expansion, the structure is fed into the
SEDONA radiation transport code to calculate time-dependent light curves and spectra.
\subsection{Progenitor Star Models}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{multi_shell_profile.pdf}
\end{center}
\caption{Density profile for an example star + shell model. The same stripped MESA star model is used throughout this paper, and different toy shells are constructed around it. The original stellar profile is shown in orange. Blue-green colors show various shell profiles. Two of the shells shown here are Gaussian profiles modulated by $r^{-2}$, corresponding to a Gaussian $\dot{M}$ ejected at constant wind velocity (see Equation \ref{eq:gauss_shell}), with different values of $\tau$. The third is simply a density profile $\propto r^{-2}$, corresponding to a steady wind prior to explosion; this is essentially the case of infinite $\tau$. Final models are shown in black, with a smooth transition between stellar and shell densities. All shells in this plot contain the same total mass. \label{f:shell_profile}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{composition.pdf}
\end{center}
\caption{Composition plot for an example star + shell model. The iron core has already been removed by cutting out the mass interior to the point where $^{56}{\rm Fe}$ drops below 10\% of the composition. The star used for all runs is the same, and the shell is assumed to have the same abundances as the outermost layer of the star. In this case, all shells are strongly dominated by $^4 {\rm He}$. The dotted black line indicates where the star ends and the shell begins.\label{f:comp}}
\end{figure}
We use MESA version~7184 to produce a hydrogen-stripped stellar model using a simple artificial mass loss prescription. The prescription is meant to approximate Case B mass transfer to a binary companion, which should be common among the massive progenitors of Type Ibc SNe \citep[see][]{sana12,smith11}. We use a zero-age main sequence (ZAMS) mass of 20 $\rm{M}_\odot$ and evolve the star through hydrogen burning until the surface temperature reaches $T_{\rm eff} = 5000~{\rm K}$, indicating that the radius has expanded significantly. We then initiate a constant mass loss at $\dot{M} = 10^{-3}~\rm{M}_\odot~{\rm yr}^{-1}$ until a desired final mass is reached, in the present case 5 $\rm{M}_\odot$. This mass loss history qualitatively resembles the more detailed Roche lobe overflow calculations in \citet{yoon10}. Therefore, even though the mass loss prescription is simple, it is similar to the natural loss of a large amount of mass (in this case the entire hydrogen envelope) expected in some systems by Roche lobe overflow. Other or more complex mass loss histories may yield different final stellar structures.
The MESA model is evolved to the point of iron core collapse. Before exploding the model, we first cut out the remnant based on the point at which $^{56}{\rm Fe}$ drops below 10\% going outward---in our case, the remnant mass is $1.395~\rm{M}_\odot$. We then insert an ad-hoc distribution of extended CSM, which
is meant to mock up a heavy mass loss episode in the final days before explosion. We assume that the CSM mass was lost at a constant velocity,
$v_{\rm CSM} \ll v_{\rm ej}$ with a rate $\dot{M}$ that was Gaussian in time. This leads to a CSM density profile
\begin{equation}
\rho_{\rm CSM} (r) =
\frac{M_{\rm CSM}}{ 4 \pi r^2 \Delta r \sqrt{2 \pi} } \exp \biggl[ \frac{-(r - r_{\rm mid})^2}{2 \Delta r^2}\biggr]\,\,,
\label{eq:gauss_shell}
\end{equation}
where $r_{\rm mid}$ and $\Delta r$ are free parameters specifying, respectively, the peak and the width of the Gaussian.
For a constant mass rate and wind velocity, $\Delta r = v_{\rm CSM} \tau$ where $\tau$ is the standard deviation of the Gaussian and can be used as a measure of the duration of the
mass loss episode. For large values of $\tau$, the CSM resembles that of a constant $\dot{M}$ wind with a
$1/r^2$ density profile. We chose here $v_{\rm CSM} = 100~{\rm km~s^{-1}}$. While the value of $v_{\rm CSM}$ would be interesting in the context of understanding the nature and mechanism of the mass loss, here the actual quantity is of little consequence for the light curves and spectra since the velocity of the ejecta is so much greater.
Figure \ref{f:shell_profile} shows the density profile of the progenitor star model with a few
different distributions of CSM. Figure \ref{f:comp} shows the composition of a progenitor model. We assume that the CSM
composition is homogenous and equal to that at the surface of the stellar model, which is helium-dominated.
Our parameterized progenitor configuration is artificial in that the progenitor star structure is not self-consistently altered to compensate for the presumed final episodes of mass loss.
In addition, in some models we rescale the mass of the progenitor star by simply dividing the density profile everywhere by a constant.
The assumption is that the density profile of our MESA progenitor star provides a reasonable representation of presupernova stars of other masses. In the present context, a simplified approach is not unreasonable in that we will explode the star with a 1D thermal bomb, and the detailed internal structure of the star will be largely washed out by the blastwave. What is most important to the light curve is the structure of the CSM, which in the present case is parameterized in a simplified way that allows us to easily control the physical characteristics. Future studies using more realistic CSM structures and progenitors are clearly warranted.
\subsection{Hydrodynamical Explosion Simulations}
For modeling the explosion of the star, we use a 1D staggered moving-mesh hydrodynamical code and a gamma-law equation of state with $\gamma = 4/3$, as the SN shock is radiation-pressure dominated. We do not compute the complex mechanism of the explosion itself but instead deposit a
chosen amount of thermal energy $E_{\rm exp}$ at the center of the stellar model to create a thermal bomb. We evolve the explosion until the ejecta profile is roughly homologous, i.e. $r \sim v t$ for all zones. This method has the advantage of speed but is limited to cases in which the CSM radius is small enough that radiative diffusion is not important before homology is reached.
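As an illustration of the homology criterion, a check of the following form (a hypothetical sketch, not an excerpt from our code) can be applied to the zone radii and velocities at each output time:
\begin{verbatim}
import numpy as np

def is_homologous(r, v, t, tol=0.05):
    """True if the ejecta satisfy r ~ v*t to within fraction `tol`.

    r, v : arrays of zone radii [cm] and velocities [cm/s]
    t    : time since the thermal bomb was deposited [s]
    """
    mask = v > 0.0               # skip marginally bound inner zones
    dev = np.abs(r[mask] / (v[mask] * t) - 1.0)
    return dev.max() < tol
\end{verbatim}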
In the hydrodynamical calculation, some inner zones may remain bound and fall back toward the remnant. In order to capture this, we use the following criteria to determine if the innermost zone should be ``accreted'' and removed from the calculation: 1) the zone has negative velocity; and 2) the gravitational potential energy of the zone exceeds the kinetic and thermal energy of the zone combined by a factor of $1 + \epsilon$, where we typically take $\epsilon$ to be $\sim 0.2$.
Sometimes an innermost zone will also be removed if its density is some factor $\eta$ larger than the density of the next zone, where $\eta$ is typically $\sim 100$. The density criterion is used because sometimes a zone that is considered unbound by the prior criteria will nevertheless remain spatially small, which imposes a very small time step on the calculation without significantly affecting the results.
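The accretion criteria can be summarized by a small decision routine of the following form (a schematic sketch with hypothetical zone fields, not our production code):
\begin{verbatim}
def should_accrete(zone, next_zone, eps=0.2, eta=100.0):
    """Decide whether the innermost zone is removed onto the remnant.

    `zone` and `next_zone` are dicts holding the zone velocity 'v',
    the magnitude of its gravitational potential energy 'egrav', its
    kinetic and thermal energies 'ekin' and 'etherm', and its density
    'rho'; eps and eta are the thresholds quoted in the text.
    """
    # Criteria 1 and 2: infalling and bound by a margin of (1 + eps).
    bound = (zone['v'] < 0.0 and
             zone['egrav'] > (1.0 + eps) * (zone['ekin'] + zone['etherm']))
    # Density criterion: a spatially tiny zone would choke the time step.
    too_dense = zone['rho'] > eta * next_zone['rho']
    return bound or too_dense
\end{verbatim}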
\subsection{Radiative Transfer Calculations}
Once our exploded profiles are close to homology, we map the final ejecta structure into SEDONA, a time-dependent Monte Carlo radiation transport code that takes into account the composition, density, and temperature-dependent opacities \citep{kasen06}. We run the code with the assumption of local thermodynamic equilibrium (LTE), which should be reasonable
for approximating the phases of the light curve after which interaction with the CSM has taken place, but before the ejecta have become optically thin.
For the models in which we include $^{56}\rm{Ni}$ in the ejecta, we assume the nickel mass fraction $X_{\rm ni}$ profile follows
\begin{equation}
X_{\rm ni} = \frac{1}{2}\biggl( \tanh \biggl[\frac{-(r - r_{\rm ni} )}{s \,dr}\biggr] + 1\biggr)\,\,,
\label{eq:ni_dist}
\end{equation}
where $dr$ is the width of each zone. This equation essentially produces a smoothed step function where $s$ controls the amount of smoothing and the quantity $r_{\rm ni}$ is the shift required, given $s$, to make the total mass of nickel present match a user-specified $M_{\rm ni}$. In this paper, every SEDONA run has the same number of equally spaced radial zones ($N=200$), so $s\,dr$ represents the spatial extent of the smearing and is a fraction of the radial extent of the ejecta controlled by $s$.
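Operationally, $r_{\rm ni}$ can be found by root-finding on the integrated nickel mass. The sketch below (an illustration assuming $M_{\rm ni}$ is smaller than the total ejecta mass; the SciPy root-finder is one possible choice) shows the procedure:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def nickel_profile(r, dm, M_ni, s):
    """X_ni(r) from Eq. (ni_dist), with r_ni set so sum(X_ni*dm) = M_ni.

    r  : equally spaced zone radii (N = 200 in our SEDONA runs)
    dm : zone masses; M_ni must be smaller than dm.sum()
    s  : smearing parameter (smoothing length in zone widths)
    """
    dr = r[1] - r[0]
    x_ni = lambda r_ni: 0.5 * (np.tanh(-(r - r_ni) / (s * dr)) + 1.0)
    mismatch = lambda r_ni: np.sum(x_ni(r_ni) * dm) - M_ni
    # X_ni -> 0 everywhere at the lower bracket and -> 1 at the upper.
    r_ni = brentq(mismatch, r[0] - 10 * s * dr, r[-1] + 10 * s * dr)
    return x_ni(r_ni), r_ni
\end{verbatim}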
\section{Results } \label{s:results}
\subsection{Dynamics of Interaction \label{s:hydro}}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{vel_profiles.pdf}
\end{center}
\caption{Velocity profiles at various times for two hydrodynamical calculations. Each profile corresponds to roughly a doubling in time, i.e. $\sim2~{\rm s}$, $\sim4~{\rm s}$, $\sim8~{\rm s}$, and so forth. Top panel: explosion of a 5 $\rm{M}_\odot$ progenitor star ($\sim 3.4~\rm{M}_\odot$ once the iron core is removed) with no CSM added. Bottom panel: explosion of the same star with a 3 $\rm{M}_\odot$ CSM. The addition of the CSM slows down the forward shock, producing a reverse shock moving toward the center. \label{f:vel_profiles}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{E_Mbig.pdf}
\end{center}
\caption{Evolution of the total kinetic and thermal energy in the explosion of a $5~\rm{M}_\odot$ star with $3~\rm{M}_\odot$ of CSM (red lines). For comparison, a model with
no CSM is also shown (black lines).
A central thermal bomb is input to give an initial thermal energy just above $2~{\rm B}$, resulting in a final kinetic energy of $1~{\rm B}$ once the gravitational potential has been overcome.
At the earliest times ($t \lesssim 10^2$~s), thermal energy is converted to kinetic energy as the star explodes. The interaction with the CSM begins at times
$t \gtrsim 10^2$~s and converts kinetic energy back into thermal energy. At a time near $10^4$~s, the forward shock breaks out of the CSM. Thereafter the thermal energy declines, closely following the $t^{-1}$ scaling of adiabatic homologous expansion. \label{f:E_Mbig}}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{E_fourpanel.pdf}
\end{center}
\caption{Thermal energy evolution for models with different physical parameters. The panels show the effect of varying the CSM mass (top left), CSM radius (top right), CSM thickness (bottom left, note both $r_{\rm mid}$ and $\tau$ are varied proportionally to one another to produce self-similar solutions), and the explosion energy (bottom right). \label{f:E_fourpanel}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{fintherm_vary.pdf}
\end{center}
\caption{Final thermal energy at $t_{\rm end} = 10^5~{\rm s}$ for each simulation presented in Figure \ref{f:E_fourpanel}. The power-law fits to our numerical data are listed in the figure, and solid gray lines show the fits to the data. Solid magenta lines show our analytical power laws for comparison. The fitted exponents correspond well to our analytical scalings in Equation~\ref{eq:Etht} of \S \ref{s:analytics}. \label{f:fintherm_vary}}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{prof_vary_M.pdf}
\end{center}
\caption{Final density and energy density profiles for the explosion of a $5~\rm{M}_\odot$ star with different CSM masses. Most of the thermal energy is contained between the reverse shock and
the star/CSM contact discontinuity. The thermal energy is greater for models with larger CSM masses, and both the density and energy density are concentrated farther inward in mass coordinate. \label{f:prof_vary_M}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{prof_vary_tau.pdf}
\end{center}
\caption{Same as Figure \ref{f:prof_vary_M} but for models varying the $\tau$ parameter that sets the CSM thickness. While the CSM thickness does not greatly affect the total thermal energy, it does affect the final distribution of the thermal energy and the location of the reverse shock. \label{f:prof_vary_tau}}
\end{figure}
We present here a study of hydrodynamical simulations of the explosion of the described progenitor star plus CSM configuration. Figure \ref{f:vel_profiles} compares the velocity evolution of a model with no CSM to one with a $3~\rm{M}_\odot$ CSM shell. In both models, a strong shock initially propagates outward through the star, reaching the surface (at mass coordinate $3.4~\rm{M}_\odot$) at a time $t \approx 10^2$ s. In the model with no CSM, the shock breaks out and accelerates the surface layers of the star to high velocity. In the model with a CSM shell, the interaction produces a reverse shock and a forward shock, the latter of which breaks out of the CSM shell some time later ($t \approx 10^4$ s). The reverse shock weakens after the forward shock breakout due to the pressure release and stalls before reaching the ejecta center.
Figure~\ref{f:E_Mbig} shows the temporal exchange of kinetic and thermal energy in a model with a total kinetic energy at infinity of 1~B. The thermal energy declines over the initial $\sim 300$ seconds as the shock travels through the star, overcoming the gravitational binding energy and imparting kinetic energy to the stellar material. In the absence of a CSM shell, Figure~\ref{f:E_Mbig} shows that the thermal energy continues to decline to late times due to expansion loss. In the presence of a CSM shell, however, the outer layers of stellar ejecta impact the shell at $\sim 300$~s and shocks begin to convert kinetic energy back into thermal energy again. The thermal energy content peaks around $5\times 10^3$ seconds, which occurs shortly before the breakout of the forward shock from the CSM. Thereafter, the thermal energy declines again as $1/t$, as expected from $p\,dV$ losses.
\subsubsection{Parameter Study \label{s:param}}
Figure~\ref{f:E_fourpanel} shows how the thermal energy evolution depends on the ejecta and CSM parameters. The end result is quantified further in
Figure~\ref{f:fintherm_vary}, which shows the thermal energy content $E_{\rm th}(t_{\rm end})$ found at a final reference time $t_{\rm end} = 10^5~{\rm s}$.
The general trends noted are: 1) $E_{\rm th}(t_{\rm end})$ increases with explosion energy, due to the larger available energy budget; 2) $E_{\rm th}(t_{\rm end})$ increases with
shell mass, due to a larger deceleration and hence thermalization of the ejecta kinetic energy; 3) $E_{\rm th}(t_{\rm end})$ increases with shell radius, as a later onset of interaction leads to smaller expansion losses by $t_{\rm end}$. Figure~\ref{f:fintherm_vary} demonstrates that the scaling with these three parameters closely follows the analytic scalings of \S~\ref{s:analytics}. The analytics did not take into account the shell width, and Figure~\ref{f:E_fourpanel} shows that it has a relatively small impact on the final thermal energy content.
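For reference, the power-law exponents quoted in Figure \ref{f:fintherm_vary} come from least-squares fits in log-log space, of the kind sketched below (a generic snippet with hypothetical input arrays):
\begin{verbatim}
import numpy as np

def powerlaw_fit(x, y):
    """Fit y = A * x**k by linear least squares in log-log space."""
    k, logA = np.polyfit(np.log10(x), np.log10(y), 1)
    return 10.0**logA, k

# e.g. powerlaw_fit(M_csm_grid, E_th_grid) gives (A, k) with E_th ~ M^k
\end{verbatim}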
The radial density and energy density distributions of our exploded models at $t_{\rm end}$ are shown in Figures \ref{f:prof_vary_M} and \ref{f:prof_vary_tau}. The density profiles show two sharp features, one at the location where the inward propagating reverse shock stalled, and one at the location of the contact discontinuity between the star and CSM. The energy density has a smoother radial distribution.
Figure \ref{f:prof_vary_tau} shows that, even though the shell width does not impact the total thermal energy content, it does affect the radial distribution, with more extended shells leading to more central concentration of mass and energy. This will have some effect on the shape of the resulting light curve.
\subsubsection{Fallback }\label{s:fallback}
For models with strong interaction, the
reverse shock may reach the center of the ejecta and induce fallback onto the remnant
\citep[e.g.,][]{chevalier89b}. Alternatively, low explosion energies could also allow larger amounts of mass to remain bound to the remnant.
It is interesting to speculate whether this fallback could provide a mechanism to explain the apparently low $^{56}\rm{Ni}$ masses inferred for some RFSNe, as $^{56}\rm{Ni}$ is synthesized in the innermost layers of the star.
Following previous work on SN fallback \citep[see e.g.][]{mcfadyen01, zhang08}, we explore here the amount of material which may remain bound to the central remnant following the explosion.
Figure~\ref{f:fallback} shows the amount of fallback for models with $3~\rm{M}_\odot$ of CSM and various explosion energies. For models with $E = 1$~B the fallback mass is small ($\lesssim 0.01~\rm{M}_\odot$).
This is because the reverse shock stalls before reaching the ejecta center. A CSM mass of $M_{\rm CSM} \gtrsim M_{\rm ej}$ is needed for the reverse shock
to approach the center in a $E = 1$~B explosion (see Figure~\ref{f:prof_vary_M}).
For low explosion energies ($E \lesssim 0.3-0.5$~B) and $M_{\rm CSM} \approx M_{\rm ej}$ the fallback mass may
be significant, $\gtrsim 0.05~M_\odot$. This is comparable to
the typical mass of $^{56}\rm{Ni}$ inferred to be ejected in core collapse SNe. Since $^{56}\rm{Ni}$ is synthesized in the densest, innermost regions, such strong fallback could significantly reduce or eliminate entirely the radioactivity available to contribute to the light curve.
The results in Figure~\ref{f:fallback} are only suggestive, as the actual amount of fallback will depend on the details of the progenitor structure and explosion mechanism.
Whether fallback is relevant for RFSNe is unclear. Given the scalings of Figure~\ref{f:fintherm_vary}, a low explosion energy will lead to a dim light curve unless the progenitor star radius is very large. Alternatively, if the explosion energy is typical ($E \approx 1$~B), the
CSM mass likely needs to exceed that of the ejecta. Even in cases where the fallback mass is significant, multi-dimensional effects could mix synthesized $^{56}\rm{Ni}$ out
to larger radii, allowing some radioactive material to be ejected. More detailed simulations are needed to evaluate the importance of fallback in RFSNe.
\subsection{Light Curves \label{s:lc}}
\subsubsection{Nickel-Free Light Curves \label{s:nifree}}
Having run hydrodynamical simulations of the ejecta/CSM interaction, we post-process the results with radiation transport calculations in SEDONA.
Table~\ref{t:many_lc} gives the parameters of the models considered, along with our calculated rise time, decline time, and peak brightness.
Figure \ref{f:lc_10x} shows a specific example light curve compared to data from SN\,2010X\,. While the parameters ($M_{\rm shell} = 3.0~\rm{M}_\odot$, $R_{\rm mid} = 2 \times 10^{12}~{\rm cm}$, $\tau = 1~{\rm day}$, $E_{\rm exp} = 3~{\rm B}$)
were not finely tuned to fit this particular object, the model reproduces the bulk properties of this supernova rather well.
We show in Figure \ref{f:many_lc} the variety of $r$-band light curves and bulk properties
(peak brightness, rise time, and decline time) for our parameter survey of different CSM structures and explosion energies.
Similar to the observed diversity in RFSNe shown by \citet{drout14}, the model light curves
display generally short durations but span a wide range in brightness. For the parameter range chosen,
most of our models occupy the lower-luminosity ($M_r > -17$) region. However, models with higher explosion energies ($E > 1$~B), larger radii ($R_{\rm csm} \gtrsim 10^{14}$~cm), or lower ejected masses ($M \lesssim 2 M_\odot$) begin to approach the luminosity and rapid timescales of the brightest RFSNe.
To explore the effect of ejecta mass in a parameterized way, we have also included in our sample a model for which the stellar density profile has been reduced by a factor of 3 and exploded into a $1~{\rm M}_\odot$ shell with 3~B. The resulting light curve is very similar to that of the original mass star exploded into a $1~{\rm M}_\odot$ shell with 6 B, suggesting that the structure of the star itself is not particularly important to the shape of the light curve but rather that the $E/M$ ratio and CSM structure primarily determine the gross properties of the observed supernova.
While the properties of the models in our parameter survey resemble those of many observed RFSNe, the models do not well fit the light curves of some higher-luminosity events. As shown in Figure \ref{f:many_lc}, while we can attain the necessary peak luminosities and timescales for SN\,2002bj\, and SN\,2015U\,, the shapes of the light curves are different; in particular, it is difficult to obtain a short enough rise time to match the observations. This indicates
that the fastest rising events may not be explained by post-shock cooling. A fast ($\sim$days) rise of the light curve may be possible as a result of
shock breakout in dense CSM \citep{chevalier11}. It is also possible that
in some events, significant CSM interaction is ongoing throughout the light curve. The narrow He lines seen in SN\,2015U\, \citep{shivvers16} certainly suggest that there is ongoing conversion of kinetic energy to thermal energy, well past the supernova peak. Capturing these properties would require the use of radiation-hydrodynamics calculations (rather than treating the hydrodynamics and radiation transport separately in sequence).
Figures \ref{f:Lpeak_vary} and \ref{f:timescale_vary} show numerical versus analytical results for the same series as presented in Figure \ref{f:fintherm_vary}. While our analytical estimates for the total available energy were quite accurate, the light curves are somewhat more complex. Because $t_{\rm sn}$ and $L_{\rm sn}$ depend on both the sum and the ratio of $M_{\rm CSM}$ and $M_{\rm ej}$ in Equations \ref{eq:t_sn} and \ref{eq:L_sn} (through the $M_{\rm diff}$ factor), they do not lend themselves to simple power laws. As we showed in \S \ref{s:analytics}, there are some assumptions that can be used to simplify these expressions. In these figures, we have plotted the resulting scalings $t_{\rm sn} \propto M_{\rm CSM}^{\frac{-(n-5)}{6(n-3)} + \frac{1}{2}}$ and $L_\mathrm{sn} \propto M_{\rm CSM}^{\frac{5(n-5)}{6(n-3)} - \frac{1}{2}}$ with $n=6$ and $n=8$ as examples.
We also see that, while our analytics did not consider the effects of varying the shell width $\tau$, $L_{\rm sn}$ shows a nearly linear dependence on this parameter. This may be because a more diffuse shell produces a weaker reverse shock and more evenly distributes thermal energy in the ejecta (see Figure \ref{f:prof_vary_tau}), allowing for a higher and earlier peak. We also see a much larger dependence on radius than expected, possibly in part due to the fact that when increasing the radius we also increased $\tau$ proportionally such that the profile of the ejecta would simply scale.
\begin{figure*}
\begin{center}
\includegraphics[width=7in]{fallback.pdf}
\end{center}
\caption{Amount of fallback in the explosion of a 5~$\rm{M}_\odot$ star with $3~{\rm M}_\odot$ of CSM. Left: Cumulative fallback mass over time for models with various explosion energies. Right: Final amount of fallback as a function of explosion energy. Here explosion energy refers to the final kinetic energy of the ejecta at infinity. For lower energies ($E < 0.5$~B) the fallback mass can be
significant ($\gtrsim 0.05~\rm{M}_\odot$) and may influence the mass of radioactive $^{56}\rm{Ni}$ ejected. \label{f:fallback}}
\end{figure*}
We also derive scalings from our numerical results, including for $\tau$, which was not included in our analytical predictions. Equations for peak luminosity and timescale based on the fits to our numerical results are:
\begin{equation}
L_{\rm sn} \approx (1.3\times10^{42}~{\rm erg/s})
~M_{\rm CSM}^{-0.27}R_0^{1.17}\tau^{0.98}E_{\rm exp}^{0.87}\,\,,
\end{equation}
\begin{equation}
t_{\rm sn} \approx (29~{\rm days})
~M_{\rm CSM}^{0.4}R_0^{0.16}\tau^{-0.11}E_{\rm exp}^{-0.22}\,\,.
\end{equation}
The normalizations are obtained by taking the average value from the fits to each parameter variation and then reducing to one significant figure due to the uncertainty.
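Because the normalizations depend on the units adopted for each parameter, the safest use of these fits is differential. The sketch below (an illustration only, not a calibrated tool) applies just the fitted exponents to multiplicative changes of the parameters:
\begin{verbatim}
def scale_lc(fM=1.0, fR=1.0, ftau=1.0, fE=1.0):
    """Fractional change of (L_sn, t_sn) for multiplicative changes
    of the parameters, e.g. fE = 2.0 doubles the explosion energy
    at fixed CSM properties; the normalizations cancel out.
    """
    L_fac = fM**-0.27 * fR**1.17 * ftau**0.98 * fE**0.87
    t_fac = fM**0.4 * fR**0.16 * ftau**-0.11 * fE**-0.22
    return L_fac, t_fac

print(scale_lc(fE=2.0))  # ~(1.83, 0.86): ~80% brighter, ~14% faster
\end{verbatim}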
\subsubsection{Spectra for SN 2010X}
While a comprehensive study of the spectroscopic properties of our models is beyond the scope of this work, we show in Figure \ref{f:spec_10x} example spectra of the single SN\,2010X\, model whose light curve is shown in Figure \ref{f:lc_10x}. Figure \ref{f:spec_10x} shows comparisons of our calculated spectra to those obtained by \citet{kasliwal10} at similar epochs. The observed spectra have been corrected for the redshift of the host galaxy (NGC 1573A at $z = 0.015014$) and de-reddened using the Galactic extinction value along the line of sight, $A_V = 0.401$, but assuming no host extinction. As can be expected, the results from our model resemble those of a typical SN Ibc, although at early times they are quite blue. They compare fairly well with SN\,2010X\, spectra, showing many of the same features but not always recovering their relative strengths. The calculated spectra are also slightly bluer across the board, which could be due to unaccounted-for host extinction that we have chosen to exclude from our corrections to the data.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{lc_10x.pdf}
\end{center}
\caption{Light curve from one run plotted against the light curves for SN\,2010X\,. The parameters used here are $E_{\rm exp} = 3~B$, $M_{\rm shell} = 3~\rm{M}_\odot$, $r_{\rm mid} = 2\times10^{12}~{\rm cm}$, and $\tau = 1~{\rm day}$. Because the parameters were not specifically tuned, we do not expect a perfect fit, but this comparison demonstrates the viability of the shock cooling model to explain many RFSNe even without extensive model tweaking. We correct the data for Galactic extinction along the line of sight to the host galaxy, NGC 1573A: $A_g = 0.483$; $A_r = 0.334$; $A_i = 0.248$. We do not assume host galaxy extinction. \label{f:lc_10x}}
\end{figure}
\clearpage
\onecolumn
\begin{figure*}
\begin{center}
\includegraphics[width=3in]{many_lc_thickline.pdf}
\includegraphics[width=3in]{magvst.pdf}
\end{center}
\caption{Calculated $r$-band optical data for many of the hydrodynamical models from Section \ref{s:hydro}. Left: Light curves including parameter variation in radius, explosion energy, shell mass, and $\tau$. This plot also includes more extreme runs with large energy $E_{\rm exp} = 6~B$ and fallback models with $E_{\rm exp} = 0.22,~0.25~B$. Light curves have been run with low photon counts for speed and then smoothed using Savitzky-Golay filtering. Right: Peak magnitude and timescale plots for these light curves. To the left of the plot is the rise time ($t_{\rm peak} - t_0$). To the right are decline times determined by how long it takes for the $r$-band light curve to decline from peak by two magnitudes. The parameters and bulk properties of the runs plotted here are shown in Table
\ref{t:many_lc}.
\label{f:many_lc}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{Lpeak_vary.pdf}
\end{center}
\caption{Peak luminosities for the parameter study shown in Figure \ref{f:fintherm_vary}. The power-law fits to our numerical data are listed in the figure, and solid gray lines show the fits to the data. Solid magenta lines show our analytical power laws from Equation~\ref{eq:L_sn_b} of \S \ref{s:analytics} using $n=6$. The cyan line in the first panel represents the same but using $n=8$ for the mass variation. Note that there is a stronger dependence of $L_{\rm sn}$ on both $\tau$ and $R_{\rm CSM}$, which we tentatively attribute to the different distribution of energy for different CSM structures, as shown in Figure \ref{f:prof_vary_tau}. \label{f:Lpeak_vary}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{timescale_vary.pdf}
\end{center}
\caption{Same as Figure \ref{f:Lpeak_vary} but for timescales $t_{\rm sn} = t_{\rm rise} + t_{\rm decline}$. Again, gray lines show our power-law fits to the data, while magenta lines show analytic results from Equation~\ref{eq:t_sn_b} of \S \ref{s:analytics}. As in Figure \ref{f:Lpeak_vary}, the magenta line in the first panel uses $n=6$, and the cyan line uses $n=8$. \label{f:timescale_vary}}
\end{figure*}
\begin{table*}
\footnotesize
\caption{ Table of values presented in Figure \ref{f:many_lc}.}
\centering
\begin{tabular}{c c c c c c c c}
\hline\hline
$M_{\rm shell}~({\rm M}_\odot)$ & $\tau~({\rm d})$ & $R_{\rm mid}~({\rm cm})$ & $E_{\rm exp}~({\rm B})$ & $M_{\rm peak}$ & decline time (d) & rise time (d) & color plotted \\ [0.5ex]
3.0 & 0.5 & $2 \times 10^{12}$ & 1.0 & -16.1391 & 15 & 20.5 & black \\
3.0 & 0.75 & $2 \times 10^{12}$ & 1.0 & -16.0881 & 16 & 18.5 & black \\
3.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -16.0729 & 14 & 18.5 & black \\
3.0 & 1.25 & $2 \times 10^{12}$ & 1.0 & -16.0990 & 15 & 17.5 & black \\
3.0 & 1.4 & $2 \times 10^{12}$ & 1.0 & -16.1721 & 16 & 15.5 & black \\
3.0 & 4.0 & $2 \times 10^{12}$ & 1.0 & -16.4137 & 17 & 12.5 & black \\
3.0 & 10.0 & $2 \times 10^{12}$ & 1.0 & -17.0311 & 20 & 12.5 & black \\
\hline
1.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -15.6547 & 12 & 9.5 & green \\
2.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -15.8692 & 11 & 16.5 & green \\
3.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -16.0729 & 14 & 18.5 & green \\
4.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -16.0987 & 15 & 22.5 & green \\
\hline
3.0 & 0.5 & $1 \times 10^{12}$ & 1.0 & -15.6602 & 13 & 16.5 & cyan \\
3.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -16.0774 & 14 & 18.5 & cyan \\
3.0 & 1.5 & $3 \times 10^{12}$ & 1.0 & -16.3172 & 17 & 18.5 & cyan \\
3.0 & 2.0 & $4 \times 10^{12}$ & 1.0 & -16.4376 & 15 & 21.5 & cyan \\
\hline
3.0 & 1.0 & $2 \times 10^{12}$ & 1.0 & -16.1009 & 13 & 19.5 & magenta \\
3.0 & 1.0 & $2 \times 10^{12}$ & 1.5 & -16.3225 & 12 & 18.5 & magenta \\
3.0 & 1.0 & $2 \times 10^{12}$ & 2.0 & -16.5175 & 12 & 16.5 & magenta \\
3.0 & 1.0 & $2 \times 10^{12}$ & 2.5 & -16.6194 & 15 & 11.5 & magenta \\
\hline
3.0 & 10.0 & $2 \times 10^{12}$ & 6.0 & -18.0486 & 9 & 11.5 & red \\
3.0 & 1.0 & $2 \times 10^{12}$ & 0.22 & -14.9561 & 17 & 26.5 & red \\
3.0 & 1.0 & $2 \times 10^{12}$ & 0.25 & -15.0852 & 16 & 25.5 & red \\
3.0 & 1.0 & $2 \times 10^{12}$ & 3.0 & -16.7979 & 12 & 14.5 & red \\
1.0* & (wind) & $2 \times 10^{14}$ & 3.0 & -18.8123 & 9 & 18.5 & red \\
\hline
1.0 & 10.0 & $1 \times 10^{12}$ & 3.0 & -17.6945 & 8 & 11.5 & blue \\
1.0 & 10.0 & $1 \times 10^{12}$ & 6.0 & -17.7910 & 6 & 10.5 & blue \\
1.0 & 10.0 & $2 \times 10^{12}$ & 3.0 & -18.1923 & 10 & 12.5 & blue \\
1.0 & 10.0 & $2 \times 10^{12}$ & 6.0 & -18.3955 & 8 & 11.5 & blue \\
1.0* & 10.0 & $2 \times 10^{13}$ & 3.0 & -18.3533 & 8 & 12.5 & blue \\
\hline
\end{tabular} \\
* Stellar model with density profile reduced by a factor of three in order to explore lower ejecta mass. \\
The label (wind) signifies that in this case the CSM density profile goes as $r^{-2}$ and is not modified by the Gaussian.
\label{t:many_lc}
\end{table*}
\normalsize
\twocolumn
\subsubsection{Double-Peaked Light Curves \label{s:dbl_pk}}
The contribution of significant emission from shock cooling does not necessarily preclude the presence of radioactive nickel in the ejecta. Models that include some radioactive $^{56}\rm{Ni}$ can produce more complex light curves with double-peaked morphologies. Figure \ref{f:lc_ni} shows our light curves using the parameters in Figure \ref{f:lc_10x} ($E_{\rm exp} = 3~B$, $M_{\rm shell} = 3~\rm{M}_\odot$, $r_{\rm mid} = 2\times10^{12}~{\rm cm}$, and $\tau = 1~{\rm day}$) as well as 0.01, 0.05, or 0.1 $\rm{M}_\odot$ of $^{56}\rm{Ni}$ concentrated in the center of the ejecta. The $^{56}\rm{Ni}$ is distributed throughout the ejecta using the parameterized radial profile Equation \ref{eq:ni_dist} with smearing parameters $s = 10$ and $50$. These light curves qualitatively resemble those of double-peaked SNe discussed in \citet{drout16}, such as SNe 2005bf, 2008D, and 2013ge.
As expected, the additional nickel increases the peak luminosity and adds the characteristic radioactive tail. The $^{56}\rm{Ni}$ can also produce a second peak in the light curve,
but the radioactive peak can blend with the shock-cooling peak for models with smeared nickel distributions. Interestingly, the model with only $0.01~\rm{M}_\odot$ of nickel but smearing factor $s = 50$ produces a bright, short-lived peak that drops precipitously to a very low magnitude, which might often be below the limits of detectors, depending on the object's distance. Therefore, a small amount of highly smeared nickel, added to the shock cooling contribution, might increase the luminosity without producing a detectable tail.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{nickel.pdf}
\end{center}
\caption{Model light curves obtained by adding $^{56}\rm{Ni}$ to the ejecta structures for the SN\,2010X\, fit in Figure \ref{f:lc_10x}. The Figure shows models with nickel masses of 0.01, 0.05, and 0.1 $\rm{M}_\odot$; and for two levels of smearing, $s = 10$ and $50$. Less smearing (with nickel concentrated toward the center) is more likely to result in two distinct peaks.
\label{f:lc_ni}}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{spec_d10_m9p5.pdf}
\includegraphics[width=6in]{spec_d23_m23p5.pdf}
\end{center}
\caption{Spectra of the same model shown in Figure \ref{f:lc_10x} at days 12 and 23 after explosion (black). We have plotted data from SN\,2010X\, at days 9.5 and 23.5, respectively, for comparison (red), after correcting for redshift and Galactic extinction. The presumed day after explosion for the data is determined by the shift we use in matching the light curve data to our model light curves. Note that many of the same features are reproduced, but the relative strengths can differ for a variety of possible reasons, including variations in composition, temperature, and ejecta structure. Because we have not finely tuned our model to fit this object, we expect it to recover only the bulk properties of the spectra, which are typical of SNe Ibc. Our calculated spectra are also slightly bluer, which could be corrected by assuming some amount of extinction for the host galaxy. \label{f:spec_10x}}
\end{figure*}
\subsection{Effects of Rayleigh-Taylor Mixing \label{s:rt}}
While our hydrodynamical models have been carried out in 1D, it is well known that the SN interaction is subject to the Rayleigh-Taylor instability (RTI).
The sharp features and spikes in the density profiles of Figures~\ref{f:prof_vary_M} and \ref{f:prof_vary_tau} can be expected to be smoothed out by RT instabilities, which will also mix the ejecta and CSM.
These multi-dimensional effects could in principle alter the rate at which light diffuses out of the ejecta and hence affect the shape of the light curve.
To estimate the effects of the RTI on the models, we ran one of our star + CSM models using the hydrodynamics code from \citet{duffel16}, which includes a 1D RTI mixing prescription that has been calibrated to 3D models. In this case, we used a CSM mass of $3~\rm{M}_\odot$ and a CSM radius of $2\times 10^{13}$ cm, chosen in order to approach the higher luminosities of SN\,2015U\, and SN\,2002bj\,. The hydrodynamics results are shown in Figure \ref{f:rt_hydro}. RTI mixing almost entirely eliminates the large density spike that occurs in 1D models at the CSM/ejecta contact discontinuity.
The energy density in the RTI calculation is also somewhat higher than in a model without RTI, since kinetic energy in the form of turbulence eventually cascades to smaller spatial scales until it is thermalized. Rather than all of the kinetic energy going into expansion and acceleration of the ejecta, some instead becomes turbulent kinetic energy and eventually thermal energy.
Figure \ref{f:rt_lc} shows the resulting light curves from the runs with RT prescription turned both on and off. It seems, in this case, that even though the final hydrodynamics profile is dramatically different, the mixing does not affect the overall peak luminosity or timescale, although it does affect the very early behavior of the light curve. This may be due to the fact that in the RT-off case, the shock passes through, heats, and accelerates the outer layers to large radii and large velocities, so the diffusion time for the small amount of radiation in these outer layers is short; in the RT-on case, much of the shock energy is dissipated into heat before it can reach these outer layers, and outer layers are not as accelerated and therefore do not reach the low densities needed for a very short diffusion time. In both runs, the peak luminosity is similar to that of SN\,2002bj\,, but the rise time is still too long to fit these fast-rising objects.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{RT_hydro.pdf}
\end{center}
\caption{Energy density and mass density profiles from the 1D hydrodynamics code from \citet{duffel16}, which includes a 3D-calibrated prescription for Rayleigh-Taylor mixing. Here the forward shock is stronger than shown in previous figures because we used a large radius ($2\times 10^{13}$ cm) in the hopes of capturing fast-rising, bright RFSNe. The density structure is dramatically affected by RT instabilities. Note that the run with Rayleigh-Taylor mixing on has a higher energy density; however the envelope is also not as extended as it is without mixing, since more of the outward kinetic energy is converted into turbulence. \label{f:rt_hydro}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{rt_lc.pdf}
\end{center}
\caption{Light curves using the hydro output from our code and the code from \citet{duffel16} with the Rayleigh-Taylor mixing prescription on and off. Evidently even though mixing can significantly affect the structure of the ejecta, it may not have a large effect on the bulk light curve properties.\label{f:rt_lc}}
\end{figure}
\section{Discussion and Future Directions} \label{s:discussion}
We have shown that core-collapse SN models with large pre-supernova radii and no $^{56}\rm{Ni}$ are a viable explanation for some H-free short-duration transients of a range of luminosities. We suggested that the large initial radius may be due to heavy mass loss just prior to the explosion, and we explored the dynamics and observable signatures of stars exploding into shells and winds. The model light curves presented here resemble those of many of the observed RFSNe, but they struggle to capture the light curve shapes for some objects with high luminosities and rapid rise times. It is likely that for brighter objects the stellar radius would be large enough that the shock has not propagated all the way through the shell by the time radiation losses become significant. Scenarios involving shock breakout in a wind may be more appropriate for these events, and this will be an area of exploration using radiation-hydrodynamical simulations in later work.
We expect that the use of radiation-hydrodynamics will change calculations for larger-radius progenitor systems. In such models, radiation will begin escaping at early times when the ejecta have not yet reached homologous expansion. These radiation losses can affect the dynamics; in particular, if radiation can escape directly from the region of the shock, the shock could lose significant energy and result in less acceleration of the outer layers. This could quantitatively change the peak and timescale of the light curve as well as the velocities of spectroscopic lines.
Two outstanding questions remain for the presented model for RFSNe. One is the reason for the apparent low ejection of $^{56}\rm{Ni}$. Observations and parameterized 1D models of massive star explosions suggest that $\sim 0.05~\rm{M}_\odot$ of $^{56}\rm{Ni}$ should be synthesized in typical core collapse events. In \S \ref{s:fallback}, we studied whether
RFSNe may have enhanced fallback, which could rob the ejecta of radioactivity. In stars surrounded by a dense CSM, the interaction of the ejecta with the CSM will produce a reverse shock which can decelerate and push material back onto the central remnant. While this suggests an intriguing connection between nickel-free explosions and progenitors with extended envelopes or shells, achieving significant fallback through the reverse shock would require the mass of the CSM to equal or exceed that of the ejecta.
Alternatively, independent of the presence of the CSM, fallback can occur if the explosion energy is somewhat less than the canonical 1~B. We showed that for certain stellar structures, the explosion energy can be tuned to allow $\sim 0.1~\rm{M}_\odot$ of
fallback while still unbinding the rest of the star and accelerating outer layers to high velocities. Light curves calculated for these examples are relatively dim and long-lived, so obtaining RFSNe with fallback may require lower-mass, higher-radius pre-SN configurations.
Our 1D studies, however, are merely a proof of concept for the viability of removing $^{56}\rm{Ni}$ by fallback. More detailed calculations would consider how the interior stellar structure may have been modified by the pre-supernova mass-loss, as well as the influence on fallback mass of both multi-dimensional dynamics and the particular explosion mechanism.
The second outstanding question is how H-stripped stars might be able to obtain extended envelopes or mass shell ejections that produce an adequately bright shock cooling light curve. While several theoretical studies have laid the groundwork for understanding how late burning phases could unbind or extend much of the stellar envelope, more detailed stellar evolution calculations are needed to understand whether these instabilities can occur in the final few days of a stripped-envelope star's life.
\section*{Conclusions}
In this paper, we have explored the viability of hydrogen-stripped core-collapse supernova models using no radioactive nickel and extended helium envelopes to explain the enigmatic rapidly fading supernovae discovered in the last few years. Using 1D stellar evolution, hydrodynamics, and radiation transport codes in sequence, we have shown that such models reproduce the bulk properties of these events. We also compare our numerical results to analytical scalings predicted for the light curve properties. Further investigation using radiation-hydrodynamics codes would help in understanding the cases with more extended envelopes, as it is expected that sometimes the ejecta will still be dynamically interacting with the CSM even while radiation losses occur. Additional insight into possible mechanisms for both attaining such extended envelopes and failing to produce nickel in the ejecta is also necessary to validate this explanation.
\section*{Acknowledgements}
The authors would like to thank Sterl Phinney, Andrew McFadyen, Lars Bildsten, and Matteo Cantiello for useful discussion and collaboration. IK is supported by the DOE NNSA Stockpile Stewardship Graduate Fellowship Program. This research is also funded in part by the Gordon and Betty Moore Foundation through Grant GBMF5076. DK is supported in part by a Department of Energy Office of Nuclear Physics Early Career Award, and by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Divisions of Nuclear Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\bibliographystyle{mn2e}
\section{Introduction}
\label{intro}
Wireless network jamming consists in compromising the functionality of a wireless network by activating jamming devices (\emph{jammers}) that disrupt network communications by emitting interfering signals on the same frequencies used by the network. Jamming is commonly associated with military and security questions: it is natural to think of jamming hostile networks in war scenarios, to deny enemy communications, or during high-risk events, such as visits of heads of State, when it is imperative to prevent bomb detonation by cellular phones. However, in recent times, jamming is also increasingly found in peaceful everyday-life applications that are not related to military and security issues. Italy provides two examples of such applications: the Italian public administration has evaluated the expediency of using jammers during large competitive examinations to prevent cheating, while schools have started to deploy jammers to keep students from being distracted by smart phones during lectures\footnote{
Panorama. Cellulari a scuola: la soluzione c'\`e ma la vietano (in Italian).
\url{http://italia.panorama.it/Cellulari-a-scuola-la-soluzione-c-e-ma-la-vietano}
(2007)}.
The USA provides another example: in some hotels, there is the suspicion that unscrupulous managers have shrewdly placed jammers to deny cellular coverage and force businessmen to use room phones, in an attempt to inflate the final bill of a stay\footnote{
C. Elliott: The Cellphone That Doesn't Work at the Hotel.
\url{http://www.nytimes.com/2004/09/07/business/07jamming.html?_r=0}.
The New York Times 07.09.2004
(2004)}.
The \emph{Wireless Network Jamming Problem (NJP)} can be described as the problem of optimally placing and configuring a set of jammers in order to interdict communications of a wireless network.
As pointed out by Commander et al. in \cite{CoEtAl07,CoEtAl08}, though the problem is very relevant and there is a wide literature about \emph{preventing} hostile jamming, surprisingly the NJP has been practically neglected until their work. Moreover, together with the work \cite{CoEtAl09}, these seem to be the only papers that have directly addressed the problem.
\noindent
Our main original contributions in the present paper are:
\begin{enumerate}
\item revisiting the models for the NJP introduced by Commander et al. \cite{CoEtAl07,CoEtAl08,CoEtAl09}. Specifically, we highlight the strong connections of the NJP with classical wireless network design and, as recommended by regulatory bodies, we adopt a testpoint model and signal-to-interference ratio (SIR) quantities
to represent coverage and jamming conditions, refining the models of \cite{CoEtAl07,CoEtAl08,CoEtAl09};
\item addressing the uncertain nature of the NJP, considering a more realistic in-between case w.r.t. \cite{CoEtAl07} (complete information case) and \cite{CoEtAl08} (complete uncertainty case), where we suppose to have partial information about the network to be jammed. In particular, we suppose to have an estimate of the SIR balance in each testpoint of the network and we propose an original Robust Optimization (RO) approach to provide protection against estimated deviations. Our RO approach also presents a different way of dealing with uncertainty w.r.t. the scenario-based approach of \cite{CoEtAl09};
\item proposing an original \emph{robust cutting-plane algorithm} to tackle the right-hand-side (RHS) uncertainty coming from uncertain SIR quantities. Tackling RHS uncertainty by a canonical row-wise uncertainty approach and cardinality-constrained uncertainty sets like \cite{BeSi04} leads to trivial and conservative robust counterparts. Our new algorithm allows us to overcome this conservatism and model rigidity and to exploit in an innovative way the potential of recent Multiband Robust Optimization (see e.g., \cite{BaEtAl13,BuDA12a,BuDA12b}).
\end{enumerate}
\smallskip
\noindent
The remainder of the paper is organized as follows. In Section \ref{sec:WND}, we introduce fundamentals of \emph{realistic} wireless network design. These concepts are then used in Section \ref{sec:NJP} to derive an optimization model for the NJP. In Section \ref{sec:Robust-NJP}, we discuss data uncertainty in jamming and present our original algorithm. Finally, in Section \ref{sec:experiments} we evaluate our original algorithm on realistic wireless network instances, to then derive conclusions in Section \ref{sec:conclusions}.
\section{Classical Wireless Network Design}
\label{sec:WND}
To define our new model for network jamming, we first discuss closely related concepts from wireless network design. For modeling purposes, a wireless network can be essentially described as a set $S$ of
\emph{transceiver stations (TRXs)} that provide a telecommunication service to a set of users that are located in a target area. In line with recommendations by telecommunication regulatory bodies (e.g., \cite{AGCOM,Chester}), we decompose the target area into a set $T$ of \emph{testpoints (TPs)}, namely elementary portions of territory of identical and squared size. Each TP is assumed to correspond to a ``superuser'' that is representative of all users within the corresponding elementary area.
TRXs and TPs are characterized by a location (geographical coordinates) and a number of radio-electrical
parameters (e.g., power emission, frequency channel, transmission scheme). The \emph{Wireless
Network Design Problem} (WND)
consists in establishing the location and suitable values for the parameters of
the TRXs to optimize
an objective function that expresses the interest of the
decision maker (e.g., maximizing a service revenue function).
For an exhaustive introduction to the WND, we refer the reader to
\cite{DA12,DAMaSa13,KeOlRa10,RePa06}.
An optimization model for the WND typically focuses attention only on a subset of the parameters characterizing a TRX. In particular, the majority of the models considers the setting of power emission of TRXs and the assignment of served TPs to TRXs as the main decision variables. These are indeed two critical decisions that must be taken by a network administrator, as indicated in several real studies (e.g., \cite{CaEtAl11,DA12,DAMaSa11,DAMaSa13,KeOlRa10,MaRoSm06,RePa06}). Other parameters that are commonly considered are the frequency and the transmission scheme used to serve a terminal (e.g., \cite{DAMa09,DAMaSa11,MoSmAl04}). In \cite{DA12,MaRoSm06}, several distinct versions of the WND are presented and a hierarchy of WND problems is identified.
Let us now focus attention on a TP $t \in T$: when covered with service, $t$ is served by a single TRX $s \in S$, called \emph{server}, that provides the telecommunication service to it. Once the server of a TP is chosen, all the other TRXs are \emph{interferers} and reduce the quality of service obtained by $t$ from its server $s$.
Analytically, if we denote by $p_s > 0$ the power emission of a TRX $s \in S$, a TP $t \in T$ is covered with service (or
\emph{served}) when the ratio of the \emph{received} service power to the sum of the \emph{received} interfering powers
(\emph{signal-to-interference ratio} - \emph{SIR}) is above a threshold $\delta >
0$, which depends on the desired quality of service \cite{Ra01}:
\begin{equation}
\label{eq:firstSIRineq}
SIR_{t s}(p) = \frac{a_{t s(t)} \cdot p_{s(t)}}
{N + \sum_{s \in S\setminus\{s(t)\}} a_{ts} \cdot p_s}
\hspace{0.1cm}
\geq
\hspace{0.1cm}
\delta \; .
\end{equation}
\noindent
In this inequality:
1) $s(t) \in S$ is the server of TP $t$;
2) $N > 0$ represents the noise of the system, which is commonly regarded as a constant whose value depends upon the frequency used for transmissions (see \cite{DAMaSa13,Ra01});
3) the \emph{received power} $P_{s}(t)$ that $t$ gets from any TRX $s \in S$ is the product of the power $p_s$ emitted by $s$ multiplied by the factor $a_{ts}$, i.e.
$P_{s}(t) = a_{ts}\cdot p_s$.
The factor $a_{ts}$ is called {\em fading
coefficient}, lies in the range $[0,1]$ and summarizes the reduction in power that a signal experiences while propagating from $s$ to $t$ \cite{Ra01}.
Through simple algebra operations, inequality (\ref{eq:firstSIRineq}) can be
transformed into the
following linear inequality, commonly called \emph{SIR inequality}:
\begin{equation}\label{eq:secondSIRineq}
a_{ts(t)} \cdot p_{s(t)} - \delta \sum_{s \in S \setminus\{s(t)\}} a_{t s} \cdot p_s
\hspace{0.1cm}
\geq
\hspace{0.1cm}
\delta \cdot N \; .
\end{equation}
\noindent
Since assessing service coverage is a central question in the design of any
wireless network, the SIR inequality constitutes the core of any optimization
problem used in wireless network design.
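In computational terms, checking coverage of a TP amounts to evaluating inequality \eqref{eq:secondSIRineq}. A minimal Python sketch (with hypothetical data structures) reads:
\begin{verbatim}
def is_served(t, server, p, a, delta, N):
    """Coverage test based on the SIR inequality for testpoint t.

    p     : dict TRX -> emitted power (mW)
    a     : dict (TP, TRX) -> fading coefficient in [0, 1]
    delta : SIR threshold on a linear (non-dB) scale
    N     : system noise (mW)
    """
    interference = sum(a[t, s] * p[s] for s in p if s != server)
    return a[t, server] * p[server] - delta * interference >= delta * N
\end{verbatim}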
In a hierarchy of WND problems,
a particularly relevant case is constituted by the \emph{Scheduling and Power Assignment Problem (SPAP)} \cite{DA12,DAMaSa11,DAMaSa13,MaRoSm06,RePa06}.
In the SPAP, two decisions must be taken: 1) setting the power emission of each TRX $s \in S$ and 2) assigning served TPs to activated TRXs (note that this corresponds to identifying a subset of service links TRX-TP that can be scheduled simultaneously without interference, so we use the term \emph{scheduling}).
To model these two decisions, two types of decision variables are commonly introduced:
\begin{enumerate}
\item a non-negative continuous \emph{power variable} $p_s \in [0,P_{TRX}]$ to represent the power emission of each TRX $s \in S$;
\item a binary \emph{service assignment variable} $x_{ts} \in \{0,1\}$, $\forall \hspace{0.1cm} t \in T, s \in S$, that is equal to 1 if TP $t \in T$ is served by TRX $s \in S$ and 0 otherwise.
\end{enumerate}
\noindent
By exploiting these two families of variables, the SPAP can be naturally formulated as the following Mixed-Integer Linear Program (SPAP-MILP):
\begin{align}
\max\;\;
&
\sum_{t \in T} \sum_{s \in S} r_{t} \cdot x_{ts}
&&
\tag{SPAP-MILP}
\\
&
a_{ts} \cdot p_{s} - \delta \sum_{\sigma \in S \setminus\{s\}} a_{t \sigma} \cdot p_\sigma
+ M \cdot (1 - x_{ts})
\hspace{0.1cm}
\geq
\hspace{0.1cm}
\delta \cdot N
&&
t \in T, s \in S
\label{SPAPsir}
\\
&
\sum_{s \in S} x_{ts} \leq 1
&&
t \in T
\label{SPAPgub}
\\
&
0 \leq p_s \leq P_{TRX}
&&
s \in S
\nonumber
\\
&
x_{ts} \in \{0,1\}
&&
t \in T, s \in S \; .
\nonumber
\end{align}
\noindent
The objective function aims at maximizing the total revenue obtained by serving testpoints (the coverage of each TP generates a revenue equal to $r_t > 0$). Each constraint \eqref{SPAPsir} corresponds to the SIR coverage condition \eqref{eq:firstSIRineq} defined for a TP $t$ served by TRX $s$ and includes a sufficiently large value $M$ (a so-called \emph{big-M coefficient}) to activate/deactivate the constraint.
Finally, constraints \eqref{SPAPgub} impose that each TP is served by at most one TRX.
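For illustration, SPAP-MILP can be written down almost verbatim with an off-the-shelf MILP modeling layer. The sketch below uses the PuLP library on a tiny made-up instance (all data values are placeholders):
\begin{verbatim}
import pulp

T, S = range(3), range(2)                                   # toy index sets
a = {(t, s): 0.5 / (1 + abs(t - s)) for t in T for s in S}  # made-up fading
r, delta, N, P_TRX, bigM = [1.0, 1.0, 1.0], 10.0, 1e-9, 100.0, 1e4

prob = pulp.LpProblem("SPAP", pulp.LpMaximize)
p = {s: pulp.LpVariable(f"p_{s}", 0, P_TRX) for s in S}
x = {(t, s): pulp.LpVariable(f"x_{t}_{s}", cat="Binary")
     for t in T for s in S}

prob += pulp.lpSum(r[t] * x[t, s] for t in T for s in S)    # total revenue
for t in T:
    prob += pulp.lpSum(x[t, s] for s in S) <= 1             # one server max
    for s in S:
        interf = pulp.lpSum(a[t, u] * p[u] for u in S if u != s)
        prob += (a[t, s] * p[s] - delta * interf
                 + bigM * (1 - x[t, s]) >= delta * N)       # SIR constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, s) for (t, s) in x if x[t, s].value() == 1])     # served pairs
\end{verbatim}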
\section{The Wireless Network Jamming Problem}
\label{sec:NJP}
Consider a wireless network designed by solving SPAP-MILP.
Our aim is now to compromise the functionality of the network by deploying jamming stations (\emph{jammers}). A jammer has the essential task of emitting a signal on the same frequency channel used by the jammed network to interfere with the transmissions of the TRXs and destroy their service.
Let $J$ be the set of deployed jammers and denote by $p_j > 0$ the power emission of each jammer $j \in J$. The presence of the jammers in the wireless network has the effect of creating an \emph{additional interfering summation} in the SIR inequality \eqref{eq:secondSIRineq} associated with each testpoint $t \in T$, namely:
\begin{equation}
\label{eq:secondjammedSIRineq}
a_{ts(t)} \cdot p_{s(t)}
\hspace{0.1cm} - \delta \sum_{s \in S \setminus\{s(t)\}} a_{t s} \cdot p_s
\hspace{0.1cm} - \delta \sum_{j \in J} a_{t j} \cdot p_j
\hspace{0.1cm}
\geq
\hspace{0.1cm}
\delta \cdot N \; .
\end{equation}
\noindent
Assume now that we want to interdict the communications in the network by jamming.
To operate the jamming we are allowed to choose the subset $J' \subseteq J$ of jammers that are activated and the corresponding power emissions $p_j \in [0,P_{JAM}]$, $\forall j \in J'$. We stress that it is rational to set the power emission of each activated jammer to its highest feasible value, since this provides the highest jamming effect. So we assume that if $j \in J$ is activated, then it emits at maximum power, i.e. $p_j = P_{JAM}$.
Let us consider now a wireless network made up of a set of TRXs $S$ providing the service to a set of TPs $T$. Moreover, let us assume that this network has been configured by solving problem SPAP-MILP. So we have at our disposal a feasible solution $(\bar{x},\bar{p})$ of SPAP-MILP, which identifies the subset $T'\subseteq T$ of served TPs (i.e., $T' = \{t \in T: \bar{x}_{ts} = 1 \mbox{ for some } s \in S\}$) and the power emission $\bar{p}_{s}$ of each TRX $s \in S$.
Given a served TP $t \in T'$, we know that the corresponding SIR inequality \eqref{eq:secondjammedSIRineq} without the jamming terms is satisfied by the feasible power vector $\bar{p}$, i.e.
$
a_{ts(t)} \cdot \bar{p}_{s(t)}- \delta \sum_{s \in S \setminus\{s(t)\}} a_{t s} \cdot \bar{p}_s
\geq
\delta \cdot N \; .
$
\noindent
In order to compromise service in $t$ by jamming, the SIR inequality must be violated and we must activate a subset $J' \subseteq J$ of jammers such that:
\begin{equation}
\label{eq:violatedSIRineq}
a_{ts(t)} \cdot \bar{p}_{s(t)}
\hspace{0.1cm} - \delta \sum_{s \in S \setminus\{s(t)\}} a_{t s} \cdot \bar{p}_s
\hspace{0.1cm} - \delta \sum_{j \in J'} a_{t j} \cdot P_{JAM}
\hspace{0.1cm}
<
\hspace{0.1cm}
\delta \cdot N \; .
\end{equation}
\noindent
If we introduce a binary \emph{jammer activation variable} $y_{j} \in \{0,1\}$ for each $j \in J$, which is equal to 1 if jammer $j$ is activated (at its maximum power $P_{JAM}$, as discussed above) and 0 otherwise, and if we define the quantity:
\begin{equation}
\label{eq:SIRbalance}
\Delta SIR_{ts(t)} (\bar{p})
=
a_{ts(t)} \cdot \bar{p}_{s(t)}
- \delta \sum_{s \in S \setminus\{s(t)\}} a_{t s} \cdot \bar{p}_s
- \delta \cdot N \; ,
\end{equation}
\noindent
which expresses the SIR balance in TP $t$ when assigned to TRX $s(t)$ for a power vector $\bar{p}$, then the violated SIR inequality \eqref{eq:violatedSIRineq} can be rewritten as:
\begin{equation}
\label{eq:violatedSIRineq_concise}
\delta \sum_{j \in J} a_{t j} \cdot P_{JAM} \cdot y_{j}
\hspace{0.1cm}
>
\hspace{0.1cm}
\Delta SIR_{ts(t)} (\bar{p}) \; .
\end{equation}
\noindent
This inequality expresses the jamming condition: to jam and deny service in a TP, we must activate a subset of jammers such that the total jamming power received in the TP, multiplied by $\delta$, is greater than the SIR balance granted by the TRXs of the wireless network.
This inequality constitutes the central element of the new jamming optimization model that we introduce in the next paragraph.
\subsection{A SIR-based model for the Wireless Network Jamming Problem}
In our study, given an operating wireless network, we define the \emph{Wireless Network Jamming Problem} (NJP) as follows:
we must select which jammers to activate to maximize a profit function associated with the jamming of served TPs, while respecting a budget that we have at our disposal for the activation. The budget is introduced to model the fact that in real-world deployments we expect to have limited resources available, thus restricting the possibility of deploying jammers in the target area.
Suppose now that for each potentially activable jammer $j \in J$, we have the possibility of choosing among $m \in M = \{1,\ldots,|M|\}$ typologies of jamming devices, each associated with a distinct maximum power emission $P_{JAM}^{m}$ and a distinct cost of deployment $c_{jm} > 0$. In particular, $\forall j \in J$ we assume that $P_{JAM}^{m} < P_{JAM}^{m+1}$ and $c_{jm} < c_{j,m+1}$, $\forall m \in \{1,\ldots,|M|-1\}$.
If we add an index $m \in M$ to the \emph{jammer activation variables} to consider the presence of multiple jamming devices and we introduce binary \emph{jamming variables} $z_{t} \in \{0,1\}$, $\forall \hspace{0.1cm} t \in T'$, that are equal to 1 if served TP $t \in T'$ is jammed and 0 otherwise, the NJP can be modeled as the following 0-1 linear program:
\begin{align}
\max
&
\sum_{t \in T'} \pi_{t} \cdot z_{t}
&&
\tag{NJP-01}
\nonumber
\\
&
\delta \sum_{j \in J} \sum_{m \in M} a_{t j} \cdot P_{JAM}^{m} \cdot y_{jm}
+ \mathcal{M} \cdot (1 - z_t)
\hspace{0.05cm}
\geq
\hspace{0.05cm}
\Delta SIR_{t} + \epsilon
&&
t \in T'
\label{GenNJP_SIR}
\\
&
\sum_{j \in J} \sum_{m \in M} c_{jm} \cdot y_{jm} \leq C
&&
\label{GenNJP_costbudget}
\\
&
\sum_{m \in M} y_{jm} \leq 1
&&
j \in J
\label{GenNJP_GUB}
\\
&
z_{t} \in \{0,1\}
&&
t \in T'
\nonumber
\\
&
y_{jm} \in \{0,1\}
&&
j \in J, m \in M
\nonumber
\end{align}
\noindent
In this model, we maximize an objective function that includes profits $\pi_{t} > 0$ deriving from jamming served TPs $t \in T'$. Constraints \eqref{GenNJP_SIR} are derived from the SIR jamming condition \eqref{eq:violatedSIRineq_concise}. Note that the constraints include a big-M term with a sufficiently large constant $\mathcal{M}$ (written calligraphically to avoid confusion with the set $M$ of device typologies) for activation/deactivation: this is necessary since, due to the budget constraint \eqref{GenNJP_costbudget}, it may happen that not all served TPs can be jammed at the same time, thus requiring us to choose those that are jammed.
Constraint \eqref{GenNJP_costbudget} expresses the budget condition: we can activate a subset of jammers whose total cost does not exceed the total budget $C>0$. Finally, constraints \eqref{GenNJP_GUB} impose that we can install at most one jamming device in each activated jammer.
\smallskip
\noindent
\textbf{Remark.} In constraints \eqref{GenNJP_SIR}, we show only the dependence of the SIR balance $\Delta SIR_{t}$ on the TP index $t$, omitting $s(t)$ and $\bar{p}$.
We do this since, assuming the point of view of the NJP decision maker, we are only interested in knowing the value of the SIR balance in $t$ and we can neglect the information about the serving TRX and the power of the TRXs. We also highlight the presence of a very small value $\epsilon > 0$ to overcome the strict inequality of \eqref{eq:violatedSIRineq_concise}.
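A compact implementation of NJP-01 follows the same pattern. The PuLP sketch below (again with toy placeholder data; \texttt{bigM} plays the role of the constant $\mathcal{M}$) is one way to write it:
\begin{verbatim}
import pulp

Ts, J, M = range(4), range(3), range(2)        # served TPs, sites, devices
P_jam = {0: 100.0, 1: 500.0}                   # device maximum powers (mW)
c = {(j, m): 1.0 + m for j in J for m in M}    # deployment costs
a = {(t, j): 1e-8 for t in Ts for j in J}      # made-up fading coefficients
dSIR = {t: 2e-6 for t in Ts}                   # estimated SIR balances (mW)
pi, delta, C, bigM, eps = 1.0, 10.0, 4.0, 1e3, 1e-12

prob = pulp.LpProblem("NJP", pulp.LpMaximize)
y = {(j, m): pulp.LpVariable(f"y_{j}_{m}", cat="Binary")
     for j in J for m in M}
z = {t: pulp.LpVariable(f"z_{t}", cat="Binary") for t in Ts}

prob += pulp.lpSum(pi * z[t] for t in Ts)                 # jamming profit
for t in Ts:                                              # SIR violation
    jam = delta * pulp.lpSum(a[t, j] * P_jam[m] * y[j, m]
                             for j in J for m in M)
    prob += jam + bigM * (1 - z[t]) >= dSIR[t] + eps
prob += pulp.lpSum(c[j, m] * y[j, m]
                   for j in J for m in M) <= C            # budget
for j in J:
    prob += pulp.lpSum(y[j, m] for m in M) <= 1           # one device/site
prob.solve(pulp.PULP_CBC_CMD(msg=False))
\end{verbatim}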
\section{Multiband Robust Optimization in Wireless Network Jamming}
\label{sec:Robust-NJP}
In the previous section, we have considered a \emph{deterministic} version of the NJP, namely we have assumed to know exactly the value of all data involved in the problem. However, in practice this assumption is unlikely to hold, as also discussed in \cite{CoEtAl08,CoEtAl09}:
assuming to possess a complete knowledge about the unfriendly network is unrealistic, especially in defence and security applications, where it may be very difficult or even dangerous to gather accurate information. Instead it is rational to assume that we can just rely on estimates of the position and the radio-electrical configuration of the TRXs.
As a consequence, it is highly reasonable to assume that we just possess an estimate of the value of the SIR balance $\Delta SIR_{t}$ in every TP $t$.
Following a practice that we have observed among wireless network design professionals dealing with uncertain SIR quantities (see \cite{DA12}), we use an estimate $\Delta\overline{SIR}_{t}$ as a reference \emph{nominal value} to define an interval of variation of the quantity, whose bounds reflect the reliability of the limited information at our disposal and our risk aversion.
If we denote by $d_{t}^{-} < 0$ and $d_{t}^{+} > 0$ the maximum negative and positive deviation from $\Delta\overline{SIR}_{t}$ that we have derived on the basis of our limited information, then the (unknown) actual value $\Delta SIR_{t}$ belongs to the interval:
$
[\Delta\overline{SIR}_{t} + d_{t}^{-}, \hspace{0.2cm} \Delta\overline{SIR}_{t} + d_{t}^{+}] \;.
$
We note that the definition of the interval of variation of $\Delta SIR_{t}$ can also take into account the intrinsic uncertainty of the fading coefficients $a_{t j}$ of the jammers: propagation of wireless signals in a real environment is affected by many distinct factors (e.g., distance between the TRX and the TP, presence of obstacles, weather) that are very hard to assess precisely. Therefore the exact value of the fading coefficients is typically not known
(see \cite{DA12} and \cite{Ra01} for an exhaustive discussion).
An example may help to clarify the negative effects of uncertainty in the NJP. Note that, as common in the WND practice, we express fading and power quantities according to a \emph{decibel} (\emph{dB}) scale. More specifically, since we measure power quantities in \emph{milliwatts} (\emph{mW}), we express power in decibels by referring to \emph{decibel-milliwatts} (\emph{dBmW}).
\noindent
\textbf{Example 1 (Uncertainty in the NJP).}
Consider a TP that is part of a wireless network subject to a noise of $N_{dB} = -114$ $dBmW$ and operating with a SIR threshold $\delta_{dB} = 10$ $dB$ and that receives a serving power of $-48$ $dBmW$ and a total interfering power (including noise) of $-61$ $dBmW$. By formula \eqref{eq:firstSIRineq}, the SIR in the TP is higher than $\delta_{dB}$, being equal to about $13$ $dB$. Therefore the TP is served.
The corresponding SIR balance $\Delta SIR$ can be computed by formula \eqref{eq:SIRbalance} and is equal to about $-50$ $dBmW$.
Suppose now that we want to jam the TP and that we can install a single jammer in a site associated with a fading coefficient of $-77$ $dB$ towards the considered TP. Additionally, suppose that the jammer can accommodate either a device $J_1$ with $P_{JAM}^{J_1} = 20$ $dBmW$ or a more powerful and costly device $J_2$ with $P_{JAM}^{J_2} = 27$ $dBmW$.
If we assume to know all the features of the jammed network, then we can successfully jam the TP by installing a device emitting a jamming power of at least about $17$ $dBmW$: in dB, the jamming condition reads $\delta_{dB} + a_{dB} + P_{JAM} \geq \Delta SIR$, i.e., $P_{JAM} \geq -50 - 10 + 77 = 17$ $dBmW$.
So installing the less powerful device $J_1$, emitting at $P_{JAM}^{J_1} = 20$ $dBmW$, is sufficient to deny service.
However, as previously discussed, in real-world scenarios it is likely that we do not know the exact value of $\Delta SIR$, but just possess an estimate $\Delta\overline{SIR}$ and an interval of deviation. Suppose then that our estimate is $\Delta\overline{SIR} = - 50$ $dBmW$ and that we consider it reasonable to experience deviations of up to $\pm$20\% of this value, so that the actual value $\Delta SIR$ lies in the interval $[-60,-40]$ $dBmW$. This interval reflects how trustworthy we consider the available information about the unfriendly network and also expresses our personal risk aversion.
If the worst deviation occurs, we have $\Delta SIR = - 40$ $dBmW$ and activating $J_1$ would no longer be sufficient to successfully jam the TP: the jamming solution deploying $J_1$ would be infeasible and thus useless. So, if we want to be protected against this deviation, we should switch to the more powerful device $J_2$, at the price of a higher deployment cost.
\qed
\medskip
\noindent
As the example has shown, the presence of uncertain data in an optimization problem can have very bad consequences: as known from sensitivity analysis, even small deviations in the value of the input data may make an optimal solution heavily suboptimal, whereas feasible solutions may turn out to be infeasible and thus completely useless in practice \cite{BeElNe09,BeBrCa11}. In our application, it is thus not possible to optimize by referring only to the nominal values $\Delta\overline{SIR}_{t}$: we must take into account the possibility of deviations within an interval.
Many methodologies have been proposed over the years to deal with data uncertainty: \emph{Stochastic Programming} is commonly considered the oldest, while in the last decade \emph{Robust Optimization} has enjoyed wide success, especially in real-world applications, thanks to its accessibility and computational tractability. We refer the reader to \cite{BeElNe09,BeBrCa11} for a general discussion about the impact of data uncertainty in optimization and for an overview of the main methodologies proposed in the literature to deal with uncertain data. The two references focus in particular on the theory and applications of Robust Optimization (RO), the methodology that we adopt in this paper to tackle data uncertainty. RO is based on two main concepts: 1) the decision maker defines an \emph{uncertainty set}, which reflects his risk aversion and identifies the deviations of the coefficients against which he wants to be protected; 2) protection against the deviations specified by the uncertainty set is guaranteed in the form of hard constraints that cut off all the feasible solutions that may become infeasible for some deviation included in the uncertainty set.
Formally, suppose that we are given a generic 0-1 linear program:
$$
v
\hspace{0.1cm} = \hspace{0.1cm}
\max
\hspace{0.1cm}
c' \hspace{0.05cm} x
\hspace{0.5cm}
\mbox{ with }
\hspace{0.1cm}
x \in {\cal F}
=
\{
A \hspace{0.05cm} x \leq b,
\hspace{0.15cm}
x \in \{0,1\}^{n}
\}
$$
\noindent
and that the coefficient matrix $A$ is uncertain, i.e., we do not know the exact value of its entries. However, we are able to identify a family $\cal A$ of coefficient matrices that represent the possible realizations of the uncertain matrix $A$, i.e., $A \in \cal A$. This family represents the uncertainty set of the robust problem. A \emph{robust optimal solution}, i.e., a solution protected against data deviations, can be computed by solving the \emph{robust counterpart} of the original problem:
$$
v^{{\cal R}}
\hspace{0.1cm} = \hspace{0.1cm}
\max
\hspace{0.1cm}
c' \hspace{0.05cm} x
\hspace{0.5cm}
\mbox{ with }
\hspace{0.1cm}
x \in {\cal R}
=
\{
A \hspace{0.1cm} x \leq b
\hspace{0.3cm}
\forall A \in {\cal A},
\hspace{0.15cm}
x \in \{0,1\}^{n}
\} \; ,
$$
\noindent
which considers only the solutions that are feasible for all the coefficient matrices in the uncertainty set ${\cal A}$. Therefore, the robust feasible set is such that ${\cal R} \subseteq {\cal F}$. The choice of the coefficient matrices included in ${\cal A}$ should reflect the risk aversion of the decision maker.
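\smallskip
\noindent
As a simple illustration, when the uncertainty set ${\cal A}$ is a small finite family, the robust counterpart can be obtained by replicating each uncertain constraint for every matrix in ${\cal A}$. The following Python/PuLP sketch does exactly this; the matrices and vectors are illustrative placeholders.
\begin{verbatim}
# Sketch: robust counterpart over a finite uncertainty set.
import pulp

n = 3
c = [3.0, 2.0, 1.0]
b = [4.0, 4.0]
A_family = [                     # the uncertainty set (two matrices)
    [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]],
    [[1.5, 1.0, 0.0], [0.0, 1.2, 1.0]],
]

prob = pulp.LpProblem("robust_counterpart", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(n)]
prob += pulp.lpSum(c[j] * x[j] for j in range(n))
for A in A_family:               # one constraint copy per matrix
    for i, row in enumerate(A):
        prob += pulp.lpSum(row[j] * x[j] for j in range(n)) <= b[i]
prob.solve()                     # optimum of the robust counterpart
\end{verbatim}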
Guaranteeing protection against data deviations entails the so-called \emph{price of robustness} \cite{BeSi04}: the optimal value of the robust counterpart is in general worse than that of the original problem, i.e., $v^{{\cal R}} \leq v$, since the feasible set is restricted to robust solutions only. The price of robustness reflects the features of the uncertainty set: uncertainty sets expressing higher risk aversion take into account more severe and unlikely deviations, leading to higher protection but also a higher price of robustness; conversely, uncertainty sets expressing riskier attitudes tend to neglect improbable deviations, offering less protection but also a reduced price of robustness.
\\
In the next paragraph, we fully describe the uncertainty model that we adopt.
\subsection{RHS Uncertainty in Wireless Network Jamming}
The data uncertainty affecting our problem needs a special discussion. As pointed out in \cite{BeElNe09,BeBrCa11}, most RO models consider so-called \emph{row-wise} uncertainty. This means that protection against data deviations is defined separately for each constraint subject to uncertainty, by considering the worst total deviation that the constraint may experience w.r.t. the uncertainty set. More formally, consider again a generic uncertain 0-1 linear program:
\begin{align*}
\max
\sum_{j\in J} c_j \cdot x_j
\hspace{0.4cm}
\mbox{ s.t. }
\sum_{j\in J} a_{ij} \cdot x_j \leq b_i
\hspace{0.3cm}
i\in I,
\hspace{0.4cm}
x_j \in \{0,1\}
\hspace{0.3cm}
j \in J
\; .
\end{align*}
\noindent
where w.l.o.g. we assume that the uncertainty only regards the coefficients $a_{ij}$ (uncertainty affecting cost coefficients or RHSs can be easily reformulated as coefficient matrix uncertainty, see \cite{BeSi04}).
If we denote the uncertainty set by $U$, following a row-wise uncertainty paradigm the robust counterpart is:
\begin{align*}
\max
\sum_{j\in J} c_j \cdot x_j
\hspace{0.15cm}
\mbox{ s.t. }
\sum_{j\in J} \bar{a}_{ij} \cdot x_j + DEV_i(x,U) \leq b_i
\hspace{0.25cm}
i\in I,
\hspace{0.30cm}
x_j \in \{0,1\}
\hspace{0.15cm}
j \in J
\; .
\end{align*}
\noindent
where each uncertain constraint $i \in I$ 1) refers to the nominal value $\bar{a}_{ij}$ of each coefficient and 2) includes an additional term $DEV_i(x,U)$ representing the maximum total deviation that $i$ may experience for the solution $x$ and the uncertainty set $U$. This problem is actually non-linear, since $DEV_i(x,U)$ hides a maximization problem based on the definition of the uncertainty set (see \cite{BeSi04,BuDA12a,BuDA12b}).
A central question in RO is how to model the uncertainty through a suitable uncertainty set $U$. The majority of applied studies of RO model $U$ as a cardinality-constrained uncertainty set \cite{BeBrCa11}, primarily referring to the renowned $\Gamma$-robustness model ($\Gamma$-Rob) by Bertsimas and Sim \cite{BeSi04}. The main feature of these particular uncertainty sets is to impose an upper bound on the number of coefficients that may deviate to their worst value in each constraint. The non-linearity of the robust counterpart due to the presence of $DEV_i(x,U)$ is then solved by exploiting strong duality and defining a larger but compact and linear robust counterpart, as explained in \cite{BeSi04} and \cite{BuDA12a,BuDA12b}.
In relation to this general row-wise RO setting, the uncertain NJP that we consider is a special type of uncertain problem: uncertainty only affects the RHS of each SIR constraint of NJP-01. As a consequence, if we adopt row-wise uncertainty and a cardinality-constrained uncertainty set, then the upper bound on the number of deviating coefficients in each constraint \eqref{GenNJP_SIR} is equal to either 0 or 1. In other words, either the constraint is not subject to uncertainty, and thus the actual value and the nominal value coincide (i.e., $\Delta SIR_{t} =\Delta\overline{SIR}_{t}$), or the constraint is subject to uncertainty, and thus the actual value is equal to the worst deviating value (i.e., $\Delta SIR_{t} = \Delta\overline{SIR}_{t} + d_{t}^{+}$). The robust counterpart thus simply reduces to a nominal problem with modified RHS values. We stress that this is a very rigid representation of the uncertainty and we would like to benefit from a richer one.
A source of inspiration for a richer model can be represented by \emph{Multiband Robust Optimization} (MB) and related multiband uncertainty sets, introduced by B\"using and D'Andreagiovanni in 2012 to generalize and refine classical $\Gamma$-Rob (see e.g., \cite{BuDA12a,BuDA12b,BuDA14} and \cite{BaEtAl13}).
In our case, we want to adopt a distinct but similar definition of multiband uncertainty.
To define this multiband-like uncertainty set for RHS uncertainty:
\begin{enumerate}
\item we partition the overall deviation range
$
[d_{t}^{-}, \hspace{0.1cm} d_{t}^{+}]
$
into $K$ bands, defined on the basis of $K$ deviation values:
\\
$
-\infty<
{d_{t}^{-} = d_{t}^{K^{-}}<\cdots<d_{t}^{-1}
\hspace{0.1cm}<\hspace{0.2cm}d_{t}^{0}=0\hspace{0.2cm}<\hspace{0.1cm}
d_{t}^{1}<\cdots<d_{t}^{K^{+}}} = d_{t}^{+}
<+\infty ;
$
\item through these deviation values, $K$ deviation bands are defined, namely:
a set of positive deviation bands $k\in \{1,\ldots,K^{+}\}$ and a set of negative deviation bands $k \in \{K^{-}+1,\ldots,-1,0\}$, such that a band $k\in \{K^{-}+1,\ldots,K^{+}\}$ corresponds to the range $(d_{t}^{k-1},d_{t}^{k}]$, and band $k = K^{-}$
corresponds to the single value $d_{t}^{K^{-}}$. Note that $K = K^{+} + K^{-}$;
\item considering the RHS values $\Delta SIR_{t}$ of the entire set of constraints \eqref{GenNJP_SIR}, we impose a lower and an upper bound on the number of values that may deviate in each band: for each band $k\in K$, we introduce two bounds $l_k, u_{k} \in \mathbb{Z}_{+}$ with $0 \leq l_k \leq u_{k} \leq |T'|$. As additional assumptions, we do not limit the number of coefficients that may deviate in band $k = 0$ (i.e., $u_{0}=|T'|$) and we impose that $\sum_{k \in K} l_k \leq |T'|$, to ensure the existence of a feasible realization of the uncertainty set.
\end{enumerate}
\noindent
We call this set \emph{RHS-Multiband Set (RHS-MB)}.
\smallskip
\noindent
\textbf{Remark.}
We stress that point 3 differs from the classical definition of multiband uncertainty set, presented in \cite{BuDA12a,BuDA12b}, where a row-wise uncertainty perspective is assumed and the system of bounds for the bands is defined separately for each uncertain constraint of the problem.
\smallskip
\noindent
An MB uncertainty set is particularly suitable to represent the past behaviour of uncertainty summarized by histograms, as explained in \cite{BaEtAl13,BuDA12a,BuDA12b}. Moreover, such a set has the advantage of taking into account negative deviation bands, which are neglected in classical cardinality-constrained sets: we of course want to be protected against positive deviations that lead to infeasibility, but in real-world applications we commonly experience also negative deviations,
which compensate the positive ones and reduce the conservatism of solutions.
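\smallskip
\noindent
As a small illustration of the definition above, the following Python sketch checks whether an assignment of TPs to deviation bands respects the bounds $(l_k, u_k)$ of an RHS-MB set; all values are illustrative.
\begin{verbatim}
# Sketch: membership test for an RHS-Multiband realization.
from collections import Counter

def respects_rhs_mb(bands, l, u):
    # bands maps each TP to the (single) band it deviates in;
    # band 0 is the null-deviation band.
    counts = Counter(bands.values())
    return all(l[k] <= counts.get(k, 0) <= u[k] for k in l)

# 5 TPs, bands -2..2, at most 2 TPs in the worst band k = 2.
l = {k: 0 for k in range(-2, 3)}
u = {k: 5 for k in range(-2, 3)}
u[2] = 2
print(respects_rhs_mb({0: 2, 1: 2, 2: 0, 3: -1, 4: 0}, l, u))  # True
print(respects_rhs_mb({0: 2, 1: 2, 2: 2, 3: -1, 4: 0}, l, u))  # False
\end{verbatim}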
A critical question is now: how can we solve the uncertain NJP when the RHS uncertainty is modeled by an RHS-Multiband Set? In the case of a purely linear program, we could define the dual of the uncertain problem, thus transforming the RHS uncertainty into objective function uncertainty, and then adopt a standard RO dualization approach to reach a compact robust counterpart, as in \cite{Mi09}.
However, due to the integrality constraints, the classical dualization approach cannot be applied in our case.
As an alternative, we can adopt a \emph{robust cutting plane approach}: we solve NJP-01 obtaining an optimal solution, then we check whether the solution is also feasible, i.e., robust, w.r.t. a specified RHS-MB. If this is the case, we have found a robust optimal solution and we are done. Otherwise, we separate a \emph{robustness cut}, namely an inequality that cuts off this non-robust solution, add the cut to the problem and solve the new resulting problem. This basic step is then iterated, as in a canonical cutting-plane algorithm, until no new cut is separated and thus the generated solution is robust and optimal.
Under canonical row-wise uncertainty, in $\Gamma$-Rob and MB, robustness cuts can be efficiently separated.
For $\Gamma$-Rob, the separation of a robustness cut is trivial and just consists in sorting the deviations and selecting the $\Gamma > 0$ worst ones \cite{FiMo12}. This simple approach is instead not valid for MB, but in \cite{BuDA12a,BuDA12b} we proved that the separation can be done in polynomial time by solving a min-cost flow problem.
As we stressed above, RHS-MB poses a new challenge.
More formally, suppose that we have a feasible solution $(\bar{z},\bar{y})$ to NJP-01 and that we want to check its robustness. With a slight abuse of notation, in what follows we denote by $T'$ the subset of served TPs that are successfully jammed by $(\bar{z},\bar{y})$, i.e., those with $\bar{z}_{t} = 1$.
A robustness cut is generated by solving the following 0-1 linear program, which can be interpreted as the problem of an adversary that attempts to compromise the feasibility of our optimal jamming solution by picking the worst deviation for $(\bar{z},\bar{y})$ allowed by RHS-MB.
\begin{align}
V
=
\max
&
\sum_{t \in T'} v_t
&&
\tag{SEP}
\label{MBdev_objFunction}
\\
&
\sum_{k\in K} d_{t}^{k} \cdot w_{t}^{k}
+ \mathcal{M} \cdot (1 - v_t) \geq JAM_t - \Delta\overline{SIR}_{t}
\label{MBdev_JAMviolation}
&&
t \in T'
\\
&
l_{k} \leq \sum_{t\in T'} w_{t}^{k} \leq u_{k}
&&
k\in K
\label{MBdev_constraint}
\\
&
\sum_{k\in K} w_{t}^{k} \leq 1
&&
t \in T'
\label{MBgub_constraint}
\\
&
w_{t}^{k} \in \{0,1\}
&&
t \in T', k \in K
\label{MBgub_variables}
\\
&
v_{t} \in \{0,1\}
&&
t \in T'
\label{MBgub_deniedJamming}
\; .
\end{align}
\noindent
The separation problem SEP uses: 1) a binary variable $v_{t}$, $\forall t \in T'$, equal to 1 when the jamming of TP $t$ is denied and 0 otherwise; 2) a binary variable $w_{t}^{k}$ equal to 1 when the SIR balance $\Delta SIR_{t}$ of $t$ deviates in band $k$ and 0 otherwise.
The objective function maximizes the number of TPs whose jamming is denied by the adversary. Each constraint \eqref{MBdev_JAMviolation} expresses the violation of the corresponding constraint \eqref{GenNJP_SIR} when the jamming of TP $t$ is denied by feasible deviations of the SIR balance according to RHS-MB, namely
$JAM_t < \Delta\overline{SIR}_{t} + \sum_{k\in K} d_{t}^{k} \cdot w_{t}^{k}$, where $JAM_t = \delta \sum_{j \in J} \sum_{m \in M} a_{t j} \cdot P_{JAM}^{m} \cdot \bar{y}_{jm}$ is the total jamming power that $t$ receives under the jamming solution $(\bar{z},\bar{y})$.
Constraints (\ref{MBdev_constraint})-(\ref{MBgub_constraint}) enforce the structure of the uncertainty set RHS-MB: the first family imposes the lower and upper bounds on the number of RHS values $\Delta SIR_{t}$ that may deviate in each band $k \in K$, whereas the second family imposes that each value $\Delta SIR_{t}$ deviates in at most one band (note that $\sum_{k\in K} w_{t}^{k} = 0$ corresponds to a null deviation, i.e., to deviating in band $k = 0$).
It is easy to observe that if the optimal value $V$ of SEP is equal to 0, then $(\bar{z},\bar{y})$ is robust, since it is not possible to compromise the jamming of any TP for the given uncertainty set RHS-MB. Conversely, if $V \geq 1$ and $(v^{*},w^{*})$ is an optimal solution of SEP, then $(\bar{z},\bar{y})$ is not robust, the jamming of $V$ TPs may be compromised and
\begin{equation}
\sum_{t \in T': \hspace{0.1cm} v_t^{*}=1} z_t \leq V - 1
\label{eq:robCut}
\end{equation}
is evidently a robustness cut that must be added to the original problem. After this, we can iterate the basic robustness check step.
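\smallskip
\noindent
To fix ideas, the following Python/PuLP sketch builds SEP and extracts the TPs entering the robustness cut \eqref{eq:robCut}; all input data are illustrative placeholders.
\begin{verbatim}
# Sketch of SEP and of the robustness cut extraction.
import pulp

Tp = range(3)                        # jammed TPs for (z_bar, y_bar)
K = [-1, 0, 1]                       # deviation bands
d = {(t, k): 5.0 * k for t in Tp for k in K}  # band deviations
l = {-1: 0, 0: 0, 1: 0}
u = {-1: 3, 0: 3, 1: 2}
JAM = {t: 12.0 for t in Tp}          # received jamming powers
dSIRnom = {t: 10.0 for t in Tp}      # nominal SIR balances
bigM = 1e4

sep = pulp.LpProblem("SEP", pulp.LpMaximize)
v = pulp.LpVariable.dicts("v", Tp, cat="Binary")
w = pulp.LpVariable.dicts("w", [(t, k) for t in Tp for k in K],
                          cat="Binary")
sep += pulp.lpSum(v[t] for t in Tp)
for t in Tp:                         # denial of jamming of TP t
    sep += (pulp.lpSum(d[t, k] * w[t, k] for k in K)
            + bigM * (1 - v[t]) >= JAM[t] - dSIRnom[t])
for k in K:                          # per-band bounds over all TPs
    sep += pulp.lpSum(w[t, k] for t in Tp) >= l[k]
    sep += pulp.lpSum(w[t, k] for t in Tp) <= u[k]
for t in Tp:                         # at most one band per TP
    sep += pulp.lpSum(w[t, k] for k in K) <= 1
sep.solve()

V = round(pulp.value(sep.objective))
if V >= 1:                           # cut: sum of z_t over these <= V-1
    cut_tps = [t for t in Tp if v[t].value() > 0.5]
\end{verbatim}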
The general structure of the proposed robust cutting plane algorithm is described in Algorithm 1.
Assuming the use of a solver like CPLEX, implementing a branch-and-cut solution algorithm, the separation problem is solved every time the solver finds a feasible solution to NJP-01. If a robustness cut is identified for the current solution, then it is added as a constraint to NJP-01.
\begin{algorithm}
\caption{Robust Cutting Planes for NJP subject to RHS-MB uncertainty}
\label{ALGhybrid}
\begin{algorithmic}[1]
\Require an instance of NJP-01 and of RHS-MB
\Ensure a robust optimal solution $(z^{*},y^{*})$ to NJP-01 w.r.t. RHS-MB (if existent)
\State Solve NJP-01 by a branch-and-cut-based MIP solver (denoted by SOLVER)
\While{SOLVER has not found a robust optimal solution $(z^{*},y^{*})$ to NJP-01 nor proved that $(z^{*},y^{*})$ does not exist}
\State Run SOLVER
\If{SOLVER finds a feasible solution $(\bar{z},\bar{y})$ to NJP-01}
\State Solve SEP for $(\bar{z},\bar{y})$ and RHS-MB
\If{$V > 0$}
\State Generate a robustness cut \eqref{eq:robCut} and add it to NJP-01
\EndIf
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
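\noindent
For readers who prefer the loop in executable form, the following minimal Python sketch shows the basic iterated variant of Algorithm 1 (without the solver-callback integration described above); the helper functions are assumptions wrapping the two models sketched earlier.
\begin{verbatim}
# Sketch of the outer cutting-plane loop (basic iterated variant).
def robust_cutting_planes(solve_njp01, solve_sep, add_cut):
    while True:
        sol = solve_njp01()      # current (possibly non-robust) optimum
        if sol is None:
            return None          # no robust solution exists
        V, compromised = solve_sep(sol)
        if V == 0:
            return sol           # robust optimal solution found
        add_cut(compromised, V)  # add sum(z_t) <= V - 1 and iterate
\end{verbatim}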
\section{Computational results}
\label{sec:experiments}
To evaluate the performance of our robust cutting plane algorithm, we considered a set of 15 realistic instances, based on network data defined in collaboration with network engineers of the Technical Strategy \& Innovations Unit of British Telecom Italia. All the instances refer to a fixed WiMAX network (see \cite{AnGhMu07}, \cite{DAMa09} and \cite{DA12} for an introduction to WiMAX technology and modeling) and are based on a real terrain data model and population statistics of a residential urban area in the administrative district of Rome (Italy). The instances consider distinct networks with up to $|T| = 224$ TPs and $|S| = 20$ TRXs, operating on one of the frequency channels reserved for WiMAX transmissions in Italy in the band [3.4$\div$3.6] GHz and using a QAM-16 modulation scheme.
We used these data to build the MILP problem SPAP-MILP for each instance and obtain realistic wireless network configurations to jam by solving the uncertain version of problem NJP-01. The revenue $r_t$ associated with the service coverage of each TP was derived from population statistics.
In order to build NJP-01 and set up the robust cutting-plane algorithm, we assume that we know the set $T'$ of served TPs. However, we also assume that we do not know exactly the value of the SIR balance granted by the solution of SPAP-MILP, but just have at our disposal an estimate $\Delta\overline{SIR}_{t}$ (different from the actual value provided by the solution). On the basis of discussions on the topic with network professionals,
we decided to model deviations through an RHS multiband uncertainty set including 5 deviation bands (2 negative and 2 positive, besides the null deviation band), with the basic deviation of each band equal to 20\% of the nominal value. Concerning the jammers, we supposed to have three types of jamming devices (i.e., $|M| = 3$), with a deployment cost reflecting the population of the TPs and increasing with it (we assume a higher risk of deployment in more populated areas, where the jammers could be discovered). The profit $\pi_t$ of successfully jamming a TP was also based on population data.
All experiments were made on a 2.70 GHz Intel Core i7 with 8 GB of RAM. The code was written in the C/C++ programming language and the optimization problems were solved by IBM ILOG CPLEX 12.5 with the support of Concert Technology. The results of the experiments are reported in Table \ref{tab:1},
in which the first column states the instance ID, whereas the following four columns report: the number $|T|$ of TPs and the number $|S|$ of TRXs in the SPAP-MILP instance; the number $|T^{*}|$ of covered TPs in the feasible solution of SPAP-MILP used for building the corresponding NJP-01 instance; the number $|J|$ of jammers in the NJP-01 instance. The following four columns report instead: the optimal number \#JAM(Nom) of jammed TPs for the nominal NJP-01 problem (no uncertainty considered); the optimal number \#JAM(Rob) of jammed TPs for the robust version of NJP-01 solved by Algorithm 1; the percentage price of robustness PoR\%; the number \#Cuts of robust cuts generated during the execution of Algorithm 1.
The main observations about the results concern the comparison between the optimal value of the nominal problem and that of its robust version. Concerning this central point, we can observe that the price of robustness remains contained, with an average value of -17.1\% and a peak of -23.18\% in the case of instance I9. We consider this a reasonable price to pay to obtain protection against the deviations that the decision maker considers relevant. Furthermore, we can notice that the number of robustness cuts separated during the execution of Algorithm 1 is limited, especially in the case of the smaller instances.
Concerning solution time, while solving the nominal version of NJP-01 required a time ranging from about 30 to about 70 minutes, depending upon the features of the wireless network configuration to be jammed (identified by a solution of SPAP-MILP), the execution time of Algorithm 1 could reach approximately 3 hours. We believe that this time could be significantly reduced by studying a stronger separation model and more efficient separation algorithms.
\begin{table}
\caption{Experimental results}
\label{tab:1}
\begin{tabular}{c cccc cccc}
\hline
\noalign
{\smallskip}
ID & $|T|$ & $|S|$ & $|T^{*}|$ & $|J|$ & \#JAM(Nom) & \#JAM(Rob) & PoR\% & \#Cuts
\\
\noalign{\smallskip}\hline\noalign{\smallskip}
I1 & 100 & 6 & 65 & 15 &
44 & 37 & -15.90 & 29
\\
I2 & 100 & 9 & 71 & 15 &
51 & 45 & -11.76 & 41
\\
I3 & 100 & 12 & 75 & 15 &
46 & 38 & -17.93 & 37
\\
I4 & 150 & 6 & 85 & 15 &
49 & 43 & -12.24 & 32
\\
I5 & 150 & 9 & 93 & 15 &
68 & 57 & -16.17 & 31
\\
I6 & 150 & 12 & 106 & 20 &
75 & 64 & -14.66 & 35
\\
I7 & 169 & 12 & 92 & 20 &
47 & 39 & -17.02 & 49
\\
I8 & 169 & 16 & 95 & 20 &
66 & 53 & -19.69 & 58
\\
I9 & 169 & 20 & 120 & 20 &
69 & 53 & -23.18 & 75
\\
I10 & 196 & 12 & 108 & 20 &
73 & 58 & -20.54 & 68
\\
I11 & 196 & 16 & 122 & 25 &
82 & 69 & -15.85 & 54
\\
I12 & 196 & 20 & 134 & 25 &
89 & 70 & -21.34 & 92
\\
I13 & 224 & 15 & 142 & 25 &
102 & 82 & -19.60 & 87
\\
I14 & 224 & 20 & 159 & 25 &
115 & 96 & -16.52 & 101
\\
I15 & 224 & 25 & 170 & 25 &
109 & 93 & -14.67 & 103
\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
We considered the \emph{Wireless Network Jamming Problem}, namely the problem of optimally placing and configuring a set of jammers in order to interdict the communications of a wireless network.
We revisited the models proposed in the seminal works by Commander et al., better highlighting the strong connections with classical wireless network design formulations. Moreover, we addressed the uncertain nature of the problem by proposing an original robust cutting plane algorithm, inspired by \emph{Multiband Robust Optimization}, to deal with the RHS uncertainty of the problem and overcome the rigidity of canonical row-wise uncertainty approaches. As future work, we plan to investigate stronger models for the problem, tackling in particular the presence of big-M coefficients and devising more effective and efficient separation algorithms.
QUIC is a recently standardized protocol~\cite{rfc9000} aiming at providing the services of TLS/TCP (built-in encryption, reliable data transfer,...) with data multiplexing atop UDP.
Initially designed for HTTP/3~\cite{ietf-quic-http-34}, QUIC sets itself up as the next general purpose transport protocol for the Internet and many extensions, such as the support for unreliable data transfer~\cite{ietf-quic-datagram-06}, have been proposed.
Thanks to its flexibility, QUIC can serve many use cases~\cite{kosek2021beyond} while enabling rapid experiments with, e.g., congestion control algorithms~\cite{kakhki2017taking,narayan2018restructuring}.
While QUIC supports probing new networks and switching to a different network, it only provides single-path data transmission.
In particular, endhosts cannot simultaneously use different network paths to, e.g., aggregate their bandwidths.
Still, there is demand for such real multipath support for various use cases~\cite{interim-20-10}.
Multipath TCP~\cite{raiciu2012hard} and CMT-SCTP~\cite{iyengar2006concurrent} can now address such use cases, including mobility support for delay-sensitive applications~\cite{de2018tuning},
network handovers in high-speed trains~\cite{li2018measurement} and hybrid access networks~\cite{bonaventure2016multipath}.
However, both Multipath TCP and CMT-SCTP faced deployment issues on the Internet~\cite{honda2011still,budzisz2012taxonomy}.
QUIC mitigates such network interference by design thanks to its built-in encryption, raising interest to bring multipath support.
The first attempts~\cite{de2017multipath, viernickel2018multipath} were mostly built on the design experience of Multipath TCP.
However, these were based on an old version of QUIC~\cite{langley2017quic} which differs from the standardized one.
As of September 2021, there were three different multipath proposals for standardized QUIC: \texttt{draft-deconinck-quic-multipath}~\cite{deconinck-quic-multipath-07,de2021multiflow}, \texttt{draft-huitema-quic-mpath-option}~\cite{huitema-quic-mpath-option-01} and \texttt{draft-liu-multipath-quic}~\cite{liu-multipath-quic-04,zheng2021xlink}.
Having several proposals actually slowed down reaching a consensus on one approach.
Some were considered too complex.
For instance, \texttt{draft-deconinck} provides support for unidirectional paths (while QUIC's path validation ensures bidirectional ones) and IP address communication (raising concerns about forged addresses).
\texttt{draft-liu} adds Quality of Experience signaling, which might not be used in some use cases.
In order to advance the multipath work, all the previous drafts' authors worked together to make a common proposal~\cite{lmbdhk-quic-multipath-00} focusing on the core components that would suit all the aforementioned use cases.
This multipath draft recently got adopted at IETF112~\cite{ietf112-minutes}.
However, there remains a core design issue that requires consensus: the amount of packet number spaces that a multipath QUIC connection should use.
While \texttt{draft-huitema} advocates for keeping a single application packet number space, \texttt{draft-deconinck} and \texttt{draft-liu} dedicate an application packet number space per path.
While the unified proposal enables negotiating both options, it is likely only one will be widely supported in the end.
This paper aims at providing insights to the network research community in order to understand the implications of this design choice, not only for Multipath QUIC, but also for any multipath transport protocol.
Indeed, MPTCP combines per-path TCP sequence numbers with a global MPTCP data sequence number, while CMT-SCTP has a sequence number per data stream.
Our evaluation reveals that while both designs work for QUIC, using a single packet number space\xspace makes the performance of the transfer more dependent on the receiver's acknowledgment strategy than using multiple ones.
The remainder of this paper is organized as follows.
We start in Section~\ref{sec:multipath} by describing the core components of Multipath QUIC and explaining the advantages and drawbacks of each packet number space design.
Then, in Section~\ref{sec:evaluation}, we evaluate these designs by considering two different implementations (\texttt{picoquic}\xspace~\cite{huitema_picoquic_2021} and \texttt{PQUIC}\xspace~\cite{de2019pluginizing}) under a broad range of network scenarios using the NS3 simulator~\cite{riley2010ns}.
Finally, we discuss our results in Section~\ref{sec:discussion}.
\section{Bringing Multipath to QUIC}\label{sec:multipath}
QUIC packets are fully encrypted, except a small header containing a few clear-text fields.
Among them, the \emph{Destination Connection ID} enables the endhosts to map packets to QUIC connections.
This makes QUIC unbound to the 4-tuple (IP\textsubscript{src}, IP\textsubscript{dst}, port\textsubscript{src}, port\textsubscript{dst}).
Some servers negotiate 0-length Connection IDs to limit the wire overhead.
In such cases, the QUIC connection is identified by the 4-tuple.
QUIC packets contain a monotonically increasing packet number that is used to compute a unique AEAD nonce of at least 64 bits.
When a packet is lost, its content can be retransmitted but in a packet with a greater packet number.
While SCTP has chunks, the core QUIC data, located in the encrypted payload, are called \emph{frames}.
The \texttt{STREAM} frame carries the application data.
This frame contains an absolute offset that does not wrap-around, unlike TCP's sequence number and MPTCP's Data Sequence Number.
Furthermore, unlike (MP)TCP and (CMT-)SCTP, QUIC does not acknowledge data sequence numbers, but packets numbers using the \texttt{ACK} frame.
Compared to TCP, where the number of selective acknowledgments~\cite{mathis1996rfc2018} is often limited to 2-3 per packet, an \texttt{ACK} frame is only constrained by the size of the QUIC packet, allowing the number of ranges to reach several hundred entries.
In addition, the ACK delay field of the \texttt{ACK} frame includes the time between the reception of the largest packet number and the instant the frame is sent, allowing precise RTT estimations.
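To illustrate how a receiver can turn the set of received packet numbers into the ranges of an \texttt{ACK} frame, consider the following Python sketch; it is illustrative and not taken from any implementation, and the cap mimics an implementation limit on additional ACK Blocks.
\begin{verbatim}
# Sketch: derive ACK ranges (largest first) from received
# packet numbers, keeping at most max_blocks extra ACK Blocks.
def ack_ranges(received, max_blocks=32):
    pns = sorted(set(received), reverse=True)
    ranges = []
    for pn in pns:
        if ranges and ranges[-1][1] == pn + 1:
            ranges[-1][1] = pn        # extend the current range
        else:
            ranges.append([pn, pn])   # new [largest, smallest]
    return ranges[:max_blocks + 1]    # first range + ACK Blocks

# Packets 1, 3, 5 arrived on a fast path and 0, 2 on a slow one.
print(ack_ranges([0, 1, 2, 3, 5]))    # [[5, 5], [3, 0]]
\end{verbatim}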
One of the key features of QUIC is connection migration.
For instance, a connection initiated by a smartphone can move from a Wi-Fi network to a cellular one due to, e.g., user mobility.
For that, a client willing to change the network path needs first to check that the endpoint is still reachable using the new 4-tuple before initiating connection migration.
This process is called \emph{path validation}.
To do so, both endhosts need to have an unused Destination Connection ID provided by their peer through \texttt{NEW\_CONNECTION\_ID} frames\footnote{Unless the peer negotiated 0-length Connection IDs.}.
An endhost starts the path validation process by sending a packet containing a \texttt{PATH\_CHALLENGE} frame containing 8 bytes of opaque data from a new 4-tuple with an unused Destination Connection ID.
Upon reception of such packet, the peer replies on the newly-seen 4-tuple with another
packet containing a \texttt{PATH\_RESPONSE} frame echoing the opaque data of the \texttt{PATH\_CHALLENGE} frame using an unused Destination Connection ID.
Upon reception of that packet, the endhost marks the path as validated.
At this point, the peer may also initiate path validation on this new path.
As soon as the client starts sending data (e.g., \texttt{STREAM} or \texttt{ACK} frames) over this new network path, the server stops using the previous path and migrates the connection to the new 4-tuple.
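A minimal sketch of this path validation logic, ignoring retransmissions and anti-amplification limits and using hypothetical helper names, could look as follows.
\begin{verbatim}
# Sketch: simplified QUIC path validation state (illustrative).
import os

class PathValidator:
    def __init__(self):
        self.pending = {}            # 4-tuple -> challenge data

    def start(self, four_tuple, send):
        data = os.urandom(8)         # 8 bytes of opaque data
        self.pending[four_tuple] = data
        send(four_tuple, ("PATH_CHALLENGE", data))

    def on_challenge(self, four_tuple, data, send):
        send(four_tuple, ("PATH_RESPONSE", data))  # echo the data

    def on_response(self, four_tuple, data):
        if self.pending.get(four_tuple) == data:
            del self.pending[four_tuple]
            return True              # path validated
        return False
\end{verbatim}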
While QUIC bundles such a connection migration feature, it does not enable hosts to simultaneously use several network paths to send data.
The unified proposal~\cite{lmbdhk-quic-multipath-00} strives at providing multipath usage with as few changes as possible to the QUIC specification~\cite{rfc9000}.
It focuses on the core building blocks: negotiating multipath, initiating new paths and numbering packets.
Advanced multipath-specific algorithms, such as packet scheduling and selecting the paths to use, are out-of-scope of the proposal.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=.77\columnwidth]{figures/single-pns.png}
\caption{Single packet number space case. }
\label{fig:spns}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=.77\columnwidth]{figures/multiple-pns.png}
\caption{Multiple packet number spaces case.}
\label{fig:mpns}
\end{subfigure}
\hfill
\caption{An example of a 12-packet data transfer where the server follows a round-robin scheduling strategy to send packets to the client. It shows a snapshot where the fourth packet on the bottom path got received. For the sake of this example, the client replies with a packet containing an acknowledgment frame on the same path as the last received packet, but in practice the client has no constraint on the path it should use.}
\label{fig:example}
\end{figure*}
\paragraph{Multipath Negotiation.}
As many other QUIC extensions, the multipath one is negotiated during the connection handshake using a QUIC transport parameter named \texttt{enable\_multipath}.
If both the client and the server advertise compatible values indicating multipath support\footnote{The multipath negotiation allows hosts to advertise their support for one of the packet number space designs, or both. If, e.g., the client has exclusive support for single packet number space\xspace while the server only handles multiple packet number spaces\xspace, then the multipath negotiation fails.}, then the connection uses the multipath extension.
Note that the handshake needs to be fully completed before multiple paths can be used.
\paragraph{Initiating New Paths.}
Multipath QUIC builds on the path validation process to initiate new paths.
The main addition to the single-path QUIC is that with the multipath extension, several validated paths can be simultaneously used to send data packets.
Each path, using a different 4-tuple, is identified by the sequence number of the Destination Connection ID (communicated by the \texttt{NEW\_CONNECTION\_ID} frame) it uses.
Packets can then be mapped to a path thanks to the packet's Connection ID, or the packet's 4-tuple when zero-length Connection IDs are used.
Note that endhosts can change at any time the Connection ID used over a given path.
Furthermore, since an endhost must use distinct Destination Connection IDs per path, its peer controls the maximum number of paths that can be opened by choosing when to send \texttt{NEW\_CONNECTION\_ID} frames\footnote{A hard limit on the number of usable Connection IDs is determined during the handshake thanks to the \texttt{active\_connection\_id\_limit} QUIC transport parameter.}.
\paragraph{Numbering Packets.}
In single-path QUIC, all post-handshake packets use a single application packet number space.
The current multipath proposal enables implementations to experiment with either single packet number space or multiple packet number spaces.
We first describe the single packet number space\xspace design.
Then, we discuss the multiple packet number spaces\xspace one.
\subsection{Single Packet Number Space}
This design lets endhosts spread packets over different paths while keeping the regular QUIC application packet number space.
This means that a packet with number $N$ can be sent over path $A$ while a packet with number $N+1$ can be transmitted on path $B$.
To illustrate this situation, Figure~\ref{fig:spns} represents a server sending 12 packets to a client in a round-robin fashion.
Here, the server sends all even packet numbers on the upper path and all the odd ones on the lower path.
The packets are acknowledged with the regular ACK frame.
However, when paths do not exhibit the same performance, the receiver is likely to observe out-of-order packet number reception.
In the example, the lower path is faster than the upper one.
When the client receives packet number 2 on the upper path, it advertises that it has received the first two packets on the upper path (0, 2) and the first three packets on the lower one (1, 3, 5) by sending an ACK frame with two ranges: 5 and from 3 to 0.
Then, it receives the fourth packet on the lower path (7).
This creates a new range inside the ACK frame.
Note that the receiver cannot know \emph{a priori} the next packet number it should expect on a path.
The main concern about this approach relates to the number of ranges that the \texttt{ACK} frame sent by the receiver could contain.
When paths have very different latencies, the receiver should pay attention not to prune its ranges too quickly.
Furthermore, as we will see in Section~\ref{sec:evaluation}, implementations might want to limit the number of ranges they encode in \texttt{ACK} frames to limit the state they need to maintain.
In such cases, some packets might be acknowledged late or never by the receiver, even if they were actually received.
In addition, the RTT estimates are also less precise because the ACK delay field of the \texttt{ACK} frame only relates to the largest packet number received.
This makes the performance of a multipath transfer more dependent on the receiver's strategy.
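The following Python sketch quantifies this effect for the round-robin example: even packet numbers travel on a slow path and odd ones on a fast path, and we report the worst number of ranges the receiver has to track; the timings and the scheduling rule are illustrative.
\begin{verbatim}
# Sketch: worst-case number of ACK ranges with a single packet
# number space and round-robin scheduling over unequal paths.
def arrival_order(n_pkts, slow_rtt, fast_rtt):
    arr = [((slow_rtt if pn % 2 == 0 else fast_rtt) + pn // 2, pn)
           for pn in range(n_pkts)]     # (arrival time, pkt number)
    return [pn for _, pn in sorted(arr)]

def max_ranges(order):
    got, worst = set(), 0
    for pn in order:
        got.add(pn)
        srt = sorted(got)
        ranges = 1 + sum(1 for x, y in zip(srt, srt[1:])
                         if y != x + 1)
        worst = max(worst, ranges)
    return worst

print(max_ranges(arrival_order(40, slow_rtt=50.0, fast_rtt=5.0)))
# -> 20: all odd packets arrive before any even one.
\end{verbatim}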
\subsection{Multiple Packet Number Spaces}
With this design, each path has an associated packet number space.
As depicted in Figure~\ref{fig:mpns}, subsequent packets over a given path use consecutive packet numbers within that path's space.
Because packet numbers are now path-dependent, an augmented version of the ACK frame, called the \texttt{ACK\_MP} frame, includes the path identifier to which the acknowledged packet numbers refer to.
Such an approach avoids large ranges in \texttt{ACK\_MP} frames due to different paths' performances, making the multipath performance less dependent of the receiver's strategy.
It also enables keeping a simple per-path logic for lost packet detection.
Yet, the multiple packet number spaces\xspace design also has drawbacks.
First, it requires endhosts to use non-zero-length Connection IDs to be able to identify the packet number space of a packet.
Second, it involves changes to the use of AEAD for packet protection.
QUIC uses the packet number to compute the AEAD nonce, and this nonce must not be reused over a given connection.
To mitigate this issue when using multiple packet number spaces\xspace, the nonce includes the path identifier.
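As an illustration, a path-aware nonce can be obtained by XORing the IV with a value combining the path identifier and the packet number. The exact layout used below (a 32-bit path identifier concatenated with a 64-bit packet number) is an assumption for illustration, not the draft's normative construction.
\begin{verbatim}
# Sketch: AEAD nonce extended with a path identifier (layout
# below is an illustrative assumption, not the draft's).
def nonce(iv: bytes, path_id: int, pn: int) -> bytes:
    ext = (path_id << 64 | pn).to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, ext))

iv = bytes(12)                        # 96-bit IV, all zeros for demo
assert nonce(iv, 0, 7) != nonce(iv, 1, 7)  # same pn, distinct paths
\end{verbatim}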
\section{Evaluation}\label{sec:evaluation}
To evaluate the impact of the number of packet number spaces on Multipath QUIC performance, we explore a large set of network scenarios within the NS3 environment~\cite{riley2010ns}.
Compared to emulation, this setup enables fully reproducible and stable results while still using actual implementations.
We focus on two different open-source implementations of Multipath QUIC.
\texttt{picoquic}\xspace~\cite{huitema_picoquic_2021} supports both single packet number space and multiple packet number spaces following the unified proposal.
We want to emphasize that all the algorithms (loss detection, multipath-specific decisions,...) remain the same regardless of the packet number space design used.
\texttt{PQUIC}\xspace~\cite{de2019pluginizing} has a multipath plugin implementing an earlier proposal~\cite{deconinck-quic-multipath-07} relying on multiple packet number spaces.
Note that both implementations limit to 32 the maximum number of additional ACK Blocks (AB) contained in a single \texttt{ACK(\_MP)} frame, therefore limiting the maximum number of ranges within an \texttt{ACK(\_MP)} frame to 33.
We start our experiments with \texttt{picoquic}\xspace\footnote{Commit \texttt{f4ae862}.} and \texttt{PQUIC}\xspace\footnote{Commit \texttt{841c822}.} versions that acknowledge first the ranges that are closest to the latest acknowledged packet number.
We explore many network situations following an experimental design approach using the WSP algorithm~\cite{santiago_construction_2012} enabling us to cover broadly the parameter space with 95 points, resulting in 95 runs.
All our experiments consist of a 50 MB download initiated by a GET request over a single stream.
Relying on such a large transfer, combined with a large initial receive buffer of 2 MB, enables us to limit the impact of the packet scheduler.
The client initiates the usage of all the available paths upon handshake completion.
We define the transfer time as the delay between the first QUIC packet sent by the client and the last QUIC packet received by the client containing the \texttt{CONNECTION\_CLOSE} frame.
In addition, we generate QLOG files~\cite{marx2020debugging} at client and server sides to get a full view of the connection and the internal state of the implementations.
We first experiment with 2-path network scenarios where both paths share the same characteristics.
Then, we explore 2-path situations where paths exhibit heterogeneous performances in terms of bandwidth and delay.
Finally, we extend our heterogeneous scenarios to 3-path networks.
\subsection{Homogeneous 2-Path Experiments}
\begin{table}
\centering
\caption{Parameter space for the 95 homogeneous runs.}
\begin{tabular}{@{}c|cc@{}}
\toprule
Factor & Min & Max \\
\midrule
Bandwidth [Mbps] & 2.5 & 100 \\
RTT [ms] & 5 & 100 \\
\bottomrule
\end{tabular}
\label{tab:homo_param_space}
\end{table}
When all the network paths provide the same performance, i.e., the same delay and bandwidth, the receiver should not observe much packet reordering.
Therefore, there should not be much performance difference between packet number space designs.
To assess this intuition, we consider the parameter space depicted in Table~\ref{tab:homo_param_space}.
In this paper, unless explicitly mentioned otherwise, we only consider loss-less networks and the buffer size of the bottleneck router is always set to 1.5 times the bandwidth-delay product.
Note that actual packet losses may still occur due to buffer overflow at the bottleneck router.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/time_2p_homo.pdf}
\caption{Transfer time.}
\label{fig:time_homo}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/ack_range_avg_2p_homo.pdf}
\caption{Range size advertised by the client.}
\label{fig:ack_ranges_homo}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/stream_retrans_2p_homo.pdf}
\caption{Data sent by the server.}
\label{fig:stream_retrans_homo}
\end{subfigure}
\caption{Homogeneous experiments. The legend is common to all the figures.}
\label{fig:homo}
\end{figure*}
Figure~\ref{fig:time_homo} shows the completion times for the 50 MB transfer.
As \texttt{picoquic}\xspace supports both packet number space designs and two different congestion control schemes (Cubic and BBR), we evaluate each combination.
\texttt{PQUIC}\xspace only supports multiple packet number spaces\xspace with Cubic.
The transfer times mostly depend on the paths' bandwidth, which differs between runs.
Still, we do not observe much performance difference between single packet number space\xspace and multiple packet number spaces\xspace, which is expected.
However, having homogeneous paths does not prevent the receiver from observing reordering.
We extract the ranges advertised in the \texttt{ACK} (for single packet number space\xspace) and \texttt{ACK\_MP} frames (for multiple packet number spaces\xspace) sent by the client.
Figure~\ref{fig:ack_ranges_homo} shows the average size of the \texttt{ACK(\_MP)} ranges over runs.
With multiple packet number spaces\xspace, \texttt{ACK\_MP} frames often show a single range, as no reordering occurs within a path.
Still, a few packet losses due to buffer overflow might occur, leading to a few multiple packet number spaces\xspace runs where the average range size is greater than 1.
With single packet number space\xspace, \texttt{ACK} frames can carry many ranges, even if there is no buffer loss.
Indeed, it can happen that the server pushes slightly more data on one path than on the other, hence adding queuing delay on that path and causing an initial reordering.
Afterwards, the reception of \texttt{ACK} frames with several ranges lets the server send a burst of packets on the less-loaded path.
\texttt{picoquic}\xspace integrates pacing at the sender to limit this effect.
Yet, the high average range size of single packet number space\xspace raises concerns when the receiver wants to further limit the number of ACK Blocks (AB) it writes.
If there is reordering involving many packets, the client may never acknowledge a given packet, even if it was received.
To emulate this situation, we lower the maximum number of ACK Blocks that an \texttt{ACK} frame can contain to 4, leading to at most 5 ranges within a given \texttt{ACK} frame.
While Figure~\ref{fig:ack_ranges_homo} confirms that the average range size is always lower than (but close to) 5, Figure~\ref{fig:time_homo} shows that such a strategy hinders the performance of the transfer, especially when using Cubic that tends to overflow buffers.
To explain this performance drop, we consider the \texttt{STREAM} frames the server sends.
Based on the data offset and length fields, we compute the number of retransmitted data bytes and make it relative to the transfer size (50 MB).
Figure~\ref{fig:stream_retrans_homo} indicates that there are indeed more data retransmissions with single packet number space\xspace, especially when the number of ACK Blocks that \texttt{ACK} frames can convey is limited.
Even in homogeneous networks, the limited number of ACK Blocks can prevent some packets from being acknowledged, leading to spurious retransmissions, a decreased sending rate and lower overall performance.
Note that BBR is less affected by packet loss detection than Cubic, lowering the impact on its transfer performance.
\subsection{Heterogeneous 2-Path Experiments}\label{sec:hetero_2p}
\begin{table}
\centering
\caption{Parameter space for the 95 heterogeneous 2-path network scenarios.}
\begin{tabular}{@{}c|c@{}}
\toprule
Factor & Value\\
\midrule
Total Bandwidth [Mbps] & 100 \\
Total RTT [ms] & 200 \\
Bandwidth Balance & [0.1; 0.9] \\
RTT Balance & [0.1; 0.9] \\
\bottomrule
\end{tabular}
\label{tab:hetero_param_space}
\end{table}
The previous experiments considered an ``idealistic'' case where all network paths share the same characteristics.
This situation rarely happens in practice with, e.g., Wi-Fi/LTE or terrestrial/satellite networks~\cite{zhang2006measurement}.
To investigate situations where paths have different properties, we consider the parameter space shown in Table~\ref{tab:hetero_param_space}.
We split the bandwidth and RTT between paths such that their sums remain constant across runs.
For instance, if the bandwidth balance is 0.9 and the RTT balance 0.1, then the first path has 90 Mbps and 20 ms RTT and the second path 10 Mbps and 180 ms RTT.
As the theoretical bandwidth remains the same across all runs, this enables an easier comparison of transfer times.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/time_2p_100mbps_50ms_before_fix.pdf}
\caption{Transfer time.}
\label{fig:time_hetero_before_fix}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/ack_ranges_avg_2p_100mbps_50ms_before_fix.pdf}
\caption{Range size advertised by the client.}
\label{fig:ack_ranges_hetero_before_fix}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/stream_retrans_2p_100mbps_50ms_before_fix.pdf}
\caption{Data sent by the server.}
\label{fig:stream_retrans_hetero_before_fix}
\end{subfigure}
\caption{Heterogeneous 2-path experiments with original \texttt{picoquic}\xspace. The legend is common to all the figures.}
\label{fig:hetero}
\end{figure*}
When using an upper limit of 32 ACK Blocks and considering Cubic, \texttt{picoquic}\xspace with single packet number space\xspace keeps performance close to \texttt{picoquic}\xspace with multiple packet number spaces\xspace and \texttt{PQUIC}\xspace, as depicted in Figure~\ref{fig:time_hetero_before_fix}.
However, like in homogeneous experiments, Figure~\ref{fig:ack_ranges_hetero_before_fix} shows that \texttt{picoquic}\xspace with single packet number space\xspace triggers \texttt{ACK} frames advertising many ranges, sometimes hitting the receiver's ACK Blocks limit.
If we further limit the number of ACK Blocks sent within an \texttt{ACK} frame (to 16, 8 or 4), the transfer performance degrades because of spurious loss detections and increased data retransmissions.
In particular, with single packet number space\xspace \texttt{picoquic}\xspace using Cubic and a receiver that does not advertise more than 4 ACK Blocks, there is a run where more than 70\% of the data gets retransmitted, some pieces of data being retransmitted up to 8 times.
Furthermore, the performance of \texttt{picoquic}\xspace transfers using BBR is worse than that of the ones using Cubic.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/time_2p_100mbps_50ms_after_fix.pdf}
\caption{Transfer time.}
\label{fig:time_hetero_after_fix}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/ack_ranges_perc_75_2p_100mbps_50ms.pdf}
\caption{Range size advertised by the client.}
\label{fig:ack_ranges_hetero_after_fix}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/stream_retrans_2p_100mbps_50ms.pdf}
\caption{Data sent by the server.}
\label{fig:stream_retrans_hetero_after_fix}
\end{subfigure}
\caption{Heterogeneous 2-path experiments with fixed \texttt{picoquic}\xspace. The legend is common to all the figures.}
\label{fig:hetero_fixed}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.66\columnwidth]{figures/ratio_fix_time_2p_100mbps_50ms.pdf}
\caption{A ratio greater than 1 means that fixed \texttt{picoquic}\xspace is faster than original \texttt{picoquic}\xspace. Note the logarithmic x scale.}
\label{fig:ratio_time}
\end{figure}
We contacted \texttt{picoquic}\xspace's implementer to report these results.
He was aware of the two aforementioned issues.
First, the BBR performance issue was related to the \texttt{ACK} frame scheduling by the data receiver.
Under some circumstances, a specific range was always sent on the slow path.
Then, a later range arrived first on the fast path, affecting the path's RTT estimate and loss detection algorithm based on RACK~\cite{rfc8985}.
The implemented fix consists in duplicating \texttt{ACK(\_MP)} frames on both paths.
Second, regarding limited ACK Blocks, the implementation's behavior was to acknowledge the largest packet number ranges first.
In case of large packet reordering, it may happen that packets are acknowledged late or never, leading to spurious data retransmissions.
To address that, \texttt{picoquic}\xspace now includes a heuristic placing the lowest ranges first, knowing the maximum number of ranges it wants to advertise.
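The following Python sketch contrasts the two range selection strategies under a cap on the number of advertised ranges; it illustrates the idea and is not \texttt{picoquic}\xspace's actual code (in particular, we assume the range carrying the largest packet number must always be kept).
\begin{verbatim}
# Sketch: range selection under a cap (illustrative).
def select_ranges(ranges, max_ranges, lowest_first):
    # ranges holds disjoint (smallest, largest) tuples.
    if len(ranges) <= max_ranges:
        return ranges
    if lowest_first:     # fixed heuristic: oldest ranges first,
        kept = sorted(ranges)[:max_ranges - 1]
        kept.append(max(ranges))      # plus the newest range
        return kept
    return sorted(ranges, reverse=True)[:max_ranges]  # original

rngs = [(0, 3), (5, 5), (8, 9), (12, 20), (22, 22)]
print(select_ranges(rngs, 3, lowest_first=False))
# [(22, 22), (12, 20), (8, 9)]: old ranges silently dropped
print(select_ranges(rngs, 3, lowest_first=True))
# [(0, 3), (5, 5), (22, 22)]: old ranges still acknowledged
\end{verbatim}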
\begin{figure}
\centering
\includegraphics[width=.66\columnwidth]{figures/ack_length_2p_100mbps_50ms.pdf}
\caption{Total number of bytes of acknowledgment frames.}
\label{fig:ack_length}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/time_3p.pdf}
\caption{Transfer time.}
\label{fig:time_hetero_3p}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/acked_ranges_3p.pdf}
\caption{Range size advertised by the client.}
\label{fig:ack_ranges_hetero_3p}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/stream_retrans_3p.pdf}
\caption{Data sent by the server.}
\label{fig:stream_retrans_hetero_3p}
\end{subfigure}
\caption{Heterogeneous 3-path experiments with fixed \texttt{picoquic}\xspace. The legend is common to all figures.}
\label{fig:hetero_3p}
\end{figure*}
In the remaining experiments, we consider a version of \texttt{picoquic}\xspace integrating these fixes\footnote{Commit \texttt{9eacfff}.}.
We rerun the previous 2-path heterogeneous experiments and present the results in Figure~\ref{fig:hetero_fixed}.
To better highlight the impacts of those fixes, Figure~\ref{fig:ratio_time} shows the ratio of the transfer time between the original \texttt{picoquic}\xspace and the fixed one on the same run.
On the one hand, the performance of \texttt{picoquic}\xspace using BBR considerably improved.
Indeed, BBR is very sensitive to the path's latency, and duplicating \texttt{ACK} frames on both paths makes these estimates more stable.
This ACK duplication also benefits the Cubic runs and the multiple packet number spaces\xspace variant.
On the other hand, changing the range selection strategy provided mixed results.
As illustrated in Figure~\ref{fig:ack_ranges_hetero_after_fix}, in nearly all single packet number space\xspace runs, at least 25\% of the sent \texttt{ACK} frames hit the implementation limit of ACK Blocks.
While acknowledging the lowest ranges first brings benefits in some network scenarios, it provides worse results in others.
If the receiver does not provide all the information it has, it can trigger sub-optimal decisions at the sender's side and increase the amount of retransmitted data.
Interestingly, some runs of \texttt{picoquic}\xspace with multiple packet number spaces\xspace using Cubic show data retransmissions and \texttt{ACK\_MP} frames with several ACK ranges while \texttt{PQUIC}\xspace does not (like \texttt{picoquic}\xspace BBR).
Two elements might explain this result.
First, the implementations of Cubic are slightly different, and \texttt{picoquic}\xspace's one uses the estimated path's RTT to determine whether it should exit the slow-start phase or not.
This might make \texttt{picoquic}\xspace's Cubic slightly more aggressive than \texttt{PQUIC}\xspace's.
Second, as depicted in Figure~\ref{fig:ack_length}, the \texttt{PQUIC}\xspace receiver acknowledges data more aggressively than the \texttt{picoquic}\xspace one.
Indeed, the \texttt{PQUIC}\xspace receiver sends an \texttt{ACK\_MP} frame for both paths as soon as two new packets arrive for a given path.
In the case of \texttt{picoquic}\xspace, while its initial limit is also set to 2, it supports the \texttt{ACK\_FREQUENCY} frame~\cite{ietf-quic-ack-frequency-01} and requests its peer to not trigger immediate acknowledgments if it detects packet number reordering.
It can then wait a few milliseconds between subsequent packets containing \texttt{ACK(\_MP)} frames.
However, during that time lapse, several tens of packets from the server may have been received.
Also note that while the minimum size of an \texttt{ACK\_MP} frame is larger than that of a regular \texttt{ACK} frame, \texttt{picoquic}\xspace with multiple packet number spaces\xspace manages to send fewer acknowledgment-related bytes than \texttt{picoquic}\xspace with single packet number space\xspace.
Smaller ranges and fewer out-of-order-triggered acknowledgments explain this result.
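The receiver-side decision described above can be sketched as follows (our reading of the behavior, not the actual \texttt{picoquic}\xspace logic):
\begin{verbatim}
# Sketch: when should the receiver send an immediate ACK?
# ack_frequency_active models a peer request (via ACK_FREQUENCY) to
# tolerate reordering and rely on the max_ack_delay timer instead.
def should_ack_now(pkts_since_last_ack, reordering_seen,
                   ack_frequency_active, packet_threshold=2):
    if reordering_seen and not ack_frequency_active:
        return True  # default: reordering triggers an immediate ACK
    return pkts_since_last_ack >= packet_threshold
\end{verbatim}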
\subsection{Heterogeneous 3-Path Experiments}
\begin{table}
\centering
\caption{Parameter space for the 95 heterogeneous 3-path network scenarios.}
\begin{tabular}{@{}c|c@{}}
\toprule
Factor & Value\\
\midrule
Total Bandwidth [Mbps] & 100 \\
Total RTT [ms] & 300 \\
Bandwidth Path Weight & [0.1; 0.9] \\
RTT Path Weight & [0.1; 0.9] \\
\bottomrule
\end{tabular}
\label{tab:hetero_3p_param_space}
\end{table}
When considering multipath, one usually starts with two network paths.
Still, there are cases where more than two network paths are simultaneously available, such as dual-stacked Wi-Fi and LTE.
To cover them, we consider 3-path scenarios covering the parameter space depicted in Table~\ref{tab:hetero_3p_param_space}.
Unlike the previous subsection, which explored a 2-dimensional space, each path now has a weight for both the bandwidth budget and the RTT budget, leading to 6 varying parameters.
Each weight is taken relative to the sum of the three weights of the corresponding budget.
As an example with the RTT, if the first path has a weight of 0.5, the second path 0.25 and the third path 0.75, the RTTs of the paths will respectively be 100 ms, 50 ms and 150 ms.
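This derivation can be made explicit with a small sketch (given for illustration; it is not part of the evaluation scripts):
\begin{verbatim}
# Each path receives a share of the budget proportional to its
# weight, normalized by the sum of the three weights.
def split_budget(total, weights):
    s = sum(weights)
    return [total * w / s for w in weights]

# Example from the text: 300 ms RTT budget, weights 0.5, 0.25, 0.75.
print(split_budget(300, [0.5, 0.25, 0.75]))  # [100.0, 50.0, 150.0]
\end{verbatim}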
In the remaining experiments, we keep \texttt{PQUIC}\xspace and the fixed version of \texttt{picoquic}\xspace.
Figure~\ref{fig:hetero_3p} shows that when three heterogeneous paths are simultaneously used, 33 ranges inside \texttt{ACK} frames are often not sufficient with single packet number space\xspace to provide a full view of the received packets to the sender.
Indeed, as emphasized by Figure~\ref{fig:ack_ranges_hetero_3p}, in nearly all our runs, at least 30\% of all the \texttt{ACK} frames sent by the client hit the implementation limit.
This lack of information triggers more spurious loss detection events and retransmissions (Figure~\ref{fig:stream_retrans_hetero_3p}), leading to lower performance compared to multiple packet number spaces\xspace (Figure~\ref{fig:time_hetero_3p}).
In such networks, it is still possible to make the single packet number space\xspace design work as well as the multiple packet number spaces\xspace one by increasing or removing the maximum number of ACK Blocks that an implementation is willing to write in a single \texttt{ACK} frame.
However, when using single packet number space\xspace, this makes multipath more dependent on the receiver's implementation than with multiple packet number spaces\xspace.
Note that increasing the acknowledgment rate of \texttt{picoquic}\xspace enables it to achieve results closer to \texttt{PQUIC}\xspace's while keeping the same trend between the different designs (figure not shown due to space constraints).
\section{Discussion and Next Steps}\label{sec:discussion}
In this paper, we evaluated the impact of the number of packet number spaces on a bulk transfer.
While this scenario does not cover all the possible usages of multipath, it assesses its bandwidth aggregation feature.
To this end, we showed that both designs can address this need.
However, our experiments pointed out that when using the single packet number space\xspace design, the performance of Multipath QUIC is more dependent on the receiver.
As already discussed by Huitema~\cite{huitema-pns}, the final decision of using one or several packet number spaces will be a trade-off.
On the one hand, a single packet number space provides support for zero-length Connection IDs, fewer protocol additions and no change to the nonce computation.
On the other hand, multiple packet number spaces allow implementations to keep the current loss detection algorithms per-path, do not add complexity when generating \texttt{ACK}s (including their frequency) and make the performance more resilient to the receiver's choices.
Knowing that the inclusion of multiple paths adds new dimensions to cope with (packet scheduling, path management), it might be wise to avoid adding complexity to these algorithms when using multipath transport protocols.
\paragraph{Artifacts.}
The evaluation, measurement and analysis scripts will be released in the future.
If you are already interested in them now, please contact the author.
\balance
\bibliographystyle{ACM-Reference-Format}
Let $X$ be a projective, smooth and absolutely irreducible genus $g$ curve defined over a finite field $\mathbb{F}_{q}$. It is well known that the number of $\mathbb{F}_{q}$-rational points of $X$ is bounded and a lot of research has been done to determine whether the bounds are sharp: see for example Sections 5.2 and 5.3 of \cite{stichtenoth} for an overview. The curve $X$ is called \emph{optimal} if for every other genus $g$ curve $Y$ over $\mathbb{F}_{q}$ one has $\#Y(\mathbb{F}_{q})\le\#X(\mathbb{F}_{q})$. The main result of this paper deals with uniqueness up to $\mathbb{F}_{2}$-isomorphism of small genus optimal curves defined over $\mathbb{F}_{2}$.
\begin{theorem}\label{uniquecurve}
For $g\leq5$, there exists a unique optimal genus $g$ curve defined over $\mathbb{F}_{2}$. There exist two non-isomorphic genus $6$ optimal curves and at least two non-isomorphic genus $7$ optimal curves defined over $\mathbb{F}_{2}$.
\end{theorem}
Examples of small genus optimal curves defined over $\mathbb{F}_{2}$ are already present in \cite{serre}, \cite{serre1} and \cite{niederreiterxing}. In this paper we show that for genus $g\leq 5$ these examples are unique, while one of the genus $6$ curves we construct appears to be new.\\
The proof of this result consists of two steps. We first determine a short list of Zeta functions that an optimal curve over $\mathbb{F}_{2}$ can have. In Section \ref{sec: zetafunctions} we show that for genus $g\leq 5$ there is only one possible Zeta function, while for $g=6$ there are two. Next we apply class field theory techniques as in \cite{auer}, \cite{lauter}, \cite{schoof}, \cite{serre}, \cite{serre1}, and recent results by Howe and Lauter in \cite{howelauter} to show that for each possible Zeta function there exists precisely one curve. In Section \ref{sec: unique01} we discuss curves of genus $0$ and $1$. Sections \ref{sec: unique2} to \ref{sec: unique6} are devoted to curves of genus $2$ to $6$. Finally, in Section \ref{sec: genus7} we exhibit two optimal genus $7$ curves with different Zeta functions.
\footnotetext{The author wishes to express her gratitude to her advisor Ren\'e Schoof, for this work would not have been possible without his precious help. The author also thanks Everett Howe for his interesting and constructive comments and Claus Fieker for his MAGMA computation. Part of this paper was written while the author was supported by the Fund for Scientific Research Flanders (F.W.O. Vlaanderen)}
\section{Zeta function and real Weil polynomial of a curve}\label{sec: zetafunctions}
Throughout this paper a curve is understood to be projective, smooth and absolutely irreducible over a finite field of definition $\mathbb{F}_{q}$. In order to study optimal genus $g$ curves defined over $\mathbb{F}_{q}$ it is of interest to determine the quantity
\[
N_{q}(g):=\max \{\#X(\mathbb{F}_{q}) \, \arrowvert \, X \textrm{ is a genus } g \textrm{ curve defined over } \mathbb{F}_{q} \}.
\]
Then, an optimal genus $g$ curve $X$ defined over $\mathbb{F}_{q}$ satisfies $\#X(\mathbb{F}_{q})=N_{q}(g)$. Several methods have been developed in order to determine $N_{q}(g)$ for given $q$ and $g$. The progress is listed and continuously updated in the tables \cite{geervlugt}. In particular Serre determined very good upper bounds for the number of $\mathbb{F}_{q}$-rational points in \cite{serre1}. For $q=2$ he gives the estimate $\#X(\mathbb{F}_{2}) \le 0.83 g+5.35$. For $g\ge 2$ this improves the Hasse-Weil bound $\#X(\mathbb{F}_{q})\leq q+1+\lfloor 2g\sqrt{q} \rfloor$. In \cite{serre} Serre also provided examples of genus $g$ curves defined over $\mathbb{F}_{2}$ attaining these bounds. Hence for small genus curves he proved that $N_{2}(g)$ is as follows \cite[Theorem 5]{serre1}
\[
\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
g & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
N_{2}(g) & 3 & 5 & 6 & 7 & 8 & 9 & 10 & 10\\
\hline
\end{array}
\]
The Zeta function of a genus $g$ curve $X$ defined over $\mathbb{F}_{q}$ is given by
\[
Z(t)=\prod_{d\geq 1}\frac{1}{(1-t^d)^{a_d}},
\]
where
\[
a_{d}=\#\{ P\, \arrowvert \, P \textrm{ place of } X \textrm{ such that deg}\, P=d\}.
\]
In particular, $a_{1}=\#X(\mathbb{F}_{q})$. The Zeta function $Z(t)$ is a rational function of the form
\[
Z(t) = \frac{L(t)}{(1-t)(1-qt)},
\]
where
\[
L(t) = \prod_{i=1}^{g}(1-\alpha_{i} t)(1-\overline{\alpha_{i}} t)
\]
for certain $\alpha_{i} \in \mathbb{C}$ of absolute value $\sqrt{q}$. Therefore $L(t)=q^g t^{2g} + b_{2g-1}t^{2g-1}+\ldots + b_{1} t+1 \in \mathbb{Z}[t]$ is determined by the coefficients $b_{1}, \ldots, b_{g}$ which are in turn determined by the numbers $a_{1}, \ldots, a_{g}$. See for example \cite[Section 5.1]{stichtenoth} for more details.\\
To a genus $g$ curve $X$ having $L(t)$ as numerator of its Zeta function, we associate the so-called \emph{real Weil polynomial} of $X$:
\[
h(t)=\prod_{i=1}^{g}(t-\mu_i) \;\; \in\, \mathbb{Z}[t],
\]
where $\mu_{i}=\alpha_{i}+\overline{\alpha_{i}}$ is a real number in the interval $[-2\sqrt{q}, 2\sqrt{q}]$, for all $i=1,\ldots, g$. We have
\begin{equation}\label{relation}
L(t)=t^{g}h(qt+1/t).
\end{equation}
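This relation makes the vector $[a_{1},a_{2},\ldots]$ effectively computable from $h(t)$: one has $N_{n}:=\#X(\mathbb{F}_{q^{n}})=q^{n}+1-\sum_{i}(\alpha_{i}^{n}+\overline{\alpha_{i}}^{n})$ and $N_{n}=\sum_{d\mid n}d\,a_{d}$, so the numbers $a_{d}$ follow by M\"obius inversion. The following short Python script sketches such a computation; it is given for illustration only.
\begin{verbatim}
import numpy as np
from sympy import Poly, symbols, expand, factorint, divisors

t = symbols('t')

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def a_vector(h_coeffs, q=2, up_to=6):
    g = len(h_coeffs) - 1                    # h monic of degree g
    h = Poly(h_coeffs, t).as_expr()
    L = Poly(expand(t**g * h.subs(t, q*t + 1/t)), t)
    # the roots of the reciprocal polynomial t^(2g) L(1/t) are the
    # Frobenius eigenvalues alpha_i and their conjugates
    alphas = np.roots([int(c) for c in reversed(L.all_coeffs())])
    N = [0] + [q**n + 1 - sum(alphas**n).real
               for n in range(1, up_to + 1)]
    return [int(round(sum(mobius(d // e) * N[e]
                          for e in divisors(d)) / d))
            for d in range(1, up_to + 1)]

# genus 3 optimal curve over F_2: h(t) = t^3 + 4t^2 + 3t - 1
print(a_vector([1, 4, 3, -1]))               # [7, 0, 1, 0, 7, 7]
\end{verbatim}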
One can hence turn the problem of determining the Zeta function of $X$ into the problem of determining the real Weil polynomial of $X$. However, not every polynomial $h(t)$ with all zeros in the interval $[-2\sqrt{q}, 2\sqrt{q}]$ and with the property that
\[
\frac{L(t)}{(1-t)(1-qt)}=\prod_{d\ge 1}\frac{1}{(1-t^{d})^{a_{d}}}
\]
for certain integers $a_{d}\ge 0$ is the real Weil polynomial of a curve. The following result is due to Serre \cite[page Se 11]{serre}, \cite[Lemma 1]{lauter}.
\begin{proposition} \label{Res1}
Let $h(t)$ be the real Weil polynomial of a curve $C$ over $\mathbb{F}_{q}$. Then $h(t)$ cannot be factored as $h(t) = h_{1}(t)h_{2}(t)$, with $h_{1}(t)$ and $h_{2}(t)$ non-constant polynomials in $\mathbb{Z}[t]$ such that the resultant of $h_{1}(t)$ and $h_{2}(t)$ is $\pm 1$.
\end{proposition}
This result has been generalized by E.~Howe and K.~Lauter. Proposition \ref{Res2} below is an improvement \cite{howelauterslides} of \cite[Theorem 1.b)]{howelauter} and Proposition \ref{EllFact} is \cite[Theorem 1, Proposition 13]{howelauter}. Recall that the \emph{reduced resultant} of two polynomials $f, g \in \mathbb{Z}[t]$ is defined to be the non-negative generator of the ideal $(f,g) \cap \mathbb{Z}$.
\begin{proposition} \label{Res2}
Let $h(t)=h_{1}(t)h_{2}(t)$ be the real Weil polynomial of a curve $C$ over $\mathbb{F}_{q}$, where $h_{1}(t)$ and $h_{2}(t)$ are coprime non-constant factors in $\mathbb{Z}[t]$. Let $r$ be the reduced resultant of the radical of $h_{1}(t)$ and the radical of $h_{2}(t)$. If $r = 2$, then there exists a degree $2$ map $C \to C'$, where the curve $C'$ is defined over $\mathbb{F}_{q}$ and has either $h_{1}(t)$ or $h_{2}(t)$ as real Weil polynomial.
\end{proposition}
\begin{proposition} \label{EllFact}
Let $h(t) = (t - \mu)h_{2}(t)$ be the real Weil polynomial of a curve $C$ over $\mathbb{F}_{q}$, where $t - \mu$ is the real Weil polynomial of an elliptic curve $E$ and $h_{2}(t)$ a non-constant polynomial in $\mathbb{Z}[t]$ coprime with $t-\mu$. If $r \neq \pm1$ is the resultant of $t-\mu$ and the radical of $h_{2}(t)$, then $C$ admits a map of degree dividing $r$ to an elliptic curve isogenous to $E$.
\end{proposition}
For a curve $X$ we denote by $a(X)$ the vector $[a_{1}, a_{2},\ldots]$. The main result of this section is the following.
\begin{theorem} \label{RealWeil}
For $g\le 6$ the real Weil polynomial $h(t)$ and the vector $a(X)$ of an optimal genus $g$ curve $X$ over $\mathbb{F}_{2}$ are as follows:
\begin{align}
g=1:\; h(t)&=t+2, &a(X)&=[5, 0, 0, 5, 4, 10, \ldots]; \nonumber \\
g=2:\; h(t)&=t^2 + 3t + 1, &a(X)&=[6, 0, 1, 1, 6, 12, \dots]; \nonumber \\ g=3:\; h(t)&=t^3 + 4t^2 + 3t - 1, &a(X)&=[7, 0, 1, 0, 7, 7, \ldots]; \nonumber \\
g=4:\; h(t)&=(t + 1)(t + 2)(t^2 + 2t - 2), &a(X)&=[8, 0, 0, 2, 4, 8, \ldots]; \nonumber \\
g=5:\; h(t)&=t(t + 2)^2(t^2 + 2t - 2), &a(X)&=[9, 0, 0, 2, 0, 12, \ldots]; \nonumber \\
g=6:\; \phantom{h(t)}& & & \nonumber \\
h(t) &= t(t+2)(t^4+5t^3+5t^2-5t-5), &a(X)&=[10,0,0,0,3,10,\dots], \label{rWa}
\\
h(t) &= (t-1)(t+2)(t^2+3t+1)^2, &a(X)&=[10,0,0,0,2,15,\dots]. \label{rWb}
\end{align}
\end{theorem}
\begin{proof}
Following \cite[page Se Th 38]{serre} we compute for each $g\le 6$ a finite list of monic degree $g$ polynomials $h(t)\in \mathbb{Z}[t]$ for which $a_{1}$ is equal to the number of $\mathbb{F}_{2}$-rational points of an optimal genus $g$ curve and for which $a_{d}\ge 0$ for $d\ge 2$ in the relation $L(t)=t^{g}h(qt+1/t)$. Moreover we require that $h(t)$ has the property that its zeros are in the interval $[-2\sqrt{2},2\sqrt{2}]$. Finally, we require that the conditions of Proposition \ref{Res1} are satisfied. A short computer calculation gives a unique polynomial for $g\leq 5$ and three polynomials for $g=6$:
\begin{align}
&(1)& h(t)&=t(t+2)(t^4+5t^3+5t^2-5t-5), &a(X)&=[10,0,0,0,3,10,\dots]; \nonumber \\
&(2)& h(t)&=(t-1)(t+2)(t^2+3t+1)^2, &a(X)&=[10,0,0,0,2,15,\dots]; \nonumber \\
&(3)& h(t)&=(t+1)(t+2)(t^2+2t-2)(t^2+2t-1), &a(X)&=[10,0,0,1,0,12,\dots]. \nonumber
\end{align}
We show that the third polynomial cannot occur. The resultant of the factors $t+2$ and $(t+1)(t^2+2t-2)(t^2+2t-1)$ is $-2$. Hence, by Proposition \ref{EllFact}, a genus $6$ curve $X$ having this polynomial as real Weil polynomial admits a degree $2$ map $X \to E$, where $E$ is a genus one curve having real Weil polynomial $t+2$. The curve $E$ has parameters $a(E)=[5, 0, 0, 5, 4, 10, \ldots]$, hence $E$ has five places of degree $4$ while $X$ has only one. Since $E$ does not have any degree $2$ places, this means that one of the five degree $4$ places of $E$, say $Q$, must ramify in $X$. The different $D$ of the quadratic function field extension $\mathbb{F}_{2}(X)/\mathbb{F}_{2}(E)$ satisfies $2Q \le D$ (where the coefficient $2$ is forced by wild ramification). On the other hand the degree of the different is $2g-2=10=\mathrm{deg}\,D$ by the Hurwitz formula. Thus $D=2Q+2R$, where $R$ is a rational point of $E$. But this is a contradiction because all five rational points of $E$ split completely in $X$, since $\#X(\mathbb{F}_{2})=10$.
\end{proof}
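The resultant computations used in this proof, and in the applications of Propositions \ref{Res1}, \ref{Res2} and \ref{EllFact} below, are easily checked by machine; for instance (a sketch for illustration):
\begin{verbatim}
from sympy import symbols, resultant, expand

t = symbols('t')
g = expand((t + 1)*(t**2 + 2*t - 2)*(t**2 + 2*t - 1))
print(resultant(t + 2, g, t))  # -2, as used in the proof above
\end{verbatim}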
\section{Uniqueness of optimal elliptic curves}\label{sec: unique01}
In this section we prove Theorem \ref{uniquecurve} for curves of genus $0$ and $1$.
\begin{remark}
We denote by $\mathbb{P}^{1}$ the projective line over $\mathbb{F}_{2}$ and by $0$, $1$ and $\infty$ its three rational points. Over a finite field, every genus $0$ curve is isomorphic to $\mathbb{P}^{1}$. Therefore $\mathbb{P}^{1}$ is optimal. The Zeta function of $\mathbb{P}^{1}$ is
\[
Z(t)=\frac{1}{(1-2t)(1-t)} \qquad \mathrm{and \;hence} \qquad a(\mathbb{P}^{1})=[3,1,2,3,6,\ldots].
\]
\end{remark}
\begin{proposition}
Up to $\mathbb{F}_{2}$-isomorphism, the unique genus $1$ curve having five rational points over $\mathbb{F}_{2}$ is the elliptic curve $E$ of affine equation $y^{2}+y=x^{3}+x$.
\end{proposition}
\begin{proof}
A genus $1$ curve $E$ over $\mathbb{F}_{2}$ having five rational points over $\mathbb{F}_{2}$ is an elliptic curve. Hence $E$ admits a separable degree $2$ morphism to $\mathbb{P}^{1}$. It can be described as a smooth cubic in $\mathbb{P}^{2}$ with an affine equation of the form $y^2+a(x)y=f(x)$, where $a(x)$ and $f(x)$ are polynomials in $\mathbb{F}_{2}[x]$, the former of degree $0$ or $1$ and the latter of degree $3$ \cite[Appendix A]{silverman}. Since the point at infinity $\infty$ of $\mathbb{P}^{1}$ ramifies in $E$, one has $a(x)=1$. The affine points $0$ and $1$ of $\mathbb{P}^{1}$ have to split, thus we have $f(0)=f(1)=0$ and hence $f(x)=x(x+1)(x+a)$, where $a \in \mathbb{F}_{2}$. If $a=1$ we find the equation $y^2+y=x^3+x$ and if $a=0$ the equation $y^2+y=x^3+x^2$. These two curves are indeed isomorphic over $\mathbb{F}_{2}$, an isomorphism being given by the change of coordinates $(x,y) \mapsto (x+1,y)$.
\end{proof}
\begin{remark}
The function field of the genus $1$ curve $E$ can also be described as the ray class field of $\mathbb{P}^{1}$ of conductor $4$ times a rational point, in which the other two rational points of $\mathbb{P}^{1}$ are both split. Since $\mathrm{Aut}(\mathbb{P}^{1})$ acts doubly transitively on $\{0,1,\infty\}$, different choices give rise to isomorphic ray class fields.
\end{remark}
\begin{remark}\label{remg1N5}
We often refer to this unique optimal elliptic curve $E$ throughout this paper. For future reference, we present here some properties of $E$. In terms of the affine equation $y^2 +y =x^3 +x$, we denote the five rational points of $E$ as follows: we write $P_{0}$ for the point at infinity and we put
\begin{equation}\label{Epoints}
P_{1}=(0,0), \quad P_{2}=(0,1), \quad P_{3}=(1,0), \quad P_{4}=(1,1).
\end{equation}
The real Weil polynomial of $E$ is $h(t)=t+2$. The vector $a(E)$ of the numbers $a_{d}$ of places of degree $d=1,2,\ldots$ of $E$ is given by
\[
a(E)=[5, 0, 0, 5, 4, 10, 20,\ldots].
\]
Let $a\in \mathbb{F}_{16}$ be a root of $x^{4}+x+1$. Then, the five places of degree $4$ of $E$ have coordinates
\begin{align*}
Q_1=(a^{3}&,a^{3}+a), \, Q_2=(a^{3},a^{3}+a+1), \\
Q_3=(a^{3}+1,a), \, Q_4&=(a^{3}+1, a+1), \, Q_5=(a^{2}+a+1,a). \nonumber
\end{align*}
Let $b\in \mathbb{F}_{32}$ be a root of $x^{5}+x^{3}+1$, then the four places of degree $5$ of $E$ consist of the points of coordinates:
\begin{equation*}
R_{1}=(b,b^{4}), \, R_{2}=(b,b^{4}+1),\,
R_{3}=(b+1,b^{4}+b), \, R_{4}=(b+1,b^{4}+b+1).
\end{equation*}
Let $c \in \mathbb{F}_{64}$ be a root of $x^{6}+x^{5}+1=0$. The places of degree $6$ of $E$ have coordinates
\begin{align*}
T_{1}=(c^5+c^3+c^2+c+1,c^5 + c^4 + c^3 + 1 )&, \, T_{2}=(c^5+c^3+c^2+c, c^4+c^2+c), \nonumber \\
T_{3}=(c^3+c^2+1, c^3+c^2+c), \,T_{4}&=(c^3+c^2+1, c^3+c^2+c+1), \nonumber \\
T_{5}=(c+1, c^4+c^3+c^2+c), \, T_{6}&=(c+1, c^4+c^3+c^2+c+1), \\
T_{7}=(c^3+c^2, c+1 )&,\, T_{8}=(c^3+c^2, c ), \nonumber \\
T_{9}=(c, c^4+c^3+c^2), \, T_{10}&=(c, c^4+c^3+c^2+1).\nonumber
\end{align*}
The order $5$ automorphism $\sigma$ of $E$ given by addition of $P_{1}$ acts transitively on $E(\mathbb{F}_{2})$ as follows: $P_{0}\mapsto P_{1}\mapsto P_{3}\mapsto P_{4}\mapsto P_{2}\mapsto P_{0}$. The action of $\sigma$ on the places of degree $4$ is as follows: $Q_{1}\mapsto Q_{5}\mapsto Q_{2}\mapsto Q_{4}\mapsto Q_{3}\mapsto Q_{1}$.
On the other hand, the order $4$ automorphism of $E$
\[
\tau:\, (x,y) \mapsto (x+1,y+x+1)
\]
fixes $P_0$ and acts transitively on the remaining four rational points: $P_{1}\mapsto P_{4}\mapsto P_{2}\mapsto P_{3}\mapsto P_{1}$. Similarly, $\tau$ fixes $Q_{5}$ and acts transitively on the remaining degree $4$ places: $Q_{1}\mapsto Q_{4}\mapsto Q_{2}\mapsto Q_{3}\mapsto Q_{1}$. The action of $\tau$ on the places of degree $5$ is transitive: $R_{1}\mapsto R_{4}\mapsto R_{2}\mapsto R_{3}\mapsto R_{1}$.
\end{remark}
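These orbits can be checked directly with the addition law of $E$; the following minimal script (a sketch included only as a check) verifies the action of $\sigma$ on $E(\mathbb{F}_{2})$, representing the point at infinity $P_{0}$ by \texttt{None}.
\begin{verbatim}
# Addition on E: y^2 + y = x^3 + x over F_2 (so a3 = a4 = 1).
# Negation is -(x, y) = (x, y + 1); for x1 != x2 the chord slope
# (y1 + y2)/(x1 + x2) reduces to y1 + y2, since x1 + x2 = 1 in F_2.
def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 != y2:
        return None                 # Q = -P
    lam = (x1*x1 + 1) % 2 if P == Q else (y1 + y2) % 2
    x3 = (lam*lam + x1 + x2) % 2
    y3 = (lam*(x1 + x3) + y1 + 1) % 2
    return (x3, y3)

P, orbit = None, []                 # start at P0 and translate by P1
for _ in range(5):
    orbit.append(P)
    P = add(P, (0, 0))
print(orbit)  # [None, (0,0), (1,0), (1,1), (0,1)] = P0,P1,P3,P4,P2
\end{verbatim}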
\section{Uniqueness of genus $2$ optimal curves}\label{sec: unique2}
\begin{proof}[Proof of Theorem \ref{uniquecurve} for $g=2$]
A genus $2$ optimal curve $X$ over $\mathbb{F}_2$ is hyperelliptic. Since $X$ has six rational points, all three rational points of $\mathbb{P}^1$ split completely in the double covering $X \to \mathbb{P}^1$. By Theorem \ref{RealWeil}, the curve $X$ has no places of degree $2$ and only one place of degree $3$. Thus only one degree $3$ place $Q$ of the two degree $3$ places of $\mathbb{P}^1$ totally ramifies in $X$. The different $D$ of the corresponding function field extension is hence $2Q$, since $2Q\leq D$ and $\textrm{deg}\,D=6$ by the Hurwitz formula. Any genus $2$ curve having six rational points over $\mathbb{F}_2$ is hence a double covering of $\mathbb{P}^1$ of conductor $2Q$, where $Q$ is a place of $\mathbb{P}^1$ of degree $3$, in which all rational points of $\mathbb{P}^1$ are split. A different choice of the degree $3$ place of $\mathbb{P}^1$ leads to an $\mathbb{F}_2$-isomorphic curve. Indeed, the $\mathbb{F}_2$-isomorphism $x \mapsto 1/x$ preserves the rational points of $\mathbb{P}^{1}$, but switches the two degree $3$ places.
\end{proof}
\section{Uniqueness of genus $3$ optimal curves}
We briefly recall some important results on the Jacobian variety of a curve in order to state and prove a useful lemma.\\
Let $X$ be a curve defined over $\mathbb{F}_q$. We denote by $\mathcal{J}ac(X)$ the Jacobian variety of $X$ and by $T_\ell$ the Tate module attached to $\mathcal{J}ac(X)$, where $\ell$ is a prime number different from the characteristic of $\mathbb{F}_q$. We set $V_{\ell}=T_{\ell} \otimes \mathbb{Q}_{\ell}$. Let $F: V_\ell \to V_\ell$ be the Frobenius map and let $V: V_\ell \to V_\ell$ be the Verschiebung map: the unique map such that $V\circ F=q$. Then $\mathbb{Z}[F,V]\subseteq \mathrm{End}(\mathcal{J}ac(X))$. Next we let $\phi$ be the canonical polarization on $\mathcal{J}ac(X)$. Then $\phi$ can be represented as a non-degenerate alternating form $\phi: V_\ell \times V_\ell \to \mathbb{Q}_\ell$. Here $\mathbb{Q}_\ell$ denotes the field of $\ell$-adic numbers. Since $\phi(F(x),F(y)) = q\phi(x,y)$ for every $x, y \in V_\ell$, by bilinearity of $\phi$ we have that $\phi(F(x),F(y)) = q\phi(x,y)=\phi(qx,y)=\phi(V(F(x)),y)$. It follows that $\phi(z,F(y))=\phi(V(z),y)$ for any $y, z \in V_\ell$. In other words $V$ is left adjoint to $F$ \mbox{with respect to $\phi$.}
\begin{theorem} [Torelli Theorem \cite{weil}]
Let $X$ and $X'$ be two curves over a perfect field $k$. Let \mbox{$\tau: \mathcal{J}ac(X) \to \mathcal{J}ac(X')$} be an isomorphism over $k$ compatible with the canonical polarizations. Then
\begin{enumerate}
\item if $X$ is hyperelliptic, there exists a unique isomorphism $f:\, X \to X'$ over $k$ which gives $\tau$;
\item if $X$ is not hyperelliptic, there exists a unique isomorphism \mbox{$f: \, X \to X'$} over $k$ and a unique $\varepsilon \in \{ \pm 1\} $ such that $f$ gives $\varepsilon \tau$.
\end{enumerate}
\end{theorem}
\begin{corollary}
If $\tau$ is an automorphism of $\mathcal{J}ac(X)$ over $k$ preserving the po\-la\-ri\-za\-tion, then either $\tau$ or $-\tau$ comes from an automorphism over $k$ of $X$.
\end{corollary}
\begin{lemma} \label{lemmam7}
Any genus $3$ curve $X$ having exactly seven rational points over $\mathbb{F}_{2}$ admits an automorphism of order $7$.
\end{lemma}
\begin{proof}
We show that for a genus $3$ curve $X$ having seven rational points over $\mathbb{F}_2$ the ring $\mathbb{Z}[F,V]\subseteq \mathrm{End}(\mathcal{J}ac(X))$ is isomorphic to $\mathbb{Z}[\zeta_7]$, the ring of integers of $\mathbb{Q}(\zeta_7)$. The minimal polynomial of $F+V$ is the real Weil polynomial of $X$. By Theorem \ref{RealWeil} this is $h(t)=t^3 + 4t^2 + 3t - 1$. It is an irreducible polynomial of discriminant $7^{2}$. Hence, for a root $\alpha \in \overline{\mathbb{Q}}$ of $h(t)$, the number field $\mathbb{Q}(\alpha)$ is a cyclic extension of degree $3$ of $\mathbb{Q}$, which is ramified only at $7$. By the Kronecker-Weber Theorem the field $\mathbb{Q}(\alpha)$ is hence the unique degree $3$ subfield $\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1})$ of $\mathbb{Q}(\zeta_{7})$ and $\mathbb{Z}[\alpha]$ is its ring of integers. Consider now the minimal polynomial of Frobenius $x^{2}-\alpha x+2 \in \mathbb{Z}[\alpha][x]$. Its discriminant $\alpha^{2}-8$ has norm $-7$ and hence generates a prime ideal $\pi \subseteq \mathbb{Z}[\alpha]$ lying over the prime $7$ of $\mathbb{Z}$. By class field theory $\mathbb{Q}(\alpha)$ admits a unique quadratic extension unramified outside of $\pi$ and the three infinite primes. This is the field $\mathbb{Q}(\zeta_{7})$, which has discriminant $7^{5}$ by the conductor-discriminant formula. The discriminant of $\mathbb{Q}(\alpha,x)$ can be computed to be $7^{5}$ as well by means of the relative discriminant formula for towers of number fields. Hence $\mathbb{Z}[F,V]=\mathbb{Z}[\alpha,x]$ is the ring of integers $\mathbb{Z}[\zeta_{7}]$ of $\mathbb{Q}(\zeta_{7})$ as wanted.\\
Now $\mathcal{J}ac(X)$ has in particular an automorphism $\tau$ of order $7$ corresponding to $\zeta_7$. We show that $\tau$ preserves the polarization $\phi$. By bilinearity of $\phi$ and since $V$ is the complex conjugate of $F$, the left adjoint to an element $\tau \in \mathbb{Z}[F,V]$ is its complex conjugate $\overline{\tau}$. Since $\tau$ satisfies $\tau \overline{\tau}=1$, we have in particular that $\phi(\tau (x), y) = \phi(x,\overline{\tau}(y)) = \phi(x, {\tau}^{\scriptscriptstyle{-1}}(y))$ for any $x$, $y$ in $V_\ell$. This implies that $\phi(\tau(x), \tau(y)) = \phi(x, y)$ for any $x, y \in V_\ell$. In other words $\tau$ preserves the polarization $\phi$ of $\mathcal{J}ac(X)$. By the above Corollary of Torelli's Theorem, either $\tau$ or $-\tau$ hence comes from an automorphism $f$ of $X$. If $f$ induces $\tau$, it has order $7$; if $f$ induces $-\tau$, then $f^{2}$ induces $\tau^{2}$ and is hence an automorphism of $X$ of order $7$.
\end{proof}
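The two facts about $h(t)=t^{3}+4t^{2}+3t-1$ used in this proof, its irreducibility and its discriminant $7^{2}$, are quickly verified, for instance:
\begin{verbatim}
from sympy import symbols, discriminant, factor_list

t = symbols('t')
h = t**3 + 4*t**2 + 3*t - 1
print(discriminant(h, t))  # 49
print(factor_list(h))      # a single irreducible factor
\end{verbatim}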
\begin{proof}[Proof of Theorem \ref{uniquecurve} for $g=3$]
By Lemma \ref{lemmam7} the curve $X$ admits an automorphism $f$ of order $7$. Then, by Galois correspondence, $X$ is a cyclic covering of degree $7$ of a curve which can only be $\mathbb{P}^1$ by comparing the genera and the degree of the different in the Hurwitz formula. By the conductor-discriminant formula, the conductor $D$ of such a covering satisfies $6\,\textrm{deg}D=18$. Since there are seven rational points on $X$, only one rational point $P$ of $\mathbb{P}^1$ splits completely. Thus one has $D=Q$, where $Q$ is a place of $\mathbb{P}^1$ of degree $3$. Hence $X$ is a cyclic degree $7$ covering of $\mathbb{P}^1$ of conductor $Q$, where one rational point $P$ of $\mathbb{P}^1$ splits completely. Different choices of $P$ in $\{0,1,\infty\}$ and of the degree $3$ place $Q$ give rise to $\mathbb{F}_2$-isomorphic curves. Indeed, since the automorphisms group of $\mathbb{P}^1$ acts transitively on the rational points, we can always first reduce to the case $P=\infty$. Next the automorphism $x \mapsto x+1$ fixes $P$ and maps one degree $3$ place of $\mathbb{P}^{1}$ into the other one.
\end{proof}
\section{Uniqueness of genus $4$ optimal curves}
\begin{proof}[Proof of Theorem \ref{uniquecurve} for $g=4$]
By Theorem \ref{RealWeil} the real Weil polynomial of an optimal genus $4$ curve $X$ over $\mathbb{F}_2$ is $h(t)=(t + 1)(t + 2)(t^2 + 2t - 2)$. The resultant of the polynomials $t+2$ and $(t + 1)(t^2 + 2t - 2)$ is $2$. Proposition \ref{EllFact} implies therefore that the curve $X$ is a double covering of the unique optimal elliptic curve $E$ having real Weil polynomial $t+2$ described in Remark \ref{remg1N5}. Since $X$ has no places of degree $2$, no rational point of $E$ can be inert in $X$. Hence, since $X$ has eight rational points, there is only one possibility for the five rational points of $E$: three of them split completely and two are totally ramified. Denoting by $P$ and $P'$ the two wildly ramified rational points of $E$, we have that the contribution to the different of the quadratic function field extension $\mathbb{F}_{2}(X)/\mathbb{F}_{2}(E)$ is at least $2P+2P'$. Since the degree of the different has to be $6$ by the Hurwitz formula, the different, which is also the conductor of the extension, is $4P+2P'$ or $2P+4P'$. Thus any optimal genus $4$ curve over $\mathbb{F}_2$ is a double covering of the optimal elliptic curve $E$ of conductor $4P+2P'$ or $2P+4P'$, in which the other three rational points of $E$ split completely. Uniqueness of $X$ follows from the fact that $\mathrm{Aut}(E)$ acts doubly transitively on $E(\mathbb{F}_{2})$ as described in Remark \ref{remg1N5}.
\end{proof}
\section{Uniqueness of genus $5$ optimal curves}
\begin{lemma}\label{lemmag5}
Let $C$ be the hyperelliptic curve over $\mathbb{F}_{2}$ of affine equation $y^2 + y = x^5 + x^3$. Let $P$ be a rational point of $C$ and let $K$ be the ray class field of $\mathbb{F}_{2}(C)$ of conductor $4P$ in which all rational points of $C$ except $P$ split completely. Then $K=\mathbb{F}_{2}(C)$ except when $P$ is the point at infinity, in which case we have $[K:\mathbb{F}_{2}(C)]=2$.
\end{lemma}
\begin{proof}
Let $t$ denote a uniformizer at $P$ and let $S=C(\mathbb{F}_{2})\backslash \{P\}$. By Artin reciprocity the Galois group $\mathcal{G}al(K/\mathbb{F}_2(C))$ is isomorphic to the $S$-ray class group of $C$ modulo $4P$ \cite[Section 2.5]{niederreiterxing}. In this case the latter is isomorphic to a quotient of $R=\Big (\mathbb{F}_2[[t]]/(t^4)\Big)^*\simeq \mathbb{Z}_4 \times \mathbb{Z}_2$ by the $S$-unit group of $C$ \cite[Section 8]{schoof}. We show that if $P$ is the point at infinity of $C$ we have \mbox{$\mathcal{G}al(K/\mathbb{F}_2(C))\simeq \mathbb{Z}_2$}. On the other hand, if $P$ is one of the other rational points
\[
P_0=(0,0),\, P_0'=(0,1), \, P_1=(1,0),\textrm{ or } P_1'=(1,1)
\]
of $C$, the group $\mathcal{G}al(K/\mathbb{F}_2(C))$ is trivial. A sketch of the computations follows.
\begin{enumerate}
\item[$i)$] Let $P$ be the point at infinity of $C$. Then a basis for the $S$-unit group of $C$ consists of the functions with principal divisors given by
\begin{eqnarray*}
\Big( \frac{y+x^{3}}{x^{3}}\Big) & = &2P_{0}+P_{1}'-3P_{0}',\\
\Big( \frac{y+1}{y}\Big) \;& = &3(P_{0}'-P_{0})+2(P_{1}'-P_{1}),\\
\Big( \frac{x+1}{x}\Big) \;& = &P_{1}-P_{0}+P_{1}'-P_{0}'.
\end{eqnarray*}
Let $t=y/{x^3}$ be a uniformizer at $P$, then their images in $R$ are:
\begin{eqnarray*}
\frac{y+x^{3}}{x^{3}} & \equiv & 1+t\;\,\textrm{mod}\, t^{4},\\
\frac{y+1}{y} \;& = &1+\frac{1}{y} \equiv 1+t^{5}\equiv 1 \;\, \textrm{mod} \, t^{4},\\
\frac{x+1}{x} \;& = &1+\frac{1}{x}\equiv 1+t^{2} \;\, \textrm{mod} \, t^{4},
\end{eqnarray*}
since $1/y=t^{5}+O(t^{6})$ and $1/x=t^{2}+O(t^{4})$. The element $1+t$ generates a subgroup $R'$ of $R$ of index $2$ and $1+t^{2} \in R'$. Therefore $\mathcal{G}al(K/\mathbb{F}_{2}(C))\simeq R/R'\simeq \mathbb{Z}_{2}$.
\item[$ii)$] Let $P=P_0$ and $x$ a uniformizer at $P$. In this case consider the two $\mathbb{F}_{2}$-linearly independent $S$-units of divisors given by
\begin{eqnarray*}
(x+1) & = &P_{1}+P_{1}'-2P_{\infty},\\
(y+1) & = & 3P_{0}'+2P_{1}'-5P_\infty.
\end{eqnarray*}
Here $P_{\infty}$ denotes the point at infinity of $C$. By means of Hensel's lemma, we compute the local expansion of $y$ at $P_0$ as $y=x^5+x^3+O(x^6)$. Therefore their images in $R$ are
\begin{eqnarray*}
x+1 & \equiv & 1+x \phantom{^{3}}\;\,\textrm{mod} \, x^{4},\\
y+1 & \equiv & 1+x^{3} \;\,\textrm{mod} \, x^{4}.
\end{eqnarray*}
In this case the group $R$ is generated by the images of the $S$-units and thus the quotient group is trivial.
\end{enumerate}
The other possibilities for $P$ reduce to case $ii)$ by applying the order $4$ automorphism $\varphi:(x,y) \mapsto (x+1, y+x^2+1)$ of $C$. It fixes the point at infinity of $C$ and acts transitively on the other rational points of $C$.
\end{proof}
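The local expansion $y=x^{5}+x^{3}+O(x^{6})$ used in case $ii)$ can be checked by a fixed-point iteration in $\mathbb{F}_{2}[[x]]$: in characteristic $2$ the relation $y^{2}+y=x^{5}+x^{3}$ rewrites as $y=x^{5}+x^{3}+y^{2}$, and the iteration converges $x$-adically to the branch vanishing at $x=0$. A sketch:
\begin{verbatim}
PREC = 8                 # work in F_2[x] modulo x^PREC

def mul(a, b):           # truncated product of F_2 coefficient lists
    c = [0] * PREC
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < PREC:
                c[i + j] ^= 1
    return c

rhs = [0] * PREC
rhs[3] = rhs[5] = 1      # x^5 + x^3
y = [0] * PREC           # the branch of y vanishing at x = 0
for _ in range(PREC):
    y = [r ^ s for r, s in zip(rhs, mul(y, y))]
print(y)  # [0,0,0,1,0,1,1,0], i.e. y = x^3 + x^5 + x^6 + O(x^8)
\end{verbatim}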
\begin{proof}[Proof of Theorem \ref{uniquecurve} for $g=5$]
By Theorem \ref{RealWeil} a genus $5$ optimal curve $X$ defined over $\mathbb{F}_2$ has real Weil polynomial $h(t)=t(t+2)^2(t^2+2t-2)$. Since the ideal $(t(t+2),t^2+2t-2) \cap \mathbb{Z}$ is generated by $2$, i.e., the reduced resultant of these factors is $2$, Proposition \ref{Res2} implies that the curve $X$ is a double covering of a curve $C$ having real Weil polynomial either $t(t+2)^2$ or $t^2+2t-2$. If $C$ had $t(t+2)^2$ as real Weil polynomial, it would be a genus $3$ curve having seven rational points over $\mathbb{F}_2$, which is impossible by Theorem \ref{RealWeil}.\\
Hence $C$ is a genus $2$ curve having five rational points and no place of degree $2$. Every genus $2$ curve defined over $\mathbb{F}_2$ is a hyperelliptic curve. Up to $\mathbb{F}_2$-isomorphism there exists a unique hyperelliptic curve $C$ over $\mathbb{F}_{2}$ having real Weil polynomial $t^2+2t-2$. Indeed such a hyperelliptic curve has five rational points and no place of degree $2$. Thus the different of the function field extension associated to the double covering $C \to \mathbb{P}^1$ has to be $6Q$, where $Q$ is a rational point of $\mathbb{P}^1$. According to the classification of genus $2$ curves over $\mathbb{F}_{2}$ in \cite[page 327]{maisnernart}, by taking $Q=\infty$, any such hyperelliptic curve is $\mathbb{F}_2$-isomorphic to a projective curve of affine equation $y^2+y=x^5+ax^3+bx^2+c$, with $a,b,c \in \mathbb{F}_2$. Of the eight possible equations arising from the choice of the parameters $a,b,c$, only the affine equation $y^2 + y = x^5 + x^3$ describes a projective curve having five rational points over $\mathbb{F}_{2}$ and no places of degree $2$.\\
Since $X$ has nine rational points, only one rational point $P$ of $C$ ramifies in the double covering $X \to C$, while the other four rational points of $C$ split completely in $X$. The different of $\mathbb{F}_{2}(X)/\mathbb{F}_{2}(C)$ is hence $4P$, since it must have degree $4$ by the Hurwitz formula. The function field $\mathbb{F}_{2}(X)$ is hence an abelian extension of $\mathbb{F}_{2}(C)$ of conductor $4P$, where the other four rational points of $C$ split completely. The maximal among such abelian extensions is the ray class field $K$ described in Lemma \ref{lemmag5}. Hence $P$ is the point at infinity of $C$ and $\mathbb{F}_{2}(X)=K$.
\end{proof}
\section{Genus $g=6$ optimal curves}\label{sec: unique6}
Theorem \ref{RealWeil} lists the two possible real Weil polynomials of an optimal genus $6$ curve defined over $\mathbb{F}_2$. In this section we give a proof of the existence of a unique genus $6$ curve for each of the two listed polynomials.
\begin{proposition}
Up to $\mathbb{F}_{2}$-isomorphism, there is a unique curve having real Weil polynomial as in \eqref{rWa} of Theorem \ref{RealWeil}.
\end{proposition}
\begin{proof}
Let $X$ be a genus $6$ optimal curve defined over $\mathbb{F}_2$ having real Weil polynomial $h(t)=t(t+2)(t^4+5t^3+5t^2-5t-5)$. Since the resultant of the factors $t+2$ and $t(t^4+5t^3+5t^2-5t-5)$ is $-2$, there exists a degree $2$ morphism $X \to E$ by Proposition \ref{EllFact}. All of the five rational points of $E$ split completely into the ten rational points of $X$. By the Hurwitz formula the degree of the different of $\mathbb{F}_{2}(X)/\mathbb{F}_{2}(E)$ is $10$. Now, since $a_2(X)=a_3(X)=a_4(X)=0$, the different is precisely $2R$, where $R$ is a degree $5$ place of $E$. Thus, any such optimal genus $6$ curve is a double covering of $E$ of conductor $2R$, in which all rational points of $E$ are split. As observed in Remark \ref{remg1N5}, the elliptic curve $E$ has four places of degree $5$ and the $\mathbb{F}_{2}$-automorphism $\tau$ of $E$ acts transitively on them. The choice of a different ramifying place of degree $5$ thus gives an $\mathbb{F}_2$-isomorphic curve.
\end{proof}
In the rest of the section, let $X$ be a genus $6$ optimal curve over $\mathbb{F}_2$ having real Weil polynomial as in \eqref{rWb} of Theorem \ref{RealWeil}.
\begin{proposition}\label{Xb}
Up to $\mathbb{F}_{2}$-isomorphism, there is a unique curve having real Weil polynomial as in \eqref{rWb} of Theorem \ref{RealWeil}.
\end{proposition}
\begin{lemma}\label{nonnormal6}
The curve $X$ is a non-Galois covering of degree $3$ of the elliptic curve $E$ such that $X$ is unramified outside of $E(\mathbb{F}_{2})$.
\end{lemma}
The following definition introduces a notation for the splitting behavior of the rational points of the elliptic curve $E$.
\begin{definition}\label{abcpointsE}
Let $X\to E$ be a degree $3$ covering defined over $\mathbb{F}_{2}$. Consider a rational point $P$ of $E$. We say that $P$ is
\begin{enumerate}
\item[$a)$] an $A$-point, if $P$ splits completely in $X$;
\item[$b)$] a $B$-point, if $P$ splits into two points of $X$, one unramified and the other one with ramification index $2$;
\item[$c)$] a $C$-point, if $P$ is totally ramified in $X$ with ramification index $3$.
\end{enumerate}
Moreover we denote by $a$, $b$, $c$ the number of $A$-points, $B$-points and $C$-points of $E$ respectively.
\end{definition}
\begin{proof}[Proof of Lemma \ref{nonnormal6}]
By Theorem \ref{RealWeil}, the real Weil polynomial of $X$ is $h(t)=(t-1)(t+2)(t^2+3t+1)^2$. Since the resultant of the polynomials $t+2$ and $(t-1)(t^2+3t+1)$ is equal to $3$, by Proposition \ref{EllFact} the curve $X$ admits a morphism of degree $3$ to the optimal elliptic curve $E$ described in Remark \ref{remg1N5}. Since the parameters of $X$ are $a(X)=[10,0,0,0,2,15,\dots]$, there are no places of degree $2$ or $3$ on $X$. Each of the $\mathbb{F}_2$-rational points of $E$ is hence either an $A$-point, a $B$-point or a $C$-point in the sense of Definition \ref{abcpointsE}. Then we have
\[
a+b+c = 5 \quad \textrm{and} \quad 3a+2b+c = 10,
\]
and hence
\[
2a+b=5 \quad \textrm{and}\quad a=c.
\]
This leaves us with the three cases of Table \ref{typesIII}.
\begin{table}[h]
\begin{center}
\begin{tabular}{| l | c | c | c |}
\hline
$\,$ & $\mathbf{a}$ & $\mathbf{b}$ & $\mathbf{c}$\\
\hline
\textbf{case} $I$ & $0$ & $5$ & $0$\\
\hline
\textbf{case} $II$ & $1$ & $3$ & $1$\\
\hline
\textbf{case} $III$ & $2$ & $1$ & $2$\\
\hline
\end{tabular}
\caption{Splitting behavior of the rational points of $E$ in $X$}
\label{typesIII}
\end{center}
\end{table}
In each case the covering $X \to E$ is non-Galois since $b$ is never zero. Moreover the function field extension $\mathbb{F}_2(X)/\mathbb{F}_2(E)$ is unramified outside of $E(\mathbb{F}_2)$. Consider indeed the degree of the different, which is $10$ by the Hurwitz formula. By Definition \ref{abcpointsE}, only one of the two points of $X$ lying over a $B$-point of $E$ is wildly ramified; this gives a contribution of at least $2$ to the degree of the different. The contribution that comes from the rational points of $E$ is therefore at least $2b+2c$: at least $2\cdot 5=10$ in case $I$, at least $2\cdot 3 + 2=8$ in case $II$ and at least $2\cdot 1 + 2\cdot 2=6$ in case $III$. Since there are no points of degree $2$, $3$ or $4$ on $X$, any other non-rational ramified place of $E$ would have degree strictly larger than $4$, which would give a too large contribution to the different in each of the three cases. Hence there are no other places of $E$ ramifying in $X$ but those of degree one.
\end{proof}
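The case analysis of Table \ref{typesIII} amounts to enumerating the nonnegative solutions of the two linear equations above; as a quick check:
\begin{verbatim}
sols = [(a, b, c)
        for a in range(6) for b in range(6) for c in range(6)
        if a + b + c == 5 and 3*a + 2*b + c == 10]
print(sols)  # [(0, 5, 0), (1, 3, 1), (2, 1, 2)]
\end{verbatim}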
\begin{definition}\label{defioverlineprime}
We denote by $\overline{X}$ the curve whose function field is the normal closure of $\mathbb{F}_2(X)$ with respect to $\mathbb{F}_2(E)$: it is a Galois extension of $\mathbb{F}_{2}(E)$ having Galois group isomorphic to the symmetric group $S_3$. We denote by $X'$ the curve having as function field the quadratic extension of $\mathbb{F}_2(E)$ corresponding to the group $A_3\simeq\mathbb{Z}_3$, the unique (normal) subgroup of $S_3$ of index $2$. The situation is described in the following picture:
\begin{center}
$
\xymatrix@C=0.1pc {
&&& \overline{X} \ar @{->}[dlll]_(.68){{\scriptscriptstyle 2}} \ar @{->}[dll]_(.67){{\scriptscriptstyle 2}} \ar @{->}[dl]_(.61){{\scriptscriptstyle 2}} \ar@{->}[drr]^(.64){{\scriptscriptstyle 3}} && \\
{\qquad X} \ar @{->}[drrr]_(.32){{\scriptscriptstyle 3}} & {\;\;Y} \ar @{->}[drr]_(.33){{\scriptscriptstyle 3}} & **[r]Z \ar @{->}[dr]_(.39){{\scriptscriptstyle 3}} & && X' \ar @{->}[dll]^(.36){{\scriptscriptstyle 2}} \\
&&& E & &}
\quad$
$
\xymatrix@C=0.1pc {
&&& \{1\} \ar @{->}[dlll]_(.68){{\scriptscriptstyle 2}} \ar @{->}[dll]_(.67){{\scriptscriptstyle 2}} \ar @{->}[dl]_(.61){{\scriptscriptstyle 2}} \ar@{->}[drr]^(.64){{\scriptscriptstyle 3}} && \\
{\qquad \mathbb{Z}_2} \ar @{->}[drrr]_(.32){{\scriptscriptstyle 3}} & {\;\;\mathbb{Z}_2} \ar @{->}[drr]_(.33){{\scriptscriptstyle 3}} & \mathbb{Z}_2 \ar @{->}[dr]_(.39){{\scriptscriptstyle 3}} & && \mathbb{Z}_3 \ar @{->}[dll]^(.36){{\scriptscriptstyle 2}} \\
&&& G & &}
$
\end{center}
\end{definition}
We sum up some arithmetical properties of $X'$ and $\overline{X}$ in the following auxiliary lemmas.
\begin{lemma} \label{abcPoints}
\mbox{}
\begin{enumerate}
\item[$a)$] The $A$-points of $E$ split completely in $\overline{X}$ and $X'$.
\item[$b)$] Over each $B$-point of $E$ there are three points of $\overline{X}$, each with ramification index $2$ and there is one point of $X'$ with ramification index $2$.
\item[$c)$] Over each $C$-point of $E$ there is a unique place of $\overline{X}$ of degree $2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $Y$ be the degree $3$ covering of $E$ as in the picture above.
\begin{enumerate}
\item[$a)$]
Each $A$-point $P$ of $E$ splits completely over $\mathbb{F}_{2}(Y)\simeq \mathbb{F}_{2}(X)$ as well. Hence the function field of $\overline{X}$, being the compositum of $\mathbb{F}_{2}(X)$ and $\mathbb{F}_{2}(Y)$, is the splitting field of $P$. Moreover, since the function field of $X'$ is contained in it, $P$ splits completely in $X'$ as well.
\item[$b)$]
Since there is more than one point of $\overline{X}$ lying over a $B$-point $P$ of $E$, the decomposition groups of the points lying over $P$ have order $2$. Since the ramification index of one of the points of $X$ lying over $P$ is $2$, all points of $\overline{X}$ lying over $P$ have ramification $2$. It also follows that there is a unique point of $X'$ lying over $P$. It has ramification index $2$.
\item[$c)$]
Let $P$ be a $C$-point of $E$. Since the inertia group of any point of $\overline{X}$ lying over $P$ has order divisible by $3$, the covering $\overline{X} \to X'$, which is cyclic of degree $3$, is ramified at any point $P'$ of $X'$ lying over $P$. Therefore, by class field theory, the multiplicative group of the residue field of $P'$ must have order divisible by $3$. It follows that $P$ must be inert in $X'$. Indeed, in this case the residue field of $P'$ is $\mathbb{F}_{4}$.\vspace{-0.5cm}
\end{enumerate}
\end{proof}
\begin{lemma}\label{genusx'}
The curve $X'$ is defined over $\mathbb{F}_2$ and has genus $g'=6-c$. Moreover, the covering $X' \to E$ is ramified exactly at the $B$-points of $E$.
\end{lemma}
\begin{proof}
Since $X \to E$ is unramified outside of $E(\mathbb{F}_{2})$ by Lemma \ref{nonnormal6}, the same holds for the covers $\overline{X}$ and $X'$ of $E$. Lemma \ref{abcPoints} implies then that $X' \to E$ is ramified precisely at the $B$-points of $E$. By Table \ref{typesIII} there is always at least one such point. Thus, since the residue field of any place contains the constant field, the constant field of $X'$ is $\mathbb{F}_{2}$.
In order to compute the genus of $X'$ we compare the different $\textrm{Diff}(X'/E)$ of $\mathbb{F}_2(X')/\mathbb{F}_2(E)$ with the different $\textrm{Diff}(X/E)$ of $\mathbb{F}_2(X)/\mathbb{F}_2(E)$. By the Hurwitz formula we have that $10=2\cdot 6-2=\textrm{deg\,Diff}(X/E)=\textrm{deg}\,\textrm{Diff}(X/E)_{tame} + \textrm{deg}\,\textrm{Diff}(X/E)_{wild}$. The contribution given to $\textrm{Diff}(X/E)$ by the $c$ tamely ramified points is $2c$. Therefore the contribution of the $b$ wildly ramified points is $10-2c$. Since these are precisely the points that are ramified in $X' \to E$, the degree of $\textrm{Diff}(X'/E)$ is also equal to $10-2c$. It follows that $2g'-2=10-2c$, so that $g'=6-c$ as required.
\end{proof}
\begin{lemma}\label{XX'}
For low degrees $d$, the number $a_{d}$ of places of degree $d$ of the curves $\overline{X}$ and $X'$ are as follows:
\begin{align*}
a_1(\overline{X})&=6a+3b, &a_1(X')&=2a+b;\\
a_2(\overline{X})&=c , &a_2(X')&=c;\\
a_3(\overline{X})&=0 , &a_3(X')&=0;\\
a_4(\overline{X})&=0 , &a_4(X')&=10;\\
a_5(\overline{X})&=0.
\end{align*}
\end{lemma}
\begin{proof}
The computation of the numbers $a_1(\overline{X})$ and $a_1(X')$ of $\mathbb{F}_2$-rational points of $\overline{X}$ and $X'$, respectively, follows directly from Lemma \ref{abcPoints}. By the same Lemma, the degree $2$ places of $X'$ are precisely the ones lying over the $C$-points of $E$, and they are themselves totally ramified in $\overline{X}$. This gives $a_2(X')=c=a_2(\overline{X})$. By Theorem \ref{RealWeil}, the curve $X$ has parameters $a(X)=[10, 0, 0, 0, 2, 15, \ldots]$ and in particular $a_{3}(X)=0$. Since also $a_{3}(E)=0$, it follows at once that $a_3(X')=a_3(\overline{X})=0$. The curve $X$ has no places of degree $2$ or $4$, thus $a_4(\overline{X})=0$. Moreover this means that the five places of degree $4$ of $E$ are inert in $X$. Since they are not ramified, their decomposition group has to be cyclic and hence of order $3$. Therefore they are split in $X'$ and we have
\[
a_4(X')=2a_4(E)=2\cdot 5=10.
\]
Suppose that $a_5(\overline{X})$ is not zero, then one of the places of degree $5$ of $E$ splits completely in $\overline{X}$. This implies that $X$ has at least three places of degree $5$, which is not the case. Therefore $a_5(\overline{X})=0$.
\end{proof}
The following Lemma describes abelian extensions $K_{D}$ of $\mathbb{F}_{2}(E)$ for particular choices of the conductor $D$. These extensions play a role in the proof of Proposition \ref{possibleX}. The divisor $D$ is a sum of points in $E(\mathbb{F}_{2})$. See Remark \ref{remg1N5} for the notation.
\begin{lemma}\label{rayclass2}
Let $K_{D}$ denote the ray class field of $\mathbb{F}_{2}(E)$ of conductor $D$ in which the point at infinity and all places of degree $4$ of $E$ split completely. Then $K_{D}$ is trivial when $D=4P_{1}+2P_{2}+2P_{3}$ or $D=2P_{1}+2P_{2}+4P_{3}$. It has degree $2$ over $\mathbb{F}_{2}(E)$ when $D=2P_{1}+4P_{2}+2P_{3}$.
\end{lemma}
\begin{proof}
Let $Q_1,Q_2,\ldots, Q_5$ denote the degree $4$ places of $E$ as listed in Remark \ref{remg1N5} and let $S=\{ Q_1,Q_2,Q_3,Q_4,Q_5,P_0\}$. A basis for the $S$-unit group of $E$ is given by the following functions $u_{i}$, $i=1,\ldots,5$:
\begin{align*}
u_1 &= x^4+x^3+x^2+x+1, & \textrm{with } & (u_{1}) = Q_1+Q_2-8P_0,\\
u_2 &= x^4+x^3+1, & \textrm{with } & (u_{2}) = Q_3+Q_4-8P_0,\\
u_3 &= x^2+x+1, & \textrm{with } & (u_{3}) = Q_5-4P_0,\\
u_4 &= \frac{(y+x^3)(y+x^3+x^2)^2}{y(y+x)(x^2+x+1)^3}, & \textrm{with } &(u_4) = Q_1+2Q_3-3Q_5,\\
u_5 &= \frac{(y+x^3)^2(y+x^3+x^2+1)}{(y+1)(y+x)(x^2+x+1)^3}, & \textrm{with } & (u_{5}) = Q_4+2Q_1-3Q_5.
\end{align*}
Then consider the ray class field $K_{D'}$ of $\mathbb{F}_{2}(E)$ of conductor $D'=4P_{1}+4P_{2}+4P_{3}$ in which the places in $S$ split completely. We are interested in the ray class fields $K_{D_{j}}$, $j=1,2,3$, that are subfields of $K_{D'}$ of conductor $D_{1}=4P_{1}+2P_{2}+2P_{3}$, $D_{2}=2P_{1}+4P_{2}+2P_{3}$ and $D_{3}=2P_{1}+2P_{2}+4P_{3}$. The corresponding $S$-ray class groups modulo $D_{j}$ are quotients of the groups $R_{j}=\Big(\mathbb{F}_2[[t_j]]/(t_j^4)\Big)^*\oplus \Big(\mathbb{F}_2[[{t_{j'}}]]/(t_{j'}^2)\Big)^*\oplus \Big(\mathbb{F}_2[[{t_{j''}}]]/(t_{j''}^2)\Big)^*\simeq \mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_2\times \mathbb{Z}_2$ by the image of the $S$-unit group of $E$ \cite[Section 8]{schoof}. Here $t_{j}$, $t_{j'}$, $t_{j''}$ denote uniformizers of $P_{j}$, $P_{j'}$, $P_{j''}$ respectively, for $\{j,j',j''\}=\{1,2,3\}$. We show that the order of the $S$-ray class group modulo $D_{j}$ is $2$ for $j=2$, while for $j=1,3$ this group is trivial. In Table \ref{uiPj}, we display in the column marked by $R_{j}$, $j=1,2,3$, the images of the $u_{i}$'s ($i\neq 3$) in the group $R_{j}$. We remark that the computations for the units $u_{4}$ and $u_{5}$ can be performed calculating the local expansions $y_j$ of $y$ at $P_j$, for $j=1,2,3$:
\begin{eqnarray*}
y_{1}&=&x+x^2+x^3+x^4+x^6+O(x^7),\\
y_{2}&=&1+x+x^2+x^3+x^4+x^6+O(x^7),\\
y_{3}&=&t^2+t^3+t^4+t^6+O(t^7),\, \textrm{where} \, t=x+1.
\end{eqnarray*}
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
$u_{i}$ & $R_{1}$ & $R_{2}$ & $R_{3}$ \\
\hline
$u_{1}$ & $(1+t_1)^{3}(1+t_2)$ &$(1+t_1)(1+t_2)^3$ & $(1 + t_1) (1 + t_3^3)(1+t_2)$\\
\hline
$u_{2}$ & $(1+t_1^3)(1+t_3)$ & $(1+t_2^3)(1+t_3)$ & $(1+t_3)^3 $ \\
\hline
$u_{4}$ & $(1+t_1)^2(1+t_1^3)(1+t_2)$ & $(1+t_2)^3(1+t_2^3)$ & $1+t_2$\\
\hline
$u_{5}$ & $1+t_3$ & $(1+t_2^3)(1+t_3)$ & $(1+t_3)^3(1+t_3^3)$ \\
\hline
\end{tabular}
\caption{Images of the $u_{i}$'s in the group $R_{j}$, for $j=1,2,3$.}
\label{uiPj}
\end{center}
\end{table}
One checks that in $R_{2}$ the images of the $u_i$'s for $i\neq 3$ generate a subgroup of index $2$. The image of $u_{3}$ is $(1+t_1)(1+t_2)^3(1+t_2^3)(1+t_3)$ and lies hence in the same subgroup. On the other hand, the images of the $u_i$'s, $i\neq 3$, are independent generators of $R_{1}$: the image of $u_{1}$ has order $4$ and the images of $u_2$, $u_4$ and $u_5$ have order $2$. Thus in this case the ray class group is trivial. Similarly for the images of $u_1$, $u_2$, $u_4$ and $u_5$ in $R_{3}$: also in this case the ray class group is trivial.
\end{proof}
\begin{proposition}\label{possibleX}
All rational points of $E$ are ramified in $X'$. The curve $X'$ has genus $6$ and real Weil polynomial $h(t)=t(t+2)(t^2-5)^2$. In other words, only the configuration of case $I$ in Table \ref{typesIII} is possible.
\end{proposition}
\begin{proof}
According to Table \ref{typesIII}, there are three possibilities for the splitting behavior of the rational points of $E$ in $X$. Moreover by Lemmas \ref{genusx'} and \ref{XX'} the genus $g'$ and the vector $a(X')$ of the curve $X'$ are in the three cases as follows:
\begin{equation*}
\left.
\begin{tabular}{|l|c|c|c||c|c|}
\hline
$ \,$ & $\mathbf{a}$ & $\mathbf{b}$ & $\mathbf{c}$ & $\mathbf{g'}$ & $\mathbf{a(X')}$\\
\hline
\textbf{case } $I$ & $0$ & $5$ & $0$ & $6$ & $[5,0,0,10,\ldots]$\\
\hline
\textbf{case } $II$ & $1$ & $3$ & $1$ & $5$ & $[5,1,0,10,\ldots]$\\
\hline
\textbf{case } $III$ & $2$ & $1$ & $2$ & $4$ & $[5,2,0,10,\ldots]$\\
\hline
\end{tabular}
\right.
\end{equation*}
Case $III$ cannot occur since in this case the curve $X'$ would be a genus $4$ curve having $\#X'(\mathbb{F}_{16})=a_1+2a_2+4a_4=5+2\cdot 2+4\cdot 10=49$ rational points over $\mathbb{F}_{16}$, while $N_{16}(4)=45$ according to \cite{geervlugt}.\\
In case $II$ a computer calculation gives only one possible real Weil polynomial for $X'$, namely $h(t)=(t+2)(t^2-5)(t^2-2)$. Now, since the automorphism group of $E$ acts doubly transitively on $E(\mathbb{F}_{2})$ as described in Remark \ref{remg1N5}, we may assume that the point at infinity $P_{0}$ of $E$ is the unique $A$-point of $E$ and that $P_{4}=(1,1)$ is the unique $C$-point. The remaining three rational points of $E$ are $\{P_{1}, P_{2}, P_{3}\}$. They are $B$-points of $E$ and hence ramify in $X' \to E$ by Lemma \ref{abcPoints}. Moreover, since $a_{4}(X')=10$ and $a_{4}(E)=5$, all five degree $4$ places of $E$ split completely in $X'$. By the Hurwitz formula the degree of the different of $\mathbb{F}_{2}(X')/\mathbb{F}_{2}(E)$ is $8$. Therefore Lemma \ref{rayclass2} implies that $\mathbb{F}_{2}(X')$ is equal to the ray class field of $\mathbb{F}_{2}(E)$ of conductor $2P_{1}+4P_{2}+2P_{3}$, in which $P_{0}$ and all degree $4$ places of $E$ split completely. Consider now the curve $\overline{X}$. It is a degree $3$ abelian covering of $X'$. Since $\overline{X}$ has $15$ rational points by Lemma \ref{XX'}, all five rational points of $X'$ split completely in $\overline{X}$. Moreover, since $X\to E$ and $X'\to E$ are both unramified outside of $E(\mathbb{F}_{2})$, only the degree $2$ place $P_4'$ of $X'$, which lies over $P_4$ of $E$ ramifies in $\overline{X}$.
The curve $\overline{X}$ is hence the ray class field of $\mathbb{F}_2(X')$ of conductor $P_4'$, where all rational places of $X'$ split completely. A computer calculation with MAGMA shows that the associated ray class group is trivial. Hence case $II$ cannot occur.\\
In case $I$ a computer calculation gives only one possible real Weil polynomial for $X'$, namely $h(t)=t(t+2)(t^2-5)^2$.
\end{proof}
In the next two lemmas we describe two curves appearing in the proof of Proposition \ref{Xb}.
\begin{lemma}\label{unhyper}
There exists a unique curve $C$ having real Weil polynomial $h(t)=(t+2)(t-1)$. Up to $\mathbb{F}_{2}$-isomorphism, this is a genus $2$ projective curve described by the affine equation $y^2+xy=x^5+x^4+x^2+x$.
\end{lemma}
\begin{proof}
A curve $C$ having real Weil polynomial $h(t)=(t+2)(t-1)$ is a genus $2$ curve having four rational points and two places of degree $2$ over $\mathbb{F}_{2}$. Since it is a hyperelliptic curve, we can consider the double covering $C\to \mathbb{P}^{1}$. The different of the corresponding function field extension is $4P+2P'$, where $P$ and $P'$ are rational points of $\mathbb{P}^1$. Indeed, by the Hurwitz formula, the degree of the different is $6$ and, since $C$ has four rational points, two of the rational points of $\mathbb{P}^1$ are wildly ramified and one splits completely. The coefficients of $P$ and $P'$ are forced to be even since $\mathbb{F}_{2}(C)$ is an Artin-Schreier extension of the rational function field. Notice also that the possibility that two rational points of $\mathbb{P}^{1}$ split and the third stays inert in $\mathbb{F}_{2}(C)$ is excluded by the fact that in this case the degree $2$ place of $\mathbb{P}^{1}$ would be ramified, giving a contradiction in the computation of the different. According to the classification of genus $2$ curves over $\mathbb{F}_{2}$ in \cite[page 327]{maisnernart}, by taking $P=P_\infty$ and $P'=(0,0)$, any such hyperelliptic curve over $\mathbb{F}_2$ is $\mathbb{F}_2$-isomorphic to a projective curve of affine equation $y^2+y=x^3+ax+1/x+b$, with $a,b \in \mathbb{F}_2$. There are hence four possibilities for the parameters $a$ and $b$, but only $y^2 + y = x^3 + x+1/x+1$ is the equation of a projective curve having four rational points over $\mathbb{F}_{2}$ and two places of degree $2$. This curve is $\mathbb{F}_2$-isomorphic to the projective curve with the simpler affine equation $y^2+xy=x^5+x^4+x^2+x$, an isomorphism being given by $(x,y)\mapsto (x,(y+x^2)/x)$.
\end{proof}
\begin{lemma}\label{rayclass3}
Let $C$ be the curve of Lemma \ref{unhyper}. Then $C$ admits an unramified cyclic degree $5$ covering in which both the point at infinity $P_{\infty}$ and the point $(0,0)$ split. This covering is unique up to isomorphism. Moreover, for any other choice of rational points $P$ and $P'$ of $C$, any cyclic unramified degree $5$ covering of $C$ in which $P$ and $P'$ split, is necessarily trivial.
\end{lemma}
\begin{proof}
Consider the maximal unramified extension $L$ of the function field $K$ of $C$ where $P_\infty$ splits completely. By class field theory, the Galois group $\mathcal{G}al(L/K)$ is isomorphic to the quotient of the class group $Pic(C)$ by the subgroup generated by the image in $Pic(C)$ of the Frobenius element $\mathrm{Frob} \,P_{\infty} \in \mathcal{G}al(L/K)$ of $P_{\infty}$. Hence $\mathcal{G}al(L/K)\simeq Pic^{0}(C)$. Let $h(t)$ be the real Weil polynomial of $C$ as in Lemma \ref{unhyper}. By \cite[Theorem 5.1.15 (c)]{stichtenoth} the class number $\# Pic^{0}(C)$ of $C$ equals $L(1)$, where $L(t)$ is the numerator of the Zeta function of $C$. Since $L(1) = h(q+1)$ by \eqref{relation}, one has $\# Pic^0(C)= h(3)=10$. Therefore there exists a unique unramified cyclic degree $5$ extension $K'$ of $\mathbb{F}_{2}(C)$ in which $P_{\infty}$ splits completely. Since the divisor $(x)=2((0,0)- P_\infty)$ is principal, the Frobenius of $(0,0)$ is trivial in $\mathcal{G}al(K'/K)\simeq\mathbb{Z}_5\simeq Pic^0(C)/\mathbb{Z}_{2}$, so that the rational point $(0,0)$ is also split in $K'$.\\
On the other hand, if we replace the points $P_{\infty}$ and $(0,0)$ by any other pair of rational points of $C$, there is no such unramified cyclic degree $5$ extension. To see this, we note that $C$ has four rational points: $P_{\infty}$, $(0,0)$, $(1,0)$ and $(1,1)$. If two of these were to split in an unramified cyclic degree $5$ covering of $C$, then $2$ times their difference would be a principal divisor. By adding or subtracting the principal divisors $2((0,0)-P_{\infty})$ and $2((1,0)+(1,1)-2P_{\infty})$, this boils in each case down to the question of whether or not the divisor $2((1,0)-P_{\infty})$ is principal. Suppose that $2((1,0)-P_{\infty})$ is the divisor of a function $f \in \mathbb{F}_2(C)$. Since the only functions in $\mathbb{F}_2(C)$ with a pole of order $2$ at infinity are linear functions in $x$, we must have $f=x+1$, but then $f$ also vanishes in $(1,1)$, a contradiction.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Xb}]
By Lemma \ref{nonnormal6} the genus $6$ curve $X$ is a non-Galois covering of degree $3$ of the elliptic curve $E$. Moreover, by Proposition \ref{possibleX} the only possibility for the splitting behavior of the rational points of $E$ in $X$ is described in case $I$ of Table \ref{typesIII}. In other words, all rational points of $E$ are $B$-points in the sense of Definition \ref{abcpointsE}. In order to show that such a curve $X$ is unique, consider the quadratic function field extension $\mathbb{F}_{2}(X')/\mathbb{F}_{2}(E)$ described in the diagram of Definition \ref{defioverlineprime}. By the Hurwitz formula and Proposition \ref{possibleX}, this is an abelian extension of $\mathbb{F}_{2}(E)$ of conductor $\sum_{i=0}^4 2 P_i$ where all places of $E$ of degree $4$ split completely.\\
Let $\tau$ be the order $4$ automorphism of $E$ described in Remark \ref{remg1N5}. Then the endomorphism $\tau+2$ of the elliptic curve $E$ has degree $5$ and kernel $E(\mathbb{F}_{2})$. The Galois group of the covering $\tau +2:\, E \to E$ consists of the translations by the points $P_{i}$ of $E$. It preserves both the set $E(\mathbb{F}_{2})$ and the set of places of $E$ of degree $4$. Therefore the covering
\[
X' \longrightarrow E \stackrel{\tau +2}{-\!\!-\!\!\!\longrightarrow} E
\]
is Galois. Similarly, the covering $\overline{X}\to X'$ is unramified and cyclic of degree $3$. Lemma \ref{XX'} implies that all rational points of $X'$ are split.
By class field theory, there exists a unique such covering of $X'$ of degree $3$. Indeed, let $h(t)$ be the real Weil polynomial of $X'$ as in Proposition \ref{possibleX}. By \cite[Theorem 5.1.15 (c)]{stichtenoth} one has $\# Pic^{0}(X')=L(1)$, where $L(t)$ is the numerator of the Zeta function of $X'$. Hence, since by \eqref{relation} one has $L(1)=h(3)=2^4\cdot 3\cdot 5$, there exists a unique index $3$ subgroup in the class group of $X'$. Thus the function field extension corresponding to the covering
\[
\overline{X} \longrightarrow E \stackrel{\tau +2}{-\!\!-\!\!\!\longrightarrow} E
\]
is also Galois. The Galois group $G$ is an extension of $\mathbb{Z}_{5}$ by $S_3$. Since these groups have coprime order and $\mathbb{Z}_{5}$ necessarily acts trivially on $S_{3}$, the Schur-Zassenhaus Theorem implies that $G$ is a direct product of $\mathbb{Z}_{5}$ and $S_{3}$. Hence, by the Galois correspondence, there exists a tower of function fields corresponding to the morphisms of curves $\overline{X}\to Y \to E$, such that $\mathcal{G}al (\mathbb{F}_{2}(Y)/\mathbb{F}_{2}(E))\simeq S_3$. Let $\rho$ be a generator of $\mathcal{G}al(\mathbb{F}_2(\overline{X})/ \mathbb{F}_2(X))\subseteq S_{3}$ and consider invariant fields. We obtain a cyclic covering $X \to C$ of degree $5$, which is unramified since $\tau+2: E\to E$ is.
\begin{center}
$
\xymatrix {
Y \ar @{->}[d]_{\scriptscriptstyle 2} \ar @{<-}[r]^{\scriptscriptstyle 5} & \overline{X} \ar @{->}[d]^{{\scriptscriptstyle 2}} \\
C \ar @{->}[d]_{\scriptscriptstyle 3} \ar @{<-}[r]^{\scriptscriptstyle 5} & X \ar @{->}[d]^{{\scriptscriptstyle 3}} \\
E \ar @{<-}[r]^{\scriptscriptstyle 5} & E}
\quad\quad$
$
\xymatrix {
\quad\;\mathbb{Z}_{5}\quad \ar @{->}[d]_{\scriptscriptstyle 2} \ar @{<-}[r]^(.60){\scriptscriptstyle 5} & \{1\} \ar @{->}[d]^{{\scriptscriptstyle 2}} \\
\mathbb{Z}_{5}\times \mathbb{Z}_{2} \ar @{->}[d]_{\scriptscriptstyle 3} \ar @{<-}[r]^(.60){\scriptscriptstyle 5} & \mathbb{Z}_{2} \ar @{->}[d]^{{\scriptscriptstyle 3}} \\
\mathbb{Z}_{5} \times S_{3} \ar @{<-}[r]^(.60){\scriptscriptstyle 5} & S_{3}}
$
\end{center}
The curve $C$ has genus $2$ by the Hurwitz formula. The real Weil polynomial of $C$ is thus a degree $2$ factor of the real Weil polynomial of $X$. Since $C$ is also a degree $3$ covering of $E$, the real Weil polynomial of $C$ is divisible by the real Weil polynomial $t+2$ of $E$, because the same divisibility holds for the corresponding Zeta functions \cite{aubryperret}. Hence the real Weil polynomial of $C$ is $h(t)=(t+2)(t-1)$. By Lemma \ref{rayclass3}, the curve $C$ indeed admits such an unramified cyclic degree $5$ covering. Therefore there actually exists a unique curve $X$ with real Weil polynomial equal to the polynomial \eqref{rWb} in Theorem \ref{RealWeil} and Proposition \ref{Xb} follows.
\end{proof}
\section{Genus $7$ optimal curves}\label{sec: genus7}
Let $E$ be the optimal genus $1$ curve of affine equation $y^{2}+y=x^{3}+x$ described in Remark \ref{remg1N5}. In this last section we present a class field theoretic construction of a ray class field of $\mathbb{F}_{2}(E)$ whose proper quadratic subfields are function fields of optimal genus $7$ curves. We show that the Zeta functions of these curves are not all the same, providing existence of at least two non-isomorphic genus $7$ optimal curves over $\mathbb{F}_{2}$.
\begin{proposition}\label{g7N10}
Let $K$ be the function field of $E$ and let $Q$ denote a degree $6$ place of $K$ with uniformizer $t=x^6+x^5+1$. Let $L$ be the ray class field of $K$ of conductor $2Q$, in which all five rational places of $K$ split completely. The Galois group $\mathcal{G}al(L/K)$ is isomorphic to $\mathbb{Z}_2 \oplus \mathbb{Z}_2$. The quadratic extensions of $K$ inside $L$ are function fields of optimal genus $7$ curves that do not all have the same Zeta function.
\begin{center}
$
\xymatrix@C=0.1pc {
& {\;Y} \ar @{->}[dl]_(0.57){{\scriptscriptstyle 2}} \ar @{->}[d]_{{\scriptscriptstyle 2}} \ar @{->}[dr]^(0.57){{\scriptscriptstyle 2}} & \\
{\;\;X_1} \ar @{->}[dr]_(0.43){{\scriptscriptstyle 2}} & {X_2} \ar @{->}[d]_{{\scriptscriptstyle 2}} & {X_3} \ar @{->}[dl]^(0.43){{\scriptscriptstyle 2}}\\
& E &}
\quad$
$
\xymatrix@C=0.1pc {
& \{1\} \ar @{->}[dl]_(0.57){{\scriptscriptstyle 2}} \ar @{->}[d]_{{\scriptscriptstyle 2}} \ar @{->}[dr]^(0.57){{\scriptscriptstyle 2}} & \\
{\;\;\mathbb{Z}_2} \ar @{->}[dr]_(0.43){{\scriptscriptstyle 2}} & {\mathbb{Z}_2} \ar @{->}[d]_{{\scriptscriptstyle 2}} & {\mathbb{Z}_2} \ar @{->}[dl]^(0.43){{\scriptscriptstyle 2}}\\
& G &}
\quad$
\end{center}
\end{proposition}
\begin{proof}
Let $a \in \mathbb{F}_{2^{6}}$ be a root of $x^6+x^5+1$, and let $Q$ be the place that consists of the point $(a,a^4+a^3+a^2+1)$ and its conjugates. The prime ideal corresponding to $Q$ is $\mathfrak{p}=(x^6+x^5+1,y+x^4+x^3+x^2+1)$. The principal divisor $(x^6+x^5+1)$ is equal to $Q+Q'-12P_0$ where $Q'$ is the place consisting of $(a,a^4+a^3+a^2)$ and its conjugates. We take $t=x^6+x^5+1$ as a uniformizer at $Q$. Denote by $S$ the set of the five rational points of $E$ described in Remark \ref{remg1N5}.\\
Let $L$ be the ray class field of $K$ of conductor $2Q$, in which all five rational points in $S$ split completely. Then, by Artin reciprocity, the Galois group $G=\mathcal{G}al(L/K)$ is isomorphic to the quotient of $R=\mathbb{F}_{2^6}[[t]]^*/\{u:\, u\equiv 1\;\, \textrm{mod}\, t^2\}$ by the image of the $S$-unit group $O_S^*$ of $K$. A basis for $O_{S}^{*}$ is given by the functions $x$, $x+1$, $y$ and $y+x$ having the following principal divisors
\begin{eqnarray*}
(x) & = & P_1+P_2-2P_0,\\
(x+1) & = & P_3+P_4-2P_0,\\
(y) & = & P_1+2P_3-3P_0,\\
(x+y) & = & 2P_1+P_4-3P_0.
\end{eqnarray*}
In order to compute the image of the $S$-units in $R$, we first observe that the image of the $S$-unit $x$ has order $63$ modulo $t$ and hence it generates the $63$-part of $R$. Then we compute
\begin{align*}
x^{63}-1 & \equiv (x+1)t &\,& \textrm{mod}\, t^2,\\
(x+1)^{63}-1 & \equiv xt &\,& \textrm{mod}\, t^2,\\
y^{63}-1 & \equiv (x^5+x^2)t &\,& \textrm{mod}\, t^2,\\
(y+x)^{63}-1 & \equiv (x^5+x^4+x^3+x^2)t &\,& \textrm{mod}\, t^2.
\end{align*}
Thus $\mathcal{G}al(L/K)$ is isomorphic to the quotient of $\mathbb{F}_2[x]/(x^6+x^5+1)$ by the additive subgroup $H$ generated by $x+1$, $x$, $x^5+x^2$ and $x^5+x^4+x^3+x^2$. This is a quotient group of order $4$ where all elements have order $2$. Hence $\mathcal{G}al(L/K)\simeq\mathbb{Z}_2 \oplus \mathbb{Z}_2$.\\
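These congruences can be reproduced with a few lines of computer algebra. In the Python sketch below (ours), polynomials over $\mathbb{F}_2$ are encoded as integer bit masks and the expansion of $y$ modulo $t^2$ is obtained by Hensel lifting from the equation $y^2+y=x^3+x$ of $E$.
\begin{verbatim}
def pmul(a, b):                      # product in F_2[x] (carry-less)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):                      # remainder of a modulo m in F_2[x]
    d = m.bit_length() - 1
    while a and a.bit_length() - 1 >= d:
        a ^= m << (a.bit_length() - 1 - d)
    return a

def pdiv(a, m):                      # quotient a/m in F_2[x] (m divides a)
    q, d = 0, m.bit_length() - 1
    while a and a.bit_length() - 1 >= d:
        s = a.bit_length() - 1 - d
        q |= 1 << s
        a ^= m << s
    return q

def ppow(a, e, m):                   # a^e modulo m in F_2[x]
    r = 1
    while e:
        if e & 1:
            r = pmod(pmul(r, a), m)
        a = pmod(pmul(a, a), m)
        e >>= 1
    return r

t, x, y0 = 0b1100001, 0b10, 0b11101  # t = x^6+x^5+1, y0 = x^4+x^3+x^2+1
t2 = pmul(t, t)                      # all computations are modulo t^2
num = pmul(y0, y0) ^ y0 ^ 0b1010     # y0^2 + y0 + (x^3 + x), a multiple of t
y = pmod(y0 ^ pmul(pdiv(num, t), t), t2)     # Hensel lift of y modulo t^2
assert pmod(pmul(y, y) ^ y ^ 0b1010, t2) == 0
for u in (x, x ^ 1, y, y ^ x):       # the S-units x, x+1, y, y+x
    r = ppow(u, 63, t2) ^ 1          # u^63 - 1 modulo t^2
    assert pmod(r, t) == 0
    print(bin(pmod(pdiv(r, t), t)))  # the g(x) with u^63 - 1 = g(x)t mod t^2
\end{verbatim}
The four printed masks are $x+1$, $x$, $x^5+x^2$ and $x^5+x^4+x^3+x^2$, as displayed above.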
The three subgroups of order $2$ of $\mathcal{G}al(L/K)$ correspond to three coverings $X_1$, $X_2$ and $X_3$ of $E$ as in the diagram. Each curve $X_{i}$ has ten rational points over $\mathbb{F}_2$, since all five rational places of $E$ split completely. Since the non-trivial characters of $\mathcal{G}al(L/K)$ have conductor $2Q$, the different of each quadratic extension $\mathbb{F}_{2}(X_{i})/\mathbb{F}_{2}(E)$ has degree $12$ and the three curves have genus $7$ by the Hurwitz formula. Since $N_2(7)=10$ by Theorem $5$ in \cite{serre1}, they are three genus $7$ optimal curves over $\mathbb{F}_2$.\\
To show that the curves are not all isomorphic it suffices to consider the number of places of degree $d$ of each curve $X_{i}$ for $d\leq 4$. Since the rational points of $E$ are all split and $E$ has no places of degree $2$ or $3$, none of the three curves $X_i$ has places of degree $2$ or $3$ either. Therefore a curve $X_{i}$ can only have places of degree $4$ if some places of $E$ of degree $4$ split completely in $X_{i}$. By class field theory, a place $P$ of $E$ splits completely in $X_i$ if and only if the image of the uniformizer of $P$ is trivial in the quotient $R_{i}$ of $R$ which is the ray class group of the covering $X_i \to E$. Consider the index $2$ additive subgroups $H_1=H+\langle x^3 \rangle$, $H_2=H+\langle x^2 \rangle$ and $H_3=H+ \langle x^3+x^2 \rangle$ of $\mathbb{F}_2[x]/(x^6+x^5+1)$. The ray class group $R_{i}$ associated to the curve $X_i$ is isomorphic to the quotient group of $\mathbb{F}_2[x]/(x^6+x^5+1)$ by $H_{i}$ for $i=1,2,3$. We present the results of the computation in Table \ref{tabfour}.
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
$\mathbf{Q_{j}}$ & $\mathbf{u_{j}(x,y)}$ & $\mathbf{g_{j}(x)}$ & $\mathbf{H_{i}}$\\
\hline
$Q_{1}$ & $y+x^3$ & $x^5+x$ & $H_{2}$\\
\hline
$Q_{2}$ & $y+x^3+1$ & $x^4$ & $H_{1}$\\
\hline
$Q_{3}$ & $y+x^3+x^2$ & $x^5+x^3+x$ & $H_{3}$\\
\hline
$Q_{4}$ & $y+x^3+x^2+1$ & $x^4+x^2$ & $H_{3}$\\
\hline
$Q_{5}$ & $x^2+x+1$ & $x^5+x^3+x^2$ & $H_{1}$\\
\hline
\end{tabular}
\caption{\mbox{Splitting behavior of the degree $4$ places of $E$ in each curve $X_{i}$.}}
\label{tabfour}
\end{center}
\end{table}
The first column lists for $j=1,\ldots,5$ the degree $4$ places $Q_{j}$ of $E$ as in Remark \ref{remg1N5}. In the second and third column we display the uniformizers $u_{j}(x,y)$ of the places $Q_{j}$ and their images $g_{j}(x)$ in $\mathbb{F}_{2}[x]/(x^{6}+x^{5}+1)$. In other words we have $u_{j}(x,y)^{63}-1\equiv g_{j}(x)t \; \textrm{mod}\, t^2$. In the last column we write $H_{i}$ for $i=1,2,3$ whenever $g_{j}(x)$ belongs to $H_{i}$. The curve $X_1$ has four places of degree $4$, since both $Q_{2}$ and $Q_{5}$ split. Similarly, $X_{3}$ also has four places of degree $4$. On the other hand the curve $X_2$ has only two places of degree $4$, since only $Q_{1}$ splits. Hence the two curves $X_{1}$ and $X_{2}$ are not isomorphic.
\end{proof}
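The membership tests behind Table \ref{tabfour} (and Table \ref{tabfive} below) amount to linear algebra over $\mathbb{F}_2$. The following Python sketch (ours), with the elements of $\mathbb{F}_2[x]/(x^6+x^5+1)$ encoded as $6$-bit masks, reproduces the last column of Table \ref{tabfour}.
\begin{verbatim}
def reduce_vec(v, basis):            # reduce v against a pivot-indexed basis
    while v:
        p = v.bit_length() - 1
        if p not in basis:
            break
        v ^= basis[p]
    return v

def span(vectors):                   # basis of the F_2-span of the vectors
    basis = {}
    for v in vectors:
        v = reduce_vec(v, basis)
        if v:
            basis[v.bit_length() - 1] = v
    return basis

H = [0b000011, 0b000010, 0b100100, 0b111100]  # generators of H
H1 = span(H + [0b001000])                     # H + <x^3>
H2 = span(H + [0b000100])                     # H + <x^2>
H3 = span(H + [0b001100])                     # H + <x^3 + x^2>
g = {1: 0b100010, 2: 0b010000, 3: 0b101010,   # the g_j(x) of the table
     4: 0b010100, 5: 0b101100}
for j, v in g.items():
    hit = [i + 1 for i, Hi in enumerate((H1, H2, H3))
           if reduce_vec(v, Hi) == 0]
    print(j, hit)                    # prints: 1 [2], 2 [1], 3 [3], 4 [3], 5 [1]
\end{verbatim}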
\begin{remark}\label{remX13}
Let $\sigma$ and $\tau$ be the automorphisms of $E$ described in Remark \ref{remg1N5}. Then the action of $\sigma$ on the places of degree $6$ of $E$ listed in Remark \ref{remg1N5} is given by
\[
T_1 \mapsto T_9 \mapsto T_3 \mapsto T_4 \mapsto T_{10} \mapsto T_1.
\]
Since the elliptic involution $\tau^{2}$ switches $T_{9}$ and $T_{10}$ we have that $\sigma^3 \tau^{2}$ preserves $T_{10}$. In terms of adding points on the elliptic curve $E$ one has $\sigma^{3}\tau^{2}: (x,y) \mapsto (1,1) - (x,y)$. A short computation shows that $\sigma^{3}\tau^{2}$ switches the functions $x^3$ and $x^2+x^3$ modulo the subgroup $H$ of $\mathbb{F}_2[x]/(x^6+x^5+1)$. Therefore the curves $X_1$ and $X_3$ are actually isomorphic.
\end{remark}
For completeness we compute the real Weil polynomials of the optimal genus $7$ curves.
\begin{proposition}
For $i=1,2,3$, the real Weil polynomial $h_{i}(t)$ and the vector $a(X_{i})$ of the curve $X_i$ are
\begin{align*}
&h_{1,3}(t)\!=\!(t\!+\!2)(t^6\!+\!5t^5\!+\!3t^4\!-\!15t^3\!-\!15t^2\!+\!9t\!+\!8), &a(X_{1,3})\!=\![10,0,0,4,2,5,18,\ldots],\\
&h_{2}(t)\!=\!(t\!+\!2)(t^2\!+\!3t\!+\!1)(t^4\!+\!2t^3\!-\!4t^2\!-\!5t\!+\!2), &a(X_{2})\!=\![10,0,0,2,4,11,12,\ldots].
\end{align*}
\end{proposition}
\begin{proof}
By Remark \ref{remX13} the curves $X_{1}$ and $X_{3}$ are isomorphic; therefore, they have the same real Weil polynomial. In the proof of Proposition \ref{g7N10} we already observed that for the curves $X_{1}$ and $X_{2}$ one has $a_{1}=10$ and $a_{2}=a_{3}=0$. We also proved that $a_{4}(X_{1})=4$ while $a_{4}(X_{2})=2$. Similarly to what was done for the places of degree $4$, we consider the splitting behavior of the places of degree $5$ of $E$ listed in Remark \ref{remg1N5} and display the results in Table \ref{tabfive}.
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
$\mathbf{R_{k}}$ & $\mathbf{u_{k}(x,y)}$ & $\mathbf{g_{k}(x)}$ & $\mathbf{H_{i}}$\\
\hline
$R_{1}$ & $y+x^4$ & $x^3+x+1$ & $H_{1}$\\
\hline
$R_{2}$ & $y+x^4+1$ & $x^5+x^4+x$ & $H_{3}$\\
\hline
$R_{3}$ & $y+x^4+x$ & $x^4+x^3+x^2+1$ & $H_{2}$\\
\hline
$R_{4}$ & $y+x^4+x+1$ & $x^5+x^4+x^3+1$ & $H_{2}$\\
\hline
\end{tabular}
\caption{\mbox{Splitting behavior of the degree $5$ places of $E$ in each curve $X_{i}$.}}
\label{tabfive}
\end{center}
\end{table}
Summing up we have $a_{5}(X_1)=2$ and $a(X_2)=[10,0,0,2,4,\ldots]$. Since the degree $6$ place $Q$ of $E$ is the only ramifying place in each curve $X_i$, $i=1,2$, we have that $a_6(X_i)$ has to be odd, while $a_7(X_i)$ has to be even. We can now determine a parametric form for the real Weil polynomial of each curve $X_i$:
\begin{enumerate}
\item[$i)$] For the curve $X_1$ the values of $\#X_1(\mathbb{F}_2)=a_1=10$, $a_2=a_3=0$, $a_4=4$ and $a_5=2$ allow us to determine the following parametric form:
\[
h(t) = t^7+7t^6+13t^5-9t^4-45t^3-21t^2+\alpha t+\beta.
\]
One can check (see the numerical sketch following the proof) that only for $(\alpha,\beta)=(26,16)$ and $(\alpha,\beta)=(27,18)$ do all roots of $h(t)$ lie in the interval $[-2\sqrt{2},2\sqrt{2}]$. Only the first pair gives an odd number of degree $6$ places, namely $a_6(X_1)=5$. In this case $a_7(X_1)=18$.
\item[$ii)$] For the values $a(X_2)=[10,0,0,2,4,\ldots]$ we have the parametric real Weil polynomial
\[
h(t) = t^7+7t^6+13t^5-9t^4-47t^3-33t^2+\alpha t+\beta.
\]
In this case there are three pairs of values of $(\alpha,\beta)$ for which $h(t)$ has all roots in the interval $[-2\sqrt{2},2\sqrt{2}]$: the pair $(3,2)$, which gives $a_6=10$; the pair $(4,4)$, which gives $a_6=11$; and the pair $(5,7)$, for which $a_6=12$. Hence the real Weil polynomial of $X_2$ corresponds to the unique pair $(\alpha,\beta)=(4,4)$ for which $a_6$ is odd. In this case $a_7=12$.
\end{enumerate}
\end{proof}
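The interval checks in the proof above are easily mechanized. The following Python sketch (ours) scans integer pairs $(\alpha,\beta)$ (restricting the search to $|\alpha|,|\beta|\leq 100$ is our assumption) and reports those for which all roots of the two parametric polynomials lie in $[-2\sqrt{2},2\sqrt{2}]$.
\begin{verbatim}
import numpy as np

BOUND = 2 * np.sqrt(2)

def admissible(c3, c2, alpha, beta):
    # roots of t^7 + 7t^6 + 13t^5 - 9t^4 + c3 t^3 + c2 t^2 + alpha t + beta
    r = np.roots([1, 7, 13, -9, c3, c2, alpha, beta])
    return ((np.abs(r.imag) < 1e-6).all()
            and (np.abs(r.real) <= BOUND + 1e-6).all())

for c3, c2, label in [(-45, -21, "X_1"), (-47, -33, "X_2")]:
    pairs = [(a, b) for a in range(-100, 101) for b in range(-100, 101)
             if admissible(c3, c2, a, b)]
    print(label, pairs)  # by the proof: [(26,16),(27,18)] and [(3,2),(4,4),(5,7)]
\end{verbatim}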
\section{Introduction\label{intro}}
Standard survival models assume that all individuals experience the event of interest eventually (cf.~\citet{kalbprent:2002}). However, this assumption is not always tenable since, for example, some diseases may require specific biological and/or lifestyle traits in place to activate, or it may be that immunity is a consequence of successful curative treatment (or, indeed, a combination of both treatment and pre-treatment attributes). The individuals who will not experience the event are referred to as \emph{cured} or \emph{non-susceptible}, and, although the former is perhaps suggestive of treatment, both are used interchangeably in the biostatistics literature. While there may be scenarios where it is possible to medically examine individuals to determine who is cured, typically the ``cure status'' is unknown and inference about cure mechanisms is based on the results of a survival study. Note that, as the observation period increases in such a study, the sample becomes composed of relatively more censoring times since the cured individuals never contribute event times. In practice then, the presence of cured individuals can be visualized in terms of the Kaplan-Meier curve \citep{kapmeier:1958}, which will plateau as time increases (instead of tending towards zero).
The importance of estimating the cured proportion was recognized by \citet{boag:1949} and \citet{berkgage:1952} who made use of a mixture model. Here, the overall survival distribution is improper with mixture components given by: (i) a proper survival distribution for uncured individuals (usually referred to as the \emph{latency} distribution), and (ii) a point mass at time infinity for cured individuals. \citet{farewell:1977,farewell:1982} extended this work to incorporate covariates in both the latency component (proportional hazards Weibull model) and the cure component (logistic regression model). This mixture model structure permits quite straightforward interpretation and is the most widely used cure model. However other formulations exist, notably the bounded cumulative hazard model \citep{tsodikov:1998,tsodikovetal:2003}, which can be motivated through an underlying mechanistic model for tumor growth, or the compound Poisson multiplicative frailty model \citep{aalen:1992,duchateau:2007}, which assigns a zero frailty (and, hence, zero hazard) to a mass of individuals (i.e., cured individuals). Studies concerning mixture cure models can also be traced in the labor economics literature, under the name \emph{split population duration models} \citep{schmidtdryden:1989}, and in the reliability literature under the name \emph{limited-failure population models} \citep{meeker:1987}.
While much early work in cure modelling assumed a parametric latency distribution (e.g., \citet{boag:1949} used a log-normal model, and \citet{farewell:1977,farewell:1982} used a Weibull model), non/semi-parametric approaches are typically preferred in the analysis of time-to-event data. Furthermore, \citet{yuetal:2004} showed that cure estimates can be sensitive to the choice of latency distribution -- although more flexible parametric cure models are more robust \citep{peng:1998, yamaguchi:1992}.
The semi-parametric cure model consists of a parametric logistic cure regression component and semi-parametric proportional hazards (PH) \citep{pengdear:2000, syetal:2000} or accelerated failure time (AFT) latency model \citep{litaylor:2002, zhangpeng:2007}; in both PH and AFT cases, the baseline latency model is an unspecified function, and estimation is based on the EM algorithm \citep{dempster:1977} in combination with (modified) partial likelihood \citep{cox:1972,cox:1975} or rank regression \citep{ritov:1990,tsiatis:1990} respectively. Note that penalized splines have also been considered for the baseline latency model \citep{corbiereetal:2009}. Of course, structural model assumptions are still made in the way that covariates enter (i.e., PH or AFT), and, estimation of the cure component can still be sensitive to such choices.
Our approach differs from the aforementioned.
First note that, if the cure status were directly observable, one would simply apply standard binary logistic (or probit) regression. Because, typically, the cure status is not available (e.g., medically or by finite-time survival studies), we propose replacing it with a ``proxy'' or ``synthetic'' value, and then proceeding in the usual way. In particular, we generate these synthetic values using inverse probability censoring weighting (IPCW) arguments \citep{robinsfink:2000, van2003unified}. This approach obviates the need for a latency model -- although IPCW does require estimation of the censoring distribution. At first sight, we trade one missing data framework (EM) for another (IPCW), and latency estimation for censoring estimation. However, in addition to being an interesting new cure estimation process in its own right, our proposal has its advantages. Unlike the existing EM approaches, which are not so easily extended to more flexible latency models (beyond PH and AFT), our general framework does not depend on any particular specification for the censoring model, i.e., this could be very flexible. Furthermore, once the synthetic cure status has been computed (in one initial step), one may avail of standard, fast (GLM) estimation and penalized variable selection procedures thereafter.
The remainder of this article is organized as follows. In Section \ref{sec:prelim} we introduce a new result which forms the basis of the proposed likelihood estimation procedure of Section \ref{sec:est}. In Section \ref{sec:varselect} we present a penalized version of this likelihood for the purpose of variable selection. Asymptotic properties of our estimation procedure are given in Section \ref{sec:ass}, along with empirical evidence via simulation in Section \ref{sec:sim}. Real data examples are given in Section \ref{sec:data}. We close with some remarks in Section \ref{sec:disc}.
The proofs are postponed to the Appendix.
\section{Preliminaries\label{sec:prelim}}
\setcounter{equation}{0}
\setcounter{thm}{0}
Let $T \in (0, \infty]$ denote the survival time of interest, and note in particular that, in contrast to standard survival models, the support includes the possibility that $T=\infty$ to allow for cases where the event never occurs, i.e., cured (or non-susceptible) individuals. We also introduce the cure indicator $B = \mathbbm{1}(T=\infty) \sim \text{Bernoulli}(\pi)$ so that $\pi$ is the cure probability. Furthermore, $T_0 = T\mid(B\!=\!0)$ is survival time for an uncured individual (the latency time) with survivor function $S_{T_0}(t) = \Pr(T_0 > t)$ whose support is contained in $(0, \infty)$, i.e., $S_{T_0}(t)$ is simply a standard survivor function. Since $\Pr(T > t \mid B) = (1-B) S_{T_0}(t) + B$, we have that $S_T(t) = \mathbb{E}\{\Pr(T > t \mid B)\} = (1-\pi) S_{T_0}(t) + \pi$ which is the so-called ``mixture cure'' model; it is a finite mixture of cured and uncured individuals. Note that $S_T(\infty) = \pi$ so that, in the time limit (when $\pi>0$), there is a proportion of individuals who do not experience the event, and $S_T$ has an asymptote.
Generally we will have a vector of covariates, $X$,
and it is of particular interest to model the relationship between $X$ and $\pi$. Thus, generalizing the above mixture cure model to covariate dependence, we arrive at
\begin{equation*}
S_T(t\mid X) = \{1-\pi(X)\} S_{T_0}(t\mid X) + \pi(X)
\end{equation*}
where $\pi(X) = \pi(X;\theta) \in (0,1]$ is the cure regression function, and $\theta$ is a vector of parameters which describe the relationship between $X$ and the cure probability. Typically, although not necessarily, covariates enter through a linear predictor, i.e., $\pi(X;\theta)$ is a given function of $X^\top\theta$ where $X = (X_{(0)}\equiv1, X_{(1)}, \ldots, X_{(p)})^\top$ is the covariate vector and $\theta = (\theta_{(0)},\theta_{(1)},\ldots,\theta_{(p)})^\top $ is the vector of associated regression coefficients. (Here and in the following, for any matrix $A,$ $A^\top$ denotes its transpose.) In our applications, we will make use of a logistic regression function, $\pi(X) = 1/\{1 + \exp(- \theta^\top X)\}$, but one may, of course, use alternative parametric forms (e.g., probit or complementary log-log, or forms in which covariates enter in non-linear ways). Note that we have not suggested any particular model for $S_{T_0}(t\mid X)$ as, in our proposed estimation procedure, this function is completely unspecified.
In the majority of practical applications, we do not observe the cure status, $B$. We observe individuals over a (possibly fixed) time period during which some will experience the event and others will not. Those who do not experience the event during the observation period are censored -- but this does not mean they have been cured since they may experience the event at a later stage outside of the observation period. We therefore introduce a censoring time $C \in (0, \infty)$ with survivor function $S_C(t)$. In contrast to $T$, the support of $C$ does not include infinity since, practically, observation windows are finite. Let $Y = T \wedge C$ be the observed time (where $\wedge$ is the minimum operator), and $\Delta = \mathbbm{1}(T \le C) = (1-B)\mathbbm{1}(T_0 \le C)$ is the event indicator. Therefore, due to censoring, neither $T$ nor $B$ are observed so that inference must be made through $Y$ and $\Delta$.
We will assume the following:
\begin{align}
T_0 &\perp C && \hspace{-4.5cm} \mid X, \label{ass1}\\[0.2cm]
B &\perp (T_0, C) && \hspace{-4.5cm} \mid X \label{ass2},
\end{align}
where (\ref{ass1}) is the standard independence assumption made throughout the survival literature, and (\ref{ass2}) is introduced in the cure context to ensure that the cure regression model is identifiable. In particular, Assumptions (\ref{ass1}) and (\ref{ass2}) guarantee that $T \perp C \mid X$. See Lemma \ref{indep_eq} in the Appendix. With these assumptions in place, it can then be shown that
\begin{equation}
\mathbb{E}\left(\left.\frac{\Delta}{S_C(Y-\mid X)} ~\right| X\right) ~=~ \mathbb{E}(1-B\mid X) ~=~ 1 - \pi(X) \label{expectcure}
\end{equation}
where, for any possible value, $x$, of the covariate vector, $$S_C(t-\mid x) = \Pr(C \ge t \mid X=x).$$ This result
is the core of our estimation scheme which is described in Section \ref{sec:est}.
In fact, (\ref{expectcure}) is a special case of the more general result
\begin{equation}
\mathbb{E}\left(\left.\frac{\Delta r(Y,X)}{S_C(Y-\mid X)} ~\right| X\right) = \mathbb{E}\{r(T_0,X)\mid X\} \{1-\pi(X)\} \label{expectcure2}
\end{equation}
but with $r(\cdot,\cdot)=1$, and, indeed, it is (\ref{expectcure2}) which is proved in Lemma \ref{core_id}.
It is worth highlighting that (\ref{expectcure2}) is an application of the Inverse-Probability Censoring Weighting (IPCW) approach \citep{robinsfink:2000} extended to the case where a cured proportion exists; (\ref{expectcure2}) reduces to the usual IPCW approach when $\pi(X)\equiv0$.
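Identity (\ref{expectcure}) can be checked by simulation. The following Python sketch (ours) takes a logistic cure probability, a unit exponential latency time and, for simplicity only, censoring independent of $X$ with known $S_C(t)=e^{-t/2}$; the two printed sample means both estimate $\mathbb{E}\{1-\pi(X)\}$ and agree up to Monte Carlo error.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
x = rng.normal(size=n)
pi = 1 / (1 + np.exp(1.0 - x))       # logistic cure probability pi(x)
b = rng.binomial(1, pi)              # cure status B
t0 = rng.exponential(1.0, size=n)    # latency time T_0 (any law would do)
c = rng.exponential(2.0, size=n)     # censoring time, S_C(t) = exp(-t/2)
t = np.where(b == 1, np.inf, t0)     # T is infinite for cured individuals
y = np.minimum(t, c)
delta = (t <= c).astype(float)
# C is continuous, so S_C(Y-) = S_C(Y) = exp(-Y/2) on the event {Delta = 1}
print(np.mean(delta * np.exp(y / 2)), np.mean(1 - pi))
\end{verbatim}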
\section{Estimation and inference\label{sec:est}}
\setcounter{equation}{0}
If iid replicates of $(B_i,X_i)$ were observed, we would simply have the standard Bernoulli log-likelihood,
\begin{equation}\label{likeB}
\ell(\theta) = \sum_{i=1}^n \left[B_i \log \pi_i (\theta) + (1-B_i) \log(1- \pi_i (\theta) )\right],
\end{equation}
where $\pi_i (\theta)= \pi(X_i;\theta)$.
Of course, (\ref{likeB}) is not operational in the cure context since $B_i$ is not observed but it serves as our motivation for likelihood estimation and inference on $\theta$ based on iid replicates $(Y_i,\Delta_i,X_i)$, $1\leq i \leq n$.
\subsection{Likelihood estimation}
We now define
\begin{equation}\label{def_Bistar}
B_i^*(S_C) = B_i^*(Y_i, \Delta_i, X_i, S_C) = 1 - \frac{\Delta_i}{S_C(Y_i-\mid X_i)},
\end{equation}
which, from (\ref{expectcure}), is such that $\mathbb{E}\{B_i^*(S_C)\mid X_i\} = \mathbb{E}(B_i\mid X_i)$. Replacing $B_i$ with $B_i^*(S_C)$ in (\ref{likeB}), we obtain
the log-likelihood function which we propose in the case where \emph{$S_C$ is known}, that is
\begin{equation}
\ell^*(\theta) = \sum_{i=1}^n\left[B_i^*(S_C) \log \pi_i (\theta)+ \{1- B_i^*(S_C)\} \log(1-\pi_i (\theta) )\right] \omega_i. \label{likedata}
\end{equation}
Unlike (\ref{likeB}), this likelihood is formed using the observable quantities $Y_i$ and $\Delta_i$ rather than the unobservable $B_i$. Here, $\omega_i=\omega(X_i)$ are positive weights. As will be explained in the sequel, the weight function $\omega(X)$ is a technical device that will be needed when deriving general asymptotic results. However, we anticipate that in practically all applications $\omega_i \equiv 1$. The score function,
$U^*(\theta)= \partial \ell^*(\theta) / \partial \theta = \sum_{i=1}^n U^*_i(\theta)$ where
$$
U_i^*(\theta) = \frac{\left[B_i ^*(S_C) - \pi_i (\theta)\right]\omega_i}{\pi_i (\theta) (1- \pi_i (\theta) )}\, \frac{\partial \pi_i (\theta) }{\partial \theta }\, \in\mathbb{R}^{p+1},
$$
is unbiased due to property (\ref{expectcure}); note that, in the case of a logistic cure regression model, we obtain the simplified expression
$ U_i^*(\theta) = [B_i^*(S_C)- \pi_i (\theta) ] X_i \omega_i $ with $ \pi_i (\theta) = 1/\{1 + \exp(- \theta^\top X_i)\} $. Furthermore, using standard inequalities, it can be shown that, when the cure regression model is identifiable,
\begin{equation}\label{entropy_ineg}
\mathbb{E}\{\ell^*(\theta)\} < \mathbb{E}\{\ell^*(\theta_0)\} \qquad \forall \theta \ne \theta_0,
\end{equation}
where $\theta_0$ is the true parameter vector. See Lemma \ref{lik_ineq_proof} in the Appendix. Hence, $\ell^*(\theta)$ is a legitimate criterion for estimation and inference on $\theta$.
Note that we have explicitly written $B^*_i(S_C)$ as a function of $S_C$ (while its dependence on $Y_i$, $\Delta_i$, and $X_i$ is implicit) since we must estimate $S_C$ in practice, i.e., we will use $B_i^*(\hat S_C)$. In applications of IPCW (recall that (\ref{expectcure}) is based on IPCW arguments), $S_C$ is typically estimated using a Kaplan-Meier or a Cox model, but one could also use more flexible non-parametric regression models such as those of \citet{aalen:1980} or \citet{beran:1981}; in fact, the asymptotic theory of Section \ref{sec:ass} only requires that the chosen estimator for $S_C$ has an iid representation.
Thus, while our approach does not require specification or estimation of $S_{T_0}$, we must estimate $S_C$, and this can essentially be done in an arbitrarily flexible way.
Therefore, for practical purposes, we propose the maximum likelihood estimator defined as
\begin{equation}\label{mle}
\hat \theta = (\hat\theta_{(0)},\hat\theta_{(1)},\ldots,\hat\theta_{(p)})^\top = \arg\max_{\theta} \hat \ell^*(\theta)
\end{equation}
where
\begin{equation}
\hat \ell^*(\theta) = \sum_{i=1}^n\left[B_i^*(\hat S_C) \log \pi_i (\theta) + \{1- B_i^*(\hat S_C)\} \log(1-\pi_i (\theta) )\right] \omega_i \label{likedata_b}.
\end{equation}
The weights $\omega_i = \omega(X_i)$ will serve in theory to control the behavior of general estimates of $B_i^*(\hat S_C)$ in regions of low covariate density.
Of course, (\ref{likedata_b}) is the familiar likelihood function used in logistic regression, (\ref{likeB}), (hence, GLM estimation procedures can be used) but with $B_i$ replaced with $\hat B_i^* = B_i^*(\hat S_C)$. The quantity $\hat B^*$ plays a similar role to a ``synthetic observation'' as used by \citet{kouletal:1981} (see also \citet{delecroixetal:2008} and references therein) in least squares estimation for censored survival data. However, their response variable is a survival time, whereas we have a binary cure indicator. It is worth highlighting that current estimation procedures for semi-parametric cure models also involve replacing $B_i$ in (\ref{likeB}) with an expected value, but in an iterative EM fashion \citep{pengdear:2000,syetal:2000,litaylor:2002, zhangpeng:2007}, whereas $B_i^*(\hat S_C)$ is computed in one step followed by maximization of (\ref{likedata_b}). The reason for this difference comes from the modeling approach. In the existing literature, models are assumed for both the conditional survival time of the uncured individuals, $S_{T_0}(t\mid X)$, and the cure proportion, $\pi(X)$, that together completely determine $S_T(t\mid X)$, while no assumptions are made about $S_C(t\mid X)$. Since $S_T(t\mid X)$ is directly identifiable from the observed data, the EM iterations between $S_{T_0}(t\mid X)$ and $\pi(X)$ represent a natural way to estimate these assumed model components. In contrast, our approach uses the fact that $S_C(t\mid X)$ can also be identified from the data, and, thus, we neither need to impose assumptions on $S_{T_0}(t\mid X)$ nor use an iterative procedure.
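To fix ideas, the whole procedure can be sketched in a few lines of Python; the function names below are ours, the weights are taken as $\omega_i\equiv 1$, and $S_C$ is estimated by the Kaplan-Meier estimator of the censoring distribution, ignoring covariates (any of the more flexible estimators mentioned above could be substituted). Since, under the logistic link, the Hessian of (\ref{likedata_b}) does not involve $B_i^*(\hat S_C)$, the criterion is concave and Newton-Raphson applies.
\begin{verbatim}
import numpy as np

def km_censoring_left(y, delta, t_eval):
    # left limit S_C(t-) of the Kaplan-Meier estimator of the censoring
    # distribution; the "events" here are the censored observations
    times = np.unique(y[delta == 0])
    if times.size == 0:
        return np.ones_like(np.asarray(t_eval, dtype=float))
    surv, steps = 1.0, []
    for s in times:
        surv *= 1.0 - np.sum((y == s) & (delta == 0)) / np.sum(y >= s)
        steps.append(surv)
    steps = np.array(steps)
    idx = np.searchsorted(times, t_eval, side="left") - 1  # jumps strictly < t
    return np.where(idx < 0, 1.0, steps[np.maximum(idx, 0)])

def fit_cure(y, delta, X, n_iter=25):
    # maximize the synthetic-observation likelihood for a logistic cure
    # regression; X should contain a leading column of ones
    Sc = np.maximum(km_censoring_left(y, delta, y), 1e-10)  # guard against 0
    Bstar = 1.0 - delta / Sc                                # synthetic status
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):                                 # Newton-Raphson
        pi = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (Bstar - pi)
        hess = (X * (pi * (1.0 - pi))[:, None]).T @ X       # minus the Hessian
        theta += np.linalg.solve(hess, grad)
    return theta
\end{verbatim}
For instance, with two covariates one would call \texttt{fit\_cure(y, delta, np.column\_stack([np.ones(len(y)), x1, x2]))}.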
\subsection{Penalized likelihood for variable selection\label{sec:varselect}}
Selection of important variables has traditionally been carried out using best subset and stepwise procedures. However, these discrete approaches (variables are either in or out) are computationally intensive, their theoretical properties are not so well characterized \citep{fanli:2001,fanli:2002}, coverage of confidence intervals for selected variables can be poor \citep{hurvich:1990, zhang:1992}, and they are unstable in the sense that small changes to the data can result in large changes to the selected model and coefficients \citep{breiman:1996}. More modern approaches are based on penalized likelihood methods such as the least absolute shrinkage and selection operator (lasso) \citep{tibshirani:1996}, and these remove variables by estimating their coefficients as zero. Furthermore, the adaptive lasso (alasso) \citep{zou:2006} enjoys the so-called oracle property (i.e., asymptotically, the estimates and standard errors are the same as if the true submodel was known and had simply been fitted without variable selection); the non-convex smoothly clipped absolute deviation (SCAD) penalty also possesses the oracle property \citep{fanli:2001}. Interestingly, the adaptive lasso is asymptotically related to (a continuous version of) best subset selection \citep{zhangetal:2007}.
The situation we consider for variable selection is the one where $\pi(X;\theta)$ is a given function of $X^\top \theta$ and thus $X\in \mathbb{R}^{p+1}$ with $X_{(0)}\equiv 1$. Then, by construction, the incorporation of a penalty term in our setup is straightforward, with the alasso estimator given by
\begin{equation}\label{alasso_est}
\hat \theta_\lambda =(\hat \theta_{\lambda ,(0)},\hat \theta_{\lambda ,(1)},\ldots,\hat \theta_{\lambda ,(p)})^\top = \arg\max_{\theta\in\Theta } \hat \ell^*_{\lambda}(\theta)
\end{equation}
where
\begin{equation}
\hat \ell^*_{\lambda}(\theta) = \hat \ell^*(\theta) - \lambda \sum_{j=1}^p w_j |\theta_{(j)}| \label{alasso_lik_def}
\end{equation}
is the penalized likelihood function, $\hat \ell^*(\theta)$ is defined in (\ref{likedata_b}), $\lambda \ge 0$ is the associated penalty parameter, and $w_j \ge0$ are (potentially adaptive) weights, $1 \le j \le p$.
Here, as is usual, the intercept, $\theta_{(0)}$, is not penalized in (\ref{alasso_est}), and, furthermore, typically, the covariates will be standardized. Setting $w_j = 1$ $\forall j$ yields the lasso penalty which penalizes all coefficients equally, while $w_j = 1/|\hat\theta_{(j)}^{(0)}|^\gamma$ for some $\gamma >0$ yields the alasso penalty (we will set $\gamma=1$ as is most common in practice). In the latter case $\hat\theta_{(j)}^{(0)}$ may be any consistent estimator of $\theta_{(j)}$, and, typically, $\hat\theta_{(j)}^{(0)}=\hat\theta_{(j)}$, where $\hat\theta_{(j)}$ is the $j$th unpenalized estimator from (\ref{mle}). In Section \ref{sec:simselect} we provide details on implementation aspects of the alasso in our context.
\section{Asymptotic results\label{sec:ass}}
\setcounter{equation}{0}
\setcounter{thm}{0}
Our asymptotic results are deduced under some minimal moment assumptions on the observed variables complemented by some mild high-level assumptions on the cure regression model and on the model for the censoring variable. These conditions are quite natural in the context of right-censored data when covariates are present and are to be verified on a case-by-case basis according to the context of the application. In this section we use the notation $\pi(\theta ) = \pi (X;\theta),$
$\partial \pi(\theta)/ \partial \theta= \partial \pi(X; \theta)/ \partial \theta$,
and $B^*_i(S_C)$ (hence, $B^*_i(\hat S_C)$) as defined in equation (\ref{def_Bistar}).
\begin{assumption}\label{dgp} \emph{(The data)}
The observations $(Y_i,\Delta_i,X_i),$ $1\leq i \leq n,$ are independent replications of $(Y,\Delta,X)\in\mathbb{R}\times \{0, 1\} \times \mathcal{X}$, where $\mathcal{X}$ is some covariate space. Moreover, $\mathbb{E}[\Delta/S_C (Y-\mid X)]<\infty.$
\end{assumption}
Let $\mathcal{M}=\{\pi(\theta ): \theta \in\Theta \subset \mathbb{R}^{p+1}\}$ be a generic parametric cure regression model, that is a set of functions of the covariate vector $X$ indexed by $\theta$ in some parameter space $\Theta$. For the logistic cure model, $\pi(\theta) = 1/\{1 + \exp(- \theta^\top X)\}.$
\subsection{Consistency}
For the definition of a Glivenko-Cantelli class of functions, we refer to the book by \citet{van2000asymptotic}.
\begin{assumption}\label{reg_cure_ass}
\emph{(The cure regression model)}
\begin{enumerate}
\item\label{ass_ome} The weight $\omega(X)$ is bounded, almost surely nonnegative and has a positive expectation.
\item\label{ass_truth} There exists $\theta_0\in\Theta$ such that $\Pr(T=\infty \mid X) = \pi(\theta_0)$. Moreover, there exists $0<c <1/2$ such that, for any $\theta\in\Theta$, $\Pr(c\leq \pi(\theta) \leq 1-c) = 1.$
\item\label{ass_sep} For any $\varepsilon >0,$ $\inf_{\|\theta-\theta_0\| >\varepsilon } \mathbb{E}[|\pi(\theta)- \pi(\theta_0)|\omega (X)] >0$.
\item\label{ass_GC} The model $ \mathcal{M}$ is a $\mathbb{P}_{X}-$Glivenko-Cantelli class of functions of $X$ with constant envelope.
\end{enumerate}
\end{assumption}
\begin{assumption}\label{ulln_ass}
\emph{(Uniform law of large numbers)}
The estimator $\hat S_C (\cdot\mid \cdot)$ satisfies the law of large numbers uniformly over the class of the logit transformations of the functions in $\mathcal M$, that is
\begin{equation*
\sup_{\theta \in\Theta} \left| \frac{1}{n} \sum_{i=1}^{n} \left[ B^*_i(\hat S_C) - B^*_i( S_C) \right] \omega(X_i) \log\left( \frac{\pi_i(\theta)}{1-\pi_i(\theta)} \right) \right| =o_{\mathbb{P}}(1).
\end{equation*}
\end{assumption}
\quad
For simplicity, we consider a bounded weight $\omega(X)$ and assume that the cure regressions in the model stay uniformly away from 0 and 1. The condition in Assumption \ref{reg_cure_ass}-\ref{ass_sep} is a slightly reinforced identification condition that will guarantee that $\theta_0$ is a well-separated maximum of the expectation of the likelihood.
It will be satisfied for instance in the logistic or probit regression as soon as the covariates are not redundant, that is whenever $\mathbb{E}[XX^\top\omega (X)]$ is an invertible matrix.
Let us provide some mild sufficient conditions implying the uniform convergence of Assumption \ref{ulln_ass}. These sufficient conditions involve a threshold that is commonly used in the literature on cure models. It is typically justified as representing the total follow-up of the study, and it usually appears to have been considered independent of the covariates. However, we allow it to depend on the covariates in an arbitrary way.
\begin{lem}\label{suff_cdt_ulln}
Assume that there exists $\tau(x)$ such that, for any $x$, $\Pr(T_0 > \tau(x)) = 0 $ and $\inf_{x \in\mathcal{X}, \;\omega(x)>0} S_C(\tau(x)-\mid x) >0. $ Moreover, Assumptions \ref{reg_cure_ass}-\ref{ass_truth} and \ref{reg_cure_ass}-\ref{ass_GC} hold true. If
\begin{equation}\label{suff_c1}
\sup_{x \in\mathcal{X}, \;\omega(x)>0} \;\sup_{y\leq\tau(x)}\left|\hat S_C(y-\mid x) - S_C(y-\mid x) \right| = o_{\mathbb{P}}(1) ,
\end{equation}
then the uniform convergence in Assumption \ref{ulln_ass} holds true.
\end{lem}
\quad
The common parametric, semiparametric and nonparametric estimators $\hat S_C$ satisfy condition (\ref{suff_c1}). Several examples are provided in the monographs by \citet{borgan1995statistical} and \citet{kalbprent:2002}, and, for convenience, some examples are recalled in the Appendix. The consistency of our maximum likelihood estimator is stated in the following results.
\begin{proposition}\label{conv_th1}
If Assumptions \ref{dgp}, \ref{reg_cure_ass} and \ref{ulln_ass} hold true, then $\widehat \theta -\theta_0=o_{\mathbb{P}}(1)$.
\end{proposition}
\begin{corollary}\label{conv_th1c}
If Assumptions \ref{dgp} and \ref{reg_cure_ass} hold true, and condition (\ref{suff_c1}) is met, then
$\widehat \theta -\theta_0=o_{\mathbb{P}}(1)$.
\end{corollary}
\subsection{Asymptotic normality}
\begin{assumption}\label{reg_cure_ass_2}
\emph{(The cure regression model)}
\begin{enumerate}
\item For any $x\in\mathcal{X}$, the map $\theta\mapsto \pi (x; \theta)$ is twice continuously differentiable.
\item\label{A_pos_def} The true value $\theta_0$ is an interior point of $\Theta$, $$\mathbb{E}\left[\left\| \frac{\partial \pi (\theta_0)}{\partial \theta} \right\|^2 \right]<\infty$$ and the $(p+1)\times (p+1)-$matrix
$$
A(\theta_0)=\mathbb{E}\left[ \frac{\omega(X)}{\pi (\theta_0)[1-\pi (\theta_0)]} \frac{\partial \pi (\theta_0)}{\partial \theta} \frac{\partial \pi (\theta_0)}{\partial \theta} ^\top\; \right]
$$
is positive definite.
\item\label{ULLN_as_nor} For any $0\leq k\leq l \leq p,$ the families of functions of $x$ indexed by $\theta$
$$
\mathcal{F}_{1,kl}\!= \!\left\{ \frac{\partial^2 \pi}{\partial \theta_{(k)} \partial \theta_{(l)} }(x;\theta) : x\!\in\mathcal{X}, \theta\! \in \Theta \! \right\}\! , \quad \! \!\mathcal{F}_{2,kl}\!= \!\left\{\!\! \left(\! \frac{\partial \pi}{\partial \theta_{(k)} } \; \frac{\partial \pi}{\partial \theta_{(l)}} \! \right) \!\! (x;\theta) : x\!\in\mathcal{X}, \theta\! \in \Theta \! \right\}
$$
are $\mathbb{P}_{X}-$Glivenko-Cantelli classes of functions of $X$ with integrable envelopes.
\end{enumerate}
\end{assumption}
\begin{assumption}\label{uclt_ass2}
\emph{(I.I.D. representation)}
Let $\varphi(X)$ be a vector-valued function such that $\mathbb{E}\{\| \varphi(X)\|^2\}<\infty$. Then there exists a zero-mean vector-valued function $\mu_C^\varphi (Y,\Delta, X)$, depending on $\varphi(X)$, such that $\mathbb{E}\{\| \mu_C^\varphi (Y,\Delta, X)\|^2\}<\infty$ and
$$
\frac{1}{n} \sum_{1\leq i\leq n} \left[ B^*_i(\hat S_C) - B^*_i( S_C) \right]\varphi (X_i) = \frac{1}{n} \sum_{1\leq i\leq n} \mu_C^\varphi(Y_i,\Delta_i, X_i) + o_{\mathbb{P}}(n^{-1/2}).
$$
\end{assumption}
\quad
Assumption \ref{reg_cure_ass_2} introduces mild standard regularity conditions on the cure regression model. In particular, Assumption \ref{reg_cure_ass_2}-\ref{ULLN_as_nor} yields the uniform law of large numbers and guarantees that the remainder terms in the standard Taylor expansion used to prove asymptotic normality for the MLE are uniformly negligible. Such an assumption on the complexity of the classes of first and second order derivatives of the functions in the model is satisfied by standard parametric models such as the logit and probit models. As an alternative to Assumption \ref{reg_cure_ass_2}-\ref{ULLN_as_nor}, we could impose condition (\ref{suff_c1}) and slightly stronger regularity conditions on the model $\mathcal{M}$. The details are provided at the end of the proof of Proposition \ref{prop_tcl}. A property as required in Assumption \ref{uclt_ass2}
is very common in survival analysis and is related to the so-called \emph{Kaplan-Meier integrals} (cf.~\citet{stute96} and \citet{KMintegral_laan}). In the Appendix we provide several examples of models and estimators $\hat S_C$ for which Assumption \ref{uclt_ass2} holds true, namely Kaplan-Meier, conditional Kaplan-Meier, Cox proportional hazard, proportional odds and transformation models. In these examples, except for the case of the conditional Kaplan-Meier estimator, the weights $\omega_i=\omega(X_i)$ will always be set equal to 1.
In general, the expression of the function $\mu_C^\varphi(Y,\Delta, X) $ in Assumption \ref{uclt_ass2} depends on the joint law of the observations $(Y,\Delta, X)$. This function contributes to the asymptotic variance of our MLE $\hat \theta$ that, hence, will be different from the asymptotic variance of the infeasible MLE defined with $B^*( S_C)$ instead of $B^*( \hat S_C)$.
\begin{proposition}\label{prop_tcl}
Assume the conditions of Assumptions \ref{dgp}, \ref{reg_cure_ass} and Lemma \ref{suff_cdt_ulln} are met. Moreover, Assumption \ref{reg_cure_ass_2} holds true, and Assumption \ref{uclt_ass2} is satisfied with
$$
\varphi(X)= \frac{\omega(X)}{ \pi (\theta_0) [1 - \pi (\theta_0) ] } \; \frac{\partial \pi(\theta_0) }{\partial \theta} .
$$
Then
$$
\hat \theta -\theta_0= A(\theta_0) ^{-1} \frac{1}{n} \sum_{i=1}^n \left\{ \mu(Y_i,\Delta_i, X_i;\theta_0) + \mu_C^\varphi (Y_i,\Delta_i, X_i;\theta_0) \right\} + o_{\mathbb{P}}(n^{-1/2}).
$$
where
$$
\mu(Y,\Delta, X;\theta_0) = \frac{ \left[B^*( S_C) -\pi (\theta_0)\right]\omega(X)}{\pi (\theta_0)[1-\pi (\theta_0)]} \; \frac{\partial \pi(\theta_0) }{\partial \theta}
$$
and $\mu_C^\varphi (Y,\Delta, X)$ is the zero-mean vector-valued function from Assumption \ref{uclt_ass2}. In addition,
$$
\sqrt{n}\left( \hat \theta -\theta_0 \right) \rightsquigarrow N_{p+1}\left(0, A(\theta_0) ^{-1}V(\theta_0) A(\theta_0) ^{-1}\right)
$$
with $V(\theta_0)= \mbox{\rm Var}\left\{\mu(Y,\Delta, X ;\theta_0) + \mu_C^\varphi(Y,\Delta, X ;\theta_0)\right\}$. ($\rightsquigarrow$ denotes convergence in law.)
\end{proposition}
\quad
If suitable estimates of the vectors
$ \mu(Y_i,\Delta_i, X_i;\theta)$ and $ \mu_C^\varphi (Y_i,\Delta_i, X_i;\theta)$ are available, say, $ \hat \mu_i ( \theta)$ and $ \hat \mu_{C,i}^\varphi (\theta )$, then $V(\theta_0) $ could be simply estimated by sample covariance, $n^{-1} \sum_{i=1}^n [\hat \mu_i (\hat \theta) + \hat \mu_{C,i}^\varphi (\hat \theta )]^{\bigotimes2}$ where $a^{\bigotimes2} = a a^\top$. Meanwhile, $A( \theta_0) $ could also be estimated by standard methods, and thus one could derive an estimate of the variance of $\hat \theta$. However, the estimates $ \hat \mu_i ( \theta)$ and $ \hat \mu_{C,i}^\varphi (\theta )$ are sometimes difficult to build. Alternatively, one can make use of the nonparametric bootstrap to approximate the law of $\hat \theta$; indeed, we use this approach in our simulation studies and real data analysis.
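For instance, reusing the \texttt{fit\_cure} sketch of Section \ref{sec:est}, percentile bootstrap confidence intervals can be obtained along the following lines (again a sketch, ours):
\begin{verbatim}
import numpy as np

def bootstrap_ci(y, delta, X, n_boot=399, level=0.95, seed=0):
    # nonparametric bootstrap: resample the triplets (Y, Delta, X) and
    # re-estimate both S_C and theta on each resample
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        boot[b] = fit_cure(y[idx], delta[idx], X[idx])
    q = (1 - level) / 2
    return np.quantile(boot, [q, 1 - q], axis=0)
\end{verbatim}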
\subsection{Oracle properties for the adaptive lasso}
In this section we
prove consistency in variable selection for the adaptive lasso proposed in Section \ref{sec:varselect}. Moreover, we prove the asymptotic normality for the true subset of coefficients. That is, we extend Theorem 4 of \citet{zou:2006} to the cure regression context.
Let $\theta_0 = (\theta_{0,(0)},\theta_{0,(1)},\ldots,\theta_{0,(p)})^\top$ be the true value of the cure regression parameter vector. Assume the true model has a sparse representation. Let $\mathcal{A} = \left\{j : 1\leq j\leq p , \theta_{0,(j)}\neq 0 \right\}\cup\{0\}$. Without loss of generality, suppose $\mathcal{A} = \{0,1,\ldots,p_0\}$ with $p_0<p. $ Below, the subscript $\mathcal{A}$ is used to define the subvectors or blocks in matrices with the components corresponding to the indices in the set $\mathcal{A}$. That is, $\theta_{\mathcal{A},0}$ is the subvector of the first $p_0+1$ components of $\theta_0 $, $\partial \pi(\theta_0)/\partial \theta_{\mathcal{A}}$ denotes the vector of $p_0+1$ partial derivatives with respect to the first $p_0 + 1$ components of $\theta$, and $A_{\mathcal{A}}(\theta_{0})$ is the upper-left block of dimension $(p_0+1)\times (p_0+1)$ of the $(p+1)\times (p+1)-$matrix $A(\theta_0)$ defined in Assumption \ref{reg_cure_ass_2}-\ref{A_pos_def}.
\begin{proposition}\label{oracle}
Assume the conditions of Proposition \ref{prop_tcl} are met and $\pi(X;\theta)$ is a given function of $X^\top \theta$. Let $\hat \theta_\lambda$ be the estimator defined in equation (\ref{alasso_est}) with $w_j = 1/|\hat\theta_{(j)}|^\gamma$ for some $\gamma >0$.
Moreover, assume that $\lambda/\sqrt{n}\rightarrow 0$ and $\lambda n ^{(\gamma-1)/2}\rightarrow \infty$. Let
$\mathcal{A}_n = \left\{j : 1\leq j\leq p , \hat \theta_{\lambda ,(j)}\neq 0 \right\}\cup\{0\}.$
Then
\begin{enumerate}
\item $\lim_{n\rightarrow \infty} \mathbb{P}(\mathcal{A}_n = \mathcal{A} )=1$.
\item
$$
\hat \theta_{\!\mathcal{A},\lambda} -\theta_{\!\mathcal{A},0}= A_{\mathcal{A}}(\theta_{0}) ^{-1} \frac{1}{n} \sum_{i=1}^n \left\{ \mu_\mathcal{A}(Y_i, \Delta_i, X_i;\theta_{0})+ \mu_{\mathcal{A},C}^\varphi (Y_i,\Delta_i, X_i;\theta_{0}) \right\} + o_{\mathbb{P}}(n^{-1/2}).
$$
where
$$
\mu_{\mathcal{A}}(Y,\Delta, X;\theta_{0}) = \frac{ \left[B^*( S_C) -\pi (\theta_{0})\right]\omega(X)}{\pi (\theta_{0})[1-\pi (\theta_{0})]} \; \frac{\partial \pi(\theta_{0}) }{\partial \theta_{\mathcal{A}}}
$$
and $\mu_{\mathcal{A},C}^\varphi (Y,\Delta, X;\theta_{0})$ is the zero-mean vector-valued function from Assumption \ref{uclt_ass2} considered with
$$
\varphi_{\mathcal{A}}(X)= \frac{\omega(X)}{ \pi (\theta_{0}) [1 - \pi (\theta_{0}) ] } \; \frac{\partial \pi(\theta_{0}) }{\partial \theta_{\mathcal{A}}} .
$$
In addition,
$$
\sqrt{n}\left( \hat \theta_{\mathcal{A},\lambda} -\theta_{\mathcal{A},0} \right) \rightsquigarrow N_{p_0+1}\left(0, A_{\mathcal{A}}(\theta_{ 0}) ^{-1}V_{\mathcal{A}}(\theta_{ 0}) A_{\mathcal{A}}(\theta_{ 0}) ^{-1}\right)
$$
with $V_{\mathcal{A}}(\theta_{0})= \mbox{\rm Var}\left\{\mu_{\mathcal{A}} (Y,\Delta, X ;\theta_{0}) + \mu_{\mathcal{A},C}^\varphi(Y,\Delta, X ;\theta_{0})\right\}$.
\end{enumerate}
\end{proposition}
As was the case for
Proposition \ref{prop_tcl},
we can also obtain Proposition \ref{oracle} by imposing condition (\ref{suff_c1}) and slightly stronger regularity conditions on the model $\mathcal{M}$ instead of Assumption \ref{reg_cure_ass_2}-\ref{ULLN_as_nor}.
\section{Simulation studies\label{sec:sim}}
\setcounter{equation}{0}
\subsection{Setup\label{sec:simsetup}}
We first generate $B$, $T_0$ and $C$, from which we obtain $T = T_0 $ when $B=0$ and $T=\infty$ otherwise, and, hence, the observed time, $Y = T \wedge C$, and censoring indicator, $\Delta = (1-B)\mathbbm{1}(T_0 \le C)$, respectively. The cure status is given by $B \sim \text{Bernoulli}(\pi)$ where $\pi (\theta) = 1 / \{1 + \exp(- X^\top \theta)\}$, $X = (1, X_{(1)}, X_{(2)})^\top$, and $X_{(1)}$ and $X_{(2)}$ are independent $\text{Normal}(0,1)$ variables. We set $\theta_0 = (\theta_{0,(0)}, 1, 1)^\top$ with $\theta_{0,(0)} \in \{-1.85,-0.55\}$ such that the marginal cure proportion $\pi_m = \mathbb{E}\{\pi (\theta)\} \in \{0.2, 0.4\}$.
Consider the survivor function
\begin{equation*}
S_{T_0} (t\mid X) = \left\{\frac{\exp(-t^\kappa) - \exp(-\tau^\kappa)}{1- \exp(-\tau^\kappa)}\right\}^\psi
\end{equation*}
which is that of a truncated Weibull whose support is $(0,\tau)$ with a rate parameter, $\psi$, and a shape parameter, $\kappa$. The latency time, $T_0$, was generated according to this distribution with $\psi = \exp(X^\top \beta_{T_0})$ and $\kappa = (1/\psi)^\nu$ where $\beta_{T_0} = (0, 0, 1)^\top$ and $\nu \in \{0, 2\}$; the proportional hazards property holds when $\nu = 0$. The value of $\tau$ was set at the 95th percentile of the marginal untruncated distribution, i.e., $\tau$ is the unique solution of the equation
$$
\mathbb{E}\left[ \left\{\exp(-\tau^\kappa) \right\}^\psi \right] = 0.05.
$$
Clearly, $\tau$ depends on the value of $\nu$.
Lastly, the censoring time, $C$, was generated from an exponential distribution with rate parameter $\psi_C = \exp(X^\top \beta_C)$ where $\beta_C = (\beta_{C,(0)}, 0, 1)^\top$. The value $\beta_{C,(0)}$ was chosen such that the overall censored proportion is given by $\pi_\text{cen} = \Pr(\Delta=0) = \pi_m + \rho$ where $\rho \in \{0.1, 0.2\}$, and this depends on the values of $\theta_{(0)}$ and $\nu$; since $\pi_m \in \{0.2, 0.4\}$, there are then four values for the censoring proportion, $\pi_\text{cen} \in \{0.3,0.4,0.5,0.6\}$.
It is worth highlighting that $X_{(1)}$ only affects cure probability (since $\beta_{T_0,(1)}=\beta_{C,(1)}=0$), whereas $X_{(2)}$ affects all components of the data generating process (since $\theta_{(2)}=\beta_{T_0,(2)}=\beta_{C,(2)}=1$). Sample sizes of $n \in \{100, 300, 1000\}$ were considered, and, with two values for each of $\theta_{(0)}$, $\nu$, and $\rho$, there are 24 scenarios altogether. Each simulation scenario was replicated 2000 times.
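For reference, one replication of this design can be generated by inverse-transform sampling from the truncated Weibull distribution, as in the Python sketch below (ours); the values of $\tau$ and of the intercept $\beta_{C,(0)}$ are passed as given constants here, whereas in the study they are calibrated as described above.
\begin{verbatim}
import numpy as np

def simulate(n, theta0, beta_t0, beta_c, nu, tau, rng):
    X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
    pi = 1 / (1 + np.exp(-X @ theta0))
    B = rng.binomial(1, pi)                      # cure status
    psi = np.exp(X @ beta_t0)
    kappa = (1 / psi) ** nu
    U = rng.uniform(size=n)
    # invert the truncated-Weibull survivor function displayed above
    T0 = (-np.log(np.exp(-tau ** kappa)
                  + U ** (1 / psi) * (1 - np.exp(-tau ** kappa)))) ** (1 / kappa)
    C = rng.exponential(1 / np.exp(X @ beta_c))  # exponential censoring
    Y = np.minimum(np.where(B == 1, np.inf, T0), C)
    Delta = ((1 - B) * (T0 <= C)).astype(int)
    return Y, Delta, X

rng = np.random.default_rng(7)
Y, Delta, X = simulate(1000, np.array([-1.85, 1.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
                       0.0, 2.0, rng)
\end{verbatim}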
\subsection{Estimation procedure}
We applied the estimation scheme described in Section \ref{sec:est} to the simulated data with $S_C$ estimated using a Cox model in which both covariates, $X_{(1)}$ and $X_{(2)}$, appear as predictors. Table \ref{tab:res} displays the average bias and standard error of estimates over simulation replicates. While the bias can be somewhat large when $n=100$, this vanishes as the sample size increases. Similarly, the standard errors also decrease with the sample size. As we might expect, the estimates generally deteriorate as the censoring proportion increases. Furthermore, the results do not change appreciably when $\nu$ is varied (i.e., the approach is not sensitive to the form of $S_{T_0}$), while the standard errors decrease a little when $\pi_m$ is increased. Table \ref{tab:rescov} shows the empirical coverage for 95\% confidence intervals constructed using bootstrapping with 399 replicates; we find that the empirical coverage is close to the nominal level.
\begin{table}[htbp]
\centering
\caption{Average bias and standard error (in brackets) of estimates\label{tab:res}}
\begin{small}
\begin{tabular}{ccc@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c}
\hline
&&& \multicolumn{3}{l}{\hspace{0.6cm}$n=100$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=300$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=1000$} \\
$\nu$ & $\pi_m$ & $\rho$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ \\[0.1cm]
\hline
0 & 0.2 & 0.1 & -0.20 & 0.14 & 0.12 & -0.06 & 0.05 & 0.03 & -0.01 & 0.01 & 0.01 \\
& & & (0.65) & (0.62) & (0.58) & (0.29) & (0.29) & (0.26) & (0.15) & (0.14) & (0.14) \\[0.2cm]
& & 0.2 & -0.17 & 0.12 & 0.12 & -0.06 & 0.05 & 0.05 & -0.02 & 0.01 & 0.02 \\
& & & (0.80) & (0.74) & (0.68) & (0.40) & (0.40) & (0.34) & (0.20) & (0.20) & (0.17) \\[0.4cm]
& 0.4 & 0.1 & -0.05 & 0.10 & 0.12 & -0.01 & 0.04 & 0.03 & 0.00 & 0.01 & 0.01 \\
& & & (0.36) & (0.49) & (0.47) & (0.18) & (0.24) & (0.22) & (0.10) & (0.12) & (0.12) \\[0.2cm]
& & 0.2 & -0.03 & 0.16 & 0.17 & -0.02 & 0.05 & 0.05 & 0.00 & 0.02 & 0.02 \\
& & & (0.51) & (0.74) & (0.66) & (0.29) & (0.37) & (0.32) & (0.13) & (0.18) & (0.16) \\[0.4cm]
2 & 0.2 & 0.1 & -0.16 & 0.12 & 0.11 & -0.05 & 0.04 & 0.03 & -0.01 & 0.01 & 0.01 \\
& & & (0.62) & (0.60) & (0.51) & (0.29) & (0.29) & (0.24) & (0.15) & (0.15) & (0.13) \\[0.2cm]
& & 0.2 & -0.13 & 0.13 & 0.10 & -0.09 & 0.07 & 0.06 & -0.02 & 0.01 & 0.01 \\
& & & (0.80) & (0.76) & (0.60) & (0.45) & (0.43) & (0.32) & (0.21) & (0.20) & (0.15) \\[0.4cm]
& 0.4 & 0.1 & -0.05 & 0.13 & 0.11 & -0.02 & 0.04 & 0.02 & 0.00 & 0.01 & 0.01 \\
& & & (0.37) & (0.53) & (0.44) & (0.19) & (0.25) & (0.21) & (0.10) & (0.13) & (0.11) \\[0.2cm]
& & 0.2 & -0.03 & 0.15 & 0.15 & -0.02 & 0.06 & 0.05 & -0.01 & 0.02 & 0.01 \\
& & & (0.55) & (0.77) & (0.60) & (0.30) & (0.41) & (0.27) & (0.15) & (0.20) & (0.13) \\[0.1cm]
\hline
\end{tabular}
\end{small}
\end{table}
\begin{table}[htbp]
\centering
\caption{Empirical coverage of 95\% bootstrapped confidence intervals \label{tab:rescov}}
\begin{tabular}{ccc@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c}
\hline
&&& \multicolumn{3}{l}{\hspace{0.6cm}$n=100$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=300$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=1000$} \\
$\nu$ & $\pi_m$ & $\rho$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ \\[0.1cm]
\hline
0 & 0.2 & 0.1 & 91.2 & 93.2 & 93.9 & 93.5 & 93.2 & 94.3 & 94.0 & 94.8 & 94.0 \\
& & & 94.7 & 95.0 & 94.8 & 94.8 & 94.5 & 93.2 & 93.8 & 93.0 & 93.5 \\[0.2cm]
& & 0.2 & 94.6 & 93.6 & 93.4 & 94.8 & 94.0 & 94.9 & 94.7 & 94.4 & 94.8 \\
& & & 94.7 & 94.2 & 93.8 & 93.9 & 94.4 & 93.8 & 94.7 & 94.0 & 94.2 \\[0.4cm]
& 0.4 & 0.1 & 92.9 & 94.3 & 93.4 & 93.9 & 93.5 & 94.8 & 94.2 & 94.3 & 94.9 \\
& & & 95.9 & 95.3 & 96.1 & 94.5 & 94.9 & 94.1 & 93.0 & 95.0 & 95.3 \\[0.2cm]
& & 0.2 & 93.3 & 93.0 & 92.4 & 93.9 & 94.6 & 94.6 & 95.0 & 93.8 & 95.0 \\
& & & 94.7 & 95.3 & 94.1 & 94.0 & 94.4 & 95.3 & 94.2 & 94.5 & 94.9 \\[0.4cm]
2 & 0.2 & 0.1 & 90.8 & 93.6 & 93.3 & 93.2 & 93.5 & 93.9 & 94.0 & 93.8 & 93.2 \\
& & & 94.8 & 96.1 & 95.5 & 94.6 & 94.1 & 94.0 & 94.5 & 94.2 & 94.0 \\[0.2cm]
& & 0.2 & 94.1 & 93.2 & 92.6 & 94.2 & 92.6 & 93.2 & 94.6 & 94.0 & 94.3 \\
& & & 96.0 & 95.8 & 95.0 & 94.2 & 94.3 & 94.4 & 94.4 & 94.0 & 94.0 \\[0.4cm]
& 0.4 & 0.1 & 92.1 & 94.4 & 93.8 & 93.2 & 93.6 & 93.5 & 93.5 & 93.3 & 93.8 \\
& & & 94.5 & 95.3 & 95.8 & 93.5 & 95.2 & 93.3 & 94.0 & 94.0 & 93.8 \\[0.2cm]
& & 0.2 & 94.5 & 94.0 & 94.0 & 94.2 & 92.9 & 93.2 & 94.0 & 94.5 & 94.1 \\
& & & 95.0 & 94.8 & 94.6 & 94.9 & 94.7 & 95.5 & 93.6 & 93.5 & 94.1 \\[0.1cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Average bias and standard error (in brackets) of \texttt{smcure} estimates\label{tab:ressmcure}}
\begin{tabular}{ccc@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c}
\hline
&&& \multicolumn{3}{l}{\hspace{0.6cm}$n=100$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=300$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=1000$} \\
$\nu$ & $\pi_m$ & $\rho$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ & $\theta_{(0)}$ & $\theta_{(1)}$ & $\theta_{(2)}$ \\[0.1cm]
\hline
0 & 0.2 & 0.1 & -0.20 & 0.12 & 0.08 & -0.05 & 0.04 & 0.02 & -0.02 & 0.01 & 0.01 \\
& & & (0.54) & (0.48) & (0.46) & (0.25) & (0.23) & (0.22) & (0.14) & (0.12) & (0.12) \\[0.2cm]
& & 0.2 & -0.40 & 0.20 & 0.14 & -0.09 & 0.04 & 0.05 & -0.03 & 0.02 & 0.02 \\
& & & (0.93) & (0.72) & (0.68) & (0.34) & (0.30) & (0.27) & (0.17) & (0.15) & (0.14) \\[0.4cm]
& 0.4 & 0.1 & -0.08 & 0.09 & 0.09 & -0.02 & 0.03 & 0.02 & -0.01 & 0.01 & 0.01 \\
& & & (0.33) & (0.40) & (0.38) & (0.17) & (0.20) & (0.19) & (0.09) & (0.11) & (0.10) \\[0.2cm]
& & 0.2 & -0.25 & 0.20 & 0.12 & -0.06 & 0.05 & 0.03 & -0.01 & 0.02 & 0.01 \\
& & & (0.64) & (0.68) & (0.61) & (0.26) & (0.27) & (0.25) & (0.12) & (0.14) & (0.13) \\[0.4cm]
2 & 0.2 & 0.1 & -0.16 & 0.11 & -0.03 & -0.03 & 0.02 & -0.07 & 0.00 & 0.00 & -0.09 \\
& & & (0.54) & (0.48) & (0.42) & (0.25) & (0.23) & (0.20) & (0.13) & (0.12) & (0.11) \\[0.2cm]
& & 0.2 & -0.18 & 0.12 & -0.10 & -0.02 & 0.02 & -0.14 & 0.04 & -0.02 & -0.16 \\
& & & (0.73) & (0.59) & (0.48) & (0.33) & (0.27) & (0.23) & (0.17) & (0.14) & (0.11) \\[0.4cm]
& 0.4 & 0.1 & -0.12 & 0.10 & -0.09 & -0.08 & 0.04 & -0.13 & -0.06 & 0.01 & -0.13 \\
& & & (0.35) & (0.40) & (0.33) & (0.19) & (0.21) & (0.17) & (0.10) & (0.11) & (0.09) \\[0.2cm]
& & 0.2 & -0.22 & 0.15 & -0.17 & -0.12 & 0.05 & -0.20 & -0.09 & 0.01 & -0.20 \\
& & & (0.55) & (0.57) & (0.42) & (0.28) & (0.27) & (0.20) & (0.14) & (0.13) & (0.10) \\[0.1cm]
\hline
\end{tabular}
\end{table}
By way of comparison, we also applied the EM approach of \citet{pengdear:2000} and \citet{syetal:2000}, which has been implemented in the \texttt{smcure} \citep{chaoetal:2012} package in \texttt{R} \citep{R:2018}. In contrast to our scheme, $S_{T_0}$ must be estimated rather than $S_C$. Thus, $S_{T_0}$ was estimated using a Cox model in which both covariates, $X_{(1)}$ and $X_{(2)}$, appear as predictors. The results, shown in Table \ref{tab:ressmcure}, are broadly similar to those in Table \ref{tab:res}, albeit the \texttt{smcure} estimates are generally more efficient. This was expected, since the \texttt{smcure} estimates exploit the model imposed on $S_{T_0}$, whereas our approach is fully nonparametric in that respect.
An important difference is the fact that, when $S_{T_0}$ does not have the proportional hazards property (i.e., when $\nu=2$), we see bias in the \texttt{smcure} estimates which does not disappear with increasing sample size. In particular, the bias manifests through $\hat\theta_{(0)}$ and $\hat\theta_{(2)}$; interestingly, $\hat\theta_{(1)}$ is unaffected (i.e., the coefficient of $X_{(1)}$, the covariate which only enters the cure component).
\subsection{Selection procedure\label{sec:simselect}}
A variety of algorithms have been implemented for solving non-differentiable lasso problems, e.g., quadratic programming \citep{tibshirani:1996}, least angle regression (LARS) \citep{efronetal:2004}, and co-ordinate descent \citep{friedmanetal:2007}. However, we prefer a differentiable penalty, since standard gradient-based optimization procedures can then be utilized. Therefore, we propose the use of
\begin{equation}
\hat\ell^*_{\lambda,\epsilon}(\theta) = \hat\ell^*(\theta) - \lambda \sum_{j=1}^p w_j a_\epsilon(\theta_{(j)}) \label{penlike}
\end{equation}
where $a_\epsilon(x) = (x^2 + \epsilon^2)^{1/2} - \epsilon$ is a smooth approximation to the absolute value function such that $\lim_{\epsilon \rightarrow0}a_\epsilon(x) = |x|$, and which is differentiable for $\epsilon>0$. Clearly, smaller $\epsilon$ values bring the penalty closer to the alasso, but also bring (\ref{penlike}) closer to being non-differentiable.
In our work, we have found that $\epsilon = 10^{-4}$ works well.
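As an aside, the smoothed objective is straightforward to code. The following sketch (in Python with NumPy, purely for illustration; the function names are ours, and we assume the intercept $\theta_{(0)}$ is left unpenalized, in line with the sum over $j=1,\ldots,p$ in (\ref{penlike})) builds the objective from a user-supplied unpenalized log-likelihood:
\begin{verbatim}
import numpy as np

def a_eps(x, eps=1e-4):
    """Differentiable surrogate for |x|: sqrt(x^2 + eps^2) - eps."""
    return np.sqrt(x**2 + eps**2) - eps

def penalized_loglik(theta, loglik, w, lam, eps=1e-4):
    """Smoothed adaptive-lasso objective: the unpenalized log-likelihood
    minus lam * sum_j w_j * a_eps(theta_j), penalizing only the slope
    coefficients theta[1:]."""
    return loglik(theta) - lam * np.sum(np.asarray(w) * a_eps(theta[1:], eps))
\end{verbatim}
Since the objective is differentiable for $\epsilon>0$, its negative can be passed directly to a quasi-Newton routine such as \texttt{scipy.optimize.minimize}.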
For the purpose of selecting the tuning parameter, $\lambda$, we consider cross-validation; in particular, we aim to minimize the $k$-fold cross-validation error. First, since $\mathbb{E}\{B_i^*(S_C) - \pi_i(\theta)\mid X_i\} = 0$, we may define the error term $B_i^*(\hat S_C) - \pi_i(\hat\theta)$. Then, for a partition $F_1,\ldots,F_k$ of the set $\{1,\ldots,n\}$, the squared-error loss for the $j$th fold, $F_j$, is given by $\sum_{i\in F_j}\{B_i^*(\hat S_C) - \pi_i(\hat\theta_\lambda^{-j})\}^2$, where $\hat\theta_\lambda^{-j}$ is the penalized estimate with the $j$th fold removed. Thus, the $k$-fold cross-validation error is given by
\begin{align}
\text{CVE}(\lambda) = \frac{1}{k}\sum_{j=1}^{k}\sum_{i\in F_j}\left\{B_i^*(\hat S_C) - \pi_i(\hat\theta_\lambda^{-j})\right\}^2\label{cvmse}
\end{align}
where we will use $k=10$ as is standard in practice.
Minimizing (\ref{cvmse}) with respect to $\lambda$ can be achieved by profiling over a range of $\lambda$ values or by using a one-dimensional optimizer, e.g., golden search; we will define $\hat\lambda_{\text{CVE}}$ to be the minimizer of (\ref{cvmse}).
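For concreteness, a minimal sketch of the cross-validation computation follows (Python; \texttt{fit\_penalized} and \texttt{pi} are hypothetical stand-ins for the penalized estimation routine and the cure-probability model, e.g., a logistic function of $X^\top\theta$):
\begin{verbatim}
import numpy as np

def cv_error(lam, X, B_star, folds, fit_penalized, pi):
    """k-fold CV error: mean over folds of the held-out sum of squared
    errors B_i*(S_C_hat) - pi_i(theta_hat_lambda^{-j})."""
    errs = []
    for test_idx in folds:                 # folds partition {0, ..., n-1}
        train = np.setdiff1d(np.arange(len(B_star)), test_idx)
        theta_hat = fit_penalized(X[train], B_star[train], lam)
        resid = B_star[test_idx] - pi(X[test_idx], theta_hat)
        errs.append(np.sum(resid**2))
    return np.mean(errs)
\end{verbatim}
A one-dimensional optimizer (e.g., golden search via \texttt{scipy.optimize.minimize\_scalar}) can then be applied to this function to obtain $\hat\lambda_{\text{CVE}}$.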
In order to test our proposed selection procedure, we simulated data as described in Section \ref{sec:simsetup}, but with four additional independent $\text{Normal}(0,1)$ variables, namely, $X_{(3)}$, $X_{(4)}$, $X_{(5)}$, and $X_{(6)}$. These variables do not affect the probability of cure, i.e., their $\theta$ coefficients are zero, but we set $\beta_{T_0,(3)}=\beta_{C,(3)}=1$ so that $X_{(3)}$ affects other aspects of the data-generating process; the $\beta_{T_0}$ and $\beta_{C}$ coefficients for $X_{(4)}$, $X_{(5)}$, and $X_{(6)}$ are all zero. We then define the following metrics to assess the variable selection procedure:
\begin{align*}
\text{C} &= \sum_{j = 3}^6 \mathbbm{1}(\hat\theta_{\hat\lambda_{\text{CVE}},(j)} = 0), \\
\text{IC} &= \sum_{j = 1}^2 \mathbbm{1}(\hat\theta_{\hat\lambda_{\text{CVE}},(j)} = 0), \\
\text{DF} &= \sum_{j = 0}^6 \mathbbm{1}(\hat\theta_{\hat\lambda_{\text{CVE}},(j)} \neq 0),
\end{align*}
where C is the number of coefficients \emph{correctly} set to zero, IC is the number of coefficients \emph{incorrectly} set to zero, and DF is the model \emph{degrees of freedom} (i.e., the number of non-zero parameters); in our setup, for the oracle model, C $= 4$, IC $= 0$, and DF $=3$. These metrics, averaged over simulation replicates, are shown in Table \ref{tab:resvar}.
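Computed from a fitted coefficient vector, these metrics amount to simple indicator sums; a sketch (Python, with the index conventions of our setup, in which $X_{(3)},\ldots,X_{(6)}$ carry zero cure coefficients and $X_{(1)}$, $X_{(2)}$ enter the cure model):
\begin{verbatim}
import numpy as np

def selection_metrics(theta_hat, noise_idx=(3, 4, 5, 6), signal_idx=(1, 2)):
    """C / IC / DF for one replicate; theta_hat[0] is the intercept."""
    theta_hat = np.asarray(theta_hat)
    C = np.sum(theta_hat[list(noise_idx)] == 0)    # correctly zeroed
    IC = np.sum(theta_hat[list(signal_idx)] == 0)  # incorrectly zeroed
    DF = np.sum(theta_hat != 0)                    # non-zero parameters
    return C, IC, DF
\end{verbatim}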
\begin{table}[htbp]
\centering
\caption{Correct zeros, incorrect zeros, and model degrees of freedom\label{tab:resvar}}
\begin{small}
\smallskip
\begin{tabular}{cccc@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c@{~~~~}c@{~~}c@{~~}c}
\hline
&&&& \multicolumn{3}{l}{\hspace{0.6cm}$n=100$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=300$} & \multicolumn{3}{l}{\hspace{0.6cm}$n=1000$} \\
Type & $\nu$ & $\pi_m$ & $\rho$ & C & IC & DF & C & IC & DF & C & IC & DF \\[0.1cm]
\hline
&&&&&&&&&&&&\\[-0.4cm]
oracle & & & & 4.00 & 0.00 & 3.00 & 4.00 & 0.00 & 3.00 & 4.00 & 0.00 & 3.00\\[0.3cm]
lasso & 0 & 0.2 & 0.1 & 2.86 & 0.37 & 3.77 & 2.39 & 0.00 & 4.61 & 2.31 & 0.00 & 4.69 \\[0.1cm]
& & & 0.2 & 3.23 & 0.79 & 2.98 & 2.66 & 0.06 & 4.29 & 2.54 & 0.00 & 4.46 \\[0.2cm]
& & 0.4 & 0.1 & 2.52 & 0.11 & 4.37 & 2.33 & 0.00 & 4.67 & 2.30 & 0.00 & 4.70 \\[0.1cm]
& & & 0.2 & 2.90 & 0.45 & 3.65 & 2.51 & 0.02 & 4.47 & 2.44 & 0.00 & 4.56 \\[0.2cm]
& 2 & 0.2 & 0.1 & 2.76 & 0.34 & 3.90 & 2.40 & 0.00 & 4.60 & 2.28 & 0.00 & 4.72 \\[0.1cm]
& & & 0.2 & 3.19 & 0.85 & 2.96 & 2.64 & 0.09 & 4.27 & 2.46 & 0.00 & 4.53 \\[0.2cm]
& & 0.4 & 0.1 & 2.58 & 0.10 & 4.32 & 2.37 & 0.00 & 4.63 & 2.24 & 0.00 & 4.76 \\[0.1cm]
& & & 0.2 & 2.92 & 0.48 & 3.60 & 2.50 & 0.05 & 4.45 & 2.43 & 0.00 & 4.57 \\[0.3cm]
alasso & 0 & 0.2 & 0.1 & 3.44 & 0.37 & 3.19 & 3.47 & 0.01 & 3.52 & 3.67 & 0.00 & 3.33 \\[0.1cm]
& & & 0.2 & 3.50 & 0.74 & 2.76 & 3.50 & 0.09 & 3.41 & 3.69 & 0.00 & 3.31 \\[0.2cm]
& & 0.4 & 0.1 & 3.34 & 0.17 & 3.49 & 3.52 & 0.00 & 3.48 & 3.69 & 0.00 & 3.31 \\[0.1cm]
& & & 0.2 & 3.40 & 0.47 & 3.13 & 3.45 & 0.05 & 3.50 & 3.67 & 0.00 & 3.32 \\[0.2cm]
& 2 & 0.2 & 0.1 & 3.35 & 0.36 & 3.30 & 3.48 & 0.01 & 3.51 & 3.65 & 0.00 & 3.35 \\[0.1cm]
& & & 0.2 & 3.47 & 0.77 & 2.76 & 3.46 & 0.13 & 3.42 & 3.65 & 0.00 & 3.35 \\[0.2cm]
& & 0.4 & 0.1 & 3.33 & 0.15 & 3.52 & 3.55 & 0.00 & 3.44 & 3.72 & 0.00 & 3.28 \\[0.1cm]
& & & 0.2 & 3.38 & 0.47 & 3.15 & 3.42 & 0.07 & 3.50 & 3.64 & 0.01 & 3.35 \\[0.1cm]
\hline
\end{tabular}
\end{small}
\end{table}
The IC values tend towards zero as the sample size increases for both the lasso and alasso. However, the lasso tends to select a more complex model than the alasso as indicated by the smaller C values and larger DF values; it is well known that the lasso exhibits this behaviour \citep{fanlv:2010}. Overall, the alasso appears to work well with C approaching the oracle value of four as the sample size increases. The results are unaffected by the value of $\nu$ as we might expect, whereas, when $n=100$, increased censoring proportion, $\rho$, or decreased cure proportion, $\pi_m$, both lead to fewer variables being selected.
\section{Data analysis\label{sec:data}}
\subsection{Overview of data}
We consider two datasets for the purpose of analysis, namely (i) the well-known colon cancer dataset
\citep{moertel:1990} contained in the \texttt{survival} package in \texttt{R} and (ii) a melanoma dataset \citep{kirkwood:1996} contained in the \texttt{smcure} package in \texttt{R}. Both were randomized controlled trials evaluating adjuvant therapy following surgery, in which relapse-free survival was considered, i.e., the time from randomization until the earlier of cancer relapse or death. These datasets are candidates for cure analysis based on their Kaplan-Meier (KM) curves (Figure \ref{fig:kmcurves}), which level off over time --- at approximately 40\% for the colon cancer data and 30\% for the melanoma data. Indeed, the last value in the KM curve is a valid estimator of the marginal cure probability \citep{mallerzhou:1992}, and our approach reduces to this when there are no covariates (i.e., $X \equiv 1$); see \citet{satten:2001} for details.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{c@{}c}
\includegraphics[width=0.45\textwidth, trim = {0cm 0.5cm 0.5cm 2cm}, clip]{colonsurv} & \includegraphics[width=0.45\textwidth, trim = {0cm 0.5cm 0.5cm 2cm}, clip]{e1684surv}
\end{tabular}
\caption{Kaplan-Meier curves for the colon (left) and melanoma (right) datasets. \label{fig:kmcurves}}
\end{center}
\end{figure}
The colon and melanoma datasets are described in more detail and analysed in Sections \ref{sec:colon} and \ref{sec:melanoma} below respectively. In particular, we estimate the cure parameters using our proposed likelihood procedure (see Section \ref{sec:est}) and, for comparison, we apply \texttt{smcure} with signs of the coefficients reversed (to align with our model for the cure probability, rather than the non-cure probability). We use a Cox model with all covariates for the estimation of $S_C$ in our approach and the estimation of $S_{T_0}$ in \texttt{smcure}, and confidence intervals and p-values are produced using bootstrapping in both cases. We also carry out variable selection using lasso and adaptive lasso (see Sections \ref{sec:varselect} and \ref{sec:simselect}); covariates are standardized for the purpose of selection, after which the cure estimates are transformed back to correspond to the original scale.
\subsection{Colon cancer data\label{sec:colon}}
This was a national intergroup study (involving the Eastern Cooperative Oncology Group, the North Central Cancer Treatment Group, the Southwest Oncology Group, and the Mayo Clinic) to investigate the efficacy of the drugs levamisole and 5FU (fluorouracil) for the treatment of colon cancer following surgery. In total, 929 patients with stage C disease were enrolled during the period March 1984 to October 1987, with a maximum follow-up time of approximately nine years. These patients were randomized to the following treatments: observation (control / reference group), levamisole, and a combined treatment of levamisole and 5FU.
In addition to the treatment variable, a variety of binary covariates were recorded (reference categories are shown first): days since surgery, $\{\le20,>20\}$; sex, $\{\text{female},\text{male}\}$; obstruction of colon by tumour, $\{\text{no},\text{yes}\}$; adherence to nearby organs, $\{\text{no},\text{yes}\}$; depth of invasion, $\{\text{submucosa or muscular layer},\text{serosa}\}$; positive lymph nodes, $\{\le4,>4\}$. The age of the patient was also recorded, and we use a mean-centered version (the mean age is 59.75 years). See \citet{moertel:1990} for further details.
\begin{table}[htbp]
\centering
\caption{Colon cancer estimates\label{tab:colon}}
\begin{footnotesize}
\smallskip
\begin{tabular}{l@{~~}l@{~\quad}c@{}r@{~~}c@{~~}c@{~~\quad}c@{}r@{~~}r@{~~}c@{~~\quad}r@{~~}c@{~~}c@{}}
\hline
&&&&&&&&&&&&\\[-0.3cm]
& & & \multicolumn{3}{c}{Unpenalized} && \multicolumn{2}{c}{Penalized} && \multicolumn{3}{c}{\texttt{smcure}}\\[0.1cm]
\multicolumn{2}{c}{Covariate} & & Est. & 95\%CI & pval & & lasso & alasso & & Est. & 95\%CI & pval \\
\hline
&&&&&&&&&&&&\\[-0.3cm]
Intercept & & & 0.66 & ( 0.05, 1.36) & 0.03 & & 0.66 & 0.32 & & 0.57 & (-0.05, 1.15) & 0.06 \\
Treatment & Lev & & 0.42 & (-0.11, 1.22) & 0.14 & & 0.42 & 0.00 & & 0.19 & (-0.21, 0.61) & 0.33 \\
& Lev+5FU & & 0.94 & ( 0.39, 1.73) & 0.00 & & 0.94 & 0.60 & & 0.71 & ( 0.30, 1.15) & 0.00 \\
Surgery & $>20$days & & -0.65 & (-1.63,-0.11) & 0.02 & & -0.65 & -0.41 & & -0.49 & (-0.88,-0.09) & 0.01 \\
Age (centered) & Years & & -0.01 & (-0.03, 0.00) & 0.10 & & -0.01 & 0.00 & & -0.01 & (-0.02, 0.00) & 0.24 \\
Sex & Male & & -0.24 & (-0.75, 0.16) & 0.28 & & -0.24 & 0.00 & & -0.11 & (-0.46, 0.25) & 0.60 \\
Obstruction & Yes & & -0.56 & (-2.01, 0.05) & 0.08 & & -0.56 & -0.19 & & -0.18 & (-0.61, 0.20) & 0.35 \\
Adherence & Yes & & -0.42 & (-1.00, 0.07) & 0.10 & & -0.42 & 0.00 & & -0.69 & (-1.50,-0.17) & 0.01 \\
Depth & Serosa & & -0.81 & (-1.51,-0.26) & 0.01 & & -0.81 & -0.54 & & -0.71 & (-1.27,-0.19) & 0.02 \\
Nodes & $>4$ & & -1.18 & (-1.63,-0.82) & 0.00 & & -1.18 & -0.94 & & -1.16 & (-1.59,-0.80) & 0.00 \\[0.1cm]
\hline
\end{tabular}
\end{footnotesize}
\end{table}
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.9\textwidth, trim = {0.3cm 0.7cm 0.7cm 1cm}, clip]{regpath-colon-tilde}
\end{tabular}
\caption{Adaptive lasso regularization paths for colon data. Coefficient estimates, denoted by $\tilde \theta$, are for the standardized covariates (hence, their magnitudes can be directly compared), and these are plotted against the tuning parameter $\tilde\lambda = \lambda/n$. Also shown is the 10-fold cross-validation error curve (see right-hand $y$-axis) with a vertical line indicating the minimum error point.\label{fig:regcolon}}
\end{center}
\end{figure}
The estimated cure parameters are shown in Table \ref{tab:colon}. We first consider the results for our unpenalized approach. The levamisole treatment does not significantly increase the cure probability (compared with a patient receiving no treatment), while the combination of levamisole with 5FU does; indeed, the odds ratio of cure for this latter treatment is $2.56$ ($=\exp(0.94)$), with 95\% confidence interval $(1.48,5.64)$. The effect of all other covariates is to reduce the cure probability, albeit sex is not statistically significant, and obstruction and adherence are only just significant at the 10\% level. The results for \texttt{smcure} are broadly similar, apart from the fact that adherence is statistically significant.
In this particular application, the lasso has not performed any shrinkage of the coefficients (i.e., the optimal $\lambda$ value selected was zero), in line with the lasso's typical tendency to over-select covariates. On the other hand, the adaptive lasso has set several coefficients to zero, and the retained variables are those with smaller p-values from the unpenalized model. The regularization paths for the standardized cure coefficients (i.e., those corresponding to standardized covariates) provide useful information on the relative importance of each covariate; these are shown for the adaptive lasso in Figure \ref{fig:regcolon}. We can see immediately that the Lev+5FU treatment is one of the most important features. The number of positive lymph nodes is also highly important, and the presence of more than four such lymph nodes reduces the chance of cure. Next, the timing of surgery and the depth of the tumour have similar importance, followed by the presence of an obstruction.
\subsection{Melanoma data\label{sec:melanoma}}
The Eastern Cooperative Oncology Group (ECOG)
trial EST 1684 recruited patients (284 altogether) between 1984 and 1990, with the study ending in 1993. The aim of this study was to evaluate interferon alfa-2b (IFN$\alpha$-2b) as an adjuvant therapy for melanoma following surgery. Thus, patients were randomly assigned to one of two arms: observation (control / reference group) and treatment with IFN$\alpha$-2b. The age and sex of the patient were also recorded (we mean-center age; the mean is 47.03 years). See \citet{kirkwood:1996} for further details.
Following Section \ref{sec:colon}, we present the unpenalized and penalized estimates along with the estimates from \texttt{smcure} in Table \ref{tab:melanoma}. Treatment is highly statistically significant (p-value $<$ 0.01), while age comes in just under the 5\% level of significance; sex is not statistically significant. The results from \texttt{smcure} are qualitatively similar, although age is not statistically significant. Both the lasso and adaptive lasso penalties remove sex from the model. The standardized regularization path for the adaptive lasso (Figure \ref{fig:regmelanoma}) confirms that treatment is the variable which impacts the probability of cure the most, having a curative effect, while age reduces the cure probability (albeit the magnitude of this effect is lower than that of treatment).
\begin{table}[htbp]
\centering
\caption{Melanoma estimates\label{tab:melanoma}}
\begin{footnotesize}
\smallskip
\begin{tabular}{l@{~~}l@{~\quad}c@{}r@{~~}c@{~~}c@{~~\quad}c@{}r@{~~}r@{~~}c@{~~\quad}r@{~~}c@{~~}c@{}}
\hline
&&&&&&&&&&&&\\[-0.3cm]
& & & \multicolumn{3}{c}{Unpenalized} && \multicolumn{2}{c}{Penalized} && \multicolumn{3}{c}{\texttt{smcure}}\\[0.1cm]
\multicolumn{2}{c}{Covariate} & & Est. & 95\%CI & pval & & lasso & alasso & & Est. & 95\%CI & pval \\
\hline
&&&&&&&&&&&&\\[-0.3cm]
Intercept & & & -1.40 & (-2.96,-0.79) & 0.00 & & -1.29 & -1.40 & & -1.28 & (-2.10,-0.70) & 0.00 \\
Treatment & IFN & & 1.01 & ( 0.25, 2.96) & 0.00 & & 0.54 & 0.72 & & 0.59 & (-0.02, 1.37) & 0.05 \\
Age (centered) & Years & & -0.03 & (-0.08, 0.00) & 0.04 & & -0.01 & -0.01 & & -0.02 & (-0.05, 0.01) & 0.13 \\
Sex & Male & & -0.33 & (-1.57, 0.40) & 0.43 & & 0.00 & 0.00 & & -0.09 & (-0.73, 0.52) & 0.79 \\[0.1cm]
\hline
\end{tabular}
\end{footnotesize}
\end{table}
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.9\textwidth, trim = {0.3cm 0.7cm 0.7cm 1cm}, clip]{regpath-e1684-tilde}
\end{tabular}
\caption{Adaptive lasso regularization path for melanoma data. Coefficient estimates, denoted by $\tilde \theta$, are for the standardized covariates (hence, their magnitudes can be directly compared), and these are plotted against the tuning parameter $\tilde\lambda = \lambda/n$. Also shown is the 10-fold cross-validation error curve (see right-hand $y$-axis) with a vertical line indicating the minimum error point.\label{fig:regmelanoma}}
\end{center}
\end{figure}
\section{Discussion\label{sec:disc}}
We have proposed an IPCW-based likelihood estimation procedure for cure regression models; elsewhere, IPCW has been advocated by \citet{KMintegral_laan} as a device for producing straightforward estimators for complex survival data. In contrast to current cure estimation procedures in the literature, our assumptions are placed on $S_C$ while $S_{T_0}$ is left unspecified. Although we have considered a Cox model estimator for $S_C$ in the examples in this article, any arbitrarily flexible model can be used in practice, as it simply ``plugs in'' to the likelihood function given in (\ref{likedata_b}) -- or the penalized likelihood given in (\ref{alasso_lik_def}) -- without any added complexity to the estimation procedure; indeed, it is a (penalized) GLM procedure. Moreover, our asymptotic results still hold once the estimator, $\hat S_C$, admits an iid representation (and we have given many common examples in the Appendix).
Except for a fully nonparametric approach such as that of \citet{xu:2014} (which suffers from the curse of dimensionality), existing cure regression models have to impose assumptions on both the cure proportion and the law of the susceptible individuals, without the possibility of model diagnostics (beyond ad-hoc efforts). In our approach, one can first use standard diagnostic procedures to validate the censoring-law model, as this is identifiable directly from the observed data. For example, one could assess the proportional hazards assumption for $S_C$ using the test due to \citet{grambschthern:1994}, which is implemented in the \texttt{cox.zph} function in the \texttt{survival} package in \texttt{R}. (Although not shown, this test supported the proportional hazards assumption in the applications considered in Section \ref{sec:data}.) Next, one could consider model diagnostics for the cure regression, e.g., based on error terms of the form $B_i^*(\hat S_C) - \pi_i(\hat\theta)$, which we made use of in (\ref{cvmse}). Furthermore, note that our theory is not limited to the logistic model choice used in our applications; more generally still, the functional form of the cure regression model $\pi(X^\top\theta)$ could itself be estimated. Goodness-of-fit for the cure model and estimation of its functional form are beyond the scope of the current article but will be developed in our future work.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:intro}
One of the most notable features in the energy spectrum of cosmic rays is the sharp increase in the slope of the energy spectrum near $3\times10^{15}$\,eV (3 PeV) per particle -- the so-called ``knee.''
The nature of this ``knee'' is still unclear, and represents one of the major mysteries of cosmic ray physics and space physics in general.
The ``knee'' in the spectrum of cosmic rays was discovered, and is still observed, in extensive air shower (EAS) experiments, which provide data on the energy spectrum of cosmic rays at very high energies but do not give reliable information about their chemical composition.
At the same time, for understanding the physics near the ``knee,'' it would be very important to know the behavior of the individual components of the flux of cosmic rays near this area.
Much more detailed information on the chemical composition of cosmic rays is provided by so-called direct experiments, in which the spectrometer is carried above the atmosphere on a stratospheric balloon or a spacecraft, where cosmic-ray particles can be observed directly with various types of spectrometers.
Such experiments provide indications of complex behavior of the spectra of individual components of cosmic rays at energies 10\,TeV -- 1\,PeV, i.e. in the region adjacent to the knee from the low-energy side, but such data are severely lacking and do not have sufficiently high statistical reliability.
For example, figure~\ref{fig:Compilation-p} shows a short compilation of data on the measurement of the proton spectrum of cosmic rays by direct experiments.
Firstly, there is a noteworthy feature in the form of an upturn of the spectrum near the energy of $\sim$500 GeV, the presence of which is well established in several experiments, although the details of the behavior remain to be studied.
Secondly, there is an indication of a break in the energy spectrum near 10\,TeV, but so far no experiment has given statistically reliable data in this respect.
The behavior of the spectrum at energies above 100\,TeV is completely unclear.
There is an urgent need to improve the quality of results for energies from several TeV up to about 1000 TeV.
There are a number of similar problems in the energy spectra of other nuclei; examples are given below.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Compilation-p.pdf}
\caption{\label{fig:Compilation-p} A compilation of data on the measurement of the proton spectrum of cosmic rays by direct experiments: BESS-TeV \cite{BESS-TeV-2003,BESS-TeV-2004,BESS-TeV-2005}; CAPRICE \cite{CAPRICE-2003}; PAMELA \cite{CR-PAMELA-2011-p-He-Mag}; AMS02-2015 \cite{AMS-02-2015-PRL1}; ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; CREAM-III \cite{CREAM2017-ApJ-pHe}; CREAM-I \cite{CREAM2011-ApJ-PHe-I}; MUBEE \cite{MUBEE-1993-JetpLett,MUBEE-1994-YadFiz}; JACEE \cite{JACEE-1998-ApJ}; RUNJOB \cite{RUNJOB-2005-ApJ}; SOKOL \cite{SOKOL-1993-IzvRAN}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{NUCLEON.pdf}
\caption{\label{fig:NUCLEON} A simplified layout diagram of the NUCLEON spectrometer.
1 -- two pairs of planes of the charge measurement system (ChMS); 2 -- a carbon target; 3 -- six planes of the energy measurement system using the KLEM method (KLEM system tracker); 4 -- three double-layer planes of the scintillator trigger system (the trigger system); 5 -- a small aperture calorimeter (IC).
}
\end{figure}
The NUCLEON experiment was designed primarily to solve the problems outlined above.
Thus, the main priority of the NUCLEON experiment is to measure the spectra of cosmic ray nuclei with an individual charge resolution in the energy range from 10\,TeV to 1\,PeV per particle, while having a lower energy threshold of a few hundred GeV.
This review presents the main results of the NUCLEON experiment concerning the energy spectra of cosmic ray nuclei obtained from a set of statistics in 2015--2016.
\section{Features of the NUCLEON detector}
\label{sec:features}
The NUCLEON experiment is a purely domestic project and has been developed with the participation of several institutions and universities in the Russian Federation.
On December 28, 2014, the NUCLEON detector was launched into a sun-synchronous orbit with an average altitude of 475\,km and an inclination of 97~degrees as an additional payload of the Russian satellite Resource-P~2.
On January 11, 2015, the NUCLEON detector was powered on and began collecting data.
The weight of the detector is approximately 360\,kg; the power consumption does not exceed 160\,W.
The detector can transmit up to 10\,GB of scientific data to Earth per day.
The planned lifetime of the NUCLEON detector is at least five years.
The most important feature of the NUCLEON detector is the implementation of two different particle energy measurement methods: the first uses an ionization calorimeter, and the second is a kinematic method, the Kinematic Lightweight Energy Meter (KLEM) \cite{KLEM-2000,KLEM-2001,KLEM-2002,KLEM-2005A,KLEM-2005B}, which is based on the measurement of the multiplicity of secondary particles after the first nuclear interaction of a primary particle with a target of the spectrometer.
The first method is well known and has been used many times in cosmic-ray experiments; the second is used here for the first time.
The advantage of the use of two methods is the ability to cross-check the results of the measurements.
The advantage of the KLEM method over the conventional calorimetric method is that it provides a large aperture with a low equipment weight.
The presence of the two methods of energy measurement in the NUCLEON detector allows the new KLEM method to be studied and calibrated against the conventional calorimetric method.
Figure~\ref{fig:NUCLEON} shows a simplified diagram of the layout of the NUCLEON detector.
The main systems of the spectrometer are two pairs of planes of the charge measurement system (ChMS), a carbon target, six planes of the energy measurement system using the KLEM method (KLEM system tracker), three double-layer planes of the scintillator trigger system, and a small aperture calorimeter (IC).
Details of the detector design are provided in the articles \cite{NUCLEON-DEZ-2007A,NUCLEON-DEZ-2007B,NUCLEON-DEZ-2007C,NUCLEON-DEZ-2010,NUCLEON-DEZ-2015}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Portrait.pdf}
\caption{\label{fig:Portrait} An example of an event visualization recorded by the detector (Ne nucleus).}
\end{figure}
Figure~\ref{fig:Portrait} shows an example of the visualization of an event recorded by the detector.
The upper left corner shows a top view of the four planes of the ChMS, where each plane has its own color.
The circles mark the signals from the triggered detectors, and the circle's area is proportional to the effective charge measured by the detector.
In addition to the primary particle signals, there are visible signals from back-scattered secondary particles, as well as a certain amount of noise.
Below and to the right of the ChMS planes the $XZ$ and $YZ$ (respectively) projections of the detector are displayed.
The colors correspond to the value of the signal in the triggered detectors; black rectangles are inoperative detectors.
The center of the figure shows a panel with some technical information about the event; below lie cascade curves obtained in the trigger system, the KLEM system and the IC (indicated in the figure as {\tt td}, {\tt s} and {\tt m}, respectively).
A reconstructed shower axis is drawn over the projections.
The right half of the figure displays the histograms of the energy released in planes of different systems.
The ChMS system can reliably separate the charges of the abundant nuclei of cosmic rays to obtain their individual energy spectra.
The charge distributions obtained by the NUCLEON detector are shown in figure~\ref{fig:Charge}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Charge1.pdf}
\includegraphics[width=0.49\textwidth]{Charge2.pdf}
\caption{\label{fig:Charge} Charge distributions of the cosmic ray nuclei measured in the NUCLEON experiment.}
\end{figure}
Section \ref{sec:Results} presents the main results of the measurements of the cosmic-ray energy spectra for approximately one year of data-taking of the NUCLEON experiment. The results are shown for both energy measurement methods: the calorimetric method and the KLEM method. In each of these methods there is a complex analysis cycle before the final absolute energy spectra of cosmic rays are obtained. The main steps of the implementation of both the calorimetric and KLEM methods on board the NUCLEON spectrometer are discussed in Section~\ref{sec:EReconstruction}; the methods will be described in full detail in two separate dedicated papers.
The degree of consistency of the methods can be judged by the degree of consistency of the results.
Here, a very direct and model-independent argument is given in favor of expecting consistent results from both methods, provided all the data processing is performed correctly.
The basic quantity used in the calorimetric method to reconstruct the particle spectra is the energy deposited in the detectors of the calorimeter, $Ed$; in the KLEM method, the main parameter is a specially constructed estimator $S$, which is related to the number of secondary particles with high pseudorapidity after the first interaction (see Equation~(\ref{eq:S}) below; for details see also \cite{KLEM-2000,KLEM-2001,KLEM-2002,KLEM-2005A,KLEM-2005B}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{LogEd-LogS-He.pdf}
\caption{\label{fig:LogEd-LogS} The scatter plot of the calorimeter energy deposit $Ed$ and the estimator $S$ of the KLEM method (for the incident He nuclei).
The energy deposit $Ed$ is measured in MIPs -- the energy deposit of a minimally ionizing particle $(Z=1)$ in the silicon strip detectors of the calorimeter.}
\end{figure}
Figure~\ref{fig:LogEd-LogS} shows a scatter plot of the $Ed$ and $S$ variables, measured for the same events.
A strong correlation between both parameters is visible.
Obviously, if one of the values can be used for the reconstruction of the energy spectrum of the particles, then the other can be used for the same purpose as well.
It is well known that the energy deposited in the calorimeter, $Ed$, can be used in this way; hence, the estimator $S$ of the KLEM method can be used to reconstruct the energy spectra of cosmic-ray particles as well.
\section{Main steps of the energy reconstruction methods}
\label{sec:EReconstruction}
As already mentioned above, two different particle energy measurement methods were implemented in the NUCLEON design: the KLEM method, used in astroparticle physics for the first time, and the more conventional ionization calorimeter method. The KLEM method is considered the main method of the NUCLEON experiment, since it provides greater statistics than the calorimetric method.
To determine the energy spectrum of primary particles, two fundamentally different approaches can be used. In the first approach, for nuclei of a certain type, an apparatus function giving the probability of obtaining different calorimeter energy deposits $Ed$, or different values of the KLEM parameter $S$, for each primary particle energy is calculated by a simulation of the device. The experimental spectrum of $Ed$ or $S$ is then constructed, and a complete inverse problem for the primary particle energy spectrum is solved. Such a problem, as is known, belongs to the class of ill-posed inverse problems.
In the second approach, the energy of the primary particle is reconstructed for each event separately. For this, two functions must be defined. The first determines the factor used to convert $Ed$ or $S$ into the primary energy of a particle. This coefficient will, generally speaking, depend on the $Ed$ or $S$ parameters themselves and can be determined computationally using the computer model of the apparatus. The second function gives the probability of registering a particle as a function of the reconstructed primary energy (the registration efficiency). Once the total registration efficiency for a given event is found, the event is added to the spectrum of registered particles with a weight equal to the inverse of the registration efficiency, thereby accounting for the missed particles.
Each of the two approaches mentioned above can be implemented in two versions. In the first version, the apparatus functions, or the energy conversion factors and efficiency, are determined as functions of the direction of the shower axis (with some level of detail in the description of the direction); in the second version, all these functions are determined by averaging over the entire working aperture of the spectrometer. The first option requires a much larger amount of simulation to build the apparatus functions, but it is somewhat more accurate than the second.
In the versions of the methods described below, the second of these two approaches is realized: an event-by-event method of energy reconstruction in its simplest form, with the energy conversion factors and registration efficiency averaged over the spectrometer aperture. We consider this approach a first approximation for the data processing methods of the NUCLEON experiment.
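Schematically, this event-by-event approach reduces to the following loop (a Python sketch; \texttt{energy\_of} and \texttt{efficiency} are hypothetical names for the simulation-derived conversion and aperture-averaged efficiency functions):
\begin{verbatim}
import numpy as np

def accumulate_spectrum(events, energy_of, efficiency, bins):
    """Each event enters its energy bin with weight 1/efficiency(E),
    compensating for the missed particles."""
    counts = np.zeros(len(bins) - 1)
    for ev in events:
        E = energy_of(ev)              # convert Ed or S to a primary energy
        i = np.searchsorted(bins, E, side="right") - 1
        if 0 <= i < len(counts):
            counts[i] += 1.0 / efficiency(E)
    return counts
\end{verbatim}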
\subsection{KLEM method}
\label{sec:KLEM}
In the KLEM method, the primary energy is reconstructed from the spatial density of the secondary particles after the first hadronic interaction. The six planes of the KLEM energy measurement system (the tracker) are located under a carbon target of 0.24 proton nuclear interaction lengths. It is assumed that new secondary particles are generated by the first inelastic hadronic interaction in the carbon target. Additional secondary particles are then produced in the thin tungsten converters of the KLEM energy measurement system by electromagnetic and hadronic interactions. To reconstruct the primary energy of the incident particle, the following $S$-estimator is used:
\begin{equation}
S = \sum\limits_{i=1}^N n_i\eta_i^2,
\label{eq:S}
\end{equation}
where the summation is over the $N$ position-sensitive detectors of a tracker layer located after the converter; $\eta_i = -\ln(r_i/2H)$, where $r_i$ is the distance from the shower axis to the $i$-th position-sensitive strip detector in the tracker layer ($r_i$ means $x_i$ or $y_i$, depending on the orientation of the strips of the layer); $n_i$ is the estimated number of singly charged particles crossing the detector; and $H$ is the distance from the interaction point in the target. For the real apparatus, $H$ is taken to be the distance from the middle of the carbon target to the tracker layer. Each layer of the tracker produces its own value of $S$, but the most reliable data are produced by the two lowest tracker layers. The systematic uncertainty related to the uncertainty in the position of the first hadronic interaction in the target is small in comparison with the physical fluctuations: a direct simulation shows only a negligible increase in the RMS deviation of the reconstructed energy if the difference between the true interaction point and the middle of the carbon target is neglected. The above-mentioned multiplication of secondaries in the tungsten converters makes the energy dependence $S(E)$ of the estimator steeper than that of the simple multiplicity in the first interaction.
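A direct transcription of Equation~(\ref{eq:S}) for a single tracker layer might look as follows (a Python sketch; the inputs are assumed to come from the shower-axis reconstruction and the strip signal analysis):
\begin{verbatim}
import numpy as np

def klem_estimator(n, r, H):
    """S-estimator for one tracker layer.

    n[i] -- estimated number of singly charged particles in strip i,
    r[i] -- distance from the reconstructed shower axis to strip i,
    H    -- distance from the middle of the carbon target to the layer."""
    eta = -np.log(np.asarray(r) / (2.0 * H))   # eta_i = -ln(r_i / 2H)
    return float(np.sum(np.asarray(n) * eta**2))
\end{verbatim}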
For an incident nucleus with mass number $A$, only a fraction of the nucleons interact with the target carbon nucleus. Therefore, the multiplicity of secondaries is not proportional to $A$, but the angular distribution of secondaries is similar to that for a proton. A detailed simulation of $S(E)$ for different nuclei is therefore needed; it was performed with the GEANT 3.21 software package \cite{GEANT3-1984} complemented by the QGSJET \cite{KALMYKOV-1997} nuclear interaction generator to describe high-energy hadron-nucleus and nucleus-nucleus interactions. Generally, the $S(E)$ dependences for different types of primary nuclei are similar over a wide energy range and look like simple power-law functions. Two examples of simulated scatter plots of the primary energy $E$ versus the estimator $S$ are shown in Figure~\ref{fig:KLEM-Scattering}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{KLEM-Scattering.pdf}
\caption{\label{fig:KLEM-Scattering} The simulated scatter plots of the primary energy $E$ and the estimator $S$ for primary protons and carbon nuclei.}
\end{figure}
To reconstruct the primary energy of a particle, scatter plots such as those in Figure~\ref{fig:KLEM-Scattering} for different nuclei were approximated by power laws of the form
\begin{equation}
E_{rec} = a(S\times10^{-5})^b
\label{eq:KLEMFit}
\end{equation}
where the parameters $a$ and $b$ were optimized by the least-squares method, proceeding from the requirement $\langle E_{rec}/E\rangle = 1$ for the given initial spectrum of projectile nuclei. The optimization procedure will be described in detail elsewhere. The values of $a$ and $b$ for some nuclei, obtained for an initial power-law spectrum with the spectral index $\gamma = -2.6$, are given in Table~\ref{tab:KLEMab}.
\begin{table}
\caption{\label{tab:KLEMab}The values of the parameters $a$ and $b$ for the approximation in Equation~(\ref{eq:KLEMFit}), obtained for an initial power-law spectrum with the spectral index $\gamma = -2.6$.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Projectile & $a$, GeV & $b$ \\
\hline
p & 1651 & 1.36 \\
He & 2556 & 1.27 \\
C & 3514 & 1.18 \\
S & 4163 & 1.14 \\
Fe & 4362 & 1.12 \\
\hline
\end{tabular}
\end{center}
\end{table}
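With the fitted parameters, the reconstruction itself is a one-line power law; a sketch using the values from the table above:
\begin{verbatim}
# Fitted (a, b) pairs from the table above (a in GeV)
KLEM_AB = {"p": (1651.0, 1.36), "He": (2556.0, 1.27), "C": (3514.0, 1.18),
           "S": (4163.0, 1.14), "Fe": (4362.0, 1.12)}

def reconstruct_energy(S, species):
    """E_rec = a * (S * 1e-5)^b; returns the energy in GeV."""
    a, b = KLEM_AB[species]
    return a * (S * 1e-5) ** b
\end{verbatim}
For instance, a proton event with $S = 10^6$ would be assigned $E_{rec} = 1651\times10^{1.36} \approx 38$\,TeV under these parameters.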
The NUCLEON flight model was tested in 2012 on pion beams of the SPS accelerator at CERN. Pion data were obtained at 150~GeV and 350~GeV. The normalized distributions of the energy reconstructed by the KLEM method for primary pions with energies of 150~GeV and 350~GeV are shown in Figure~\ref{fig:KLEM-Reconstruct-Orig} (left panel). The ratio of the RMS deviation to the primary energy is 0.53 for the 150~GeV beam and 0.63 for the 350~GeV beam. The asymmetry of the distributions reflects the asymmetry of the multiplicity distributions of hadron interactions.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{KLEM-Reconstruct-Orig.pdf}
\includegraphics[width=0.49\textwidth]{KLEM-Efficiency.pdf}
\caption{\label{fig:KLEM-Reconstruct-Orig} Left panel: Normalized distributions of the reconstructed energy for primary pions with energies of 150 (thin line) and 350 GeV (thick line). Right panel: The energy dependences of the registration efficiency used in the KLEM method for some nuclei.}
\end{figure}
Within the present implementation of the KLEM method, the determination of the registration efficiency as a function of the particle energy is treated as a separate problem. The energy dependences of the efficiency were calculated by simulation, according to the trigger conditions used. The calculated energy dependences of the registration efficiency for some nuclei and for one typical trigger condition are shown in Figure~\ref{fig:KLEM-Reconstruct-Orig} (right panel).
\subsection{Calorimetric method in the NUCLEON experiment}
\label{sec:MIC}
The use of an ionization calorimeter for the reconstruction of the energy of primary cosmic-ray particles is based on the fact that the energy lost in the calorimeter by a shower is correlated with the energy of the primary particle. Therefore, the energy of a primary particle may be reconstructed with some accuracy from the energy measured by the calorimeter.
Calorimeters can be divided into thick and thin ones. In thick calorimeters, the shower caused by the primary particle is absorbed almost completely; in such devices, a high precision of the primary energy determination can be reached. In thin calorimeters, the shower is not absorbed completely, and the energy of the primary particle has to be determined from only that part of the primary energy which is absorbed by the calorimeter. The precision of the energy measurement in thin calorimeters is lower due to fluctuations of the absorbed part of the shower.
In addition, calorimeters are divided into homogeneous and sampling ones. In homogeneous calorimeters, the absorber is also the active medium that measures the deposited energy of the shower particles; in such devices, all the energy released in the calorimeter is measured. Sampling calorimeters contain a passive absorber, in which the nuclear and electromagnetic shower generated by the primary particle develops, as well as detectors that measure not the total energy deposited in the calorimeter but a value approximately proportional to the number of ionizing particles in the shower. This value correlates with the energy deposit and, consequently, with the initial particle energy. Generally, homogeneous calorimeters provide higher accuracy.
The ionization calorimeter IC of the NUCLEON spectrometer is a thin sampling calorimeter. The directly measurable quantity is the energy deposited in the thin silicon strip detectors (1\,mm pitch) arranged in six layers between the layers of tungsten alloy (8\,mm thick each). The radiation depth of the calorimeter is 12 radiation lengths, its nuclear depth is 0.50 proton nuclear interaction lengths, and the complete nuclear depth of the spectrometer from top to bottom is 1.12 proton interaction lengths.
The energy deposit in the strip detectors of the IC calorimeter is measured in MIPs (a MIP is the mean energy loss rate close to the minimum for a singly charged ionizing particle). Since the IC is a thin and, moreover, sampling calorimeter, the relationship between the energy of the primary particle and the energy measured by the calorimeter is of a statistical nature. The scatter plots of the deposited energy $(Ed)$ versus the initial energy of the particle $(E0)$ for primary protons and iron nuclei, obtained by simulating the NUCLEON spectrometer with the FLUKA package \cite{BATTISTONI-2015}, are shown in Figure~\ref{fig:MIK-ScatPlots}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{MIK-ScatPlots.pdf}
\caption{\label{fig:MIK-ScatPlots} Scatter plots of the deposited energy $(Ed)$ versus the initial energy of the particle $(E0)$ for the primary protons and iron nuclei.}
\end{figure}
It is seen that the average $E0$--$Ed$ correlation curves do not lie on a simple power law. In particular, an upward bending of the correlation curves is seen at the highest-energy end of the plots. This phenomenon is associated with the saturation of the electronics of the spectrometer detectors at a level above 27,000 MIPs per strip detector, which sometimes occurs at the highest primary particle energies. This saturation is reproduced in the simulation and taken into account in the reconstruction of the energy of the primary particle.
The energy deposit $Ed$ of the calorimeter is converted to the initial energy of the particle $E0$ using a coefficient that depends on $Ed$: by definition, $Ed$, expressed in MIPs, is divided by this coefficient to obtain $E0$ in GeV. The corresponding function, denoted $K(Ed)$, was calculated for the eight nuclei p, He, Be, C, O, Mg, Ca, and Fe, for which the interaction with the spectrometer was simulated; for the remaining nuclei it was determined by interpolation in atomic weight.
Since the energy resolution of the IC calorimeter is not high, in order to calculate the most probable value of the conversion function $K(Ed)$ for each $Ed$, it is necessary to make an assumption about the shape of the initial cosmic-ray energy spectrum. It is known that the energy spectrum of all cosmic-ray nuclei in the energy region below $10^{15}$\,eV is close to a power-law function with an exponent of about $-2.6$, with some variations. This form of the spectrum was adopted as the initial approximation (this step is quite similar to the one described above for the KLEM method). The calculation procedure for $K(Ed)$ is as follows. The relevant range of the energy deposits $Ed$ is divided into relatively narrow bins. An initial flux of particles with the spectrum $\sim E^{-2.6}$ is simulated. For each $Ed$ bin, the distribution function of the ratios $Ed/E0$, which are the estimates of $K(Ed)$ for each individual event, is constructed. The most probable values of $K(Ed)$, calculated from the histograms obtained, are used as the conversion factors from $Ed$ to $E0$.
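A minimal sketch of this binning-and-mode procedure (Python; the arrays \texttt{Ed}, in MIPs, and \texttt{E0}, in GeV, are assumed to come from the simulated $E^{-2.6}$ flux):
\begin{verbatim}
import numpy as np

def most_probable_K(Ed, E0, bin_edges, nhist=60):
    """For each Ed bin, histogram the per-event ratios K = Ed/E0 and
    return the mode as the conversion factor for that bin."""
    K_mp = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (Ed >= lo) & (Ed < hi)
        ratios = Ed[sel] / E0[sel]
        if ratios.size == 0:
            K_mp.append(np.nan)            # no simulated events in this bin
            continue
        h, edges = np.histogram(ratios, bins=nhist)
        centers = 0.5 * (edges[:-1] + edges[1:])
        K_mp.append(centers[np.argmax(h)]) # most probable K in this bin
    return np.array(K_mp)
\end{verbatim}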
In Figure~\ref{fig:MIK-KHist}, two examples of $K(Ed)$ histograms obtained by simulation for primary protons and iron nuclei are shown. The widths of the distributions obtained are good estimates of the energy resolution of the calorimetric technique in the NUCLEON experiment: the resolution is about 50\% for protons and better for iron nuclei ($\sim$35\%).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{MIK-KHist.pdf}
\caption{\label{fig:MIK-KHist} $K(Ed)$ histograms obtained by simulation for primary protons and iron nuclei for the $Ed$ bin $5.0 < \lg(Ed/\mathrm{MIP}) < 5.5$. This energy bin corresponds to a primary energy of $\sim 10$\,TeV for protons and $\sim 18$\,TeV for iron nuclei.}
\end{figure}
From the estimates of the most probable coefficients $K(Ed)$ for different $Ed$ and for each primary nucleus, quadratic interpolations of the corresponding functions are constructed. In Figure~\ref{fig:MIK-KCalibr}, the $K(Ed)$ factors for protons and iron, calculated for one of the most widely used flight trigger conditions, are given as an example. The curves are far from horizontal lines, which indicates that there is no proportionality between $Ed$ and $E0$, although there is certainly a strong correlation in the form of a functional dependence.
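A sketch of the final conversion step (Python; the choice of $\log_{10} Ed$ and $\log_{10} K$ as the interpolation variables is ours, purely for illustration):
\begin{verbatim}
import numpy as np

def energy_from_deposit(Ed, Ed_grid, K_grid):
    """E0 = Ed / K(Ed) in GeV, with K evaluated from a quadratic fit
    to the tabulated most-probable conversion factors."""
    coeffs = np.polyfit(np.log10(Ed_grid), np.log10(K_grid), deg=2)
    K = 10.0 ** np.polyval(coeffs, np.log10(Ed))
    return Ed / K
\end{verbatim}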
\begin{figure}
\centering
\includegraphics[width=\textwidth]{MIK-KCalibr.pdf}
\caption{\label{fig:MIK-KCalibr} $K(Ed)$ factors for protons and iron calculated for one of the most widely used flight trigger conditions.}
\end{figure}
The registration efficiency is determined by three main factors: the efficiency of the trigger, the efficiency of the reconstruction of the shower axis, and the efficiency of determining the particle charge. In addition, there are a number of less significant factors that we will not discuss here.
The inefficiency of the trigger arises because the energy release in the planes of the trigger system does not always exceed the trigger thresholds, which are known from the calibration of the trigger system. This can happen either because the initial energy of the particle was not high enough, or because the first nuclear interaction occurred too low in the instrument (below the carbon target), or because the nucleus passed through the entire device without any nuclear interaction at all. The first circumstance sets the natural lower energy threshold of the device, while the latter two lead to pure losses of statistics, which can occur at any initial particle energy.
The inefficiency of the trajectory reconstruction and the inefficiency of determining the charge of the primary particle are determined by certain software cuts imposed on the quality of the reconstructed trajectory and on the degree of agreement between the charge signals obtained in the different planes of the charge measurement system.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{MIK-Efficiency.pdf}
\caption{\label{fig:MIK-Efficiency} Efficiency curves for protons and for iron nuclei obtained by the simulation of the spectrometer.}
\end{figure}
In Figure~\ref{fig:MIK-Efficiency}, the efficiency curves for protons and iron obtained by the spectrometer simulation, taking into account all the factors mentioned above, are shown. It is seen that the efficiency for iron nuclei is generally higher than for protons, mainly due to the larger nuclear cross-section of iron.
There are a number of less important factors (such as accounting for a fraction of misidentified events, etc.) in the reconstruction of the energy spectra, both by the KLEM method and by the calorimetric method, that we cannot describe here in detail due to the restricted length of this paper; these issues will be described elsewhere in more specialized publications. In both the KLEM and the calorimetric methods, the final energy spectrum of cosmic-ray nuclei is obtained as
\begin{equation}
I(E) = \frac{N(E,\Delta E)}{T_l \times \Omega \times \Delta E \times R(E) \times Corr(E)},
\end{equation}
where $N(E,\Delta E)$ is the number of events near the primary energy $E$ in the interval $\Delta E$, $T_l$ is the live time of the measurements, $\Omega$ is the aperture of the spectrometer, $R(E)$ is the registration efficiency, and $Corr(E)$ is a factor accounting for the less important corrections mentioned above.
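In code, this final step is a plain division of the weighted counts by the exposure factors; a sketch:
\begin{verbatim}
import numpy as np

def differential_flux(N, E, dE, T_live, Omega, R, Corr):
    """I(E) as defined above: counts per bin divided by live time,
    aperture, bin width, efficiency and the residual correction."""
    return np.asarray(N) / (T_live * Omega * dE * R(E) * Corr(E))
\end{verbatim}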
\section{Main results}
\label{sec:Results}
This section will present the main results of the NUCLEON experiment spectra measurements for 2015--2016.
Much of the time in this period was spent on tests and configuration of the detector, and part of the time was spent on various technical manipulations of the Resource-P~2 spacecraft, during which data collection was not possible.
The data presented correspond to 247 days of observations in terms of astronomical time, of which 160 days were the live time of the detector (the dead time was spent on the exchange of data between the detector and the on-board computer for event recording).
The collected statistics are about one-fifth of the expected statistics, so the experiment is currently in its initial stage.
The data processing techniques at this stage of the experiment are still being checked and debugged, and are partly even under construction; they are therefore preliminary.
This is reflected in the nature of the reported results, which are also to be understood as preliminary.
In particular, we do not try to give statistically accurate quantitative analyses of the data in this experimental phase, and, generally, we omit any detailed discussion of the physics of the observed phenomena (for a general discussion of the physics see section~\ref{sec:Discussion}).
In the current phase of the research, that would be premature.
The NUCLEON experimental data are, as a rule, given for two different energy measurement methods: the calorimetric and the KLEM methods.
When comparing the results of the methods, one should keep in mind that, because its aperture is about four times larger, the KLEM method corresponds to higher statistics than the calorimetric method.
\subsection{All-particle spectrum and the mean logarithm of atomic weight}
\label{subsec:AllPart}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{All-Particles.pdf}
\caption{\label{fig:All-Particles} All-particle spectrum measured by the KLEM system and by the calorimeter in comparison with the results of other direct measurement experiments: ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; Sokol \cite{SOKOL-1993-ICRC}; Proton-4 \cite{PROTON4-1972}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{LnA.pdf}
\caption{\label{fig:LnA} The mean logarithmic mass of cosmic rays versus energy per particle measured by the NUCLEON detector, in comparison with the results of other direct measurement experiments: ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; JACEE \cite{JACEE-1998-NuclPhys}.}
\end{figure}
Figure~\ref{fig:All-Particles} shows the all-particle spectrum measured by the KLEM system and by the calorimeter in comparison with the results of other direct measurement experiments: ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}, Sokol \cite{SOKOL-1993-ICRC}, and Proton-4 \cite{PROTON4-1972}.
The spectrum for the KLEM method has a higher threshold than the spectrum of the calorimeter, since in the KLEM analysis the problem of taking into account the so-called slips of heavy nuclei has not yet been solved.
The problem is that a heavy nucleus, especially iron, may trigger the recording of an event even without a nuclear interaction, through the ionization signals alone, since these are proportional to the charge squared and are therefore large for heavy nuclei.
Such slips simulate an event with an initial energy of several TeV, and it is therefore necessary to work with a threshold above this energy.
This problem is solvable, but it has not yet been addressed in the current version of the data processing algorithms.
Since heavy nuclei are measured in the KLEM with a high threshold, the all-particle spectrum can be constructed only down to the highest of the individual-nucleus thresholds.
The NUCLEON experimental spectra are in reasonable agreement with the ATIC and SOKOL experiments, but all these spectra are notably higher in intensity than the spectrum of the Proton-4 experiment.
The Proton-4 experiment still holds the record for the highest energy achieved in a direct measurement of the energy spectrum of cosmic rays, but it was one of the first space experiments, carried out with a procedure that is very simplified from a modern point of view, and its accuracy may be low.
At energies above 100 TeV, both methods, the calorimetric and the KLEM, indicate a possible break in the spectrum of all particles.
However, the statistics in this region of the spectrum are insufficient even for preliminary conclusions.
A discrepancy between the results of the KLEM and calorimeter methods beyond the statistical errors of the NUCLEON experiment should be noted.
This suggests that some systematic errors in the measurement of the spectra still occur, although they are not very large.
This was expected, since at this stage of the NUCLEON experiment, many experimental methods are preliminary, and the results will be refined.
No detailed evaluation of systematic errors has been performed, as it is premature.
This observation is relevant to almost all of the results to be presented in this paper.
Figure~\ref{fig:LnA} shows the mean logarithm of the mass of cosmic rays versus the energy per particle measured by the NUCLEON detector, which corresponds exactly to the all-particle spectrum in figure~\ref{fig:All-Particles}.
The ATIC experiment indicates, with low statistical significance, the existence of an undulating structure (a bend) near the energy of 10\,TeV per particle.
The mean-logarithmic-mass curves of the NUCLEON experiment do not contradict the existence of such a structure and also give some indication of its existence.
As the data set grows, the statistical significance of the NUCLEON findings will grow, and the existence of the structure will be confirmed or refuted.
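For reference, at each energy-per-particle bin the plotted quantity is, under one common definition, the flux-weighted mean of $\ln A$ over the measured species; a minimal sketch:
\begin{verbatim}
import numpy as np

def mean_ln_A(fluxes, masses):
    """<ln A>: mean of ln(A) weighted by the differential fluxes."""
    I = np.asarray(fluxes, dtype=float)
    return float(np.sum(I * np.log(masses)) / np.sum(I))
\end{verbatim}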
\subsection{Proton and helium spectra}
\label{subsec:p-He}
Figure~\ref{fig:p} shows a proton spectrum measured in the NUCLEON experiment together with the data from the Sokol \cite{SOKOL-1993-IzvRAN,SOKOL-1993-ICRC}, ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}, CREAM-III \cite{CREAM2017-ApJ-pHe}, AMS-02 \cite{AMS-02-2015-PRL1}, PAMELA \cite{CR-PAMELA-2011-p-He-Mag} and BESS-Polar I and II \cite{BESS-Polar-2016} experiments.
The results of the calorimetric and KLEM methods are close to each other and in reasonable agreement with the results of the other experiments.
However, it should be noted that there are discrepancies with the data of other experiments that lie outside the margins of statistical error and are therefore methodological in nature.
The proton spectra measured by the NUCLEON experiment do not contradict the existence of a break in the energy spectrum near 10 TeV, which was mentioned in the Introduction.
Signs of the break with different statistical significance can be seen in the spectra of both the calorimeter and the KLEM.
The behavior of the spectrum at energies above 100 TeV is unclear, as the statistics are insufficient, but neither method excludes that the steepening after the break is replaced by a new flattening of the spectrum.
The situation will become clearer with the collection of a larger set of statistics.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{p.pdf}
\caption{\label{fig:p}Proton spectrum measured in the NUCLEON experiment together with the data from other experiments: Sokol \cite{SOKOL-1993-IzvRAN,SOKOL-1993-ICRC}, ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; CREAM-III \cite{CREAM2017-ApJ-pHe}; AMS-02 \cite{AMS-02-2015-PRL1}; PAMELA \cite{CR-PAMELA-2011-p-He-Mag}; BESS-Polar I and II \cite{BESS-Polar-2016}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{He.pdf}
\caption{\label{fig:He} Helium spectrum measured in the NUCLEON experiment together with the data from other experiments: Sokol \cite{SOKOL-1993-IzvRAN,SOKOL-1993-ICRC}, ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; CREAM-III \cite{CREAM2017-ApJ-pHe}; AMS-02 \cite{AMS-02-2015-PRL1}; PAMELA \cite{CR-PAMELA-2011-p-He-Mag}; BESS-Polar I and II \cite{BESS-Polar-2016}.}
\end{figure}
Figure~\ref{fig:He} shows the helium nuclei spectrum measured by the NUCLEON experiment, and the results of the Sokol \cite{SOKOL-1993-IzvRAN,SOKOL-1993-ICRC}, ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}, CREAM-III \cite{CREAM2017-ApJ-pHe}, AMS-02 \cite{AMS-02-2015-PRL1}, PAMELA \cite{CR-PAMELA-2011-p-He-Mag} and BESS-Polar I and II \cite{BESS-Polar-2016} experiments.
The NUCLEON data are consistent with other experiments.
Some discrepancy may be noted for the two points of the Sokol experiment in the 20--50\,TeV range, but the statistical errors of the Sokol experiment are large, so this deviation is hardly a serious problem.
At energies below 10\,TeV, a slight systematic difference between the calorimeter and the KLEM methods can be noted.
\begin{figure}
\centering
\includegraphics[width=0.6666\textwidth]{p-He-KLEM.pdf}
\caption{\label{fig:p-He-KLEM}Spectra of protons and helium nuclei, measured in the NUCLEON experiment using the KLEM method in terms of energy per nucleon.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{p-to-He.pdf}
\caption{\label{fig:p-to-He} Ratio of the proton flux to the helium flux from the NUCLEON experiment's KLEM and calorimeter methods and the data of the ATIC experiment \cite{ATIC-2009-PANOV-IzvRAN-ENG}.}
\end{figure}
Figure~\ref{fig:p-He-KLEM} shows the spectra of protons and helium nuclei measured in the NUCLEON experiment using the KLEM method in terms of energy per nucleon.
The calorimetric method is not given here because it is qualitatively very similar to the KLEM results, but the statistics are worse.
In figure~\ref{fig:p-He-KLEM}, a break is clearly visible in the proton spectrum near the energy of 10 TeV.
In the comparison of the spectra of protons and helium in terms of energy per nucleon, it is noteworthy that the spectrum of helium gives some indication of a possible break at approximately the same energy as the break in the proton spectrum.
This is a very interesting fact that should be carefully examined as the set of the NUCLEON experiment statistics grows.
Many direct experiments of the previous century gave indications that the proton and helium spectra at energies ranging from tens of GeV to tens of TeV have different slopes.
This phenomenon would be of fundamental importance, as it would indicate different conditions of acceleration of protons and helium, and therefore the existence of different types of accelerators of cosmic rays.
However, for a long time no experiment could give a statistically significant result on the existence of such a difference, until the phenomenon was confirmed with very high statistical reliability in the energy region from 200 GeV to 10 TeV by the ATIC experiment \cite{ATIC-2004-ZATSEPIN-IzvRan}.
After that, the existence of the phenomenon was confirmed in several other experiments for various energy ranges, and for new experiments it became, in fact, a test of methodological correctness.
Figure~\ref{fig:p-to-He} shows the ratio of the proton flux to the helium flux from the NUCLEON experiment's KLEM and calorimeter methods and the data of the ATIC experiment \cite{ATIC-2009-PANOV-IzvRAN-ENG}.
The NUCLEON experiment confirms the presence of the phenomenon and its results are in full accordance with the results of the ATIC.
\subsection{Spectra of abundant heavy nuclei}
\label{subsec:Abundant}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{C.pdf}\\
\includegraphics[width=0.75\textwidth]{O.pdf}
\caption{\label{fig:CO} Energy spectra of carbon and oxygen nuclei obtained by the NUCLEON experiment and in the experiments ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}, TRACER(LDB2) \cite{CRNUCL-TRACER2011-ApJ}, and CREAM \cite{CR-CREAM2010A}.}
\end{figure}
Figure~\ref{fig:CO} shows the energy spectra of carbon and oxygen nuclei obtained by the NUCLEON experiment; figures~\ref{fig:NeMg} and \ref{fig:SiFe} show the energy spectra of neon, magnesium, silicon and iron nuclei.
There are no strong deviations from the results of the other experiments (see the captions under the pictures).
Some systematic differences between the calorimeter and the KLEM methods are present only for carbon and iron nuclei.
For the heavy nuclei, the spectra obtained show several interesting features.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Ne.pdf}
\includegraphics[width=0.75\textwidth]{Mg.pdf}
\caption{\label{fig:NeMg} Energy spectra of neon and magnesium obtained by the NUCLEON experiment and in the experiments: ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; TRACER(LDB1) \cite{CRNUCL-TRACER2008B-ApJ}; TRACER(LDB2) \cite{CRNUCL-TRACER2011-ApJ}; CREAM \cite{CR-CREAM2010A}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Si.pdf}
\includegraphics[width=0.75\textwidth]{Fe.pdf}
\caption{\label{fig:SiFe} Energy spectra of silicon and iron obtained by the NUCLEON experiment and in the experiments: ATIC \cite{ATIC-2009-PANOV-IzvRAN-ENG}; TRACER(LDB1) \cite{CRNUCL-TRACER2008B-ApJ}; TRACER(LDB2) \cite{CRNUCL-TRACER2011-ApJ}; CREAM \cite{CR-CREAM2010A}.}
\end{figure}
One of the intriguing problems is the possibility of a flattening of the spectra for the majority of heavy nuclei at high energies -- above a few hundred GeV per nucleon.
An indication of the existence of such a phenomenon was seen in the ATIC experiment \cite{ATIC-2007-PANOV-IzvRAN} and, later, in the CREAM experiment \cite{CR-CREAM2010A}.
The TRACER experiment \cite{CRNUCL-TRACER2011-ApJ,CRNUCL-TRACER2008B-ApJ} did not confirm the existence of this effect, but it does not apparently contradict it due to insufficient statistical accuracy.
Some indications of the existence of this phenomenon can be seen in the spectra from the carbon and oxygen nuclei (figure~\ref{fig:CO}), but it is absent from the iron spectrum (figure~\ref{fig:SiFe}).
Significantly more reliable data can be obtained by constructing an averaged spectrum of the heavy nuclei in terms of energy per nucleon, which can dramatically increase the statistical significance of the spectrum at high energies.
Figure~\ref{fig:AllNucl} shows the spectra of heavy nuclei ($Z=6\div27$) in terms of energy per nucleon from the NUCLEON experiment along with the similar data from the ATIC experiment \cite{ATIC-2009-PANOV-IzvRAN-ENG}.
For historical reasons (for comparison with the data of the ATIC), the spectrum of iron is also included, although the iron spectrum shows no signs of flattening at high energies (as will be discussed specifically below).
Although there are some systematic differences in the absolute intensity of the spectrum between the calorimeter method and the KLEM method of the NUCLEON experiment, qualitatively, both methods reliably indicate that the averaged spectrum of heavy nuclei at energies above 200--300 GeV per nucleon has a low slope, confirming the indications of the ATIC experiment.
However, the NUCLEON data also provide evidence of an entirely new phenomenon, which could not be detected in the ATIC experiment: at energies above 3--8\,TeV/n (depending on the method used), the spectrum unexpectedly drops sharply.
This indication is not quite statistically robust, but it will be checked with the accumulated data of the NUCLEON experiment and improved data processing methods.
Note that this phenomenon manifests itself in a previously inaccessible energy range; it is precisely for such physics that the NUCLEON experiment was designed.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{AllNucl.pdf}
\caption{\label{fig:AllNucl} Spectra of heavy nuclei ($Z=6\div27$) in terms of energy per nucleon from the NUCLEON experiment along with similar data from the ATIC experiment \cite{ATIC-2009-PANOV-IzvRAN-ENG}.}
\end{figure}
\subsection{Features of the spectrum of iron in comparison with the spectra of other heavy nuclei}
\label{subsec:Iron}
As can already be seen based on the data presented, the iron spectrum behaves significantly differently from the spectra of other heavy nuclei at high energies.
The easiest way to see this is to determine the ratios of the heavy nuclei spectra to the iron spectrum.
That has already been done in the ATIC experiment \cite{ATIC-2014-NuclPhysB} and the results do indicate a significant difference between these spectra, although the statistical significance of the data is not very high.
Those findings can be tested in the NUCLEON experiment with greater statistical reliability and for higher energies.
Figure~\ref{fig:Abund-to-Fe} shows the ratios of the spectra of nuclei with charges from 6 to 14 in terms of the energy per nucleon to the spectrum of the iron nucleus for the NUCLEON experiment and the ATIC experiment \cite{ATIC-2014-NuclPhysB}.
The NUCLEON data confidently indicate that the spectrum of iron at energies above $\sim$100\,GeV per nucleon is steeper than the spectra of heavy nuclei with charges from 6 to 14, which includes the abundant heavy nuclei C, O, Ne, Mg, Si.
The systematic difference of the ratios obtained from the calorimeter method and the KLEM method is mainly caused by systematic differences in the measured spectrum of iron, already noted above.
Qualitatively, however, both methods lead to the same result, and the statistical significance of the common result is mainly provided by the KLEM method, whose statistics are about four times larger than those of the calorimeter method.
Note that the difference between the iron spectrum and the spectra of heavy nuclei currently has no explanation, which is why this phenomenon is very important.
Also note that the difference between the spectra is observed in the NUCLEON experiment separately for each heavy nucleus relative to iron, but the statistical significance of each individual difference is lower than for the total spectrum of charges 6--14.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Abund-to-Fe.pdf}
\caption{\label{fig:Abund-to-Fe} Ratios of the spectra of nuclei with charges $Z=6\div14$ in terms of energy per nucleon to the spectrum of iron nuclei for the NUCLEON experiment and the ATIC experiment \cite{ATIC-2013-Panov-FeUpturn}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{SubFe-to-Fe.pdf}
\caption{\label{fig:SubFe-to-Fe} Ratios of the spectra of nuclei with charges $Z=16\div24$ (``sub-Fe'' nuclei) in terms of energy per nucleon to the spectrum of iron nuclei for the NUCLEON experiment and the ATIC experiment \cite{ATIC-2013-Panov-FeUpturn}.}
\end{figure}
The ATIC experiment indicates even greater differences between the spectra of heavy nuclei within the sub-Fe charge range $(Z = 16\div24)$ and iron nuclei (the iron spectrum is steeper).
This is especially strange, since many sub-Fe nuclei are secondary nuclei -- fragments from nuclear spallation, mainly of iron on the interstellar gas -- which are expected to have steeper spectra than the spectrum of iron.
Earlier, a similar effect was observed in the results of the HEAO-3-C3 space experiment \cite{HEAO-HN-1987-ICRC-330}, but the authors then questioned the reality of the phenomenon and tied it to a possible methodological error.
Figure~\ref{fig:SubFe-to-Fe} shows the ratios of the spectra of nuclei with charges from 16 to 24 (sub-Fe nuclei) in terms of energy per nucleon to the spectrum of iron nuclei for the NUCLEON experiment and the ATIC experiment \cite{ATIC-2013-Panov-FeUpturn,ATIC-2014-NuclPhysB}.
As can be seen, the picture here is more mixed.
The calorimeter method data qualitatively confirm the results from the ATIC experiment very well, but the statistical errors of the calorimeter method are large, as well as the statistical errors of the ATIC experiment.
The KLEM method, while not explicitly showing the theoretically expected decrease of the $Z = 16\div24$\,/\,Fe ratio with energy (which is already important), does not show the growth of this ratio seen in the results of the ATIC experiment and of the calorimeter method of the NUCLEON experiment.
It is difficult to talk about systematic differences between the results of the calorimeter and the KLEM methods of the NUCLEON experiment, because all the differences occur within the statistical uncertainty.
The situation should become clearer with a larger set of data in the NUCLEON experiment. The methodological causes of such differences should also be carefully considered.
\section{Discussion and summary}
\label{sec:Discussion}
Although the NUCLEON experiment is in its initial phase, and the results so far are preliminary in nature, we can say with confidence that the data already give numerous indications of the existence of various non-canonical phenomena in the physics of cosmic rays, which are expressed in violation of a simple universal power law of the energy spectra.
Some of the results confirm and essentially clarify the data of earlier experiments.
Worth mentioning here are: the difference between the slopes of the spectra of protons and helium; the difference between the spectra of heavy abundant nuclei and iron nuclei; the difference between the spectra of the sub-Fe nuclei $(Z = 16\div24)$ and that of iron nuclei; and the flattening of the spectra of all the studied nuclei except iron at energies above $\sim$500\,GeV per nucleon.
These phenomena can be explained in terms of the concept of the existence of various sources of cosmic rays, which are characterized by different chemical compositions of the accelerated particles, and different energy spectra, such as in the three-component model \cite{CR-ZATSEPIN2006}, or within the concept of the heterogeneous structure of the cosmic ray sources themselves \cite{ATIC-2011-ZATSEPIN-ICRC,CRA-OHIRA2011}.
The difference between the spectra of heavy nuclei and the spectrum of iron nuclei may be partly related to the effects of propagation in a heterogeneous space environment provided by the so-called ``superbubbles'' \cite{ATIC-2013-Panov-FeUpturn}, however, the discussion of these phenomena is in its initial phase.
Other effects found (if confirmed) are brand new.
These include breaks in the spectra of protons and helium near 10--20\,TeV per nucleon\footnote{During the revision of this paper, a report by the CREAM collaboration appeared \cite{CREAM2017-ApJ-pHe}, confirming the existence of a break in the proton spectrum at energies of 10--20\,TeV.} and a break in the spectra of heavy nuclei near energies of 5--10\,TeV per nucleon.
The existence of these phenomena is still not firmly established, so a discussion of their physical nature has not even begun.
These phenomena are situated in a poorly investigated energy range, from 10\,TeV up to several hundred TeV per particle, which became accessible in the NUCLEON experiment.
They manifest themselves with the current amount of collected data and it is expected that the statistical significance of the results and their methodological elaboration will increase significantly during the experiment.
\acknowledgments
We are grateful to ROSCOSMOS State Space Corporation and Russian Academy of Sciences for their continued support of this research.
\section{Introduction}
With the continuous development of the blue economy, communications for marine information gathering, transmission and fusion will become more and more important\cite{uwanet}.
Moreover, the sixth-generation (6G) agenda aims to connect the whole world, which requires ensuring worldwide connectivity. In this era, the demand for communications will expand dramatically from space to the air, ground, and sea environments\cite{6Gtvt}~\cite{6Gnetwork}.
Hence, underwater information acquisition is becoming increasingly important. Underwater acoustic communications (UWA COMMs), the most effective means of underwater information transmission, face two challenges.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm]{Visio-jstsp.pdf}\\
\caption{Distributed OoT devices link IoT devices and IoUT devices.}\label{RelayIllustration}
\end{figure}
One challenge is that underwater information needs to cross the water-air interface.
Fortunately, using buoy nodes to exchange seabed observation information and satellite control information is a typical and extremely important application for ocean observation \cite{situOB}.
The emerging ocean of things (OoT), based on low-cost floating devices \cite{ootJ} \cite{ootoceans}, will provide a feasible way for water-air information interaction.
Therefore, the OoT will be the hub of information interaction, linking the IoT, which serves wireless devices, and the IoUT, which mainly focuses on underwater equipment\cite{IoUT}.
The other challenge is that UWA COMMs need to overcome many obstacles, such as strong noise interference, multipath effects, large-scale Doppler effects and synchronization errors\cite{Chitre2008Underwater}.
UWA COMMs can be divided into two research fields: high-rate communication systems and low-rate robust communication systems.
In particular, robust UWA COMMs play an important role in many underwater scenarios, such as control-signaling transmission for unmanned underwater work systems and information interaction in high-noise environments.
Recently, deep learning has shown remarkable results in underwater acoustic signal recovery compared with classical signal processing methods. However, a fundamental problem is that devices are distributed, so two issues cannot be ignored. First, the data are separated across devices, which leads to fragmented and marginalized data acquisition. Second, a single device may hold insufficient data.
The emerging federated learning (FL) paradigm can train deep learning (DL) models in distributed systems, which offers a promising solution\cite{FLgen}.
Motivated by the above, we explore the power of deep learning and exploit the cooperation of acoustic and radio links to use distributed data for robust UWA COMMs. The surface relay buoy transmission system can exploit this acoustic-radio cooperation in two ways. First, the system can exchange information with the subsea equipment through DL-based UWA COMMs. Second, the sea-surface relay buoys can perform federated learning, sharing DL model parameters via radio frequency links to improve single-node performance. The main contributions of this paper are threefold.
$\bullet$ We propose an acoustic radio cooperative (ARC) training framework for the deep learning based Ocean of Things, which can be used for DL-model training on surface equipment.
$\bullet$ To analyze the ARC performance, we take robust UWA COMMs for OoT devices as an example. We propose a novel DL-based chirp communication receiver and apply it over underwater acoustic channels; it is robust against Doppler shift and symbol time offset, and improves the bit error rate by an order of magnitude.
$\bullet$ To utilize the distributed data from multiple buoy nodes, we propose an ARC enhanced federated meta learning (FML) based algorithm to train the DL-receiver in the context of randomly scheduled wireless networks, dubbed ARC/FML, which achieves distributed transfer learning to adapt to a new dataset. Besides, we analyze the convergence of FML over wireless communication: for any convergence target gap $\epsilon$, the FML algorithm can achieve the gap after $T_z$ rounds of communications.
The remainder of this paper is organized as follows. Section II provides a brief survey on UWA COMMs, DL in the physical layer and federated learning in wireless networks. The system model is introduced in Section III, and the convergence of federated meta learning in wireless networks is analyzed in Section IV. Section V describes the dataset, and Section VI demonstrates simulation results. The last section concludes the paper.
\section{State of the Art}
\textbf{UWA COMMs:} UWA channels are known for severe transmission loss, time-varying multipath propagation, severe Doppler spread, limited and distance-dependent bandwidth, and high propagation delay\cite{WENM}. These features change as the communication scenario changes. Therefore, researchers usually divide underwater acoustic communication into two kinds according to the application requirements \cite{Huang}. One is high-data-rate underwater acoustic communication for short- and medium-range communications. In this direction, in order to further improve communication reliability, it is necessary to better overcome the complex multipath fading of the underwater acoustic channel. Joint equalization and decoding, which combines channel equalization and channel decoding, is a relatively advanced channel compensation technique at present and has high practical value and application prospects\cite{TURBO}. Moreover, in order to meet the needs of more diversified applications with high data rates, new efficient modulation schemes and multiple-input multiple-output technology have been introduced into medium- and short-range underwater acoustic communication, which significantly improves the rate of underwater acoustic communication and has become a new research hotspot\cite{MIMOuwa}.
The other is low-data-rate underwater acoustic communication for long-range communications. Many modulation techniques with robust performance, such as frequency-shift keying, chirp modulation and spread-spectrum modulation, have been used for rapidly time-varying channels.
However, the most difficult aspect of UWA COMMs is the lack of accurate channel models: tractable mathematical descriptions of the underwater acoustic channel are elusive because the signal propagation is very complicated\cite{NarimanNeural}.
Hence, researchers pay increasing attention to data-driven deep learning methods, expecting to solve many problems that traditional methods cannot.
With improving computational resources and growing quantities of data, deep learning has brought a new era for communication systems, in which many novel system architectures and algorithms have been designed \cite{o2017introduction}.
\textbf{DL Based COMMs:}
DL has been applied successfully in receiver design, channel estimation and signal detection over wireless channels~\cite{qin2019deep}. Unlike conventional receivers, DL can handle wireless channels in an end-to-end manner. It is widely acknowledged that the well-trained DL-receiver can not only reduce the receiver complexity \cite{TwoApplications}, but also achieve perfect demodulation under unknown channels \cite{NarimanNeural}.
The design of DL-receivers can be generally classified into two categories. One is the data-driven method, such as the classical FC-DNN\cite{ye2017power}, which takes into account the characteristics of the data and the ability of neural networks, aiming to achieve global optimality. However, most existing works based on the data-driven method treat the communication system as a black box. The other category is the model-driven method, such as the well-known ComNet~\cite{Comnet}, which combines DL and expert knowledge.
In particular, methods based on deep learning are also gradually being adopted in underwater acoustic communication.
\textbf{FL:}
FL has come into fashion because it decouples data acquisition from the computation at the central unit.
An analytical model is developed to characterize the performance of FL in wireless networks\cite{FLtony}.
Moreover, to ensure that DL based communications can work in a new environment, meta-learning, which trains the network by alternating inner-task and across-task updates with a small amount of labeled data to improve transfer efficiency, has been used in wireless communications.
Therefore, many researchers pay attention to federated meta learning.
However, among the many applications of federated learning in wireless networks, deep learning (DL) based applications are actively and widely discussed for image classification tasks, such as MNIST, rather than for DL based physical-layer cases.
Recently, in the ocean of things, with the increasing computational capacity of devices such as buoys, unmanned ships and offshore platforms, as well as increasing concerns about sharing private data, there is a precedent that IoUT devices can realize federated learning (FL) computation \cite{UWAiot}.
\section{Federated Meta Learning Based Framework}
In this section, we introduce the federated meta learning algorithm in wireless networks.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{metaprocess-eps-converted-to.pdf}\\
\caption{Federated meta learning framework.}\label{RelayIllustration}
\end{figure}
For DL based applications, an important limitation of the approach is that training generally has to be carried out from scratch for each new dataset. Aiming to improve the generalization of the DL-receiver, we propose an FML based framework with random scheduling.
Sequentially updating the network parameters at the edge nodes and then performing global aggregation is far more efficient than transmitting all the data located on the distributed nodes to the central node. In many communication scenarios, channel resources are precious; hence, a scheduling policy is used to allocate the limited channels to users. In \cite{FLtony}, federated learning in wireless networks with three scheduling policies has been studied and the convergence analyzed.
However, existing federated learning focuses on sufficient datasets and adapts poorly to a new UWA environment when labeled data are rare.
The issue of robust learning is of great concern in machine learning for communications. Many approaches have been proposed, in particular coarse offline learning using datasets followed by fine learning in an online fashion.
The goal of federated meta learning fits in well with this: to train models that, after one or a few steps of gradient descent on their local dataset, can solve new learning tasks using only a small number of training samples\cite{DTLaibo}.
Hence, in this paper, we take federated meta learning as the framework for the distributed model with random scheduling. In each communication round, the central node uniformly picks $N$ out of $K$ users, and $G=\frac{N}{K}$ is the available channel ratio.
Essentially, federated meta learning performs transfer learning in order to improve the generalization of the DL-receiver.
Here, we focus on applying FML in wireless networks. The procedure is described in Algorithm 1 and can be divided into two parts.
$\bullet$ At edge node $i$, the node first updates its parameters using the training data $D_i^{train}$ stored on the device.
Following the MAML algorithm, given the model parameters of buoy node $i$, the node updates them by one gradient-descent step on $D_i^{train}$,
\begin{equation}
\phi_i(\theta)=\theta -\alpha \nabla_\theta L(\theta, D_i^{train}),
\end{equation}
where $\alpha$ is the learning rate. The node then evaluates the loss $L(\phi_i(\theta), D_i^{test})$ on the testing data $D_i^{test}$ and locally updates $\theta_i$:
\begin{equation}
\theta_i^{t+1}= \theta_i^t-\beta \nabla_\theta L(\phi_i(\theta), D_i^{test}).
\end{equation}
After that, if the node is chosen by the AP, it sends $\theta_i^{t+1}$ to the AP. The framework can be seen in Fig.~3.
$\bullet$ At the AP side, the AP selects a subset of buoy nodes (BNs) and collects their parameters for the global aggregation.
During communication round $t$, a successful upload and update must meet two conditions simultaneously: the node must be selected, and the transmitted data must be decoded without error. In this respect, we use an indicator function $u_i^t\in \{0,1\}$ to mark whether node $i$ is used in the federated learning process at round $t$. Hence, the AP performs
\begin{equation}
\theta^{t+1} =\sum _{ i \in \mathbb{S}}w_i \theta_i^{t+1} u_i^t,
\end{equation}
where $w_i$ is determined by the local data size as $w_i= \frac{|J_i|}{\sum_{i \in \mathbb{S}}|J_i|}$.
Then, the new global parameters $\theta^{t+1}$ are broadcast to all nodes in a reliable way.
Here, distributed buoy $i \in \mathbb{S}$ holds the local dataset $J_i=\{(x_i^{1}, y_i^{1}),\ldots,(x_i^{j}, y_i^{j}),\ldots,(x_i^{|J_i|}, y_i^{|J_i|})\}$, where $|J_i|$ is the dataset size and $(x_i, y_i)$ is a sample of this dataset: $x_i$ is the input of the DL-based network (the received signal) and $y_i$ is its output. The distribution of the dataset is unknown. The loss function is defined as $l(\theta, (x_i, y_i)) :\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$.
The receiver located on buoys can be trained by optimizing the loss function as
\begin{equation}\label{eq-loss-each}
L_i(\theta, J_i) \triangleq \frac{1}{|J_i|}\sum_{(x_i^j, y_i^j)\in J_i}{{{\left| {{x_i^j} - {\hat x_i^j}} \right|}^2}},
\end{equation}
which can be simply denoted as $L_i(\theta)$.
\begin{algorithm}[!t]
\caption{Federated Meta Learning Based on Random Scheduling}
\label{alg:example}
\begin{algorithmic}[1]
\REQUIRE {\bf{Data set $\{D_i\}_{i=1}^{K}$} at each BN} \\
\FOR {$ t=1:T $}
\FOR {each BN $i \in \{ 1,2, \ldots ,K\}$ in parallel}
\STATE {Initialize $\theta_i^t = {\theta^t}$}
\FOR {$t_{ local}=1$ to $T_0$ }
\STATE {Sample a mini-batch from $D_i$ uniformly at random, and update the local parameter using $D_i^{train}$:}
\STATE {$\phi_i^t = \theta_i^t - \alpha \nabla_\theta {L}(\theta_i^t, D_i^{train})$}
\STATE {Obtain $ \theta_i^{t+1}$ based on}
\STATE {$\theta_i^{t+1} = \theta_i^t - \beta \nabla_\theta L(\phi_i^t, D_i^{test})$}
\ENDFOR
\STATE {Send parameter $\theta_i^{t+1}$ to the AP }
\ENDFOR
\STATE{The AP collects the parameters $\{ \theta_i^{t+1}\} _{i = 1}^K$ from the scheduled BNs, and updates ${\theta^{t + 1}} = \sum\nolimits_{i = 1}^K {{w_i}\theta_i^{t+1}}u_i^t $ }
\ENDFOR
\ENSURE $\theta^T$
\end{algorithmic}
\end{algorithm}
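To make Algorithm 1 concrete, the following minimal NumPy sketch simulates it on toy quadratic losses. The task matrices $A_i$, $b_i$, the dataset sizes, the learning rates and the first-order approximation of the meta-gradient (used in place of the full second-order MAML update) are illustrative assumptions, not the actual DL-receiver training setup.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, N, T, T0 = 10, 3, 50, 2   # nodes, scheduled nodes, rounds, local steps
alpha, beta, d = 0.01, 0.005, 4

# Toy per-node tasks: quadratic losses L_i(w) = ||A_i w - b_i||^2 / (2 m)
A = [rng.standard_normal((20, d)) for _ in range(K)]
b = [rng.standard_normal(20) for _ in range(K)]
sizes = np.array([bi.size for bi in b], dtype=float)

def grad(i, w, idx):
    """Mini-batch gradient of node i's loss on sample rows idx."""
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ w - bi) / idx.size

theta = np.zeros(d)                                # global meta-initialization
for t in range(T):
    sched = rng.choice(K, size=N, replace=False)   # random scheduling
    acc, wsum = np.zeros(d), 0.0
    for i in sched:
        th = theta.copy()
        for _ in range(T0):                        # local meta-updates
            tr = rng.choice(20, size=10, replace=False)   # D_i^train
            te = np.setdiff1d(np.arange(20), tr)          # D_i^test
            phi = th - alpha * grad(i, th, tr)     # inner adaptation step
            th = th - beta * grad(i, phi, te)      # first-order outer step
        acc, wsum = acc + sizes[i] * th, wsum + sizes[i]
    theta = acc / wsum                             # weighted aggregation at the AP
\end{verbatim}
Scheduling only $N$ of the $K$ nodes per round mirrors the available channel ratio $G=N/K$ introduced above.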
\section{UWA Chirp Communications Cases}
With its continuing advance, deep learning has found strong applications in the physical layer; hence, we take a DL-based communication system as an example.
To achieve stable communication, each node adopts chirp modulation for UWA COMMs because of its noise immunity and robustness to Doppler.
The UWA stable communication system adopts a pair of chirp signals in the frequency band $[f_1, f_2]$ Hz to transmit information, which can be expressed as
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{frame-eps-converted-to.pdf}\\
\caption{Chirp communication system framework.}\label{RelayIllustration}
\end{figure}
\begin{equation}\label{eq-III.1}
s_1(t)= \cos(2 \pi (f_1 t+\mu t^2/2) +\phi_0 ), \quad 0 \leq t\leq T,
\end{equation}
\begin{equation}\label{eq-III.1}
s_2(t)= \cos(2 \pi (f_2 t-\mu t^2/2) +\phi_0 ), \quad 0 \leq t\leq T,
\end{equation}
where $s_1$ is the up-chirp and $s_2$ the down-chirp, $T$ is the symbol duration, and $\phi_0$ is an arbitrary initial phase, assumed to be zero without loss of generality.
Moreover, the chirp signal is characterized by its start frequency $f_1$, end frequency $f_2$, and time duration $T$ as
\begin{equation}\label{eq-III.1}
\mu =\frac{|f_2-f_1|}{T}=\frac{B}{T},
\end{equation}
where $B$ is the bandwidth of the chirp signals. Usually, the chirp signal is defined as an up-chirp for $\mu > 0$ and as a down-chirp for $\mu < 0$.
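For illustration, the pair of chirp symbols and a modulated frame can be generated as in the Python sketch below; the sampling rate and the band edges are assumed values, since only the 10\,ms symbol duration is fixed later in the paper.
\begin{verbatim}
import numpy as np

fs = 96_000               # sampling rate in Hz (assumed)
T = 0.01                  # 10 ms symbol duration
f1, f2 = 6_000, 12_000    # band edges in Hz (assumed)
t = np.arange(int(T * fs)) / fs
mu = (f2 - f1) / T        # chirp rate mu = B/T

# Up-chirp s1 sweeps f1 -> f2; down-chirp s2 sweeps f2 -> f1 (phi_0 = 0)
s1 = np.cos(2 * np.pi * (f1 * t + 0.5 * mu * t ** 2))
s2 = np.cos(2 * np.pi * (f2 * t - 0.5 * mu * t ** 2))

def modulate(bits):
    """Map each bit to one chirp symbol: 0 -> up-chirp, 1 -> down-chirp."""
    return np.concatenate([s2 if bit else s1 for bit in bits])

frame = modulate(np.random.default_rng(1).integers(0, 2, 200))
\end{verbatim}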
After framing, the signal is transmitted over the channel, which can be expressed as
\begin{equation}\label{channelintergral}
y(t)= \int_{-\infty}^{+\infty}h(\tau,t)s(t-\tau)d\tau +n(t),
\end{equation}
where $h(\tau,t)$ denotes the UWA channel impulse response and $n(t)$ is additive noise.
\subsection{Matched Filter based Receiver}
A classical method to implement the matched filter (correlator) receiver is convolution: the received signal $y(t)$ is convolved with time-reversed versions of $s_1(t)$ and $s_2(t)$ to generate the decision statistics $c_1$ and $c_2$, respectively. The receiver computes
\begin{equation}\label{eq-III.1}
c_{i}= \int y(\tau)s_{i}(T-\tau)d \tau, \quad i \in \{1,2\}.
\end{equation}
If $c_1 \geq c_2$, the receiver decides that $s_1$ was transmitted; otherwise, $s_2$.
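A minimal sketch of this correlator receiver is given below; the chirp parameters repeat the illustrative assumptions from the previous sketch.
\begin{verbatim}
import numpy as np

fs, T = 96_000, 0.01                             # assumed, as above
t = np.arange(int(T * fs)) / fs
mu = 6_000 / T                                   # B/T with B = 6 kHz
s1 = np.cos(2 * np.pi * (6_000 * t + 0.5 * mu * t ** 2))   # up-chirp
s2 = np.cos(2 * np.pi * (12_000 * t - 0.5 * mu * t ** 2))  # down-chirp

def mf_detect(y):
    """Correlate one received symbol against both reference chirps and
    decide in favour of the larger correlation peak."""
    c1 = np.abs(np.correlate(y, s1, mode="full")).max()
    c2 = np.abs(np.correlate(y, s2, mode="full")).max()
    return 0 if c1 >= c2 else 1

rng = np.random.default_rng(2)
y = s1 + 0.5 * rng.standard_normal(s1.size)      # noisy up-chirp
assert mf_detect(y) == 0
\end{verbatim}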
However, to achieve the optimal detection, the matched filter needs to satisfy three conditions.
$\bullet$ Integral interval synchronization should be satisfied. That is to say, the received signals $y(t)$ require precise synchronization.
$\bullet$ The noise $n(t)$ should be Gaussian noise.
$\bullet$ The received signal $y(t)$ must not be affected by Doppler shift.
Unfortunately, the abovementioned assumptions are impractical in UWA COMMs because of the large delays, non-Gaussian noise and Doppler shift characteristics. Hence, we propose a novel DL-receiver to solve these problems.
\subsection{DL based Receiver}
We first introduce a specific DL based receiver, called C-DNN, which employs several fully connected NN layers for detection. We assume that the DL-receiver can be denoted as $f_\theta$ with parameters $\theta \in \mathbb{R}^d$.
In this paper, we first consider the simplest four-layer fully-connected neural network with one input layer, two hidden layers, and one output layer. Denote $x_i$ and $y_i$ as the estimated data and the network input of node $i$, respectively.
Therefore, the receiver $f_\theta$, which is a cascade of nonlinear transformations of the input data, can be expressed as
\begin{equation}
\hat x_i= f(y_i,\theta)=f_{sigmoid}^{(L-1)}(f_{Relu}^{(L-2)}(...f_{Relu}^{(1)}(y_i))),
\end{equation}
where layer $l$ contains a total
of $N_l$ neurons and each neuron in the layer $l$ is connected to all
neurons in the next layer $(l+1)$ through the connection weights
matrix.
The outputs of the hidden layers are activated by
\begin{equation}
f_{Relu}(x_i) = \max (0,x_i),
\end{equation}
which is a non-linear function that provides a normalized output and keeps it within the interval $[0, +\infty)$.
The output of the final layer is the estimated bit: the output layer consists of a single neuron estimating the binary bit to be detected.
A sigmoid activation function is applied to the output layer, which limits the output to the interval $[0, 1]$:
\begin{equation}
f_{sigmoid}(x_i) = \frac{1}{{1 + {e^{ -x_i}}}}.
\end{equation}
The input to the first layer is the received signal $y_i(t)$ or the sampled feature vector $y_i[k]$, selectively chosen from the observed signal through preprocessing.
This dataset is then used to train a DL-based receiver that maps the received signal $y_i[k]$ to one of the transmitted symbols $\{s_1, s_2\}$.
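A minimal PyTorch sketch of such a receiver and one training step is shown below. The input length, the hidden-layer widths and the optimizer settings are illustrative assumptions (the receiver's actual parameters are listed in the paper's parameter table), and the squared-error loss mirrors the objective defined earlier.
\begin{verbatim}
import torch
import torch.nn as nn

class CDNN(nn.Module):
    """Four-layer fully connected receiver: input layer, two ReLU hidden
    layers and one sigmoid output neuron estimating the bit."""
    def __init__(self, n_in, n_h1=160, n_h2=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_h1), nn.ReLU(),
            nn.Linear(n_h1, n_h2), nn.ReLU(),
            nn.Linear(n_h2, 1), nn.Sigmoid(),
        )

    def forward(self, y):
        return self.net(y)

model = CDNN(n_in=160)                        # 160 samples per symbol (assumed)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
y_batch = torch.randn(32, 160)                # stand-in for received symbols
bits = torch.randint(0, 2, (32, 1)).float()   # transmitted bits as labels
loss = nn.MSELoss()(model(y_batch), bits)     # squared-error loss, as above
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}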
\begin{table}[b]
\centering
\renewcommand\arraystretch{1.2}
\renewcommand\tabcolsep{3.0pt}
\setlength{\abovecaptionskip}{0.cm}
\caption{Notation Summary.}
\begin{tabular}{p{2cm}p{6cm}}
\hline
Notations & Definition \\
\hline
$T_0$ & number of local update steps\\
$v^t_{[n]}$ & parameters for global aggregation at each iteration within the interval $[(n-1)T_0,nT_0]$\\
$\theta_i^t$; $\theta^t$ & local parameters; weighted average of local parameters which is synchronized with $v^t_{[n]}$ \\
$\alpha$; $\beta$ & learning rate during meta training (local adaptation); learning rate during meta training (update initialization)\\
$L_i$; $G_i$; $G$ & objective function\\
$D^{train}$; $D^{test}$ & dataset for training and dataset for testing\\
\hline
\end{tabular}\\
\end{table}
\section{Convergence Analysis of Federated Meta Learning}
In this section, we focus on the convergence of the federated meta-learning method. We first define $G_i(\theta) = L_i(\phi_i(\theta))$ and $G(\theta) = \sum_{ i \in \mathbb{S}} w_i G_i(\theta)$. For simplicity, we assume $T=NT_0$ and make four assumptions on the objective functions.
\textbf{Assumption 1.} Each $L(\theta)$ is $\mu$-strongly convex, i.e., for all $\theta$, $\theta'\in \mathbb{R}^n$,
\begin{equation}
||\nabla L_i(\theta)-\nabla L_i(\theta')|| \geq \mu ||\theta- \theta' ||.
\end{equation}
\textbf{Assumption 2.} Each $L(\theta)$ is $H$-smooth, i.e., for all $\theta$, $\theta'\in \mathbb{R}^n$,
\begin{equation}
||\nabla L_i(\theta)-\nabla L_i(\theta')|| \leq H ||\theta- \theta' ||.
\end{equation}
and there exists a constant $B$ such that for all $\theta\in \mathbb{R}^n$,
\begin{equation}
||\nabla L_i(\theta)|| \leq B.
\end{equation}
\textbf{Assumption 3.} The Hessian of each $L(\theta)$ is $\rho$-Lipschitz, i.e., for all $\theta$, $\theta'\in \mathbb{R}^n$,
\begin{equation}
||\nabla^2 L_i(\theta)-\nabla^2 L_i(\theta')||\leq \rho||\theta- \theta' ||.
\end{equation}
\textbf{Assumption 4.} There exist $\delta_i$ and $\sigma_i$ such that for all $\theta \in \mathbb{R}^n$,
\begin{equation}
||\nabla L_i(\theta)-\nabla L_w(\theta)||\leq\delta_i,
\end{equation}
\begin{equation}
||\nabla^2 L_i(\theta)-\nabla^2 L_w(\theta)||\leq\sigma_i,
\end{equation}
where $L_w = \sum_{i \in \mathbb{S}} w_i L_i$ denotes the weighted average loss.
Assumptions 1 and 2 are standard and hold for many deep learning algorithms.
Assumption 3 states that each local loss function is second-order smooth, which makes it possible to analyze the local meta learning loss function.
Assumption 4 captures the similarity between nodes, measured by the gap $||\nabla L_i(\theta)-\nabla L_w(\theta)||$ between a single node and the weighted average.
Next, to analyze the convergence of FML under random scheduling, we first characterize the global loss function $G(\theta)$,
showing that $G(\theta)$ is $\mu''$-strongly convex and $H''$-smooth.
\textbf{Lemma 1.}
Suppose Assumptions 1--3 hold.
$G(\theta)$ is $\mu''$-strongly convex and $H''$-smooth, where $\mu''=N \mu'$, $ H''=N H'$ , $\mu' =\mu (1 - \alpha H)^2- \alpha \rho B $ and $ H'= H(1-\alpha \mu)^2+ \alpha \rho B$.
Lemma 1 tells us that the total loss function $G(\theta)$, like $L(\theta)$, is strongly convex and smooth.
\emph{\textbf{Proof:}}
To establish smoothness, we need to show that $||\nabla G(\theta)- \nabla G(\theta')|| \leq H'' ||\theta-\theta'||.$
Since each $G_i$ is $H'$-smooth, we have
\begin{equation}
\begin{aligned}
G_i(\theta) \leq G_i(\theta') +\nabla G_i(\theta')(\theta - \theta')+\frac{H'}{2}||\theta-\theta'||^2,
\end{aligned}
\end{equation}
which is equivalent to
\begin{equation}
\begin{aligned}
||\nabla G_i(\theta)- \nabla G_i(\theta')|| \leq H'||\theta-\theta'||,
\end{aligned}
\end{equation}
where $i \in \mathbb{S}$.
Because $G(\theta) = \sum_{ i \in \mathbb{S}} w_i G_i(\theta)$, by summing we can get
\begin{equation}
\begin{aligned}
\sum_{ i \in \mathbb{S}} G_i(\theta) &\leq \sum_{ i \in \mathbb{S}} G_i(\theta') + \sum_{ i \in \mathbb{S}} \nabla G_i(\theta')(\theta - \theta')\\&+ \frac {\sum_{i=1}^{N} H'}{2}||\theta-\theta'||^2.
\end{aligned}
\end{equation}
That is to say,
\begin{equation}
\begin{aligned}
G(\theta) \leq G(\theta') +\nabla G(\theta')(\theta - \theta')+\frac{N H'}{2}||\theta - \theta'||^2,
\end{aligned}
\end{equation}
which is equivalent to
\begin{equation}
\begin{aligned}
||\nabla G(\theta)- \nabla G(\theta')|| \leq H''||\theta-\theta'||,
\end{aligned}
\end{equation}
where $H''= NH'$.
In the same way, we can establish strong convexity:
\begin{equation}
\begin{aligned}
||\nabla G(\theta)- \nabla G(\theta')|| \geq N\mu'||\theta-\theta'|| .
\end{aligned}
\end{equation}
From the above, we have
\begin{equation}
N\mu'||\theta-\theta'|| \leq ||\nabla G(\theta)- \nabla G(\theta')|| \leq NH'||\theta-\theta'||.
\end{equation}
Thereby we complete the proof.
Next, we analyze the influence of the similarity between the local learning tasks.
Based on Lemma 1, we can obtain the convergence target gap of the FML based method.
\textbf{Theorem 1.} For any convergence target gap $\epsilon$, the FML algorithm can achieve the gap after $T_z$ rounds of communications, i.e.,
\begin{equation}
\mathbb{E}[G(\theta^*) - G(\theta^T)] \leq \epsilon
\end{equation}
if $T_z$ satisfies the following
\begin{equation}\label{impans}
T_z \geq \frac{log(\frac{1}{n}(\epsilon + K m(T_0)))}{log(\xi)},
\end{equation}
where $K = \frac{ \mu'' } {1-\xi^{T_0}}$, $\xi =1-2H'' \beta(1+\frac{H''\beta}{2})$ and $m(T_0)= \alpha' T_0-\frac{\alpha'}{\beta H'}[1-(1-\beta H')^{T_0}]$.
From formula (\ref{impans}), we can see that the term $K$ is influenced by the difference between the meta tasks, and the multiple local updates enter through the function $m(T_0)$.
We can also see that the number of local steps $T_0$ affects the convergence time $T_z$: as $T_0$ increases, $T_z$ decreases. Hence, we can adjust $T_0$ to balance the transmission cost and the local computation cost. The proof is as follows.
\emph{\textbf{Proof:}}
Following \cite{AdaptiveFL,realtimeFL}, we define a virtual sequence $v^t_{[n]}$ for global aggregation at each iteration $t \in [(n-1)T_0,nT_0]$; the interval $[(n-1)T_0,nT_0]$ is denoted by $[n]$. In general, we have
\begin{equation}
v^{t+1}_{[n]} = v^t_{[n]} -\beta \nabla G(v^t_{[n]} ),
\end{equation}
where $v^t_{[n]}$ is assumed to be ``synchronized'' with $\theta^t$ at the beginning of interval $[n]$, i.e., $v^{(n-1)T_0}_{[n]} = \theta^{(n-1)T_0}$, where $\theta^{(n-1)T_0}$ is the global weighted average of the local model parameters $\theta_i^{(n-1)T_0}$.
To show the convergence, we first analyze the gap between $v^t_{[n]}$ and $\theta^t$,
\begin{equation}
\begin{aligned}
&||\theta^{t+1}_i - v^{t+1}_{[n]}||\\
& = || \theta^{t}_i- \beta \nabla G_i(\theta_i^t) -v_{[n]}^t + \beta \nabla G(v_{[n]}^t)||\\
&\leq || \theta^{t}_i-v_{[n]}^t|| + \beta || \nabla G(v_{[n]}^t)- \nabla G_i(\theta_i^t)||\\
&\leq || \theta^{t}_i-v_{[n]}^t|| + \beta ||\nabla G_i(\theta_i^t) - \nabla G_i(v_{[n]}^t) || \\
&~~~+ \beta || \nabla G_i(v_{[n]}^t)- \nabla G(v_{[n]}^t)||\\
&\leq (1+\beta H') ||\theta^{t}_i - v^{t}_{[n]}||+ \beta [\delta_i + \alpha C (H \delta_i+B\sigma_i +\tau)],
\end{aligned}
\end{equation}
where the bound $|| \nabla G_i(v_{[n]}^t)- \nabla G(v_{[n]}^t)|| \leq \delta_i + \alpha C (H \delta_i+B\sigma_i +\tau) $ can be found in \cite{realtimeFL}.
Next, we denote $g(x) \doteq \frac{\delta_i + \alpha C (H \delta_i+B\sigma_i +\tau)}{H'}[(1+\beta H')^x-1]$, so that $||\theta^{t}_i - v^{t}_{[n]}|| \leq g(t-(n-1)T_0)$. From this, we can get
\begin{equation}
\begin{aligned}
&||\theta^{t+1} - v^{t+1}_{[n]}||\\
&= ||\sum w_i \theta^{t+1}_i u_i^t - v^{t+1}_{[n]}||\\
&=|| \theta^{t}- \beta \sum w_i \nabla G_i(\theta_i^t) u_i^t - v^{t}_{[n]}+ \beta \nabla G(v_{[n]}^t) ||\\
&\geq || \theta^{t}-v_{[n]}^t|| - \beta || \sum w_i ( \nabla G_i(\theta_i^t) - \nabla G_i(v_{[n]}^t)) ~u_i^t||\\
&\geq || \theta^{t}-v_{[n]}^t|| - \beta H' \sum_{i \in \mathbb{S}} w_i||\theta^{t}_i-v^{t}_{[n]}|| u_i^t\\
&\geq || \theta^{t}-v_{[n]}^t||- \beta H' \sum_{i \in \mathbb{S}} w_i||\theta^{t}_i-v^{t}_{[n]}|| \\
&\geq || \theta^{t}-v_{[n]}^t|| - \beta H' \sum_{i \in \mathbb{S}} w_ig(t-(n-1)T_0) \\
&= || \theta^{t}-v_{[n]}^t||+ \alpha'[1-(1+\beta H')^{t-(n-1)T_0}],
\end{aligned}
\end{equation}
where $\alpha' = \beta [\delta +\alpha C(H \delta +B \sigma +\tau)]$. Iteratively, we have
\begin{equation}
\begin{aligned}
&||\theta^{t} - v^{t}_{[n]}|| \geq \sum_{j=1}^{t-(n-1)T_0}{\alpha'[1-(1+\beta H')^{j}]}\\
&= \alpha' (t-(n-1)T_0)-\frac{\alpha'}{\beta H'}[1-(1-\beta H')^{(t-(n-1)T_0)}]\\
&\doteq m(t-(n-1)T_0).
\end{aligned}
\end{equation}
Then, we analyze the gap between the virtual sequence points $v^t_{[n]}$ and $v^{t+1}_{[n]}$ within the interval $[n]$ for $t \in [(n-1)T_0,nT_0]$. Because $G(\cdot)$ is $H''$-smooth, we have
\begin{equation} \label{com1}
\begin{aligned}
& G(v^{t}_{[n]}) -G(v^{t+1}_{[n]})\\
&\leq \nabla G(v^t_{[n]})(v^{t}_{[n]}- v^{t+1}_{[n]}) + \frac{H''}{2}||v^{t}_{[n]}-v^{t+1}_{[n]}||^2\\
&\leq \beta(1+\frac{H''\beta}{2})||\nabla G(v^{t}_{[n]}) ||^2
\end{aligned}
\end{equation}
Since $G(\cdot)$ is $H''$-smooth (Lemma 1) and $\theta^*$ is the minimum point, we have
\begin{equation}\label{3232}
\begin{aligned}
G(\theta^*)&= \min_{\theta} G(\theta) \\
&\leq \min_{\theta} [G(v^{t}_{[n]}) +\nabla G(\theta')(\theta - v^{t}_{[n]})+\frac{H''}{2}||\theta-v^{t}_{[n]}||^2]\\
&= G(v^{t}_{[n]}) + \min_{||y||=1} \min_{t\geq 0}[\nabla G(v^{t}_{[n]})yt+\frac{H''}{2}t^2 ]\\
&= G(v^{t}_{[n]}) + \min_{||y||=1} [-\frac{(\nabla G(v^{t}_{[n]})y)^2}{2H''}]\\
&= G(v^{t}_{[n]}) - \frac{||\nabla G(v^{t}_{[n]})||^2}{2H''},\\
\end{aligned}
\end{equation}
where we write $\theta = v^{t}_{[n]} + ty$ with $||y||=1$ and $t \geq 0$.
Therefore, we can get
\begin{equation}\label{com2}
\begin{aligned}
\frac{1}{2H'' }||\nabla G(v^{t}_{[n]})||^2 \leq G(v^{t}_{[n]}) - G(\theta^*)
\end{aligned}
\end{equation}
Combining formulas (\ref{com1}) and (\ref{com2}), we have
\begin{equation} \label{oooooottt}
\begin{aligned}
G(v^{t}_{[n]})- G(v^{t+1}_{[n]}) \leq 2H'' \beta(1+\frac{H''\beta}{2}) [ G(v^{t}_{[n]}) - G(\theta^*) ]
\end{aligned}
\end{equation}
Hence, we can get
\begin{equation} \label{eq1111}
\begin{aligned}
G(\theta^*)- G(v^{t+1}_{[n]}) \leq [1-2H'' \beta(1+\frac{H''\beta}{2})] [G(\theta^*)- G(v^{t}_{[n]})]
\end{aligned}
\end{equation}
Here we denote $\xi =1-2H'' \beta(1+\frac{H''\beta}{2})$.
That is to say,
\begin{equation}
G(\theta^*)- G(v^{t+1}_{[n]}) \leq \xi [G(\theta^*)- G(v^{t}_{[n]})].
\end{equation}
Iteratively, we can have
\begin{equation}
\begin{aligned}
G(\theta^*) - G(v^{NT_0}_{[N]}) &\leq \xi^{T_0} [G(\theta^*) - G(v^{(N-1)T_0}_{[N]})] \\
&= \xi^{T_0} [G(\theta^*)-G(v^{(N-1)T_0}_{[N-1]})] +\\&\xi^{T_0} [G(v^{(N-1)T_0}_{[N-1]}) - G(v^{(N-1)T_0}_{[N]})].
\end{aligned}
\end{equation}
\begin{table*}[tb]
\centering
\renewcommand\arraystretch{1.2}
\setlength{\abovecaptionskip}{0.cm}
\caption{Parameters of channel dataset}
\begin{tabular}{cccccc}
\hline
Parameters &SIM-P &SIM-B&NOF&NCS &CWR \\
\hline
Environment &Rayleigh & Default & Fjord & Shelf &Reservoir \\
Range &- &500m$\sim$8000m & 750m &540m &1100m, 2100m, 6000m \\
Water depth &-&100m & 10m& 80m & 50m \\
Transmitter deployment &- &Suspended & Bottom& Bottom & Suspended \\
Receiver deployment &- &Suspended & Bottom& Bottom & Suspended \\
Doppler coverage &30Hz&Not calculated& 7.8Hz& 31.4Hz &Not calculated\\
\hline
\end{tabular}\\
\end{table*}
\begin{figure*
\centering
\subfigure[SIM-B]{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{Mch-eps-converted-to.pdf}\label{09}
\end{minipage}
}
\subfigure[NOF]{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{NOFch-eps-converted-to.pdf}\label{075}
\end{minipage}
}
\subfigure[NCS]{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{NCSch-eps-converted-to.pdf}\label{08}
\end{minipage}
}
\subfigure[CWR]{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{WLchs-eps-converted-to.pdf}\label{Acceleration_Factor1}
\end{minipage}
}
\caption{\small A snapshot of CIR dataset.}\label{ch_all}
\end{figure*}
Because $G(\cdot)$ is $\mu''$-strongly convex, we have $||G(\theta) -G(\theta') || \geq \mu''||\theta -\theta'||$.
Hence, the lower bound on $G(v^{(n-1)T_0}_{[n]})-G(v^{(n-1)T_0}_{[n-1]})$ is as follows:
\begin{equation}
\begin{aligned}
&\mathbb{E}[G(v^{(n-1)T_0}_{[n]})-G(v^{(n-1)T_0}_{[n-1]})]\\
&=\mathbb{E}[G(\theta^{(n-1)T_0})-G(v^{(n-1)T_0}_{[n-1]})]\\
&\geq \mathbb{E}[\mu''||\theta^{(n-1)T_0} - v^{(n-1)T_0}_{[n-1]} ||]\\
&\geq \mu'' m(T_0).
\end{aligned}
\end{equation}
Iteratively and incrementally, we can get
\begin{equation}
\begin{aligned}
&\mathbb{E}[G(\theta^*) - G(v^{NT_0}_{[N]})] \\
&\leq \xi^{T_0}\mathbb{E} [G(\theta^*)-G(v^{(N-1)T_0}_{[N-1]})] -\xi^{T_0} \mu'' m(T_0)\\
& \leq \xi^{NT_0}\mathbb{E} [G(\theta^*)-G(v^{0}_{[1]})] -\sum_{j=1}^{N-1}\xi^{jT_0} \mu'' m(T_0).
\end{aligned}
\end{equation}
After $T$ rounds of updates, the convergence gap of the objective function can be expressed as
\begin{equation} \label{gap}
\begin{aligned}
&\mathbb{E}[G(\theta^*) - G(\theta^{T})] \\
&= \mathbb{E}[G(\theta^*) - G(v_{[N]}^{NT_0})]+\mathbb{E}[G(v_{[N]}^{NT_0}) - G(v_{[N+1]}^{NT_0})]\\
&\leq \xi ^T\mathbb{E}[G(\theta^*) - G(\theta^0)] - \sum_{j=1}^{N-1}\xi^{jT_0} \mu'' m(T_0) - \mu'' m(T_0)\\
&\leq \xi^T\mathbb{E}[G(\theta^*) - G(\theta^0)]- \frac{ \mu'' m(T_0)}{1-\xi^{T_0}}.
\end{aligned}
\end{equation}
Setting the upper bound to $\epsilon$ and using $\mathbb{E}[G(\theta^0)-G(\theta^*)] \le n$ \cite{COCOA} yields (\ref{impans}). Thereby we complete the proof.
\section{Dataset}
In addition, massive data are critical for deep learning. In radio frequency (RF) communication systems, the required datasets can be found online, such as the DeepSig dataset\cite{deepsig} and RF channel datasets \cite{qin2019deep}, but open-source underwater acoustic communication datasets for learning algorithms are still lacking. What is more, acoustic channel models and open-source software are foreseen as some of the key elements in the next generation of UWA COMMs research practices\cite{SongEditorial}.
An approximate channel model can be implemented by a tapped delay line. We assume the signal is bandlimited within bandwidth $B$, so it can be described by discrete samples. Hence, formula (\ref{channelintergral}) can be expressed as
\begin{equation}\label{discreteCH}
y(t) = \sum_{k=0}^{K}g_k(t)x(t-kT_s)+n(t),
\end{equation}
where $T_s$ is the sampling interval and $T_s=\frac{1}{B}$. The tap gain can be calculated by
\begin{equation}\label{eq-III.1}
g_k(t)=\int_{-\infty}^{+\infty}h(\tau,t) sinc(\frac{\tau-kT_s}{T_s})d\tau.
\end{equation}
where $\mathrm{sinc}$ is the sampling function. Formula (\ref{discreteCH}) shows that $y(t)$ can be generated by passing $x(t)$ through a tapped delay line or an FIR filter with taps spaced $T_s$ apart.
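This discrete model lends itself directly to simulation: the sketch below passes a signal through a toy FIR tapped delay line and adds white Gaussian noise; the tap profile, the stand-in signal and the SNR are illustrative assumptions.
\begin{verbatim}
import numpy as np

def apply_channel(x, g, snr_db, rng):
    """FIR tapped-delay-line channel: convolve x with the tap gains g,
    then add white Gaussian noise at the requested SNR."""
    y = np.convolve(x, g)[: x.size]
    sigma = np.sqrt(np.mean(y ** 2) / 10 ** (snr_db / 10))
    return y + sigma * rng.standard_normal(y.size)

rng = np.random.default_rng(3)
g = np.exp(-0.5 * np.arange(8)) * rng.standard_normal(8)  # toy decaying taps
x = rng.standard_normal(4_800)     # stand-in for a transmitted chirp frame
y = apply_channel(x, g, snr_db=10, rng=rng)
\end{verbatim}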
Next, we use three methods to get $h(\tau,t)$ for our dataset.
\subsection{Channel Impulse Response}
\subsubsection{Probability Model}Deep learning requires massive numbers of underwater acoustic channel impulse responses (CIRs) to train the network, and there are many ways to simulate them. To generate massive CIRs for learning, a probability model is selected to model and analyze the underwater acoustic channel. In this article, summarizing the experience of our predecessors, we built a simulated underwater acoustic Rayleigh channel dataset for shallow-water horizontal communication. The UWA multipath amplitudes can be assumed to be Rayleigh-distributed\cite{UWAOFDMbook}. In this model, the maximum excess delay is set to 12\,ms, the power-delay profile decays exponentially with an attenuation of 0.66\,dB per tap\cite{ebihara2015doppler}, and the UWA channel Doppler spread is modeled by a bell-shaped function with $a=9$:
\begin{equation}\label{eq-III.1}
S(f) = \frac{{\sqrt a }}{{\pi {f_d}\left[ {1 + a{{\left( {\frac{f}{{{f_d}}}} \right)}^2}} \right]}}, \quad \left| f \right| \le {f_d}.
\end{equation}
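As a minimal sketch, one CIR snapshot from this probability model can be drawn as follows. It captures the Rayleigh amplitudes and the exponential power decay; the temporal evolution of each tap implied by the bell-shaped Doppler spectrum is omitted for brevity, and the 1\,ms tap spacing (giving the 12\,ms maximum excess delay with 12 taps) is an assumption.
\begin{verbatim}
import numpy as np

def rayleigh_cir(n_taps=12, decay_db=0.66, rng=None):
    """One Rayleigh-fading CIR snapshot whose power-delay profile decays
    exponentially by `decay_db` dB per tap."""
    rng = rng or np.random.default_rng()
    power = 10 ** (-decay_db * np.arange(n_taps) / 10)   # per-tap power
    taps = rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)
    return np.sqrt(power / 2) * taps  # complex Gaussian -> Rayleigh envelope

h = rayleigh_cir()
\end{verbatim}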
\subsubsection{Propagation Model}
BELLHOP (SIM-B) is a widely known UWA channel simulation method. Taking it into account, we create part of the data following \cite{Qarabaqi}. The horizontal distance between the receiving and the transmitting end varies randomly between 500\,m and 8000\,m, and the vertical distance varies randomly from 10\,m to 90\,m.
\subsubsection{Measured CIR}
In this subsection, we provide a multi-scene validation dataset containing simulated and measured CIRs from multiple communication environments and ranges.
In addition, CIRs measured in different environments are considered.
The raw CIRs were measured at Norway-Oslofjord (NOF), Norway-Continental Shelf (NCS) \cite{van2017watermark} and China-Wanlu Reservoir (CWR). It is worth noting that, in CWR, CIRs from different distances were collected: both transmitter and receiver terminals were located on two ships for long-distance communication tests, with distances of 1100\,m, 2100\,m, and 6000\,m, respectively.
After this, we consider two specific conditions for data augmentation.
\subsection{Data Augmentation}
\subsubsection{Symbol Time Offset}If synchronization errors or loss of synchronization occur in the communication system, its performance degrades or communication fails altogether. Due to the complexity of underwater acoustic signal transmission described above, reliably detecting the synchronization signal is particularly difficult. The received signal with a symbol time offset can be written as
\begin{equation}\label{eq-III.1}
\hat r'(t) = r(t+\delta),
\end{equation}
where $\delta$ denotes the deviation caused by inaccurate synchronization.
\subsubsection{Doppler Shift}
In the UWA channel, the effect of platform motion on a wideband signal is more accurately modeled as a complete time scaling (expansion or compression) of the signal waveform. We assume that the mean Doppler shift cannot be removed from the sounding data before the correlation.
The signal can be expressed as
\begin{equation}\label{eq-III.1}
r(t) = s((1+\alpha)t),
\end{equation}
where $s(t)$ and $r(t)$ are the source and the Doppler-shifted received signals, respectively. The relative Doppler shift $\alpha$ is defined as the ratio of the relative platform speed to the sound speed, which can be calculated by
\begin{equation}\label{eq-III.1}
\alpha =\frac{\delta f}{f_c} = \frac{v}{c},
\end{equation}
where $c$ means the speed of sound in the water and $v$ denotes the relative speed of the transmitter and receiver.
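Both augmentations are straightforward to implement, as sketched below; the offset range, the relative speed and the stand-in signal are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.signal import resample

def add_sto(r, max_offset, rng):
    """Symbol-time-offset augmentation: circularly shift the frame by a
    random number of samples to mimic imperfect synchronization."""
    return np.roll(r, int(rng.integers(-max_offset, max_offset + 1)))

def add_doppler(s, v, c=1500.0):
    """Wideband Doppler as time scaling: resample s to emulate s((1+a)t)
    with relative shift a = v/c (v in m/s, c = sound speed in water)."""
    a = v / c
    return resample(s, int(round(s.size / (1 + a))))

rng = np.random.default_rng(4)
x = rng.standard_normal(4_800)     # stand-in for a transmitted chirp frame
y_aug = add_doppler(add_sto(x, max_offset=40, rng=rng), v=5.0)
\end{verbatim}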
\section{Simulation Results}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{newsample-eps-converted-to.pdf}\\
\caption{ BER curves of DNN and MF under different downsampling factors.}\label{SAMPLE}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{dop-eps-converted-to.pdf}\\
\caption{ BER curves of DNN and MF ($\lambda$=1) under different relative speeds.}\label{DOPPLER}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{sto-eps-converted-to.pdf}\\
\caption{ BER curves of DNN and MF ($\lambda$=1) under different STO.}\label{STO}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{generalization-eps-converted-to.pdf}\\
\caption{ C-DNN receiver generalization under different conditions.}\label{generalization}
\end{figure}
In this section, we evaluate the effectiveness of the proposed system. The results are divided into two parts: one focuses on the performance of the DL based chirp receiver, and the other on the convergence performance of FML enhanced communications. Specifically, we first introduce the parameters used in our simulations, and then start with single-node performance to illustrate how useful deep learning is for the physical layer, especially for complex UWA COMMs.
\subsection{Parameters}
In our experiments, PyTorch and Matlab are used as the development frameworks. Each chirp frame contains 200 symbols and each symbol duration is 10\,ms. The network input contains the real parts of the samples, and every symbol is predicted independently. The parameters of the receiver DNN scheme are shown in Table I.
In the following experiments, unless otherwise specified, the local adaptation rate $\alpha$ is 0.001 and the update rate $\beta$ is 0.0001. The available channel ratio, the number of access users and the local epochs are configured as $G=0.3$, $N=10$ and $T=T_0$, respectively.
For each buoy node, the local dataset contains 1000 symbols for training.
\renewcommand{\algorithmicrequire}{ \textbf{Input:}}
\renewcommand{\algorithmicensure}{ \textbf{Output:}}
\subsection{Performance of Single-node}
\subsubsection{Under different down-sampling factors}
Fig.~\ref{SAMPLE} shows the BER curves of the DNN and the MF under different down-sampling factors and channel conditions.
From the results, we observe a distinct trend: the smaller the down-sampling factor, the lower the achievable bit error rate.
This is because a smaller down-sampling factor means more sample points, which improves the effective signal-to-noise ratio of the signal used in detection.
Besides, the C-DNN receiver reaches nearly ideal bit error rate performance.
\subsubsection{Under different STO and DOP}
The BER vs.\ SNR performance of the C-DNN and the matched filter under different Doppler shifts and symbol time offsets is presented in Fig.~\ref{DOPPLER} and Fig.~\ref{STO}.
For a deep learning based receiver, BER is not the only key characteristic describing communication system performance: the training loss must also be compared with the validation loss to reflect the generalization capability of the receiver.
In fact, the receiver is prone to overfitting: when the receiver is trained online, the loss of the model on the training set is lower than that on the validation set. Even so, when the receiver is deployed offline, its performance can exceed that of a traditional receiver based on the minimum mean-square error algorithm. If the generalization ability of the model can be further improved, the receiver performance will improve accordingly. Deep learning based receivers thus have great potential.
\subsubsection{Complexity Analysis}
From Table \ref{compareC}, it is obvious that the DL receiver has lower complexity.
The computations of the matched filter all come from the correlation calculation.
The number of additions (ADD) used by the matched filter is $N_1+N_2-1$
and the number of multiplications (MUL) is $N_1(N_1+N_2-1)$, where $N_1=N_2= \frac{Tf_s}{\lambda}$.
Hence the total number of additions used by the matched filter is
$C^{MF}_{ADD}=2N_1-1,$
the total number of multiplications is
$C^{MF}_{MUL}=2N_1^2-N_1,$
and the total number of computations, expressed in terms of these elementary operations, is $C^{MF}_{TOTAL}= C^{MF}_{ADD}+C^{MF}_{MUL}=2N_1^2+N_1-1$, consistent with the values in Table \ref{compareC}.
The total number of additions used by the DNN is
$C^{DNN}_{ADD}=\sum_{l=1}^{L}N_l =\frac{15}{8}N_1+1,$
the total number of multiplications is
$C^{DNN}_{MUL}=N_1^2+\sum_{l=2}^{L}N_l N_{l-1}= \frac{53}{32}N_1^2+\frac{1}{8}N_1,$
and the total number of non-linear activations amounts to
$C^{DNN}_{NAV}=\sum_{l=1}^{L}N_l= \frac{15}{8}N_1+1.$
The total number of computations, expressed in terms of these elementary operations, is
$C^{DNN}_{TOTAL}= C^{DNN}_{ADD}+C^{DNN}_{MUL}+C^{DNN}_{NAV} =\frac{53}{32}N_1^2+\frac{31}{8}N_1+2.$
From the above, it can be concluded that $C^{DNN}_{TOTAL} \le C^{MF}_{TOTAL}$, since $\frac{53}{32}<2$ and $N_1$ is always a large number.
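For concreteness, the following short Python sketch (not part of the original simulation code; the value $N_1=960/\lambda$ is inferred from Table \ref{compareC}) reproduces the operation counts above.
\begin{verbatim}
# Reproduce the operation counts of the complexity table from
# the closed-form expressions above; N1 = T*fs/lambda, and the
# tabulated values correspond to N1 = 960/lambda.
def mf_ops(n1):
    add = 2 * n1 - 1             # C^MF_ADD
    mul = n1 * (2 * n1 - 1)      # C^MF_MUL = 2*N1^2 - N1
    return add, mul, add + mul

def dnn_ops(n1):
    add = 15 * n1 // 8 + 1       # C^DNN_ADD (sum of layer widths)
    mul = 53 * n1 * n1 // 32 + n1 // 8
    nav = add                    # one activation per neuron
    return add, mul, nav, add + mul + nav

for lam in (1, 2, 6):
    print("MF, lambda =", lam, mf_ops(960 // lam))
print("DNN, lambda = 6:", dnn_ops(160))
# MF, lambda = 1 (1919, 1842240, 1844159)
# MF, lambda = 2 (959, 460320, 461279)
# MF, lambda = 6 (319, 51040, 51359)
# DNN, lambda = 6: (301, 42420, 43022)
\end{verbatim}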
\begin{table}[b]
\centering
\renewcommand\arraystretch{1.2}
\renewcommand\tabcolsep{3.0pt}
\setlength{\abovecaptionskip}{0.cm}
\caption{Complexity analysis for DNN and MF.}
\begin{tabular}{ccccc}
\hline
Operations&MF ($\lambda$ =1)& MF ($\lambda$ =2)& MF ($\lambda$ =6) &DNN ($\lambda$ =6)\\
\hline
ADD & 1,919 & 959& 319& 301\\
MUL & 1,842,240&460,320&51,040& 42,420\\
Non-Linear Activations & -& -& -& 301\\
Total & 1,844,159& 461,279& 51,359& 43,022\\
DNN Advantage & 4186.5\%& 972.2\%& 19.4\%& -\\
\hline
\end{tabular}\\\label{compareC}
\end{table}
\subsection{Generalization of C-DNN}
Fig.~\ref{generalization} shows that the C-DNN can achieve similar performance under channels that never appeared in the training dataset.
We use simulated data to train our C-DNN and test its performance on both simulated and measured datasets. We test the C-DNN under different scenarios, where NCS, NOF and CWR have diverse communication environments. It is worth mentioning that a variety of distances are employed in CWR. Hence, we believe the C-DNN receiver can handle multiple scenarios with only a small loss of accuracy. That is to say, the C-DNN is robust and can deal with different channels. Moreover, we also test the C-DNN performance under emergency conditions (EC), in which the data augmentation cannot cover all Doppler shift and symbol timing offset cases. Unfortunately, under the EC condition the C-DNN suffers performance degradation, because the DL model encounters conditions it has never seen before.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{vs-eps-converted-to.pdf}\\
\caption{ Accuracy vs communication rounds performance of federated learning and federated meta learning.}\label{FLvsFML}
\end{figure}
\subsection{Performance of FML}
\subsubsection{With federated learning} From Fig.\ref{FLvsFML}, we find that FL and FML have similar convergence performance on the training dataset. On the test dataset, however, the FL accuracy decreases because of the insufficient generalization of the model. FML, instead, fine-tunes the network with the labeled data in the target dataset. In general, if the source dataset and the target dataset are highly related, an FL algorithm performs well without the need for fine-tuning the DL receivers to the target environment, owing to its generalization. However, most of the time it is hard to make the source and target datasets equally distributed. Hence, FML has wider application scenarios.
\subsubsection{With different local epochs}
The accuracy vs communication rounds performance with different numbers of local epochs is shown in Fig.\ref{localepoch}. From the results we find that as the number of local epochs increases, so does the accuracy of the DL receiver. The case with local epochs $T=5T_0$ converges more than $10$ communication rounds faster than the case with $T=T_0$. However, as the number of local training epochs increases, this advantage diminishes: the cases with $T=5T_0$ and $T=10T_0$ have similar convergence rates.
Formula (\ref{impans}) tells us that $K$ and $m(T)$ are increasing in $T_0$. Hence, the simulation results verify Theorem 2, namely that the convergence gap increases with $T_0$ for a fixed number of communication rounds $T$. This result can guide the balance between communication rounds and local computation in a real system.
\subsubsection{With different data volumes}
Fig.~\ref{differentVolumes} shows the accuracy vs communication rounds under different data volumes on a single node. From the results, we find that as the data volume increases, the convergence of the whole system becomes faster.
It is easy to see that the larger the data volume used for training, the better the system performance. But in a real system we cannot overdo the amount of local data, because of the distributed data storage, especially in the ocean. For this reason, we compare three data volumes to understand the influence of data volume on a single node.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{lineplotime-eps-converted-to.pdf}\\
\caption{ Accuracy vs communication rounds with different local epochs.}\label{localepoch}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{lineG3-eps-converted-to.pdf}\\
\caption{ Accuracy vs communication rounds with different available channel ratios $G$.}\label{avaliableG}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{lineVolume-eps-converted-to.pdf}\\
\caption{ Accuracy vs communication rounds with different data volumes on a single node.}\label{differentVolumes}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{lineusers-eps-converted-to.pdf}\\
\caption{ Accuracy vs communication rounds with different numbers of access users.}\label{differentUsers}
\end{figure}
\subsubsection{With different number of channels}
Fig.~\ref{avaliableG} and Fig.~\ref{differentUsers} indicate the effect of the number of users and the impact of the number of available channels. We can draw the following conclusions. For a fixed number of accessible channels, the accuracy of the DL receiver increases with the number of access users.
Meanwhile, for a fixed number of access users, the higher the access rate, the slower the convergence rate. We can explain this both quantitatively and qualitatively. Quantitatively, as the number of users increases, the convergence rate $T_z$ becomes slower: Formula (\ref{impans}) tells us that $N$ and $G$ affect the convergence rate in an inversely proportional way. Qualitatively, as the number of users increases, the diversity of the training data grows, which slows down the convergence.
\section{Conclusions}\label{conclusions}
Deep learning has great potential in underwater acoustic communication systems, but it is sensitive to the distribution of the training data. Therefore, considering the generalization of DL-based applications and exploiting the acoustic-radio cooperation characteristic of the OoT, we proposed
a Federated Meta-Learning enhanced acoustic-radio cooperative framework for the Ocean of Things, which takes advantage of the data distributed on surface nodes. Through this method, we can achieve transfer learning.
We take UWA chirp communications as an example, which can provide stable UWA COMMs for the Ocean of Things.
In order to overcome the UWA Doppler shift and symbol time offset, we proposed a C-DNN receiver based on data-driven deep learning. Besides, to understand its performance, a comprehensive
convergence analysis framework for FML with random scheduling in wireless networks was developed. This work represents the first
attempt to combine FML and DL in the physical layer.
For future work, we will consider taking the
current framework to a sea trial. As another
interesting direction, the proposed design, currently intended for OoT devices, can be extended to the IoUT
scenario by using unmanned underwater vehicles to utilize the distributed data, further accelerating the learning process.
\section{Introduction}
The direct detection of WIMPs in the lab would not only directly
confirm the existence of dark matter but would also allow us to probe
the WIMP properties, in particular its mass. This would shed light on
its nature and probe extensions of the standard model of particle
physics. Furthermore definitive detection of the WIMP may well require
consistent signals (i.e. with the same inferred WIMP properties) from
direct detection, indirect detection and collider experiments. Here we give a
brief overview of recent work~\cite{green} on determining the WIMP
mass from direct detection experiments (see also
Refs.~\cite{ls,massall,ds}).
The differential event rate (number of events per unit energy, time
and detector mass) has a roughly exponential energy
dependence:~\cite{ls}
\begin{equation}
\frac{{\rm d}R}{{\rm d} E} \approx
c_{1} F^2(E) \left( \frac{{\rm d}R}{{\rm d} E} \right)_{0}
\exp{\left(-\frac{E}{c_{2} E_{\rm R}}\right)} \,,
\end{equation}
where $c_{1} $ and $c_{2}$ are fitting parameters of order unity (which
depend on the target mass number and energy threshold), $({\rm d} R/{\rm d}
E)_{0}$ is the event rate in the $E \rightarrow 0 \, {\rm keV}$ limit
and $F(E)$ is the form factor. The characteristic energy
scale of the exponential, $E_{\rm R}$, depends on the WIMP mass,
$m_{\chi}$, and is given by
\begin{equation}
E_{\rm R} = \frac{ 2 m_{A} m_{\chi}^2 v_{\rm c}^2}{(m_{\chi} + m_{A})^2} \,,
\end{equation}
where $m_{A}$ is the target nuclei mass and $v_{\rm c}$ is the local
circular speed. For light WIMPs ($m_{\chi} \ll m_{A}$) $E_{\rm R}
\propto m_{\chi}^2$, while for heavy WIMPs ($m_{\chi} \gg m_{A}$)
$E_{\rm R} \sim {\rm const}$. In other words, for light WIMPs the energy
spectrum is strongly dependent on the WIMP mass while for heavy WIMPs
the dependence on the WIMP mass is far weaker. Consequently it should be
easier to measure the mass of light (compared with the target nuclei) WIMPs
than heavy WIMPs.
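As a concrete illustration (inserting, for this purpose only, the conventional values $v_{\rm c} \approx 220 \, {\rm km \, s}^{-1}$ and $m_{A} \approx 68 \, {\rm GeV}$ for a Ge target), a WIMP with $m_{\chi} = 50 \, {\rm GeV}$ gives $E_{\rm R} \approx 13 \, {\rm keV}$, so that the spectrum falls off just above typical thresholds and is steeply dependent on $m_{\chi}$, while in the heavy WIMP limit $E_{\rm R} \rightarrow 2 m_{A} v_{\rm c}^2 \approx 73 \, {\rm keV}$, independent of $m_{\chi}$.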
\section{Monte Carlo simulations}
We have used Monte Carlo simulations to examine how well a SuperCDMS
like detector~\cite{SuperCDMS} could determine the WIMP mass from the
energies of observed WIMP nuclear recoil events. Our benchmark
detector is composed of Ge, has a nuclear recoil energy threshold of
$E_{\rm \, th} = 10 \, {\rm keV}$ and has no upper limit on the recoil
energy. We assume that the detection efficiency is independent of
energy, the energy resolution is perfect, the background is zero, the
form factor has the Helm form and fix the WIMP-proton cross-section to
be $\sigma_{\rm p}= 10^{-8} \, {\rm pb}$, a factor of a few below the
current exclusion limits~\cite{exclude}. We assume the local WIMP
speed distribution is Maxwellian and the local density is $0.3 \, {\rm
GeV \, cm}^{-3}$ and consider (efficiency weighted) exposures of
${\cal E}=3 \times 10^{3}$, $3 \times 10^{4}$ and $3 \times 10^{5} \,
{\rm kg \, day}$ which correspond, roughly, to a detector with mass
equal to that of the 3 proposed phases of SuperCDMS taking data for a
year with a $\sim 50\%$ detection efficiency. These are, generally,
optimistic assumptions and will therefore give `best case' results. We
discuss the effects of dropping or varying some of these assumptions
in Sec.~\ref{results}, see also Ref.~\cite{green}.
For each WIMP mass, $m_{\chi}^{\rm in}$, and detector configuration we
calculate the probability distribution of the maximum likelihood
estimator of the WIMP mass by simulating $10^{4}$ experiments. We
first calculate the expected number of events, $\lambda$, from the
input energy spectrum. The actual number of events for a given
experiment, $N_{\rm expt}$, is drawn from a Poisson distribution with
mean $\lambda$. We Monte Carlo generate $N_{\rm expt}$ events from the
input energy spectrum, from which the maximum likelihood mass and
cross-section for that experiment are calculated. Finally we find the
(two-sided) $68\%$ and $95 \%$ confidence limits on the WIMP mass from
the maximum likelihood masses.
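The following Python fragment is a minimal sketch of one such simulated experiment. For simplicity it sets the form factor to unity and fits only the characteristic energy $E_{\rm R}$, whereas the analysis described above uses the Helm form factor and fits the WIMP mass and cross-section jointly.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

E_TH = 10.0   # nuclear recoil energy threshold in keV

def sample_events(lam, E_R, rng):
    # Poisson number of events; energies drawn by inverse-CDF
    # sampling from exp(-E/E_R) truncated to E > E_TH
    n = rng.poisson(lam)
    return E_TH - E_R * np.log(1.0 - rng.random(n))

def neg_log_like(E_R, events):
    # negative log-likelihood of the truncated exponential
    return np.sum(events - E_TH) / E_R + len(events) * np.log(E_R)

rng = np.random.default_rng(0)
events = sample_events(lam=100, E_R=25.0, rng=rng)
fit = minimize_scalar(neg_log_like, bounds=(1.0, 200.0),
                      args=(events,), method="bounded")
print("maximum likelihood E_R (keV):", fit.x)
\end{verbatim}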
\section{Results and discussion}
\label{results}
\begin{figure}
\includegraphics[width=.8\textwidth]{Greenfig1.eps}
\caption{The fractional deviation of the WIMP mass limits from the
input mass, $(m_{\chi}^{\rm lim}-m_{\chi}^{\rm in})/m_{\chi}^{\rm
in}$, for exposures ${\cal
E}= 3 \times 10^{3}, 3 \times 10^{4}$ and $3 \times 10^{5} \,
{\rm kg \, day}$ and input cross-section $\sigma_{\rm p} = 10^{-8}
\, {\rm pb}$ for the benchmark SuperCDMS like detector. The solid
(dotted) lines are the 95\% (68\%) confidence limits. }
\label{fig1}
\end{figure}
The accuracy with which the WIMP mass could be measured by the
benchmark SuperCDMS~\cite{SuperCDMS} like Ge detector described above
is shown in Fig.~\ref{fig1}. With exposures of ${\cal E}= 3 \times
10^{4}$ and $3 \times 10^{5} \, {\rm kg \, day}$ it would be possible
to measure the mass of a light, $m_{\chi} \sim {\cal O}(50 \, {\rm
GeV})$, WIMP with an accuracy of roughly $25\%$ and $10\%$
respectively. For heavy WIMPs ($m_{\chi} \gg 100 \, {\rm GeV}$) even
with a large exposure it will only be possible to place a lower limit
on the mass. For very light WIMPs, $m_{\chi} < {\cal O}(20 \, {\rm
GeV})$, the number of events above the detector energy threshold
would be too small to allow the mass to be measured accurately.
The number of events detected is directly proportional to both the
exposure and the cross-section, therefore these quantities have the
greatest bearing on the accuracy of the WIMP mass determination.
The energy threshold, $E_{\rm th}$, and the maximum energy, $E_{\rm
max}$, above which recoils are not detected/analysed also affect the
accuracy with which the WIMP mass can be determined. Increasing
$E_{\rm th}$ (or decreasing $E_{\rm max}$) not only reduces the number
of events detected, but also reduces the range of recoil energies and
the accuracy with which the characteristic energy of the energy
spectrum, $E_{\rm R}$, and hence the WIMP mass, can be measured. For
light WIMPs the small $E_{\rm R}$ means that the expected number of
events decreases rapidly as the energy threshold is increased, while
for heavy WIMPs the large $E_{\rm R}$, and flatter energy spectrum,
means that the smaller range of recoil energies reduces the accuracy
with which $E_{\rm R}$ can be measured. Reducing the maximum energy
only has a significant effect for heavy WIMPs.
The WIMP and target mass dependence of $E_{\rm R}$ suggests that
heavy targets will be able to measure the mass of a heavy WIMP more
accurately, however the rapid decrease of the nuclear form factor with
increasing momentum transfer which occurs for heavy nuclei means that
this is in fact not the case (see also Ref.~\cite{ds}).
If the WIMP distribution on the ultra-local scales probed by direct
detection experiments is smooth, then the $\pm 20 \, {\rm km \,
s}^{-1}$ uncertainty in the local circular speed~\cite{klb} leads to
a $\sim 10 \%$ systematic error in the determination of $m_{\chi}$.
Changes in the detailed shape of the local velocity distribution lead
to relatively small changes in the shape of the differential event
rate~\cite{drdens}, and hence a relatively small, ${\cal O} (5 \%)$,
systematic uncertainty in the WIMP mass. If the ultra-local WIMP
distribution consists of a finite number of streams, then the energy
spectrum will consist of a number of steps. The positions of the
steps will depend on the (unknown) stream velocities, as well as the
target nuclei and WIMP masses. With multiple targets it would in
principle be possible to constrain the WIMP mass without making any
assumptions about the WIMP velocity distribution~\cite{ds}.
Future experiments aim to have negligible backgrounds, however if the
background rate is not negligible compared with the WIMP event rate it
will be difficult to disentangle a WIMP signal (and the WIMP mass)
from the background if the background spectrum has a similar shape to
the WIMP spectrum (i.e. exponential background, or flat background
and a heavy WIMP). The uncertainties from backgrounds could be
mitigated by using multiple targets and/or using multiple scatter events
to measure/constrain the background spectrum.
\section{Introduction}
Let $\Bbbk$ be an algebraically closed field. If $C_1$ and $C_2$ are two cubics in $\mathbb{P}^2_\Bbbk$ which meet in $9$ points, and $X$ is a cubic passing through $8$ points of $C_1 \cap C_2$, then $X$ contains the ninth point of $C_1 \cap C_2$ as well. This well-known statement extends Pappus's and Pascal's theorems, and it is one version of a series of results which are referred to as Cayley-Bacharach theorems. We refer the interested reader to the seminal work of Eisenbud, Green and Harris \cite{EGH_CB}, and to recent work of Kreuzer, Long and Robbiano \cite{KLR} for a detailed and fascinating history on the subject.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{C2.jpg}
\caption{A sketch of the case in which $C_1$ and $C_2$ are a union of three lines.}
\end{figure}
More generally, if $C_1,C_2 \subseteq \mathbb{P}^2$ are two curves of degrees $d_1 \leq d_2$ meeting transversely in $d_1d_2$ points, the Cayley-Bacharach theorem states that, if a curve $X$ of degree $D=d_1+d_2-3$ passes through all but possibly one point of $C_1 \cap C_2$, then it must contain all $d_1d_2$ points of $C_1 \cap C_2$. In the literature, there have been several efforts to extend this theorem to a more general setup \cite{GH, Tan, Li, EL,KLR}. However, in most cases, the obtained results still require the hypersurface to pass through at least all but one point. In \cite{EGH_CB}, Eisenbud, Green and Harris suggest a different direction in which this theorem can be pushed. Namely, one can require $X$ to contain all but a given number of points of $C_1 \cap C_2$, balancing off this additional freedom by putting more restrictions on the degree of $X$. This leads to a new series of conjectured inequalities on multiplicities of almost complete intersections (see \cite[Conjecture CB12]{EGH_CB}). More specifically, \cite[Conjecture CB12]{EGH_CB} can be restated by saying that the multiplicity of an almost complete intersection is bounded above by the multiplicity of a special monomial almost complete intersection of the same degrees, which in Section \ref{Section Gorenstein} we denote $\LL(\d;D)$. By its nature, this upper bound is sharp, if true. In the literature, this improved version has been called the General Cayley-Bacharach conjecture (see \cite{GK}). However, in this article we will refer to the above as Cayley-Bacharach, since it is the only one we will focus on.
In $\mathbb{P}^2$, \cite[Conjecture CB12]{EGH_CB} and hence the Cayley-Bacharach theorem follow from a stronger conjecture of Eisenbud, Green and Harris (henceforth EGH), see Conjecture \ref{Conjecture EGH}, which is known to be true in this case by \cite{Richert,Cooper}.
In $\mathbb{P}^3$, \cite[Conjecture CB12]{EGH_CB} and the Cayley-Bacharach theorem have been proved by Geramita and Kreuzer \cite[Corollary 4.4]{GK}.
In Section \ref{Section Gorenstein}, we refine the Cayley-Bacharach inequality on multiplicities of almost complete intersections of height three, and we obtain the following upper bound on their Hilbert functions.
\begin{theoremx}[see Theorem \ref{Theorem EGH ACI}] \label{THMX B} Let $S = \Bbbk[x_1,\ldots,x_n]$, and $\mathfrak{f}=(f_1,f_2,f_3)$ be a complete intersection of degrees $\d=(d_1,d_2,d_3)$. Let $G$ be an element of degree $D \leq d_1+d_2+d_3-3$ such that $G \notin \mathfrak{f}$, and $\a=\mathfrak{f}+(G)$ has height three. Then $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\LL(\d;D))$, where $\LL(\d;D) = (x_1^{d_1},x_2^{d_2},x_3^{d_3},U_D)$ and $U_D$ is the largest monomial with respect to the lexicographic order which has degree $D$ and does not belong to $(x_1^{d_1},x_2^{d_2},x_3^{d_3})$.
\end{theoremx}
Theorem \ref{THMX B} in particular gives that the multiplicity (denoted $\operatorname{e}(-)$) of an almost complete intersection of degrees $(d_1,d_2,d_3;D)$ is at most $\operatorname{e}(S/\LL(\d;D))$, as conjectured in \cite[Conjecture CB12]{EGH_CB}. We would like to point out that the statement on Hilbert functions is stronger than the corresponding one on multiplicities. In fact, the standard techniques which usually allow one to reduce to the Artinian case might fail for this purpose (see Example \ref{Example reduction}). We also note that the stronger statement on Hilbert functions, rather than just on multiplicities, is needed in Section \ref{Section ACI} to improve the Cayley-Bacharach theorem in $\mathbb{P}^n$, Theorem \ref{THMX Pn}, in a special case (see Theorem \ref{Theorem delta2}).
\begin{comment} Given integers $1 \leq d_1 \leq d_2 \leq d_3$, and $1 \leq D \leq d_1+d_2+d_3-3$, in Section \ref{Section Gorenstein} we construct a new sequence $\c = (c_1,c_2,c_3)$. For example, if $d_1=d_2=d_3=D=3$, then $\c=(1,2,3)$. The following is the analogue of Theorem \ref{THMX P2} for $\mathbb{P}^3$. The bound we obtain is the one predicted by the EGH conjecture, and it is therefore sharp.
\begin{theoremx}[Cayley-Bacharach in $\mathbb{P}^3$] \label{THMX P3} Let $\Gamma \subseteq \mathbb{P}^3$ be a complete intersection of three surfaces of degrees $(d_1,d_2,d_3)$. If $X$ is a surface of degree $D \leq \sigma=d_1+d_2+d_3-3$ which contains at least $d_1d_2d_3-c_1c_2c_3+1$ points of $\Gamma$, then $X$ contains $\Gamma$.
\end{theoremx}
For example, if $\Gamma$ is a complete intersection of three cubic surfaces in $\mathbb{P}^3$, and $X$ is another cubic surface containing at least $22$ points of $\Gamma$, then $X$ must in fact contain all $27$ points of $\Gamma$.
\end{comment}
Using Theorem \ref{THMX B}, we immediately recover the Cayley-Bacharach theorem for points in $\mathbb{P}^3$.
\begin{corollaryx}[Cayley-Bacharach in $\mathbb{P}^3$] \label{Corollary P3} Let $\Gamma \subseteq \mathbb{P}^3$ be a complete intersection of three surfaces of degrees $\d=(d_1,d_2,d_3)$. If $X$ is a surface of degree $D \leq \sigma=d_1+d_2+d_3-3$ which contains at least $d_1d_2d_3-\operatorname{e}(S/\LL(\d;D))+1$ points of $\Gamma$, then $X$ contains $\Gamma$.
\end{corollaryx}
We refer to Section \ref{Section Gorenstein} for an explicit way to compute $\operatorname{e}(S/\LL(\d;D))$ in terms of a new sequence $\c=(c_1,c_2,c_3)$, constructed from $\d$ and $D$. To give an example, if a surface of degree $D$ in $\mathbb{P}^3$ contains at least $D^3-D^2+D+1$ points of a complete intersection of three surfaces of degree $D$, then it must contain all $D^3$ of them.
An analogue of Theorem \ref{THMX B} is not known, in general, for almost complete intersections of codimension higher than three. However, a result of Francisco \cite{F} gives an upper bound on the Hilbert function of any almost complete intersection in one specific degree.
In Section \ref{Section ACI}, we exhibit upper bounds for the multiplicity of almost complete intersections of any height by combining a repeated use of Francisco's theorem with several other techniques (see Theorems \ref{THM multiplicity} and \ref{Theorem symmetric}). While our estimates are not in general as sharp as the ones predicted by \cite[Conjecture CB12]{EGH_CB}, they significantly improve the best known upper bounds, due to Engheta \cite{E} and later extended by Huneke, Mantero, McCullough and Seceleanu \cite{HMMCS}, in all those circumstances in which the latter are not already sharp (see Remark \ref{Remark HMMCS}).
Using these estimates, we obtain a Cayley-Bacharach-type theorem in $\mathbb{P}^n$. We refer the reader to Section \ref{Section ACI} and Theorem \ref{THM Pn} for the definition of the integer $\delta(\d;D)$ which appears in the statement of the theorem.
\begin{theoremx}[Cayley-Bacharach in $\mathbb{P}^n$] \label{THMX Pn} Let $\Gamma \subseteq \mathbb{P}^n$ be a complete intersection of degrees $\d = (d_1,\ldots,d_n)$. If $X$ is a hypersurface of degree $D < \sigma = \sum_{i=1}^n(d_i-1)$ which contains at least $\delta(\d;D) = \prod_{i=1}^n d_i - \sum_{m=D+1}^{\tau_-} \varphi_m - \sum_{m=D+1}^{\tau^+} \varphi_m - 1$ points of $\Gamma$, then $X$ contains $\Gamma$.
\end{theoremx}
As an explicit consequence of Theorem \ref{THMX Pn}, if a cubic hypersurface in $\mathbb{P}^{2n}$ contains at least $3^{2n} - (6n^2-8n+3)$ points of a complete intersection of $2n$ cubics, then it contains all of them.
As another application, if a hypersurface of degree $D \leq n$ in $\mathbb{P}^n$ contains at least $2^n-\lfloor \frac{3(n-D)^2+1}{4} \rfloor$ points of a complete intersection of $n$ quadrics, then it contains all of them.
Finally, combining Theorem \ref{THMX B} and our new bounds on the multiplicity of almost complete intersection of any height, we improve Theorem \ref{THMX Pn} in case the degree $D$ of the hypersurface $X$ is less than $d_4$, see Theorem \ref{Theorem delta2}. As already pointed out, for this result it is crucial that Theorem \ref{THMX B} gives an upper bound on the Hilbert function of an almost complete intersection of codimension three, rather than on its multiplicity alone. In this scenario, when Theorem \ref{Theorem delta2} can be applied it drastically improves Theorem \ref{THMX Pn}, and it often allows to obtain estimates which are rather close to the optimal ones of \cite[Conjecture CB12]{EGH_CB} (see Example \ref{Example (4,4,4,10;4)}).
\subsection*{Acknowledgments} We thank Martin Kreuzer and Lorenzo Robbiano for pointing out some inaccuracies on a previous version of the paper.
\section{Almost complete intersections of codimension three} \label{Section Gorenstein}
The goal of this section is to prove an upper bound on the Hilbert function of almost complete intersections of codimension three.
We start by setting up some notation, which will be used throughout the article. In what follows, $S= \Bbbk[x_1,\ldots,x_n]$ denotes a polynomial ring over any field $\Bbbk$. We consider the standard grading on $S$, that is, $\deg(x_i)=1$ for all $i$. We denote by $\mathfrak{m}$ the irrelevant maximal ideal $(x_1,\ldots,x_n)S$ of $S$. We adopt the convention that a sum $\sum_i^j (-)$ equals zero whenever $j<i$. Given a finitely generated graded $S$-module $M$ we write $\operatorname{HF}(M)$ for the Hilbert function of $M$, that is, the numerical function $j \in \mathbb{Z} \mapsto \operatorname{HF}(M;j) = \dim_\Bbbk(M_j)$, and $\operatorname{e}(M)$ for its multiplicity. Given two graded $S$-modules $M$ and $N$, we write $\operatorname{HF}(M) \leq \operatorname{HF}(N)$ to mean $\operatorname{HF}(M;j) \leq \operatorname{HF}(N;j)$ for all $j \in \mathbb{Z}$.
Let $\d = (d_1,\ldots,d_h) \in \mathbb{N}^{h}$, with $1 \leq d_1 \leq \cdots \leq d_h$. We denote by $(\mathsf{x}^\d)$ the ideal $(x_1^{d_1},\ldots,x_h^{d_h})$ of $S$. An ideal $\LL \subseteq S$ is called a $\d$-LPP ideal if we can write $\LL=(\mathsf{x}^\d) + L$, where $L$ is a lexicographic ideal (see \cite[Definitions 4 and 5]{CM}). We now state the current version of the Eisenbud-Green-Harris conjecture \cite{EGH} (see \cite{CM}).
\begin{conjecture}[$\EGH{\d}{n}$] \label{Conjecture EGH}
Let $I$ be a homogeneous ideal of $S=\Bbbk[x_1,\ldots,x_n]$, containing a regular sequence of degrees $\d = (d_1,\ldots,d_h)$. There exists a $\d$-LPP ideal containing $(\mathsf{x}^\d)$ which has the same Hilbert function as $I$.
\end{conjecture}
It is easy to show that $I$ satisfies $\EGH{\d}{n}$ if and only if the following holds: for any $j \in \mathbb{Z}$, if $L$ is a lexicographic ideal generated in degree $j$ such that the $\d$-LPP ideal $\LL(j)=(\mathsf{x}^{\d}) + L$ satisfies $\operatorname{HF}(S/I;j) = \operatorname{HF}(S/\LL(j);j)$, then $\operatorname{HF}(S/I;j+1) \leq \operatorname{HF}(S/\LL(j);j+1)$. Because of standard properties of lexsegment ideals, the latter is also equivalent to $\LL(j)_{\geq j+1} \subseteq \LL(j+1)$, where $\LL(j)_{\geq j+1}$ denotes the ideal generated by the elements of $\LL(j)$ of degree at least $j+1$.
Conjecture $\EGH{\d}{n}$ is known, among other cases, if $\d=(d_1,d_2)$ \cite{Richert, Cooper}, if $I$ contains a monomial regular sequence of degrees $\d$ \cite{CL,MP,CK1}, if the degrees of the forms in the regular sequence grow sufficiently fast \cite{CM}, if $I=Q_1+Q_2$, where $Q_1$ is generated by a regular sequence of quadrics and $Q_2$ is generated by general quadratic forms \cite{HP,G}, if the regular sequence factors as a product of linear forms \cite{A}, and if $d_1 = \ldots = d_h=2$ and $h \leq 5$ \cite{GuHo}. In general, however, the conjecture is wide-open.
One more case in which the conjecture is known is for the class of minimally licci ideals, defined by Chong \cite{Chong}. Chong proves that, if $\mathfrak{g} \subseteq S$ is an ideal of height three which contains a regular sequence of degrees $\d$ among its minimal generators, and such that $S/\mathfrak{g}$ is Gorenstein, then $\mathfrak{g}$ satisfies $\EGH{\d}{n}$. The condition that the regular sequence is part of a minimal generating set for $\mathfrak{g}$ can actually be removed, as the following lemma shows.
\begin{lemma} \label{Lemma Gorenstein minimal} Let $\mathfrak{g} \subseteq \Bbbk[x_1,\ldots,x_n]$ be an ideal of height three, containing a regular sequence of degrees $\d = (d_1,d_2,d_3)$. If $S/\mathfrak{g}$ is Gorenstein, then $\mathfrak{g}$ satisfies $\EGH{\d}{n}$.
\end{lemma}
\begin{proof}
We may harmlessly assume that $\Bbbk$ is infinite. Since $\mathfrak{g}$ contains a regular sequence of degrees $\d$, we can find a regular sequence $f_1',f_2',f_3'$ of degrees $\d'$, with $d_i' \leq d_i$ for $i=1,2,3$, among the minimal generators of $\mathfrak{g}$. By \cite[Corollary 11]{Chong}, there exists a $\d'$-LPP-ideal $\LL$ which has the same Hilbert function as $\mathfrak{g}$. Since $\LL$ contains $(\mathsf{x}^{\d'})$ which, in turn, contains $(\mathsf{x}^\d)$, by \cite[Theorem 1.2]{MP} we can find a $\d$-LPP ideal with the same Hilbert function as $\LL$, and this concludes the proof.
\end{proof}
We now turn our attention to almost complete intersections.
\begin{definition} \label{Defn ACI}
Let $\a$ be a homogeneous ideal of $S$. We say that $\a$ is an almost complete intersection of degrees $(\d;D) = (d_1,\ldots,d_h;D)$ if $\Ht(\a) = h$, and we can write $\a=\mathfrak{f}+(G)$, where the ideal $\mathfrak{f}=(f_1,\ldots,f_h)$ is generated by a regular sequence of degrees $d_1 \leq \cdots \leq d_h$, and $G$ is an element of degree $D$ which does not belong to $\mathfrak{f}$.
\end{definition}
Observe that we do not require that $\a$ is minimally generated by $h+1$ elements. For example, according to our definition, the ideal $\a=(x_1^2,x_2^3)+(x_2^2)$ is an almost complete intersection of degrees $(2,3;2)$, but also a complete intersection of degrees $(2,2)$. What is important to observe, though, is that an almost complete intersection of degrees $(\d;D)$ cannot be generated by a regular sequence of degrees $\d$.
\begin{notation}
Given integers $(\d;D) = (d_1,\ldots,d_h;D)$, with $D \leq \sum_{i=1}^h(d_i-1)$, we let $\LL(\d;D) = (\mathsf{x}^\d) + (U_D)$ be the $\d$-LPP ideal of $S = \Bbbk[x_1,\ldots,x_n]$ which is an almost complete intersection of degrees $(\d;D)$. In other words, $U_D$ is the largest monomial with respect to the lexicographic order which has degree $D$, and does not belong to $(\mathsf{x}^\d)$.
\end{notation}
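For instance, if $\d=(2,2,3)$ and $D=3$, then $U_3 = x_1x_2x_3$, since the degree three monomials which are larger in the lexicographic order, namely $x_1^3$, $x_1^2x_2$, $x_1^2x_3$ and $x_1x_2^2$, all belong to $(x_1^2,x_2^2,x_3^3)$; thus $\LL(\d;3) = (x_1^2,x_2^2,x_3^3,x_1x_2x_3)$.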
In order to apply Lemma \ref{Lemma Gorenstein minimal} to obtain an upper bound on the multiplicity of almost complete intersections, we will use partial initial ideals with respect to the weight order $\omega=(1,1,\ldots,1,0)$. For unexplained notation and terminology, we refer to \cite{CK} and \cite{CDS}, where such a weight order is denoted by $\rev{1}$.
For convenience of the reader, we recall the main features of such an object. Let $I$ be a homogeneous ideal in $S = \Bbbk[x_1,\ldots,x_n]$, and assume that $\Bbbk$ is infinite. After performing a sufficiently general change of coordinates, there is a vector space decomposition ${\rm in}_\omega(I) = \bigoplus_{j \geq 0} I_{[j]}x_n^j$, where each $I_{[j]}$ is an ideal in $\ov{S} = \Bbbk[x_1,\ldots,x_{n-1}]$. This decomposition is analogous to the one in \cite[Section 6]{GreenGin}, where Green constructs partial elimination ideals for the lexicographic order. Observe that $I_{[0]}$ is the ideal defining the hyperplane section $S/(I+(x_n))$ viewed inside $\ov{S} \cong S/(x_n)$. In characteristic zero, the ideals $I_{[j]}$ automatically satisfy $\ov{\mathfrak{m}} I_{[j+1]} \subseteq I_{[j]}$ for all $j \geq 0$, where $\ov{\mathfrak{m}} = (x_1,\ldots,x_{n-1})\ov{S}$ (see \cite[Theorem 3.2]{CS-JCA}). We will refer to this phenomenon as stability of partial general initial ideals. We may achieve this also in characteristic $p>0$, without altering the relevant features of ${\rm in}_{\omega}(I)$, by recursively applying general distractions and partial initial ideals with respect to $\omega$. For a description of this process, see the proof of \cite[Theorem 4.1]{CK}, or \cite[Proposition 1.4]{CS-MA}. We point out that, while this process may change the ideals $I_{[j]}$, it can only enlarge $I_{[0]}$.
We record these facts in a lemma, for future use.
\begin{lemma} \label{Lemma decomposition} Let $I$ be a homogeneous ideal in $S=\Bbbk[x_1,\ldots,x_n]$, where $\Bbbk$ is an infinite field. With the notation introduced above, after performing a sufficiently general change of coordinates, there exist ideals $I_{[j]} \subseteq \ov{S}$ and an ideal $\widetilde{I} = \bigoplus_{j\geq 0} I_{[j]} x_n^j$ of $S$ such that
\begin{itemize}
\item $\ov{\mathfrak{m}} I_{[j+1]} \subseteq I_{[j]}$ for all $j \geq 0$.
\item $I+(x_n) \subseteq \widetilde{I}+(x_n)$, with equality if $\operatorname{char}(\Bbbk)=0$.
\item $\operatorname{HF}(I) = \operatorname{HF}(\widetilde{I})$.
\end{itemize}
\end{lemma}
Let $\a=\mathfrak{f}+(G)$ be an almost complete intersection of degrees $(\d;D)$, with $D \leq \sum_{i=1}^h(d_i-1)$. In order to estimate the Hilbert function and the multiplicity of $S/\a$, it would be desirable to reduce to the Artinian case, without losing the relevant features of $\a$. In particular, if $y \in S$ is a linear form which is regular modulo $\mathfrak{f}$, then one would like the image of $G$ to be non-zero in $S/(\mathfrak{f}+(y))$, at least for a general choice of $y$. While this is true if $\Bbbk$ has characteristic zero as a consequence of the proof of the forthcoming Theorem \ref{Theorem Artinian}, it may be false in prime characteristic.
\begin{example} \label{Example reduction} Let $S=\Bbbk[x_1,x_2,x_3]$, with $\operatorname{char}(\Bbbk)=p>0$, and let $\a=(x_1^{p^2},x_2^{p^2}) + (x_1x_3^{p^2})$, which is an almost complete intersection of degrees $(p^2,p^2;p^2+1)$. A linear regular element for $S/(x_1^{p^2},x_2^{p^2})$ is necessarily of the form $y = \lambda_1x_1+\lambda_2x_2+\lambda_3x_3$, with $\lambda_i \in \Bbbk$, and $\lambda_3\ne 0$. It follows that $G=x_1x_3^{p^2}$ is zero in $S/(x_1^{p^2},x_2^{p^2},y)$ for any choice of $y$ as above.
\end{example}
The next theorem allows us to tackle the issue illustrated by the previous example. Even if the image of $G$ can be zero in $S/(\mathfrak{f}+(y))$ for any general linear form $y$, using the techniques described above we can still reduce to the Artinian case in order to estimate the Hilbert function of an almost complete intersection, even in characteristic $p>0$.
\begin{theorem} \label{Theorem Artinian}
Let $S=\Bbbk[x_1,\ldots,x_n]$, where $\Bbbk$ is an infinite field, and $\a = \mathfrak{f}+(G)$ be an almost complete intersection of degrees $(\d;D)=(d_1,\ldots,d_h; D)$. If $D \leq \sigma=\sum_{i=1}^h (d_i-1)$, then there exists an Artinian almost complete intersection
$\ov\a \subseteq \ov{S}=\Bbbk[x_1,\ldots,x_h]$ of degrees $(\d;D)$ such that $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\ov{\a}S)$. In particular, $\operatorname{e}(S/\a) \leq \operatorname{e}(\ov{S}/\ov{\a})$.
\end{theorem}
\begin{proof}
It suffices to show the inequality on Hilbert functions and, to prove it, we proceed by induction on $\dim(S/\a) \geq 0$. The base case is trivial, so assume $\dim(S/\a)>0$. After a general change of coordinates we may find a decomposition $\widetilde{\a} = \bigoplus_{j \geq 0} \a_{[j]}x_n^j$ for $\a$ as in Lemma \ref{Lemma decomposition}, and we may further assume that $x_n$ is regular for $\mathfrak{f}$.
Let $S'=\Bbbk[x_1,\ldots,x_{n-1}]$, and $\mathfrak{m}' = (x_1,\ldots,x_{n-1})S'$. Since $x_n$ is regular for $\mathfrak{f}$, the elements ${\rm in}_\omega(f_1),\ldots,{\rm in}_\omega(f_h)$ form a regular sequence of degree $\d$, sitting necessarily inside $\a_{[0]}$. Let $\mathfrak{f}'$ be the ideal they generate inside $S'$.
Define $j= \inf\{t \geq 0 \mid \operatorname{HF}(\a_{[t]}x_n^t;D) \ne \operatorname{HF}(\mathfrak{f}'x_n^t;D)\}$. Observe that $j$ is finite, since otherwise the condition $\operatorname{HF}(\a_{[t]}x_n^t;D) = \operatorname{HF}(\mathfrak{f}'x_n^t;D)$ for all $t\geq 0$ would imply that $\operatorname{HF}(\a;D) = \operatorname{HF}(\widetilde{\a};D) = \operatorname{HF}( \bigoplus_{t \geq 0} \mathfrak{f}' x_n^t ;D) = \operatorname{HF}(\mathfrak{f};D)$, contradicting our assumption that $G \in \a \smallsetminus \mathfrak{f}$.
We claim that $j=0$. If not, let $H x_n^j \in \a_{[j]}x_n^j$ be an element of degree $D$, so that $H \in \a_{[j]} \smallsetminus \mathfrak{f}'$ is an element of degree $D-j<D$. Observe that $H \in \mathfrak{f}':\mathfrak{m}'$ by stability, and because $\a_{[j-1]}$ coincides with $\mathfrak{f}'$ up to degree $D-j+1$. It follows that $H$ represents a non-zero element of $\operatorname{soc}(S'/\mathfrak{f}')$. If $\dim(S/\a)=1$, then we reach a contradiction since $\deg(H) < \sum_{i=1}^h (d_i-1)$, and the latter is the degree in which the socle of $S'/\mathfrak{f}'$ is concentrated. If $\dim(S/\a)>1$, then $\dim(S'/\mathfrak{f}') = \operatorname{depth}(S'/
\mathfrak{f}')>0$, therefore $\operatorname{soc}(S'/\mathfrak{f}')=0$. A contradiction again.
Therefore $j=0$, and $\a' = \mathfrak{f}'+(H)S'$ is an almost complete intersection of degrees $(\d;D)$, with $\dim(S'/\a') = \dim(S/\a)-1$. By induction, there exists an Artinian almost complete intersection $\ov{\a} \subseteq \ov{S}$ such that $\operatorname{HF}(S'/\a') \leq \operatorname{HF}(S'/\ov{\a}S')$. Because $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\a'S)$, it follows that $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\ov{\a}S)$, and the proof is complete.
\end{proof}
Building from Chong's work in \cite{Chong}, and using Lemma \ref{Lemma Gorenstein minimal} and Theorem \ref{Theorem Artinian}, we can now prove the main result of this section.
\begin{theorem} \label{Theorem EGH ACI} Let $\a \subseteq S = \Bbbk[x_1,\ldots,x_n]$ be an almost complete intersections of degrees $(\d;D)=(d_1,d_2,d_3;D)$, with $D \leq \sigma = d_1+d_2+d_3-3$. Then $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\LL(\d;D))$.
\end{theorem}
\begin{proof}
We may assume that $\Bbbk$ is infinite and, by Theorem \ref{Theorem Artinian}, that $S/\a$ is Artinian. Write $\a=\mathfrak{f}+(G)$, where $\mathfrak{f}$ is generated by a regular sequence of degrees $\d$, and $G$ is a form of degree $D$. By a standard argument of linkage (for instance, see \cite[Corollary 5.2.19]{Migliore}), for all $j \in \mathbb{Z}$ we get $\operatorname{HF}(S/\a;j) = \operatorname{HF}(S/\mathfrak{f};j) - \operatorname{HF}(S/\mathfrak{g};\sigma-j)$, where $\mathfrak{g}=\mathfrak{f}:\a$. Since $\mathfrak{g}$ contains $\mathfrak{f}$, and it defines a Gorenstein ring, by Lemma \ref{Lemma Gorenstein minimal} there is a $\d$-LPP ideal $\LL$ with the same Hilbert function as $\mathfrak{g}$. If we set $\b = (\mathsf{x}^\d): \LL$, then using linkage again we obtain that
\begin{align*}
\operatorname{HF}(S/\b;j) & = \operatorname{HF}(S/(\mathsf{x}^\d);j) - \operatorname{HF}(S/\LL;\sigma-j) \\
& = \operatorname{HF}(S/\mathfrak{f};j) - \operatorname{HF}(S/\mathfrak{g};\sigma-j) = \operatorname{HF}(S/\a;j)
\end{align*}
for all $j \in \mathbb{Z}$. By \cite[Theorem 1.2]{MP}, the monomial ideal $\b$ satisfies $\EGH{\d}{n}$. Therefore, there exists a $\d$-LPP ideal $\LL'$ with the same Hilbert function as $\b$. In particular, since $\LL'$ must contain $\LL(\d;D)$, we have that $\operatorname{HF}(S/\a) = \operatorname{HF}(S/\b) = \operatorname{HF}(S/\LL') \leq \operatorname{HF}(S/\LL(\d;D))$.
\end{proof}
\begin{remark} \label{Remark over socle}
If $\a \subseteq S=\Bbbk[x_1,\ldots,x_n]$ is an almost complete intersection of degrees $(\d;D)=(d_1,\ldots,d_h;D)$, with $D>\sigma = \sum_{i=1}^h(d_i-1)$, then the conclusion of Theorem \ref{Theorem EGH ACI} still holds, even without assuming that $h=3$. In fact, in this scenario we have that $\LL(\d;D) = (\mathsf{x}^\d) + (U_D)$, where $U_D=x_1^{d_1-1}\cdots x_h^{d_h-1}x_{h+1}^{D-\sigma}$. Iterating the argument used in the proof of Theorem \ref{Theorem Artinian}, we can find an almost complete intersection $\a' = \mathfrak{f}'+(G') \subseteq S'=\Bbbk[x_1,\ldots,x_{h+1}]$ of degrees $(\d;D)$ such that $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\a'S)$. Moreover, we may assume that $x_{h+1}$ is regular modulo $\mathfrak{f}'$. Since $\operatorname{HF}(\LL(\d;D)/(\mathsf{x}^\d);m) \leq 1$ for all $m \in \mathbb{Z}$, with equality if and only if $m \geq D$, it follows that $\operatorname{HF}(\a'/\mathfrak{f}') \geq \operatorname{HF}(\LL(\d;D)/(\mathsf{x}^\d))$, because otherwise we would have $\a' \subseteq (\mathfrak{f}')^{\rm sat} = \mathfrak{f}'$. As a consequence, $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\a'S) \leq \operatorname{HF}(S/\LL(\d;D))$.
\end{remark}
We now show how Theorem \ref{Theorem EGH ACI} allows to recover the Cayley-Bacharach theorem for points in $\mathbb{P}^3$, which has been proved by Geramita and Kreuzer \cite[Corollary 4.4]{GK}. To do so, we introduce some notation. Let $\d=(d_1,d_2,d_3) \in \mathbb{N}^{3}$, with
$1 \leq d_1\leq d_2 \leq d_3$, and let $1 \leq D \leq d_1+d_2+d_3-3$. Let $a \in \{1,2,3\}$ be such that $\sum_{i=1}^{a-1} (d_i-1) < D \leq \sum_{i=1}^a (d_i-1)$. We define a new sequence $\c=(c_1,c_2,c_3)$ as
\[
c_i =\begin{cases} 1 & \text{ if } 1 \leq i < a \\
d_a - \left(D-\sum_{j=1}^{a-1} (d_j-1) \right) & \text{ if } i=a \\
d_i & \text{ if } a < i \leq 3.
\end{cases}
\]
For example, if $(\d;D) = (4,4,4;4)$, then $\c=(1,3,4)$.
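Indeed, in this case $a=2$, since $d_1-1 = 3 < D = 4 \leq (d_1-1)+(d_2-1) = 6$, so that $c_1=1$, $c_2 = d_2 - \left(D - (d_1-1)\right) = 4-1=3$ and $c_3 = d_3 = 4$.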
\begin{corollary} \label{Corollary multiplicity ACI codim 3} Let $\a \subseteq S=\Bbbk[x_1,\ldots, x_n]$ be an almost complete intersection of degrees $(\d;D)=(d_1,d_2,d_3;D)$, with $D \leq \sigma=d_1+d_2+d_3-3$. Then $\operatorname{e}(S/\a) \leq d_1d_2d_3 - c_1c_2c_3$.
\end{corollary}
\begin{proof}
By Theorem \ref{Theorem EGH ACI} we have that $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\LL(\d;D))$. Therefore, in order to obtain an upper bound for the multiplicity of $S/\a$, we may replace $\a$ by $\LL(\d;D) = (\mathsf{x}^\d) + (U_D)$. Since $D \leq \sigma$, the variable $x_i$ does not divide $U_D$ for any $i \geq 4$. Thus, after going modulo the regular sequence $x_4,\ldots,x_n$, we may assume that $S/\LL(\d;D)$ is Artinian. With the notation introduced above, one can easily check that $(\mathsf{x}^\d):U_D = (\mathsf{x}^\c)$. It then immediately follows that $\operatorname{e}(S/\LL(\d;D)) = \operatorname{e}(S/(\mathsf{x}^\d)) - \operatorname{e}(S/((\mathsf{x}^\d):U_D)) = d_1d_2d_3 - c_1c_2c_3$.
\end{proof}
Now that we have obtained Corollary \ref{Corollary multiplicity ACI codim 3}, the proof of the Cayley-Bacharach theorem in $\mathbb{P}^3$, Corollary \ref{Corollary P3}, is immediate. In fact, in the notation of the Corollary, let $\mathfrak{f}$ be a complete intersection defining $\Gamma$ in the homogeneous coordinate ring $S$ of $\mathbb{P}^3$, and let $G$ be a form of degree $D$ defining $X$. If $G \notin \mathfrak{f}$, by Corollary \ref{Corollary multiplicity ACI codim 3} the multiplicity of $S/(\mathfrak{f}+(G))$ is at most $d_1d_2d_3 - c_1c_2c_3$. So $X$ contains at most this number of points of $\Gamma$, which contradicts the assumptions of Corollary \ref{Corollary P3}.
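For instance, for $(\d;D)=(4,4,4;4)$ we have $\c=(1,3,4)$, so a quartic surface containing at least $4^3 - 1 \cdot 3 \cdot 4 + 1 = 53$ points of a complete intersection of three quartic surfaces in $\mathbb{P}^3$ must contain all $64$ of them, in accordance with the formula $D^3-D^2+D+1$ given in the introduction.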
\begin{comment}
\begin{corollary} \label{Theorem points P3} Let $\Gamma \subseteq \mathbb{P}^3$ be a complete intersection of forms of degree $\d=(d_1,d_2,d_3)$. If $X$ is a hypersurface of degree $D \leq \sigma=d_1+d_2+d_3-3$ that passes through at least $d_1d_2d_3-c_1c_2c_3+1$ points of $\Gamma$, then $X$ contains $\Gamma$.
\end{corollary}
\ale{Togliere esempio?}
We conclude the section with an explicit example.
\begin{example} Let $\Gamma \subseteq \mathbb{P}^3$ be a complete intersection of degrees $\d=(d,d,d)$. If $X$ is a surface of degree $d$ passing through at least $d^3-(d-1)d + 1$ points of $\Gamma$, then it contains $\Gamma$. For instance, if a quartic contains at least $53$ points of a complete intersection of three quartics, then it contains all $64$ of them.
\end{example}
\end{comment}
\section{Almost complete intersections and Cayley-Bacharach theorems in $\mathbb{P}^n$} \label{Section ACI}
In order to obtain a Cayley-Bacharach type theorem for points in $\mathbb{P}^n$, we need to exhibit upper bounds on the multiplicity of almost complete intersections of height $n$. The strategy is to use Theorem \ref{Theorem Artinian} to first reduce to the Artinian case, and then to repeatedly apply a result on the EGH conjecture due to Francisco \cite{F}, together with some symmetry considerations on certain Hilbert functions. This combination of techniques allows us to significantly improve the known upper bounds due to Engheta \cite[Theorem 1]{E}, and later extended by Huneke, Mantero, McCullough and Seceleanu to a more general setting \cite[Theorem 2.2]{HMMCS}.
We start with an easy observation on multiplicities of unmixed ideals. Given a homogeneous ideal $I$, we let $\operatorname{Assh}(S/I) = \{P \in \operatorname{Ass}(S/I) \mid \dim(S/I) = \dim(S/P)\}$. An ideal is called unmixed if $\operatorname{Assh}(S/I) = \operatorname{Ass}(S/I)$.
\begin{remark} \label{Remark multiplicity unmixed}
If $J$ is an unmixed homogeneous ideal of height $h$, and $I$ is a homogeneous ideal of height $h$ which strictly contains $J$, then $\operatorname{e}(S/J) > \operatorname{e}(S/I)$. In fact, there must exist $P \in \operatorname{Assh}(S/J) = \operatorname{Ass}(S/J)$ such that $J_P \subsetneq I_P$, otherwise the two ideals would coincide. As $J$ is unmixed, and $\operatorname{Assh}(S/I) \subseteq \operatorname{Assh}(S/J) = \operatorname{Ass}(S/J)$, the associativity formula for multiplicities (for instance, see \cite[Theorem 11.2.4]{SH}) gives
\begin{align*}
\operatorname{e}(S/J) & = \sum_{P \in \operatorname{Assh}(S/J)} \operatorname{e}(S/P) \ell((S/J)_P) \\
& > \sum_{P \in \operatorname{Assh}(S/J)} \operatorname{e}(S/P) \ell((S/I)_P) \geq \sum_{Q \in \operatorname{Assh}(S/I)} \operatorname{e}(S/Q) \ell((S/I)_Q) = \operatorname{e}(S/I).
\end{align*}
\end{remark}
\begin{notation}
Let $\d=(d_1,\ldots,d_h)$, with $1 \leq d_1 \leq \cdots \leq d_h$. For $m \geq 2$, we consider the $\d$-LPP ideal $\LL(\d;m-1)$ inside $\ov{S}=\Bbbk[x_1,\ldots,x_h]$. Let $\sigma=\sum_{i=1}^h(d_i-1)$, and define
\[
\varphi_m = \begin{cases} \operatorname{HF}(\ov{S}/(\mathsf{x}^\d);m) - \operatorname{HF}(\ov{S}/\LL(\d;m-1);m) & \text{ if } 2\leq m \leq \sigma\\
0 & \text{ otherwise. }
\end{cases}
\]
\end{notation}
Clearly, $\varphi_m$ only depends on $m$ and on the sequence $\d$. Moreover, observe that for $2 \leq m \leq \sigma$ we have $\varphi_m = \operatorname{HF}(\LL(\d;m-1)/(\mathsf{x}^\d);m)=h-\dim_\Bbbk(((\mathsf{x}^\d) \cap (U_{m-1}))_m)$. In particular, $\varphi_m>0$ for $2 \leq m \leq \sigma$.
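As a small example, let $\d=(2,2,2)$, so that $h=3$ and $\sigma=3$. For $m=2$ we have $U_1=x_1$, and $((\mathsf{x}^\d) \cap (U_1))_2$ is spanned by $x_1^2$, so $\varphi_2 = 3-1 = 2$; for $m=3$ we have $U_2=x_1x_2$, and $((\mathsf{x}^\d) \cap (U_2))_3$ is spanned by $x_1^2x_2$ and $x_1x_2^2$, so $\varphi_3 = 3-2 = 1$.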
\begin{theorem} \label{THM multiplicity}
Let $\a \subseteq S=\Bbbk[x_1,\ldots,x_n]$ be an Artinian almost complete intersection of degrees $(\d;D)=(d_1,\ldots,d_n;D)$, with $D \leq \sigma = \sum_{i=1}^n(d_i-1)$. Then $\operatorname{HF}(S/\a;m) \leq \operatorname{HF}(S/(\mathsf{x}^\d);m)-\varphi_m$ for all $D < m \leq \sigma$. In particular, $\operatorname{e}(S/\a) \leq \prod_{i=1}^n d_i - \sum_{m=D+1}^{\sigma} \varphi_m - 1$.
\end{theorem}
\begin{proof}
Without loss of generality we may assume that $\Bbbk$ is infinite. We first prove the inequality on Hilbert functions.
We start by treating the case $D<d_1$. Under this assumption, we can find a regular sequence $f_1',\ldots,f_n'$ of degrees $\d'=(D,d_2,\ldots,d_n)$ inside $\a$. To see this, pick $G$ as the first element $f_1'$. Since $\a_{\leq d_2}$ has height at least two, we may find an element $f_2'$ of degree $d_2$ which is regular modulo $f_1'$. Proceeding this way, we construct an ideal $\mathfrak{f}' \subseteq \a$ generated by a regular sequence of degrees $\d'$. Observe that $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/\mathfrak{f}') = \operatorname{HF}(S/(\mathsf{x}^{\d'}))$. Moreover, since $(\mathsf{x}^{\d'}) = (\mathsf{x}^\d) + (x_1^D)$ is a $\d$-LPP almost complete intersection, we have that $\LL(\d;m-1) \subseteq (\mathsf{x}^{\d'})$ for all $D< m \leq \sigma$. As a consequence, $\operatorname{HF}(S/\a;m) \leq \operatorname{HF}(S/\LL(\d;m-1);m) = \operatorname{HF}(S/(\mathsf{x}^\d);m) - \varphi_m$ for all $D< m \leq \sigma$.
Now assume that $D \geq d_1$. If $D=\sigma$, there is nothing to show, so we may assume that $D<\sigma$. We proceed by induction on $\sigma' = \sigma-D \geq 1$. Assume that $\sigma'=1$, and observe that $\varphi_\sigma=1$. Since $S/\mathfrak{f}$ is an Artinian complete intersection, and $\mathfrak{f} \subsetneq \a$, the socle of $S/\mathfrak{f}$ must be contained in $\a/\mathfrak{f}$. Thus $\operatorname{HF}(S/\a;\sigma) = 0 = \operatorname{HF}(S/\mathfrak{f};\sigma) - 1 = \operatorname{HF}(S/(\mathsf{x}^\d);\sigma)- \varphi_\sigma$, and the desired inequality holds in this case.
Assume $\sigma'>1$. By \cite[Corollary 5.2]{F}, we get $\operatorname{HF}(S/\a;D+1) \leq \operatorname{HF}(S/\LL(\d;D);D+1) = \operatorname{HF}(S/(\mathsf{x}^\d);D+1) - \varphi_{D+1}$. We have already observed that $\varphi_{D+1} > 0$, since $D+1 \leq \sigma$. In particular, the above inequality implies that $\operatorname{HF}(\a;D+1) > \operatorname{HF}(\mathfrak{f};D+1)$. Therefore, there exists an element $G' \in \a$ of degree $D+1$ which does not belong to $\mathfrak{f}$. Let $\a' = \mathfrak{f} + (G')$, which is an almost complete intersection of degrees $(\d;D+1)$. By induction, and because $\a'\subseteq \a$, we have that $\operatorname{HF}(S/\a;m) \leq \operatorname{HF}(S/\a';m) \leq \operatorname{HF}(S/(\mathsf{x}^\d);m) - \varphi_m$ for all $D+1 < m \leq \sigma$, and this concludes the proof of the claimed inequalities on Hilbert function.
Finally, to obtain the inequality for the multiplicity, it is sufficient to observe that
\begin{align*}
\operatorname{e}(S/\a) &= \sum_{m=0}^\sigma \operatorname{HF}(S/\a;m) = \sum_{m=0}^{D} \operatorname{HF}(S/\mathfrak{f};m) -1 + \sum_{m=D+1}^\sigma \operatorname{HF}(S/\a;m) \\
& \leq \sum_{m=0}^\sigma \operatorname{HF}(S/(\mathsf{x}^\d);m) - \sum_{m=D+1}^\sigma \varphi_m -1 = \prod_{i=1}^n d_i - \sum_{m=D+1}^\sigma \varphi_m -1. \qedhere
\end{align*}
\end{proof}
\begin{remark} \label{Remark HMMCS}
The bound obtained in Theorem \ref{THM multiplicity}, together with Remark \ref{Remark multiplicity unmixed}, recovers and improves the one given in \cite{HMMCS} and \cite{E}. In fact, by Theorem \ref{Theorem Artinian} we can first of all reduce to the Artinian case. If $D < \sigma$ then $\sum_{m=D+1}^{\sigma} \varphi_m \geq \sigma -D$, and thus $\prod_{i=1}^h d_i - \sum_{m=D+1}^{\sigma} \varphi_m - 1 \leq \prod_{i=1}^h d_i -\sigma + D-1$, which is the bound given in \cite{E,HMMCS}. When $D \geq \sigma$, the results in \cite{E,HMMCS} just give that $\operatorname{e}(S/\a)\leq \prod_{i=1}^h d_i-1$, which is the bound given by Remark \ref{Remark multiplicity unmixed}. Observe that, in the case $D \geq \sigma$, the bound $\operatorname{e}(S/\a) \leq \prod_{i=1}^h d_i-1$ is also the one predicted by the EGH conjecture.
\end{remark}
We now further improve the bound of Theorem \ref{THM multiplicity} by using that, if $\a=\mathfrak{f}+(G)$ is an almost complete intersection, then the ideal $\mathfrak{g}=\mathfrak{f}:\a$ defines a Gorenstein ring, hence it has symmetric Hilbert function.
\begin{theorem} \label{Theorem symmetric}
Let $\a=\mathfrak{f}+(G) \subseteq S=\Bbbk[x_1,\ldots,x_n]$ be an almost complete intersection of degrees $(\d;D)=(d_1,\ldots,d_h;D)$, with $D < \sigma = \sum_{i=1}^h(d_i-1)$. Let $\tau_- = \lfloor \frac{\sigma+D-1}{2} \rfloor$ and $\tau^+ = \lceil \frac{\sigma+D-1}{2} \rceil$. Then $\operatorname{e}(S/\a) \leq \prod_{i=1}^h d_i - \sum_{m=D+1}^{\tau_-} \varphi_m - \sum_{m=D+1}^{\tau^+} \varphi_m-2$.
\end{theorem}
\begin{proof}
We may assume that $\Bbbk$ is infinite and, by Theorem \ref{Theorem Artinian}, that $S/\a$ is Artinian. Let $\mathfrak{g}=\mathfrak{f}:\a$. Since $\a/\mathfrak{f} \cong S/\mathfrak{g}(-D)$, and $S/\mathfrak{g}$ is Gorenstein, we have that $\operatorname{HF}(\a/\mathfrak{f};D+m) = \operatorname{HF}(\a/\mathfrak{f};\sigma-m)$ for all $m \in \mathbb{Z}$. By Theorem \ref{THM multiplicity} we have that $\operatorname{HF}(\a/\mathfrak{f};m) = \operatorname{HF}(S/\mathfrak{f};m) - \operatorname{HF}(S/\a;m) \geq \varphi_m$ for all $m \geq D+1$. Therefore
\begin{align*}
\operatorname{e}(S/\mathfrak{g}) & = \sum_{m=D}^{\tau_-} \operatorname{HF}(\a/\mathfrak{f};m) + \sum_{m=D}^{\tau^+} \operatorname{HF}(\a/\mathfrak{f};m) \\
& \geq \sum_{m=D+1}^{\tau_-} \varphi_m + \sum_{m=D+1}^{\tau^+} \varphi_m + 2.
\end{align*}
Since $\operatorname{e}(S/\a) = \operatorname{e}(S/\mathfrak{f}) - \operatorname{e}(S/\mathfrak{g}) = \prod_{i=1}^h d_i - \operatorname{e}(S/\mathfrak{g})$, the proof is complete.
\end{proof}
\begin{remark}
Since the function $m \mapsto \varphi_m$ is non-increasing for $m \geq 2$, Theorem \ref{Theorem symmetric} always provides a bound at least as effective as the one of Theorem \ref{THM multiplicity}.
\end{remark}
We can finally state the main result of this section.
\begin{theorem} \label{THM Pn}
Let $\Gamma$ be a complete intersection of degrees $\d=(d_1,\ldots,d_n)$ in $\mathbb{P}^n$, and $X$ be a hypersurface of degree $D$, with $1 \leq D \leq \sigma = \sum_{i=1}^n(d_i-1)$. Set $\delta(\d;D) = \prod_{i=1}^n d_i - \sum_{m=D+1}^{\tau_-} \varphi_m - \sum_{m=D+1}^{\tau^+} \varphi_m - 1$. If $X$ contains at least $\delta(\d;D)$ points of $\Gamma$, then $X$ contains $\Gamma$.
\end{theorem}
We omit the proof since the strategy is the same as in the case of $\mathbb{P}^3$, outlined at the end of Section \ref{Section Gorenstein}. By Theorem \ref{Theorem symmetric}, in fact, we may choose $\delta(\d;D)-1$ as an upper bound for the multiplicity of any almost complete intersection of degrees $(\d;D)$.
\begin{example} Let $\Gamma \subseteq \mathbb{P}^{2n}$ be a complete intersection of $2n$ cubics. If $D=3$, with the notation of Theorem \ref{Theorem symmetric} we have that $\sigma=4n$, $\tau_- = \tau^+ = 2n+1$, and $\sum_{m=4}^{2n+1} \varphi_m=3n^2-4n+1$. Therefore, if $X$ is a cubic containing at least $3^{2n} - (6n^2-8n+3)$ points of $\Gamma$, then $X$ contains $\Gamma$. For instance, if $\Gamma$ is a complete intersection of four cubics in $\mathbb{P}^4$, and $X$ is a cubic containing at least $\delta(3,3,3,3;3) = 70$ points of $\Gamma$, then it contains all $81$ points of $\Gamma$. Observe that the optimal value given by the EGH conjecture in this case would be $64$.
\end{example}
We conclude the section by showing that, if either $h = 3$, or $h \geq 4$ and $D < d_4$, then we can improve Theorem \ref{THM Pn} using the results from Section \ref{Section Gorenstein}. In fact, with the notation of Section \ref{Section Gorenstein}, if $h=3$ one can take $\delta(\d;D) = d_1d_2d_3 - c_1c_2c_3+1$, by Corollary \ref{Corollary multiplicity ACI codim 3}. This is a more convenient choice than the value of $\delta(\d;D)$ coming from Theorem \ref{THM Pn}, since it relies on the sharper estimates on Hilbert functions obtained in Section \ref{Section Gorenstein}, which are only available for almost complete intersections of height three. If $h \geq 4$ and $D < d_4$, we have the following theorem.
\begin{theorem} \label{Theorem delta2}
Let $\a =\mathfrak{f}+(G)\subseteq S = \Bbbk[x_1,\ldots,x_n]$ be an almost complete intersection of degrees $(\d;D) = (d_1,\ldots,d_h;D)$. Assume that $h \geq 4$ and $D < d_4$. Let $\sigma=\sum_{i=1}^h(d_i-1)$, $\tau_- = \lfloor \frac{\sigma+D-1}{2} \rfloor$ and $\tau^+ = \lceil \frac{\sigma+D-1}{2} \rceil$. Consider the ideal $\LL(\d;D)$ inside $\ov{S} = \Bbbk[x_1,\ldots,x_h]$, and let
\[
\delta_m = \begin{cases} \operatorname{HF}(\ov{S}/(\mathsf{x}^\d);m) - \operatorname{HF}(\ov{S}/\LL(\d;D);m) & \text{ for } 0 \leq m \leq d_4 \\
\varphi_m & \text{ otherwise. }
\end{cases}
\]
Then $\operatorname{e}(S/\a) \leq \prod_{i=1}^h d_i - \sum_{m=D+1}^{\tau_-} \delta_m - \sum_{m=D+1}^{\tau^+} \delta_m-2$.
\end{theorem}
\begin{proof}
We can assume that $\Bbbk$ is infinite and, by Theorem \ref{Theorem Artinian}, that $S=\ov{S}$ and $h=n$.
We start by showing that $\operatorname{HF}(S/\a;m) \leq \operatorname{HF}(S/(\mathsf{x}^\d);m) - \delta_m$ for all $m \in \mathbb{Z}$. As in the proof of Theorem \ref{Theorem symmetric}, this will yield the desired upper bound for $\operatorname{e}(S/\a)$ since the Hilbert function of $\a/\mathfrak{f}$ is symmetric and $\operatorname{e}(S/(\mathsf{x}^\d)) = \prod_{i=1}^n d_i$.
Observe that $\operatorname{HF}(S/\a;m) \leq \operatorname{HF}(S/(\mathsf{x}^\d);m) - \delta_m$ is true for $m>d_4$ by Theorem \ref{THM multiplicity}. Therefore, it suffices to focus on the inequality in degrees $0 \leq m \leq d_4$.
Let $s=\max\{j \geq 4 \mid d_j=d_4\}$. First, assume that the elements $f_1,f_2,f_3,G$ form a regular sequence. Then $\a$ contains a regular sequence $f_1',\ldots,f_n'$ of degrees $\d'=(d_1,d_2,d_3,D,d_5,\ldots,d_n)$. We have $\operatorname{HF}(S/\a) \leq \operatorname{HF}(S/(\mathsf{x}^{\d'})) \leq \operatorname{HF}(S/\LL(\d;D))$, because $\LL(\d;D) \subseteq \LL$, where $\LL$ is the $\d$-LPP ideal with the same Hilbert function as $(\mathsf{x}^{\d'})$, which exists by \cite[Theorem 1.2]{MP}. By definition, $\operatorname{HF}(S/\LL(\d;D);m) = \operatorname{HF}(S/(\mathsf{x}^\d);m)-\delta_m$ for all $0 \leq m \leq d_4$.
If the elements $f_1,f_2,f_3,G$ do not form a regular sequence, then $\b=(f_1,f_2,f_3,G)$ is an almost complete intersection of degrees $(\d'';D) = (d_1,d_2,d_3;D)$. Using Theorem \ref{Theorem EGH ACI} and Remark \ref{Remark over socle} we have that $\operatorname{HF}(S/\b) \leq \operatorname{HF}(S/\LL(\d'';D))$. For all $m < d_4$ we conclude that $\operatorname{HF}(S/\a;m) = \operatorname{HF}(S/\b;m) \leq \operatorname{HF}(S/\LL(\d'';D);m) = \operatorname{HF}(S/\LL(\d;D);m) =\operatorname{HF}(S/(\mathsf{x}^\d);m) - \delta_m$. For $m=d_4$, observe that the elements $f_4,\ldots,f_s$ are all minimal generators of $\a_{\leq d_4}$, so that $\operatorname{HF}(\a/\b;d_4) = s-3$. As a consequence, we get that $\operatorname{HF}(S/\a;d_4) = \operatorname{HF}(S/\b;d_4) - \operatorname{HF}(\a/\b;d_4) \leq \operatorname{HF}(S/\LL(\d'';D);d_4) - (s-3) = \operatorname{HF}(S/\LL(\d;D);d_4)$.
\end{proof}
\begin{remark} \label{Remark delta2}
Theorem \ref{Theorem delta2} shows that, in the case $h \geq 4$ and $D < d_4$, we may replace the value $\delta(\d;D)$ from Theorem \ref{THM Pn} with $\prod_{i=1}^n d_i - \sum_{m=D+1}^{\tau_-} \delta_m - \sum_{m=D+1}^{\tau^+} \delta_m - 1$. As in the case $h=3$, the latter choice is always at least as good, and should be adopted whenever possible. Its advantage becomes significantly more evident when $d_4 \gg D$, as the following example shows.
\end{remark}
\begin{example} \label{Example (4,4,4,10;4)} Let $\Gamma \subseteq \mathbb{P}^4$ be a complete intersection of degrees $\d=(4,4,4,10)$. By Remark \ref{Remark delta2}, if $X$ is a quartic passing through at least $532$ points of $\Gamma$, then $X$ contains all $640$ points of $\Gamma$. Notice that Theorem \ref{THM Pn} would give a value of $\delta(\d;4) = 612$, while the one predicted by the EGH conjecture would be $521$ points.
\end{example}
\bibliographystyle{alpha}
\section{Introduction}
\label{intro}
\setcounter{equation}{0}
This work concerns uniqueness theory for parabolic semilinear stochastic partial differential equations (SPDE)
of the form
\begin{eqnarray}
\label{generalized}
\frac{\partial u}{\partial t} (t,x)&=& \frac{\Delta}{2} u(t,x) + \sigma(x,u(t,x)) \dot{W}(t,x) \\
u(0,x) &=& u_0(x), \nonumber
\end{eqnarray}
where $\dot{W}(t,x)$ is two-parameter white noise on ${\mathbb{R}}_+\times {\mathbb{R}}$,
and $\sigma:{\mathbb{R}}^2\to{\mathbb{R}}$ is $\gamma$-H\"older continuous in $u$ and also
has at most linear growth at $\infty$ in $u$.
Weak existence of solutions in the appropriate function space is then standard
(see, e.g. Theorems 1.1 and 2.6 of Shiga~\cite{shi94} or Theorem 1.1 of
Mytnik-Perkins~\cite{mp11}), and if $\gamma=1$, pathwise uniqueness of
solutions follows from standard fixed-point arguments (see Chapter 3 in \cite{wal86}).
A natural question is then:
\[\hbox{If $\gamma<1$, are solutions pathwise unique?}\]
The motivation for this problem comes from a number of models arising from
branching models and population genetics for which $\gamma=1/2$.
\medskip
Next we give some examples. In the first three, we only consider nonnegative
solutions, while in the fourth example we allow solutions to take negative
values. If $E\subset {\mathbb{R}}$, we write $C(E)$ for the space of continuous
functions on $E$ with the compact-open topology.
\medskip
\noindent{\bf Example 1.} If $\sigma(u)=\sqrt u$ and we assume $u\ge 0$, then
a solution to \eqref{generalized} corresponds to the density $u(t,x)dx=X_t(dx)$, where
$X_t$ is the one-dimensional super-Brownian motion. The super-Brownian motion
arises as the rescaled limit of branching random walk (see Reimers~\cite{rei89} and
Konno-Shiga~\cite{ks88}). More precisely, assume that particles occupy sites
in ${\mathbb{Z}}/\sqrt N$. With Poisson rate $N/2$, each particle produces offspring at
a randomly chosen nearest neighbor site. Finally, particles die at rate $N/2$.
For $x\in{\mathbb{Z}}/\sqrt N$ and $t\geq0$, set
\[
U^N(t,x)=N^{-1/2}\times(\hbox{number of particles at $x$ at time $t$}).
\]
If the initial ``densities'' converge in the appropriate state space, then $U^N$
will converge weakly on the appropriate function space to the
solution of \eqref{generalized}, with $\sigma$ as above--see
Reimers~\cite{rei89} for a proof of this result using nonstandard analysis.
Furthermore, this solution is unique in law.
Uniqueness in law is established by the well-known exponential duality between
$u(t,x)$ and solutions $v(t,x)$ of the semilinear PDE
\[\frac{\partial v}{\partial t}=\frac{\Delta v}{2}-\frac{1}{2}v^2.\]
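In Laplace-functional form, the duality reads: for nonnegative $\phi\in C_c^\infty({\mathbb{R}})$ and $v(0,\cdot)=\phi$,
\[
E\left[e^{-\langle u_t,\phi\rangle}\right]=e^{-\langle u_0,v_t\rangle},
\]
which determines the Laplace functional, and hence the law, of $u_t$.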
One of us (Mytnik~\cite{myt98w}) extended this exponential duality, and hence proved uniqueness in law for $\sigma(u)=u^p, \ u\ge 0$ where $1/2<p<1$. The dual process is then a solution to an SPDE driven by a one-sided stable process.
Pathwise uniqueness among nonnegative solutions remains unsolved for $0< p\le 3/4$
(see below for $p>3/4$).
\medskip
\noindent {\bf Example 2.} If $\sigma(x,u)=\sqrt {g(x,u)u},\ u\ge0$, where $g$
is smooth, bounded, and bounded away from $0$, then any kind of uniqueness
for solutions to ~\eqref{generalized} is unresolved except when $g$ is constant. Such
equations arise as weak limit points of the branching particle systems as in
Example~1, but where the branching and death rates of a particle at $x$ in
population $u^N$ are $Ng(x,u^N)/2$.
\medskip
\noindent{\bf Example 3.} If $\sigma(x,u)=\sqrt{u(1-u)}$, $u\in[0,1]$, then
solutions to \eqref{generalized} are population densities for the stepping
stone model on the line. That is, $u(t,x)$ is the proportion of a particular
allele type at location $x$ in a population undergoing Brownian migration and
resampling between generations. Then uniqueness in law holds by a moment
duality argument (see Shiga~\cite{shi88}), and pathwise uniqueness remains unresolved.
\medskip
\noindent{\bf Example 4.} In this example, we no longer require $u$ to be
nonnegative. Consider $\sigma(u)=\sqrt{|u|}$ for $u\in{\mathbb{R}}$;
that is, consider the SPDE
\begin{equation}\label{annbranch}
\frac{\partial u}{\partial t}(t,x) =\frac{\Delta}{2} u(t,x) + \sqrt{|u(t,x)|} \dot{W}(t,x) .
\end{equation}
This equation arises as a weak limit point of the signed particle density of
two branching random walks, one with positive mass and one with negative mass,
which annihilate each other upon collision. More precisely, consider two
particle systems on ${\mathbb{Z}}/\sqrt N$, one with positive mass and the other with
negative mass. Each particle independently produces offspring of the same sign at a
randomly chosen nearest neighbor at rate $N/2$ and dies at rate $N/2$. The
systems interact when particles collide, and then there is pairwise
annihilation. Define $U^{N,\pm}(t,x)$ as in Example 1 where one considers
separately the positive and negative masses. Extend these functions by linear
interpolation to $x\in{\mathbb{R}}$. If $U^{N,\pm}(0,\cdot)\to u^{\pm}(0,\cdot)$ uniformly for
some limiting continuous functions with compact support satisfying
$u^+(0,\cdot)u^-(0,\cdot)\equiv 0$, then $\{(U^{N,+},U^{N,-}):N\in{\mathbb{N}}\}$ is
tight in the Skorokhod space of cadlag $C({\mathbb{R}})$-valued paths, where the
latter space of continuous functions has the compact open topology.
Any weak limit point $(u^+,u^-)$ will satisfy
\begin{equation}\label{annbranch2}
\frac{\partial u^{\pm}}{\partial t}(t,x) =\frac{\Delta}{2} u^{\pm}(t,x) + \sqrt{u^{\pm}(t,x)} \dot{W}_{\pm}(t,x) -\dot K_t,\ u^+(t,x)u^-(t,x)\equiv 0,\end{equation}
where $\dot W_+$ and $\dot W_{-}$ are independent space-time white noises and
$K_t$ is a continuous non-decreasing process taking values in the space of
finite measures on the line with the topology of weak convergence. The space-time
measure $K(dt,dx)$ records the time and location of the killing
resulting from the particle collisions. It is then easy to check that
$u=u^+-u^-$ satisfies \eqref{annbranch}. No results about uniqueness were
known for this process. The above convergence was proved in an earlier draft of this
article but we have not included it as the details are a bit lengthy, if
routine. The convergence will only be used to help our intuition in what
follows.
\medskip
In general, pathwise uniqueness of solutions, i.e. the fact that two solutions
with the same white noise and initial condition must coincide a.s., implies
the uniqueness of their laws (see, e.g. Kurtz~\cite{kur07}). Although quite
different duality arguments give uniqueness in law in Examples 1 and 3, at
least among nonnegative solutions, this kind of duality argument is notoriously
non-robust, and the interest in pathwise uniqueness stems in part from the
hope that such an approach would apply to a broader class of examples,
including perhaps Examples 2 and 4.
It has long been hoped that pathwise uniqueness holds in \eqref{generalized}
if $\sigma$ is $\gamma$-H\"older continuous in the solution $u$ for
$\gamma\ge 1/2$, since Yamada and Watanabe~\cite{yw71} showed the corresponding
result holds for finite-dimensional stochastic differential equations (SDE's).
They proved that if $\sigma_i:{\mathbb{R}}\to{\mathbb{R}}$ is H\"older continuous of index $1/2$ and
$b_i:{\mathbb{R}}^d\to{\mathbb{R}}$ is Lipschitz continuous then solutions to
\[ dX^i_t=\sigma_i(X^i_t)dB^i_t+b_i(X_t)dt,\ i=1,\dots,d\]
are pathwise unique. Note that \eqref{generalized} has the same ``diagonal
form'' as the above SDE albeit in infinitely many dimensions. It was
Viot~\cite{vio75b} who first noted Yamada and Watanabe's proof does extend to
infinite dimensional equations such as \eqref{generalized} if the noise is
white in time but has a bounded covariance kernel in space. The standard
proof breaks down for white noise since in the $t$ variable, solutions are
H\"older continuous of index $(1/4)-\epsilon$ for all $\epsilon>0$, but not
H\"older continuous of index $1/4$. Hence, solutions are too rough in the
time variable to be semimartingales. Nonetheless in
Mytnik-Perkins~\cite{mp11} a more involved extension of the Yamada-Watanabe
argument was established which proved pathwise uniqueness in
\eqref{generalized} if $\sigma(x,\cdot)$ is H\"older continuous of index
$\gamma>3/4$, uniformly in $x$.
This leads to the natural question of sharpness in this last result, that is:
\begin{eqnarray}\label{mainqu}&\hbox{Does pathwise uniqueness fail in general for \eqref{generalized} if $\sigma(x,\cdot)=\sigma(\cdot)$}\\
\nonumber&\hbox{is $\gamma$-H\"older continuous for $\gamma\le 3/4$, and in particular for $\gamma=1/2$?}\end{eqnarray}
For the corresponding SDE, the Yamada-Watanabe result is shown to be
essentially sharp by Girsanov's equation
\begin{equation}\label{girseq}
X_t=\int_0^t|X_s|^\gamma dB(s)
\end{equation}
for which one solution is $X_t=0$. If $\gamma<1/2$, there are non-zero solutions to
(\ref{girseq}), and so solutions are neither pathwise unique nor unique in
law, see section V.26 in Rogers and Williams~\cite{rw87}. This suggests we
consider the SPDE
\begin{eqnarray}
\label{spde}
\frac{\partial u}{\partial t}(t,x) &=& \frac{\Delta}{2} u(t,x) + |u(t,x)|^\gamma \dot{W}(t,x) \\
u(0,x) &=& 0. \nonumber
\end{eqnarray}
To state our main result we need some notation.
A superscript $k$, respectively $\infty$, indicates that
functions are in addition $k$ times, respectively infinitely often, continuously
differentiable. A subscript $b,$ respectively $c,$ indicates that they are also
bounded (together with corresponding derivatives), respectively have compact support.
Let
$\langle f,g\rangle=\int_{{\mathbb{R}}}f(x)g(x)\,dx$ denote the $L^2$ inner product.
Set
\begin{equation*}
\|f\|_{\lambda}:=\sup_{x \in {\mathbb{R}}} |f(x)| e^{\lambda |x|},
\end{equation*}
and define $C_{\rm rap}:=\{f \in C({\mathbb{R}}): \|f\|_{\lambda}
< \infty \text{ for any } \lambda >0\}$, endowed with the topology induced
by the norms $\|\cdot \|_{\lambda}$ for $\lambda>0.$ That is,
$f_n\to f$ in $C_{\rm rap}$ iff $d(f,f_n)=\sum_{k=1}^\infty2^{-k}(\Vert f-f_n\Vert_{k}\wedge 1)\to 0$ as $n\to\infty$.
Then $(C_{\rm rap},d)$ is a Polish space. The space $C_{\rm rap}$ is a commonly used state space
for solutions to \eqref{generalized} (see Shiga\cite{shi94}).
We assume in \eqref{generalized} that $\dot W$ is a white noise on the
filtered probability space
$(\Omega,\mathcal{F},\mathcal{F}_t,P)$, where $\mathcal{F}_t$ satisfies the usual hypotheses. This
means $W_t(\phi)$ is an $\mathcal{F}_t$-Brownian motion with variance
$\Vert\phi\Vert_2^2t$ for each $\phi\in L^2({\mathbb{R}},dx)$ and $W_t(\phi_1)$ and
$W_t(\phi_2)$ are independent if
$\langle\phi_1,\phi_2\rangle=0$. A stochastic
process $u:\Omega\times{\mathbb{R}}_+\times{\mathbb{R}}\to{\mathbb{R}}$ which is
$\mathcal{F}_t-\hbox{previsible}\times\hbox{Borel}$ measurable will be called a solution to the
SPDE \eqref{generalized} with initial condition $u_0:{\mathbb{R}}\to{\mathbb{R}}$ if
for each $\phi\in C_c^\infty({\mathbb{R}})$,
\begin{align}\label{SPDEMP}
\langle u_t,\phi\rangle=&\langle u_0,\phi\rangle+\int_0^t\left\langle u_s,\frac{\Delta}{2}\phi\right\rangle ds\\
\nonumber &+\int_0^t\int \sigma(x,u(s,x))\phi(x)W(ds,dx)\quad\hbox{ for all $t\ge 0$ a.s.}
\end{align}
(The existence of all the integrals is of course part of the definition.) We
use the framework of Walsh \cite{wal86} to define stochastic integrals with
respect to $W(ds,dx)$.
For $u_0\in C_{\rm rap}$, we say $u$ is a $C_{\rm rap}$-valued solution if, in addition, $t\to u(t,\cdot)$ has continuous $C_{\rm rap}$-valued paths for all $\omega$.
Here then is our main result which answers question \eqref{mainqu}
at least for $\gamma<3/4$.
\begin{theorem} \label{thm:mainresult} If $0<\gamma<3/4$, there is a
$C_{\rm rap}$-valued solution $u(t,x)$ to \eqref{spde} such that with positive
probability, $u(t,x)$ is not identically zero. In particular, uniqueness in
law and pathwise uniqueness fail for \eqref{spde}.
\end{theorem}
This leaves open the state of affairs for $\gamma=3/4$ where, based on analogy
with the SDE, one would guess that uniqueness holds. Our theorem does,
however, dampen the hope of handling many of the SPDE's in the above examples
through a Yamada-Watanabe type theorem. It also shows that the SPDE in
Example 4 does not specify a unique law.
A standard construction of non-zero solutions to Girsanov's SDE proceeds by starting an ``excursion'' from $\pm\epsilon$, running it until it hits $0$, and then proceeding to the next excursion, starting with the opposite sign. The process consisting of $\pm\epsilon$ jumps will disappear as $\epsilon\to 0$ due to the alternating signs. For $\gamma<1/2$, a diffusion calculation shows that the rescaled return time of the diffusion is in the domain of attraction of a stable subordinator of index $(2(1-\gamma))^{-1}<1$, and the limiting jumps will lead to non-trivial excursions in the scaling limit. It turns out that with a bit of work one can do the same in \eqref{spde} for $\gamma<1/2$. That is, one can seed randomly chosen bits of mass of size $\pm\epsilon$, run the SPDE until it hits $0$, and try again. Theorem~4 of Burdzy-Mueller-Perkins~\cite{bmp10} carries out this argument and gives Theorem~\ref{thm:mainresult} for $\gamma<1/2$. As a result, in the rest of this work we will assume
\begin{equation}\label{gammch}1/2\le \gamma<3/4.
\end{equation}
When $\gamma\ge 1/2$ the above excursion argument breaks down as the time to
construct a non-trivial excursion will explode. Instead we start excursions
which overlap in time and deal with the potential spatial overlap of
positive and negative excursions. As Example 4 suggests we will
annihilate mass when the overlap occurs. Much of the challenge will be to show that this overlap can be quite small if $\gamma<3/4$.
We now outline our strategy for constructing a non-trivial solution to
\eqref{spde}. Let $M_F(E)$ denote the space of finite measures on the metric
space $E$ with the weak topology. We will also use $\mu(\phi)$ and
$\langle \mu,\phi\rangle$ to denote integral of function $\phi$ against a
measure $\mu$. Below we will construct
$\eta^+_{\epsilon}, \eta^-_{\epsilon}\in M_F([0,1]^2)$, both of which converge to
Lebesgue measure $dsdx$ on the unit square as $\epsilon\downarrow 0$, and we will
also construct non-negative solutions $U^\epsilon(t,x)$ and $V^\epsilon(t,x)$ with $0$
initial conditions to the equation
\begin{eqnarray}
\label{pmn-spde}
\frac{\partial U^{\epsilon}}{\partial t}(t,x) &=& \dot\eta^+_{\epsilon}(t,x) + \frac{\Delta}{2} U^{\epsilon}(t,x)+ U^{\epsilon}(t,x)^\gamma \dot W^+(t,x)-\dot K^\epsilon_t\\
\label{pmn-spdeV} \frac{\partial V^{\epsilon}}{\partial t}(t,x) &=& \dot\eta^-_{\epsilon}(t,x) + \frac{\Delta}{2} V^{\epsilon}(t,x)+ V^{\epsilon}(t,x)^\gamma \dot W^-(t,x)-\dot K^\epsilon_t.
\end{eqnarray}
Here $\dot W^+$ and $\dot W^-$ are independent white noises and $t\to K_t^\epsilon$
is a non-decreasing $M_F({\mathbb{R}})$-valued process. As suggested by
\eqref{annbranch2}, $K^\epsilon(dt,dx)$ will record the locations of the
pairwise annihilations resulting from the collisions between our two
annihilating populations. This construction will lead to the condition
\[
U^\epsilon(t,\cdot)V^\epsilon(t,\cdot)\equiv 0.
\]
Note that $\eta^\pm_{\epsilon}$ are immigration terms. We will always assume that
$\epsilon\in(0,1]$. If $\eta_\epsilon=\eta_\epsilon^+-\eta_\epsilon^-$, it is easy to check that
$u_\epsilon=U^\epsilon-V^\epsilon$ satisfies
\begin{equation}\label{n-spde}
\frac{\partial u_{\epsilon}}{\partial t}(t,x) = \dot\eta_{\epsilon}(t,x) + \frac{\Delta}{2} u_{\epsilon}(t,x)+ |u_{\epsilon}(t,x)|^\gamma \dot W(t,x),
\end{equation}
for an appropriately defined white noise $\dot W$. We will show that there
exists a subsequence $\epsilon_k$ such that as $k\to\infty$, $u_{\epsilon_k}(t,x)$
converges weakly in the Skorokhod space of $C_{\rm rap}$-valued paths to a solution $u(t,x)$ of (\ref{spde}) (see
Proposition~\ref{prop:2.1}). $U^\epsilon$ is the positive part of $u_{\epsilon}$ and so
Theorem~\ref{thm:mainresult} will then follow easily from the following
assertion:
\begin{claim}
\label{claim1}
There exists $\delta>0$ such that for all $\epsilon\in (0,1]$,
\[
P\left(\sup_{t\in[0,1]}\int U^{\epsilon}(t,x)\,dx>\delta\right) > \delta.
\]
\end{claim}
If $N_\epsilon=\lfloor \epsilon^{-1}\rfloor$ (the greatest integer not exceeding $\epsilon^{-1}$), the
measure $\eta_{\epsilon}$ will be obtained by smearing out spatial mass using the time grid
\begin{equation}
\label{main-prob-est}
\mathcal{G}_{\epsilon} = \left\{k\epsilon/2
: 1\leq k\leq 2N_\epsilon\right\}.
\end{equation}
We further denote by $\mathcal{G}_{\epsilon}^{\rm odd}$ the points of $\mathcal{G}_{\epsilon}$
for which $k$ is odd, where $k$ is as
in the definition of $\mathcal{G}_{\epsilon}$ above. We also define $\mathcal{G}_{\epsilon}^{\rm even}$
to be those grid points for which $k$ is even and let
\begin{equation}
\label{def:J}
J_{\epsilon}^{x}(z)= \epsilon^{1/2} J((x-z)\epsilon^{-1/2}),\;\;\;x,z\in {\mathbb{R}},
\end{equation}
where $J$ is a non-negative even continuous function bounded by $1$ with support in $[-1,1]$, and such that \mbox{$\int_{{\mathbb{R}}} J(z)\,dz=1$}.
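Note that each immigration ``seed'' contributes total mass $\epsilon$: substituting $u=(x-z)\epsilon^{-1/2}$ gives
\[
\int_{{\mathbb{R}}} J_{\epsilon}^{x}(z)\,dz=\epsilon^{1/2}\int_{{\mathbb{R}}} J((x-z)\epsilon^{-1/2})\,dz=\epsilon\int_{{\mathbb{R}}} J(u)\,du=\epsilon.
\]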
Now let us enumerate points in $\mathcal{G}_{\epsilon}^{\rm odd}$ and $\mathcal{G}_{\epsilon}^{\rm even}$, as follows,
\[ \{s_i\,, i\in {\mathbb{N}}_{\epsilon}\} = \mathcal{G}_{\epsilon}^{\rm odd}, \qquad \{t_i\,, i\in
{\mathbb{N}}_{\epsilon}\} = \mathcal{G}_{\epsilon}^{\rm even}\,,\]
where $s_i=(2i-1)\frac{\epsilon}{2}$ and $t_i=2i\frac{\epsilon}{2}$ for
$i\in{\mathbb{N}}_\epsilon=\{1,\dots, N_\epsilon\}$. Let $x_i, y_i$, $i=1,2,\ldots$, be a sequence
of independent random variables distributed uniformly on $[0,1]$.
We define $\eta_{\epsilon}$
to be the signed measure
\begin{eqnarray*}
\eta_{\epsilon}(A) &=& \left[
\sum_{s_i\in\mathcal{G}_{\epsilon}^{\rm odd}}\int J_{\epsilon}^{x_i}(y)1_A(s_i,y)\,dy
- \sum_{t_i\in\mathcal{G}_{\epsilon}^{\rm even}}\int J_{\epsilon}^{y_i}(y)1_A(t_i,y)\,dy\right]\\
&\equiv&\eta_{\epsilon}^+(A)-\eta_{\epsilon}^-(A).
\end{eqnarray*}
It is easy to check that $\eta^\pm_\epsilon$ are as claimed above.
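Here is a sketch of that check for $\eta^+_{\epsilon}$ (the argument for $\eta^-_{\epsilon}$ is identical). For $\phi$ bounded and uniformly continuous, since each $J^{x_i}_{\epsilon}$ has total mass $\epsilon$ and support of width $2\epsilon^{1/2}$,
\[
\eta^+_{\epsilon}(\phi)=\epsilon\sum_{i=1}^{N_\epsilon}\phi(s_i,x_i)+O\bigl(\omega_\phi(\epsilon^{1/2})\bigr),
\]
where $\omega_\phi$ is a modulus of continuity of $\phi$. The sum has mean $\epsilon\sum_{i=1}^{N_\epsilon}\int_0^1\phi(s_i,x)\,dx\to\int_0^1\int_0^1\phi(s,x)\,dx\,ds$ (a Riemann sum) and variance at most $\epsilon^2N_\epsilon\Vert\phi\Vert_\infty^2\le\epsilon\Vert\phi\Vert_\infty^2\to0$, so $\eta^+_{\epsilon}(\phi)\to\int_0^1\int_0^1\phi(s,x)\,dx\,ds$ in $L^2$.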
To simplify our outline of the proof, we will take $\gamma=1/2$ so that we can
appeal to Example~4 for intuition. In later sections we do not make this
restriction on $\gamma$. We can then decompose
$U^\epsilon=\sum_{i=1}^{N_\epsilon}U^i$ into descendants of the $i$th immigrant at
$(s_i,x_i)$ (type $i$ particles) and similarly write
$V^\epsilon=\sum_{j=1}^{N_\epsilon}V^j$.
We can also keep track of the killed mass and, by adding these ghost particles back in, dominate
$U^\epsilon$ by a super-Brownian motion $\bar U$ with immigration $\eta^+_\epsilon$, and dominate the $\{U^i\}$ by independent super-Brownian motions $\{\bar U^i\}$ which sum to $\bar U$. Similar processes $\bar V$ and $\{\bar V^j\}$ may be built to bound the $V^\epsilon$ and $\{V^j\}$, respectively.
We can also decompose $K=\sum_iK^{i,U}=\sum_j K^{j,V}$ according to the type of individual being killed. Since $\bar U^i_{s_i+\cdot}(1)=\langle \bar U^i_{s_i+\cdot},1\rangle$ is Feller's branching diffusion, its hitting probabilities show that with reasonable probability one of the $\bar U^i$ clusters does hit total mass $1$, and we condition on such an event for a fixed choice of $i$, denoting the conditional law by $Q_i$. We now proceed in three steps:
{\bf Step 1.} $K^{i,U}_{s_i+t}(1)\le t^{3/2-\epsilon}$ for small $t$ with reasonable probability (see Lemma~\ref{thetabnd} below), uniformly in $\epsilon$.
\noindent This step uses a modulus of continuity for the support of the dominating
super-Brownian motions which states that they can spread locally no faster
than $t^{1/2}$ with some logarithmic corrections which we omit for the
purposes of this outline (see
Theorem~3.5 in Mueller-Perkins\cite{mp92} for a more general version which
we will need for the general $\gamma$ case). This means both $\bar U^i$ and
$\bar V^j$ are constrained to lie inside a growing space-time parabola rooted
at their space-time birth points and hence the same is true of the dominated processes
$U^i$ and $V^j$. If $\tau_j$ is the lifetime of $\bar V^j$ then, using the
known law of $\tau_j$ (it is the hitting time of zero by Feller's branching
diffusion starting from $\epsilon$) and a bit of geometry to see how large $\tau_j$
has to be for the parabola of $\bar V^j$ to intersect with that of $\bar U^i$
from $s_i$ to $s_i+t$, one can easily deduce that with reasonable probability
the only $\bar V^j$ clusters which can intersect with the $\bar U^i$ cluster
we have singled out are those born in the space-time rectangle
$[s_i,s_i+t]\times [x_i-2t^{1/2},x_i+2t^{1/2}]$. This means these are the
only $K^{j,V}$'s (killing by descendants of $(t_j,y_j)$) that can contribute to
$K^{i,U}$ on $[s_i,s_i+t]$ since other $V$ particles will not collide with the
$U^i$ mass. In particular, with reasonable probability none of the $V^j$
clusters born before $s_i$ can affect the mass of $U^i$ on $[s_i,s_i+t]$ (see
Lemma~\ref{presimass} for the proof of this last assertion for general $\gamma$).
The mean amount of killing by these $V^j$'s can be no more than the mean
amount of immigration which fuels these populations. More precisely, if one
integrates out the version of \eqref{pmn-spdeV} for $V^j$ over space, sums
over the above indices $j$, and brings the sum of the resulting $K^{j,V}$'s to the
left-hand side, then one finds that if
\begin{equation*}
R_{i,t}=[s_i,s_i+t]\times [x_i-2t^{1/2},x_i+2t^{1/2}]
\end{equation*}
then
\begin{equation*}
E\left[\sum_{(t_j,y_j)\in R_{i,t}}K^{j,V}_{s_i+t}(1)\right]
\le E\left[\eta^-_\epsilon\left( R_{i,t}\right)\right]\le ct^{3/2}.
\end{equation*}
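The last inequality is a crude count (stated here for $\epsilon\le t$): there are at most $t\epsilon^{-1}+1$ grid times $t_j$ in $[s_i,s_i+t]$, each seed carries total mass at most $\epsilon$, and the support of $J^{y_j}_{\epsilon}$ can meet the spatial window only if the uniform variable $y_j$ falls in an interval of length at most $4t^{1/2}+2\epsilon^{1/2}\le 6t^{1/2}$, whence
\[
E\left[\eta^-_\epsilon\left(R_{i,t}\right)\right]\le (t\epsilon^{-1}+1)\,\epsilon\cdot 6t^{1/2}\le 12\,t^{3/2}.
\]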
A standard interpolation argument now shows that the random variable inside the expectation on the left-hand side is bounded by $ct^{3/2-\epsilon}$ for small enough $t$ a.s., and the claimed result follows from the above and the fact that any killing recorded by $K^{i,U}$ is matched by a killing of $V$ mass by one of the $K^{j,V}$'s. It will turn out that for $\gamma<3/4$ one can get the same bound on $K^{i,U}_{s_i+t}(1)$.
\medskip
{\bf Step 2.} Under $Q_i$, which was the conditional law defined before Step
1, $4\bar U^i_{s_i+\cdot}(1)$ is a $4$-dimensional $\hbox{Bess}^2$-process and so
$\bar U^i_{s_i+t}(1)\ge t^{1+\epsilon}$ for small $t$ a.s.
\noindent This follows from a standard change of measure argument--see
Lemma~\ref{lem:baresc} and its proof below. For general $\gamma<3/4$, the mass
$4\bar U^i_{s_i+\cdot}(1)$ will be a time change of a $4$-dimensional $\hbox{Bess}^2$-process
and one will be able to show that $\bar U^i_{s_i+t}(1)\ge t^\beta$ for small $t$
a.s. for some $\beta <3/2$.
\medskip
{\bf Step 3.} There is a reasonable $Q_i$-probability (uniform in $\epsilon$) that $U^i_{s_i+t}(1)\ge t^{1+\epsilon}$ for small $t$.
\noindent To see this, note that the above steps set up a competition between the conditioning which
gives $\bar U^i(1)$ a positive linear drift and the killing which is limited
by Step 1. To decide which effect wins when considering $U^i(1)$ we will
consider the ratio
\[
R_t=\frac{\bar U^i_{s_i+t}(1)-U^i_{s_i+t}(1)}{\bar U^i_{s_i+t}(1)}\in[0,1]
\]
of ghost particles to total population (alive and dead). An application of
Ito's Lemma will show that $R$ is a submartingale satisfying
\[R_t=N_t+\frac{K^{i,U}_{s_i+t}(1)}{\bar U^i_{s_i+t}(1)},\]
where $N_t$ is a continuous martingale. The last term is at most $t^{1/2-2\epsilon}$
for small $t$ with reasonable $Q_i$ probability by Steps 1 and 2. We
localize to get the above behavior almost surely up to a stopping time, take means
and use Kolmogorov's inequality for martingales to see that $R_t$ is less than $1/2$ with
reasonable probability, uniformly in $\epsilon$. By Step 2 we can conclude that on
this set $U^i_{s_i+t}(1)\ge (1/2) t^{1+\epsilon}$ for small $t$ and so is bounded away
from $0$ for small $t$ with reasonable $Q_i$-probability uniformly in $\epsilon$, as
required. This step is carried out in the proof of Proposition~\ref{prop:2} in
Section~\ref{sec:stochanal} below.
\medskip
There are a number of problems when carrying out the above argument. In Step 1 we should
pay attention to the fact that the underlying probability is $Q_i$. In
addition the argument for general $\gamma$ is more involved. For example, the
clusters of the dominating processes $\bar V^j$ will no longer be
independent. Also, the rate of propagation results in
Mueller-Perkins~\cite{mp92} only apply for solutions where there is an
underlying historical process which records the ancestral histories of the
surviving population members. We could extend the construction of our
solutions to \eqref{pmn-spde} and \eqref{pmn-spdeV} to include such processes
but this gets a bit unwieldy. Instead we prove a comparison theorem for
supports of solutions of parabolic SPDE's (Proposition~\ref{prop:1}) which
allows us to derive these results from the corresponding property of solutions
of $\eqref{generalized}$ with $\sigma(u)=u^\gamma$. The latter property holds
for any solution since these solutions are known to be unique in law by
Mytnik~\cite{myt98w}.
The condition that $\gamma<3/4$ is required in Step 1 to ensure that with
reasonable probability, the $V$ particles born before time $s_i$ do not contribute to
the killing. Such killing, if it occurred, could lead to the
immediate annihilation of the $i$th seed with high probability. The bound on
$\gamma$ is also used in Steps 2 and 3 since otherwise the lower bound on
$\bar U^i_{s_i+t}(1)$ near $0$ will be $t^\beta$ for some $\beta>3/2$ which will be
of no use in keeping $R_t$ small for $t$ small.
Here is an outline of the paper. Section~\ref{sec:setup}
gives a careful description of the approximating solutions arising in
\eqref{pmn-spde}, \eqref{pmn-spdeV} and the various decompositions of these
processes. The actual construction of these solutions is carried out in
Section~\ref{sec:constr}. In Section~\ref{sec:inclexcl} an
inclusion-exclusion calculation (Lemma~\ref{lem:inclexcl}) reduces the
non-uniqueness result to a pair of Propositions (\ref{prop:2} and
\ref{prop:3}) which correspond to Steps 3 and an amalgamation of Steps 1 and
2, respectively. In Section~\ref{sec:prop3} Proposition~\ref{prop:3} is then
reduced to a sequence of 5 Lemmas, the main ones being Lemma~\ref{lem:baresc}
and Lemma~\ref{thetabnd}, corresponding to Steps 2 and 1, respectively.
Sections~\ref{sec:stochanal} and \ref{sec:spdegr} deal with the main parts of
the proof rooted in stochastic analysis including the proofs of
Lemma~\ref{lem:baresc} and Proposition~\ref{prop:2} in
Section~\ref{sec:stochanal}. Sections~\ref{sec:lem4.4} and \ref{sec:Kgrowth}
deal with the main parts of the proof involving qualitative properties of the
clusters including the proof of Lemma~\ref{thetabnd} in
Section~\ref{sec:Kgrowth}. Finally, Section~\ref{sec:suppcomp} gives the proof
of the comparison theorem for supports of solutions of certain SPDE's.
\section{Set-up of equations}\label{sec:setup}
\setcounter{equation}{0}
In what follows we assume that $\gamma\in [1/2,3/4)$, and we will carry out
the method outlined in the introduction.
Recall that ${\mathbb{N}}_\epsilon=\{1,\ldots,N_\epsilon\}$ where $N_\epsilon=\lfloor \epsilon^{-1}\rfloor$.
For any Polish space $\mathbf{E}$, let $D({\mathbb{R}}_+,\mathbf{E})$ be the Skorokhod space of right-continuous $\mathbf{E}$-valued paths with left limits in $\mathbf{E}$, and define
\begin{eqnarray*}
D^{\epsilon}({\mathbb{R}}_+,\mathbf{E})&=& D({\mathbb{R}}_+,\mathbf{E})\cap C({\mathbb{R}}_+\setminus \mathcal{G}_{\epsilon},\mathbf{E})\\
&=& \mbox{the space of cadlag $\mathbf{E}$-valued functions on ${\mathbb{R}}_+$, whose paths}\\
&& \mbox{ are continuous on any
time interval $[\frac{(i-1)\epsilon}{2}, \frac{i\epsilon}{2}),1\le i\le 2N_\epsilon$,
}\\
&&\mbox{ and on $[N_{\epsilon}\epsilon,\infty)$.}
\end{eqnarray*}
We will construct a sequence of processes
$\{(U^{i,\epsilon},V^{i,\epsilon})\,, i\in{\mathbb{N}}_{\epsilon}\}$ with sample paths in
$\left(C({\mathbb{R}}_+\setminus\mathcal{G}_{\epsilon},C^+_{\rm rap})\cap D^\epsilon({\mathbb{R}}_+,L^1({\mathbb{R}}))\right)^2$.
For each $\phi \in C_b^2({\mathbb{R}})$, w.p. $1$, $U^i,V^j$ (we will suppress $\epsilon$ in our notation) will satisfy the
following equations for all $t\ge 0$ and all $i,j\in\NN_\epsilon$.
Recall that $J^{x_i}$ was defined in (\ref{def:J}).
\begin{eqnarray}
\label{UVdefn}
&&\mbox{}\vspace*{-2cm}\left\{
\begin{array}{rcl}
U^i_t(\phi) &=& \langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)\\
&& +
\int_0^t \int_{{\mathbb{R}}}U(s,x)^{\gamma-1/2} U^i(s,x)^{1/2} \phi(x) W^{i,U}(ds,dx) \\
&& \mbox{}+ \int_0^t U^{i}_s(\frac{1}{2}\Delta \phi)\, ds - K^{i,U}_t(\phi),
\\
&&\mbox{}\\
V^j_t(\phi) &=& \langle J^{y_j},\phi\rangle\mathbf{1}(t\geq t_j) \\
&& + \int_0^t\int_{{\mathbb{R}}} V(s,x)^{\gamma-1/2} V^j(s,x)^{1/2} \phi(x) W^{j,V}(ds,dx) \\
&& \mbox{}+ \int_0^t V^{j}_s(\frac{1}{2}\Delta \phi)\, ds - K^{j,V}_t(\phi),
\\
&&\\
&&\mbox{}
{\rm with}\; U_t = \sum_i U^i_t,\;\;\;
V_t =\sum_i V^i_t,
\end{array}
\right.
\end{eqnarray}
where, as will be shown in Theorem~\ref{thm:1.1}, $U$ and $V$ have paths in $D^\epsilon({\mathbb{R}}_+,C^+_{\rm rap})$.
Here $W^{i, U}, W^{j,V}$, $i,j\in \NN_{\epsilon}$, are independent space-time white noises. $K^{i,U}, K^{j,V}$, and hence $K_t$ below, are all right-continuous nondecreasing $M_F({\mathbb{R}})$-valued processes representing
the mutual killing of the two kinds of particles, such that
\begin{eqnarray}
\label{eq:2.2}
\sum_i K^{i,U}_t = \sum_j K^{j,V}_t =: K_t\,,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:2.3}
U_t(x)V_t(x)= 0,\;\;\forall t\geq 0, \;x\in{\mathbb{R}}.
\end{eqnarray}
That is, $U$ and $V$ have disjoint supports and hence the same is true of $U^i$ and $V^j$ for all $i,j\in\NN_\epsilon$.
It is easy to see from \eqref{UVdefn} (set $\phi\equiv 1$) that $K^{i,U}_t=U^i_t = 0, \; t< s_i$ and $K^{j,V}_t=V^j_t=0,\; t<t_j$ for all $i,j \in \NN_{\epsilon}$.
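Indeed, taking $\phi\equiv 1$ in \eqref{UVdefn} shows that for $t<s_i$,
\[
U^i_t(1)+K^{i,U}_t(1)=\int_0^t \int_{{\mathbb{R}}}U(s,x)^{\gamma-1/2} U^i(s,x)^{1/2}\, W^{i,U}(ds,dx)
\]
is a nonnegative continuous local martingale started at $0$, hence identically $0$; as both terms on the left are nonnegative, each vanishes. The same argument applies to $V^j$ and $K^{j,V}$ for $t<t_j$.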
One can think of $U$ and $V$ as two populations with initial masses
immigrating at times $s_i\,, i\in {\mathbb{N}}_\epsilon$ and $t_j\,, j\in {\mathbb{N}}_\epsilon\,,$ respectively.
Condition (\ref{eq:2.3}) implies the presence of a ``hard killing'' mechanism
in which representatives of both populations annihilate each
other whenever they meet. The meaning of the ``hard killing'' notion will
become clearer when we will explain the construction of the equations as
limits of so-called soft-killing models.
We can regard $K^{i,U}$ and $K^{j,V}$ as the ``frozen'' mass that was killed
in corresponding populations due to the hard killing. If we reintroduce this
mass back we should get the model without killing. To this end let us
introduce the equations for ``killed'' populations which we denote by
$\widetilde{U}^i, \widetilde{V}^j$, and which will take values in the same path space as $U^i$, $V^j$. For each $\phi\in C_b^2({\mathbb{R}})$, w.p.~$1$ the following equations hold for all
$t\ge 0$ and $i,j\in\NN_\epsilon$:
\begin{eqnarray}
\label{tUVdefn}
\left\{
\begin{array}{rcl}
\widetilde{U}^i_t(\phi) &=& \int_0^t \int_{{\mathbb{R}}}\left[ \left(\widetilde{U}(s,x)+U(s,x)\right)^{2\gamma} - U(s,x)^{2\gamma}\right]^{1/2}
\sqrt{\frac{\widetilde{U}^i(s,x)}{\widetilde{U}(s,x)}} \phi(x) \widetilde{W}^{i,U}(ds,dx) \\
&& \mbox{}+ \int_0^t \widetilde{U}^{i}_s(\frac{1}{2}\Delta \phi)\, ds + K^{i,U}_t(\phi),
\\
&&
\\
\widetilde{V}^j_t(\phi) &=& \int_0^t\int_{{\mathbb{R}}} \left[ \left(\widetilde{V}(s,x)+V(s,x)\right)^{2\gamma} - V(s,x)^{2\gamma}\right]^{1/2}
\sqrt{\frac{\widetilde{V}^j(s,x)}{\widetilde{V}(s,x)}} \phi(x) \widetilde{W}^{j,V}(ds,dx) \\
&& \mbox{}+ \int_0^t \widetilde{V}^{j}_s(\frac{1}{2}\Delta \phi)\, ds + K^{j,V}_t(\phi),
\\
&&
\\
&&
{\rm with}\;\widetilde{U}_t = \sum_i \widetilde{U}^i_t,\;
\widetilde{V}_t = \sum_j \widetilde{V}^j_t\,,
\end{array}
\right.
\end{eqnarray}
where, as will be shown in Theorem~\ref{thm:1.1}, $\widetilde{U}$ and $\widetilde{V}$ have paths in $D^\epsilon({\mathbb{R}}_+,C^+_{\rm rap})$ and we define $\sqrt{0/0}=0$ in the stochastic integral. The white noises $\widetilde{W}^{i, U}$, $\widetilde{W}^{j,V}$, $i,j\in \NN_{\epsilon}$ are independent and also independent of
$\{ W^{i, U}, W^{j,V}, i,j\in \NN_{\epsilon}\}$. Again it is easy to see that
\begin{equation}\label{0early}
\widetilde{U}^i_t=0\hbox{ for }t<s_i\hbox{ and }\widetilde{V}_t^j=0\hbox{ for }t<t_j,\ i,j\in\NN_\epsilon.
\end{equation}
Then using stochastic calculus, we deduce that the
processes defined by $\bar{U}^i_t\equiv U^i_t+\widetilde{U}^i_t\,, \bar{V}^i_t\equiv V^i_t+\widetilde{V}^i_t$ satisfy the following equations for each $\phi$ as above, w.p.~1 for all $t\ge 0$, $i,j\in\NN_\epsilon$:
\begin{eqnarray}
\label{eq:2.5}
\left\{
\begin{array}{rcl}
\bar{U}^i_t(\phi) &=&\langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)+ \int_0^t \bar{U}^{i}_s(\frac{1}{2}\Delta \phi)\, ds\\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} \sqrt{U(s,x)^{2\gamma-1}U^i(s,x)+
\left(\bar{U}(s,x)^{2\gamma} - U(s,x)^{2\gamma}\right)\frac{\widetilde{U}^i(s,x)}{\widetilde{U}(s,x)}}\\
&&\mbox{}\hspace*{1.5cm}\times
\phi(x) \bar{W}^{i,U}(ds,dx),
\\
&&\\
\bar{V}^j_t(\phi) &=& \langle J^{y_j},\phi\rangle\mathbf{1}(t\geq t_j)+ \int_0^t \bar{V}^{j}_s(\frac{1}{2}\Delta \phi)\, ds
\\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} \sqrt{V(s,x)^{2\gamma-1}V^j(s,x)+
\left(\bar{V}(s,x)^{2\gamma} - V(s,x)^{2\gamma}\right)\frac{\widetilde{V}^j(s,x)}{\widetilde{V}(s,x)}}\\
&&\mbox{}\hspace*{1.5cm}\times
\phi(x) \bar{W}^{j,V}(ds,dx),
\\
&&\\
&&{\rm with}\;\bar{U}_t = \sum_i \bar{U}^i_t,
\bar{V}_t = \sum_j \bar{V}^j_t\,,
\end{array}
\right.
\end{eqnarray}
where $\{\bar{W}^{i,U},\bar{W}^{j,V}, i,j\in \NN_{\epsilon}\}$ is again a collection of independent white noises.
In spite of the complicated appearance of (\ref{eq:2.5}), for $\bar{U}, \bar{V}$ we easily get
\begin{eqnarray}
\label{eq:2.8}
\left\{
\begin{array}{rcl}
\bar{U}_t(\phi) &=& \int_0^t \int\phi(x)\eta^+_{\epsilon}(ds,dx) + \int_0^t \bar{U}_s(\frac{1}{2}\Delta \phi)\, ds\\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} \bar{U}(s,x)^{\gamma}
\phi(x) \bar{W}^{U}(ds,dx), \;\;t\geq 0,
\\
&&\mbox{}\\
\bar{V}_t(\phi) &=& \int_0^t\int \phi(x)\eta^-_{\epsilon}(ds,dx)+ \int_0^t \bar{V}_s(\frac{1}{2}\Delta \phi)\, ds
\\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} \bar{V}(s,x)^{\gamma}
\phi(x) \bar{W}^{V}(ds,dx), \;\;t\geq 0,
\end{array}
\right.
\end{eqnarray}
for independent white noises, $\bar{W}^{U}$ and $\bar{W}^{V}$.
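Indeed, summing \eqref{eq:2.5} over the clusters, the quadratic variation densities add up to
\[
\sum_{i}\Bigl[U(s,x)^{2\gamma-1}U^i(s,x)+\bigl(\bar U(s,x)^{2\gamma}-U(s,x)^{2\gamma}\bigr)\frac{\widetilde U^i(s,x)}{\widetilde U(s,x)}\Bigr]=\bar U(s,x)^{2\gamma},
\]
since $\sum_i U^i=U$ and $\sum_i\widetilde U^i=\widetilde U$, so the orthogonal martingale parts combine into a single white-noise integral with intensity $\bar U^{2\gamma}$; similarly for $\bar V$.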
One can easily derive from the proof of Theorem~1 of \cite{myt98w} that $(\bar U,\bar V)$ is unique in law (see Remark~\ref{rem:09_08_1} below).
Our next theorem claims existence of solutions to the above systems of equations. The filtration $(\mathcal{F}_t)$ will
always be right-continuous and such that
$\mathcal{F}_0$ contains the $P$-null sets in $\mathcal{F}$.
For any $T\geq 1$, the space $D^{\epsilon}([0,T],\mathbf{E})$ is defined in the
same way as $D^{\epsilon}({\mathbb{R}}_+,\mathbf{E})$, but for $\mathbf{E}$-valued functions on $[0,T]$.
For any function $f\in D({\mathbb{R}}_+\,, {\mathbb{R}})$, we set $\mathbf{\Delta} f(t)\equiv f(t)-f(t-)$, for any $t\geq 0$.
\begin{theorem}
\label{thm:1.1}
There exists a sequence $(U^i,V^i,\widetilde{U}^i, \widetilde{V}^i,\bar{U}^i,\bar{V}^i, K^{i,U},K^{i,V})_{i\in\NN_{\epsilon}}$ of processes
in
$$\left((C([0,T]\setminus\mathcal{G}_{\epsilon}, C^+_{\rm rap})\cap D^{\epsilon}([0,T], L^1({\mathbb{R}})))^4\times D^{\epsilon}({\mathbb{R}}_+,C^+_{\rm rap})^2\times D^{\epsilon}({\mathbb{R}}_+, M_F({\mathbb{R}}))^2\right)^{N_{\epsilon}}$$
which satisfy
(\ref{UVdefn})--(\ref{eq:2.8}). Moreover, $(U,V,\widetilde{U},\widetilde{V})\in D^\epsilon({\mathbb{R}}_+,C^+_{\rm rap})^4$, and
\begin{itemize}
\item [{\bf (a)}] For any $i\in \NN_{\epsilon}$,
$\bar{U}^i_{s_i+\cdot}\in C({\mathbb{R}}_+,C^+_{\rm rap})$, $\bar{V}^i_{t_i+\cdot}\in C({\mathbb{R}}_+,C^+_{\rm rap})$ and
$$\bar{U}^i(s,\cdot)=0, s<s_i,\;\;
\bar{V}^i(s,\cdot)=0, s<t_i.$$
\item[{\bf (b)}]
$K_\cdot$ only has jumps at times in $\mathcal{G}_{\epsilon}$, and
\begin{equation}\label{Kjumps}
\sup_t\mathbf{\Delta} K_t(1)\le \epsilon.
\end{equation}
\end{itemize}
\end{theorem}
In what follows we will call $\bar{U}^i$, $\bar{V}^i$ (respectively, $U^i$, $V^i$) the clusters of the processes $\bar{U}$, $\bar{V}$
(resp. $U$, $V$).
Now with all the processes in hand let us state the results which will imply the non-uniqueness
in~(\ref{spde}) with zero initial conditions.
First define
\begin{equation}
u_{\epsilon}(t):= U_t-V_t\in C_{\rm rap}
\end{equation}
and recall that $U_t,V_t$ implicitly depend on $\epsilon$.
Then it is easy to see from the above construction that $u_{\epsilon}$ satisfies the following SPDE:
\begin{eqnarray}
\label{eq:spde-u-ep}
\langle u_{\epsilon}(t),\phi\rangle&=&\sum_{i}\langle J^{x_i}\mathbf{1}(t\geq s_i),\phi\rangle-
\sum_{j}\langle J^{y_j}\mathbf{1}(t\geq t_j),\phi\rangle \\
\nonumber&& +\int_0^t \frac{1}{2}\langle u_{\epsilon}(s),\Delta\phi\rangle ds
+ \int_0^t\int |u_{\epsilon}(s,x)|^{\gamma} \phi(x) W(ds,dx)
\end{eqnarray}
for $\phi\in C^2_b({\mathbb{R}})$.
The following two propositions will imply Theorem~\ref{thm:mainresult}.
\begin{prop}
\label{prop:2.1} Let $\epsilon_n=\frac{1}{n}$. Then
$\{ u_{\epsilon_n}\}_{n}$ is tight in $D({\mathbb{R}}_+,C_{\rm rap})$. If $u$ is any limit point
as $\epsilon_{n_k}\downarrow 0$, then $u$ is a $C_{\rm rap}$-valued solution of the SPDE (\ref{spde}).
\end{prop}
The next proposition is just a restatement of Claim~\ref{claim1}.
\begin{prop}
\label{prop:2.2}
There exists $\delta_{\ref{prop:2.2}},\epsilon_{\ref{prop:2.2}}>0$ such that for all $\epsilon\in (0,\epsilon_{\ref{prop:2.2}}]$,
\[
P\left(\sup_{t\in[0,1]}\int U^\epsilon_t(x)\,dx>\delta_{\ref{prop:2.2}}\right) > \delta_{\ref{prop:2.2}}.
\]
\end{prop}
The proof of Proposition~\ref{prop:2.1} will be standard and may be found in Section~\ref{sec:spdegr}. Most of the paper is devoted to the proof of
Proposition~\ref{prop:2.2}.
\section{Outline of the proof of Proposition~\ref{prop:2.2}}\label{sec:inclexcl}
\setcounter{equation}{0}
We analyze the behaviour of the clusters $U^i, V^i$ and show that with
positive probability at least one of them survives. As in the previous
section, we suppress dependence on the parameter $\epsilon\in(0,1]$.
To make our analysis precise we need to introduce the event $A_i$ that the mass of the cluster
$\bar U^i$ reaches $1$ before the cluster dies. Define
\begin{eqnarray*}
\bar{\tau}_i &=&\inf\{t:\; \bar{U}^i_{s_i+t}(1)=1\},\\
A_i&\equiv& \{\bar{\tau}_i<\infty\},
\end{eqnarray*}
so that $\bar{\tau}_i$ is an $(\mathcal{F}_{s_i+t})$-stopping time.
Much of the analysis will be carried out conditionally on the event $A_i$, which occurs with positive probability, and so we define the conditional probability measure $Q_i$:
\begin{eqnarray}\label{Qdef}
Q_i(A) &=& P(A | A_i),\;\;\forall A\in \mathcal{F}.
\end{eqnarray}
We need the following elementary lemma whose proof is given in
Section~\ref{sec:stochanal}.
\begin{lemma}
\label{lem:3.2}
For all $1\le i,j\le N_\epsilon$, the events $A_i=A_i(\epsilon)$ satisfy
\begin{itemize}
\item[{\bf (a)}]
\qquad$P(A_i) = \epsilon$.
\item[{\bf (b)}]
\qquad$P(A_i\cap A_j) = \epsilon^2, \;\;i\not= j$.
\end{itemize}
\end{lemma}
A simple inclusion-exclusion lower bound on $P(\cup_{i=1}^{\lfloor 2^{-1}\epsilon^{-1}\rfloor}A_i)$ shows that
for $\epsilon\le 1/4$, with probability at least $3/16$,
at least one of the clusters $\bar U^i$ survives until it attains mass $1$.
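In detail, with $n=\lfloor 2^{-1}\epsilon^{-1}\rfloor$, Lemma~\ref{lem:3.2} and inclusion-exclusion give
\[
P\Bigl(\bigcup_{i=1}^{n}A_i\Bigr)\ge \sum_{i=1}^{n}P(A_i)-\sum_{1\le i<j\le n}P(A_i\cap A_j)=n\epsilon-\binom{n}{2}\epsilon^2\ge n\epsilon-\frac{(n\epsilon)^2}{2},
\]
and for $\epsilon\le 1/4$ we have $n\epsilon\in(1/2-\epsilon,1/2]\subseteq(1/4,1/2]$; since $x\mapsto x-x^2/2$ is increasing on $[0,1]$, the right-hand side is at least $1/4-1/32=7/32\ge 3/16$.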
We will focus on the corresponding $U^i$ and to show it is non-zero with positive probability (all uniformly in $\epsilon$), we will
establish a uniform (in $\epsilon$) escape rate. Set
\begin{equation}\label{betadef}\beta=\frac{3/2-\gamma}{2(1-\gamma)},
\end{equation}
and
note that $\beta<3/2$ iff $3/2-\gamma<3(1-\gamma)$, that is, iff $\gamma<3/4$; also $\beta=1$ when $\gamma=1/2$, matching Step 2 of the outline. Our escape rate depends on a parameter $\delta_1\in(0,1)$ (which will eventually be taken small enough depending on $\gamma$) and is encoded in the event
\[ B_i(t) = \left\{ U^i_{s_i+s}(1)\geq \frac{1}{2} s^{\beta+\delta_1},\;\forall s\in[\epsilon^{2/3}, t] \right\}.
\]
We denote the closed support of a measure $\mu$ on ${\mathbb{R}}$ by $S(\mu)$. Let
\[
T_R =\inf\{t:\; \| \bar{U}_t(\cdot)\|_{\infty}\vee\|\bar{V}_t(\cdot)\|_{\infty}> R\},
\]
so that $(T_R-s_i)^+$ is an $\mathcal{F}_{s_i+t}$-stopping time.
To localize the above escape rate we let $\delta_0\in(0,1/4]$ and define additional $(\mathcal{F}_{s_i+t})$-stopping times ($\inf\emptyset=\infty$) by
\begin{eqnarray*}
\rho_i^{\delta_0,\epsilon}= \rho_i&=&\inf\{ t: S(\bar U^i_{s_i+t})\not \subset
[x_i-\epsilon^{1/2}- t^{1/2-\delta_0}, x_i +\epsilon^{1/2}+t^{1/2-\delta_0}]\},
\\
H_i^{\delta_1,\epsilon}=H_i&=&\inf\{t\ge0: \bar{U}^i_{t+s_i}(1)<(t+\epsilon)^{\beta+\delta_1}\},\\
\theta^{\delta_0,\epsilon}_i=\theta_i&=&\inf\{t: K^{i,U}_{t+s_i}(1)> (t+\epsilon)^{3/2-2\delta_0}\},\\
v_i^{\delta_0,\delta_1,\epsilon}=v_i&=&\bar{\tau}_i\wedge H_{i}\wedge \theta_i\wedge \rho_i\wedge (T_R-s_i)^+.
\end{eqnarray*}
We now state the two key results and show how they lead to Proposition~\ref{prop:2.2}. The first result
is proved in Section~\ref{sec:stochanal} below using some stochastic analysis and change of measure arguments. The second is reduced to a sequence of Lemmas in Section~\ref{sec:prop3}.
\begin{prop}
\label{prop:2} There are $\delta_{\ref{prop:2}}(\gamma)>0$ and $p=p_{\ref{prop:2}}(\gamma)\in(0,1/2]$ such that if $0<2\delta_0\le\delta_1\le \delta_{\ref{prop:2}}$, then
\[ Q_i(B_i(t\wedge v_i))\geq 1-5t^{p},\;\;\hbox{ for all }t>0, \hbox{ and }\epsilon\in(0,1]. \]
\end{prop}
\begin{prop}
\label{prop:3}
For each $\delta_1\in(0,1)$ and small enough $\delta_0>0$, depending on $\delta_1$ and $\gamma$,
there exists a non-decreasing function $\delta_{\ref{prop:3}}(t)$, not depending on $\epsilon$, such that
\[ \lim_{t\downarrow 0} \delta_{\ref{prop:3}}(t)=0,\]
and for all $\epsilon,t\in(0,1]$,
\[ P\left(\bigcup_{i=1}^{tN_\epsilon} \big(\{v_i <t\}\cap A_i\big)\right) \leq t\delta_{\ref{prop:3}}(t).
\]
\end{prop}
With these two propositions we have the following lemma:
\begin{lemma}\label{lem:inclexcl}
Let $p=p_{\ref{prop:2}}$ and $\delta(t)=\delta_{\ref{prop:3}}(t)$.
Assume $t=t_{\ref{lem:inclexcl}}\in(0,1]$ is chosen so that $5t^{p}+t+\delta(t)\leq 1/2$. Then we have
\begin{eqnarray*}
P\left(\bigcup_{i=1}^{tN_\epsilon} B_i(t)\right)\geq \frac{t}{4},\;\;
\forall \epsilon\in (0,t/8].
\end{eqnarray*}
\end{lemma}
\paragraph{Proof} Choose $\delta_1>0$ as in Proposition~\ref{prop:2}, then $\delta_0\in(0,\delta_1/2]$ as in Proposition~\ref{prop:3}, and finally $t_{\ref{lem:inclexcl}}=t$ as above. Then we have
\begin{eqnarray*}
\lefteqn{ P\left(\bigcup_{i=1}^{tN_\epsilon} B_i(t)\right) }\\
&\geq&
P\left(\bigcup_{i=1}^{tN_\epsilon} B_i(t\wedge v_i)\cap A_i\cap \{v_i\geq t\}\right)\\
&\geq& P\left(\bigcup_{i=1}^{tN_\epsilon}
B_i(t\wedge v_i)\cap A_i\right)- P\left(\bigcup_{i=1}^{tN_\epsilon}
A_i\cap \{v_i< t\}\right)\\
&\geq& \sum_{i=1}^{tN_\epsilon} P\left(
B_i(t\wedge v_i)\cap A_i\right) - \sum_{i=1}^{tN_\epsilon}
\sum_{j=1, j\not=i}^{tN_\epsilon}P\left(
A_i\cap A_j\right) \\
&&\mbox{} - P\left(\bigcup_{i=1}^{tN_\epsilon}
A_i\cap \{v_i< t\}\right).
\end{eqnarray*}
Recall the definition of the conditional law $Q_i$ and use Lemma~\ref{lem:3.2}(b) to see that the above is at least
\begin{eqnarray*}
&\sum_{i=1}^{tN_\epsilon}& Q_i\left(
B_i(t\wedge v_i)\right)P(A_i) - t^2N_\epsilon^2\epsilon^2 \\
&&\mbox{} - P\left(\bigcup_{i=1}^{tN_\epsilon}
A_i\cap \{v_i< t\}\right)
\\
&\geq&\epsilon(tN_\epsilon-1)- 5N_\epsilon\epsilon t^{1+p_{\ref{prop:2}}} - t^2 - t\delta_{\ref{prop:3}}(t)\\
&\geq&t[1-5t^{p_{\ref{prop:2}}}-t-\delta_{\ref{prop:3}}(t)]-2\epsilon,
\end{eqnarray*}
where the next to last inequality follows by Lemma~\ref{lem:3.2}(a) and Propositions~\ref{prop:2} and \ref{prop:3}, and the last one uses $1-\epsilon\le\epsilon N_\epsilon\le 1$.
Our choice of $t=t_{\ref{lem:inclexcl}}$ shows that for $\epsilon\le t_{\ref{lem:inclexcl}}/8$, the above is at least $$\frac{t}{2}-\frac{t}{4}=\frac{t}{4}.$$
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\paragraph{Proof of Proposition~\ref{prop:2.2}} It follows from the final part of \eqref{UVdefn} that for all $t\ge 0$, $\int U^\epsilon_t(x)\,dx\ge \max_i\int U^{i,\epsilon}_t(x)\,dx$. The proposition follows
immediately from Lemma \ref{lem:inclexcl}.
\section{Proof of Proposition~\ref{prop:3}}\label{sec:prop3}
\setcounter{equation}{0}
In this section we reduce the proof of Proposition~\ref{prop:3} to five lemmas which will be proved in Sections \ref{sec:stochanal}--\ref{sec:Kgrowth} below.
The bounds in this section may depend on the parameters $\delta_0$ and $\delta_1$ but not $\epsilon$.
We introduce
\begin{equation}\label{bardef}
\bar\delta=\bar\delta(\gamma)=\frac{1}{3}\left(\frac{3}{2}-2\gamma\right)\in(0,1/6].
\end{equation}
\begin{lemma}\label{lem:baresc} For $\delta_0>0$ sufficiently small, depending
on $\delta_1,\gamma$, there is a function \break
$\eta_{\ref{lem:baresc}}:{\mathbb{R}}_+\to[0,1]$ so
that $\eta_{\ref{lem:baresc}}(t)\rightarrow 0$ as $t\downarrow 0$, and for all $t>0$ and $\epsilon\in(0,1]$,
$$ Q_i(H_i\leq \bar{\tau}_i\wedge \rho_i\wedge t)\leq \eta_{\ref{lem:baresc}}(t)+8\epsilon^{\delta_1}.$$
\end{lemma}
\begin{lemma}\label{bartaubnd} For all $t>0$ and $\epsilon\in(0,1]$,
$$ Q_i(\bar{\tau}_i\leq t\wedge (T_R-s_i)^+)\leq 2\gamma R^{2\gamma-1}t+\epsilon.$$
\end{lemma}
\begin{lemma}\label{thetabnd} If $0<\delta_0\le \bar\delta$, there is a
constant $c_{\ref{thetabnd}}$, depending on $\gamma$ and $\delta_0$, so that
$$ Q_i(\theta_i< \rho_i\wedge t)\leq c_{\ref{thetabnd}}(t\vee\epsilon)^{\delta_0}\hbox{ for all }\epsilon,t\in(0,1] \hbox{ and }s_i\le t.$$
\end{lemma}
It remains to handle the $\rho_i$ and $T_R$. This we do under the probability $P$.
\begin{lemma} \label{rhobnd} There is a constant $c_{\ref{rhobnd}}\ge 1$, depending on $\gamma$ and $\delta_0$, so that
$$P\left(\bigcup_{i=1}^{pN_\epsilon}\{\rho_i\le t\}\right)\le c_{\ref{rhobnd}}(t\vee\epsilon)\,p\,\mathbf{1}(p\ge \epsilon)\quad\hbox{for all $\epsilon,p,t\in(0,1]$}.$$
\end{lemma}
\begin{lemma}\label{TRbnd} For any $\epsilon_0>0$ there is a function
$\delta_{\ref{TRbnd}}:(0,2]\to{\mathbb{R}}_+$ so that \break
$\lim_{t\to0}\delta_{\ref{TRbnd}}(t)=0$ and
\[P(\sup_{s< t,x\in{\mathbb{R}}}\bar U(s,x)\vee\bar V(s,x)>t^{-2-\epsilon_0})\le t\delta_{\ref{TRbnd}}(t)\hbox{ for all }\epsilon\in(0,1], t\in(0,2].\]
\end{lemma}
Assuming the above five results it is now very easy to give the\\
\noindent{\bf Proof of Proposition~\ref{prop:3}} For $\delta_1\in (0,1)$ choose $\delta_0>0$ small enough so that the conclusions of Lemmas~\ref{lem:baresc} and \ref{thetabnd} hold. Then for $0<t\le 1\le R$ and $0<\epsilon\le 1$, using Lemma~\ref{rhobnd} with $p=t$, we have
\begin{align*}
P&(\cup_{i=1}^{tN_\epsilon}\{v_i< t\}\cap A_i)\\
&\le P(\cup_{i=1}^{tN_\epsilon}\{T_R< t+s_i\})+P(\cup_{i=1}^{tN_\epsilon}\{\bar{\tau}_i<t\wedge (T_R-s_i)^+\}\cap A_i)\\
&+P(\cup_{i=1}^{tN_\epsilon}\{\rho_i< t\})
+P(\cup_{i=1}^{tN_\epsilon}\{H_i< \bar{\tau}_i\wedge \rho_i\wedge t\}\cap A_i)+P(\cup_{i=1}^{tN_\epsilon}\{\theta_i< \rho_i\wedge t\}\cap A_i)\\
&\le P(T_R< 2t)+\sum_{i=1}^{tN_\epsilon}Q_i(\bar{\tau}_i\le t\wedge (T_R-s_i)^+)P(A_i)+c_{\ref{rhobnd}}(t\vee\epsilon)t1(t\ge\epsilon)\\
&\ \ +\sum_{i=1}^{tN_\epsilon}Q_i(H_i< \bar{\tau}_i\wedge\rho_i\wedge t)P(A_i)+\sum_{i=1}^{tN_\epsilon} Q_i(\theta_i< \rho_i\wedge t)P(A_i).
\end{align*}
Now apply Lemma~\ref{lem:3.2} and Lemmas~\ref{lem:baresc}-\ref{thetabnd} to bound the above by
\begin{align*}
P\left(\sup_{s<2t,x\in{\mathbb{R}}}\bar U(s,x)\vee \bar V(s,x)> R\right)&+2\gamma R^{2\gamma-1}t^2 +\epsilon t+c_{\ref{rhobnd}}t^2\\
&+t\eta_{\ref{lem:baresc}}(t)+t8\epsilon^{\delta_1}+tc_{\ref{thetabnd}}(t\vee\epsilon)^{\delta_0}.\end{align*}
We may assume without loss of generality that $\eta_{\ref{lem:baresc}}$ is
non-decreasing and $t\ge \epsilon$ (or else the left-hand side is $0$). Set $R=t^{-2-\epsilon_0}$, where $\epsilon_0>0$ is
chosen so that $3-4\gamma-\epsilon_0(2\gamma-1)>0$ and use Lemma~\ref{TRbnd} to
obtain the required bound with
\[
\delta_{\ref{prop:3}}(t)=2\delta_{\ref{TRbnd}}(2t)+2\gamma (2t)^{3-4\gamma-\epsilon_0(2\gamma-1)}+2c_{\ref{rhobnd}}t+\eta_{\ref{lem:baresc}}(t)+8t^{\delta_1}+c_{\ref{thetabnd}}t^{\delta_0}.
\]
This finishes the proof of Proposition~\ref{prop:3}. \hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Proofs of Proposition~\ref{prop:2} and Lemmas~\ref{lem:baresc} and \ref{bartaubnd}}\label{sec:stochanal}
\setcounter{equation}{0}
Define
\[\bar\tau_i(0)=\inf\{t\ge 0:\bar{U}^i_{s_i+t}(1)=0\},\]
and
\[\bar\tau_i(0,1)=\bar\tau_i(0)\wedge \bar\tau_i,\]
where $\bar\tau_i$ was defined at the beginning of Section \ref{sec:inclexcl}.
It follows from \eqref{eq:2.5} that
\begin{equation}\label{barmassmart}
\bar{U}^i_{t+s_i}(1)=\epsilon+\bar{M}^i_t,
\end{equation}
where $\bar{M}^i$ is a continuous local $(\mathcal{F}_{s_i+t})$-martingale starting at $0$ at $t=0$ and satisfying
\begin{eqnarray}\label{barMsqfn}
\langle\bar{M}^i\rangle_t&=&\int_{s_i}^{s_i+t}\int U(s,x)^{2\gamma-1}U^i(s,x)\\
\nonumber&&\phantom{\int_0^t\int}+(\bar{U}(s,x)^{2\gamma}-U(s,x)^{2\gamma})\frac{\tilde U^i(s,x)}{\tilde U(s,x)}\,dx\,ds.
\end{eqnarray}
\begin{lemma}\label{hittime}
There is a $c_{\ref{hittime}}=c_{\ref{hittime}}(\gamma)>0$ so that
\[P(\bar\tau_i(0)>t)\le c_{\ref{hittime}}\epsilon^{2-2\gamma}t^{-1}\hbox{ for all }t>0.\]
\end{lemma}
\paragraph{Proof} It follows from \eqref{barMsqfn} that
\begin{align}\label{barsqfnbnd}
\nonumber&\frac{d\langle\bar{M}^i\rangle(t)}{dt}\\
\nonumber&=\int U(s_i+t,x)^{2\gamma-1}U^i(s_i+t,x) \\
\nonumber&\qquad +(\bar U(s_i+t,x)^{2\gamma}-U(s_i+t,x)^{2\gamma})\
\frac{\tilde U^i(s_i+t,x)}{\tilde U(s_i+t,x)}\,dx\\
\nonumber&\ge\int U(s_i+t,x)^{2\gamma-1}U^i(s_i+t,x)+\tilde U(s_i+t,x)^{2\gamma-1}\tilde U^i(s_i+t,x)\,dx\\
\nonumber&\ge\int U^i(s_i+t,x)^{2\gamma}+\tilde U^i(s_i+t,x)^{2\gamma}\,dx\\
&\ge2^{1-2\gamma}\int \bar{U}^i(s_i+t,x)^{2\gamma}\,dx.
\end{align}
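In the above chain we used $U\ge U^i$ and $\widetilde U\ge \widetilde U^i$, together with the elementary inequalities, valid for $a,b\ge 0$ and $2\gamma\ge 1$,
\[
(a+b)^{2\gamma}\ge a^{2\gamma}+b^{2\gamma}\qquad\hbox{and}\qquad a^{2\gamma}+b^{2\gamma}\ge 2^{1-2\gamma}(a+b)^{2\gamma};
\]
the first (superadditivity) is applied with $(a,b)=(U,\widetilde U)$, recalling that $\bar U=U+\widetilde U$, and the second (convexity of $x\mapsto x^{2\gamma}$) with $(a,b)=(U^i,\widetilde U^i)$.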
If $\gamma>1/2$, the result now follows from Lemma 3.4 of \cite{mp92}.
If $\gamma=1/2$, then one can construct a time scale $\tau_t$ satisfying
$\tau_t\le t$ for $\tau_t\le \bar \tau_i(0)$, under which
$t\to \bar U^i_{s_i+\tau_t}(1)$ becomes Feller's continuous state branching
diffusion. The required result then follows from well-known bounds on the
extinction time for the continuous state branching process (e.g. see (II.5.12)
in \cite{per02}).
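Alternatively, for $\gamma=1/2$ the bound is explicit: Feller's diffusion $dX_t=\sqrt{X_t}\,dB_t$ with $X_0=\epsilon$ has Laplace transform $E[e^{-\lambda X_t}]=\exp\bigl(-\frac{\epsilon\lambda}{1+\lambda t/2}\bigr)$, and letting $\lambda\to\infty$,
\[
P(X_t>0)=1-e^{-2\epsilon/t}\le \frac{2\epsilon}{t},
\]
in agreement with the exponent $\epsilon^{2-2\gamma}=\epsilon$ in the statement.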
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{prop}\label{girs}\quad
$$Q_i(A)=\int_A\frac{\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1)}{\epsilon}\,dP,\quad\hbox{ for all }A\in\mathcal{F}_{s_i+t},\ t\ge 0.$$
\end{prop}
\paragraph{Proof} Since $\bar\tau_i(0,1)<\infty$ a.s. (by the previous Lemma) and $\bar U^i(1)$ remains at $0$ when it hits $0$, we have
\begin{equation}\label{tau1}1(\bar\tau_i<\infty)=\bar{U}^i_{s_i+\bar\tau_i(0,1)}(1)\quad\hbox{a.s.}
\end{equation}
By considering $\bar\tau_i(0,1)\le t$ and $\bar\tau_i(0,1)>t$ separately we see that
\begin{equation}\label{tau2}
\bar{U}^i_{s_i+(\bar\tau_i(0,1)\wedge t)}(1)=\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1)\hbox{ a.s. on }\{\bar\tau_i>t\}.
\end{equation}
If $A\in\mathcal{F}_{s_i+t}$, then
\begin{eqnarray}\label{pdec}
\lefteqn{ P(A,\bar\tau_i<\infty) }\\
\nonumber&=&P(A,\bar\tau_i\le t)+P(A,t<\bar\tau_i<\infty)\\
\nonumber&=&\int1(A,\bar\tau_i\le t)\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1)\,dP+E(1(A,\bar\tau_i>t)P(\bar\tau_i<\infty|\mathcal{F}_{s_i+t})).
\end{eqnarray}
By \eqref{tau1} and \eqref{tau2} on $\{\bar\tau_i>t\}$,
\begin{eqnarray*}
P(\bar\tau_i<\infty|\mathcal{F}_{s_i+t})&=&E(\bar{U}^i_{s_i+\bar\tau_i(0,1)}(1)|\mathcal{F}_{s_i+t})\\
&=&\bar{U}^i_{s_i+(\bar\tau_i(0,1)\wedge t)}(1)\\
&=&\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1).
\end{eqnarray*}
Then from \eqref{pdec} we conclude that
\begin{equation}\label{girs1}
P(A,\bar\tau_i<\infty)=\int_A\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1)\,dP.
\end{equation}
If $A=\Omega$ we get
\begin{equation}\label{girs2}
P(\bar\tau_i<\infty)=E(\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1))=\bar{U}^i_{s_i}(1)=\epsilon.
\end{equation}
Taking ratios in the last two equalities, we see that
\begin{equation*}
Q_i(A)=\int_A\bar{U}^i_{s_i+(\bar\tau_i\wedge t)}(1)/\epsilon\,dP,
\end{equation*}
as required.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
\noindent{\bf Proof of Lemma~\ref{lem:3.2}} (a) is immediate from \eqref{girs2}.
\noindent (b) Assume $i<j$. The orthogonality of the bounded continuous $(\mathcal{F}_t)$-martingales
$\bar U^i_{t\wedge(s_i+\bar\tau_i(0,1))}(1)$ and $\bar U^j_{t\wedge(s_j+\bar\tau_j(0,1))}(1)$ (see \eqref{eq:2.5}) shows that
\begin{align}
\label{orthoU}
E\left[\bar U^i_{s_i+\bar\tau_i(0,1)}(1)\bar U^j_{s_j+\bar\tau_j(0,1)}(1)|\mathcal{F}_{s_j}\right]&\mathbf{1}(s_i+\bar\tau_i(0,1)>s_j) \\
&= \bar U^i_{s_j}(1)\epsilon\mathbf{1}(s_i+\bar\tau_i(0,1)>s_j). \nonumber
\end{align}
By first using \eqref{tau1} and then \eqref{orthoU}, we have
\begin{align*}
P(A_i\cap A_j)&=E\left[\bar U^i_{s_i+\bar\tau^i(0,1)}(1)\bar U^j_{s_j+\bar\tau^j(0,1)}(1)\right]\\
&=E\Bigl[\bar U^i_{s_i+\bar\tau^i(0,1)}(1)\mathbf{1}(s_i+\bar\tau_i(0,1)\le s_j)E\left[\bar U^j_{s_j+\bar\tau^j(0,1)}(1)|\mathcal{F}_{s_j}\right]\Bigr]\\
&\quad +E\Bigl[E\left[\bar U^i_{s_i+\bar\tau^i(0,1)}(1)\bar U^j_{s_j+
\bar\tau^j(0,1)}(1)|\mathcal{F}_{s_j}\right]\mathbf{1}(s_i+\bar\tau_i(0,1)>s_j)\Bigr]\\
&=E\left[\bar U^i_{(s_i+\bar\tau^i(0,1))\wedge s_j}(1)\mathbf{1}(s_i+\bar\tau_i(0,1)\le s_j)\epsilon\right]\\
&\qquad+E\left[\bar U^i_{s_j}(1)\epsilon\mathbf{1}(s_i+\bar\tau_i(0,1)>s_j)\right]\\
&=E\left[\bar U^i_{(s_i+\bar\tau^i(0,1))\wedge s_j}(1)\right]\epsilon=\epsilon^2.
\end{align*}
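Since $P(A_i)=\epsilon$ for each $i$ by part (a), this says exactly that $P(A_i\cap A_j)=P(A_i)P(A_j)$; that is, the events $\{A_i\}$ are pairwise independent.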
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
\noindent{\bf Proof of Lemma~\ref{lem:baresc}} Clearly $\bar M^i_{t\wedge\bar\tau_i}$ is a bounded $(\mathcal{F}_{s_i+t})$-martingale under $P$. Girsanov's theorem (see Theorem VIII.1.4 of Revuz and Yor \cite{ry99}) shows that
\begin{equation}\label{girsdec}\bar M^i_{t\wedge\bar\tau_i}=\bar M^{i,Q}_t+\int_0^{t\wedge\bar{\tau}_i}\bar U^i_{s_i+s}(1)^{-1}\,d\langle \bar M^i \rangle_s,\end{equation}
where $\bar M^{i,Q}$ is an $(\mathcal{F}_{s_i+t})$-local martingale under $Q_i$ such that $\langle \bar M^{i,Q}\rangle_t=\langle \bar M^i\rangle_{t\wedge\bar\tau_i}$.
If $\bar X_t=\bar U^i_{s_i+(t\wedge \bar{\tau}_i)}(1)$, for
\[t\le \int_0^{\bar{\tau}_i}\bar X_s^{-1}d\langle \bar M^i\rangle_s\equiv R_i,\]
define $\tau_t$ by
\begin{equation}\label{taudef}\int_0^{\tau_t}\bar X_s^{-1}d\langle \bar M^i\rangle_s=t.\end{equation}
Since $\bar X_s>0$ and $\frac{d\langle \bar M^i\rangle_s}{ds}>0$ for all $0\le s\le \bar{\tau}_i$ $Q_i$-a.s.\ (see \eqref{barMsqfn}), this uniquely defines $\tau$ under $Q_i$ as a strictly increasing continuous function on $[0,R_i]=[0,\tau^{-1}(\bar{\tau}_i)]$. By differentiating \eqref{taudef} we see that
\begin{equation}\label{diffid1}
\frac{d}{dt}(\langle \bar M^i \rangle\circ\tau)(t)=\bar X(\tau_t),\ t\le \tau^{-1}(\bar{\tau}_i).
\end{equation}
Let $N_t=\bar M^{i,Q}(\tau_t)$, so that
\[Z_t\equiv\bar X (\tau_t)=\epsilon+N_t+t, \hbox{ for }t\le \tau^{-1}(\bar{\tau}_i),\]
and by \eqref{diffid1} for $t$ as above
\[\langle N\rangle_t=\langle \bar M^i\rangle(\tau_t)=\int_0^t Z_s\,ds.\]
Therefore we can extend the continuous local martingale $N(t\wedge \tau^{-1}(\bar{\tau}_i))$ for $t>\tau^{-1}(\bar{\tau}_i)$ so that $4Z_t$ is the square of a $4$-dimensional Bessel process (see Section XI.1 of Revuz and Yor \cite{ry99}). By the escape rate for $4Z$ (see Theorem 5.4.6 of Knight \cite{kni81}) and a comparison theorem for SDE (Thm. V.43.1 of \cite{rw87}) there is a non-decreasing $\eta_{\delta_0}:{\mathbb{R}}_+\to[0,1]$ so that $\eta_{\delta_0}(0+)=0$ and if $T_Z=\inf\{t:Z_t=1\}$, and
\[\Gamma(\epsilon,\delta_0)=\inf_{0<t\le T_Z}\frac{Z(t)}{t^{1+\delta_0}},\]
then
\begin{equation}\label {gambnd}
\sup_{0<\epsilon\le 1}Q_i(\Gamma(\epsilon,\delta_0)\le r)\le \eta_{\delta_0}(r).
\end{equation}
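For the reader's convenience we record why $4Z$ is a squared Bessel process of dimension four: for $t\le \tau^{-1}(\bar\tau_i)$ we have $4Z_t=4\epsilon+4N_t+4t$ with $d\langle 4N\rangle_t=16Z_t\,dt=4(4Z_t)\,dt$, so $Y=4Z$ solves the ${\rm BESQ}(4)$ equation $dY_t=2\sqrt{Y_t}\,dB_t+4\,dt$ for an appropriate Brownian motion $B$, and the extension beyond $\tau^{-1}(\bar\tau_i)$ simply continues this equation.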
Clearly $T_Z=\tau^{-1}(\bar\tau_i)$ and so
\[
\inf_{0<u\le \bar{\tau}_i}\frac{\bar X(u)}{\tau^{-1}(u)^{1+\delta_0}}=\inf_{0<t\le T_Z}\frac{\bar X(\tau_t)}{t^{1+\delta_0}}=\Gamma(\epsilon,\delta_0).
\]
That is,
\begin{equation}\label{barXbnd1}
\bar X(u)\ge \Gamma(\epsilon,\delta_0)\tau^{-1}(u)^{1+\delta_0}\quad\hbox{for all }0<u\le \bar{\tau}_i.
\end{equation}
To get a lower bound on $\tau^{-1}(u)$, use \eqref{barsqfnbnd} to see that for $s<\rho_i\wedge \bar{\tau}_i$,
\begin{eqnarray*}
\frac{d\langle \bar M^i\rangle_s}{ds}
&\ge& 2^{1-2\gamma}\int1(x_i-\epsilon^{1/2}-s^{(1/2)-\delta_0}\le x\le x_i+\epsilon^{1/2}+s^{(1/2)-\delta_0}) \\
&&\hspace{2.7in} \times\,\bar U^i(s_i+s,x)^{2\gamma}\,dx\\
&\ge& 2^{1-2\gamma}\left[2(\epsilon^{1/2}+s^{(1/2)-\delta_0})\right]^{1-2\gamma}\bar X(s)^{2\gamma},
\end{eqnarray*}
where the bound on $s$ is used in the last line. Therefore for $\epsilon/2\le s<\rho_i\wedge\bar{\tau}_i$ there is a $c_1(\gamma)>0$ so that
\begin{eqnarray*}
\frac{d\tau^{-1}(s)}{ds}&=&\bar X_s^{-1}\frac{d\langle \bar M^i\rangle_s}{ds}\\
&\ge &c_1(\gamma)s^{((1/2)-\delta_0)(1-2\gamma)}\bar X_s^{2\gamma-1}\\
&\ge & c_1(\gamma)\Gamma(\epsilon,\delta_0)^{2\gamma-1}s^{((1/2)-\delta_0)(1-2\gamma)}\tau^{-1}(s)^{(2\gamma-1)(1+\delta_0)},
\end{eqnarray*}
where \eqref{barXbnd1} is used in the last line. Therefore if $\epsilon\le t\le \rho_i\wedge \bar{\tau}_i$, then
\[\int_{\epsilon/2}^t\frac{d\tau^{-1}(s)}{\tau^{-1}(s)^{(2\gamma-1)(1+\delta_0)}}\ge c_1(\gamma)\Gamma(\epsilon,\delta_0)^{2\gamma-1}\int_{\epsilon/2}^t s^{((1/2)-\delta_0)(1-2\gamma)}\,ds.\]
If $\delta'_0=\delta_0(2\gamma-1)$, this in turn gives
\begin{eqnarray*}
\tau^{-1}(t)^{2-2\gamma-\delta_0'}&\ge & c_1(\gamma)\Gamma(\epsilon,\delta_0)^{2\gamma-1} \Bigl[t^{1+(\frac{1}{2}-\delta_0)(1-2\gamma)}-\Bigl(\frac{\epsilon}{2}\Bigr)^{1+(\frac{1}{2}-\delta_0)(1-2\gamma)}\Bigr]\\
&\ge &c_2(\gamma)\Gamma(\epsilon,\delta_0)^{2\gamma-1}t^{(3/2)-\gamma+\delta'_0}.
\end{eqnarray*}
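Here the left-hand side was integrated exactly, using $1-(2\gamma-1)(1+\delta_0)=2-2\gamma-\delta'_0>0$, while on the right-hand side we used $1+(\frac{1}{2}-\delta_0)(1-2\gamma)=\frac{3}{2}-\gamma+\delta'_0$ together with $(\epsilon/2)^{e}\le 2^{-e}t^{e}$ for $t\ge\epsilon$, where $e=\frac{3}{2}-\gamma+\delta'_0$; the resulting constants are absorbed into $c_2(\gamma)$.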
We have shown that if $\beta(\delta_0)=\frac{(3/2)-\gamma+\delta'_0}{2-2\gamma-\delta'_0}$, then for $\epsilon\le t\le \rho_i\wedge\bar{\tau}_i$,
\begin{eqnarray*}
\tau^{-1}(t)&\ge&c_2(\gamma)^{1/(2-2\gamma-\delta'_0)}\Gamma(\epsilon,\delta_0)^{\frac{2\gamma-1}{2-2\gamma-\delta'_0}}t^{\beta(\delta_0)}\\
&\ge&c_2(\gamma)^{1/(2-2\gamma-\delta'_0)} (\Gamma(\epsilon,\delta_0)\wedge1)^2 t^{\beta(\delta_0)},
\end{eqnarray*}
where $\delta'_0<1/4$ is used in the last line.
Recall the definition of the constant $\beta\in[1,\frac{3}{2})$ from \eqref{betadef}. Use the above in \eqref{barXbnd1} to see that there is a $c_3(\gamma)\in(0,1)$ so that for $\epsilon\le t\le \rho_i\wedge\bar{\tau}_i\wedge 1$,
\begin{eqnarray*}
\bar X(t)&\ge &[c_3(\gamma)(\Gamma(\epsilon,\delta_0)\wedge 1)]^4t^{\beta(\delta_0)(1+\delta_0)}\\
&>& (2t)^{\beta+\delta_1},
\end{eqnarray*}
provided that $c_3(\gamma)(\Gamma(\epsilon,\delta_0)\wedge 1)> 2t^{\delta_0}$ and $\delta_0$ is chosen small enough depending on $\delta_1$ and $\gamma$. By \eqref{gambnd} we conclude that for $t\le 1$, and $\epsilon\in(0,1]$,
\begin{align}\label{escaway0}Q_i&(\bar X_s\le (2s)^{\beta+\delta_1}\hbox{ for some }\epsilon\le s\le \rho_i\wedge\bar\tau_i\wedge t)\\
\nonumber&\le Q_i(\Gamma(\epsilon,\delta_0)\wedge 1\le2 t^{\delta_0}/c_3(\gamma))\\
\nonumber&\le \eta_{\delta_0}( 2t^{\delta_0}/c_3(\gamma))+1(2t^{\delta_0}\ge c_3(\gamma))\equiv\eta_{\ref{lem:baresc}}(t).
\end{align}
The above inequality is trivial for $t>1$ as then the right-hand side is at least $1$.
Next note that since $Z_t=\bar X(\tau_t)$ for $t\le T_Z$, $\bar X_u\equiv 1$ for $u\ge \bar\tau_i$, and $4Z$ has scale function $s(x)=-x^{-1}$ (see (V.48.5) in Rogers and Williams \cite{rw87}), we see that for $\epsilon^{\delta_1}\le2^{-\beta-\delta_1}$,
\begin{align}
\nonumber Q_i(\bar X_t\le (2\epsilon)^{\beta+\delta_1}&\hbox{ for some }t\ge 0)\le Q_i(4Z\hbox{ hits }4(2\epsilon)^{\beta+\delta_1}\hbox{ before }4)\\
\nonumber&=\frac{s(4)-s(4\epsilon)}{s(4)-s(4\cdot 2^{\beta+\delta_1}\epsilon^{\beta+\delta_1})}\\
\nonumber&=\frac{1-\epsilon}{2^{-\beta-\delta_1}\epsilon^{1-\beta-\delta_1}-\epsilon}\\
\nonumber&=\frac{1-\epsilon}{2^{-\beta-\delta_1}\epsilon^{-\delta_1}(\epsilon^{1-\beta}-2^{\beta+\delta_1}\epsilon^{\delta_1+1})}\\
\nonumber&\le \frac{1-\epsilon}{2^{-\beta-\delta_1}\epsilon^{-\delta_1}(\epsilon^{1-\beta}-\epsilon)}\\
\label{escnr0}&\le 2^{\beta+\delta_1}\epsilon^{\delta_1}\le 8\epsilon^{\delta_1}.
\end{align}
The above bound is trivial if $\epsilon^{\delta_1}>2^{-\beta-\delta_1}$.
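Note also that the assumption $\epsilon^{\delta_1}\le 2^{-\beta-\delta_1}$ guarantees $4(2\epsilon)^{\beta+\delta_1}\le 4\epsilon$, since $2^{\beta+\delta_1}\epsilon^{\beta+\delta_1-1}=2^{\beta+\delta_1}\epsilon^{\beta-1}\epsilon^{\delta_1}\le 1$ (recall $\beta\ge 1$ and $\epsilon\le 1$), so the hitting problem considered above is non-degenerate.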
Combine \eqref{escaway0} and \eqref{escnr0} to conclude that
\begin{align*}
Q_i&(\bar X_s\le (s+\epsilon)^{\beta+\delta_1}\hbox{ for some }0\le s\le \rho_i\wedge\bar\tau_i\wedge t)\\
&\le Q_i(\bar X_s\le (2s)^{\beta+\delta_1}\hbox{ for some }\epsilon\le s\le \rho_i\wedge\bar\tau_i\wedge t)\\
&\quad +Q_i(\bar X_s\le (2\epsilon)^{\beta+\delta_1}\hbox{ for some }0\le s\le\epsilon)\\
&\le \eta_{\ref{lem:baresc}}(t)+8\epsilon^{\delta_1}.
\end{align*}
The result follows.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
\noindent{\bf Proof of Lemma \ref{bartaubnd}} As in the previous proof we set
\[\bar X_t=\bar U^i_{s_i+(t\wedge\bar{\tau}_i)}(1)=\epsilon+\bar M^i_t.\]
From \eqref{girsdec} we have under $Q_i$,
\begin{equation}\label{Xdec1}
\bar X_t=\epsilon+\bar M_t^{i,Q}+\int_0^{t\wedge\bar{\tau}_i}\bar X_s^{-1}d\langle \bar M^i\rangle_s,
\end{equation}
where $\bar M^{i,Q}$ is an $(\mathcal{F}_{s_i+t})$-local martingale under $Q_i$. Therefore $\bar X$ is a bounded non-negative submartingale under $Q_i$ and by the weak $L^1$ inequality
\begin{eqnarray}
\nonumber Q_i(\bar{\tau}_i\le t\wedge (T_R-s_i)^+)&=&Q_i\left(\sup_{s\le t\wedge (T_R-s_i)^+} \bar X_s\ge 1\right)\\
\label{meanbnd}&\le &\int \bar X_{t\wedge (T_R-s_i)^+}\,dQ_i.
\end{eqnarray}
It is not hard to show that $\bar M^{i,Q}$ is actually a martingale under $Q_i$ but even without this we can localize and use Fatou's Lemma to see that the right-hand side of \eqref{meanbnd} is at most
\begin{equation}\label{sqfnbnd}
\epsilon + E_{Q_i}\left[\int_0^t 1(s\le (T_R-s_i)^+\wedge\bar{\tau}_i)\bar X_s^{-1}d\langle \bar M^i\rangle_s\right]\equiv \epsilon+I.
\end{equation}
Next use \eqref{eq:2.5} and then the mean value theorem to see that
\begin{align*}
I&=E_{Q_i}\Bigl[\int_{s_i}^{s_i+t}\mathbf{1}(s\le T_R\wedge (s_i+\bar{\tau}_i)) \\
&\phantom{=E_{Q_i}\Bigl[} \times\int \Big(U(s,x)^{2\gamma-1}U^i(s,x)+\big(\bar
U(s,x)^{2\gamma}-U(s,x)^{2\gamma}\big) \\
&\phantom{=E_{Q_i}\Bigl[\int_{s_i}^{s_i+t}\mathbf{1}(}\times\tilde U^i(s,x)\tilde U(s,x)^{-1}\Big)\,dx\,\bar U^i_s(1)^{-1}\,ds\Bigr]\\
&\le \int_{s_i}^{s_i+t}E_{Q_i}\Bigl[1(s\le T_R\wedge(s_i+\bar{\tau}_i)) \\
&\phantom{=E_{Q_i}} \times\int \left(U(s,x)^{2\gamma-1}U^i(s,x)+2\gamma\bar U(s,x)^{2\gamma-1}\tilde U^i(s,x)\right)\,dx\,\bar U^i_s(1)^{-1}\Bigr]\,ds\\
&\le 2\gamma R^{2\gamma-1}\int_{s_i}^{s_i+t}E_{Q_i}\Bigl[1(s\le s_i+\bar{\tau}_i)\int \bar U^i(s,x)\,dx\,\bar U^i_s(1)^{-1}\Bigr]\,ds\\
&\le 2\gamma R^{2\gamma-1} t.
\end{align*}
Put the above bound into \eqref{sqfnbnd} and then use \eqref{meanbnd} to conclude that
\[Q_i(\bar{\tau}_i\le t\wedge (T_R-s_i)^+)\le \epsilon+2\gamma R^{2\gamma-1} t,\]
as required. \hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
\noindent{\bf Proof of Proposition~\ref{prop:2}} Fix $i\le N_\epsilon$ and set
\[X_t=U^i_{s_i+(t\wedge\bar\tau_i)}(1),\ D_t=\widetilde{U}^i_{s_i+(t\wedge\bar\tau_i)}(1).\]
If $f(x,d)=d/(x+d)$, then
\begin{equation}\label{Rdefn} R_t\equiv\frac{\widetilde{U}^i_{s_i+(t\wedge\bar\tau_i)}(1)}{\bar U^i_{s_i+(t\wedge\bar\tau_i)}(1)}=f(X_t,D_t)\in[0,1].
\end{equation}
Theorem~\ref{thm:1.1} shows that $X$ and $D$ are right-continuous semimartingales with left limits.
We will work under $Q_i$ so that the denominator of $R$ is strictly positive for all $t\ge 0$ $Q_i$-a.s.
Our goal will be to show that $R$ remains small on $[0,t\wedge v_i]$ for $t$ small with high probability, uniformly in $\epsilon$. Then $U^i_{s_i+s}(1)$ will be bounded below by a constant times $\bar U_{s_i+s}(1)$ on this interval with high probability, and the latter satisfies a uniform escape rate on the interval by the definition of $v_i$.
From Theorem~\ref{thm:1.1}, and in particular \eqref{tUVdefn} and \eqref{0early}, we have
\[\widetilde{U}^i_{s_i+(t\wedge\bar\tau_i)}(1)=\widetilde{M}^i_t+K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1),\]
where $\widetilde{M}^i$ is the continuous $(\mathcal{F}_{s_i+t})$-local martingale (under $P$) given by
\[\widetilde{M}^i_t=\int_{s_i}^{s_i+(t\wedge\bar\tau_i)}\left(\bar U(s,x)^{2\gamma}-U(s,x)^{2\gamma}\right)^{1/2}\sqrt{\frac{\widetilde{U}^i(s,x)}{\widetilde{U}(s,x)}}\widetilde{W}^{i,U}(ds,dx),\]
and $K^{i,U}_{s_i+\cdot}$ is a right-continuous non-decreasing process.
By Girsanov's Theorem (Theorem VIII.1.4 in Revuz and Yor \cite{ry99}) there is a continuous $(\mathcal{F}_{s_i+t})$-local martingale under $Q_i$, $\widetilde{M}^{i,Q}$, so that
\begin{align}\label{tMQi}
\widetilde{M}^i_t&=\widetilde{M}^{i,Q}_t+\int_{s_i}^{s_i+(t\wedge\bar\tau_i)}\bar U^i_s(1)^{-1}d\langle \widetilde{M}^i,\bar M^i\rangle_s\\
\nonumber&=\widetilde{M}^{i,Q}_t+\int_{s_i}^{s_i+(t\wedge\bar\tau_i)}\int\left(\bar U(s,x)^{2\gamma}-U(s,x)^{2\gamma}\right)\frac{\widetilde{U}^i(s,x)\widetilde{U}(s,x)^{-1}}{\bar U^i_s(1)}dx\,ds.
\end{align}
From \eqref{UVdefn} we have
\[U^i_{s_i+(t\wedge\bar\tau_i)}(1)=\epsilon+M^i_t-K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1),\]
where $M^i$ is the continuous $(\mathcal{F}_{s_i+t})$-local martingale (under $P$),
\[M^i_t=\int_{s_i}^{s_i+(t\wedge\bar \tau_i)}\int U(s,x)^{\gamma-(1/2)}U^i(s,x)^{1/2}\,W^{i,U}(ds,dx).\]
Another application of Girsanov's Theorem implies there is a continuous $(\mathcal{F}_{s_i+t})$-local martingale under $Q_i$, $M_t^{i,Q}$, such that
\begin{equation}\label{MQi}
M_t^i=M_t^{i,Q}+\int_{s_i}^{s_i+(t\wedge\bar\tau_i)}\int \frac{U(s,x)^{2\gamma-1}U^i(s,x)}{\bar U^i_s(1)}\,dx\,ds.
\end{equation}
Note that $\langle M^i,\widetilde{M}^i\rangle=0$ and so $M^{i,Q}$ and $\widetilde{M}^{i,Q}$ are also orthogonal under $Q_i$.
If
\begin{align*}
J_t=\sum_{s\le t} f(X_s,D_s)-f(X_{s-},D_{s-})&-f_x(X_{s-},D_{s-})\Delta X_s\\
&-f_d(X_{s-},D_{s-})\Delta D_s,
\end{align*}
then It\^o's Lemma (e.g. Theorem VI.39.1 in Rogers and Williams \cite{rw87}) shows that under $Q_i$,
\begin{align}\label{ItoR}
R_t=&R_0+\int_0^t f_x(X_{s-},D_{s-})dX_s+\int_0^t f_d(X_{s-},D_{s-})dD_s\\
\nonumber&+\int_0^{t\wedge\bar\tau_i} \frac{1}{2}f_{xx}(X_{s-},D_{s-})\int U(s_i+s,x)^{2\gamma-1}U^i(s_i+s,x)\,dx\,ds\\
\nonumber&+\int_0^{t\wedge\bar\tau_i} \frac{1}{2}f_{dd}(X_{s-},D_{s-})\int [\bar U(s_i+s,x)^{2\gamma}-U(s_i+s,x)^{2\gamma}]\\
\nonumber& \phantom{+\int_0^{t\wedge\bar\tau_i} \frac{1}{2}f_{dd}(X_{s-},D_{s-})\int}\times \widetilde{U}^i(s_i+s,x)\widetilde{U}(s_i+s,x)^{-1}\,dx\,ds+J_t.
\end{align}
Since
\[\Delta X_t=-\Delta K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1)=-\Delta D_t,\]
and $f_x=-d(x+d)^{-2}$, $f_d=x(x+d)^{-2}$, we conclude that
\begin{align*}
J_t&=\sum_{s\le t} \Big[ f(X_{s-}-\Delta D_s,D_{s-}+\Delta D_s)-f(X_{s-},D_{s-}) \\
&\hspace{1.5cm}+[f_x-f_d](X_{s-},D_{s-}) \Delta D_s\Big] \\
&=\sum_{s\le t}\frac{\Delta D_s}{X_{s-}+D_{s-}}-\frac{\Delta D_s}{X_{s-}+D_{s-}}=0.
\end{align*}
Use $f_{xx}=2d(x+d)^{-3}$, $f_{dd}=-2x(x+d)^{-3}$, \eqref{tMQi}, and \eqref{MQi} in \eqref{ItoR} to conclude that if $\bar X_t=\bar U^i_{s_i+(t\wedge\bar\tau_i)}(1)$ and
\[ N_t=\int_0^t-D_{s-}\bar X_s^{-2}dM^{i,Q}_s+\int_0^t X_{s-}\bar X_s^{-2} d\widetilde{M}_s^{i,Q},\]
then
\begin{align}
\nonumber
R_t&= R_0+N_t+\int_0^{t\wedge\bar\tau_i}(-D_{s-}\bar X_s^{-3})\int U(s_i+s,x)^{2\gamma-1} U^i(s_i+s,x)\,dx\,ds\\
\nonumber &+\int_0^{t\wedge\bar\tau_i}D_{s-}\bar X_s^{-2}dK_{s_i+s}^{i,U}(1)\\
\nonumber &+\int_0^{t\wedge\bar\tau_i}X_s\bar X_s^{-3}\int
[\bar U(s_i+s,x)^{2\gamma}-U(s_i+s,x)^{2\gamma}] \\
\nonumber &\hspace{4cm} \times\widetilde{U}^i(s_i+s,x)\widetilde{U}(s_i+s,x)^{-1}dx\,ds\\
\nonumber &+\int_0^{t\wedge\bar\tau_i}X_{s-}\bar X_s^{-2}dK_{s_i+s}^{i,U}(1) \\
\nonumber &+\int_0^{t\wedge\bar\tau_i}D_{s-}\bar X_s^{-3}\int U(s_i+s,x)^{2\gamma-1}U^i(s_i+s,x)dx\,ds\\
\nonumber &-\int_0^{t\wedge\bar\tau_i} X_s\bar X_s^{-3}\int[\bar
U(s_i+s,x)^{2\gamma}-U(s_i+s,x)^{2\gamma}] \\
\nonumber& \hspace{4cm} \times \widetilde{U}^i(s_i+s,x)\widetilde{U}(s_i+s,x)^{-1}dx\,ds\\
\label{Rdecomp}=&\;R_0+N_t+\int_0^{t\wedge\bar\tau_i}\bar X_s^{-1}dK^{i,U}_{s_i+s}(1).
\end{align}
Under $Q_i$, $N$ is a continuous $(\mathcal{F}_{s_i+t})$-local martingale and the last term in \eqref{Rdecomp} is non-decreasing. It follows from this and $R\in[0,1]$ that
\begin{equation}\label{Rsub} R\hbox{ is an $(\mathcal{F}_{s_i+t})$-submartingale under }Q_i.
\end{equation}
As $R_0=K_{s_i}^{i,U}(1)/\epsilon$, an integration by parts shows that
\begin{align}
\nonumber R_t=&R_0+N_t+\frac{K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1)}{\bar X_t}-\frac{K_{s_i}^{i,U}(1)}{\epsilon}-\int_0^tK_{s_i+s}^{i,U}(1)d\left(\frac{1}{\bar X_s}\right)\\
\label{Rdecomp2}=&N_t-\int_0^tK_{s_i+s}^{i,U}(1)d\left(\frac{1}{\bar X_s}\right)+\frac{K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1)}{\bar U^i_{s_i+(t\wedge\bar\tau_i)}(1)}.
\end{align}
Another application of It\^o's Lemma using \eqref{barmassmart} and \eqref{girsdec} shows that
\begin{align*}
\bar X_t^{-1}&=\epsilon^{-1}-\int_0^t\bar X_s^{-2}d\bar X_s+\int_0^{t\wedge\bar\tau_i}\bar X_s^{-3}d\langle \bar M^i\rangle_s\\
&=\epsilon^{-1}-\int_0^t\bar X_s^{-2}d\bar M_s^{i,Q}-\int_0^{t\wedge\bar\tau_i}\bar X_s^{-3}d\langle \bar M^i\rangle_s+\int_0^{t\wedge\bar\tau_i}\bar X_s^{-3}d\langle \bar M^i\rangle_s\\
&=\epsilon^{-1}-\int_0^t\bar X_s^{-2}d\bar M_s^{i,Q}.
\end{align*}
Therefore $\bar X_t^{-1}$ is a continuous $(\mathcal{F}_{s_i+t})$-local martingale under $Q_i$ and hence the same is true of $N^R_t=N_t-\int_0^tK^{i,U}_{s_i+s}(1)d\left(\frac{1}{\bar X_s}\right)$. From \eqref{Rdecomp2} we have
\begin{equation}\label{Rdecomp3}
R_t=N_t^R+\frac{K^{i,U}_{s_i+(t\wedge\bar\tau_i)}(1)}{\bar U^i_{s_i+(t\wedge\bar\tau_i)}(1)}.
\end{equation}
Recall from \eqref{Kjumps} and \eqref{eq:2.2} that
\begin{equation}\label{Kjump}
\Delta K^{i,U}_{s_i+t}(1)\le \epsilon\quad\hbox{ for all }t\ge 0.
\end{equation}
Assume that (recall $\beta<3/2$)
\begin{equation}\label{deltacond1}
0<2\delta_0\le \delta_1\le \frac{1}{4}\left(\frac{3}{2}-\beta\right)\equiv \delta_{\ref{prop:2}}(\gamma).
\end{equation}
These last two inequalities (which give $\frac{3}{2}-\beta-\delta_1-2\delta_0>0$) together with the continuity of $\bar U^i_{s_i+\cdot}(1)$ (recall Theorem~\ref{thm:1.1}(a)), and the definitions of $\theta_i\ge v_i$ and $H_i\ge v_i$ imply that
\[\sup_{s\le v_i\wedge t}\frac{K^{i,U}_{s_i+s}(1)}{\bar U^i_{s_i+s}(1)}\le \sup_{s\le v_i\wedge t}\frac{(s+\epsilon)^{(3/2)-2\delta_0}+\epsilon}{(s+\epsilon)^{\beta+\delta_1}}\le (t+\epsilon)^{(3/2)-\beta-2\delta_0-\delta_1}+\epsilon^{1-\beta-\delta_1},\]
and so from \eqref{Rdecomp3},
\begin{equation}\label{NRbnd}
\sup_{s\le v_i\wedge t} |N^R_s|\le 1+(t+\epsilon)^{(3/2)-\beta-2\delta_0-\delta_1}+\epsilon^{1-\beta-\delta_1}<\infty.
\end{equation}
We now apply the weak $L^1$ inequality to the non-negative submartingale $R$
(recall \eqref{Rsub}) to conclude that ($\sup\emptyset=0$)
\begin{align}
\nonumber Q_i&\left(\sup_{\epsilon^{2/3}\le s\le v_i\wedge t}R_s\ge 1/2\right)\\
\nonumber&=E_{Q_i}\left[Q_i\left(\sup_{\epsilon^{2/3}\le s\le v_i\wedge t}R_s\ge\frac{1}{2}\Big|\mathcal{F}_{\epsilon^{2/3}}\right)\mathbf{1}(v_i\wedge t\ge \epsilon^{2/3})\right]\\
\nonumber &\le2E_{Q_i}\left[R_{v_i\wedge t}\mathbf{1}(v_i\wedge t\ge \epsilon^{2/3})\right]1(t\ge \epsilon^{2/3})\\
\label{maxineq}&\le2E_{Q_i}\left[R_{(v_i\wedge t)-}+\frac{\Delta K^{i,U}_{s_i+(v_i\wedge t)}(1)}{\bar U^i_{s_i+(v_i\wedge t)}(1)}\mathbf{1}(v_i\wedge t\ge \epsilon^{2/3})\right]\mathbf{1}(t\ge \epsilon^{2/3}).
\end{align}
By \eqref{Kjump} and the definition of $H_i\ge v_i$ we have
\begin{align}\label{Rjumpbnd}
\frac{\Delta K^{i,U}_{s_i+(v_i\wedge t)}(1)}{\bar U^i_{s_i+(v_i\wedge t)}(1)}1(v_i\wedge t\ge \epsilon^{2/3})
&\le \frac{\epsilon}{(\epsilon+v_i\wedge t)^{\beta+\delta_1}}1(v_i\wedge t\ge \epsilon^{2/3})\\
\nonumber&\le \epsilon^{1-(2/3)(\beta+\delta_1)}.
\end{align}
From \eqref{Rdecomp3} and the definitions of $H_i\ge v_i$ and $\theta_i\ge v_i$ we have
\begin{align}\nonumber
E_{Q_i}\left[R_{(v_i\wedge t)-}\right]&=E_{Q_i}[N^R_{v_i\wedge t}]+E_{Q_i}\left[{K^{i,U}_{s_i+(v_i\wedge t)-}(1)}/{\bar U^i_{s_i+(v_i\wedge t)}(1)}\right]\\
\label{Rminusbnd}&\le E_{Q_i}\left[(\epsilon+(v_i\wedge t))^{(3/2)-\beta-2\delta_0-\delta_1}\right]\le (\epsilon +t)^{(3/2)-\beta-2\delta_0-\delta_1},
\end{align}
where we used \eqref{NRbnd} to see that $N^R_{v_i\wedge t}$ is a mean zero martingale and also applied \eqref{deltacond1} to see the exponent is positive. Inserting \eqref{Rjumpbnd} and \eqref{Rminusbnd} into \eqref{maxineq} and using \eqref{deltacond1}, we get for $t\le 1$,
\begin{align}\label{Rbndaway}
Q_i&\left(\sup_{\epsilon^{2/3}\le s\le v_i\wedge t}R_s\ge\frac{1}{2}\right)\\
\nonumber&\le \left[(\epsilon+t)^{(3/2)-\beta-2\delta_0-\delta_1}+\epsilon^{1-(2/3)(\beta+\delta_1)}\right]1(t\ge\epsilon^{2/3})\\
\nonumber&\le 2^{3/2}t^{(3/2)-\beta-2\delta_1}+t^{(3/2)-(\beta+\delta_1)}\le 5t^{(3/2)-\beta-2\delta_1}.
\end{align}
Now \eqref{deltacond1} implies $(3/2)-\beta-2\delta_1\ge (1/2)((3/2)-\beta)$, and so for $t\le 1$ we conclude
\begin{align*}
Q_i\left(\sup_{\epsilon^{2/3}\le s\le v_i\wedge t}R_s\ge\frac{1}{2}\right)\le 5t^{(1/2)((3/2)-\beta)}.
\end{align*}
The above is trivial for $t> 1$.
On $\{\sup_{\epsilon^{2/3}\le s\le v_i\wedge t}R_s<1/2\}$ we have for all $s\in[\epsilon^{2/3},t\wedge v_i]$,
\[ U^i_{s_i+s}(1)\ge \frac{1}{2}\bar U^i_{s_i+s}(1)\ge \frac{1}{2}s^{\beta+\delta_1},\]
and so $B_i(t\wedge v_i)$ occurs. The result follows with $p_{\ref{prop:2}}=\frac{1}{2}\left(\frac{3}{2}-\beta\right)\in(0,\frac{1}{4}]$ (as $\gamma\ge 1/2$).\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Proofs of Lemma~\ref{TRbnd} and Proposition~\ref{prop:2.1}}\label{sec:spdegr}
\setcounter{equation}{0}
We start with a moment bound obtained by a modification of the proof of Lemma~4.2 in \cite{mp92}. Let $p(t,x)=p_t(x)$ denote the Gaussian kernel, that is,
\begin{equation}\label{Gauss} p_t(x)= \frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}},\; t>0, x\in {\mathbb{R}}.
\end{equation}
\begin{lemma}\label{qmom} For any $q\ge 1$ and $\lambda,T>0$ there is a $C_{T,\lambda,q}$ such that for all $\epsilon\in(0,1]$,\\
\noindent(a) $\sup_{t\le T}\int e^{\lambda|x|}E(\bar U(t,x)^q+\bar V(t,x)^q)\,dx\le C_{T,\lambda,q}$;
\noindent(b) $\sup_{t\le T,x\in{\mathbb{R}}}e^{\lambda|x|}E(\bar U(t,x)^q+\bar V(t,x)^q)\le C_{T,\lambda,q}$.
\end{lemma}
\begin{remark}
\label{rem:09_08_1} Lemma~\ref{qmom} and Theorem~1.1 of~\cite{myt98w} easily imply uniqueness in law of each of
$\bar{U}$ and $\bar{V}$ separately for a pair
$(\bar{U},\bar{V})$ solving~\eqref{eq:2.8}. To show the uniqueness in law for the pair $(\bar{U},\bar{V})$, one should follow the proof of Theorem~1.1 of~\cite{myt98w} and derive the counterpart of Proposition~2.3 from~\cite{myt98w}, which is the main ingredient of the proof. More specifically, suppose $t\in [s_i, t_i)$ for some $i\in {\mathbb{N}}_{\epsilon}$. Following the argument from~\cite{myt98w}, for any
non-negative $\phi_1,\phi_2\in L^1({\mathbb{R}})$, one can easily construct a sequence of $M_F({\mathbb{R}})^2$-valued processes
$\{(Y^{1,n}, Y^{2,n})\}_{n\geq 0}$ such that $\{Y^{1,n}\}_{n\geq 1}$ and $\{Y^{2,n}\}_{n\geq 1}$ are independent, and for any $(\bar{U},\bar{V})$ solving~\eqref{eq:2.8} we have
\begin{eqnarray}
\label{eq:29_12}
E\left[ e^{-\langle \phi_1 , \bar{U}_{t}\rangle+\langle \phi_2, \bar{V}_{t}\rangle}\right]
&=& \lim_{n\rightarrow \infty} E\left[ e^{-\langle Y^{1,n}_{t-s_i} , \bar{U}_{s_i}\rangle+\langle Y^{2,n}_{t-s_i}, \bar{V}_{s_i}\rangle}|Y^{1,n}_0=\phi_1, Y^{2,n}_0=\phi_2\right].
\end{eqnarray}
A similar expression can be derived for $t\in [t_i, s_{i+1}), i\in {\mathbb{N}}_{\epsilon},$ and then uniqueness in law for
the pair $(\bar{U},\bar{V})$ follows by a standard argument: see again~\cite{myt98w} where the single process without immigration is treated.
\end{remark}
\paragraph{Proof of Lemma~\ref{qmom}} It suffices to consider $\bar U$. We let $C$ denote a constant which may depend on $q$, $\lambda$ and $T$, and which may change from line to line. Note that the equation~(\ref{eq:2.8}) for $\bar{U}$ can be rewritten in the so-called mild form (see Theorem~2.1 of \cite{shi94}):
\begin{eqnarray}
\label{eq:mild1}
\bar{U}_t(x) &=& \int_0^t \int_{{\mathbb{R}}} p_{t-s}(x-y)\eta^+_{\epsilon}(ds,dy) \\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) \bar{U}(s,y)^{\gamma}
\bar{W}^{U}(ds,dy), \;\;t\geq 0, x\in {\mathbb{R}}. \nonumber
\end{eqnarray}
Let $N(t,x)$ denote the stochastic integral term in the above.
The first term on the right hand side of~(\ref{eq:mild1}) can be rewritten as
\begin{eqnarray}
I_1(t,x)=I_{1,\epsilon}(t,x)=\sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}} p_{t-s_i}(x-y) J^{x_i}_{\epsilon}(y)\,dy,
\end{eqnarray}
(the meaning of the above if $t=s_i$ for some $i$ is obvious). Recall that $x_i\in[0,1]$ and so $y$ in the above integral may be restricted to $|y|\le 2$. Therefore for $s_i\le t\le T$,
\begin{equation}\label{expabs}e^{\lambda|x|}p_{t-s_i}(x-y)\le Cp_{2(t-s_i)}(x-y).\end{equation}
It follows that
\begin{align}\label{eq:pmom2}
\sup_{t\le T,x\in{\mathbb{R}}}&e^{\lambda|x|}I_1(t,x)\\
\nonumber\ &\le
\sum_{s_i\leq t- 2\epsilon} C (t-s_i)^{-1/2}\epsilon + \sum_{t-2\epsilon<s_i<t}\sqrt{\epsilon} \int_{{\mathbb{R}}} p_{2(t-s_i)}(x-y)\,dy\\
\nonumber&\phantom{\le
\sum_{s_i\leq t- 2\epsilon} C (t-s_i)^{-1/2}\epsilon}+1(s_i=t)e^{\lambda|x|}J_\epsilon^{x_i}(x)\\
\nonumber
\ &\leq C \Bigl[\int_{0}^{t}(t-s)^{-1/2}\,ds + \epsilon^{1/2}\Bigr] \\
\nonumber
\ &\leq C,
\end{align}
uniformly in $\epsilon\in (0,1]$.
By \eqref{eq:mild1} and \eqref{eq:pmom2} we have for $t\le T$ and all $x$,
\begin{equation}\label{Uqmom1}
E(\bar U(t,x)^q)\le C\Bigl[E(I_1(t,x)^q)+E(|N(t,x)|^q)\Bigr]\le C\Bigl[e^{-\lambda|x|}+E(|N(t,x)|^q)\Bigr].
\end{equation}
For $q\ge 1$ and $\lambda,t>0$ let
\begin{equation*}
\nu(q,\lambda,t)=\sup_{0\leq s\leq t}\int e^{\lambda|x|}E\left[\bar U(s,x)^q\right]dx
\end{equation*}
and note that $\nu$ implicitly depends on $\epsilon$. Using the Burkholder-Davis-Gundy
inequality and Jensen's inequality, we get for $q\ge 2$,
\begin{align}
\label{Nqbound}E&\Big[|N(t,x)|^q\Big]
\leq CE\left[\left(\int_{0}^{t}\int p_{t-s}(x-y)^2\bar U(s,y)^{2\gamma}dyds\right)^{q/2}\right] \\
\nonumber&\leq CE\left[\int_{0}^{t}\int p_{t-s}(x-y)^2\bar U(s,y)^{\gamma q}dyds\right]
\left(\int_{0}^{t}\int p_{t-s}(x-y)^2dyds\right)^{(q/2)-1} \\
\nonumber&\leq Ct^{(q-2)/4}E\left[\int_{0}^{t}\int p_{t-s}(x-y)^2[\bar U(s,y)^{q/2}+\bar U(s,y)^{q}]dyds\right].
\end{align}
The final inequality follows because $p_{t-s}(x-y)^2\leq(t-s)^{-1/2}p_{t-s}(x-y)$ and $a^{\gamma q}\le a^{q/2}+a^q$. A short calculation using the above bound, just as in the bottom display on p.~349 of \cite{mp92}, shows that
\begin{align}\nonumber
\nu(q,\lambda,t)&\le C\Bigl[1+\sup_{s\le t}\int e^{\lambda|x|}E(|N(s,x)|^q)\,dx\Bigr]\quad\hbox{(by \eqref{Uqmom1})}\\
\nonumber&\le C+C\int_0^t (t-s)^{-1/2}[\nu(q/2,\lambda,s)+\nu(q,\lambda,s)]\,ds\\
\nonumber&\le C\Bigl[1+\nu(q/2,\lambda,t)+\int_0^t(t-s)^{-1/2}\nu(q,\lambda,s)\,ds\Bigr].
\end{align}
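For completeness: the pointwise kernel bound invoked above is immediate from \eqref{Gauss}, since $p_{t-s}(x-y)^2=(2\pi(t-s))^{-1/2}e^{-(x-y)^2/(2(t-s))}\,p_{t-s}(x-y)\le (t-s)^{-1/2}p_{t-s}(x-y)$.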
A generalized Gronwall inequality (e.g. see Lemma 4.1 of \cite{mp92}) shows that the above implies that for $q\ge 2$,
\begin{equation}\label{nuind}
\nu(q,\lambda,t)\le (1+\nu(q/2,\lambda,t))\exp(4Ct^{1/2})\quad\hbox{for all }t\le T.
\end{equation}
The obvious induction on $q=2^n$ will now give (a) provided we can show
\begin{equation}\label{init}
\nu(1,\lambda,T)\le C.
\end{equation}
It follows from \eqref{eq:mild1} and an argument using localization and
Fubini's theorem that
\[
\sup_{t\le T}\sup_x e^{\lambda|x|}E[\bar U(t,x)]\le \sup_{t\le T}\sup_x e^{\lambda|x|}E[I_1(t,x)]\le C,
\]
the last by \eqref{eq:pmom2}. By optimizing over $\lambda$ we get
\eqref{init}. Therefore we have proved (a) except for one detail. To use
Lemma~4.1 in \cite{mp92} to derive \eqref{nuind} we need to know that
$\nu(q,\lambda,T)<\infty$ (the bound can now depend on $\epsilon$). To handle this
issue one can localize just as in \cite{mp92} using the facts that
$t\to \bar U_t$ is in
$D({\mathbb{R}}_+,C^+_{\rm rap})$, and (from Theorem~\ref{thm:1.1} and
$\bar U=\sum_i\bar U^i$) that the jumps of $\bar U$ occur
at $\{s_i\}$ with the $i$th jump equaling $J^{x_i}\le \sqrt\epsilon$.
Turning to (b), it suffices to consider $q>2$. By \eqref{eq:mild1}, \eqref{eq:pmom2} and the first line of \eqref{Nqbound} for $t\le T$, $p=q/(q-2)$ and $p'=q/2$, we have by H\"older's inequality
\begin{align*}
\sup_x\ &e^{\lambda |x|}E\left[\bar U(t,x)^q\right]\\
&\le C\Bigl(1+\sup_x E\Bigl[\Bigl(\int_0^t\int[p_{t-s}(x-y)^{1/p}e^{2\lambda|x|/q-2\lambda|y|/q}]\\
&\phantom{\le C\Bigl[1+\sup_x E\Bigl[\Bigl(\int_0^t\int}\times[e^{2\lambda|y|/q}\bar U(s,y)^{2\gamma}]p_{t-s}(x-y)^{2-(1/p)}dyds\Bigr)^{q/2}\Bigr]\Bigr)\\
&\le C\Bigl(1+\sup_x\Bigl(\int_0^t\int p_{t-s}(x-y)e^{2\lambda p|x|/q-2\lambda p|y|/q}dy(t-s)^{-1+(1/2p)}ds\Bigr)^{q/2p}\\
&\phantom{\le C\Bigl[1+\sup_x}\times E\Bigl[\int_0^t\int e^{2\lambda p'|y|/q}\bar U(s,y)^{2\gamma p'}dy(t-s)^{-1+(1/2p)}ds\Bigr]\Bigr)\\
&\le C\Bigl(1+\int_0^t(t-s)^{-(q+2)/(2q)}ds\nu(\gamma q,\lambda,t)\Bigr)\\
&\le C.
\end{align*}
In the next to last line we have used Lemma~6.2 of \cite{shi94} and in the last line we have used part (a).
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{lemma}
\label{pmom}
For any $q,T>0$, there exists $C_{q,T}$ such that
\begin{eqnarray}
\sup_{0<\epsilon\le 1}E\left[\sup_{s\le T,x\in{\mathbb{R}}}(\bar U(s,x)^{q}+\bar V(s,x)^{q})\right]\le C_{q,T}\,.
\end{eqnarray}
\end{lemma}
Lemma~\ref{TRbnd} with $\delta_{\ref{TRbnd}}(t)=C_{\frac{1}{2}, 2}t^{\epsilon_0/2}$ is an immediate corollary of Markov's inequality and the above lemma with $q=1/2$.
The proof of the above lemma is based on a simple adaptation of the methods used for the proof of Proposition 1.8(a) of~\cite{mps}, and
in particular Lemma A.3 of that paper.
\paragraph{Proof of Lemma~\ref{pmom}} It suffices to consider $\bar U$. Let $C$ denote a constant depending on $q$ and $T$ which may change from line to line. We adapt the proof of Lemma A.3 of~\cite{mps} to the white noise setting and with
$\lambda =0$.
By \eqref{eq:mild1}, \eqref{eq:pmom2} and the continuity properties of $\bar U$, we have
\begin{eqnarray*}
\lefteqn{E\left[\sup_{t\le T,x\in {\mathbb{R}}}\bar U(t,x)^{q}\right]}
\\
&\le& C_{q,T}
\left(1 + E\left[\sup_{t\in{\mathbb{Q}}_+\cap[0,T],\,x\in{\mathbb{Q}}}\left\vert\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) \bar{U}(s,y)^{\gamma}
\bar{W}^{U}(ds,dy)\right\vert^q\right]
\right).
\end{eqnarray*}
To handle the above expectation we carry out the argument in the proof of Lemma~A.3 of \cite{mps} with $\lambda=0$ and $W$ a white noise. We take $a\in(0,1/4)$ and $q>\frac{3}{2a}$ in that work. With this choice of $q$, the arguments in Lemma~A.3 of \cite{mps} then go through to show that
the expectation in the above is at most
\begin{align*}
C&\int_0^T\int E\left[\Bigl|\int_0^t\int(t-s)^{-a}p_{t-s}(x-y)\bar U(s,y)^{\gamma}\,d\bar W^U(s,y)\Bigr|^q\right]dxdt\\
&\le C\int_0^T\int E\left[\Bigl(\int_0^t\int(t-s)^{-2a}p_{t-s}(x-y)^2\bar U(s,y)^{2\gamma}dyds\Bigr)^{q/2}\right]dxdt\\
&\le C\int_0^T\int \left[\int_0^t\int(t-s)^{-2a-(1/2)}p_{t-s}(x-y)E(\bar U(s,y)^{q\gamma})dyds\right]dxdt\\
&\le C,
\end{align*}
by Fubini, Lemma~\ref{qmom}(a) and the choice of $a$. This gives the result for $q>3/(2a)$ and hence for all $q>0$. \hfill\quad \vrule height7.5pt width4.17pt depth0pt
We turn next to the proof of Proposition~\ref{prop:2.1}, which is fairly standard. We
follow the proof in Section 4 of Mueller and Perkins \cite{mp92}, where a
similar existence proof is given. The main difference is the immigration term
in the present situation.
By the mild form of \eqref{eq:spde-u-ep} we have
\begin{align}
\label{eq:spde-u-ep1}
u_\epsilon(t,x)
=& \sum_{i}\int p(t-s_i,y-x)J^{x_i}(y)\mathbf{1}(t\geq s_i)dy \\
&- \sum_{j}\int p(t-t_j,y-x)J^{y_j}(y)\mathbf{1}(t\geq t_j)dy \nonumber\\
&+ \int_{0}^{t}\int p(t-s,y-x)|u_\epsilon(s,y)|^\gamma W(ds,dy) \nonumber\\
\equiv&I_{1,\epsilon}(t,x)-I_{2,\epsilon}(t,x)+N_\epsilon(t,x).\nonumber
\end{align}
Now we give a modified version of Lemma 4.4 of \cite{mp92}. The only
difference is that Lemma 4.4 of \cite{mp92} deals with $C^+_{\rm rap}$ instead of
$C_{\rm rap}$, but the proof carries over with almost no change.
\begin{lemma}
\label{mp:4.4}
Let $\{X_n(t,\cdot): t\geq0, n\in{\mathbb{N}}\}$ be a sequence of continuous $C_{\rm rap}$-valued
processes. Suppose $\exists q>0,\gamma>2$ and $\forall\ T,\lambda>0$
$\exists C=C(T,\lambda)>0$ such that
\begin{align}
\label{lem-X-replace-by-N}
E\Big[|X_n(t,x)-X_n(t',x')|^q\Big] &\leq
C\left(|x-x'|^\gamma+|t-t'|^\gamma\right)e^{-\lambda|x|} \\
&\qquad \forall t,t'\in[0,T], |x-x'|\leq1, n\in{\mathbb{N}}. \nonumber
\end{align}
If $\{P_{X_n(0):n\in{\mathbb{N}}}\}$ is tight on $C_{\rm rap}$, then $\{P_{X_n}:n\in{\mathbb{N}}\}$
is tight on $C({\mathbb{R}}_+,C_{\rm rap})$.
\end{lemma}
We also need Lemma 4.3 of \cite{mp92}:
\begin{lemma}
\label{lem:mp-4.3}
If $T,\lambda>0$ there is a constant $C(T,\lambda)<\infty$ such that
\begin{align*}
\int_{0}^{t}\int & \left(p_{t-s}(y-x)-p_{t'-s}(y-x')\right)^2e^{-\lambda|y|}dyds \\
& \leq C(T,\lambda)\left(|x-x'|+(t-t')^{1/2}\right)e^{-\lambda|x|} \\
&\qquad \forall\ 0<t'<t\leq T,\ |x-x'|\leq1,\ \lambda>0,
\end{align*}
where $p_u(z)$ is defined to be 0 if $u<0$.
\end{lemma}
Clearly $t\to I_{\ell,\epsilon}(t,\cdot)$ is in $D({\mathbb{R}}_+,C_{\rm rap})$ with jumps only at $\{s_i\}$ for $\ell =1$ and at $\{t_j\}$ if $\ell=2$.
It is fairly easy to see that for $t,x$ fixed $I_{\ell,\epsilon}(t,x)$ converges in probability
to
$$I(t,x)=\int_{0}^{t\wedge1}\int_{0}^{1}p(t-s,x-y)dyds$$
by the weak law of large numbers.
We need convergence in path space. It is easy to check that $t\to I(t,\cdot)$ is in $C({\mathbb{R}}_+,C_{\rm rap})$.
\begin{lemma}\label{Iconv} For $\ell=1,2$, $I_{\ell,\epsilon}$ converges in probability in $D({\mathbb{R}}_+,C_{\rm rap})$ to $I$ as $\epsilon\downarrow 0$.
\end{lemma}
\paragraph{Proof} The argument is routine if a bit tedious. We sketch the proof for $\ell=2$ where $t_j=j\epsilon$. If $\delta=\epsilon^{3/4}$, write
\begin{align*}
I_{2,\epsilon}(t,x)=&\sum_{t_j\le t-\delta}\epsilon\int\left[p_{t-t_j}(y_j-x+\sqrt\epsilon w)-p_{t-t_j}(y_j-x)\right]J(w)\,dw\\
&+\sum_{t-\delta<t_j\le t}P_{t-t_j}J^{y_j}_\epsilon(x)+\sum_{t_j\le t-\delta}\epsilon p_{t-t_j}(y_j-x)\\
=&T_{1,\epsilon}+T_{2,\epsilon}+T_{3,\epsilon}.
\end{align*}
It is easy to check that for any $\lambda,T>0$,
\[\sup_{t\le T,x\in{\mathbb{R}}}e^{\lambda|x|}|T_{2,\epsilon}(t,x)|\le C_{T,\lambda}\delta/\sqrt\epsilon\to 0,\]
and
\[\sup_{t\le T,x\in{\mathbb{R}}}e^{\lambda|x|}|T_{1,\epsilon}(t,x)|\le C_{T,\lambda}\sqrt\epsilon(1+\ln(1/\epsilon))\to 0.\]
So it suffices to show that $T_{3,\epsilon}$ converges in probability in $D({\mathbb{R}}_+,C^+_{\rm rap})$ to $I$.
We next write
\begin{align*}
T_{3,\epsilon}(t,x)=&\sum_{t_j\le t-\delta}\Bigl(\epsilon p_{t-t_j}(y_j-x)-\epsilon\int_0^1p_{t-t_j}(y-x)\,dy\Bigr)\\
&\quad+\sum_{t_j\le t-\delta}\epsilon\int_0^1p_{t-t_j}(y-x)\,dy\\
\equiv &T_{4,\epsilon}+T_{5,\epsilon}.
\end{align*}
$T_{5,\epsilon}$ is a Riemann sum for $\int_0^{t\wedge 1}\int_0^1 p_{t-s}(y-x)\,dy\,ds$ (note that $t_j\le 1$, whence the truncation by $1$), and using the $t-\delta$ cut-off, the Gaussian tail and $y\in[0,1]$, it is easy to see that for any $\lambda,T>0$,
\[\lim_{\epsilon\to 0}\sup_{t\le T, x\in{\mathbb{R}}}e^{\lambda |x|}\Bigl|T_{5,\epsilon}-\int_0^{t\wedge 1}\int_0^1 p_{t-s}(y-x)\,dy\,ds\Bigr|=0.\]
Therefore it remains to show that $T_{4,\epsilon}\to 0$ in probability in $D({\mathbb{R}}_+,C_{\rm rap})$. $T_{4,\epsilon}$ is a sum of mean $0$ independent random variables and so one easily sees that
$$E(T_{4,\epsilon}(t,x)^2)\le \epsilon^2\sum_{t_j\le t-\delta}p_{2(t-t_j)}(0)\to 0\hbox{ as }\epsilon\downarrow 0.$$
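In more detail, since the $y_j$ are independent and $E[\hat p_{t-t_j}(y_j-x)]=0$, we have $E[\hat p_{t-t_j}(y_j-x)^2]\le E[p_{t-t_j}(y_j-x)^2]=\int_0^1 p_{t-t_j}(y-x)^2\,dy\le p_{2(t-t_j)}(0)$ by Chapman--Kolmogorov, while, crudely, every term of the sum has $t-t_j\ge\delta=\epsilon^{3/4}$ and there are at most $t/\epsilon$ terms, so
\[\epsilon^2\sum_{t_j\le t-\delta}p_{2(t-t_j)}(0)\le \epsilon^2\cdot\frac{t}{\epsilon}\cdot(4\pi\epsilon^{3/4})^{-1/2}=\frac{t\,\epsilon^{5/8}}{2\sqrt{\pi}}\to 0.\]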
Therefore if we can show for any $\epsilon_n\downarrow 0$,
\begin{equation}\nonumber
\{T_{4,\epsilon_n}:n\}\hbox{ is $C$-tight in $D({\mathbb{R}}_+,C_{\rm rap})$}
\end{equation}
the result would follow as the only possible weak limit point is $0$ by the above.
Let $\hat p_{t-t_j}(y_j-x)=p_{t-t_j}(y_j-x)-\int_0^1 p_{t-t_j}(y-x)dy$ and
$$[t-\delta]_\epsilon=\max\{j\epsilon:j\epsilon\le t-\delta, j\in{\mathbb{Z}}_+\}.$$
To work in the space of continuous $C_{\rm rap}$-valued paths we interpolate $T_{4,\epsilon}$ linearly and define
\begin{align*} \tilde T_{4,\epsilon_n}(t,x)=&\sum_{t_j\le [t-\delta_n]_{\epsilon_n}} \epsilon_n \hat p_{t-t_j}(y_j-x)\\
&\ \ +\Bigl((t-\delta_n)-[t-\delta_n]_{\epsilon_n}\Bigr)\hat p_{t-[t-\delta_n]_{\epsilon_n}-\epsilon_n}(y_{1+([t-\delta_n]_{\epsilon_n}/\epsilon_n)}-x),\end{align*}
so that $t\to\tilde T_{4,\epsilon_n}(t,\cdot)\in C({\mathbb{R}}_+,C_{\rm rap})$. If $d$ is the metric on $C_{\rm rap}$, then it is clear that
\[\lim_{n\to\infty} \sup_{t\le T}d(\tilde T_{4,\epsilon_n}(t),T_{4,\epsilon_n}(t))=0\hbox{ for all }T>0.\]
Therefore it remains to show that
\begin{equation}\label{tightcrit}
\{\tilde T_{4,\epsilon_n}:n\}\hbox{ is tight in $C({\mathbb{R}}_+,C_{\rm rap})$}.
\end{equation}
This is proved by a straightforward application of Lemma~\ref{mp:4.4}, as we illustrate below.
To illustrate the method of the aforementioned proof let us bound the spatial moments and work with $T_{4,\epsilon}$, hence dropping the trivial continuity correction and dependence on $n$. Assume $0\le t\le T$, $\lambda>0$ and $|x-x'|\le 1$. For $q\ge 2$ we use a predictable square function inequality of Burkholder (see Theorem~21.1 of \cite{b73}) as follows:
\begin{align}\label{sqfnin}
\nonumber e^{\lambda|x|}&E\left[|T_{4,\epsilon}(t,x)-T_{4,\epsilon}(t,x')|^q\right]\\
&\le e^{\lambda|x|}c_q\Bigg[\Bigl|\sum_{t_j\le [t-\delta]_\epsilon}\epsilon^2E((\hat p_{t-t_j}(y_j-x)-\hat p_{t-t_j}(y_j-x'))^2)\Bigr|^{q/2}\\
\nonumber&\phantom{\le e^{\lambda|x|}c_q\Bigl[}+\sum_{t_j\le [t-\delta]_\epsilon}\epsilon^qE(|\hat p_{t-t_j}(y_j-x)-\hat p_{t-t_j}(y_j-x')|^q)\Bigg].
\end{align}
Now for $q\ge 2$ and for, say, $x>x'$,
\begin{align*}
e&^{\lambda|x|}E\left[|\hat p_{t-t_j}(y_j-x)-\hat p_{t-t_j}(y_j-x')|^q\right]\\
&\le ce^{\lambda|x|}\int_0^1|p_{t-t_j}(y-x)-p_{t-t_j}(y-x')|^q\,dy\\
&\le C_{\lambda,T}(t-t_j)^{-1/2}\int_0^1|p_{t-t_j}(y-x)-p_{t-t_j}(y-x')|^{q-1}\,dy.
\end{align*}
In the last line we used the bound on $|x-x'|$ and the fact that $y\in[0,1]$ to use the Gaussian tail of $(p_{t-t_j}(y-x)+p_{t-t_j}(y-x'))$ to absorb the $e^{\lambda|x|}$ as in \eqref{expabs}. By using the spatial derivative of $p_t(z)$ and then carrying out a change of variables we may bound the above by
\begin{align*}
C_{\lambda,T}&(t-t_j)^{-1/2}\int_0^1(t-t_j)^{-(q-1)/2} \\
&\qquad\qquad \times\Bigl|\int1(\frac{y-x}{\sqrt{t-t_j}}\le z\le \frac{y-x'} {\sqrt{t-t_j}}
)zp_1(z)\,dz\Bigr|^{q-1}\,dy\\
&\le C_{\lambda,T}(t-t_j)^{-q+(1/2)}|x-x'|^{q-1}.
\end{align*}
Use the above in \eqref{sqfnin} with $q=2$ and general $q$ to conclude that
\begin{align*}
e&^{\lambda|x|}E(|T_{4,\epsilon}(t,x)-T_{4,\epsilon}(t,x')|^q)\\
&\le C_{\lambda,T}\Bigl(\sum_{t_j\le [t-\delta]_\epsilon}\epsilon^2(t-t_j)^{-3/2}\Bigr)^{q/2}|x-x'|^{q/2}\\
&\phantom{\le C_{\lambda,T}\Bigl[}+C_{\lambda,T}\sum_{t_j\le [t-\delta]_{\epsilon}}\epsilon^q(t-t_j)^{-q+(1/2)}
|x-x'|^{q-1}\\
&\le C_{\lambda,T}|x-x'|^{q/2},
\end{align*}
where we used $\delta=\epsilon^{3/4}$, $q\ge 2$ and an elementary calculation in the last line. So taking $q>4$ gives the required spatial increment bound in Lemma~\ref{mp:4.4}.
A similar, but slightly more involved, argument verifies the hypotheses of Lemma~\ref{mp:4.4} for the time increments. Here when $0\le t'-t\le \epsilon$ the linear interpolation term must be used and the cases $[t'-\delta]_\epsilon=[t-\delta]_\epsilon$ and $[t'-\delta]_\epsilon=[t-\delta]_\epsilon+\epsilon$ are treated separately. The details are left for the reader. This establishes \eqref{tightcrit} and so completes the proof.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
Next we apply Lemma \ref{mp:4.4} to $X_n(t,x)=N_{\epsilon_n}(t,x)$ for any $\epsilon_n\downarrow 0$ by showing that
(\ref{lem-X-replace-by-N}) holds for $X_n=N_{\epsilon_n}$.
\begin{lemma}
\label{lem:N-satisfies}
$\exists q>0,\gamma>2$ and $\forall \ T,\lambda>0$
$\exists C=C(T,\lambda)>0$ such that
\begin{align}
\label{mp:4.7}
E\Big[|N_\epsilon(t,x)-N_\epsilon(t',x')|^q\Big] &\leq
C\left(|x-x'|^\gamma+|t-t'|^\gamma\right)e^{-\lambda|x|} \\
&\qquad \forall t,t'\in[0,T], |x-x'|\leq1, 0<\epsilon<1. \nonumber
\end{align}
\end{lemma}
\paragraph{Proof} Here we follow the proof of Proposition 4.5 of \cite{mp92}.
For convenience we will omit the dependence on $\epsilon$ and simply write $N(t,x)$, while noting
that it will be crucial that our constants do not depend on the suppressed $\epsilon$.
Let $q\geq1$, $\lambda>0$, $0\leq t'<t\leq T$ and $|x-x'|\leq1$. Using the
Burkholder-Davis-Gundy inequality, and allowing $c_q$ to vary from line to
line, we find
\begin{align*}
E&\left[|N(t,x)-N(t',x')|^{2q}\right] \\
&\leq c_qE\Bigg[\bigg(\int_{0}^{t}\int\left(p_{t-s}(y-x)-p_{t'-s}(y-x')\right)^2e^{-\lambda|y|} \\
&\hspace{3cm} \times e^{\lambda|y|}|u(s,y)|^{2\gamma}dy ds\bigg)^q\Bigg] \\
&\leq c_qE\left[\int_{0}^{t}\int |u(s,y)|^{2\gamma q}e^{\lambda(q-1)|y|}
\Big(p_{t-s}(y-x)-p_{t'-s}(y-x')\Big)^2dyds\right] \\
& \qquad \times \left(
\int_{0}^{t}\int\big(p_{t-s}(y-x)-p_{t'-s}(y-x')\big)^2 e^{-\lambda|y|}dyds
\right)^{q-1} \\
&\leq c_qE\Big[\int_0^t\int |u(s,y)|^{8\gamma q}e^{4\lambda(q-1)|y|}dy\,ds\Big]^{1/4} \\
&\qquad \times \left(\int_{0}^{t}\int |p_{t-s}(y-x)-p_{t'-s}(y-x')|^{8/3}dyds\right)^{3/4} \\
&\qquad \times C'(T,\lambda,q)\left(|x-x'|^{q-1}+|t-t'|^{(q-1)/2}\right)e^{-\lambda(q-1)|x|} \\
& \hspace{4cm} \mbox{(H\"older's inequality and Lemma \ref{lem:mp-4.3})} \\
& \leq C'(T,\lambda,q)\left(|x-x'|^{q-1}+|t-t'|^{(q-1)/2}\right)e^{-\lambda(q-1)|x|}
\end{align*}
by Lemma~\ref{qmom}(a) (recall that $|u|=|U-V|\le \bar U+\bar V$) and an elementary calculation.
The result follows.\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
\noindent{\bf Proof of Proposition~\ref{prop:2.1}} Recall that $\epsilon_n=\frac{1}{n}$. Lemma~\ref{lem:N-satisfies} allows us to conclude that $N_{\epsilon_n}(t,x)$ is tight in $C({\mathbb{R}}_+,C_{\rm rap})$
as $n\to\infty$. Hence by Lemma~\ref{Iconv} and \eqref{eq:spde-u-ep1}, $\{u_{\epsilon_n}\}$ is $C$-tight
in $D({\mathbb{R}}_+,C_{\rm rap})$.
It remains to show that any limit point satisfies the equation \eqref{spde} (it will then necessarily be a $C_{\rm rap}$-valued solution). Recall from \eqref{eq:spde-u-ep} we have
\begin{eqnarray}
\label{eq:spde-u-ep2}
\langle u_{\epsilon}(t),\phi\rangle&=&\sum_{i}1(s_i\le t)\langle J_\epsilon^{x_i},\phi\rangle-
\sum_{j}1(t_j\le t)\langle J_\epsilon^{y_j},\phi\rangle \\
\nonumber&& +\int_0^t \frac{1}{2}\langle u_{\epsilon}(s),\Delta\phi\rangle ds
+ \int_0^t\int |u_{\epsilon}(s,x)|^{\gamma} \phi(x) W(ds,dx),
\end{eqnarray}
for $\phi\in C^\infty_c$.
If $\phi\in C_c({\mathbb{R}})$, then a simple calculation using the strong law of large
numbers shows that with probability 1,
\begin{align}\label{immcvgt}
\lim_{n\to\infty}\sum_{i}1(s_i\le t)\langle J_{\epsilon_n}^{x_i},\phi\rangle
&=(t\wedge 1)\int_{0}^{1}\phi(x)dx \\
\nonumber\lim_{n\to\infty}\sum_{j}1(t_j\le t)\langle J_{\epsilon_n}^{y_j},\phi\rangle
&=(t\wedge 1)\int_{0}^{1}\phi(x)dx.
\end{align}
It is easy to interpolate in $t$ and conclude that the above convergence is
uniform in $t$ with probability 1. By considering a countable dense set of $\phi$ in
$C_c({\mathbb{R}})$, we may conclude that with probability 1 for all $\phi\in C_c({\mathbb{R}})$ the
convergence in \eqref{immcvgt} holds uniformly in $t$.
Choose a subsequence
$\{n_k\}$ so that $u_{\epsilon_{n_k}}$ converges weakly to $u$ in
$D({\mathbb{R}}_+,C_{\rm rap})$ where $u$ has continuous paths. To ease eye strain we
write $u_k$ for $u_{\epsilon_{n_k}}$. By Skorokhod's theorem we may change spaces
so that (recall convergence in cadlag space $D$ to a continuous path means
uniform convergence on compacts)
\[
\lim_{k\to\infty}\sup_{t\le T}d(u_{k}(t),u(t))=0\quad\hbox{for all }T>0\quad a.s.
\]
This fact and the above convergence in \eqref{immcvgt} show that with
probability 1 for all $\phi\in C_c^\infty$, the left-hand side of
\eqref{eq:spde-u-ep2} and the first three terms on the right-hand side of the same
equation converge uniformly in $t$ to the same terms but with $u$ in place of
$u_\epsilon$, or in the case of \eqref{immcvgt}, to the right-hand side of
\eqref{immcvgt}.
Hence the last term on the right-hand side of \eqref{eq:spde-u-ep2} must also converge uniformly in $t$ a.s. to a continuous limit $M_t(\phi)$. So for all $\phi\in C_c^\infty$ we have
\begin{equation}\label{martp}
\langle u_t,\phi\rangle=\int_0^t \frac{1}{2}\langle u(s),\Delta\phi\rangle\,ds+M_t(\phi).
\end{equation}
We see that $M_t(\phi)$ is the a.s. limit of the stochastic integral in
\eqref{eq:spde-u-ep2}. Using the boundedness of the moments uniformly in $\epsilon$
from Lemma~\ref{qmom} it is now standard to deduce that $M_t(\phi)$ is a
continuous $\mathcal{F}_t$-martingale with square function
$\int_0^t\int |u(s,x)|^{2\gamma}\phi(x)^2\, dx\,ds$. Here $\mathcal{F}_t$ is the
right continuous filtration generated by $t\to u_t$. It is also routine to
construct a white noise $W$, perhaps on an enlarged space, so that
$M_t(\phi)=\int_0^t\int u(s,x)^{\gamma}\phi(x)dW(s,x)$ for all $t\ge 0$ a.s.
for all $\phi\in C_c^\infty$. Put this into \eqref{martp} to see that $u$ is
a $C_{\rm rap}$-valued solution of \eqref{spde} and we are done. \hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Proof of Lemma~\ref{rhobnd}}\label{sec:lem4.4}
\setcounter{equation}{0}
If $a>0$, $1>\gamma\ge 1/2$ and $X_0\in C^+_{\rm rap}$, then Theorems 2.5 and
2.6 of \cite{shi94} show the existence of continuous $C^+_{\rm rap}$-valued
solutions to
\begin{equation}\label{gammaspde}
\frac{\partial X}{\partial t}=\frac{1}{2} \Delta X+a X^\gamma\dot W,
\end{equation}
where as usual $\dot W$ is a space-time white noise on ${\mathbb{R}}_+\times{\mathbb{R}}$.
Theorem~1.1 of \cite{myt98w} then shows the laws
$\{P_{X_0}:X_0\in C^+_{\rm rap}\}$ of these processes on
$C({\mathbb{R}}_+,C^+_{\rm rap})$ are unique.
We start with a quantified version of Theorem~3.5 of \cite{mp92} applied to the particular equation \eqref{gammaspde}.
\begin{lemma}\label{qMP} Assume $X$ satisfies \eqref{gammaspde} with $X_0=J_\epsilon^{x_0}$ for $x_0\in{\mathbb{R}}$ and $\epsilon\in(0,1]$. If $\gamma\in(1/2,3/4)$ choose $\delta=\delta(\gamma)\in(0,1/5)$ sufficiently small so that
$\beta_0=\beta_0(\gamma)=\frac{2\gamma-\delta}{1-\delta}\in (1,3/2)$ and for $N>1$, define
\[T_N=\inf\Bigl\{t\ge 0:\int X(t,x)^\delta\,dx\ge N\Bigr\}.\]
If $\gamma=1/2$, set $\beta_0=1$ and $T_N=\infty$. For $\delta_0\in(0,1/4]$, define
\begin{equation}\label{rhodef2}\rho=\inf\left\{t\ge 0:S(X_t)\not\subset [x_0-\epsilon^{1/2}-t^{(1/2)-\delta_0}, x_0+\epsilon^{1/2}+t^{(1/2)-\delta_0}]\right\}.
\end{equation}
There is a $c_{\ref{qMP}}>0$ (depending on $\gamma$) so that
\[P(\rho\le t\wedge T_N)\le c_{\ref{qMP}}a^{-1}N^{\beta_0-1}\epsilon\exp(-t^{-\delta_0}/c_{\ref{qMP}})\hbox{ for all }\epsilon,t\in(0,1].\]
\end{lemma}
\paragraph{Proof} Since $X$ is unique in law, the construction in Section 4 of
\cite{mp92} allows us to assume the existence of a historical process $H_t$, a
continuous $M_F(C)$-valued process, associated with $X$. Here $C$ is the
space of continuous ${\mathbb{R}}$-valued paths. $H$ will satisfy the martingale
problem $(M_{X_0})$ in \cite{mp92}, and the relationship with $X$ is that
\begin{equation}\label{HX}
H_t(\{y\in C:y_t\in B\})=X_t(B)\quad\hbox{for all $t\ge 0$ and Borel subsets $B$ of }{\mathbb{R}}.
\end{equation}
Hence the hypotheses of Theorem~3.5 of \cite{mp92} are satisfied with $a_k\equiv a$ for all $k$. If $I_t=[x_0-\sqrt \epsilon-t^{(1/2)-\delta_0},x_0+\sqrt \epsilon+t^{(1/2)-\delta_0}]$, that result implies $S(X_t)\subset I_t$ for small enough $t$ a.s. but we need to quantify this inclusion and so will follow the proof given there, pointing out some minor changes and simplifications as we go.
If $\gamma=1/2$, $X$ is the density of one-dimensional super-Brownian motion and the argument in \cite{mp92} and its quantification are both much easier. As a result we will assume $3/4>\gamma>1/2$ in what follows and leave the simpler case $\gamma=1/2$ for the reader. The fact that $a_k=a$ for all $k$ (that is, for us $a(u)=au^\gamma$ for all $u$ in the notation of \cite{mp92}), means that in the localization in \cite{mp92}, the times $\{T_N\}$ may be chosen to agree with our definition of $T_N$. We will work with the cruder modulus of continuity, $\psi(t)=\frac{1}{2} t^{(1/2)-\delta_0}$, in place of the more delicate $ch(t)=c(t\log^+(1/t))^{1/2}$ in \cite{mp92}, leading to better bounds.
If
\begin{align*}G_{n,j,k}=\{y\in C:|y(k2^{-n})&-y(j2^{-n})|>\psi((k-j)2^{-n})\},\\
&0\le j<k;\ j,k,n\in{\mathbb{Z}}_+,\end{align*}
and $B$ is a standard one-dimensional Brownian motion,
then for $k-j\le 2^{n/2}$, (3.16) of \cite{mp92} becomes
\begin{align*}
Q_{X_0}&(H_{(k+1)2^{-n}}(G_{n,j,k})>0,\ T_N\ge (k+1)2^{-n})\\
&\le c_1 N^{\beta_0-1}a^{-1}2^n X_0(1) P\left(|B(k2^{-n})-B(j2^{-n})|>\psi((k-j)2^{-n})\right)^{2-\beta_0}\\
&\le c_2 N^{\beta_0 -1}a^{-1}2^n\epsilon\exp\left(-\frac{1}{16} 2^{n\delta_0}\right)\quad\hbox{(recall $\beta_0<3/2$).}
\end{align*}
Now sum the above bound over $0\le j<k\le 2^n$, $k-j\le 2^{n/2}$, $n\ge m$ and argue as in the proof of Theorem~3.5 in \cite{mp92} to see that if
$$\eta_m=c_3N^{\beta_0-1}a^{-1}\epsilon\exp\left(-2^{(m\delta_0/2)-4}\right),$$
then with probability at least $1-\eta_m$,
\begin{align*}H_t(G_{n,j,k})=0\hbox{ for all }&0\le j<k\le 2^n,\ k-j\le 2^{n/2},\ (k+1)2^{-n}\le T_N,\\
& t\ge (k+1)2^{-n},\hbox{ and }n\ge m.\end{align*}
Rearranging this as in the proof of Theorem 3.5 of \cite{mp92}, we have with probability at least $1-\eta_m$,
\begin{align}\label{levymod}
|y(k2^{-n})-&y(j2^{-n})|\le \psi((k-j)2^{-n})\hbox{ for all }0\le j<k,\ k-j\le 2^{n/2},\\
\nonumber&(k+1)2^{-n}\le t
\hbox{ and }n\ge m\hbox{ for }H_t-a.a. \ y\hbox{ for all }t\le T_N\wedge 1.
\end{align}
Next, we can argue as in the last part of the proof of \cite{mp92}, which was
a slightly modified version of L\'evy's classical derivation of the exact
Brownian modulus of continuity, to see that \eqref{levymod} implies
\begin{align*}|y(v)-y(u)|\le 2\psi(|v-u|)&\hbox{ for all }0\le u<v\le t\hbox{ satisfying }|v-u|\le 2^{-m/2}\\
&\hbox{for }H_t-a.a.\ y\ \ \hbox{ for all }t\le T_N\wedge 1.\end{align*}
In particular, the above implies that
\[P(|y(t)-y(0)|\le 2\psi(t)\ H_t-a.a.\ y\ \hbox{for all }t\le 2^{-m/2}\wedge T_N)\ge 1-\eta_m.\]
Now $H_t(|y(0)-x_0|>\sqrt\epsilon)$ is a non-negative martingale starting at $0$ by the martingale problem for $H$ (just as in the proof of Corollary~3.9 in \cite{mp92}) and so is identically $0$ for all $t$ a.s. Therefore, the above and \eqref{HX} imply that
\[P(\rho<2^{-m/2}\wedge T_N)\le \eta_m.\]
A simple interpolation argument now gives the required bound.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{cor}\label{qMP2} Assume $X$, $\delta_0$ and $\rho$ are as in Lemma~\ref{qMP}. There is a $c_{\ref{qMP2}}>0$, depending on $a$, $\delta_0$ and $\gamma$, so that
\[P(\rho\le t)\le c_{\ref{qMP2}}\,\epsilon(t\vee \epsilon)\hbox{ for all }t,\epsilon\in(0,1].\]
\end{cor}
\paragraph{Proof} We clearly may assume $x_0=0$ by translation invariance. By Lemma~\ref{qMP} with $N=N_0\equiv8$ and $\beta_0$, $T_{N_0}$ as in that result, we have
\begin{equation}\label{rhobndI}
P(\rho\le t)\le c_{\ref{qMP}}a^{-1}8^{\beta_0-1}\epsilon\exp(-t^{-\delta_0}/c_{\ref{qMP}})+P(t\wedge T_{N_0}<\rho\le t).
\end{equation}
The result is now immediate if $\gamma=1/2$, so we assume $\gamma\in(1/2,3/4)$. If $\delta\in(0,\frac{1}{5})$ is as in Lemma~\ref{qMP},
$I_s=[-\sqrt\epsilon-s^{(1/2)-\delta_0},\sqrt\epsilon+s^{(1/2)-\delta_0}]$, and $0<t\le 1$, then
\begin{eqnarray}
\nonumber P(t\wedge T_{N_0}<\rho\le t)&\le & P(T_{N_0}<t\wedge \rho)\\
\nonumber&\le & P\left(\int_{I_s} X(s,x)^\delta\,dx>8\mbox{ for some }s\le t\wedge\rho\right)\\
\nonumber&\le & P\left(\left(\int X(s,x)\,dx\right)^\delta\,|I_s|^{1-\delta}>8\mbox{ for some }s\le t\right)
\\
\label{maxX}&\le & P(\sup_{s\le t} X_s(1)>\lambda),
\end{eqnarray}
where
$\lambda=8^{1/\delta}\bigl[2\bigl(\sqrt \epsilon+t^{(1/2)-\delta_0}\bigr)\bigr]^{-(1-\delta)/\delta}$.
Recall that $X_t(1)$ is a continuous non-negative local martingale starting
at $\epsilon$, and so by the weak $L^1$ inequality and Fatou's Lemma the right-hand
side of \eqref{maxX} is at most
\begin{eqnarray*}
\lambda^{-1}E[X_0(1)]&\le&\epsilon\,2^{-1-(2/\delta)}\left(\sqrt\epsilon+t^{1/4}\right)^{(1-\delta)/\delta}\quad\hbox{(by $\delta_0\le 1/4$)}\\
&\le & \epsilon\left[\max(t,\epsilon^2)\right]^{(1-\delta)/(4\delta)}\\
&\le& \epsilon\max(t,\epsilon)\quad\hbox{(since $\delta<1/5$)}.
\end{eqnarray*}
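In this chain the first inequality also uses $8^{-1/\delta}2^{(1-\delta)/\delta}=2^{-1-(2/\delta)}$; the second uses $\sqrt\epsilon+t^{1/4}\le 2\max(\epsilon^2,t)^{1/4}$ together with $2^{-1-(2/\delta)}2^{(1-\delta)/\delta}=2^{-2-(1/\delta)}\le 1$; and the third uses $(1-\delta)/(4\delta)\ge 1$ and $\max(t,\epsilon^2)\le\max(t,\epsilon)\le 1$.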
We use the above bound in \eqref{rhobndI} to conclude that
\begin{eqnarray*}
P(\rho\le t)&\le &\left[c_{\ref{qMP}}a^{-1}8^{\beta_0-1}\exp(-t^{-\delta_0}/c_{\ref{qMP}})+(t\vee \epsilon)\right]\epsilon\\
&\le & c_{\ref{qMP2}}(t\vee \epsilon)\epsilon.
\end{eqnarray*}
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
The next proposition will allow us to extend the above bound to a larger class of SPDE's. It will be proved in Section~\ref{sec:suppcomp}.
\begin{prop}
\label{prop:1}
Let $a>0$, $1>\gamma\ge 1/2$, and $Z$ be a continuous $C^+_{\rm rap}$-valued solution to
the following SPDE
\begin{eqnarray}
\frac{\partial Z}{\partial t}&=& \frac{1}{2}\Delta Z + \sigma(Z_s\,, s,\omega)\dot W^1,
\end{eqnarray}
where $\dot W^1$ is a space time white noise, $\sigma$ is Borel$\times$previsable, and
\[ \sigma(y,s,\omega)\geq ay^{\gamma},\;\;\forall s,y, P-{\rm a.s.}\; \omega.\]
Assume also for each $t>0$ we have
\[\sup_{s\le t,x\in{\mathbb{R}}}E[Z(s,x)^2]<\infty.\]
Let $X$ be a continuous $C^+_{\rm rap}$-valued solution to \eqref{gammaspde}
with $Z(0,\cdot)=X(0,\cdot)\in C^+_{\rm rap}$.
Let $A$ be a Borel set in $ {\mathbb{R}}_+\times {\mathbb{R}}$. Then
\[ P\bigl({\rm supp}(Z)\cap A=\emptyset\bigr)\geq P_{X_0}\bigl({\rm supp}(X)\cap A=\emptyset\bigr).\]
\end{prop}
\noindent{\bf Proof of Lemma~\ref{rhobnd}} We first fix $1\le i\le N_\epsilon$ and argue conditionally on $\mathcal{F}_{s_i}$. Note that the inequalities in \eqref{barsqfnbnd} hold pointwise, that is, without integrating over space. This, together with \eqref{eq:2.5}, Lemma~\ref{pmom} and Theorem~\ref{thm:1.1}, shows that the hypotheses of Proposition~\ref{prop:1} hold with $Z(t,x)=\bar U^i(s_i+t,x)$, $Z_0=J_\epsilon^{x_i}$, and $a=2^{\frac{1}{2}-\gamma}$. We apply this result to the open set
\[A=A_t=\{(s,y):|y-x_i|>\epsilon^{1/2}+s^{(1/2)-\delta_0},\ 0<s<t\}.\]
Therefore if $\rho$ is as in Lemma~\ref{qMP}, then
\[P(\rho_i<t)=P_{J_\epsilon^{x_i}}(\hbox{supp}(Z)\cap A\neq\emptyset)\le P(\rho<t).\]
Corollary~\ref{qMP2} now shows there is a $c_{\ref{rhobnd}}= c_{\ref{rhobnd}}(\gamma,\delta_0)$ so that for $\epsilon,t\in(0,1]$,
\[P(\rho_i\le t)\le c_{\ref{rhobnd}}\epsilon(t\vee \epsilon).\]
It follows that for $p,\epsilon,t\in(0,1]$,
\[P\left(\cup_{i=1}^{pN_\epsilon}\{\rho_i\le t\}\right)\le \sum_{i=1}^{\lfloor pN_\epsilon\rfloor}P(\rho_i\le t)\le c_{\ref{rhobnd}}\lfloor pN_\epsilon\rfloor\epsilon(t\vee \epsilon)\le c_{\ref{rhobnd}}p(t\vee\epsilon)1(p\ge\epsilon).\]
This finishes the proof of Lemma \ref{rhobnd}.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Proof of Lemma~\ref{thetabnd}}\label{sec:Kgrowth}
\setcounter{equation}{0}
Let
\[G(\bar U^i)=\overline{\{(t,x):\bar U^i(t,x)>0\}}\]
be the closed graph of $\bar U^i$, and let
\[\Gamma^U_i(t)=\Gamma^U_i(t,\delta_0)=\{(s,x):s_i\le s\le s_i+t,\ |x-x_i|\le (s-s_i)^{(1/2)-\delta_0}+\epsilon^{1/2}\},\]
and let $\Gamma^V_j(t)$ be the corresponding set for $V$ with $(t_j,y_j)$ in place of $(s_i,x_i)$. It is easy to check, using the definition of $\rho_i$,
that
\begin{equation}\label{Umod} G(\bar U^i)\cap([s_i,s_i+\rho_i]\times{\mathbb{R}})\subset \Gamma_i^U(\rho_i).
\end{equation}
Of course an analogous inclusion holds for $\bar V^j$. If $K'(\cdot)$ is a non-decreasing right-continuous $M_F({\mathbb{R}})$-valued process, we let $S(K')$ denote the closed support of the associated random measure on
space-time, $K'(ds,dx)$.
\begin{lemma}\label{KinU} $S(K^{i,U})\subset G(\bar U^i)$ and $S(K^{j,V})\subset G(\bar V^j)$ for all $i,j\in{\mathbb{N}}_\epsilon$ $P$-a.s.
\end{lemma}
\paragraph{Proof} It is easy to see from \eqref{UVdefn} that $S(K^{i,U})\subset [s_i,\infty)\times{\mathbb{R}}$. Let $\mathcal{O}$ be a bounded open rectangle in $((s_i,\infty)\times{\mathbb{R}})\cap G(\bar U^i)^c$ whose corners have rational coordinates, and choose a smooth non-negative function $\phi$ on ${\mathbb{R}}$ so that $\mathcal{O}=(r_1,r_2)\times \{\phi>0\}$. Then $\bar U_r^i(\phi)=0$ for all $r\in (r_1,r_2)$ and hence for all $r\in[r_1,r_2]$ a.s. by continuity. It then follows from \eqref{UVdefn} and $U^i\le \bar U^i$ that a.s.
\[
0=U^i_{r_2}(\phi)-U^i_{r_1}(\phi)=-(K^{i,U}_{r_2}(\phi)-K^{i,U}_{r_1}(\phi)).
\]
Therefore $K^{i,U}(\mathcal{O})=0$. Taking unions over such open ``rational'' rectangles we conclude that
\[K^{i,U}(G(\bar U^i)^c\cap((s_i,\infty)\times{\mathbb{R}}))=0 \ \ a.s. \]
On the other hand from \eqref{eq:2.5},
\begin{align*}
K^{i,U}(G(\bar U^i)^c\cap(\{s_i\}\times{\mathbb{R}}))&\le K^{i,U}(\{s_i\}\times [x_i-\sqrt\epsilon,x_i+\sqrt\epsilon]^c)\\
&=0.
\end{align*}
In the last line we used \eqref{UVdefn} (recall from Section~\ref{sec:setup} this implies $U^i_s=0$ for $s<s_i$) to see that $K^{i,U}_{s_i}(\cdot)\le \langle J^{x_i},\cdot\rangle$.
The last two displays imply that $K^{i,U}(G(\bar U^i)^c)=0$ and hence the result for $K^{i,U}$. The proof for $K^{j,V}$ is the same.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\medskip
Next we need a bound on the extinction times of non-negative martingales which is a slight generalization of Lemma~3.4 of Mueller and Perkins \cite{mp92}.
\begin{lemma}\label{lem:Mhit0} Assume $\gamma'=\gamma''=\frac{1}{2}$ or $(\gamma',\gamma'')\in(1/2,1)\times [1/2,1]$. Let $M\ge 0$ be a continuous $(\mathcal{H}_t)$-local martingale and $T$ be an $(\mathcal{H}_t)$-stopping time so that for some $\delta\ge 0$ and $c_0>0$,
\begin{equation}\label{sqfnineq}
\frac{d\langle M\rangle_t}{dt}\ge c_01(t<T)M_t^{2\gamma'}(t+\delta)^{(1/2)-\gamma''}\ \hbox{for }t>0.
\end{equation}
If $\tau_M(0)=\inf\{t\ge 0:M_t=0\}$, then there is a $c_{\ref{lem:Mhit0}}(\gamma')>0$ such that
\[P(T\wedge\tau_M(0)\ge t|\mathcal{H}_0)\le c_{\ref{lem:Mhit0}}(\gamma')c_0^{-1}M_0^{2-2\gamma'}t^{\gamma''-(3/2)}\quad\hbox{for all }t\ge \delta/2.\]
\end{lemma}
\paragraph{Proof} If $\gamma'=\gamma''=\frac{1}{2}$, this follows from a
slight extension of the proof of Lemma~\ref{hittime}, so assume
$\gamma'\in(1/2,1)$. Let $V=T\wedge \tau_M(0)$. As usual there is a Brownian
motion $B(t)$ such that $M(t)=B(\langle M\rangle_t)$ for $t\le V$. By
\eqref{sqfnineq} we have
\begin{align*}
\int_0^V c_0(t+\delta)^{(1/2)-\gamma''}\,dt\le \int_0^V M_t^{-2\gamma'}\,d\langle M\rangle_t
&\le \int_0^{\langle M\rangle_V} B_u^{-2\gamma'}\,du\\
&\le \int_0^{\tau_B(0)} B_u^{-2\gamma'}\,du.
\end{align*}
If $L^x_t, x\in{\mathbb{R}},\, t\ge 0$ is the semimartingale local time of $B$, the Ray-Knight Theorem (see Theorem~VI.52.1 in \cite{rw87}) implies that the above gives
\begin{align}
\nonumber E\Bigl[(V+\delta)^{(3/2)-\gamma''}&-\delta^{(3/2)-\gamma''}\Big|\mathcal{H}_0\Bigr] \\
\nonumber &\le ((3/2)-\gamma'')c_0^{-1}\int_0^\infty x^{-2\gamma'}E(L^x_{\tau_B(0)}|B_0)\,dx\\
\nonumber&=((3/2)-\gamma'')c_0^{-1}\int_0^\infty x^{-2\gamma'}2(M_0\wedge x)\,dx\\
\label {Vlowerbnd}&\le c_1(\gamma')c_0^{-1}M_0^{2-2\gamma'}\ \hbox{(use $\gamma'>1/2$)}.
\end{align}
A bit of calculus shows that
\begin{equation}\label{calc}
(t+\delta)^{(3/2)-\gamma''}-\delta^{(3/2)-\gamma''}\ge \frac{1}{2}(\sqrt 3-\sqrt 2)t^{(3/2)-\gamma''}\hbox{ for all }t\ge \delta/2.
\end{equation}
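One way to see \eqref{calc}: with $\beta\equiv(3/2)-\gamma''\in[1/2,1]$, the map $\delta\mapsto(t+\delta)^{\beta}-\delta^{\beta}$ is non-increasing, so for $\delta\le 2t$,
\[(t+\delta)^{\beta}-\delta^{\beta}\ge (3t)^{\beta}-(2t)^{\beta}=(3^{\beta}-2^{\beta})\,t^{\beta}\ge(\sqrt 3-\sqrt 2)\,t^{\beta},\]
since $\beta\mapsto 3^{\beta}-2^{\beta}$ is increasing; this is even slightly stronger than \eqref{calc}.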
Therefore by \eqref{Vlowerbnd} and \eqref{calc}, for $t\ge \delta/2$,
\begin{align*}
P(V\ge t|\mathcal{H}_0)&\le \frac{E\Bigl[(V+\delta)^{(3/2)-\gamma''}-\delta^{(3/2)-\gamma''}\Big|\mathcal{H}_0\Bigr]}{(t+\delta)^{(3/2)-\gamma''}-\delta^{(3/2)-\gamma''}}\\
&\le \frac{2c_1(\gamma')c_0^{-1}M_0^{2-2\gamma'}}{(\sqrt 3-\sqrt 2)t^{(3/2)-\gamma''}}\\
&\equiv c_{\ref{lem:Mhit0}}(\gamma')c_0^{-1}M_0^{2-2\gamma'}t^{\gamma''-(3/2)}.\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\end{align*}
Define $\rho^V_j=\rho_j^{V,\delta_0,\epsilon}$ just as $\rho_i$ but with $\bar V^j_{t_j+t}$ in place of $\bar U^i_{s_i+t}$ and $y_j$ in place of $x_i$.
\begin{lemma}\label{rhoVbnd} $Q_i\left(\cup_{j=1}^{pN_\epsilon}\{\rho_j^V\le t\}\right)\le c_{\ref{rhobnd}}(t\vee\epsilon)p1(p\ge\epsilon)\hbox{ for all }\epsilon,p,t\in(0,1]$ and $i\in{\mathbb{N}}_\epsilon$.
\end{lemma}
\paragraph{Proof} All the $P$-local martingales and $P$-white noises arising in the definition of $\{\bar V^j,j\in{\mathbb{N}}_\epsilon\}$ remain such under $Q_i$ because they are all orthogonal to
\[\frac{dQ_i}{dP}\Big|_{\mathcal{F}_t}=1(t<s_i)+1(t\ge s_i)\frac{\bar U^i_{t\wedge(s_i+\bar\tau_i)}(1)}{\epsilon}.\]
The proof of Lemma~\ref{rhobnd} for $\{\rho_i\}$ under $P$ therefore applies to $\{\rho_j^V\}$ under $Q_i$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
Recall we are trying to show that the killing measure $K^{i,U}_t$ associated with the $i$th cluster of $U$ grows slowly enough for small $t$. We will control the amount of killing here by controlling the amount of killing coming from the $V^j$'s. The following result essentially shows that, with high probability, for small $t$ there is no killing
during $[s_i,s_i+t]$ from the $V^j$'s which are born before time $s_i$. Note it is particularly important that there is no $V$ mass at the birth site of the $U^i$ cluster.
Recall from \eqref{bardef} that $\bar\delta=\bar\delta(\gamma)=\frac{1}{3}\left(\frac{3}{2}-2\gamma\right)$. We introduce
\[\underline\rho_i^V=\min_{j:t_j\le s_i}\rho_j^V.\]
\begin{lemma}\label{presimass} There is a constant $c_{\ref{presimass}}(\gamma)>0$ so that for $0<\delta_0\le \bar\delta(\gamma)$,
\begin{align*}
Q_i\Big(\Gamma_i^U(t)\cap\Big\{\cup_{j:t_j\le s_i}G(\bar V^j)\Big\}\neq&\emptyset, \underline \rho^V_i>2t\Big)\le c_{\ref{presimass}}(\gamma)(\epsilon\vee t)^{\bar \delta}\\
&\hbox{for all }\epsilon,t\in(0,1]\hbox{ and }s_i\le t.
\end{align*}
\end{lemma}
\paragraph{Proof}
Assume $\epsilon,t,s_i$ and $\delta_0$ are as above. Set $\alpha=\frac{1}{2}-\delta_0(\ge \frac{1}{3})$ and choose $n_0\le n_1\in{\mathbb{Z}}_+$ so that
\begin{equation}\label{n0cond}
2^{-n_0-1}<t\vee\epsilon\le 2^{-n_0},\quad 2^{-n_1-1}<\epsilon\le 2^{-n_1}.
\end{equation}
Assume that
\begin{equation}\label{rhoVbig}\underline \rho_i^V>2t,
\end{equation}
until otherwise indicated. Suppose $t_j\le s_i$ (hence $t_j<s_i$)
and
$$(t_j,y_j)\notin [0,s_i)\times [x_i-7\cdot 2^{-n_0\alpha},x_i+7\cdot 2^{-n_0\alpha}].$$ Then
\[|y_j-x_i|>7\cdot 2^{-n_0\alpha}\ge 7(t\vee\epsilon)^\alpha\ge t^\alpha+(t+s_i-t_j)^\alpha+2\sqrt\epsilon,\]
and so
\[\Gamma_i^U(t)\cap\Gamma_j^V(s_i+t-t_j)=\emptyset.\]
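The last inequality in the first of the two preceding displays can be checked directly: $s_i\le t$ and $t_j\ge 0$ give $t+s_i-t_j\le 2t$, while $\alpha\le\frac{1}{2}$ and $t\vee\epsilon\le 1$ give $(2t)^{\alpha}\le\sqrt2\,(t\vee\epsilon)^{\alpha}$ and $2\sqrt\epsilon\le 2(t\vee\epsilon)^{\alpha}$, so
\[t^\alpha+(t+s_i-t_j)^\alpha+2\sqrt\epsilon\le\bigl(1+\sqrt 2+2\bigr)(t\vee\epsilon)^{\alpha}\le 7(t\vee\epsilon)^{\alpha}.\]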
By \eqref{rhoVbig} we have $\rho^V_j>s_i-t_j+t$ and so by \eqref{Umod}, or more precisely its analogue for $\bar V^j$, we have
\begin{equation} \label{possVj}
\Gamma_i^U(t)\cap G(\bar V^j)\subset \Gamma_i^U(t)\cap \Gamma_j^V(s_i+t-t_j)=\emptyset.
\end{equation}
We therefore have shown that, assuming \eqref{rhoVbig},
\begin{equation}\label{possj}
\{(t_j,y_j):t_j\le s_i, \Gamma_i^U(t)\cap G(\bar V^j)\not= \emptyset\}\subset [0,s_i)\times [x_i-7\cdot 2^{-n_0\alpha},x_i+7\cdot 2^{-n_0\alpha}].
\end{equation}
Next we cover the rectangle on the right side of the above by rectangles as follows:
\begin{align*}
&R_n^0=[s_i-2^{-n+1},s_i-2^{-n}]\times[x_i-7\cdot 2^{-n\alpha},x_i+7\cdot 2^{-n\alpha}],\\
&R_n^r=[s_i-2^{-n},s_i]\times[x_i+7\cdot2^{-(n+1)\alpha},x_i+7\cdot 2^{-n\alpha}],\\
&R_n^\ell=[s_i-2^{-n},s_i]\times[x_i-7\cdot 2^{-n\alpha},x_i-7\cdot 2^{-(n+1)\alpha}].
\end{align*}
Then it is easy to check that
\begin{align}
\label{Rcontain1}
\cup_{n=n_0}^\infty (R^0_n\cup R^r_n\cup R_n^\ell)&\supset[s_i-2^{-n_0+1},s_i)\times [x_i-7\cdot 2^{-n_0\alpha},x_i+7\cdot 2^{-n_0\alpha}]\\
\label{Rcontain}&\supset [0,s_i)\times[x_i-7\cdot 2^{-n_0\alpha},x_i+7\cdot 2^{-n_0\alpha}].
\end{align}
We group together those $\bar V^j$'s which have their initial ``seeds'' in each of the above rectangles. That is, for $q=0,\ell,r$ consider
\begin{align*}
&V^{n,q}(t,x)=\sum_j1((t_j,y_j)\in R_n^q)V^j(t,x),\\
&{\tilde V}^{n,q}(t,x)=\sum_j1((t_j,y_j)\in R_n^q){\tilde V}^j(t,x),\\
&{\bar V}^{n,q}(t,x)=\sum_j1((t_j,y_j)\in R_n^q){\bar V}^j(t,x).
\end{align*}
We also let $V_t^{n,q}$, $\tilde V_t^{n,q}$ and $\bar V_t^{n,q}$ denote the corresponding measure-valued processes.
It follows from \eqref{possj} and \eqref{Rcontain} that
\begin{align}\label{HitBound0}
Q_i&(\cup_{t_j\le s_i}(G(\bar V^j)\cap\Gamma_i^U(t))\not=\emptyset,\ \underline\rho_i^V>2t)\\
\nonumber&\le \sum_{n=n_0}^{n_1}\sum_{q=0,r,\ell}Q_i(G(\bar V^{n,q})\cap\Gamma_i^U(t)\not=\emptyset,\ \underline\rho_i^V>2t)\\
\nonumber &\qquad+
Q_i(\cup_{n=n_1+1}^\infty\cup_{q=0,r,\ell}
\,(G(\bar V^{n,q})\cap\Gamma_i^U(t))\not=\emptyset).
\end{align}
We use different arguments to show that each of the two terms on the right hand side of~\eqref{HitBound0} is small.
For the second term a very crude argument works. Namely, for
the supports of the $\bar V^j$ clusters with initial ``seeds'' in
$\cup_{n=n_1+1}^\infty\,(R_n^0\cup R_n^r\cup R_n^\ell)$ to intersect the
support of $U^i$, the $\bar V^j$ clusters must be born in $\cup_{n=n_1+1}^\infty\,(R_n^0\cup R_n^r\cup R_n^\ell)$, and the probability of this event
is already small. More precisely,
\begin{align}\label{HitBound01}
Q_i&(\cup_{n=n_1+1}^\infty\cup_{q=0,r,\ell}
\,(G(\bar V^{n,q})\cap\Gamma_i^U(t))\not=\emptyset)\\
\nonumber
&\le Q_i\Bigl(\eta_\epsilon^-\Bigl(\cup_{n=n_1+1}^\infty\,(R_n^0\cup R_n^r\cup R_n^\ell)\Bigr)>0\Bigr).
\end{align}
By Proposition~\ref{girs} and
the decomposition for $\bar U^i(1)$ in \eqref{eq:2.5} (see also~\eqref{barmassmart}), we have
\begin{equation}\label{Qiunif}
Q_i((x_i,y_j)\in A)=E_P\left(\frac{\bar U^i_{s_i+[(t_j-s_i)^+\wedge\bar\tau_i]}(1)}{\epsilon}1((x_i,y_j)\in A)\right)=P((x_i,y_j)\in A).
\end{equation}
This, and the analogue of \eqref{Rcontain1} with $n_1+1$ in place of $n_0$, imply that the
right-hand side
of~\eqref{HitBound01} is at most
\begin{align}\label{tailbnd}
Q_i&\left(\eta_\epsilon^-([s_i-2^{-n_1},s_i)\times[x_i-7\cdot 2^{-(n_1+1)\alpha},x_i+7\cdot 2^{-(n_1+1)\alpha}])>0\right)\\
\nonumber &\le (\epsilon^{-1}2^{-n_1}+1)(14\cdot 2^{-(n_1+1)\alpha})\le 42\epsilon^\alpha.
\end{align}
Substitute this bound into \eqref{HitBound0} to get
\begin{align}\label{HitBound1}
Q_i&\left(\cup_{t_j\le s_i}(G(\bar V^j)\cap\Gamma_i^U(t))\not=\emptyset,\ \underline\rho_i^V>2t\right)\\
\nonumber&\le \sum_{n=n_0}^{n_1}\sum_{q=0,r,\ell}Q_i(G(\bar V^{n,q})\cap\Gamma_i^U(t)\not=\emptyset,\ \underline\rho_i^V>2t)
+ 42\epsilon^\alpha.
\end{align}
Now we are going to bound each term in the sum on the right hand side of~\eqref{HitBound1}.
To this end, in what follows, we assume that $n_0\leq n\leq n_1$, and, for $q=0,r,\ell$, set
\begin{align}\label{NDef}
N_t^{n,q}=\sum_j &\mathbf{1}((t_j,y_j)\in R_n^q)\int_0^t\int_{{\mathbb{R}}}\Bigl(V(s,x)^{2\gamma-1}V^j(s,x)\\
\nonumber&+(\bar V(s,x)^{2\gamma}-V(s,x)^{2\gamma})\frac{\tilde V^j(s,x)}{\tilde V(s,x)}\Bigr)^{1/2}\bar W^{j,V}(ds,dx).
\end{align}
Note that $N^{n,q}$ is a continuous local martingale under $Q_i$.
The treatment of the cases $q=0$ and $q=r,\ell$ is different. First, let
$q=0$. Basically, in this case, we will show that, on the event $\{\underline\rho^V_i>2t\}$, the total
mass of $\bar V^{n,0}$ dies out with high probability before the time $s_i$ (and, in fact, even before $s_i-2^{-n-1}$), and
hence, with this high probability, the support of $\bar V^{n,0}$ does not intersect $\Gamma^U_i(t)$. Let us make this precise.
We have from \eqref{eq:2.5},
\begin{equation}\label{barvdecomp}
\bar V^{n,0}_{t+(s_i-2^{-n})^+}(1)=\bar V^{n,0}_{(s_i-2^{-n})^+}(1)+\bar M^{n,0}_t,
\end{equation}
where
\[\bar V^{n,0}_{(s_i-2^{-n})^+}(1)=\int\int\mathbf{1}((s,y)\in R_n^0)\eta_\epsilon^-(ds,dy)+N^{n,0}_{(s_i-2^{-n})^+},\]
and
\begin{equation}\label{MDef}
\bar M_t^{n,0}=N^{n,0}_{t+(s_i-2^{-n})^+}-N^{n,0}_{(s_i-2^{-n})^+}
\end{equation}
is a continuous $\mathcal{F}_{t+(s_i-2^{-n})^+}$-local martingale under $Q_i$.
Assume for now that $s_i>2^{-n}$ since otherwise $\bar V^{n,0}_{s_i}(1)=0$ and the bound \eqref{barVn0} below is trivial. An easy localization argument shows that (recall that $n_0\le n\le n_1$),
\begin{align}\label{initmassbnd}
Q_i\Big(&\bar V^{n,0}_{(s_i-2^{-n})}(1)\ge 2^{-n(1+\alpha-\bar\delta)}\Big)\\
\nonumber&\le 2^{n(1+\alpha-\bar\delta)}E_{Q_i}\Bigl(\int\int1((s,y)\in R_n^0)\eta^-_\epsilon(ds,dy)\Bigr)\\
\nonumber&\le 2^{n(1+\alpha-\bar\delta)}\epsilon[\epsilon^{-1}2^{-n}+1]14\cdot2^{-n\alpha}\qquad\hbox{(by \eqref{Qiunif})}\\
\nonumber&\le 14(2^{-n\bar\delta})(2^n\epsilon+1)\le 28\cdot2^{-n\bar\delta}.
\end{align}
Now from \eqref{NDef} and \eqref{MDef}, if $t'\equiv s_i-2^{-n}+t< T'\equiv\min_{j:t_j\le s_i}(\rho^V_j+t_j)$,
then
\begin{align}
\nonumber\frac{d}{dt}\langle \bar M^{n,0}\rangle_t&=\int V(t',x)^{2\gamma-1}\bar V^{n,0}(t',x)+(\bar V(t',x)^{2\gamma}-V(t',x)^{2\gamma})\frac{\tilde V^{n,0}(t',x)}{\widetilde{V}(t',x)}\,dx\\
\nonumber&\ge \int V^{n,0}(t',x)^{2\gamma}+\widetilde{V}^{n,0}(t',x)^{2\gamma}\,dx\\
\nonumber&\ge 2^{-2\gamma}\int \bar V^{n,0}(t',x)^{2\gamma}1(|x-x_i|\le 7\cdot 2^{-n\alpha}+(2^{-n}+t)^\alpha+\sqrt\epsilon)\,dx\\
\label{sqfnM0bnd}&\ge 2^{-2\gamma}\bar V^{n,0}_{t'}(1)^{2\gamma}(2[7\cdot 2^{-n\alpha}+(2^{-n}+t)^\alpha+\sqrt \epsilon])^{1-2\gamma}.
\end{align}
In the last line we used Jensen's inequality and the fact that $T'> t'$ implies $\bar V^{n,0}(t',\cdot)$ is supported in the closed interval with endpoints
$x_i\pm (7\cdot2^{-n\alpha}+(t+2^{-n})^\alpha+\sqrt\epsilon)$. A bit of arithmetic (recall $2^{-n}\ge \epsilon$ for $n\le n_1$) shows that \eqref{sqfnM0bnd} implies for some
$c(\gamma)>0$,
\begin{align}\label{sqfnM0bnd2}
\frac{d}{dt}&\langle \bar M^{n,0}\rangle_t\ge c(\gamma)\Bigl(\bar V^{n,0}_{t+(s_i-2^{-n})}(1)\Bigr)^{2\gamma}[2^{-n}+t]^{\alpha(1-2\gamma)}\\
\nonumber&\hbox{for }t<T\equiv \Bigl(\min_{j:t_j\le s_i}(\rho^V_j+t_j)-(s_i-2^{-n})\Bigr)^+.
\end{align}
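The arithmetic is as follows: for $n\le n_1$ we have $\sqrt\epsilon\le 2^{-n/2}\le 2^{-n\alpha}\le(2^{-n}+t)^{\alpha}$, so
\[2\bigl[7\cdot 2^{-n\alpha}+(2^{-n}+t)^{\alpha}+\sqrt\epsilon\bigr]\le 18\,(2^{-n}+t)^{\alpha},\]
and since $1-2\gamma\le 0$, raising to the power $1-2\gamma$ reverses the inequality; hence \eqref{sqfnM0bnd2} holds with $c(\gamma)=2^{-2\gamma}18^{1-2\gamma}$.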
Note that $T$ is an $\mathcal{F}_{(s_i-2^{-n})+t}$-stopping time. Therefore \eqref{sqfnM0bnd2} allows us to apply Lemma~\ref{lem:Mhit0} to $t\to \bar V^{n,0}_{(s_i-2^{-n})+t}(1)\equiv M_t$ with $\gamma'=\gamma$, $\gamma''=\gamma-\delta_0(2\gamma-1)$, and $\delta=2^{-n}$. Here notice that $\delta_0\le 1/6$ implies $\gamma''\in[\frac{1}{2},\frac{3}{4}]$ and $\gamma''=1/2$ if $\gamma=1/2$. Therefore
Lemma~\ref{lem:Mhit0}, the fact that $\underline\rho^V_i>2t$ implies $T>t\ge s_i>2^{-n}$, and \eqref{initmassbnd} together imply
\begin{align}\label{barVn0}
Q_i&(\bar V^{n,0}_{s_i-2^{-n-1}}(1)>0,\ \underline\rho^V_i>2t)\\
\nonumber&\le Q_i(\bar V^{n,0}_{s_i-2^{-n}}(1)\ge 2^{-n(1+\alpha-\bar\delta)})\\
\nonumber&\qquad +E_{Q_i}\Bigl[Q_i(T\wedge \tau_M(0)\ge 2^{-n-1}|\mathcal{F}_{s_i-2^{-n}})1(\bar V^{n,0}_{s_i-2^{-n}}(1)<2^{-n(1+\alpha-\bar\delta)})\Bigr]\\
\nonumber&\le (28)2^{-n\bar\delta}+c_{\ref{lem:Mhit0}}(\gamma)c(\gamma)^{-1}2^{-n(1+\alpha-\bar\delta)(2-2\gamma)}2^{-(n+1)(\gamma-\delta_0(2\gamma-1)-(3/2))}\\
\nonumber&\le c'(\gamma)\Bigl(2^{-n\bar\delta}+2^{-n((3/2)-2\gamma-2(1-\gamma)\bar\delta-\delta_0)}\Bigr)\quad(\hbox{by the definition of }\alpha)\\
\nonumber&\le c'(\gamma)\left(2^{-n\bar\delta}+2^{-n(3\bar\delta-2(1-\gamma)\bar\delta-\delta_0)}\right)\quad(\hbox{by the definition of }\bar\delta)\\
\nonumber &\le c_0(\gamma)2^{-n\bar\delta},
\end{align}
where $\delta_0\le \bar\delta$ and $\gamma\ge 1/2$ are used in the last line.
Next consider $\bar V^{n,r}$. The analogue of \eqref{barvdecomp} now is
\[\bar V^{n,r}_{s_i+t}(1)=\bar V^{n,r}_{s_i}(1)+\bar M^{n,r}_t,\]
where
\[\bar M^{n,r}_t=N^{n,r}_{s_i+t}-N^{n,r}_{s_i}.\]
An argument similar to the derivation of \eqref{initmassbnd} shows that
\begin{equation}\label{initrmassbnd}
Q_i(\bar V^{n,r}_{s_i}(1)\ge 2^{-n(1+\alpha-\bar\delta)})\le 28\cdot2^{-n\bar\delta}.
\end{equation}
Next argue as in \eqref{sqfnM0bnd} and \eqref{sqfnM0bnd2} to see that for $s_i+t<T'\equiv\min_{j:t_j\le s_i}(\rho^V_j+t_j)$,
\begin{align*}
\frac{d}{dt}\langle \bar M^{n,r}\rangle_t&\ge 2^{-2\gamma}\bar V^{n,r}_{s_i+t}(1)^{2\gamma}\Bigl([7\cdot 2^{-(n+1)\alpha}+(2^{-n}+t)^{(1/2)-\delta_0}+\sqrt\epsilon]2\Bigr)^{1-2\gamma}\\
&\ge c'(\gamma)\Bigl(\bar V^{n,r}_{s_i+t}(1)\Bigr)^{2\gamma}(2^{-n}+t)^{\alpha(1-2\gamma)},
\end{align*}
where we again used $n_0\le n\le n_1$.
Now apply Lemma~\ref{lem:Mhit0} and \eqref{initrmassbnd}, as in the derivation of \eqref{barVn0}, to conclude that
\begin{equation}\label{barVnr}
Q_i(\bar V^{n,r}_{s_i+2^{-n}}(1)>0,\ \underline\rho^V_i>2t)\le c_1(\gamma)2^{-n\bar\delta}.
\end{equation}
If $\bar V_{s_i+2^{-n}}^{n,r}(1)=0$, then $\bar V^{n,r}_u(1)=0$ for all $u\ge s_i+2^{-n}$, and so if in addition, $\underline\rho^V_i>2t$, then by the definition of $\rho^V_j$,
\begin{align}\label{Vnrinclus}G(\bar V^{n,r})\subset \Bigl\{(s,x):&s_i-2^{-n}\le s\le s_i+2^{-n},\\
\nonumber& 7\cdot2^{-(n+1)\alpha}-(s-s_i+2^{-n})^\alpha-\sqrt\epsilon\\
\nonumber&\le x-x_i\le7\cdot2^{-n\alpha}+(s-s_i+2^{-n})^\alpha+\sqrt\epsilon\Bigr\}.
\end{align}
A bit of algebra (using our choice of the factor $7$ and $n_0\le n\le n_1$) shows that
\[x_i+2^{-n\alpha}+\sqrt\epsilon<x_i+7\cdot 2^{-(n+1)\alpha}-(2^{-n}+2^{-n})^\alpha-\sqrt\epsilon,\]
and so the set on the right-hand side of \eqref{Vnrinclus} is disjoint from $\Gamma_i^U(t)$.
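One can check the preceding display as follows: subtracting $x_i$, and noting that $\alpha\le\frac{1}{2}$ gives $7\cdot 2^{-(n+1)\alpha}\ge 7\cdot 2^{-1/2}\,2^{-n\alpha}$ and $(2^{-n}+2^{-n})^{\alpha}\le\sqrt 2\,2^{-n\alpha}$, while $\sqrt\epsilon\le 2^{-n/2}\le 2^{-n\alpha}$ for $n\le n_1$, we get
\[7\cdot 2^{-(n+1)\alpha}-(2^{-n}+2^{-n})^{\alpha}-\sqrt\epsilon\ge \bigl(7\cdot 2^{-1/2}-\sqrt 2-1\bigr)2^{-n\alpha}>2\cdot 2^{-n\alpha}\ge 2^{-n\alpha}+\sqrt\epsilon.\]
Therefore by \eqref{barVnr} we may conclude that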
\begin{equation}\label{barVnr2}
Q_i(G(\bar V^{n,r})\cap\Gamma_i^U(t)\not=\emptyset, \underline\rho^V_i>2t)\le c_1(\gamma)2^{-n\bar\delta}.
\end{equation}
Of course the same bound holds for $G(\bar V^{n,\ell})$.
Note that $\bar V^{n,0}_{s_i-2^{-n-1}}(1)=0$ implies $\bar V^{n,0}_s(1)=0$ for all $s\ge s_i-2^{-n-1}$ and so $G(\bar V^{n,0})\cap\Gamma_i^U(t)$ is empty. Therefore \eqref{barVn0} and \eqref{barVnr2} show that the summation on the right-hand side of \eqref{HitBound1} is at most
\[\sum_{n=n_0}^{n_1}(c_0(\gamma)+2c_1(\gamma))2^{-n\bar\delta}\le c_2(\gamma)(t\vee\epsilon)^{\bar\delta}.\]
Substitute the above
into \eqref{HitBound1} to see that
\begin{align*}
Q_i&(\cup_{t_j\le s_i}(G(\bar V^j)\cup\Gamma_i^U(t))\not=\emptyset,\ \underline\rho_i^V>2t)\\
&\le 42\epsilon^\alpha+c_2(\gamma)(t\vee\epsilon)^{\bar\delta}\le c_{\ref{presimass}}(\gamma)(t\vee\epsilon)^{\bar\delta}.
\end{align*}
In the last line we used $\bar\delta\le 1/6<1/4\le \alpha$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\paragraph{Proof of Lemma \ref{thetabnd}} Fix $0<\delta_0\le \bar\delta$, $t\in(0,1]$ and assume $s_i,s\le t$. By \eqref{Umod} and Lemma~\ref{KinU}, on $\{\rho_i>s\}$ we have
\[K^{i,U}_{s_i+s}(1)=K^{i,U}(\Gamma_i^U(s))\le \sum_j K^{j,V}(\Gamma_i^U(s)),\]
where \eqref{eq:2.2} is used in the last inequality. Next use $S(K^{j,V})\subset G(\bar V^j)$ (by Lemma~\ref{KinU}) and $S(K^{j,V})\subset [t_j,\infty)\times{\mathbb{R}}$ to conclude that on
$$\{\rho_i>s\}\cap\Bigl\{\Bigl(\cup_{t_j\le s_i}G(\bar V^j)\Bigr)\cap \Gamma_i^U(t)=\emptyset\Bigr\}\equiv \{\rho_i>s\}\cap D_i(t),$$
we have
\begin{equation}\label{Kimassbnd1}
K^{i,U}_{s_i+s}(1)\le \sum_j 1(s_i<t_j\le s_i+s)K^{j,V}(\Gamma_i^U(s)).
\end{equation}
Another application of \eqref{Umod} and Lemma~\ref{KinU}, this time to $\bar V^j$, shows that for $t_j>s_i$,
\begin{equation}\label{SKjVinc}
S(K^{j,V})\cap([0,s_i+s]\times {\mathbb{R}})\subset \Gamma_j^V(s_i+s-t_j)\hbox{ on }\{\rho_j^V>s\}.
\end{equation}
An elementary calculation shows that
\begin{equation}\label{Gammaint}
\Gamma_i^U(s)\cap\Gamma_j^V(s_i+s-t_j)=\emptyset\hbox{ for }s_i< t_j\le s_i+s\hbox{ and }|y_j-x_i|>2(\sqrt\epsilon+s^{(1/2)-\delta_0}).
\end{equation}
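An easy way to see \eqref{Gammaint}: if $(r,x)$ belonged to both sets then, since $t_j>s_i$ implies $s_i+s-t_j<s$, the triangle inequality would give
\[|y_j-x_i|\le|x-x_i|+|x-y_j|\le \bigl(s^{(1/2)-\delta_0}+\sqrt\epsilon\bigr)+\bigl(s^{(1/2)-\delta_0}+\sqrt\epsilon\bigr)=2(\sqrt\epsilon+s^{(1/2)-\delta_0}),\]
contradicting $|y_j-x_i|>2(\sqrt\epsilon+s^{(1/2)-\delta_0})$.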
If $F_i(t)=\cap_{j:t_j\le s_i+t}\{\rho_j^V>2t\}$, then use \eqref{SKjVinc} and \eqref{Gammaint} in \eqref{Kimassbnd1} to see that on \break
$D_i(t)\cap F_i(t)$, for $s<t\wedge\rho_i$,
\begin{align}\label{KLi}
K^{i,U}_{s_i+s}(1)&\le \sum_j 1(s_i<t_j\le s_i+s,\, |y_j-x_i|\le 2(\sqrt\epsilon+s^{(1/2)-\delta_0}))
K^{j,V}_{s_i+s}(1)\\
\nonumber&\equiv L^i(s).
\end{align}
Note that $L^i$ is a non-decreasing process. If we sum the second equation in \eqref{UVdefn} over $j$ satisfying $s_i<t_j\le s_i+s$, $|y_j-x_i|\le 2(\sqrt \epsilon+s^{(1/2)-\delta_0})$, and denote this summation by $\sum_j^{(i)}$, then
\begin{align}
\nonumber L^i(s)&\le\hbox{$\sum_j^{(i)}$}\bigl[K_{s_i+s}^{j,V}(1)+V_{s_i+s}^j(1)\bigr]\\
\nonumber&=\int\int 1(s_i<t'\le s_i+s,\, |y'-x_i|\le 2(\sqrt\epsilon+s^{(1/2)-\delta_0}))\eta_\epsilon^-(dt',dy')\\
\label{Libnd}&\quad +\hbox{$\sum_j^{(i)}$}\int_0^{s_i+s}\int_{{\mathbb{R}}} V(s',x)^{\gamma-(1/2)}V^j(s',x)^{1/2}W^{j,V}(ds',dx).
\end{align}
Now take means in \eqref{Libnd}, use \eqref{Qiunif}, and use a standard localization argument to handle the $Q_i$ martingale term, to conclude that
\begin{align*}
E&_{Q_i}(L^i(s))\\
&\le E_{Q_i}\Bigl(\int\int 1(s_i<t'\le s_i+s,\, |y'-x_i|\le 2(\sqrt\epsilon+s^{(1/2)-\delta_0}))\eta^-_\epsilon(dt',dy')\Bigr)\\
&=\sum_j 1(s_i<j\epsilon\le s_i+s)\epsilon\int_0^1\int_0^1\int_{y_j-\sqrt\epsilon}^{y_j+\sqrt\epsilon}J((y_j-y')\epsilon^{-1/2})\epsilon^{-1/2}dy'\\
&\phantom{=\sum_j 1(s_i<j\epsilon\le s_i+s)\epsilon\int_0^1}\times 1(|y_j-x_i|\le (3\sqrt\epsilon+2s^{(1/2)-\delta_0}))dy_jdx_i\\
&\le 2(3\sqrt\epsilon+2s^{(1/2)-\delta_0})\Bigl(\sum_j 1(s_i<j\epsilon\le s_i+s)\epsilon\Bigr)\\
&\le 6(\sqrt\epsilon+s^{(1/2)-\delta_0})(s+\epsilon)\le 12(s+\epsilon)^{(3/2)-\delta_0}.
\end{align*}
A routine Borel-Cantelli argument, using the monotonicity of $L^i$ (take $s=2^{-n}$ for $N\le n\le \log_2(1/\epsilon)$), shows that for some $c_0(\delta_0)$, independent of $\epsilon$,
\begin{equation}\label{LiBC}
Q_i(L^i(s)\le (s+\epsilon)^{(3/2)-2\delta_0}\hbox{ for }0\le s\le u)\ge 1-c_0(\delta_0)(u\vee\epsilon)^{\delta_0}\ \forall u\ge 0.
\end{equation}
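One way to organize this argument: by Chebyshev's inequality and the mean bound above, for each $n$ with $\epsilon\le 2^{-n}\le u$,
\[Q_i\Bigl(L^i(2^{-n})>2^{-3/2}(2^{-n}+\epsilon)^{(3/2)-2\delta_0}\Bigr)\le \frac{12\,(2^{-n}+\epsilon)^{(3/2)-\delta_0}}{2^{-3/2}(2^{-n}+\epsilon)^{(3/2)-2\delta_0}}\le 34\,(2^{-n}+\epsilon)^{\delta_0}.\]
Summing this geometric-type series over such $n$ gives a bound of order $(u\vee\epsilon)^{\delta_0}$, and the monotonicity of $L^i$, together with $(2^{-n}+\epsilon)^{(3/2)-2\delta_0}\le 2^{3/2}(s+\epsilon)^{(3/2)-2\delta_0}$ for $s\in[2^{-n-1},2^{-n}]$, interpolates between dyadic values of $s$.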
Apply \eqref{LiBC} in \eqref{KLi} and conclude
\begin{align}\label{thetaboundI}
Q_i(\theta_i<\rho_i\wedge t)&\le Q_i(K^{i,U}_{s_i+s}(1)>(s+\epsilon)^{(3/2)-2\delta_0}\ \exists s<\rho_i\wedge t)\\
\nonumber&\le Q_i(F_i(t)^c)+Q_i(D_i(t)^c\cap F_i(t))\\
\nonumber&\phantom{\le Q_i(F_i(t)^c)}\ +Q_i(L^i(s)>(s+\epsilon)^{(3/2)-2\delta_0}\,
\exists s<\rho_i\wedge t)\\
\nonumber&\le Q_i\left(\cup_{j\le (2t/\epsilon)\wedge N_\epsilon}\{\rho^V_j\le 2t\}\right)+Q_i(D_i(t)^c\cap\{\underline\rho^V_i>2t\})\\
\nonumber&\phantom{\le Q_i\left(\cup_{j\le (2t/\epsilon)\wedge N_\epsilon}\{\rho^V_j\le 2t\}\right)}\ +c_0(\delta_0)(t\vee\epsilon)^{\delta_0}.
\end{align}
Recall from Section~\ref{intro} that $N_\epsilon=\lfloor\epsilon^{-1}\rfloor$. The second term is at most $c_{\ref{presimass}}(\epsilon\vee t)^{\bar\delta}$ by Lemma~\ref{presimass}, and by Lemma~\ref{rhoVbnd}, if $4t\le 1$ and $\epsilon\le 1/2$, the first term is at most
\[Q_i\left(\cup_{j\le 4tN_\epsilon}\{\rho_j^V\le 2t\}\right)\le 8c_{\ref{rhobnd}}(t\vee\epsilon)t\le 8c_{\ref{rhobnd}}(t\vee\epsilon).\]
If $4t>1$ or $\epsilon>1/2$, the above bound is trivial as $c_{\ref{rhobnd}}\ge 1$.
We conclude from \eqref{thetaboundI} that
\[Q_i(\theta_i<\rho_i\wedge t)\le 8c_{\ref{rhobnd}}(t\vee\epsilon)+c_{\ref{presimass}}(\epsilon\vee t)^{\bar\delta}+c_0(\delta_0)(t\vee\epsilon)^{\delta_0}.\]
The result follows because $\delta_0\le \bar\delta\le 1$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Support properties of the process: Proof of Proposition~\ref{prop:1}} \label{sec:suppcomp}
\setcounter{equation}{0}
In this section we will prove the following result from Section~\ref{sec:lem4.4}:
\begin{prop}
\label{prop:1'}
Let $a>0$, $1>\gamma\ge 1/2$, and $Z$ be a continuous $C^+_{\rm rap}$-valued solution to the following SPDE
\begin{eqnarray}
\frac{\partial Z}{\partial t}&=& \frac{1}{2}\Delta Z + \sigma(Z_s\,, s,\omega)\dot W^1,
\end{eqnarray}
where $\dot W^1$ is a space-time white noise, $\sigma$ is Borel$\times$previsible, and
\[\sigma(y,s,\omega)\geq ay^{\gamma},\;\;\forall s,y, P-{\rm a.s.}\; \omega.\]
Assume also for each $t>0$ we have
\begin{equation}\label{ZL2bound}\sup_{s\le t,x\in{\mathbb{R}}}E(Z(s,x)^2)<\infty.\end{equation}
Let $X$ be a continuous $C^+_{\rm rap}$-valued solution to the following SPDE, perhaps on a different space,
\begin{eqnarray}\label{gammaspde2}
\frac{\partial X}{\partial t}&=& \frac{1}{2}\Delta X+ aX^{\gamma}\dot W,
\end{eqnarray}
with $Z(0,\cdot)=X(0,\cdot)\in C^+_{\rm rap}$.
Let $A$ be a Borel set in ${\mathbb{R}}_+\times {\mathbb{R}}$. Then
\[ P({\rm supp}(Z)\cap A=\emptyset)\geq P_{X_0}({\rm supp}(X)\cap A=\emptyset).\]
\end{prop}
Recall from the discussion at the beginning of Section~\ref{sec:lem4.4} that for each $X_0\in C^+_{\rm rap}$ there is a unique law $P_{X_0}$ on $C({\mathbb{R}}_+,C^+_{\rm rap})$ of the solution to \eqref{gammaspde2}.
\begin{lemma}
\label{lem:1} Let $\gamma\in[1/2,1)$.
For any nonnegative $\phi\in L^1({\mathbb{R}})$ and $t\geq s\geq 0,$ there exists a sequence of $M_F({\mathbb{R}})$-valued processes $\{Y^{n}\}_{n\geq 0}$ such that $Y^n_0(dx)=\phi(x)dx$ and
\begin{eqnarray}
\label{eq:dual1}
E\left[ e^{-\langle \phi , Z_t\rangle}|\mathcal{F}^{Z}_s\right]&\geq &
E\left[ e^{-\langle \phi , X_{t-s}\rangle}|X_0=Z_s\right]\\
\label{eq:dual2}
&=& \lim_{n\rightarrow \infty}E^{Y^n}_{\phi}\left[ e^{-\langle Y^n_{t-s} , Z_s\rangle}\right],
\end{eqnarray}
where $P^{Y^n}_{\phi}$ is the probability law of $Y^n$.
\end{lemma}
\paragraph{Proof} We may assume without loss of generality that $a=1$, as only trivial adjustments are needed to handle the general case $a>0$. First we will prove the lemma for $\gamma>1/2$ and then explain the modifications for the $\gamma=1/2$ case. For $\gamma\in (1/2,1)$,
(\ref{eq:dual2}) follows from Proposition 2.3 of~\cite{myt98w}. To simplify the exposition let us take $s=0$. For $s>0$ the proof goes along the same lines as it depends only on the martingale properties of $Z$.
By the proof of Lemma~3.3 in~\cite{myt98w} we get that
for each $n$ and $k$ there exist a stopping time $\tilde{\gamma}_{k}(t)\le t$ and an $M_F({\mathbb{R}})$-valued process
$Y^n$ such that, for $\eta=\frac{2\gamma(2\gamma-1)}{\Gamma(2-2\gamma)}$, and
\[ g(u,y)=\int_0^u (e^{-\lambda y} -1 +\lambda y) \lambda^{-2\gamma -1} d\lambda,\;\; u,y\geq 0, \]
we have
\begin{eqnarray}
\label{eq:05_08}
\lefteqn{ E\left[ e^{-\langle Y^n_{\tilde{\gamma}_k(t)} , Z_{t-\tilde{\gamma}_k(t)}\rangle}|Y^n_0=\phi\right]
= E_{\phi}\left[ e^{-\langle\phi, Z_t\rangle}\right]}
\\
\nonumber
&&\mbox{}-
\frac{1}{2}E\Bigg[ \int_0^{\tilde{\gamma}_k(t)} e^{-\langle Y^n_{s} , Z_{t-s}\rangle}
\Bigl\{ \eta \int_{{\mathbb{R}}} (Y^n_s(x))^2 g(1/n, Z_{t-s}(x))dx
\\
\nonumber
&&\hspace*{1.5cm}
+ \langle\sigma(Z_{t-s})^2-(Z_{t-s})^{2\gamma}, (Y^n_s)^2\rangle\Bigr\} ds\Bigg]\\
&\leq&
\nonumber
E_{\phi}\left[ e^{-\langle\phi, Z_t\rangle}\right]
\\
\nonumber
&&\mbox{}-
\frac{1}{2}E\left[ \int_0^{\tilde{\gamma}_k(t)} e^{-\langle Y^n_{s} , Z_{t-s}\rangle}
\eta \int_{{\mathbb{R}}} (Y^n_s(x))^2 g(1/n, Z_{t-s}(x))dx ds\right].
\end{eqnarray}
If $k=k_n=\ln(n)$, we can easily get (as in the proof of Lemma 3.4 of~\cite{myt98w}) that
\begin{eqnarray}\label{dualconvgt}
\lefteqn{\mbox{}\hspace*{-2cm} E\left[ \int_0^{\tilde{\gamma}_{k_n}(t)} e^{-\langle Y^n_{s} , Z_{t-s}\rangle}
\eta \int_{{\mathbb{R}}} (Y^n_s(x))^2 g(1/n, Z_{t-s}(x))dx\, ds\right] }
\\
\nonumber
&\leq& C \sup _{x, s\leq t} E\left[ Z_s(x)^2\right]k_n n^{2\gamma -2}\\
\nonumber
&\rightarrow& 0,\;\;{\rm as}\; n\rightarrow \infty.
\end{eqnarray}
Here we used \eqref{ZL2bound} in the last line.
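The rate $n^{2\gamma-2}$ in \eqref{dualconvgt} can be traced to the elementary bound $e^{-\lambda y}-1+\lambda y\le \lambda^2y^2/2$ for $\lambda,y\ge 0$, which yields
\[g(1/n,y)\le \frac{y^2}{2}\int_0^{1/n}\lambda^{1-2\gamma}\,d\lambda=\frac{y^2}{2(2-2\gamma)}\,n^{2\gamma-2},\qquad y\ge 0.\]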
Moreover, as is shown in the proof of Lemma~3.5 of~\cite{myt98w}, we have
\begin{eqnarray*}
P(\tilde{\gamma}_{k_n}(t)<t)&\rightarrow& 0,\;\;{\rm as}\; n\rightarrow \infty,
\end{eqnarray*}
or equivalently,
\begin{eqnarray*}
P(\tilde{\gamma}_{k_n}(t)=t)&\rightarrow& 1,\;\;{\rm as}\; n\rightarrow \infty.
\end{eqnarray*}
Hence we get from \eqref{eq:05_08},\eqref{dualconvgt} and the above,
\begin{eqnarray}
\nonumber
\lim_{n\rightarrow\infty} E\left[ e^{-\langle Y^n_{t} , Z_{0}\rangle}|Y^n_0=\phi\right]
&=& \lim_{n\rightarrow\infty} E\left[ e^{-\langle Y^n_{\tilde{\gamma}_{k_n}(t)} , Z_{t-\tilde{\gamma}_{k_n}(t)}\rangle}|Y^n_0=\phi\right]\\
\nonumber
&\leq& E\left[ e^{-\langle \phi, Z_{t}\rangle}\right],\;\;\forall t\geq 0.
\end{eqnarray}
But by Lemma~3.5 of~\cite{myt98w} we have
\begin{eqnarray}
\label{dualconv2}
\lim_{n\rightarrow\infty} E\left[ e^{-\langle Y^n_{t} , Z_{0}\rangle}|Y^n_0=\phi\right]
&=& E\left[ e^{-\langle \phi, X_{t}\rangle}\right],\;\;\forall t\geq 0,
\end{eqnarray}
and we are done for $\gamma\in (1/2,1)$.
In the case $\gamma=1/2$ the proof is even easier: now $X$ is just a super-Brownian
motion. Take $Y^n=Y$ for all $n$, where $Y$ is a solution to the log-Laplace equation
$$ \frac{\partial Y_t}{\partial t}= \frac{1}{2}\Delta Y_t -\frac{1}{2}(Y_t)^2,$$
so that \eqref{dualconv2} is the standard exponential duality for super-Brownian motion.
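Explicitly, with $Y_0=\phi$ and $X$ a super-Brownian motion started at $X_0$, this duality reads
\[E\left[ e^{-\langle \phi, X_{t}\rangle}\right]=e^{-\langle Y_t, X_0\rangle},\;\;\forall t\geq 0,\]
which gives \eqref{dualconv2} trivially for the constant (and here deterministic) sequence $Y^n=Y$.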
Then~\eqref{eq:05_08} follows with $\tilde{\gamma}_k(t)=t$, and $\eta=0$, and so the result follows
immediately for $\gamma=1/2$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{lemma}
\label{lem:2}
For any $k\geq 1$ and $0\leq t_1<t_2<\ldots < t_k$ and $\phi_1,\ldots,\phi_k\geq 0$,
\begin{eqnarray}
\label{eq:dual3}
E\left[ e^{-\sum_{i=1}^k \langle \phi_i , Z_{t_i}\rangle}\right]&\geq &
E\left[ e^{-\sum_{i=1}^k\langle \phi_i , X_{t_i}\rangle}\right].
\end{eqnarray}
\end{lemma}
\paragraph{Proof} The proof goes by induction. For $k=1$ it follows from the previous lemma. Suppose the inequality holds for $k-1$. Let us check it for $k$.
\begin{eqnarray}
\nonumber
\lefteqn{E\left[ e^{-\sum_{i=1}^k \langle \phi_i , Z_{t_i}\rangle}\right] =
E\left[ e^{-\sum_{i=1}^{k-1}\langle \phi_i , Z_{t_i}\rangle} E\left[ e^{-\langle \phi_k , Z_{t_k}\rangle}
|\mathcal{F}^Z_{t_{k-1}}\right] \right]}\\
\label{eq:2.6}
&\geq& E\left[ e^{-\sum_{i=1}^{k-1}\langle \phi_i , Z_{t_i}\rangle} \lim_{n\rightarrow \infty}
E^{Y^n}_{\phi_k}\left[ e^{-\langle Y^n_{t_k-t_{k-1}} , Z_{t_{k-1}}\rangle} \right] \right]
\\
\nonumber
&=& \lim_{n\rightarrow \infty} E^{Y^n}_{\phi_k}\times E^{Z}\left[
e^{-\sum_{i=1}^{k-2}\langle \phi_i , Z_{t_i}\rangle -\langle \phi_{k-1}+ Y^n_{t_k-t_{k-1}} , Z_{t_{k-1}}\rangle}
\right]\\
\nonumber
&\geq& \lim_{n\rightarrow \infty} E^{Y^n}_{\phi_k}\times E^{X}\left[
e^{-\sum_{i=1}^{k-2}\langle \phi_i , X_{t_i}\rangle -\langle \phi_{k-1}+ Y^n_{t_k-t_{k-1}} , X_{t_{k-1}}\rangle}
\right],
\end{eqnarray}
where the inequality in~(\ref{eq:2.6}) follows by Lemma~\ref{lem:1}, and the last inequality follows by the induction
hypothesis. Now, for $\gamma\in (1/2,1)$, we use conditioning and Proposition~2.3 in~\cite{myt98w}
to get
\begin{eqnarray}
\nonumber
\lefteqn{\lim_{n\rightarrow \infty} E^{Y^n}_{\phi_k}\times E^{X}\left[
e^{-\sum_{i=1}^{k-2}\langle \phi_i , X_{t_i}\rangle -\langle \phi_{k-1}+ Y^n_{t_k-t_{k-1}} , X_{t_{k-1}}\rangle}
\right]}
\\
\nonumber
&=& E\left[ e^{-\sum_{i=1}^{k-1}\langle \phi_i , X_{t_i}\rangle} \lim_{n\rightarrow \infty}
E^{Y^n}_{\phi_k}\left[ e^{-\langle Y^n_{t_k-t_{k-1}} , X_{t_{k-1}}\rangle} \right] \right]\\
\label{eq:06_08}
&=& E\left[ e^{-\sum_{i=1}^{k}\langle \phi_i , X_{t_i}\rangle}\right],
\end{eqnarray}
and we are done for $\gamma\in(1/2,1)$. For $\gamma=1/2$, \eqref{eq:06_08} follows immediately again by conditioning,
and the fact that $Y=Y^n$ is a solution to the log-Laplace equation for super-Brownian motion.\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{lemma}
\label{lem:3}
For any non-negative and Borel measurable function $\psi$ on \mbox{${\mathbb{R}}_+\times {\mathbb{R}}$}
\begin{eqnarray}
\label{eq:dual4}
E\left[ e^{-\int_0^t\int_{{\mathbb{R}}} \psi(s,x) Z(s,x)dxds}\right]\geq
E\left[ e^{-\int_0^t\int_{{\mathbb{R}}} \psi(s,x) X(s,x)dxds}\right],\;\forall t\geq 0.
\end{eqnarray}
\end{lemma}
Before starting the proof, we recall the following definition.
\begin{definition}
We say that a sequence $\psi_n(x)$ of functions converges bounded-pointwise to
$\psi(x)$ provided $\lim_{n\to\infty}\psi_n(x)=\psi(x)$ for all $x$, and there
exists a constant $K<\infty$ such that $\sup_{n,x}|\psi_n(x)|\leq K$.
\end{definition}
\paragraph{Proof of Lemma~\ref{lem:3}}
First suppose that $\psi\in C_+({\mathbb{R}}_+\times {\mathbb{R}})$ is bounded.
Then let us choose times $0\leq t^n_1<\ldots<t^n_{k_n}\leq t$ and an approximating sequence of bounded functions
$\phi^n_1\,,\ldots, \phi^n_{k_n}\in C_+({\mathbb{R}})$ such that
\[ \sum_{i=1}^{k_n} \langle \phi^n_i, f_{t^n_i}\rangle \rightarrow \int_0^t \int_{{\mathbb{R}}}
\psi(s,x) f(s,x) \,dsdx,\;\forall t\geq 0,\]
for any $f\in D({\mathbb{R}}_+\,, C_+({\mathbb{R}}))$.
In this way for bounded $\psi\in C_+({\mathbb{R}}_+\times {\mathbb{R}})$ the result follows
immediately from Lemma~\ref{lem:2}. Now pass to the bounded-pointwise closure
of this class of $\psi$'s, that is the smallest class containing the above
continuous $\psi$'s which is closed under bounded-pointwise limits. Finally
take monotone increasing limits to complete the proof. \hfill\quad \vrule height7.5pt width4.17pt depth0pt
\paragraph{Proof of Proposition~\ref{prop:1'}}
Take
\[ \psi_n(s,x)= n 1_{A}(s,x). \]
Then by Lemma~\ref{lem:3} we have
\begin{eqnarray*}
E\left[ e^{-n Z(A)}\right]&\geq &
E\left[ e^{-n X(A)}\right],
\end{eqnarray*}
where $Z(A)\equiv \int_A Z(s,x)dxds$ and $X(A)\equiv \int_A X(s,x)dxds$.
Take $n\rightarrow\infty$ on both sides to get
\begin{eqnarray}
\label{eq:2.7}
P(Z(A)=0)\geq P(X(A)=0).
\end{eqnarray}
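Here the passage to the limit uses $e^{-nZ(A)}\downarrow \mathbf{1}(Z(A)=0)$ as $n\rightarrow\infty$ (and similarly for $X$), so that by bounded convergence
\[\lim_{n\rightarrow\infty}E\left[ e^{-nZ(A)}\right]=P(Z(A)=0),\qquad \lim_{n\rightarrow\infty}E\left[ e^{-nX(A)}\right]=P(X(A)=0).\]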
The required result follows immediately for $A$ open because then
\[\{\text{supp}(Z)\cap A=\emptyset\}=\{Z(A)=0\}.\]
It then follows for compact $A$ because
\[\{\text{supp}(X)\cap A=\emptyset\}=\cup_{n}\{\text{supp}(X)\cap A^{1/n}=\emptyset\},\]
where $A^{1/n}$ is the open set of points within distance $1/n$ of $A$, and similarly for $Z$.
The general result now follows by
the inner regularity of the Choquet capacity $A\to P({\rm supp}(Z)\cap A\neq \emptyset)$ (see p. 39 of \cite{mey66}).
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\section{Proof of Theorem~\ref{thm:1.1}}\label{sec:constr}
\setcounter{equation}{0}
Let us fix $\epsilon\in (0,1]$. For this $\epsilon$ we construct the sequence of
processes mentioned in Theorem \ref{thm:1.1}, approximating them by a system of processes with ``soft-killing''.
Fix $n>0$ and define the sequence of processes $(U^{i,n}, V^{i,n}, \widetilde{U}^{i,n},
\widetilde{V}^{i,n})$ as follows. For any $\phi\in C^2_{b}({\mathbb{R}})$, let
\begin{eqnarray}
\nonumber
\left\{
\begin{array}{rcl}
U^{i,n}_t(\phi) &=& \langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)\\
&& +
\int_0^t \int_{{\mathbb{R}}}U^n(s,x)^{\gamma-1/2} U^{i,n}(s,x)^{1/2} \phi(x) W^{i,n,U}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t U^{i,n}_s(\frac{1}{2}\Delta \phi)\, ds - n
\int_0^t\langle U^{i,n}_s V^{n}_s,\phi\rangle\,ds, \;\;t\geq 0\,,
i\in \NN_{\epsilon},
\\
&&\mbox{}\\
V^{j,n}_t(\phi) &=& \langle J^{y_j},\phi\rangle\mathbf{1}(t\geq t_j) \\
&& + \int_0^t\int_{{\mathbb{R}}} V^n(s,x)^{\gamma-1/2} V^{j,n}(s,x)^{1/2} \phi(x) W^{j,n,V}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t V^{j,n}_s(\frac{1}{2}\Delta \phi)\, ds -
n\int_0^t \langle V^{j,n}_s U^{n}_s,\phi\rangle\,ds, \;\;t\geq 0\,, j\in\NN_{\epsilon}, \\
\nonumber\widetilde{U}^{i,n}_t(\phi) &=& \int_0^t \int_{{\mathbb{R}}}\left[ \left(\widetilde{U}^n(s,x)+U^n(s,x)\right)^{2\gamma} - U^n(s,x)^{2\gamma}\right]^{1/2}\\
\nonumber
&&\mbox{}\hspace*{1cm}\times
\sqrt{\frac{\widetilde{U}^{i,n}(s,x)}{\widetilde{U}^n(s,x)}} \phi(x) \widetilde{W}^{i,n,U}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t \widetilde{U}^{i,n}_s(\frac{1}{2}\Delta \phi)\, ds + n
\int_0^t\langle U^{i,n}_s V^{n}_s,\phi\rangle\,ds, \;\;t\geq 0\,, i\in \NN_{\epsilon},
\\
\widetilde{V}^{j,n}_t(\phi) &=& \int_0^t\int_{{\mathbb{R}}} \left[ \left(\widetilde{V}^n(s,x)+V^n(s,x)\right)^{2\gamma} - V^n(s,x)^{2\gamma}\right]^{1/2}\\
\nonumber
&&\mbox{}\hspace*{1cm}\times
\sqrt{\frac{\widetilde{V}^{j,n}(s,x)}{\widetilde{V}^n(s,x)}} \phi(x) \widetilde{W}^{j,n,V}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t \widetilde{V}^{j,n}_s(\frac{1}{2}\Delta \phi)\, ds + n
\int_0^t\langle V^{j,n}_s U^{n}_s,\phi\rangle\,ds, \;\;t\geq 0\,, j\in \NN_{\epsilon},
\label{tUVndefn}
\end{array}
\right.
&&
\label{eq:4.1}
\\
&&\mbox{}
\end{eqnarray}
where
\begin{eqnarray*}
U^{n}_t &=& \sum_i U^{i,n}_t\,, \hspace*{2cm}
V^{n}_t = \sum_j V^{j,n}_t\,,\\
\widetilde{U}^{n}_t &=& \sum_i \widetilde{U}^{i,n}_t\,, \hspace*{2cm}
\widetilde{V}^{n}_t =\sum_j \widetilde{V}^{j,n}_t\,,
\end{eqnarray*}
and $\{ W^{i,n,U}, W^{j,n,V},\widetilde{W}^{k,n,U},\widetilde{W}^{l,n,V}\}_{i,j,k,l\in\NN_{\epsilon}}$ is a collection of mutually independent white noises.
For $\phi\in C^2_{b}({\mathbb{R}})$, let $\{M_t^{i,n,U}(\phi)\}_{t\geq 0}, \{M_t^{j,n,V}(\phi)\}_{t\geq 0},$
$\{\widetilde{M}_t^{i,n,U}(\phi)\}_{t\geq 0},
\{\widetilde{M}_t^{j,n,V}(\phi)\}_{t\geq 0}$ denote the stochastic integrals on the right hand side of the
equations for $U^{i,n}, V^{j,n}, \widetilde{U}^{i,n}, \widetilde{V}^{j,n}$, respectively, in~(\ref{tUVndefn}). For each $n$, a solution taking values in
$(C^+_{\rm rap})^{4N_{\epsilon}}$ to the above system of
equations can be constructed via standard steps by extending the procedure in~\cite{shi94}. We will comment further on this point below.
We also define the following nondecreasing $M_F({\mathbb{R}})$-valued processes
\begin{eqnarray*}
K_t^{i,n,U}(\phi)&=& n
\int_0^t\langle U^{i,n}_s V^{n}_s,\phi\rangle\,ds,\; t\geq 0, \phi\in C_{b}({\mathbb{R}}), \\
K_t^{j,n,V}(\phi)&=& n
\int_0^t\langle V^{j,n}_s U^{n}_s,\phi\rangle\,ds,\; t\geq 0, \phi\in C_{b}({\mathbb{R}}).
\end{eqnarray*}
Clearly,
$$ \sum_{i\in \NN_{\epsilon}} K_t^{i,n,U}= \sum_{j\in \NN_{\epsilon}} K_t^{j,n,V}=: K_t^n,$$
and $(U^n,V^n,\widetilde{U}^n,\widetilde{V}^n)$ satisfies the following system of equations for $\phi\in C_b^2({\mathbb{R}})$:
\begin{eqnarray}
\nonumber
\left\{
\begin{array}{rcl}
U^{n}_t(\phi) &=& \sum_{i\in \NN_{\epsilon}}\langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)\\
&& +
\int_0^t \int_{{\mathbb{R}}}U^n(s,x)^{\gamma} \phi(x) W^{n,U}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t U^{n}_s(\frac{1}{2}\Delta \phi)\, ds - K^n_t(\phi), \;\;t\geq 0,
\\
&&\mbox{}\\
V^{n}_t(\phi) &=& \sum_{j\in \NN_{\epsilon}}\langle J^{y_j},\phi\rangle\mathbf{1}(t\geq t_j) \\
&& + \int_0^t\int_{{\mathbb{R}}} V^n(s,x)^{\gamma} \phi(x) W^{n,V}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t V^{n}_s(\frac{1}{2}\Delta \phi)\, ds - K^n_t(\phi), \;\;t\geq 0,
\\
\nonumber\widetilde{U}^{n}_t(\phi) &=& \int_0^t \int_{{\mathbb{R}}}\left[ \left(\widetilde{U}^n(s,x)+U^n(s,x)\right)^{2\gamma} - U^n(s,x)^{2\gamma}\right]^{1/2}\\
\nonumber
&&\mbox{}\hspace*{1cm}\times\phi(x) \widetilde{W}^{n,U}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t \widetilde{U}^{n}_s(\frac{1}{2}\Delta \phi)\, ds + K^{n}_t(\phi), \;\;t\geq 0,\\
\widetilde{V}^{n}_t(\phi) &=& \int_0^t\int_{{\mathbb{R}}} \left[ \left(\widetilde{V}^n(s,x)+V^n(s,x)\right)^{2\gamma} - V^n(s,x)^{2\gamma}\right]^{1/2}\\
\nonumber
&&\mbox{}\hspace*{1cm}\times \phi(x) \widetilde{W}^{n,V}(ds,dx) \\
\nonumber
&& \mbox{}+ \int_0^t \widetilde{V}^{n}_s(\frac{1}{2}\Delta \phi)\, ds + K^{n}_t(\phi), \;\;t\geq 0,
\end{array}
\right.
&&
\label{eq:4.3}
\\
&&\mbox{}
\end{eqnarray}
with
$W^{n,U}, W^{n,V}, \widetilde{W}^{n,U}, \widetilde{W}^{n,V}$ being a collection of independent space-time white noises.
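For example, the noise $W^{n,U}$ in the first equation arises by combining the noises $W^{i,n,U}$: by their mutual independence, the continuous local martingale $\sum_i M^{i,n,U}_t(\phi)$ has quadratic variation
\[\int_0^t\int_{{\mathbb{R}}}U^n(s,x)^{2\gamma-1}\Bigl(\sum_i U^{i,n}(s,x)\Bigr)\phi(x)^2\,dx\,ds=\int_0^t\int_{{\mathbb{R}}}U^n(s,x)^{2\gamma}\phi(x)^2\,dx\,ds,\]
so that, by a standard martingale measure argument, it can be represented as $\int_0^t\int_{{\mathbb{R}}}U^n(s,x)^{\gamma}\phi(x)W^{n,U}(ds,dx)$ for some space-time white noise $W^{n,U}$.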
For $i\in \NN_{\epsilon}$,
define $\bar{U}^{i,n}_t\equiv U^{i,n}_t+\widetilde{U}^{i,n}_t\,, \bar{V}^{i,n}_t
\equiv V^{i,n}_t+\widetilde{V}^{i,n}_t\,, t\in[0,T],$ and
\begin{eqnarray}
\label{11_08_1}
\bar{U}^{n}_t\equiv \sum_i \bar{U}^{i,n}_t\,,\;\;\;
\bar{V}^{n}_t\equiv \sum_j \bar{V}^{j,n}_t\,,\; t\in [0,T].
\end{eqnarray}
Since $\{ W^{i,n, U}, W^{j,n,V},\widetilde{W}^{k,n, U}, \widetilde{W}^{l,n,V}, i,j,k,l\in \NN_{\epsilon}\}$ is a collection
of independent white noises, and by stochastic calculus, one can easily show
that the
processes $\bar{U}^{n}, \bar{V}^{n}$ satisfy equations~(\ref{eq:2.8}) and so by
\cite{myt98w} they have laws on $D([0,T],C^+_{\rm rap})$ which are
independent of $n$.
Here we comment further on the construction of a solution $(U^{i,n}, V^{i,n}, \widetilde{U}^{i,n}, \widetilde{V}^{i,n})_{i\in {\mathbb{N}}_{\epsilon}}$ to~(\ref{tUVndefn}). As we have mentioned above, one can follow the procedure indicated in the proof of Theorem~2.6 in~\cite{shi94}
by extending it to systems of equations.
In the proof, one constructs an approximating sequence of processes
$\{(U^{i,n,k}, V^{i,n,k}, \widetilde{U}^{i,n,k}, \widetilde{V}^{i,n,k})_{i\in {\mathbb{N}}_{\epsilon}}\}_{k\geq 1}$
with
globally Lipschitz coefficients, and shows that this sequence is tight and each limit point satisfies~(\ref{tUVndefn}).
The only subtle point is that the drift coefficients $U^{i,n}(\cdot)V^n(\cdot)$ and
$V^{i,n}(\cdot)U^n(\cdot)$
in the system of limiting equations~(\ref{tUVndefn})
do not satisfy a linear growth condition. However, note that, by~\eqref{11_08_1}, any solution to~(\ref{tUVndefn})
satisfies the following bounds
\begin{eqnarray}
\label{11_08_2}
U^{i,n}, \widetilde{U}^{i,n},U^{n}, \widetilde{U}^n\leq \bar{U}^n,\;\;\;\;\;
V^{i,n}, \widetilde{V}^{i,n},V^{n}, \widetilde{V}^n\leq \bar{V}^n,
\end{eqnarray}
where $\bar{U}^n$ and $\bar{V}^n$ have
good moment bounds by Lemma~\ref{pmom}. Hence, it is possible to construct
$\{(U^{i,n,k}, V^{i,n,k}, \widetilde{U}^{i,n,k}, \widetilde{V}^{i,n,k})_{i\in {\mathbb{N}}_{\epsilon}}\}_{k\geq 1}$
so that
the bound in Lemma~\ref{pmom} holds
uniformly in $k$: for any $q,T>0$, there exists $C_{q,T}$ such that
\begin{align*}
&\sup_{k\ge 1}\sup_{i\in{\mathbb{N}}_{\epsilon}}E\left[\sup_{s\le T,x\in{\mathbb{R}}}(U^{i,n,k}(s,x)^{q}
+ \widetilde{U}^{i,n,k}(s,x)^{q}
+V^{i,n,k}(s,x)^{q}
+\widetilde{V}^{i,n,k}(s,x)^{q})\right]
\\
&\quad\le C_{q,T}\,.
\end{align*}
With this uniform bound in hand, it is not difficult to check that the moment bound~(6.5) from~\cite{shi94} (which is
in fact \eqref{lem-X-replace-by-N} with $\lambda=0$)
holds for $\{U^{i,n,k}\}_{k\geq 1}, \{V^{i,n,k}\}_{k\geq 1},$ $\{ \widetilde{U}^{i,n,k}\}_{k\geq 1},$ $ \{\widetilde{V}^{i,n,k}\}_{k\geq 1}$,
for all $i\in {\mathbb{N}}_{\epsilon}$, on time intervals of the form $[\frac{(i-1)\epsilon}{2}, \frac{i\epsilon}{2}), i\in {\mathbb{N}}_{\epsilon}$, and $[N_{\epsilon}\epsilon, T]$. This, in turn,
by Lemma~6.3 in~\cite{shi94} implies the tightness of the corresponding
processes in $D^{\epsilon}({\mathbb{R}}_+, C^+_{\rm tem})$. Here
$$C_{\rm tem}:=\{f \in C({\mathbb{R}}): ||f||_{\lambda}
< \infty \text{ for any } \lambda <0\},$$ endowed with the topology induced
by the norms $||\cdot ||_{\lambda}$ for $\lambda<0$, and $C^+_{\rm tem}$ is the set of non-negative functions in $C_{\rm tem}$.
Finally, since the limiting processes $U^{i,n}$, $\widetilde{U}^{i,n}, i\in {\mathbb{N}}_{\epsilon},$ (respectively $V^{i,n}$, $\widetilde{V}^{i,n}, i\in {\mathbb{N}}_{\epsilon}$) are dominated by
$\bar{U} $ (respectively $\bar{V}$) in $D^{\epsilon}({\mathbb{R}}_+, C^+_{\rm rap})$,
it follows that $U^{i,n}$, $\widetilde{U}^{i,n}$, $V^{i,n}$, $\widetilde{V}^{i,n}, i \in{\mathbb{N}}_{\epsilon},$
are in $D^{\epsilon}({\mathbb{R}}_+, C^+_{\rm rap})$ as well. This, together with the domination~\eqref{11_08_2} and Lemma~\ref{qmom}, allows us to take functions in $C^2_{\rm tem}$ as test functions in \eqref{tUVndefn}, however for our purposes it will be enough to use functions from $C_b^2({\mathbb{R}})$ as test functions.
\medskip
Fix an arbitrary $T>1$.
\begin{remark}
\label{rem:04}
In what follows we are going to show the tightness of the sequence of processes constructed above on the
time interval $[0,T]$. We will prove that limit points have the properties stated in Theorem~\ref{thm:1.1} on
$[0,T]$.
Since $T>1$ is arbitrary, this argument immediately yields the claim of the theorem on the time interval $[0,\infty)$.
\end{remark}
Define $E=[0,T]\times {\mathbb{R}}$. We identify a finite measure $K$ on $E$ with the non-decreasing path in $D([0,T],M_F({\mathbb{R}}))$ given by $t\to K_t(\cdot)=K([0,t]\times \{\cdot\})$.
\begin{prop}
\label{prop:4}
$\{(U^{i,n},\widetilde{U}^{i,n},V^{i,n}, \widetilde{V}^{i,n}, K^{i,n,U},K^{i,n,V})_{i\in\NN_{\epsilon}}\}_{n\geq 1}$
is tight in \\ $\left(C([0,T]\setminus \mathcal{G}_{\epsilon},M_F({\mathbb{R}}))^4\times M_F(E)^2\right)^{N_{\epsilon}}$.
Moreover, any limit point \\ $(U^{i},\widetilde{U}^{i},V^{i},\widetilde{V}^{i}, K^{i,U}, K^{i,V})_{i\in \NN_{\epsilon}}$
has the following properties:
\begin{itemize}
\item[(1)] $U^i, \widetilde{U}^i, V^i, \widetilde{V}^i\in C([0,T]\setminus\mathcal{G}_{\epsilon}, C^+_{\rm rap})\cap D^{\epsilon}([0,T], L^1({\mathbb{R}})),
\;\forall i\in \NN_{\epsilon}$;
\item[(2)] $K^{i,U}, K^{i,V}\in D^{\epsilon}([0,T], M_F({\mathbb{R}})),\;\forall i\in \NN_{\epsilon}$;
\item[(3)] $(U^i, \widetilde{U}^i, V^i, \widetilde{V}^i,K^{i,U},K^{i,V})_{i\in \NN_{\epsilon}}$ satisfy \eqref{UVdefn}-\eqref{tUVdefn}.
\end{itemize}
\end{prop}
The above proposition is the key to proving Theorem~\ref{thm:1.1}.
The proposition will be proved via a series of lemmas.
\begin{lemma}
\label{lem:01_7_1}
$\{K^n\}_{n\geq 1}$ is tight in $M_F(E)$ and $\{K^n_T(1)\}_{n\ge 1}$ is $L^1(dP)$-bounded.
\end{lemma}
\paragraph{Proof}
First note that by rewriting the equation~(\ref{eq:2.8}) for $\bar{U}^n$ in the mild form (see \eqref{eq:mild1}) one can
easily get that for any $\phi\in C_{b}^+({\mathbb{R}})$,
\begin{eqnarray}
\nonumber
E\left[\bar{U}^n_t(\phi)\right]&\leq& E\left[
\sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}}\int_{{\mathbb{R}}} p_{t-s_i}(z-y) J^{x_i}_{\epsilon}(y)\phi(z)\,dy\,dz\right]
\\
\nonumber
&=& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_0^1\int_{{\mathbb{R}}}\int_{{\mathbb{R}}} p_{t-s_i}(z-y) J^{x}_{\epsilon}(y)\phi(z)\,dy\,dz\,dx\\
\nonumber
&=& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}}\int_{{\mathbb{R}}}\int_{0}^1 p_{t-s_i}(z-y) J^{x}_{\epsilon}(y)\phi(z)\,dx\,dz\,dy
\\
\nonumber
&\leq& \epsilon \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}}S_{t-s_i}\phi(y) 1(|y|\leq 2)
\,dy
\\
\label{11_01}
&\leq& \sup_{s\leq t}\int_{{\mathbb{R}}} S_{s}\phi(y) 1(|y|\leq 2)\,dy,
\end{eqnarray}
where $\{S_t\}_{t\geq 0}$ is the Brownian semigroup corresponding to the transition density function $\{ p_t(x), t\geq 0,
x\in {\mathbb{R}}\}$.
For any nonnegative $\phi\in {C^2_{b}}({\mathbb{R}})$ we have from \eqref{eq:4.3},
\begin{eqnarray}
\nonumber
E\left[K^n_t(\phi)\right] &\leq& E\left[\sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}} \int_{{\mathbb{R}}} J^{x_i}_{\epsilon}(y)
\phi(y)\,dy\right] + E\left[ \int_0^t U^n_s\left(\left|\frac{\Delta \phi}{2}\right|\right)\,ds\right]
\\
\nonumber
&\leq& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}} \int_0^1\int_{{\mathbb{R}}} J^{x}_{\epsilon}(y)
\phi(y)\,dy\,dx + E\left[ \int_0^t \bar{U}^n_s\left(\left|\frac{\Delta \phi}{2}\right|\right)\,ds\right]\\
\nonumber
&\leq& \int_{{\mathbb{R}}} 1(|y|\leq 2)
\phi(y)\,dy \\
&&\mbox{}+ \int_0^t \sup_{r\leq s}\int_{{\mathbb{R}}} S_{r}\left(\left|\frac{\Delta \phi}{2}\right|\right)(y) 1(|y|\leq
2)\,dy\,ds.
\label{eq:06_7_1}
\end{eqnarray}
Now by taking $\phi=1$ we get that the sequence of the total masses $\{K^n_T(1)\}_{n\geq 1}$ is
bounded in $L^1(dP)$. Moreover for any $\delta>0$ we can choose
$R>3$ sufficiently large and $\phi$ such that $\phi(z)=0$ for $|z|\leq R-1, \phi(z)=1$ for $|z|\ge R$ with the
property that
$$ S_{t}\left(\left|\frac{\Delta \phi}{2}\right|\right)(y)\leq \delta,\;\; \forall t\in[0,T], y\in[-2, 2].
$$
This shows that
$$
E\left[ \int_{|z|\geq R} K^n_T(dz)\right]\leq E\left[K^n_T(\phi)\right] \leq 4T\delta,\;\;\forall n\geq 1,
$$
by~(\ref{eq:06_7_1}), and our choice of $\phi$ and $R$. This, in turn, together with the
$L^1(dP)$-boundedness of
total masses $\{K^n_T(1)\}_{n\geq 1}$, implies tightness of $\{K^n\}_{n\geq 1}$ in $M_{F}(E)$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{cor}
\label{cor:15_1}
$\{K^{i,n,U}\}_{n\geq 1}$ and $\{K^{i,n,V}\}_{n\geq 1}$ are tight in $M_F(E)$ for any $i\in \NN_\epsilon$.
\end{cor}
\paragraph{Proof} The assertion follows immediately from the bound
$$ K^{i,n,U}, K^{i,n,V} \leq K^n\,,\; \forall n\geq 1, i\in \NN_{\epsilon}.$$
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
Before we start dealing with tightness of $\{(U^n,V^n,\widetilde{U}^n,\widetilde{V}^n,K^n)\}_{n\geq 1}$ we need to introduce a lemma
that will be frequently used.
\begin{lemma}
\label{lem:30_6_1}
\begin{itemize}
\item[{\bf (a)}] Let $\{W^n\}_{n\geq 1}$ be a sequence of $\{\mathcal{F}^n_t\}_{t\geq 0}$-adapted space-time white noises, and
$\{b^n(t,x,\omega)\}_{n\geq 1}$ be a sequence of
$\{\mathcal{F}^n_t\}_{t\geq 0}$-predictable$\times$Borel measurable processes such that
\begin{eqnarray}
\sup_{n\geq 1} \sup_{x\in {\mathbb{R}}}\sup_{t\in [0,T]} E\left[ |b^{n}(t,x,\cdot)|^p\right]<\infty,\;\text{for some $p>4$}.
\end{eqnarray}
Then the sequence of processes $\{X^n(t,x), \; t\in [0,T], x\in {\mathbb{R}}\}_{n\geq 1}$ defined by
$$ X^n(t,x) = \int_0^t \int_{{\mathbb{R}}} p_{t-s}(x-y) b^n(s,y,\cdot) W^n(ds,dy),\;\; t\in [0,T], x\in {\mathbb{R}},$$
have versions which are tight in $C([0,T], C_{\rm tem})$.
\item[{\bf (b)}] Let $W$ be an $\{\mathcal{F}_t\}_{t\geq 0}$-adapted space-time white noise, and
$b(t,x,\omega)$ be an $\{\mathcal{F}_t\}_{t\geq 0}$-predictable$\times$Borel measurable process such that
\begin{eqnarray}
\sup_{x\in {\mathbb{R}}}\sup_{t\in [0,T]} E\left[ |b(t,x,\cdot)|^p\right]<\infty,\;\text{for some $p>4$}.
\end{eqnarray}
Then the process $X$ defined by
$$ X(t,x) = \int_0^t \int_{{\mathbb{R}}} p_{t-s}(x-y) b(s,y,\cdot) W(ds,dy),\;\; t\in [0,T], x\in {\mathbb{R}},$$
has a version in $C([0,T], C_{\rm tem})$. If moreover, $|X(t,x)|\leq |\widetilde X(t,x)|$ for some $\widetilde X\in
D([0,T],C_{\rm rap})$ then $X\in C([0,T], C_{\rm rap})$.
\end{itemize}
\end{lemma}
\paragraph{Proof}
{\bf (a)} This assertion follows immediately from the estimates on increments of a stochastic integral (see e.g. step 2
in the proof of Theorem~2.2 of~\cite{shi94}, p. 432) and then an application of Lemmas~6.2 and 6.3(ii) from~\cite{shi94}.
\medskip
\noindent{\bf (b)} This again follows by using the estimates on increments of a stochastic integral (see again step 2
in the proof of Theorem~2.2 of~\cite{shi94}, p. 432) and then applying Lemmas~6.2 and 6.3(i) in~\cite{shi94},
to get that the process is in $C([0,T], C_{\rm tem})$. The last assertion is obvious.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{lemma}
\label{lem:01_7_2}
Let $$w^{n}= U^n-V^n,\; n\geq 1.$$
Then $\{w^n\}_{n\geq 1}$ is tight in $D^{\epsilon}([0,T], C_{\rm rap})$.
\end{lemma}
\paragraph{Proof}
By writing the equation for $w^n$ in mild form we get
\begin{eqnarray*}
w^n(t,x)&=& \int_0^t \int_{{\mathbb{R}}} p_{t-s}(x-y)(\eta^+_{\epsilon}(ds,dy)-\eta^-_{\epsilon}(ds,dy)) \\
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U^n(s,y)^{\gamma}
W^{n,U}(ds,dy)
\\
\nonumber
&&\mbox{}-
\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) V^n(s,y)^{\gamma}
W^{n,V}(ds,dy), \;\;t\geq 0, x\in {\mathbb{R}}. \nonumber
\end{eqnarray*}
Clearly, by the definition of $\eta^+_{\epsilon}, \eta^-_{\epsilon}$, the first term, $I(t,x)$, being independent of $n$,
is tight in $D^{\epsilon}([0,T], C_{\rm rap})$.
Using the domination
\begin{eqnarray}
\label{eq:01_7_1}
U^n\leq \bar{U}^n\in D([0,T],C^+_{\rm rap}),\;\;
V^n \leq \bar{V}^n\in D([0,T],C^+_{\rm rap})\,,\;\;
\end{eqnarray}
and Lemmas~\ref{pmom} and \ref{lem:30_6_1}(a), the stochastic integral terms are tight in $C([0,T], C_{\rm tem})$. If $S^n(t,x)$ is the difference of the above stochastic integral terms then
the domination
$$|S^n(t,x)|\le \bar U^n(t,x)+\bar V^n(t,x)+|I(t,x)|\in D^\epsilon([0,T],C^+_{\rm rap}),$$
and the definition of the norms on $C_{\rm tem}$ and $C_{\rm rap}$ shows that
$\{S^n\}$ is tight in $C([0,T],C_{\rm rap})$.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
Now we are ready to deal with the tightness of $\{(U^n,V^n,\widetilde{U}^n,\widetilde{V}^n,K^n)\}_{n\geq 1}$.
Let $L^p(E)$ denote the usual $L^p$ space with respect to Lebesgue measure on $E$.
\begin{lemma}
\label{lem:15_1}
\begin{itemize}
\item[{\bf (a)}]
$\{(U^n,V^n,\widetilde{U}^n,\widetilde{V}^n,K^n)\}_{n\geq 1}$ is tight in
$L^p(E)^4\times M_F(E)$ for any \break
$p\geq 1$.
Moreover any limit point has a version $$(U,V,\widetilde{U},\widetilde{V},K)\in (D^{\epsilon}([0,T], C^+_{\rm rap}))^4\times
D^{\epsilon}([0,T], M_F({\mathbb{R}})).$$
\item[{\bf (b)}]
$$ t\mapsto \int_0^{t}\int_{{\mathbb{R}}} p_{t-s}(\cdot-y)K(ds,dy)\in D^{\epsilon}([0,T], C_{\rm rap}).$$
\item[{\bf (c)}] $\{K^n\}_{n\geq 1}$ is also tight in $C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$, and any of its limit points
satisfies
$$ {\mathbf \Delta} K_t(1)\leq \epsilon,\; \forall t\in[0,T].$$
\end{itemize}
\end{lemma}
\paragraph{Proof}
{(a)}
We will give the proof just for the tightness of $\{(U^n,V^n,K^n)\}_{n\geq 1}$ and the properties of its limit points,
since the corresponding results for
$\{(\widetilde{U}^n,\widetilde{V}^n)\}_{n\geq 1}$ and its limit points will follow along the same lines.
Recall the domination (\ref{eq:01_7_1}), where the laws of the upper bounds are independent of $n$. By this domination we immediately get that
$$\{(U^{n}(s,x)dxds, V^{n}(s,x)dxds)\}_{n\geq 1}$$ is
tight in $(M_F(E)\times M_F(E))$. Recall also that by Lemma~\ref{lem:01_7_1}, $\{K^n\}_{n\geq 1}$ is tight in
$M_F(E)$.
This, the fact that the laws of $\bar U^n$, $\bar V^n$ are independent of $n$, and Lemma~\ref{lem:01_7_2} allow us to choose a convergent subsequence of $(U^n,V^n,K^n,w^n, \bar U^n,\bar V^n)$ in
$M_F(E)^3\times D([0,T],C_{\rm rap})^3$.
For simplicity of notation, we will again index this subsequence by
$n$. Denote the corresponding limit point by $(U,V,K,w,\bar U,\bar V)$.
Now, for any $\phi\in C_{b}({\mathbb{R}})$, let
\begin{eqnarray*}
M^{n,U}_t(\phi)&\equiv& \int_0^t \int_{{\mathbb{R}}}U^n(s,x)^{\gamma} \phi(x) W^{n,U}(ds,dx),\;\;t\in[0,T],\\
M^{n,V}_t(\phi)&\equiv& \int_0^t \int_{{\mathbb{R}}}V^n(s,x)^{\gamma} \phi(x) W^{n,V}(ds,dx),\;\;t\in[0,T],
\end{eqnarray*}
denote the martingales given by the stochastic integrals in the semimartingale decomposition~(\ref{eq:4.3})
for $U^n_t(\phi)$ and $V^n_t(\phi)$.
For any $\phi\in C_{b}({\mathbb{R}})$, use the Burkholder-Davis-Gundy inequality, and
again the domination~(\ref{eq:01_7_1}),
to get, that for any $p\geq 2, \lambda>0$,
\begin{eqnarray}
\label{eq:01_7_2}
\lefteqn{
E\left[ \left|M^{n,U}_{t}(\phi)-M^{n,U}_{u}(\phi)\right|^p\right]}\\
\nonumber
&\leq& C_p
\sup_{s\leq T, x\in {\mathbb{R}}}e^{\frac{\lambda p}{2} |x|}E\left[ \bar{U}(s,x)^{p\gamma}\right]
\left[ \int_{{\mathbb{R}}} e^{-\lambda|x|}\left|\phi(x) \right|^2\,dx\right]^{p/2}
(t-u)^{p/2},\\
\nonumber
&&
\phantom{\sup_{s\leq T, x\in {\mathbb{R}}}e^{\frac{\lambda p}{2} |x|}E\left[ \bar{U}(s,x)^{p\gamma}\right]\int_{{\mathbb{R}}} e^{-\lambda|x|}\left|\phi(x) \right|^2\,dx}\forall 0\leq u\leq t\leq T.
\end{eqnarray}
This, together with Lemma~\ref{qmom}(b) and Kolmogorov's tightness criterion, implies that
\begin{eqnarray}
\left\{M^{n,U}_{\cdot}(\phi)\right\}_{n\geq 1}\; \mbox{is tight in}\; C([0,T],{\mathbb{R}})
\end{eqnarray}
for any $\phi\in C_{b}({\mathbb{R}})$. Similarly,
\begin{eqnarray}
\left\{M^{n,V}_{\cdot}(\phi)\right\}_{n\geq 1}\; \mbox{is tight in}\; C([0,T],{\mathbb{R}})
\end{eqnarray}
for any $\phi\in C_{b}({\mathbb{R}})$.
Let ${\cal D}$ be a countable subset of
$C^2_{b}({\mathbb{R}})$ which is bounded-pointwise dense in $C_b({\mathbb{R}})$. That is, the smallest class containing ${\cal D}$ and closed under bounded pointwise limits contains $C_b({\mathbb{R}})$. By the above, we can take a further subsequence, which for simplicity we will index again by
$n$, so that
all the sequences of martingales $\left\{M^{n,U}_{\cdot}(\phi)\right\}_{n\geq 1}$,
$\left\{M^{n,V}_{\cdot}(\phi)\right\}_{n\geq 1}$
indexed by functions $\phi$ from ${\cal D}$, converge in $C([0,T],{\mathbb{R}})$.
For $\phi\in {\cal D}$, we will denote the limiting processes
by $M^U_{\cdot}(\phi), M^V_{\cdot}(\phi),$ respectively.
Now let us switch to a probability space where
\begin{eqnarray}\label{asconv}
&(U^n,V^n,K^n,w^n,\bar U^n,\bar V^n)\rightarrow (U,V,K,w,\bar U,\bar V), {\rm in}\; M_F(E)^3\times D([0,T],C_{\rm rap})^3,\\
\nonumber&(M^{n,U}(\phi_1),M^{n,V}(\phi_2))\rightarrow (M^{U}(\phi_1),M^{V}(\phi_2)),{\rm in}\; C([0,T],{\mathbb{R}})^2,
\; \forall \phi_1,\phi_2\in{\cal D},
\end{eqnarray}
${\rm as}\;n\rightarrow\infty,$ a.s.
In our next step, we will verify convergence of $\{(U^n, V^n)\}_{n\geq 1}$ in $L^p(E)^2$, for any $p\geq 1$. First,
by $L^1(dP)$-boundedness of the total mass of $K^n$ (Lemma~\ref{lem:01_7_1}) we have
\begin{eqnarray}
n E\left[ \int_0^T \int_{{\mathbb{R}}} U^n_s(x)V^n_s(x)dxds \right]&=&E\left[K^n_T(1)\right] \leq C,
\end{eqnarray}
uniformly in $n$ for some constant $C$. Therefore we get
\begin{eqnarray}
E\left[ \int_0^T \int_{{\mathbb{R}}} U^n_s(x)V^n_s(x)dxds \right] \rightarrow 0, \;\;{\rm as} \; n\rightarrow \infty,
\end{eqnarray}
and hence
\begin{eqnarray}
\int_0^T \int_{{\mathbb{R}}} (U^n_s(x)\wedge V^n_s(x))^2 dxds \rightarrow 0,
\end{eqnarray}
in $L^1(dP)$.
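Here we used the pointwise inequality $(a\wedge b)^2\le ab$ for $a,b\ge 0$, which gives
\[E\left[ \int_0^T \int_{{\mathbb{R}}} (U^n_s(x)\wedge V^n_s(x))^2\,dxds\right]\le E\left[ \int_0^T \int_{{\mathbb{R}}} U^n_s(x)V^n_s(x)\,dxds\right]\rightarrow 0.\]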
By taking another subsequence if necessary, we may assume
\[ (U^n_s(x)\wedge V^n_s(x)) \rightarrow 0,\;\;{\rm in }\; L^2(E),\; P-{\rm a.s.}\]
Now recall again the domination
$$ U^n \leq \bar{U}^n\to\bar{U}\hbox{ in }D([0,T], C^+_{\rm rap})\ P-{\rm a.s.}$$
which implies that for any $p\geq 1$,
\[ (U^n_s(x)\wedge V^n_s(x)) \rightarrow 0,\;\;{\rm in }\; L^p(E),\; P-{\rm a.s.}\]
Also by
\[ U^{n}_s(x)= (U^n_s(x)\wedge V^n_s(x)) + (w^n_s(x))^+,\]
we get that in fact
\begin{equation}\label{genpc} U^n\rightarrow (w)^+,\;\; {\rm in}\; L^p(E),\;\; \hbox{for any $p \geq 1$, } P-{\rm a.s.},\end{equation}
and hence $U(dt,dx)=w_t(x)^+ dtdx$. With some abuse of notation we denote the density of $U(dt,dx)$
by $U_t(x)$. Similarly we get
$$ V(dt,dx) = w_t(x)^- dtdx$$
and we denote its density by $V_t(x)$. In what follows we will use the continuous in space versions of the densities of
$U(dt,dx),V(dt,dx)$, that is, $U_t(x)=w_t(x)^+, \; V_t(x)=w_t(x)^-$, and hence, by Lemma~\ref{lem:01_7_2},
we get that $(U,V)\in D^{\epsilon}([0,T], C_{\rm rap})^2$. We delay the proof of the assertion that $K\in D^{\epsilon}([0,T],M_F({\mathbb{R}}))$ until
the proof of part {\bf (b)}.
\paragraph{(b)} Fix an arbitrary $\phi \in {\cal D}$.
We will go to the limit in (\ref{eq:4.3}) for $\{ U^n_{\cdot}(\phi)\}_{n\geq 1}$.
As $\{U^n\}_{n\geq 1}$ converges a.s. to $w^+$ in $L^2(ds,dx)$, and
$$U^n\leq \bar{U}^n\to\bar{U} \hbox{ in }D^{\epsilon}([0,T], C_{\rm rap}),$$ it is easy to see that $\{U^n_{\cdot}(\phi)\}_{n\geq 1}$
converges to
$w^+_{\cdot}(\phi)\equiv\int w^+_{\cdot}(x)\phi(x)\,dx$ in $L^2[0,T]$ a.s. As for the right-hand side,
$$\sup_{t\le T}\Bigl|\int_0^tU_s^n(\frac{1}{2}\Delta\phi)\,ds-\int_0^tU_s(\frac{1}{2}\Delta\phi)\,ds\Bigr|\le\| U^n-U\|_{L^2(E)}\|\Delta\phi/2\|_{L^2(E)}\to 0,$$
and in particular
$\{
\int_0^{\cdot} U^{n}_s(\frac{1}{2}\Delta \phi)\, ds\}_{n\geq 1}$ converges to
$
\int_0^{\cdot} U_s(\frac{1}{2}\Delta \phi)\, ds$
in $C([0,T],{\mathbb{R}})$
(and hence in $L^2[0,T]$).
By (a) $\{K^n(\phi)(ds)\}_{n\geq 1}$ converges to $K(\phi)(ds)$ as finite signed measures on $[0,T]$ a.s. and therefore
$\{K^n_{\cdot}(\phi)\}_{n\geq 1}$ converges in $L^2[0,T]$ to $K_{\cdot}(\phi)$ a.s. Since the immigration term
does not change with $n$,
it also converges in $L^2[0,T]$.
Now we have to deal with the convergence of the stochastic integral term, which we denoted by $M^{n,U}(\phi)$.
We proved in~{\bf (a)} that $\{M^{n,U}(\phi)\}_{n\geq 1}$
converges a.s. in $C([0,T],{\mathbb{R}})$.
Moreover, by~(\ref{eq:01_7_2}), the martingales $M_t^{n,U}(\phi)$ are bounded in $L^p(dP)$ uniformly in $n$ and $t\in [0,T]$,
for all $p\geq 2$, and hence the limiting process is a continuous
martingale that we will call $M^{U}(\phi)$. Turning to its quadratic variation,
it follows from \eqref{genpc}
that the sequence $\{ (U^n)^{2\gamma}\}_{n\geq 1}$ converges
to $U^{2\gamma}$ in $L^2(E)$ a.s.
and this implies that,
\begin{eqnarray}
\label{eq:01_7_3}
\langle M^{n,U}_{\cdot}(\phi)\rangle_t &=& \int_0^t \int_{{\mathbb{R}}} U^n(s,x)^{2\gamma} \phi(x)^2\,dxds
\\
\nonumber
&\rightarrow & \int_0^t \int_{{\mathbb{R}}}U(s,x)^{2\gamma} \phi(x)^2\,dxds,
\;\;{\rm as}\; n\rightarrow \infty,\ P-{\rm a.s.}
\end{eqnarray}
Hence, again by boundedness of $M_t^{n,U}(\phi)$ in $L^p(dP), p\geq 2,$ uniformly in $t\in [0,T], n\geq 1,$
we get that the limiting continuous
martingale $M^{U}$ has quadratic variation
$$\langle M^{U}_{\cdot}(\phi)\rangle_t=\int_0^t \int_{{\mathbb{R}}}U(s,x)^{2\gamma}\phi(x)^2\,dxds$$
for any $\phi\in {\cal D}$. Since ${\cal D}$ is bounded-pointwise dense in $C_{b}({\mathbb{R}})$, $M^U$ can be extended to a martingale
measure on $E$, and
one can show by a standard procedure
that there is a space-time white noise $W^U$ such that
$$ M^U_t(\phi)=\int_0^t \int_{{\mathbb{R}}} U(s,x)^{\gamma} \phi(x) W^U(ds,dx),\;\;t\in[0,T],
\;\forall \phi\in C_{b}({\mathbb{R}})\,.$$
Now we are ready to take limits in \eqref{eq:4.3} in $L^2([0,T])$.
We get
\begin{eqnarray}
\nonumber
U_t(\phi) &=& \sum_i\langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)\\
\nonumber
&& +
\int_0^t \int_{{\mathbb{R}}}U(s,x)^{\gamma} \phi(x) W^{U}(ds,dx) \\
&& \mbox{}+ \int_0^t U_s(\frac{1}{2}\Delta \phi)\, ds - K_t(\phi), \;\;\;t\in [0,T].
\label{eq:4.4}
\end{eqnarray}
Note that although some of the convergences leading to the above equation hold in $L^2[0,T]$,
all terms are right continuous in $t$ and so the equality
holds for all $t$, and not just for a.e. $t$. By equation~\eqref{eq:4.4} and the fact that $U\in D^{\epsilon}([0,T], C_{\rm rap})$ (from {\bf (a)}) we see that $K_\cdot(\phi)\in D^\epsilon([0,T],{\mathbb{R}})$. It then follows from $K\in M_F(E)$ that $K_\cdot \in D^{\epsilon}([0,T], M_F({\mathbb{R}}))$ and this proves the last part of {\bf (a)}.
Now we will rewrite the above equation in the mild form. The derivation is a
bit more complicated than that of (\ref{eq:mild1}) for $\bar{U}$, due to the presence of the measure-valued term $K$.
For any $\phi\in C^+_b({\mathbb{R}})$, $t\in[0,T]\setminus \mathcal{G}_{\epsilon}$,
\begin{eqnarray}
\nonumber
U_t(\phi) &=& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}} S_{t-s_i}\phi(y) J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} S_{t-s}\phi(y) U(s,y)^{\gamma}
W^{U}(ds,dy)\\
\nonumber
&&\mbox{}-\int_0^t\int_{{\mathbb{R}}} S_{t-s}\phi(y)K(ds,dy) \nonumber\\
\label{eq:23_11_12}
&=& \int_{{\mathbb{R}}}\phi(x)\sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}} p_{t-s_i}(y-x) J^{x_i}_{\epsilon}(y)\,dy\,dx \\
\nonumber
&&\mbox{} +\int_{{\mathbb{R}}} \phi(x)\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\,dx\\
\nonumber
&&\mbox{}-\int_{{\mathbb{R}}}\phi(x)\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy)\,dx , \;\;P-{\rm a.s.}, \nonumber
\end{eqnarray}
where the last equality follows by the Fubini and the stochastic Fubini theorems. Note that we take the time $t$ outside the set $\mathcal{G}_{\epsilon}$ since, for $t\in\mathcal{G}_{\epsilon}$, $K(\{t\}, dx)$ could be strictly positive, and with $p_0$ being a delta measure this creates difficulties with applying the Fubini theorem. Therefore the case of $t\in\mathcal{G}_{\epsilon}$ will be treated separately.
By {\bf (a)}, we know that
\begin{eqnarray}
\label{eq:23_11_1}
U\in D^{\epsilon}([0,T], C^+_{\rm rap}),\;\;P-{\rm a.s.}
\end{eqnarray}
By the domination
$$ U^{\gamma}\leq \bar{U}^{\gamma} \in D^\epsilon([0,T], C^+_{\rm rap}),$$
Lemma~\ref{pmom}, and Lemma~\ref{lem:30_6_1}(b) we may choose a version of the stochastic integral so that
\begin{eqnarray}
\label{eq:23_11_2}
t \mapsto \int_0^t\int_{{\mathbb{R}}} p_{t-s}(\cdot-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\in C([0,T], C_{\rm rap}),\;\;P-{\rm a.s.},
\end{eqnarray}
and in what follows we will always consider such a version. This, and
the fact that $K_\cdot \in D^{\epsilon}([0,T], M_F({\mathbb{R}}))$, implies that the equality in (\ref{eq:23_11_12}) holds
$P$-a.s. {\it for all} $t\in[0,T]\setminus \mathcal{G}_{\epsilon}$, and, hence, we get
\begin{eqnarray}
\label{eq:mild2}
U_t(x) &=& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}} p_{t-s_i}(x-y) J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&+\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\\
\nonumber
&&\nonumber -\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy),\\
&& \;\;{\rm Leb-a.e.}\; x \in {\mathbb{R}}, \; \text{for each $t\in[0,T]\setminus \mathcal{G}_{\epsilon}$}, P-{\rm a.s.} \nonumber
\end{eqnarray}
Now let us check that the above equation holds for all $ (t,x) \in ([0,T]\setminus \mathcal{G}_{\epsilon})\times{\mathbb{R}}$, $P$-a.s.
(recall again that Lemma~\ref{lem:30_6_1}(b) is used to select an appropriate jointly continuous version of the stochastic integral). First, note that steps similar to those leading to~\eqref{eq:mild2} easily imply
\begin{eqnarray}
\label{eq:mild3}
U_t(x) &=& S_{t-r}U_r(x)+ \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, r<s_i\leq t} \int_{{\mathbb{R}}} p_{t-s_i}(x-y) J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&+\int_r^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\\
\nonumber
&&-\int_r^t\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy) ,
\\
&& \nonumber \;\;{\rm Leb-a.e.}\; x \in {\mathbb{R}}, \; \text{for all $r,t\in [0,T]\setminus \mathcal{G}_{\epsilon}, r\le t,$ } P-{\rm a.s.} \nonumber
\end{eqnarray}
Lemma~\ref{lem:30_6_1}(b) can easily be strengthened to ensure that, in fact, the process
\begin{eqnarray}
\nonumber
X(r,t,x) &\equiv& \int_r^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma}
W^{U}(ds,dy), \; 0\leq r\leq t\leq T, x\in {\mathbb{R}}, \\
&& \label{eq:23_11_4}
\mbox{is $P$-a.s. continuous in $(r,t,x)$},
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:23_11_5}
X(t,t,\cdot)=0, \forall t\in [0,T].
\end{eqnarray}
Again, to be more precise, there exists a version of the process $X$ such that~\eqref{eq:23_11_4} holds, and, in what follows, we will always consider such a version.
As was already noted following Lemma~\ref{lem:mp-4.3},
\begin{eqnarray}
\label{eq:23_11_3}
t \mapsto\sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i\leq t} \int_{{\mathbb{R}}} p_{t-s_i}(\cdot-y) J^{x_i}_{\epsilon}(y)\,dy \in D^{\epsilon}([0,T], C^+_{\rm rap}),\;\;P-{\rm a.s.}
\end{eqnarray}
Let us take $A\subset \Omega$ such that $P(A)=1$ and for each $\omega\in A$, (\ref{eq:23_11_1}) and (\ref{eq:mild2})--(\ref{eq:23_11_3}) hold.
Fix an arbitrary $\omega\in A$ and $(t,x)\in ((0,T]\setminus \mathcal{G}_{\epsilon})\times{\mathbb{R}}$. Then choose
$\{(r_l,z_k)\}_{l,k\geq 1}$ such that
the equality in (\ref{eq:mild3}) holds with $(r_l,t,z_k)$ in place of $(r,t,x)$, and
$(r_l,z_k)\rightarrow (t,x)\in ([0,T]\setminus \mathcal{G}_{\epsilon})\times{\mathbb{R}}$, as $l,k\rightarrow\infty$. Also assume that
$r_l<t$, for all $l\geq 1$. Note that both $\{(r_l,z_k)\}_{l,k\geq 1}\,,(t,x)$ may depend on $\omega$.
We would like to show
\begin{eqnarray}
\label{eq:23_11_13}
\lim_{k\rightarrow \infty} \int_{0}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy) =
\int_{0}^{t}\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy).
\end{eqnarray}
Fix $\delta>0$.
By (\ref{eq:23_11_1}), (\ref{eq:23_11_4}), and (\ref{eq:23_11_5}) we can choose $l^*$ sufficiently large so that,
with $r^*\equiv r_{l^*}$, we have
\begin{eqnarray}
\label{eq:23_11_8}
\left| U_{t}(z_k) - S_{t-r^*}U_{r^*}(z_k) \right|
+ \left|\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y) U(s,y)^{\gamma} W^{U}(ds,dy)\right| \leq \delta,\;\;
\end{eqnarray}
for all $k\geq 1.$
Note that we assume without loss of generality that
$$[r^*,t]
\subset [0,T]\setminus \mathcal{G}_{\epsilon}.$$
Now we are ready to show~(\ref{eq:23_11_13}).
First, by the bounded
convergence theorem and $K_\cdot\in D^\epsilon([0,T],M_F({\mathbb{R}}))$, we get
\begin{eqnarray}
\label{eq:23_11_10}
\int_0^{r^*}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy) \rightarrow \int_0^{r^*}\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy),
\end{eqnarray}
as $k\rightarrow \infty$.
Next consider \eqref{eq:mild3} with $r=r^*$, $x=z_k$,
to conclude that
\begin{eqnarray}
U_{t}(z_k) &=& S_{t-r^*}U_{r^*}(z_k) \\
\nonumber
&&\mbox{} +\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\\
\nonumber
&&\mbox{}-\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy),\;\;\forall k\geq 1. \nonumber
\end{eqnarray}
Therefore,
\begin{eqnarray}
\label{eq:23_11_7}
\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy)
&\leq& \left| U_{t}(z_k) - S_{t-r^*}U_{r^*}(z_k) \right|\\
\nonumber
&&\mbox{}+ \left|\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\right|\\
\nonumber
&\leq& \delta,\;\;\forall k\geq 1,
\end{eqnarray}
where the last bound follows from~(\ref{eq:23_11_8}).
This together with Fatou's lemma and $K\in D^\epsilon([0,T],M_F({\mathbb{R}}))$ implies
\begin{eqnarray}
\label{eq:23_11_9}
\lefteqn{\int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy)}\\
\nonumber
&\leq& \liminf_{k\to \infty} \int_{r^*}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy)\leq \delta.
\end{eqnarray}
(\ref{eq:23_11_7}), (\ref{eq:23_11_9}), and (\ref{eq:23_11_10}) imply
\begin{eqnarray}
\limsup_{k\rightarrow \infty} \left| \int_{0}^{t}\int_{{\mathbb{R}}} p_{t-s}(x-y)K(ds,dy)-
\int_{0}^{t}\int_{{\mathbb{R}}} p_{t-s}(z_k-y)K(ds,dy)\right|\leq 3\delta,
\end{eqnarray}
and since $\delta$ was arbitrary, (\ref{eq:23_11_13}) follows.
(\ref{eq:23_11_13}) together with (\ref{eq:23_11_1}), (\ref{eq:23_11_2}), (\ref{eq:23_11_3})
implies that the equality in~(\ref{eq:mild2}) holds for {\it all} $(t,x)\in ([0,T]\setminus \mathcal{G}_{\epsilon})
\times{\mathbb{R}}$ on a set of full probability measure. Moreover, since all the other terms in~(\ref{eq:mild2}) except
$\int_0^t\int_{{\mathbb{R}}} p_{t-s}(\cdot-y)K(ds,dy)$ are in $ D^{\epsilon}([0,T], C^+_{\rm rap})$, we get that, in fact,
$$t\mapsto \int_0^t\int_{{\mathbb{R}}} p_{t-s}(\cdot-y)K(ds,dy)\in C([0,T]\setminus \mathcal{G}_{\epsilon}, C^+_{\rm rap}),\; P-{\rm a.s.}$$
Now let $t\in \mathcal{G}_{\epsilon}$, and let us show that, at $t$, the $C^+_{\rm rap}$-valued mapping \break$r\mapsto \int_0^r\int_{{\mathbb{R}}} p_{r-s}(\cdot-y)K(ds,dy)$ is right continuous and has a left limit.
We will prove it for $t=s_j\in \mathcal{G}^{\rm odd}_{\epsilon}$
for some $j$ (for $t\in \mathcal{G}^{\rm even}_{\epsilon}$ the argument is the same, even simpler).
Note that the measure $K(\{s_j\},dx)$ is absolutely continuous with respect to
Lebesgue measure. This follows from \eqref{eq:4.4} and the fact that $U$ is in $D^\epsilon([0,T],C^+_{\rm rap})$.
We will denote the density of $K(\{s_j\},dx)$ by $K(\{s_j\},x), x\in {\mathbb{R}}$. Take $\eta>0$ sufficiently small such that
$(s_j,s_j+\eta]\subset [0,T]\setminus \mathcal{G}_{\epsilon}$. Then, since (\ref{eq:mild2}) holds for all $(t,x)\in ([0,T]\setminus \mathcal{G}_{\epsilon})
\times{\mathbb{R}}$, we get
\begin{eqnarray}
\label{eq:23_11_11}
U_{s_j+\eta}(x) &=& \sum_{s_i\in \mathcal{G}_{\epsilon}^{\rm odd}, s_i< s_j} \int_{{\mathbb{R}}} p_{s_j+\eta-s_i}(x-y) J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&\mbox{}+\int_{{\mathbb{R}}} p_{\eta}(x-y) J^{x_j}_{\epsilon}(y)\,dy \\
\nonumber
&&\mbox{} +\int_0^{s_j+\eta}\int_{{\mathbb{R}}} p_{s_j+\eta-s}(x-y) U(s,y)^{\gamma}
W^{U}(ds,dy)\\
\nonumber
&&\mbox{}-\int_0^{s_j+\eta}\int_{{\mathbb{R}}} p_{s_j+\eta-s}(x-y)(K(ds,dy)-\delta_{s_j}(ds)K(\{s_j\},dy))
\\
&&\mbox{}-\int_{{\mathbb{R}}} p_{\eta}(x-y)K(\{s_j\},y)\,dy,\;\;\forall x\in {\mathbb{R}}.
\;\; \nonumber
\end{eqnarray}
Take $\eta\downarrow 0$. Since the measure $(K(ds,dy)-\delta_{s_j}(ds)K(\{s_j\},dy))$ gives zero mass to the set $\{s_j\}\times{\mathbb{R}}$,
by an argument similar to the one used in the case of $t\in [0,T]\setminus \mathcal{G}_{\epsilon}$,
we can easily derive that
\begin{align*}
&\int_0^{s_j+\eta}\int_{{\mathbb{R}}} p_{s_j+\eta-s}(\cdot-y)(K(ds,dy)-\delta_{s_j}(ds)K(\{s_j\},dy))\\
&\rightarrow
\int_0^{s_j}\int_{{\mathbb{R}}} p_{s_j-s}(\cdot-y)(K(ds,dy)-\delta_{s_j}(ds)K(\{s_j\},dy)),
\end{align*}
in $C_{\rm rap}$, as $\eta\downarrow 0$. Moreover,
$U_{s_j+\eta}(\cdot)$ and the first three terms on the right hand side
of~(\ref{eq:23_11_11}) converge in $C_{\rm rap}$. This immediately implies that the last term \break
$\int_{{\mathbb{R}}} p_{\eta}(\cdot -y)K(\{s_j\},y)\,dy$ also converges in $C_{\rm rap}$, and clearly the limit is
\begin{eqnarray}
\label{eq:24_11_2}
K(\{s_j\},\cdot)\in C_{\rm rap},
\end{eqnarray}
or more precisely a $C_{\rm rap}$-valued version of this density.
Altogether we get that (\ref{eq:mild2}) holds also for $t\in \mathcal{G}^{\rm odd} _{\epsilon}$ with $p_0$
being the Dirac measure; moreover the $C_{\rm rap}$-valued mapping
$r\mapsto \int_{0}^{r}\int_{{\mathbb{R}}} p_{r-s}(\cdot-y)K(ds,dy)$
is right
continuous at $t\in\mathcal{G}^{\rm odd} _{\epsilon}$. The existence of left-hand limits for $r\mapsto \int_{0}^{r}\int_{{\mathbb{R}}} p_{r-s}(\cdot-y)K(ds,dy)$ at $t\in\mathcal{G}^{\rm odd} _{\epsilon}$ follows by a similar argument. As we noted above, the same proof works for $t\in\mathcal{G}^{\rm even} _{\epsilon}$,
and this finishes the proof of {\bf (b)}.
\paragraph{(c)}
By the above $t\mapsto K_t$ is continuous on $[0,T]\setminus \mathcal{G}_{\epsilon}$.
Since $\{K^n\}$ is a sequence of continuous, non-decreasing measure-valued processes, its tightness in $M_F(E)$ immediately
implies tightness on all the open intervals between the jumps of the limiting process,
in the space of continuous measure-valued paths,
that is,
in $C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$.
So, the only jumps $K$ may possibly have are at the points $s_i, t_i\in \mathcal{G}_{\epsilon}$. We recall that a jump of the measure-valued process $K$ at any $t\in [0,T]$ equals $K(\{t\},dx)=K(\{t\},x)\,dx$, where by \eqref{eq:24_11_2} $K(\{t\},\cdot)\in C_{\rm rap}$ for all $t\in\mathcal{G}_\epsilon$.
We now calculate the sizes of those jumps. Consider the possible jump at $s_i$.
Assume $\phi$ is a non-negative function in $C^2_{c}({\mathbb{R}})$.
By~\eqref{eq:4.4} (and its analogue for $V$), $U=w^+$, and $V=w^-$,
we have the following conditions on $w^{\pm}_{s_i}$:
\begin{eqnarray}
\label{eq:4.11}
{\mathbf \Delta}\langle w^+,\phi\rangle(s_i) &=& \langle J^{x_i},\phi\rangle-
\langle K(\{s_i\},\cdot), \phi\rangle\,,\\
\label{eq:4.12}
{\mathbf \Delta}\langle w^-,\phi\rangle(s_i) &=& - \langle K(\{s_i\},\cdot), \phi\rangle \leq 0.
\end{eqnarray}
The above are preserved under bounded pointwise limits in $\phi$ and so continue
to hold for any bounded Borel $\phi\ge 0$.
We consider two cases. First assume $\phi$ is such that
$${\rm supp}(\phi)\subset
\{x: w^-_{s_i-}(x)=0\}.$$
Then ${\mathbf \Delta}\langle w^-,\phi\rangle(s_i)=\langle w^-_{s_i},\phi\rangle \ge0$
and so (\ref{eq:4.12}) immediately implies that
$\langle K(\{s_i\},\cdot), \phi\rangle =0$.
Now let $\phi$ be such that
$${\rm supp}(\phi)\subset
\{x: w^+_{s_i-}(x)=0\}.$$
Then ${\mathbf \Delta}\langle w^+,\phi\rangle(s_i)=\langle w^+_{s_i},\phi\rangle \geq 0$
and so (\ref{eq:4.11}) immediately implies that
$\langle K(\{s_i\},\cdot), \phi\rangle \leq \langle J^{x_i},\phi\rangle $.
We may write $1=\phi_1+\phi_2$, where $\phi_i$ is as in Case $i$ ($i=1,2$) (because $w^+_{s_i-}(x)w^-_{s_i-}(x)\equiv 0$). It therefore
follows that
$${\mathbf \Delta}\langle
K_{s_i},1\rangle = \langle K(\{s_i\},\cdot), 1\rangle\leq \langle J^{x_i},1\rangle=\epsilon, $$
and we are done.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\begin{lemma}
\label{lem:15_2}
\begin{itemize}
\item[{\bf (a)}] For any $i\in\NN_{\epsilon}$,
$\{U^{i,n}\}_{n\geq 1}$, $\{\widetilde{U}^{i,n}\}_{n\geq 1}\,,$
$\{V^{i,n}\}_{n\geq 1},\{\widetilde{V}^{i,n}\}_{n\geq 1}$ are
tight in $C([0,T]\setminus\mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$.
\item[{\bf (b)}] For any $i,j\in \NN_{\epsilon}$, and $\phi_l\in C_{b}({\mathbb{R}}), l=1,\ldots, 4,$
$$\{(M^{i,n,U}(\phi_1),M_t^{j,n,V}(\phi_2),\widetilde{M}_t^{i,n,U}(\phi_3),\widetilde{M}_t^{j,n,V}(\phi_4))\}_{n\geq 1}$$ is
tight in $C([0,T],{\mathbb{R}})^4$.
\end{itemize}
\end{lemma}
\paragraph{Proof}
Fix an arbitrary $i\in\NN_{\epsilon}$. Let us first prove the tightness for
$\{U^{i,n}\}_{n\geq 1}$. By the non-negativity of $U^{i,n}$'s and the domination $U^{i,n}\leq \bar{U}^n\to \bar{U}\in
D([0,T], C^+_{\rm rap})$ a.s. (recall \eqref{asconv}), by Jakubowski's Theorem (see, e.g., Theorem II.4.1 in \cite{per02}) it is enough to prove
tightness of $\{U^{i,n}(\phi)\}_{n\geq 1}$ in $C([0,T]\setminus\mathcal{G}_{\epsilon}, {\mathbb{R}})$, for any $\phi\in C^2_{b}({\mathbb{R}})$.
From~(\ref{tUVndefn}) we get
\begin{eqnarray}
\label{eq:29_06_1}
U^{i,n}_t(\phi) &=& \langle J^{x_i},\phi\rangle\mathbf{1}(t\geq s_i)
+
M_t^{i,n,U}(\phi) \\
\nonumber
&& \mbox{}+ \int_0^t U^{i,n}_s(\Delta \phi/2)\, ds -K_t^{i,n,U}(\phi) , \;\;t\in[0,T]\,.
\end{eqnarray}
For any $p>2$, we use H\"older's inequality to bound the $p$-th moment of the increment of the
third term on the right hand side of~(\ref{eq:29_06_1}):
\begin{eqnarray}
\lefteqn{
E\left[ \left| \int_u^t U^{i,n}_s(\frac{1}{2}\Delta \phi)\, ds\right|^p\right]}\\
\nonumber
&\leq&
\sup_{s\leq T, x\in {\mathbb{R}}}e^{\lambda p |x|}E\left[ \bar{U}^n(s,x)^{p}\right]
\left[ \int_{{\mathbb{R}}} e^{-\lambda|x|} \left|\frac{1}{2}\Delta \phi(x) \right| \,dx\right]^{p}
(t-u)^p,\ \forall 0\leq u\leq t.
\end{eqnarray}
Now use Lemma~\ref{qmom}(b) and the Kolmogorov tightness criterion
to see that
\begin{eqnarray}
\label{29_06_2}
\left\{\int_0^{\cdot} U^{i,n}_s(\frac{1}{2}\Delta \phi)\, ds\right\}_{n\geq 1}
\mbox{ is tight in}\; C([0,T],{\mathbb{R}}), \;\forall \phi
\in C^2_{b}({\mathbb{R}}).
\end{eqnarray}
As for the martingale $M^{i,n,U}_{\cdot}(\phi)$, we can argue exactly as in the proof of tightness for
$\{M^{n,U}(\phi)\}_{n\geq 1}$ in Lemma~\ref{lem:15_1}(a), by using again the domination,
$U^{i,n}(s,\cdot)\le U^n(s,\cdot)\leq \bar{U}^n(s,\cdot), s\in [0,T],$ to show that,
\begin{eqnarray}
\label{eq:02_7_1}
\left\{M^{i,n,U}_{\cdot}(\phi)\right\}_{n\geq 1}\; \mbox{is tight in}\; C([0,T],{\mathbb{R}})
\end{eqnarray}
for any $\phi\in C_{b}({\mathbb{R}})$. As for $K^{i,n,U}$, it is dominated from above by $K^{n}$ and
by Lemma~\ref{lem:15_1}(c), $\{K^n\}_{n\geq 1}$
is tight in $C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$. Therefore $\{K^{i,n,U}\}_{n\geq 1}$ is also tight in the same space.
We combine this with (\ref{29_06_2}), (\ref{eq:02_7_1}) and (\ref{eq:29_06_1}) to finish
the proof of tightness of $\{U^{i,n}\}_{n\geq 1}$ in $C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$.
As for $\{\widetilde{U}^{i,n}\}_{n\geq 1}$, we get by the same argument as above that
\begin{eqnarray}
\label{29_06_3}
\left\{\int_0^{\cdot} \widetilde{U}^{i,n}_s(\Delta \phi/2)\, ds\right\}_{n\geq 1}
\mbox{ is tight in}\; C([0,T],{\mathbb{R}}), \;\forall \phi
\in C^2_{b}({\mathbb{R}}).
\end{eqnarray}
For the martingale term, fix an arbitrary $\phi\in C_{b}({\mathbb{R}})$. We have again tightness of $\left\{\widetilde{M}^{i,n,U}_{\cdot}(\phi)\right\}_{n\geq 1}$
in $ C([0,T],{\mathbb{R}})$
by the same method as for $\left\{M^{i,n,U}_{\cdot}(\phi)\right\}_{n\geq 1}\,,$ by using the domination,
$$\left[ \left(\widetilde{U}^n(s,\cdot)+U^n(s,\cdot)\right)^{2\gamma} - U^n(s,\cdot)^{2\gamma}\right]^{1/2}
\sqrt{\frac{\widetilde{U}^{i,n}(s,\cdot)}{\widetilde{U}^n(s,\cdot)}}\leq \bar{U}^n(s,\cdot)^{\gamma}, \;s\in[0,T].
$$
The tightness of $\{V^{j,n}(\phi)\}_{n\geq 1}$ and $\{\widetilde{V}^{j,n}(\phi)\}_{n\geq 1}$ follows in exactly the same way.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\bigskip
In what follows we take
any converging subsequence of the processes from
Lemma~\ref{lem:15_2}(a), Lemma~\ref{lem:15_1}(a), and Corollary~\ref{cor:15_1}. Recall that ${\cal D}$ is the countable subset of
$C^2_{b}({\mathbb{R}})$ which is bounded-pointwise dense in $C_b({\mathbb{R}})$. By Lemma~\ref{lem:15_2}(b) we can take a further subsequence, if needed, so that
all the martingales from Lemma~\ref{lem:15_2}(b) indexed by functions from ${\cal D}$ converge in $C([0,T],{\mathbb{R}})$.
To simplify notation we will still index
this subsequence by $n$.
Let us also switch to the Skorohod space where all the processes mentioned in the previous paragraph
converge a.s. Since
$(\bar{U}^{n}, \bar{V}^{n})$ has the same law as the weakly unique in $D^{\epsilon}([0,T], C^+_{\rm rap})^2$ solution to~(\ref{eq:2.8})
(by Theorem~1.1 of~\cite{myt98w}),
we may, and shall, assume that on our probability space
$(\bar{U}^{n}, \bar{V}^{n})\to(\bar{U}, \bar{V})$ in $D^{\epsilon}([0,T],C^+_{\rm rap})^2, \;$a.s.,
and, of course,
\begin{eqnarray}
\label{eq:11_06}
U^{i,n}, \widetilde{U}^{i,n}, U^{n}, \widetilde{U}^n&\leq& \bar{U}^n,\;\; \forall n\geq 1, \; i\in \NN_{\epsilon}, \\
\nonumber
V^{i,n}, \widetilde{V}^{i,n}, V^{n}, \widetilde{V}^n&\leq& \bar{V}^n,\;\; \forall n\geq 1,\; i\in \NN_{\epsilon}.
\end{eqnarray}
For $i\in \NN_{\epsilon}$, let
$$U, V, \widetilde{U}, \widetilde{V}, \bar{U} ,\bar{V}, K, U^{i}, V^{i}, \widetilde{U}^{i}, \widetilde{V}^{i}, K^{i,U}, K^{i,V}$$
be the limiting points of $\{U^{n}\}_{n\geq 1}, \{V^{n}\}_{n\geq 1},$ $\{\widetilde{U}^{n}\}_{n\geq 1},$ $ \{\widetilde{V}^{n}\}_{n\geq 1}, $ $\{\bar{U}^{n}\}_{n\geq 1},$ $ \{\bar{V}^{n}\}_{n\geq 1},$
$\{K^n\}_{n\ge 1},$ $ \{U^{i,n}\}_{n\geq 1}$, $\{V^{i,n}\}_{n\geq 1}$, $\{\widetilde{U}^{i,n}\}_{n\geq 1}$,
$\{\widetilde{V}^{i,n}\}_{n\geq 1}$, $\{K^{i,n,U}\}_{n\geq 1}$, $\{K^{i,n,V}\}_{n\geq 1}$, respectively.
Clearly w.p. 1 for all $t\in[0,T]\setminus\mathcal{G}_\epsilon$,
\begin{eqnarray}\label{Usum}
U_t &=& \sum_{i \in{\mathbb{N}}_{\epsilon}} U^{i}_t\,,\;\;\;\;\widetilde{U}_t = \sum_{i \in{\mathbb{N}}_{\epsilon}} \widetilde{U}^{i}_t\,,\\
V_t &=& \sum_{i \in{\mathbb{N}}_{\epsilon}}V^{i}_t\,,\;\;\;\;\widetilde{V}_t = \sum_{i \in{\mathbb{N}}_{\epsilon}} \widetilde{V}^{i}_t\,,
\end{eqnarray}
by the corresponding equations for the approximating processes,
$$\bar{U}_t=U_t+\widetilde{U}_t, \quad\bar{V}_t=V_t+\widetilde{V}_t\hbox{ for all }t\in[0,T]$$
by the same reasoning and Lemma~\ref{lem:15_1}(a), and
$$K=\sum_{i\in{\mathbb{N}}_\epsilon} K^{i,U}=\sum_{j\in{\mathbb{N}}_\epsilon} K^{j,V}.$$
By Lemma~\ref{lem:15_1}(a) we may take versions of $U,\widetilde{U},V,\widetilde{V},\bar{U},\bar{V}$ in $D^\epsilon([0,T],C^+_{\rm rap})$. We next refine the state space of the subprocesses corresponding to the individual clusters.
\begin{lemma}
\label{lem:05_7_1}
For any $i\in \NN_{\epsilon}$,
$$\left(U^{i},\widetilde{U}^i,V^i,\widetilde{V}^{i},K^{i,U},K^{i,V}\right)\in
(D^{\epsilon}([0,T],M_F({\mathbb{R}}))\cap L^2(E))^4\times D([0,T],M_F({\mathbb{R}}))^2$$
and $\left(U^{i},\widetilde{U}^i,V^i,\widetilde{V}^{i},K^{i,U},K^{i,V}\right)_{i\in {\mathbb{N}}_{\epsilon}}$ satisfy (\ref{UVdefn}), \eqref{eq:2.2} and (\ref{tUVdefn}).
\end{lemma}
\paragraph{Proof}
Although $U^i$ (and similarly $V^i, \widetilde{U}^i, \widetilde{V}^i$) is defined
as a limit point of $\{U^{i,n}\}_{n\geq 1}$ in $C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))$, it can also be considered
as a limit of $\{U^{i,n}\}_{n\geq 1}$ in the weak $L^2(E)$ topology (in the sequel we denote the space $L^2(E)$ equipped
with the weak topology by $L^{2,w}(E)$).
Indeed, since
by~(\ref{eq:11_06}), all $U^{i,n}, \widetilde{U}^{i,n}$ (resp. $V^{i,n}, \widetilde{V}^{i,n}$) are bounded from above
by $\bar{U}^n \to \bar{U}$ in $D([0,T],C^+_{\rm rap})$ (resp. $\bar{V}^n\to \bar{V}\hbox{ in } D([0,T],C^+_{\rm rap})$), we get that, in fact,
$$\{U^{i,n}\}_{n\geq 1}, \{\widetilde{U}^{i,n}\}_{n\geq 1}, \{V^{i,n}\}_{n\geq 1}, \{\widetilde{V}^{i,n}\}_{n\geq 1}$$
are all relatively compact in $L^{2,w}(E)$. This and the convergence of $\{U^{i,n}\}_{n\geq 1}$, $\{V^{i,n}\}_{n\geq 1}$,
$\{\widetilde{U}^{i,n}\}_{n\geq 1}$,
$\{\widetilde{V}^{i,n}\}_{n\geq 1}$, in $C([0,T]\setminus \mathcal{G}_\epsilon,M_F({\mathbb{R}}))$ as $n\rightarrow \infty$,
imply that
$$(U^{i,n},\widetilde{U}^{i,n}, V^{i,n},\widetilde{V}^{i,n}) \rightarrow (U^i,\widetilde{U}^i, V^i,\widetilde{V}^i),
\;\;{\rm in}\; L^{2,w}(E)^4,\; P-{\rm a.s.},\; {\rm as}\; n\rightarrow \infty.
$$
Therefore we have
$$(U^i,\widetilde{U}^i, V^i,\widetilde{V}^i)\in (C([0,T]\setminus \mathcal{G}_{\epsilon}, M_F({\mathbb{R}}))\cap L^2(E))^4.$$
From our earlier remark prior to Proposition~\ref{prop:4} and $K^{i,U},K^{i,V}\in M_F(E)$, we have
$$ (K^{i,U}, K^{i,V})\in D([0,T],M_F({\mathbb{R}}))^2.$$
Now let us derive the semimartingale decomposition for $U^i$. Consider the convergence of the
right-hand side of the equation for $U^{i,n}(\phi)$ in~(\ref{tUVndefn}). By convergence of $\{U^{i,n}\}_{n\geq 1}$ in $L^{2,w}(E)$ we
get that, for any $\phi\in C^2_{b}({\mathbb{R}})$ and any $t\le T$,
\begin{eqnarray}
\int_0^t \int_{{\mathbb{R}}} U^{i,n}_s(x)\frac{\Delta}{2}\phi(x)\,dxds
\rightarrow \int_0^t \int_{{\mathbb{R}}} U^{i}_s(x)\frac{\Delta}{2}\phi(x)\,dxds,\;\;{\rm as}\; n\rightarrow \infty.
\end{eqnarray}
Now fix an arbitrary $\phi\in {\cal D}$. By Lemma~\ref{lem:15_2}(b) we may assume that $M^{i,n,U}(\phi)$
converges a.s. in $C([0,T],{\mathbb{R}})$.
Moreover, using a bound analogous to~(\ref{eq:01_7_2}),
one can immediately get that, for any $p\geq 2$,
the martingale $M_t^{i,n,U}(\phi)$ is bounded in $L^p(dP)$ uniformly in $n$ and $t\in [0,T]$.
Hence, the limiting process is a continuous
$L^2$-martingale that we will call $M^{i,U}(\phi)$. For its quadratic variation,
recall that the sequence $\{ (U^n)^{2\gamma-1}\}_{n\geq 1}$ converges to $U^{2\gamma-1}$ {\it strongly} in $L^2(E)$ (by \eqref{genpc})
and this together with convergence of $\{U^{i,n}\}_{n\geq 1}$ in $L^{2,w}(E)$ implies that,
for any $\phi\in C_{b}({\mathbb{R}})$ and $t\le T$, w.p.1
\begin{eqnarray}
\label{eq:11.2}
\langle M^{i,n,U}(\phi)\rangle_t &=& \int_0^t \int_{{\mathbb{R}}} U^n(s,x)^{2\gamma-1} U^{i,n}(s,x) \phi(x)^2\,dxds
\\
\nonumber
&\rightarrow & \int_0^t \int_{{\mathbb{R}}}U(s,x)^{2\gamma-1} U^{i}(s,x) \phi(x)^2\,dxds,
\;\;{\rm as}\; n\rightarrow \infty.
\end{eqnarray}
Hence, again by boundedness of $M_t^{i,n,U}(\phi)$, in $L^p(dP), p\geq 2,$ uniformly in $t\in [0,T], n\geq 1,$
we get that the limiting continuous
martingale $M^{i,U}$ has quadratic variation
$$\langle M^{i,U}(\phi)\rangle_t=\int_0^t \int_{{\mathbb{R}}}U(s,x)^{2\gamma-1} U^{i}(s,x) \phi(x)^2\,dxds$$
for all $\phi\in {\cal D}\subset C_{b}({\mathbb{R}})$.
Moreover, by repeating the above argument for $V^{i,n}$ we get that $(U^{i}, V^{i})_{i\in \NN_{\epsilon}},$ solves the
following martingale problem:
\begin{eqnarray}
\nonumber
\left\{
\begin{array}{rcl}
&&\hspace*{-2.2cm}\mbox{For all $\phi_i, \psi_j\in {\cal D}\subset C^2_{b}({\mathbb{R}})$,}\\
&&\mbox{}\\
U^{i}_t(\phi_i) &=& \langle J^{x_i},\phi_i\rangle\mathbf{1}(t\geq s_i)
+ M_t^{i,U}(\phi_i) \\
&& \mbox{}+
\int_0^t U^{i}_s(\frac{1}{2}\Delta \phi_i)\, ds - K^{i,U}_t(\phi_i)
\;\;\forall t\in [0,T]\,,
i\in\NN_{\epsilon},
\\
&&\mbox{}\\
V^{j}_t(\psi_j) &=& \langle J^{y_j},\psi_j\rangle\mathbf{1}(t\geq t_j)
+ M_t^{j,V}(\psi_j) \\
\nonumber
&& \mbox{}+ \int_0^t V^{j}_s(\frac{1}{2}\Delta \psi_j)\, ds -
K^{j,V}_t(\psi_j), \;\;\forall t\in [0,T]\,, j\in\NN_{\epsilon},
\end{array}
\right.
&&
\label{eq:11.1}
\\
&&\mbox{}
\end{eqnarray}
where $M^{i,U}(\phi_i), M^{j,V}(\psi_j)$ are martingales such that
\begin{eqnarray}
\nonumber
\left\{
\begin{array}{rcl}
\langle M^{i,U}_{\cdot}(\phi_i), M_{\cdot}^{j,U}(\phi_j)\rangle_t &=& \delta_{i,j}
\int_0^t \int_{{\mathbb{R}}} U(s,x)^{2\gamma-1} U^{i}(s,x) \phi_i(x)^2\,dxds,\;\;\forall i,j\in\NN_{\epsilon},\\
\nonumber &&
\\
\nonumber
\langle M^{i,V}_{\cdot}(\psi_i), M_{\cdot}^{j,V}(\psi_j)\rangle_t &=& \delta_{i,j}
\int_0^t \int_{{\mathbb{R}}} V(s,x)^{2\gamma-1} V^{i}(s,x) \psi_i(x)^2\,dxds,\;\;\forall i,j\in\NN_{\epsilon},\\
&&\\
\langle M_{\cdot}^{i,U}(\phi_i), M_{\cdot}^{j,V}(\psi_j)\rangle_t &=&0, \;\;\forall i,j\in\NN_{\epsilon}.
\end{array}
\right.
&&
\label{eq:11.7}
\\
&&\mbox{}
\end{eqnarray}
Note that the equality in (\ref{eq:11.1})
holds for any $t$ in $[0,T]\setminus \mathcal{G}_{\epsilon}$ since both left- and right-hand sides are continuous processes on
$[0,T]\setminus \mathcal{G}_{\epsilon}$; moreover the right-hand side is cadlag on $[0,T]$. Using this and the domination $U^i_t\le \bar U_t$ and $V^i_t\le \bar V_t$ for $t\notin\mathcal{G}_\epsilon$, we may construct versions of $U^i$ and $V^i$ in $D^{\epsilon}([0,T], M_F({\mathbb{R}}))\cap L^2(E)$ so that equality in (\ref{eq:11.1})
holds for all $t$ in $[0,T]$. Clearly the martingale problem (\ref{eq:11.1})
can be also extended to all $\phi_i,\psi_j\in C^2_{b}({\mathbb{R}})$ by a limiting procedure, again using the
$L^p(dP)$ boundedness of the martingales for any $p\geq 2$.
Now let us handle the processes $(\widetilde{U}^i, \widetilde{V}^i), i\in{\mathbb{N}}_{\epsilon}$. By the same steps that were used to treat $(U^i, V^i)_{i\in {\mathbb{N}}_{\epsilon}}$
we get that $(\widetilde{U}^i, \widetilde{V}^i)_{i\in {\mathbb{N}}_{\epsilon}}$ satisfies the following martingale problem:
\begin{eqnarray}
\left\{
\begin{array}{rcl}
&&\hspace*{-2.2cm}\mbox{For all $\phi_i, \psi_j\in {\cal D}\subset C^2_{b}({\mathbb{R}})$,}\\
&&\mbox{}\\
\widetilde{U}^{i}_t(\phi_i) &=& \langle J^{x_i},\phi_i\rangle\mathbf{1}(t\geq s_i)
+ \widetilde{M}_t^{i,U}(\phi_i) \\
&& \mbox{}+
\int_0^t \widetilde{U}^{i}_s(\frac{1}{2}\Delta \phi_i)\, ds + K^{i,U}_t(\phi_i)
\;\;\forall t\in [0,T]\,,
i\in \NN_{\epsilon},
\\
&&\mbox{}\\
\widetilde{V}^{j}_t(\psi_j) &=& \langle J^{y_j},\psi_j\rangle\mathbf{1}(t\geq t_j)
+\widetilde{M}_t^{j,V}(\psi_j) \\
\nonumber
&& \mbox{}+ \int_0^t \widetilde{V}^{j}_s(\frac{1}{2}\Delta \psi_j)\, ds +
K^{j,V}_t(\psi_j), \;\;\forall t\in [0,T]\,, j\in \NN_{\epsilon},
\end{array}
\right.
&&
\label{eq:03_7_3}
\\&&\mbox{}
\end{eqnarray}
where by Lemma~\ref{lem:15_2} $\widetilde{M}^{i,U}(\phi_i), \widetilde{M}^{j,V}(\psi_j)$ are continuous processes. By the same argument as
before (the boundedness of the approximating martingales in $L^p(dP)$, $p\geq 2$, uniformly in $n$ and $t$) they are
martingales and we would like to show that, for any $i,j\in\NN_{\epsilon}$,
\begin{eqnarray}
\nonumber
\left\{
\begin{array}{rcl}
\langle \widetilde{M}^{i,U}_{\cdot}(\phi_i), \widetilde{M}_{\cdot}^{j,U}(\phi_j)\rangle_t &=& \delta_{i,j}
\int_0^t\int_{{\mathbb{R}}}\frac{\left(\widetilde{U}(s,x)+U(s,x)\right)^{2\gamma}- U(s,x)^{2\gamma}}{\widetilde{U}(s,x)}\widetilde{U}^i(s,x)
\phi_i(x)^2 dx\,ds,\;\;\\
\nonumber
&&\\
\nonumber
\langle \widetilde{M}^{i,V}_{\cdot}(\psi_i), \widetilde{M}_{\cdot}^{j,V}(\psi_j)\rangle_t &=& \delta_{i,j}
\int_0^t\int_{{\mathbb{R}}}\frac{\left(\widetilde{V}(s,x)+V(s,x)\right)^{2\gamma}- V(s,x)^{2\gamma}}{\widetilde{V}(s,x)}\widetilde{V}^i(s,x)
\psi_i(x)^2 dx\,ds,\;\;\\
&&\\
\langle \widetilde{M}_{\cdot}^{i,U}(\phi_i), \widetilde{M}_{\cdot}^{j,V}(\psi_j)\rangle_t &=&0.
\end{array}
\right.
&&
\label{eq:03_7_2}
\\
&&\mbox{}
\end{eqnarray}
As before, the orthogonality of the limiting martingales follows easily from the boundedness of the approximating
martingales in $L^p(dP)$, $p\geq 2$, uniformly in $n$ and $t$,
together with their orthogonality. Next we calculate the quadratic variations. We will do it
just for $\widetilde{M}^{i,U}_{\cdot}(\phi)$, for some $i\in \NN_{\epsilon}$.
It is enough to show that for any
$\phi\in C_{b}({\mathbb{R}})$ and $t\in [0,T]$,
\begin{eqnarray}
\label{eq:03_7_1}
\lefteqn{
\int_0^t\int_{{\mathbb{R}}}\frac{\left(\widetilde{U}^n(s,x)+U^n(s,x)\right)^{2\gamma}- U^n(s,x)^{2\gamma}}{\widetilde{U}^n(s,x)}\widetilde{U}^{i,n}(s,x)
\phi(x)dx\,ds}\\
\nonumber
&\rightarrow&\int_0^t\int_{{\mathbb{R}}}\frac{\left(\widetilde{U}(s,x)+U(s,x)\right)^{2\gamma}- U(s,x)^{2\gamma}}{\widetilde{U}(s,x)}\widetilde{U}^i(s,x)
\phi(x)dx\,ds,\;
\end{eqnarray}
in $L^{1}(dP)$, ${\rm as}\;
n\rightarrow\infty$. Denote
$$ F(\tilde u, u) \equiv \left(\tilde u+ u \right)^{2\gamma}- u^{2\gamma}.
$$
Then, for any
$\phi\in C_{b}({\mathbb{R}})$ and $t\in [0,T]$, we get
\begin{eqnarray}
\lefteqn{
\left| \int_0^t\int_{{\mathbb{R}}}\frac{F(\widetilde{U}^n(s,x),U^n(s,x))}{\widetilde{U}^n(s,x)}\widetilde{U}^{i,n}(s,x)
\phi(x)dx\,ds\right.}\\
\nonumber
&&\left.\mbox{}-\int_0^t\int_{{\mathbb{R}}} \frac{F(\widetilde{U}(s,x),U(s,x))}{\widetilde{U}(s,x)}\widetilde{U}^i(s,x)
\phi(x)dx\,ds\right|\\
\nonumber
&\leq&
\left| \int_0^t\int_{{\mathbb{R}}}\left(\frac{F(\widetilde{U}^n(s,x),U^n(s,x))}{\widetilde{U}^n(s,x)}-
\frac{F(\widetilde{U}(s,x),U(s,x))}{\widetilde{U}(s,x)}\right) \widetilde{U}^{i,n}(s,x)
\phi(x)dx\,ds\right|\\
\nonumber
&&
\mbox{}+\left| \int_0^t\int_{{\mathbb{R}}}\frac{F(\widetilde{U}(s,x),U(s,x))}{\widetilde{U}(s,x)}\left( \widetilde{U}^i(s,x)-\widetilde{U}^{i,n}(s,x)\right)
\phi(x)dx\,ds\right|
\\
\nonumber
&\equiv& I^{1,n}+I^{2,n}.
\end{eqnarray}
Clearly
\begin{eqnarray}
\label{eq:29_06_3}
\frac{F(\widetilde{U}(s,x),U(s,x))}{\widetilde{U}(s,x)}\leq 2\gamma \bar{U}^{2\gamma-1}\in L^2(E),
\end{eqnarray}
and hence by convergence
of $\widetilde{U}^{i,n}$ to $\widetilde{U}^i$ in $L^{2,w}(E)$, a.s., we get that
$$ I^{2,n}\rightarrow 0,\;{\rm as}\; n\rightarrow \infty\ \ a.s.$$
and by dominated convergence it is easy to get that, in fact, the convergence is in $L^1(dP)$. As
for $I^{1,n}$, by using $\left|\frac{ \widetilde{U}^{i,n}(s,x)}{ \widetilde{U}^n(s,x)}\right|\leq 1$ we immediately get that
\begin{eqnarray}
\nonumber
I^{1,n}&\leq& \int_0^t\int_{{\mathbb{R}}}\left| F(\widetilde{U}^n(s,x),U^n(s,x))-
F(\widetilde{U}(s,x),U(s,x))\right|
\phi(x)dx\,ds\\
\nonumber
&&\mbox{}+ \int_0^t\int_{{\mathbb{R}}}\frac{F(\widetilde{U}(s,x),U(s,x))}{\widetilde{U}(s,x)}\left|\widetilde{U}(s,x)-\widetilde{U}^n(s,x)\right|
\phi(x)dx\,ds.
\end{eqnarray}
Using again~(\ref{eq:29_06_3}) and the convergence of $\widetilde{U}^n$ and $U^n$ to $\widetilde{U}$ and $U$, respectively, in $L^p(E)$ for any
$p\geq 1$,
we immediately get that $I^{1,n}\rightarrow 0$, a.s., as $n\rightarrow\infty$. Using again the dominated
convergence theorem we get that in fact the convergence holds in $L^1(dP)$, and~(\ref{eq:03_7_1}) follows.
As a result we get that $(U^i,V^i,\widetilde{U}^i,\widetilde{V}^i), i\in\NN_{\epsilon}$ solves the martingale problem
(\ref{eq:11.1}), (\ref{eq:11.7}), (\ref{eq:03_7_3}), (\ref{eq:03_7_2}), with all martingales corresponding to
different processes being orthogonal.
Now, as before, (see the proof of Lemma~\ref{lem:15_1}(b)), the
martingales in the martingale problem can be represented as stochastic integrals with respect to independent white
noises, and hence one immediately gets that $(U^i,V^i,\widetilde{U}^i,\widetilde{V}^i)_{i\in\NN_{\epsilon}}$ solves~(\ref{UVdefn}), \eqref{eq:2.2} and (\ref{tUVdefn}) but with
$(U^i,V^i,\widetilde{U}^i,\widetilde{V}^i)\in (D^{\epsilon}([0,T],M_F({\mathbb{R}}))\cap L^2(E))^4,\;i\in \NN_{\epsilon}$. Here we note that equality in \eqref{Usum} as $M_F({\mathbb{R}})$-valued processes extends to all $t\in[0,T]$ by right-continuity.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
To finish the proof of Proposition~\ref{prop:4} we next verify the following lemma.
\begin{lemma}
\label{lem:05_7_2}
$U^i, \widetilde{U}^i, V^i, \widetilde{V}^i\in C([0,T]\setminus\mathcal{G}_{\epsilon},C^+_{\rm rap})\cap D^{\epsilon}([0,T], L^1({\mathbb{R}})),
\;\forall i\in \NN_{\epsilon}$.
\end{lemma}
\paragraph{Proof}
We will prove it just for $U^i$, as the proof for the other terms goes along exactly the same lines.
Similarly to the steps in the proof of Lemma~\ref{lem:15_1}(b), we first write the equation for $U^i$ in the mild form to get
\begin{eqnarray}
\label{eq:24_11_1}
U^i(t,x)&=& \int_{{\mathbb{R}}} p_{t-s_i}(x-y)J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma-1/2}U^i(s,y)^{1/2}
W^{i,U}(ds,dy)
\\
\nonumber
&&\mbox{}-\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y)
K^{i,U}(ds,dy), \;\;{\rm Leb-a.e.}\; (t,x)\in([0,T]\setminus\mathcal{G}_{\epsilon})\times{\mathbb{R}}. \nonumber
\end{eqnarray}
We now argue as in the proof of part (b) of Lemma~\ref{lem:15_1}. The first term on the right hand side
of~(\ref{eq:24_11_1}) clearly belongs to
$D^{\epsilon}([0,T], C_{\rm rap})$. Similarly by the bound
$$ U^{\gamma-1/2}(U^i)^{1/2} \leq \bar{U}^{\gamma} \in D([0,T], C^+_{\rm rap}),$$
Lemma~\ref{pmom}, and Lemma~\ref{lem:30_6_1}(b), we see that the second term on the right-hand side is in
$C([0,T], C_{\rm rap})$. As for the third term on the right hand side one can use the domination
$K^{i,U}\leq K$ and Lemma~\ref{lem:15_1}(b) to get that $K^{i,U}(\{t\}, dx)=0$ for any $t\in [0,T]\setminus\mathcal{G}_{\epsilon}$.
For $P$-a.s.\ $\omega$, take arbitrary $(t,x)\in ([0,T]\setminus\mathcal{G}_{\epsilon})\times{\mathbb{R}}$ and $\{(t_k,z_k)\}_{k\geq 1}$, such that $\lim_{k\rightarrow
\infty} (t_k,z_k)=(t,x)$. Then by Lemma~\ref{lem:15_1}(b), we get that
$\{1(s< t_k)p_{t_k-s}(z_k-y)\}$ is uniformly integrable with respect to $K(ds,dy)$ and hence by domination it is also uniformly integrable with respect to $K^{i,U}(ds,dy)$. This gives continuity of the mapping
$$ (r,x)\mapsto \int_0^{r}\int_{{\mathbb{R}}} p_{r-s}(x-y)K^{i,U}(ds,dy)$$
on $([0,T]\setminus\mathcal{G}_{\epsilon})\times{\mathbb{R}}$, and again by domination we may easily show that
$$ r\mapsto \int_0^{r}\int_{{\mathbb{R}}} p_{r-s}(\cdot-y)K^{i,U}(ds,dy)\in C([0,T]\setminus\mathcal{G}_{\epsilon},C^+_{\rm rap}).$$
Altogether, this gives that the right hand side of~(\ref{eq:24_11_1}) belongs to
$C([0,T]\setminus\mathcal{G}_{\epsilon}, C_{\rm rap})$. Hence there is a version of $U^i$ which is in $C([0,T]\setminus\mathcal{G}_{\epsilon}, C^+_{\rm rap})$ as well.
Note that, in fact, the above argument also easily implies that for any $t\in \mathcal{G}_{\epsilon}$,
\begin{eqnarray}
\label{eq:24_11_4}
U^i(r,\cdot) \rightarrow U^i(t-, \cdot), \;\;{\rm in} \; C_{\rm rap},\;P-{\rm a.s.},
\end{eqnarray}
as $r\uparrow t$, where
\begin{eqnarray}
\label{eq:24_11_3}
U^i(t-,x)&=& 1(t>s_i)\int_{{\mathbb{R}}} p_{t-s_i}(x-y)J^{x_i}_{\epsilon}(y)\,dy \\
\nonumber
&&\mbox{} +\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y) U(s,y)^{\gamma-1/2}U^i(s,y)^{1/2}
W^{i,U}(ds,dy)
\\
\nonumber
&&\mbox{}-\int_0^t\int_{{\mathbb{R}}} p_{t-s}(x-y)
(K^{i,U}(ds,dy)-\delta_{t}(ds)K^{i,U}(\{t\},dy)), \;\;x\in {\mathbb{R}}. \nonumber
\end{eqnarray}
Indeed, for $(t,x)\in \mathcal{G}_{\epsilon}\times{\mathbb{R}}$, take again arbitrary $(t_k,z_k)$ such that $t_k\uparrow t$ and $z_k\rightarrow x$, as $k\rightarrow \infty$. Again by Lemma~\ref{lem:15_1}(b), we get that
$\{1(s< t_k)p_{t_k-s}(z_k-y)\}$ is uniformly integrable with respect to $(K(ds,dy)-K(\{t\},dy))$, hence by domination it is also uniformly integrable with respect to $(K^{i,U}(ds,dy)-\delta_{t}(ds)K^{i,U}(\{t\},dy))$. This easily implies that
$ U^i(t_k,z_k) \rightarrow U^i(t-,x)$, where $U^i(t-,x)$ satisfies~(\ref{eq:24_11_3}), and hence (\ref{eq:24_11_4})
follows.
Clearly, (\ref{eq:24_11_4}) implies that the corresponding convergence also holds in $L^1({\mathbb{R}})$, and hence
to finish the proof of the lemma it is enough to show that for any $t\in \mathcal{G}_{\epsilon}$,
\begin{eqnarray}
\label{eq:24_11_5}
U^i(r,\cdot) \rightarrow U^i(t, \cdot), \;\;{\rm in} \; L^1({\mathbb{R}}),\;P-{\rm a.s.},
\end{eqnarray}
as $r\downarrow t$. Again, as in the proof of Lemma~\ref{lem:15_1}(b), we will show it for $t=s_j\in \mathcal{G}^{\rm odd}_{\epsilon}$ for some $j$. By~(\ref{eq:11.1}), we get that
\begin{eqnarray}
\label{24_11_6}
U^i_{s_j}(dx)&=& U^i_{s_j-}(dx)+ 1(s_i=s_j)J^{x_i}_{\epsilon}(x)\,dx - K^{i,U}(\{s_j\},dx).
\end{eqnarray}
Recall again that $K^{i,U}(\{s_j\},dx)$ is dominated by $K(\{s_j\},dx)$, which, in turn, by~(\ref{eq:24_11_2})
is absolutely continuous with a density function in $C^+_{\rm rap}\,$. Therefore $K^{i,U}(\{s_j\},dx)$ is also
absolutely continuous with a density function $K^{i,U}(\{s_j\},x), x\in{\mathbb{R}},$ bounded by a function in
$C^+_{\rm rap}$. This together with~(\ref{eq:24_11_4}), our assumptions on $J^{x_i}_{\epsilon}$ and~(\ref{24_11_6})
implies that $U^i_{s_j}(dx)$ is absolutely continuous with bounded density function
\begin{eqnarray}
\label{eq:24_11_7}
U^i_{s_j}(\cdot)\in L^1({\mathbb{R}}).
\end{eqnarray}
For any $\eta\in (0,\epsilon/2)$, by combining \eqref{24_11_6}, \eqref{eq:24_11_4} (with $t=s_j$) and \eqref{eq:24_11_1} (with $t=s_j+\eta$) we have,
\begin{eqnarray}
\label{eq:24_11_8}
U^i(s_j+\eta,\cdot)&=& S_{\eta}U^i(s_j,\cdot) \\
\nonumber
&&\mbox{} +\int_{s_j}^{s_j+\eta}\int_{{\mathbb{R}}} p_{s_j+\eta-s}(\cdot-y) U(s,y)^{\gamma-1/2}U^i(s,y)^{1/2}
W^{i,U}(ds,dy)
\\
\nonumber
&&\mbox{}-\int_{s_j}^{s_j+\eta}\int_{{\mathbb{R}}} p_{s_j+\eta-s}(\cdot-y)
(K^{i,U}(ds,dy)-\delta_{s_j}(ds)K^{i,U}(\{s_j\},dy)), \;\;x\in {\mathbb{R}}. \nonumber
\end{eqnarray}
As $\eta\downarrow 0$, the convergence to zero in $C_{\rm rap}$ of the second and the third terms on the right hand side follows easily as
in the last part of the proof of Lemma~\ref{lem:15_1}(b). By~(\ref{eq:24_11_7}), the first term on the right hand side of~(\ref{eq:24_11_8}) converges to $U^i(s_j,\cdot)$ in $L^1({\mathbb{R}})$ and we are done.
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\paragraph{Proof of Proposition~\ref{prop:4}}
Except for property \eqref{eq:2.3}, Proposition~\ref{prop:4} follows from Corollary~\ref{cor:15_1}, and
Lemmas~\ref{lem:15_2}(a),~\ref{lem:05_7_1},~\ref{lem:05_7_2}. For \eqref{eq:2.3} we note that
\[U^i(t,x)V^j(t,x)\le U(t,x)V(t,x)=w^+(t,x)w^-(t,x)\equiv0.\]
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\paragraph{Proof of Theorem~\ref{thm:1.1}}
As we mentioned in Remark~\ref{rem:04}, since $T>1$ can be chosen arbitrarily large, it is sufficient to prove the
theorem on the time interval $[0,T]$.
Clearly, by Proposition~\ref{prop:4} and the
definition of $\bar{U}^i=U^i+\widetilde{U}^i, \bar{V}^i=V^i+\widetilde{V}^i$, we immediately get that $$(\bar{U}^i,\bar{V}^i)\in
\left( C([0,T]\setminus\mathcal{G}_{\epsilon}, C^+_{\rm rap})\cap D^{\epsilon}([0,T], L^1({\mathbb{R}}))\right)^2,\;i\in\NN_{\epsilon},$$ and satisfy~(\ref{eq:2.5}) and~(\ref{eq:2.8}).
We saw in Section~\ref{sec:setup} that \eqref{0early} and its analogue for $(U^i,V^j)$ follow from the other properties.
Then, by repeating the argument in the proof of Lemma~\ref{lem:05_7_2} and taking into account the absence of the terms
$K^{i,U}, K^{i,V}$ at the right hand side of the equations for $\bar{U}^i, \bar{V}^i$, we immediately get that, in fact,
$(\bar{U}^i,\bar{V}^i)\in
D^{\epsilon}([0,T], C^+_{\rm rap})^2,\;i\in\NN_{\epsilon},$
and
$\bar{U}^i_{s_i+\cdot}\in C([0,T-s_i],C^+_{\rm rap})$, $\bar{V}^i_{t_i+\cdot}\in C([0,T-t_i],C^+_{\rm rap}),\;i\in\NN_{\epsilon}\,,$ and part~(a) of the
theorem follows. Part (b) follows from Lemma~\ref{lem:15_1}(c).
\hfill\quad \vrule height7.5pt width4.17pt depth0pt
\bibliographystyle{alpha}
\section{Introduction}\label{sec:introduction}
Let $S$ be an orientable 2-manifold, closed or with finitely many punctures,
where the genus and the number of punctures are chosen so that $S$ admits a
hyperbolic structure. The {\em modular group} $\Mod{S}$ is the group
$\pi_0(\Diff{S})$, where admissible homeomorphisms preserve orientation.
If a mapping class $[F] \in \Mod{S}$ is {\em pseudo-Anosov} or {\it pA}, then there exist
a representative $F :S\to S$, a pair of invariant transverse measured
foliations $({\mathcal F}^u, \mu_u), ({\mathcal F}^s, \mu_s),$ and a real number $\lambda$, the {\em dilatation} of $[F]$, such that $F$ multiplies the transverse measure $\mu_u$ (resp. $\mu_s$) by $\lambda$ (resp. $\frac1\lambda$). The real number $\lambda$ is an invariant of the conjugacy class of $[F]$ in Mod$(S)$.
{\em Measured train tracks} are a partially combinatorial device that Thurston introduced to encode the essential properties of $({\mathcal F}^u, \mu_u), ({\mathcal F}^s, \mu_s)$. A train track $\tau$ is a branched 1-manifold that is embedded in the surface $S$. It is made up of vertices (called {\em switches}) and smooth edges (called {\em branches}), disjointly embedded
in $S$. See \cite[Section 1.3]{PH}.
Given a pA map $[F]$, there exists a train track $\tau\subset S$ that {\em fills} the surface, i.e., the complement of $\tau$ consists of (possibly punctured) discs, and $\tau$ is left invariant by $[F]$.
Moreover, $\tau$ is equipped with a transverse measure (resp. tangential measure) that is related to the transverse measure $\mu_u$ on $\mathcal F_u$ (resp. $\mu_s$ on $\mathcal F_s$).
In \cite{BH} Bestvina and Handel gave an algorithmic proof of Thurston's
classification theorem for mapping classes.
Their proof shows that, if $[F]$ is a pA map of $S$, then one may construct, algorithmically, a graph $G$, homotopy equivalent to $S$ when $S$ is punctured, and an induced map $f:G \to G,$ that we call a {\it train track map}. For every $r \geq 1$ the restriction of $f^r$ to the interior of every edge is an immersion.
It takes a vertex of $G$ to a vertex, and takes an edge to an edge-path which has no backtracking.
Let $e_1, \dots, e_n$ be the unoriented edges of $G$. Knowing $G$ and $f:G\to G$, they construct a somewhat special measured train track $\tau$, and we will always assume that our $\tau$ comes from their construction.
The {\em transition matrix} $T$ is an $n \times n$ matrix whose entry $T_{i,j}$ is the number of times the edge path $f(e_j)$ passes over $e_i$ in either direction, so that all entries of $T$ are non-negative integers.
If $[F]$ is pA, then $T$ is irreducible and it has a dominant real eigenvalue $\lambda$, the {\em Perron-Frobenius eigenvalue} \cite{gantmacher2}.
The eigenvalue $\lambda$ is the dilatation of $[F]$.
The left (resp.\ right) eigenvectors of $T$ determine tangential (resp.\ transversal) measures on $\tau$, and eventually determine $\mu_s$ (resp. $\mu_u$).
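For readers who wish to experiment, both $\chi(f_*)$ and $\lambda$ are easy to compute numerically once a transition matrix is in hand. The following Python sketch is a minimal illustration; the $3\times 3$ matrix is hypothetical, chosen only for this example and not taken from any map discussed below:
\begin{verbatim}
import numpy as np

# Hypothetical transition matrix: T[i, j] counts how many times
# the edge path f(e_j) passes over e_i, in either direction.
T = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]])

# Coefficients of det(xI - T), highest degree first:
# here [1, -2, -1, 1], i.e. x^3 - 2x^2 - x + 1.
print(np.poly(T))

# The dilatation is the dominant (Perron-Frobenius) eigenvalue.
print(max(np.linalg.eigvals(T), key=abs).real)
\end{verbatim}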
In this paper we study the structure of the characteristic polynomial $\det(xI-T)$ of the transition matrix $T$.
Let $V(G)$ be the vector space of real weights on the edges of the Bestvina-Handel graph $G$.
Let $f_*:V(G) \to V(G)$ be the map induced from the train track map $f:G \to G$.
We denote $\chi(f_*)=\det(xI-T)$.
It is well-known that $\chi(f_*)$ depends on the choice of $f:G\to G$.
The first new result in this paper is the discovery that, after dividing $\chi(f_\ast)$ by a polynomial that is determined by the way that a train-track map acts on certain vertices of $G$, one obtains a quotient polynomial which is a topological invariant of $[F]$. This polynomial arises via an $f_*$-invariant direct sum decomposition of the $\mathbb R$-vector space of transverse measures on $\tau$. It is the characteristic polynomial of the action of $f_*$ on one of the summands. We call it the {\it homology polynomial} of $[F]$, for reasons that will become clear in a moment. We will construct examples of pA maps on a surface of genus 2 which have the same dilatation, but are distinguished by their homology polynomials.
Like $\chi(f_\ast)$, our homology polynomial is the characteristic polynomial of an integer matrix, although (unlike $T$) that matrix is not in general non-negative. We now describe how we found it.
We define and study an $f_*$-invariant subspace $W(G,f) \subset V(G)$. The subspace $W(G,f)$ is chosen so that weights on edges determine a transverse measure on the train track $\tau$ associated to $G$ and $f:G\to G$. We study $f_* |_{W(G,f)}$. See page 427, lines 20- to 13- of \cite{T}, where the mathematics that underlies $f_\ast |_{W(G,f)}$ is described by Thurston. Our first contribution in this paper is to make the structure that Thurston described there concrete and computable, via an enhanced form of the Bestvina-Handel algorithm.
This allows us to prove that the characteristic polynomial $\chi(f_\ast |_{W(G,f)})$ is an invariant of the mapping class $[F]$ in Mod$(S)$. This polynomial $\chi(f_\ast |_{W(G,f)})$ is our homology polynomial. It contains the dilatation of $[F]$ as its largest real root, and so is divisible by the minimum polynomial of $\lambda$.
Its degree depends upon a careful analysis of the action of $f_\ast$ on the vertices of $G$.
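The division producing the homology polynomial is itself mechanical. When working out examples, one can check that the candidate vertex factor divides $\chi(f_\ast)$ exactly; a minimal Python sketch, in which both polynomials are hypothetical and chosen only for illustration, is:
\begin{verbatim}
import numpy as np

# Hypothetical data, coefficients listed highest degree first:
# chi is det(xI - T) and vertex_factor is the polynomial coming
# from the action of the train track map on the vertices of G.
chi = [1.0, -4.0, 4.0, -1.0]     # x^3 - 4x^2 + 4x - 1
vertex_factor = [1.0, -1.0]      # x - 1

quotient, remainder = np.polydiv(chi, vertex_factor)
assert np.allclose(remainder, 0.0)  # the division must be exact
print(quotient)   # [ 1. -3.  1.], i.e. x^2 - 3x + 1
\end{verbatim}
In this toy example the quotient $x^2-3x+1$ is palindromic, and its largest real root $(3+\sqrt{5})/2$ would play the role of the dilatation.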
Investigating the action of $f_\ast$ on $W(G,f)$, we show that $W(G,f)$ supports a skew-symmetric form that is $f_\ast$-invariant. The existence of the symplectic structure was known to Thurston and also was studied by Penner-Harer in \cite{PH}; however, it is unclear to us whether it was known to earlier workers that it could have degeneracies.
See Remark~\ref{rem:completeness}.
We discovered via examples that degeneracies do occur.
In $\S$\ref{sec:2nd polynomial invariant}
we investigate the radical $Z$ of the skew-symmetric form, i.e., the totally degenerate subspace of the skew-symmetric form, and arrive at an $f_\ast$-invariant decomposition of $W(G,f)$ as $Z \oplus (W(G,f)/Z)$. This decomposition leads to a factorization of the homology polynomial as a product of two additional new polynomials, with both factors being invariants of $[F]$. We call the first of these new polynomials,
$\chi(f_\ast |_Z)$, the {\it puncture polynomial}, because it is a cyclotomic polynomial that relates to the way in which the pA map $F$ permutes certain punctures in $S$. As for $\chi(f_\ast |_{W(G,f)/Z})$, our {\it symplectic polynomial}, we know that it arises from the action of $f_\ast$ on the
symplectic space $W(G,f)/Z$, but we do not fully understand it at this writing. Sometimes the symplectic polynomial is irreducible, in which case it is the minimum polynomial of $\lambda$. However, we will give examples to show that it can be reducible, and even an example where it is
symplectically reducible. Thus its relationship to the minimum polynomial of $\lambda$ is not completely clear at this writing.
Summarizing, we will prove:
\begin{theorem}\label{thm:summarize}
Let $[F]$ be a pA mapping class in {\rm Mod}$(S)$, with Bestvina-Handel train track map $f:G\to G$ and transition matrix $T$.
\begin{enumerate}
\item
The characteristic polynomial $\chi(f_\ast)$ of $T$ has a divisor, the homology polynomial $\chi(f_\ast |_{W(G,f)})$, which is an invariant of $[F]$. It contains $\lambda$ as its largest real root, and is associated to an induced action of $F_*$ on $H_1(X, \mathbb R)$, where $X$ is the surface $S$ when $\tau$ is orientable and its orientation cover $\tilde S$ when $\tau$ is non-orientable.
\item
The homology polynomial $\chi(f_\ast |_{W(G,f)})$ decomposes as a product of two polynomials $\chi(f_\ast |_Z)\cdot \chi(f_\ast |_{W(G,f)/Z})$, each a topological invariant of $[F]$.
\begin{enumerate}
\item
The first factor, the puncture polynomial $\chi(f_\ast |_Z)$, records the action of $f_*$ on the radical of a skew-symmetric form on $W(G,f)$. It has topological meaning related to the way in which $F$ permutes certain punctures in the surface $S$.
It is a palindromic or anti-palindromic polynomial, and all of its roots are on the unit circle.
\item
The second factor, the symplectic polynomial $\chi(f_\ast |_{W(G,f)/Z})$, records the action of $f_\ast$ on the non-degenerate symplectic space $W(G,f)/Z$. It contains $\lambda$ as its largest real root. It is palindromic. If irreducible, it is the minimum polynomial of $\lambda$, but it is not always irreducible.
\end{enumerate}
\item
The homology polynomial, being a product of the puncture and symplectic polynomials, is palindromic or anti-palindromic.
\end{enumerate}
\end{theorem}
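Palindromicity, which appears throughout the theorem just stated, is easy to test mechanically: a polynomial $p(x)=\sum_{k=0}^n c_k x^k$ of degree $n$ is palindromic if $c_k=c_{n-k}$ for all $k$, and anti-palindromic if $c_k=-c_{n-k}$ for all $k$. A small Python sketch, with sample polynomials chosen only for illustration:
\begin{verbatim}
def is_palindromic(coeffs):
    # coeffs lists c_n, ..., c_0 (highest degree first)
    return coeffs == coeffs[::-1]

def is_antipalindromic(coeffs):
    return coeffs == [-c for c in coeffs[::-1]]

print(is_palindromic([1, -3, 1]))      # x^2 - 3x + 1  -> True
print(is_antipalindromic([1, 0, -1]))  # x^2 - 1       -> True
\end{verbatim}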
The proof of Theorem~\ref{thm:summarize} can be found in $\S$\ref{sec:1st polynomial invariant} and $\S$\ref{sec:2nd polynomial invariant} below. In $\S$\ref{sec:applications} we give several applications, and prove that our three invariants behave nicely when the defining map $F$ is replaced by a power $F^k$. The paper ends, in $\S$\ref{sec:examples}, with a set of examples which give concrete meaning to our ideas. The first such example, Example~\ref{ex:filling-curves}, defines three distinct maps $F_1,F_2,F_3$ on a surface of genus 2, chosen so that all three have the same dilatation. Two of the three pairs are distinguished by any one of our three invariants. The third map was chosen so that it probably is not conjugate to the other two; however, our invariants could not prove that.
{\bf Acknowledgments}
The authors would like to thank Jeffrey Carlson for his help with the proof
of Corollary~\ref{cor:homology polynomial is invariant}, which was stated as a conjecture in an earlier
draft. They thank Dan Margalit for suggesting a way to find examples of non-conjugate maps with the same dilatation. They would also like to thank Mladen Bestvina, Nathan Dunfield, Jordan
Ellenberg, Ji-young Ham, Eriko Hironaka, Eiko Kin, Chris Leininger,
Robert Penner, and Jean-Luc Thiffeault for their generosity in
sharing their expertise and their patience in responding to questions. They also thank the referee who digested the content of an earlier version of this paper and told us why it needed the major revision that is given in these pages.
\section{Proof of Part (1) of Theorem~\ref{thm:summarize}} \label{sec:1st polynomial invariant}
We begin our work in $\S$\ref{subsec:preliminaries} by recalling some well-known facts from \cite{BH} that relate to the construction of the train track $\tau$ by adding infinitesimal edges to the graph $G$. After that, in $\S$\ref{subsec:W(G,f)}, we introduce the space $W(G, f)$ of transverse measures, which plays a fundamental role throughout this paper. Rather easily, we will be able to prove our first decomposition and factorization theorem. Thus at the end of $\S$\ref{subsec:W(G,f)} we have our homology polynomial in hand, but we do not know its meaning, have not proved it is an invariant, and don't know how to compute it. In $\S$\ref{subsec:basis for W(G,f)} we prepare for the work ahead by constructing a basis for $W(G,f)$.
We also learn how to find the matrix for the action of $f_\ast$ on the basis. With that in hand, in $\S$\ref{subsec:homology poly} we identify the vector space $W(G,f)$ with a homology space. We will be able to prove Corollary~\ref{cor:homology polynomial is invariant}, which asserts that the homology polynomial $\chi(f_\ast |_ {W(G,f)})$ is a topological invariant of the conjugacy class of our pA map $[F] $ in ${\rm Mod}(S)$.
(Later in $\S$\ref{sec:examples}, we will use it to distinguish examples of pA maps acting on the same surface and having the same dilatation.)
\subsection{Preliminaries}\label{subsec:preliminaries}
It will be assumed that the reader is familiar with the basic ideas of the algorithm of Bestvina-Handel \cite{BH}. The mapping class $[F]$ will always be pA. Further, assume that we are given the graph $G\subset S$, homotopy equivalent to $S$, and a train track map $f:G \to G$. We note that if $S$ is closed, the action of $[F]$ always has periodic points with finite order, and the removal of a periodic orbit will not affect our results; therefore, without loss of generality, we may assume that $S$ is finitely punctured.
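Since everything below is driven by the combinatorial data of $f:G\to G$, we note in passing that the transition matrix $T$ of $\S$\ref{sec:introduction} can be read off from the edge map alone. The following Python sketch is a minimal illustration; the three-edge map is hypothetical and is not taken from any example in this paper:
\begin{verbatim}
from collections import Counter

# Hypothetical edge map, one string per edge; an upper-case
# letter denotes the reversed edge, which counts the same for T.
edge_map = {'a': 'b', 'b': 'c', 'c': 'cAB'}

edges = sorted(edge_map)
index = {e: i for i, e in enumerate(edges)}

# T[i][j] = number of times f(e_j) passes over e_i,
# in either direction.
T = [[0] * len(edges) for _ in edges]
for j, e in enumerate(edges):
    for letter, k in Counter(edge_map[e].lower()).items():
        T[index[letter]][j] = k

print(T)  # [[0, 0, 1], [1, 0, 1], [0, 1, 1]]
\end{verbatim}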
Following ideas in \cite{BH} we construct a train track $\tau$ from $f:G\to G$ by equipping the vertices of $G$ with additional structure:
Let $e_1, e_2 \subset G$ be two (non-oriented) edges originating at the same vertex $v$.
Edges $e_1$ and $e_2$ belong to the same {\em gate} at $v$ if for some $r>0$, the edge-paths $f^r(e_1)$ and $f^r(e_2)$ have a nontrivial common initial segment.
If $e_1$ and $e_2$ belong to different gates at $v$ and there exists some exponent $r>0$ and an edge $e$ so that $f^r(e)$ contains
$e_2e_1$ or $e_1e_2$
as a subpath, then we connect the gates associated to $e_1, e_2$ with an {\em infinitesimal edge.}
In this way, a vertex $v \in G$ with $k$ gates becomes an infinitesimal $k$-gon in the train track $\tau$.
While this $k$-gon may be missing one side, the infinitesimal edges must connect all the gates at each vertex \cite[Section~3.3]{BH}.
In addition to the infinitesimal edges, $\tau$ also has {\em real edges} corresponding to the edges of $G$.
Hence, a {\em branch} of $\tau$, in the sense of Penner-Harer \cite{PH}, is either an infinitesimal edge or a real edge.
It is natural to single out the following properties of the vertices of $G$:
\begin{definition}{\bf (Vertex types)}\label{def:vertex-type}
See Figure~\ref{fig:vertex-types}.
A vertex of $G$ is {\em odd} (resp.\ {\em even}) if its corresponding infinitesimal complete polygon in $\tau$ has an odd (resp.\ {\em even}) number of sides, and it is {\em partial} if its infinitesimal edges form a polygon in $\tau$ with one side missing.
Partial vertices include the special case where $v$ has only
two gates connected by one infinitesimal edge; we call such vertices {\em evanescent}. The symbol $w_i$ will denote the weight of the $i$-th gate, and $x_i$ will denote the weight of the $i$-th infinitesimal edge.
\end{definition}
\begin{figure}[htpb!]
\centering
\psfrag{w0}{$w_0$}
\psfrag{w1}{$w_1$}
\psfrag{w2}{$w_2$}
\psfrag{w3}{$w_3$}
\psfrag{w4}{$w_4$}
\psfrag{x0}{$x_0$}
\psfrag{x1}{$x_1$}
\psfrag{x2}{$x_2$}
\psfrag{x3}{$x_3$}
\psfrag{x4}{$x_4$}
\psfrag{A}{\small odd}
\psfrag{B}{\small even}
\psfrag{C}{\small partial}
\psfrag{D}{\small evanescent}
\subfloat{
\includegraphics[width=0.2\textwidth]{oddvertex}
\label{fig:oddvertex}}
\hfill
\subfloat{
\includegraphics[width=0.2\textwidth]{evenvertex}
\label{fig:evenvertex}}
\hfill
\subfloat{
\includegraphics[width=0.2\textwidth]{partialvertex}
\label{fig:partialvertex}}
\hfill
\subfloat{
\includegraphics[width=0.2\textwidth]{evanescentvertex}
\label{fig:evanescentvertex}}
\caption{Shaded disks enclose infinitesimal (partial) polygons in $\tau$ that correspond to the vertices of $G$.}\label{fig:vertex-types}
\end{figure}
\begin{remark}
In $\S$\ref{sec:examples} the reader can find several examples illustrating the graph $G$ with infinitesimal polygons associated to particular pA mapping classes.
In those illustrations the vertices of the graphs have been expanded to shaded discs which show the structure at the vertices.
In the sketch of a train track that XTrain generates, the branches at each gate do not appear to be tangent to each other.
This was done for ease in drawing the required figures.
The reader should keep in mind that all the branches at each gate are tangent to each other.
\end{remark}
We recall properties of non-evanescent vertices that are preserved under a train track map:
\begin{lem}{\em\cite[Proposition~3.3.3]{BH}}
\label{prop333}
For $k\geq 3$, let $O_k$ be the set of odd vertices with $k$ gates,
$E_k$ the set of even vertices with $k$ gates, and $P_k$ the set of
partial vertices with $k$ gates. Then the restriction of $f:G\to G$ to each of these sets is a permutation of the set.
Moreover, for each $($non-evanescent$)$ vertex $v$ with at least three gates, $f$ induces a bijection between the
gates at $v$ and the gates at $f(v)$ that preserves the cyclic order.
\end{lem}
\begin{remark}
The number of evanescent vertices of a train track representative is {\em not} an invariant of the underlying mapping class. Examples exist where a mapping class has one train track representative with evanescent vertices and another without.
\end{remark}
\begin{definition}{\bf (Orientable and non-orientable train tracks)} \label{def: tau orientable}
Choose an orientation on each branch of a train track $\tau$. A train track is {\em orientable} if we can orient all the branches so that, at every switch, the angle between each incoming branch and each outgoing branch is $\pi$. For example, see the train tracks $\tau_1$ and $\tau_2$ that are given in Figure~\ref{fig:filling-curves-tt} of $\S$\ref{sec:examples}. After adding the infinitesimal edges, one sees that $\tau_1$ is orientable, but $\tau_2$ is not.
\end{definition}
Here is an easy observation.
\begin{lem}\label{vertextypelem}
If $G$ has an odd vertex, then the corresponding train track $\tau$ is non-orientable. This condition is sufficient, but not necessary.
\end{lem}
\begin{proof}
If $v$ is an odd vertex, then there exists no consistent orientation for the corresponding infinitesimal polygon in $\tau$. The example in Figure~\ref{fig:filling-curves-tt} shows that the condition is not necessary.
\end{proof}
\begin{remark}
We do not know any immediate visual criterion beyond the one in Lemma~\ref{vertextypelem} for detecting non-orientability.
The two train tracks $\tau_1,\tau_2$ in Figure~\ref{fig:filling-curves-tt} of this paper both have 2 vertices, one even and one partial, and $\tau_1$ is orientable whereas $\tau_2$ is non-orientable.
If all the vertices are partial, then the train track may be either orientable (see \cite[Example~4.2]{pbexp}) or non-orientable (see Example~\ref{ex:penner}); also, a non-orientable train track may have odd, even, and partial vertices at the same time (see Example~\ref{ex:evenodd}).
\end{remark}
\subsection{The space $W(G,f)$ and the first decomposition} \label{subsec:W(G,f)}
Given a graph $G$ of $n$ edges, one always has an $\mathbb{R}$-vector space $V(G) \simeq \mathbb{R}^n$ of weights on $G$.
Our goal in this section is to define a subspace $W(G,f) \subset V(G)$ of `transverse measures on $G$'. This space is the natural projection of the measured train track $\tau$ to a space of measures on $G$.
It will play a fundamental role in our work.
In our setting, all train tracks are {\em bi-recurrent}, that is, recurrent and transversely recurrent, cf. \cite[p.20]{PH}.
To define our space $W(G,f)$, we apply Penner-Harer's work described in \cite[Section 3.2]{PH}, where bi-recurrence is assumed.
Let $V(\tau)\cong \mathbb{R}^{n+n'}$, where $n$ (resp.\ $n'$) is the number of real (resp.\ infinitesimal) edges of the train track $\tau$.
Penner-Harer defined a subspace $W(\tau) \subset V(\tau)$ of assignments of (possibly negative) real numbers, one to each branch of $\tau$, which satisfy the {\em switch conditions.} That is, if $\eta \in W(\tau)$ then at each switch of $\tau$, the sum of the weights on the incoming branches equals the sum on the outgoing branches.
For example, in Figure~\ref{fig:switch}-(A), $\eta(a) = \eta(b_1)+ \eta(b_2).$
\begin{figure}[htpb!]
\subfloat[]{
\psfrag{a}{$a$}
\psfrag{b}{$b_2$}
\psfrag{c}{$b_1$}
\includegraphics[width=0.2\textwidth]{switch}
}
\hspace{3cm}
\subfloat[]{
\psfrag{a}{$b_1$}
\psfrag{b}{$b_2$}
\psfrag{c}{$b_3$}
\psfrag{d}{$b_4$}
\psfrag{e}{$a$}
\includegraphics[height=2cm]{valenceshift-copy}
\label{fig:valences}}
\caption{(A) A switch of valence $3$. (B) A switch of valence $5$.}\label{fig:switch}
\end{figure}
\begin{definition}
There is a natural surjection $\pi:\tau\to G$ which is defined by collapsing all the infinitesimal (partial) polygons to their associated vertices in $G$ and taking each real edge in $\tau$ to the corresponding edge in $G$.
Let $W(G, f) = \pi_\ast(W(\tau))$. That is, $W(G, f)\subset V(G)$ is the subspace whose elements admit an extension to a (possibly negative) transverse measure on $\tau$.
The name $W(G,f)$ has been chosen to reflect the fact that our subspace depends not only on $G$, but also on the action $f:G\to G$.
\end{definition}
Here is a useful criterion for an element of $V(G)$ to be in $W(G,f)$:
\begin{lem}\label{switchlem}
An element $\eta\in V(G)$ belongs to $W(G, f)$ if and only if for each non-odd vertex the alternating sum of the weights at the incident gates is zero.
\end{lem}
\begin{proof}
Assume that $\eta\in V(G)$ belongs to $W(G, f)$.
Let $v \in G$ be a vertex with $k$ gates, i.e., $v$ corresponds to an infinitesimal $k$-gon, possibly partial, in the train track $\tau$.
Let $w_0, \ldots, w_{k-1} \in \mathbb{R}$ be the weights of $\eta$ at the incident gates of $v$, and let $x_0, \ldots, x_{k-1} \in \mathbb{R}$ (or $x_0, \ldots, x_{k-2}$ if $v$ is a partial vertex) be the weights of the infinitesimal edges.
See Figure~\ref{fig:vertex-types}.
The weights on the infinitesimal edges may turn out to be negative real numbers.
We determine when an assignment of weights to the real edges admits an extension to the infinitesimal edges that satisfies the switch conditions.
If $v$ is odd or even with $k$ gates, then the switch condition imposes:
\[
\begin{array}{ccccc}
x_{k-1} & + & x_0 & = & w_0,\\
x_0 & + & x_1 & = & w_1,\\
x_1 & + & x_2 & = & w_2,\\
&& \vdots &&\\
x_{k-2} & + & x_{k-1} & = & w_{k-1}.\\
\end{array}
\]
If $k$ is odd, this system of equations has a unique solution, regardless of the weights $w_i$.
If $k$ is even, the system is consistent if and only if $\sum_{i=0}^{k-1}(-1)^i w_i = 0$.
If $v$ is partial with $k$ gates, then the switch condition imposes:
\[
\begin{array}{ccccc}
& & x_0 & = & w_0,\\
x_0 & + & x_1 & = & w_1,\\
x_1 & + & x_2 & = & w_2,\\
&& \vdots &&\\
x_{k-3} & + & x_{k-2} & = & w_{k-2},\\
x_{k-2} & & & = & w_{k-1}.\\
\end{array}
\]
This system has a unique solution if and only if $\sum_{i=0}^{k-1} (-1)^i w_i = 0$.
\end{proof}
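For readers who wish to experiment, the dichotomy in the proof above is easy to check numerically. The following is a minimal sketch; the function name and the sample gate weights are hypothetical, and only the cyclic system $x_{i-1} + x_i = w_i$ is taken from the displayed equations.
\begin{verbatim}
# A minimal numerical sketch of the proof above: solve the cyclic
# system x_{i-1} + x_i = w_i at an odd or even vertex with k gates.
import numpy as np

def infinitesimal_weights(w):
    """Return weights x_0, ..., x_{k-1} on the infinitesimal edges,
    or None if the system is inconsistent (k even and the alternating
    sum of the gate weights w is nonzero)."""
    k = len(w)
    M = np.zeros((k, k))
    for i in range(k):
        M[i, (i - 1) % k] = 1   # coefficient of x_{i-1}
        M[i, i] = 1             # coefficient of x_i
    x = np.linalg.lstsq(M, np.array(w, dtype=float), rcond=None)[0]
    return x if np.allclose(M @ x, w) else None

print(infinitesimal_weights([3, 1, 2]))     # k odd: always solvable
print(infinitesimal_weights([1, 2, 3, 4]))  # k even, 1-2+3-4 != 0: None
print(infinitesimal_weights([1, 2, 3, 2]))  # k even, 1-2+3-2 = 0: solvable
\end{verbatim}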
\begin{lem}\label{eq:inclusion}
$W(G,f)$ is an invariant subspace of $V(G)$ under $f_\ast$, i.e.,
$f_\ast (W(G,f)) \subseteq W(G,f).$
\end{lem}
\begin{proof}
Suppose $v$ is a non-odd vertex; by Lemma~\ref{prop333} it is mapped to a non-odd vertex $f(v)$.
Let $\eta\in W(G,f)$.
By Lemma~\ref{switchlem}, the alternating weight sum of $\eta$ at the incident gates of $v$ is $0$.
Lemma~\ref{prop333} implies that all the weights of $\eta$ at the infinitesimal edges for $v$ are inherited by the weights of $f_\ast \eta$ at the infinitesimal edges for $f(v)$.
In addition, we account for an edge $e \subset G$ whose image $f(e)=e_0e_1\cdots e_k$ passes through the vertex $f(v)$.
Assume that $\eta$ has weight $w=\eta(e)$ at the edge $e$.
If a sub-edge-path $e_ie_{i+1}$ passes through $f(v)$ then edges $e_i$ and $e_{i+1}$ belong to adjacent gates at $f(v)$ and the contribution of $e_i e_{i+1}$ to the alternating sum of weights of the gates at $f(v)$ is $w-w=0$.
Therefore, the alternating weight sum for $f_\ast \eta$ at the incident gates for $f(v)$ is $0$.
By Lemma~\ref{switchlem}, $f_\ast \eta \in W(G,f)$.
\end{proof}
The dimension of the vector space $W(G,f)$ can be computed combinatorially by inspecting a train track associated to the pair $(G,f)$:
\begin{lemma}\label{lem:dimW(G,f)}
$(1)$
If $\tau$ is orientable, then
\begin{eqnarray*}
\dim W(G,f)&=& \#( \mbox{edges of }G) - \#(\mbox{vertices of }G) +1, \\
W(G,f) &\cong& Z_1(G; \mathbb{R}) \cong H_1(G; \mathbb{R}) \cong H_1(S; \mathbb{R}).
\end{eqnarray*}
In particular, the switch conditions are precisely the cycle conditions.
$(2)$
If $\tau$ is non-orientable, then
$$\dim W(G,f)= \#( \mbox{edges of }G) - \#(\mbox{non-odd vertices of }G).$$
\end{lemma}
\begin{proof}
Assume that $\tau$ is orientable.
By Lemma~\ref{vertextypelem}, $G$ has no odd vertices.
For $\eta \in W(G,f)$ and a non-odd vertex $v$, let
$$w_i^v:= w_i^v(\eta)= \mbox{ the weight of } \eta \mbox{ at the } i^{\rm th} \mbox{ gate of the vertex }v.$$
By Lemma~\ref{switchlem}, $\sum_{i}(-1)^i w^v_i = 0$.
We number the gates at $v$ so that the orientation of the real edges at the $2i$-th (resp.\ $(2i+1)$-th) gate is inward (resp.\ outward).
If $G$ has $m$ vertices, $v_1, \cdots, v_m$, then we have a system of
$m$ equations:
\begin{eqnarray*}
w^{v_1}_0 + w^{v_1}_2 + w^{v_1}_4 + \cdots &=&
w^{v_1}_1 + w^{v_1}_3 + w^{v_1}_5 + \cdots, \\
w^{v_2}_0 + w^{v_2}_2 + w^{v_2}_4 + \cdots &=&
w^{v_2}_1 + w^{v_2}_3 + w^{v_2}_5 + \cdots, \\
& \vdots & \\
w^{v_m}_0 + w^{v_m}_2 + w^{v_m}_4 + \cdots &=&
w^{v_m}_1 + w^{v_m}_3 + w^{v_m}_5 + \cdots.
\end{eqnarray*}
Since $\tau$ is oriented, each edge of $G$ contributes its weight exactly once to the left hand sides (at its inward end) and exactly once to the right hand sides (at its outward end), so the sum of the left hand sides is equal to the sum of the right hand sides.
Hence the last equation follows from the other $m-1$ equations, i.e., the switch conditions are {\em not} independent.
Therefore,
\begin{eqnarray*}
\dim W(G,f) &=& \dim V(G) - (m-1) \\
&=& \#(\mbox{edges of } G) - \#(\mbox{vertices of } G) +1.
\end{eqnarray*}
With respect to the orientations of the edges of $G$, let $\partial : C_1(G; \mathbb{R}) \to C_0(G; \mathbb{R})$ be the boundary map of the chain complex.
There is a natural isomorphism $V(G) \cong C_1(G; \mathbb{R})$.
If $\gamma \in Z_1(G; \mathbb{R})$ is a cycle, then the cycle condition $\partial \gamma=0$ is equivalent to the alternating sum condition $\sum_i (-1)^i w_i^v(\gamma) = 0$ at each vertex $v\in G$.
By Lemma~\ref{switchlem} we obtain $W(G, f)\cong Z_1(G; \mathbb{R})$.
In fact, the Euler characteristic of the $s$-punctured genus-$g$ surface $S$ is
$\chi(S)=2-2g-s = \#(\mbox{vertices of } G) - \#(\mbox{edges of } G).$
Thus $\dim W(G, f)=2g+s-1=\dim H_1(S; \mathbb{R})$.
In the non-orientable case, the switch conditions are satisfied if and only if the alternating sum of the weights of gates around each even or partial vertex is zero (Lemma~\ref{switchlem}).
Moreover, all these conditions are independent of each other \cite[Lemma 2.1.1]{PH}, so that the number of independent constraints is the number of non-odd vertices. Since $\dim V(G)$ is the
number of edges, statement (2) follows.
\end{proof}
Now that we know $V(G)$ can be identified with the $1$-chains $C_1(G)$ in the orientable case, we can extend the boundary map $\partial : C_1(G) \to C_0(G)$ to the following map $\delta$ on $V(G)$:
\begin{definition}{\bf (The map $\delta$)}\label{def of delta}
Assume that the graph $G$ has $m$ non-odd vertices, $v_1, \cdots, v_m$.
Define a linear map $\delta : V(G) \to \mathbb{R}^m,$ $\eta \mapsto \delta(\eta)$ such that
$$(k^{\rm th} \mbox{ entry of the vector } \delta(\eta))=
\sum_i (-1)^i w_i^{v_k} (\eta),$$
the alternating sum of the weights of $\eta$ at the gates incident to the vertex $v_k$. The signs in these alternating sums are chosen as follows:
\begin{itemize}
\item
If $\tau$ is oriented, then we determine the sign of each gate to be compatible with the orientation of the real edges of $\tau$.
The alternating sum is defined without ambiguity.
For example, in Figure~\ref{fig:filling-curves-tt}, left sketch, a plus (resp.\ minus) sign may be assigned at each gate, according as the real edges are oriented toward (resp.\ away from) the gate.
\item
If $\tau$ is non-orientable, we assign alternating signs to the incident gates for each non-odd vertex. Since $\tau$ is non-orientable, these assignments are only local; they cannot be made globally consistent.
Clearly, the alternating sum depends on the choice of the sign assignment.
\end{itemize}
\end{definition}
\begin{lem}\label{lem:mapdelta}
In both the orientable and non-orientable cases $W(G, f) \cong \ker\delta$.
Moreover, if $m$ is the number of non-odd
vertices of $G$, then
$$
\dim ({\rm im}\,\delta) =
\left\{
\begin{array}{ll}
m - 1,& \mbox{if }\tau \mbox{ is orientable,} \\
m, & \mbox{if }\tau \mbox{ is non-orientable.}
\end{array}
\right.
$$
\end{lem}
\begin{proof}
Lemma~\ref{switchlem} immediately implies that $W(G,f) \cong \ker\delta.$ The dimension count follows from Lemma~\ref{lem:dimW(G,f)}.
\end{proof}
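In computations it is convenient to realize $W(G,f) = \ker\delta$ as the null space of an explicit matrix. A minimal sketch with {\tt sympy} follows; the matrix $D$ below is a hypothetical illustration (two non-odd vertices, five edges), not data from any example in this paper.
\begin{verbatim}
# Sketch: W(G,f) = ker(delta) as a null space.  Rows of D correspond to
# the non-odd vertices, columns to the edges of G; an entry is the signed
# count of gate-occurrences of the edge at the vertex.  D is hypothetical.
import sympy as sp

D = sp.Matrix([[ 1, -1, 1, -1, 0],
               [-1,  1, 0,  0, 1]])
W_basis = D.nullspace()   # a basis of ker(delta) = W(G,f)
print(len(W_basis))       # dim W(G,f) = dim V(G) - rank(D)
\end{verbatim}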
Finally we prove:
\begin{theorem}\label{thm:1st decomposition}
{\em({\bf First Decomposition Theorem})}
\begin{eqnarray}
V(G) & \cong & W(G,f) \oplus {\rm im}\, \delta. \label{direct sum} \\
\chi(f_\ast) & = & \chi(f_\ast|_{W(G,f)})\chi(f_\ast|_{{\rm im}\,\delta}). \label{polynomial}
\end{eqnarray}
The degree of $\chi(f_\ast|_{W(G,f)})$ (resp. $\chi(f_\ast|_{{\rm im}\,\delta})$) is the dimension of
$W(G,f)$ (resp. ${\rm im}\,\delta$), as given in Lemma~\ref{lem:dimW(G,f)} (resp. Lemma~\ref{lem:mapdelta}).
\end{theorem}
\begin{proof}
From Lemma~\ref{lem:mapdelta}, identifying ${\rm im}\, \delta$ with the quotient $V(G)/W(G,f)$ we obtain (\ref{direct sum}).
By the same argument as in the proof of Lemma~\ref{eq:inclusion}, we obtain $f_\ast ({\rm im}\,\delta) \subset {\rm im}\,\delta$.
This along with $f_\ast (W(G,f)) \subset W(G,f)$ yields (\ref{polynomial}).
\end{proof}
\subsection{A basis for $W(G,f)$ and the matrix for the action of $f_\ast$ on $W(G,f)$} \label{subsec:basis for W(G,f)}
The goal of this section is to find a basis for $W(G,f) \subset V(G)$ and learn how to determine the action of $f_\ast$ on the basis.
With regard to the basis, it will be convenient to consider three cases separately: the cases when $\tau$ is orientable, when $\tau$ is non-orientable and has odd vertices, and when $\tau$ is non-orientable but has no odd vertices. That is accomplished in $\S$\ref{sssec:basis1}, \ref{sssec:basis2} and \ref{sssec:basis3}. Having the basis in hand, in $\S$\ref{sssec:basis4} we learn how to compute the action of $f_\ast$ on the basis elements.
\subsubsection{Basis for $W(G,f)$, orientable case}\label{sssec:basis1}
If the train track $\tau$ associated to $G$ is orientable, we choose an orientation for $\tau$, thereby inducing an orientation on the edges of $G$, and choose a maximal spanning tree $Y \subset G$. Every vertex of $G$ will be in $Y$. We consider all edges $e$ of $G$ that are not in $Y$, and construct a set of vectors $\{\eta_e \in V(G)\}$ and prove that the constructed set is a basis for $W(G,f) \subset V(G)$.
For each $e\in G\setminus Y$, find the unique shortest path in $Y$ joining the endpoints of $e$.
The union of this path and $e$ forms an oriented loop $L_e$ in which the edge $e$ appears exactly once, the orientation being determined by that on $e$. If the orientation of edge $e' \subset G$ agrees (resp. disagrees) with the orientation of $e'\subset L_e$, then we assign a weight of $1$ (resp. $-1$) to $e'$. In particular, $e\subset L_e$ has weight 1.
The edges not in $L_e$ are assigned weight $0$. In this way we obtain a vector $\eta_e \in V(G)$ whose entries are the assigned weights.
By construction, $\eta_e$ satisfies the criterion for a transverse measure, described in Lemma~\ref{switchlem}, therefore $\eta_e$ is an element of $W(G,f)$.
We now show that $\{\eta_{e_1},\dots,\eta_{e_l}\}$ is a basis for $W(G,f)$ where $e_1, \cdots, e_l$ are the edges of $G\setminus Y$
and $l = \#( \mbox{edges of }G) - \#(\mbox{vertices of }G) +1.$
Note that if $e_i,e_j\in (G\setminus Y)$ with $i \not= j$, then $e_j\notin L_{e_i}$. Therefore $\eta_{e_i}(e_j) = 1$ if and only if $i=j$, because all edges not in $ L_{e_i}$ have weight $0$, i.e., the vectors $\eta_{e_1}, \cdots, \eta_{e_l}$ are linearly independent. Consulting Lemma~\ref{lem:dimW(G,f)} we see that we have the right number of linearly independent elements, so we have found a basis for $W(G,f)$.
\begin{example}\label{ex-of-basis-orientable}
Go to $\S$\ref{sec:examples} below and
see Example~\ref{ex:filling-curves} and its accompanying Figure~\ref{fig:filling-curves-tt}-(1). The train track for this example is orientable. The space $V(G)$ has dimension 5. Order the edges of $G$ as $a,b,c,d,e$. The edge $a$ and the vertices $v_0,v_1$ form a maximal tree $Y \subset G$, with edges $b,c,d,e \notin Y$, so that $W(G,f)$ has dimension 4. We have the loops $L_b = ab$; $L_c= \overline ac$; $L_d = \overline ad$; $L_e = ae$, so that $W(G,f)$ has basis
$\eta_b = (1,1,0,0,0)'$; $\eta_c = (-1,0,1,0,0)'$; $\eta_d = (-1,0,0,1,0)'$; $\eta_e=(1,0,0,0,1)'$, where `prime' means transpose.
\end{example}
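The construction above is easy to automate. The following sketch uses {\tt networkx}; the oriented graph (a square with one diagonal) is hypothetical, not the graph of the preceding example. Since the switch conditions reduce to the cycle conditions in the orientable case (Lemma~\ref{lem:dimW(G,f)}-(1)), the signed fundamental-cycle vectors it produces are exactly the vectors $\eta_e$ of the construction.
\begin{verbatim}
# Sketch of the basis construction, orientable case: choose a spanning
# tree Y and, for each oriented edge e = (tail, head) outside Y, record
# the signed indicator vector of the fundamental loop L_e.  The graph
# below (a square with a diagonal) is hypothetical.
import networkx as nx

edges = [("u", "v"), ("v", "w"), ("w", "x"), ("x", "u"), ("u", "w")]
G = nx.Graph(edges)
Y = nx.minimum_spanning_tree(G)

basis = []
for (tail, head) in edges:
    if Y.has_edge(tail, head):
        continue
    # Close up e with the unique tree path from its head back to its tail.
    path = nx.shortest_path(Y, head, tail)
    loop = [(tail, head)] + list(zip(path, path[1:]))
    eta = [0] * len(edges)
    for (a, b) in loop:
        if (a, b) in edges:   # traversed along its orientation: +1
            eta[edges.index((a, b))] += 1
        else:                 # traversed against its orientation: -1
            eta[edges.index((b, a))] -= 1
    basis.append(eta)

print(basis)   # one signed cycle vector eta_e per non-tree edge e
\end{verbatim}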
\subsubsection{Basis for $W(G,f)$, non-orientable case with odd vertices}\label{sssec:basis2}
If $G$ has an odd vertex $v_0$ then choose a maximal spanning tree $Y \subset G$.
Let $V$ be the set of vertices of $G$.
Define a height function $h: V \to \mathbb{N}\cup \{0\}$ by
$$h(v)= (\mbox{the distance between $v$ and $v_0$ in }Y).$$
We obtain a forest $Y' \subset Y$ by removing from $Y$ all the edges each of which connects an odd vertex and the adjacent vertex of smaller height.
See Figure~\ref{forest}.
\begin{figure}[htpb!]
\begin{center}
\psfrag{v0}{$v_0$}
\psfrag{v1}{$v_1$}
\psfrag{v2}{$v_2$}
\psfrag{v3}{$v_3$}
\psfrag{v4}{$v_4$}
\includegraphics[width=.9\textwidth]{forest}
\end{center}
\caption{A tree $Y$ (left) and a forest $Y'$ (right).
Hollow dots $v_0, \cdots, v_4$, are odd vertices. Black dots are non-odd vertices.}
\label{forest}
\end{figure}
The forest $Y'$ contains all the vertices of $G$ with exactly one odd vertex in each connected component.
Now, let $e$ be an edge that is not in $Y'.$
We can find two (possibly empty) shortest paths in $Y'$ each of which connects an endpoint of $e$ to an odd vertex.
The union of $e$ and the two paths forms an arc, which we denote by $L_e$.
(If both of the endpoints of $e$ belong to the same tree component of $Y'$, then $L_e$ becomes a loop containing one odd vertex.)
Assign a weight of $1$ to $e$ and weight of $0$ to the edges that are not in $L_e$.
To the other edges in $L_e$, we assign weights of $\pm 1$ so that at each non-odd vertex the criterion of transverse measure (Lemma~\ref{switchlem}) is satisfied.
This defines an element $\eta_e$ of $W(G,f)$.
Let $e_1, \cdots, e_l$ be the edges of $G \setminus Y'$. By the construction, we have $\eta_{e_i}(e_j) = 1$ if and only if $i=j$, so the vectors $\eta_{e_1}, \cdots, \eta_{e_l}$ are linearly independent.
Since $l = \#(\mbox{edges of }G) - \#(\mbox{non-odd vertices of }G),$
Lemma~\ref{lem:dimW(G,f)} tells us that $l = \dim W(G,f)$, hence $\{\eta_{e_1}, \cdots, \eta_{e_l}\}$ is a basis of $W(G,f)$.
\begin{example}
See Example~\ref{ex:evenodd}.
The graph $G$ has two odd vertices $v_0$ and $v_4$.
We choose a maximal tree $Y$ whose edges are $a, c, d, j$. This gives us a forest $Y'$ with two components.
One consists of the single vertex $v_4$, and the other consists of vertices $v_0, v_1, v_2, v_3,$ and edges $a, c, j$.
The graph $G$ has 10 edges $a, b, \dots, j$, with $a, c, j$ in $Y'$, so the vector space $W(G,f)$ has dimension $7$. The edge $h$ is not in $Y'$, and its endpoints are $v_1$ and $v_4$; the associated arc is $L_h = ch$.
Since $c$ and $h$ share the same gate at $v_1$, $\eta_h$ satisfies
$\eta_h(c) = -1,$ $\eta_h(h) = 1$, and $0$ for the rest of the edges.
The edge $b$ is also not in $Y'$, and its endpoints are $v_2$ and $v_3$, so $L_b = cjbac$ is an arc whose endpoints coincide at $v_0$, and $\eta_b$ satisfies
$\eta_b(a) = \eta_b(b) = \eta_b(j)=1$, and $0$ for the rest of the edges. The other five basis elements are constructed in a similar way.
\end{example}
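The passage from the tree $Y$ to the forest $Y'$ can be mechanized in the same spirit. The sketch below uses {\tt networkx} on a small hypothetical tree echoing Figure~\ref{forest} (it is not the tree of the example above): from each odd vertex other than the root we delete the tree edge to its neighbour of smaller height.
\begin{verbatim}
# Sketch of the forest construction: Y is a hypothetical spanning tree
# with odd vertices v0 and v4; heights are tree distances from v0.
import networkx as nx

Y = nx.Graph([("v0", "v1"), ("v1", "v2"), ("v1", "v3"), ("v3", "v4")])
odd = {"v0", "v4"}
height = nx.shortest_path_length(Y, "v0")

Yp = Y.copy()
for v in odd - {"v0"}:
    parent = min(Y[v], key=height.get)   # neighbour of smaller height
    Yp.remove_edge(v, parent)

print(list(nx.connected_components(Yp)))
# two components, {v4} and {v0, v1, v2, v3}: one odd vertex in each
\end{verbatim}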
\subsubsection{Basis for $W(G,f)$, non-orientable case with no odd vertices}\label{sssec:basis3}
In this case, we can find a simple loop $\mathcal L_0 \subset G$ that does not admit an orientation consistent with the train track $\tau$.
If $\mathcal L_0$ misses any vertices of $G,$ then we define $\mathcal L_1$ by adding an edge with exactly one vertex in $\mathcal L_0.$
If $\mathcal L_1$ misses any vertices, we define $\mathcal L_2$ by adding an edge with exactly one vertex in $\mathcal L_1,$ etc.
Ultimately we obtain a connected subgraph $\mathcal L$ that is homotopy equivalent to a circle and contains all vertices of $G$.
If $e$ is an edge outside $\mathcal L,$ we can find paths in $\mathcal L$ from each endpoint of $e$ to the loop $\mathcal L_0,$ resulting in a path $L_\bullet$ that contains $e$ and with endpoints in $\mathcal L_0$.
Now we can find paths $L_1, L_2$ in $\mathcal L_0$ so that $L_1 \cup L_2 = \mathcal L_0$ and the endpoints of $L_1, L_2$, called $v_a$ and $v_b$, agree with those of $L_\bullet$.
It is possible that $v_a$ and $v_b$ are the same vertex, in which case we set $L_2 = \emptyset.$ (This happens in Example~\ref{ex-basis-non-ori2}, where our construction is applied to Example~\ref{ex:penner}.)
Let $\eta^0 \in V(G)$ be a vector which assigns $1$ to edge $e$, $\pm 1$ to the other edges in $L_\bullet$, and $0$ to the edges not in $L_\bullet$, so that the alternating weight sum is $0$ at all the vertices but $v_a, v_b$.
Next, let $\eta^1 \in V(G)$ (resp.\ $\eta^2 \in V(G)$) be a vector which assigns $\pm 1$ to the edges of $L_1$ (resp.\ $L_2$) and $0$ to the edges not in $L_1$ (resp.\ $L_2$) so that $\eta^0 + \eta^1$ (resp.\ $\eta^0 + \eta^2$) has the alternating sum of weights equal to $0$ at all the vertices of $G$ but $v_a$.
In particular, at vertex $v_b$, for $i=1,2$,
$(\mbox{the alternating sum of weights of } \eta^0 + \eta^i)=0,$
hence
$$(\mbox{the alternating sum of weights of } \eta^1 - \eta^2 \mbox{ at } v_b)=0.$$
On the other hand, at vertex $v_a$,
$$(\mbox{the alternating sum of weights of } \eta^1 - \eta^2 \mbox{ at } v_a)=\pm 2.$$
For if it were $0$, then the loop $\mathcal L_0$ would admit an orientation consistent with $\tau$, which is a contradiction.
Therefore, at $v_a$,
$$(\mbox{the signed weight of }\eta^1) =
- (\mbox{the signed weight of }\eta^2).$$
This means, at $v_a$, we have
$(\mbox{the alternating sum of weights of } \eta^0 + \eta^1)=0$
if and only if
$(\mbox{the alternating sum of weights of } \eta^0 + \eta^2) \neq 0$.
If $(\mbox{the alternating sum of weights of } \eta^0 + \eta^1)=0$ then define $\eta_e := \eta^0 + \eta^1$ and $L_e := L_\bullet \cup L_1$.
Otherwise define $\eta_e := \eta^0 + \eta^2$ and $L_e := L_\bullet \cup L_2$.
By construction, $\eta_e$ satisfies the alternating sum condition of Lemma~\ref{switchlem}, hence $\eta_e \in W(G,f)$.
Note that the number of edges in $\mathcal L$ is equal to the number of vertices of $G$, so by Lemma~\ref{lem:dimW(G,f)} the number $l$ of edges outside $\mathcal L$ is equal to $\dim W(G,f)$.
Suppose $e_1, \cdots, e_l$ are the edges of $G \setminus \mathcal L$. Then $\eta_{e_i}(e_j) = 1$ if and only if $i=j$,
i.e., vectors $\eta_{e_1}, \cdots, \eta_{e_l}$ are linearly independent.
This proves that $\{\eta_{e_1}, \cdots, \eta_{e_l}\}$ is a basis of $W(G,f)$.
\begin{example}\label{ex-basis-non-ori2}
See Example~\ref{ex:penner}.
The partial vertex $v_0$ has ten gates.
We assign signs alternately to the gates, which imposes orientations on the edges $b, c, e$.
However, the gates of each edge $a, d$ or $f$ have the same sign, hence $a, d, f$ do not admit consistent orientations.
We may choose the loop $\mathcal L=\mathcal L_0$ to be the union of $v_0$ and the edge $f$.
For the edge $b$, the loop $L_b$ consists of the single edge $b$ and the vertex $v_0$.
The element $\eta_b$ has
$$\eta_b(b)=1, \mbox{ and } \
\eta_b(a)=\eta_b(c)=\eta_b(d)=\eta_b(e)=\eta_b(f)=0.$$
For the edge $a$, the loop $L_a = a \cup f$ and $\eta_a$ has
$$\eta_a(a)=\eta_a(f)=1, \mbox{ and } \
\eta_a(b)=\eta_a(c)=\eta_a(d)=\eta_a(e)=0.$$
The alternating sum of the weights of the ten gates is zero for both $\eta_a$ and $\eta_b$.
Lemma~\ref{switchlem} guarantees that $\eta_a, \eta_b \in W(G,f)$.
\end{example}
\subsubsection{The matrix for the action of $f_*$ on $W(G,f)$} \label{sssec:basis4}
With respect to the basis of $W(G,f)$ described in $\S$\ref{sssec:basis1}, \ref{sssec:basis2} and \ref{sssec:basis3}, let $A$ denote the matrix representing the map
$f_\ast |_{W(G,f)}$.
With this $A$ we can compute the homology polynomial $\chi(f_\ast |_{W(G,f)})$.
We compute $A$ explicitly as follows:
Let $e_1, \cdots, e_n$ be the edges of $G$.
Let $\zeta_1, \cdots, \zeta_n$ be the standard basis of $V(G)\cong \mathbb{R}^n$, where $\zeta_i(e_j)= \delta_{i, j}$, the Kronecker delta.
Let $l=\dim W(G,f)$.
Suppose that $\{\eta_1, \cdots, \eta_l\}$ is a basis constructed as in $\S$\ref{sssec:basis1}, \ref{sssec:basis2} and \ref{sssec:basis3}.
Reordering the labels, if necessary, we may assume $\eta_j = \eta_{e_j}$ for $j = 1, \cdots, l$.
Now let $Q$ be an $n \times l$ matrix whose entries $q_{i,j} \in \mathbb{Z}$ satisfy
$\eta_j= \sum_{i=1}^n q_{i,j} \zeta_i$.
Let $T$ be the transition matrix for the train track map $f:G\to G$, and let $P: \mathbb{R}^n \to \mathbb{R}^l$ be the projection onto the first $l$ coordinates.
Then $A$ is an $l \times l$ integer matrix with
$A=PTQ.$
See Example 5.1 for the calculation of the matrix $A$.
We note that Corollary~\ref{cor:isomorphism of f_ast}, in the next section, implies that $A \in GL(l, \mathbb{Z})$.
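The recipe $A = PTQ$ is a one-line computation in practice. Here is a minimal sketch with {\tt sympy}; the matrices $T$ and $Q$ are hypothetical placeholders (in practice $T$ is supplied by XTrain and the columns of $Q$ come from the basis construction above), so the printed polynomial is not the homology polynomial of any example in this paper. For matrices arising from a genuine train track map, the corollary just cited forces $\det A = \pm 1$.
\begin{verbatim}
# Sketch: A = P T Q and its characteristic polynomial.  T (n x n) and
# Q (n x l) below are hypothetical placeholders.
import sympy as sp

T = sp.Matrix([[1, 1, 0], [1, 0, 1], [1, 0, 0]])
Q = sp.Matrix([[1, 0], [0, 1], [1, -1]])
l = Q.shape[1]

A = (T * Q)[:l, :]               # P = projection onto the first l coordinates
x = sp.symbols('x')
print(A.charpoly(x).as_expr())   # characteristic polynomial of A
\end{verbatim}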
\begin{remark}
A related question is the computation of the `vertex polynomial' $\chi(f_\ast |_{{\rm im}\,\delta})$, even though that polynomial is not a topological invariant and so is only of passing interest. We mention it because in special cases it may be easiest to compute the homology polynomial from the characteristic polynomial of $T$ by dividing by the vertex polynomial.
See Examples~\ref{ex:k89} and \ref{ex:evenodd} below, where the computation of the vertex polynomial is carried out in two cases, using data that is supplied by XTrain.
\end{remark}
\subsection{The orientation cover and the homology polynomial} \label{subsec:homology poly}
While we have the first decomposition theorem in hand, and have learned how to compute the homology polynomial, that is, the characteristic polynomial of the action of $f_\ast$ on $W(G,f)$, we have not proved that it is an invariant of $[F]$ and we do not understand its topological meaning. All that will be remedied in this section. Our work begins by recalling the definition of the orientation cover $\tilde S$ of $S$, introduced by Thurston \cite[p.427]{T}, see also \cite[p.184]{PH}. After that we will establish several of its properties. See Proposition~\ref{prop: orientation cover is orientable}. In Theorem~\ref{thm:involution} and Corollary~\ref{cor:isomorphism of f_ast} we study the homology space $H_1(\tilde S; \mathbb{R})$ and its relationship to our vector space $W(G,f)$ in the case when $\tau$ is non-orientable. At the end of the section, in
Corollary~\ref{cor:homology polynomial is invariant}, we establish the important and fundamental result that the homology polynomial is an invariant of the mapping class $[F]$ in Mod$(S)$.
\begin{definition} {\bf (Angle between two branches at a switch)} \label{def:angles at a switch}
For a switch in $\tau$, fix a very small neighborhood that only contains the switch and the branches meeting at the switch.
Within this neighborhood, we orient the branches in the direction outward from the switch.
This allows us to define the {\em angle} between two branches that meet at the switch.
Since they always meet tangentially, this angle is either $0$ or $\pi$.
If the angle is $0$, then we say that the branches form a {\em corner}.
For example, the angle between the branches $a$ and $b_1$ in Figure~\ref{fig:switch} (B) is $\pi$, whereas the angle between the branches $b_1$ and $b_2$ is $0$ and $b_1, b_2$ form a corner.
\end{definition}
\begin{definition}{\bf (The orientation cover)}\label{def:orientation cover}
Let $F:S\to S$ be a pA homeomorphism with non-orientable train track $\tau$.
Add a puncture to $S$ for each 2-cell of $S \setminus \tau$ corresponding to an odd or even vertex of $G$.
The resulting surface $S'$ deformation retracts to
$\tau$, i.e., $\pi_1(S') = \pi_1(\tau)$.
Each (not necessarily smooth) loop $\gamma \subset \tau$ consists of branches of $\tau$.
We define a homomorphism
$\theta:\pi_1(\tau)\to \mathbb Z/2 \mathbb Z$
which maps a loop in $\tau$ to $0$ if and only if it has an even number of corners (see Definition~\ref{def:angles at a switch}). The {\em orientation cover} $p:\tilde S\to S$ associated to $\tau$ is obtained from the double cover $\tilde{S'}$ of $S'$ corresponding to $\ker\theta$ by filling in the punctures in $\tilde{S'}$ that do not belong to the original punctures of $S$.
At the same time, the non-orientable train track $\tau$ lifts to an orientable train track $\tilde\tau \subset \tilde S$.
Collapsing the infinitesimal $($partial$)$ polygons in $\tilde\tau$ to vertices, we obtain a graph $\tilde G$, which is a double branched cover of $G$.
\end{definition}
Note that the branch points of $p: \tilde S\to S$ are precisely the odd
vertices of $G$.
Intuitively, the effect of passing to the orientation cover
is a partial unrolling of loops in $\tau$ that do not admit a consistent
orientation.
For example, Figure~\ref{fig:unroll} illustrates what happens near the branch point for a vertex of valence three.
\begin{figure}[htpb!]
\begin{center}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\psfrag{ta}{$\tilde a$}
\psfrag{tb}{$\tilde b$}
\psfrag{tc}{$\tilde c$}
\psfrag{tap}{$\tilde a'$}
\psfrag{tbp}{$\tilde b'$}
\psfrag{tcp}{$\tilde c'$}
\includegraphics[width=.65\textwidth]{unroll-copy}
\end{center}
\caption{Unrolling an odd vertex.}\label{fig:unroll}
\end{figure}
\begin{proposition}\label{prop: orientation cover is orientable}
Assume that $\tau$ is non-orientable.
\begin{enumerate}
\item The orientation cover $\tilde\tau$ is an orientable train track.
The natural involution $\iota: \tilde S\to \tilde S$, or the deck transformation, reverses the
orientation of $\tilde\tau$.
\item A puncture of $S$ corresponds to two punctures of $\tilde S$ if
and only if a loop around the puncture is homotopic to a loop in $\tau$
with an even number of corners.
Otherwise, the puncture lifts to one puncture in $\tilde S$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) By Definition~\ref{def:orientation cover} we have $p_*(\pi_1(\tilde \tau))=\ker\theta$, hence every loop in $\tilde\tau$ has an even number of corners, and so $\tilde\tau$ can be consistently oriented. If the
involution $\iota$ did not reverse the orientation of $\tilde\tau$, then
the orientation of $\tilde\tau$ would induce a consistent orientation of
$\tau$, but $\tau$ is not orientable.
(2) A loop in $\tau$ lifts to two loops in $\tilde\tau$ if and only if
it has an even number of corners. If it has an odd number of corners, then its preimage is a single loop that double covers it, so the corresponding puncture lifts to a single puncture.
\end{proof}
\begin{proposition}\label{or-cover-matrix}
A train track map $f:G\to G$ has two lifts
$\tilde f_{\rm op} : \tilde G\to \tilde G$, {\em orientation preserving}, and
$\tilde f_{\rm or} : \tilde G\to \tilde G$, {\em orientation reversing}.
They are related to each other by
$\tilde f_{\rm or} = \iota \cdot \tilde f_{\rm op}$,
where $\iota:\tilde G \to \tilde G$ is the deck transformation.
Let $n$ be the number of edges in $G$.
Then there exist $n\times n$ non-negative matrices $A, B$ such that
$A + B = T$, the transition matrix of $f$, and
$\tilde f_{\rm op}$ and $\tilde f_{\rm or}$ are represented as:
$\tilde f_{\rm op} =
\scriptsize
\begin{bmatrix}
A & B \\
B & A
\end{bmatrix}$
and
$\tilde f_{\rm or} =
\scriptsize
\begin{bmatrix}
B & A \\
A & B
\end{bmatrix}.
$
Their characteristic polynomials are
$\chi((\tilde f_{\rm op})_*) = \chi(f_*)\, \chi(A-B)$
and
$\chi((\tilde f_{\rm or})_*) = \chi(f_*)\, \chi(B-A),$
where $\chi(M)$ denotes the characteristic polynomial of a matrix $M$.
\end{proposition}
The above proposition confirms that the pA maps $F: S\to S$, $\tilde F_{\rm op}:\tilde S \to \tilde S$, and $\tilde F_{\rm or}: \tilde S \to \tilde S$ all have the same dilatation.
\begin{proof}
Even though the train track $\tau$ is non-orientable, we assign an orientation to each edge of $G$.
Let $e_1, \cdots, e_n$ be the oriented edges of $G$.
We denote the lifts of $e_k \subset G$ by $\tilde e_k, \tilde e_k' \subset \tilde G$.
Since $\tilde \tau$ is orientable, we choose an orientation, which induces orientations of $\tilde e_k, \tilde e_k'$.
Denote the orientation cover by $p:\tilde G \to G$.
Proposition~\ref{prop: orientation cover is orientable}-(1) implies that there are two choices:
$p(\tilde e_k)=e_k$ or $p(\tilde e_k)=\overline e_k$, where $\overline e_k$ is the edge $e_k \subset G$ with reversed orientation.
We choose the labels so that $p(\tilde e_k)=e_k$, which implies that $p(\tilde e_k')=\overline e_k$.
We define an orientation preserving lift $\tilde f_{\rm op}: \tilde G \to \tilde G$ in the following way.
For an edge $e \subset G$, let $f(e)_{\rm head}$ (resp. $f(e)_{\rm tail}$) denote the first (resp. last) letter of the word $f(e)$.
For each pair of twin edges $\tilde e, \tilde e' \subset \tilde G$, we choose
\begin{equation}\label{choice of head}
\tilde f_{\rm op} (\tilde e)_{\rm head}
:=
\widetilde{f(e)_{\rm head}},
\qquad
\tilde f_{\rm op} (\tilde e')_{\rm head}
:=
\widetilde{f(e)_{\rm tail}}'.
\end{equation}
Next, we define the word $\tilde f_{\rm op} (\tilde e)$ to be the word $f(e)$ with each letter $e_i$ in $f(e)$ replaced by $\tilde e_i$ or $\tilde e_i'$ so that the resulting word corresponds to a connected edge-path in $\tilde G$.
Due to the choice (\ref{choice of head}), the choice between $\tilde e_i$ and $\tilde e_i'$ is uniquely determined.
The word $\tilde f_{\rm op} (\tilde e')$ is given by the word $\tilde f_{\rm op} (\tilde e)$ {\em read from right to left}, with each letter $\tilde e_i$ replaced by $\tilde e_i'$ and each $\tilde e_i'$ by $\tilde e_i$.
We define an orientation reversing train track map by $\tilde f_{\rm or} := \iota \cdot \tilde f_{\rm op}$.
Let $\{ \zeta_1, \cdots, \zeta_n, \zeta_{1}', \cdots, \zeta_{n}' \}$ be the standard basis of $V(\tilde G) \simeq \mathbb{R}^{2n}$, where $\zeta_k, \zeta_k'$ correspond to $\tilde e_k, \tilde e_k' \subset \tilde G$ respectively.
From the constructions of $\tilde f_{\rm op}$ and $\tilde f_{\rm or}$, with respect to this basis, their transition matrices are of the form
$\scriptsize
\begin{bmatrix}
A & B \\
B & A
\end{bmatrix}$
and
$
\scriptsize
\begin{bmatrix}
B & A \\
A & B
\end{bmatrix}
$
respectively, for some non-negative $n\times n$ matrices $A$ and $B$ satisfying $A + B = T$.
The formulae on characteristic polynomials follow from basic row and column reductions.
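Explicitly, for $\tilde f_{\rm op}$, adding the second block row to the first and then subtracting the first block column from the second gives
\[
\det\begin{bmatrix}
A - xI & B \\
B & A - xI
\end{bmatrix}
=
\det\begin{bmatrix}
A + B - xI & A + B - xI \\
B & A - xI
\end{bmatrix}
=
\det\begin{bmatrix}
A + B - xI & 0 \\
B & A - B - xI
\end{bmatrix},
\]
and the last matrix is block triangular with square diagonal blocks, so its determinant is $\det(T - xI)\,\det(A - B - xI)$, as claimed. The computation for $\tilde f_{\rm or}$ is identical with the roles of $A$ and $B$ interchanged.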
\end{proof}
In Example~\ref{ex:penner} there is a sketch of the orientation cover of
a non-orientable train track which has no odd vertices.
Also one can see explicit computations of $\tilde f_{\rm op}$ and $\tilde f_{\rm or}$ and matrices $A, B$.
We are finally in a position to understand the topological meaning of $W(G,f)$:
\begin{theorem}\label{thm:involution}
Assume that $\tau$ is non-orientable.
Let $\iota: \tilde S\to \tilde S$ be the involution of the orientation cover. Let $E^+$ and $E^-$ be the eigenspaces of $\iota_\ast: H_1(\tilde S; \mathbb{R})\to H_1(\tilde S; \mathbb{R})$ corresponding to the eigenvalues $1$ and $-1$, so that $H_1(\tilde S; \mathbb{R}) \cong E^+ \oplus E^-.$ Then $E^+ \cong H_1(S; \mathbb{R})$ and $E^- \cong W(G,f)$.
\end{theorem}
\begin{proof}
Fix an orientation of $\tilde\tau$ once and for all. This determines an orientation of $\tilde G$.
Since $\iota$ is an involution, the only possible eigenvalues are $\pm 1$, and $\iota_\ast$ is diagonalizable.
For a homology class $\xi\in H_1(S; \mathbb{R})$, let $\tilde \xi\in H_1(\tilde S;
\mathbb{R})$ denote its lift to the orientation cover. Since $p\cdot\iota=p$, we have $\iota_\ast\tilde\xi = \tilde\xi$. Thus $H_1(S; \mathbb{R})\subseteq E^+.$
Each edge $e\subset G$ has two lifts $\tilde e$ and $\tilde e' \subset \tilde G$.
Let $\zeta_e$ be the basis element of $V(G)$ corresponding to the (unoriented) edge $e$, and let $\zeta_{\tilde e}$ be the basis element
of $C_1(\tilde G; \mathbb{R}) \cong C_1(\tilde S; \mathbb{R})$ corresponding to the (oriented) edge $\tilde e$.
Define a homomorphism: $\phi: V(G)\to C_1(\tilde S; \mathbb{R})$ by $\zeta_e \mapsto \zeta_{\tilde e}+\zeta_{\tilde e'}.$
Recall that a basis element $\eta_e$ of $W(G,f)$, introduced in
$\S$~\ref{sssec:basis2} and $\S$~\ref{sssec:basis3}, has a corresponding arc or loop $L_e \subset G$, on which $\eta_e$ assigns weights of $\pm 1$ satisfying the alternating sum condition.
Different edges $e_1, e_2$ correspond to distinct $L_{e_1}$ and $L_{e_2}$.
Moreover, when $L_e$ is an arc, its endpoints are odd vertices which are branch points of the orientation cover, so the lift of $L_e$ is a closed curve in $\tilde S$.
Hence, the restriction of $\phi$ to $W(G, f)$ is injective.
Assume $\eta_e = \sum_i \eta_e( e_i ) \zeta_{e_i}$.
By Proposition~\ref{prop: orientation cover is orientable}-(1), the involution satisfies $\iota_\ast(\zeta_{\tilde e}) = - \zeta_{\tilde e'}$. We have:
$$\iota_\ast \phi(\eta_e)
=
\iota_\ast (\sum_i \eta_e( e_i ) (\zeta_{\tilde{e_i}} + \zeta_{\tilde{e_i}'}))
= \sum_i \eta_e( e_i ) (-\zeta_{\tilde{e_i}'} - \zeta_{\tilde{e_i}})
= - \phi(\eta_e),$$
i.e., $\phi(W(G,f))\subseteq E^-.$
Comparison of Euler characteristics along with Lemma~\ref{lem:dimW(G,f)} shows that
$$
\dim H_1(\tilde S; \mathbb{R}) =\dim H_1(S; \mathbb{R}) + \dim W(G,f),
$$
which implies $H_1(S; \mathbb{R}) \cong E^+$ and $\phi(W(G,f)) \cong E^-$.
\end{proof}
In Lemma~\ref{eq:inclusion} we proved that $f_\ast (W(G,f)) \subseteq W(G,f).$
In fact, a stronger statement holds.
\begin{corollary}\label{cor:isomorphism of f_ast}
The restriction map $f_\ast |_{W(G,f)} : W(G,f) \to W(G,f)$ is an isomorphism.
\end{corollary}
\begin{proof}
Regardless of the orientability of $\tau$, the fact that $F: S \to S$ is a homeomorphism implies that the induced map $F_\ast : H_1(S; \mathbb{R}) \to H_1(S; \mathbb{R})$ is an isomorphism.
Suppose that $\tau$ is orientable.
The isomorphism $W(G, f) \cong H_1(S; \mathbb{R})$ in Lemma~\ref{lem:dimW(G,f)} allows us to identify $f_\ast |_{W(G,f)}$ with $F_\ast$, which is an isomorphism.
Suppose that $\tau$ is non-orientable.
Let $\{ \eta_e \}_{e \in E}$ be a basis of $W(G,f)$ constructed as in $\S$\ref{sssec:basis2} and $\S$~\ref{sssec:basis3}.
Since the map $\phi: W(G,f) \to E^-$ in the proof of Theorem~\ref{thm:involution} is an isomorphism, the set $\{ \phi( \eta_e ) \}_{e\in E}$ is a basis of $E^-$.
Let $\tilde F: \tilde S \to \tilde S$ be a lift of $F: S \to S$.
It induces an isomorphism
$\tilde F_\ast : H_1(\tilde S; \mathbb{R}) \to H_1(\tilde S; \mathbb{R})$ and a train track map $\tilde f : \tilde G \to \tilde G$.
Since $\tilde S$ deformation retracts to $\tilde G$, we can identify $\tilde F_\ast$ with $\tilde f_\ast : H_1(\tilde G; \mathbb{R}) \to H_1(\tilde G; \mathbb{R})$.
Using the same notation as in the proof of Theorem~\ref{thm:involution}, we have
\begin{eqnarray*}
\iota_\ast \tilde f_\ast (\phi (\eta_e))
& = &
\iota_\ast \tilde f_\ast
\left(
\sum_i \eta_e( e_i ) (\zeta_{\tilde{e_i}} + \zeta_{\tilde{e_i}'})
\right) \\
& = &
\iota_\ast \sum_i \eta_e( e_i ) (
\zeta_{\tilde f_\ast (\tilde{e_i})} +
\zeta_{\tilde f_\ast (\tilde{e_i}')}
) \\
& = &
\sum_i \eta_e( e_i ) (
-\zeta_{\tilde f_\ast (\tilde{e_i}')}
-\zeta_{\tilde f_\ast (\tilde{e_i})}
)
= -\tilde f_\ast (\phi (\eta_e)).
\end{eqnarray*}
Hence
$\tilde F_\ast (E^-) = \tilde f_\ast (E^-) \subset E^-$.
Since $H_1(\tilde S; \mathbb{R})$ is finite dimensional and $\tilde F_\ast$ is an isomorphism, we obtain that $\tilde F_\ast|_{E^-}= f_\ast|_{W(G,f)}$ is an isomorphism.
\end{proof}
The topological invariance of the homology polynomial was stated as a conjecture in an earlier draft. Reading
that draft, Jeffrey Carlson pointed the authors to a connection they had missed, making our conjecture an immediate consequence of Theorem~\ref{thm:involution}. We are grateful for his help.
\begin{corollary}\label{cor:homology polynomial is invariant}
The homology polynomial $\chi(f_\ast|_{W(G,f)})$ is an invariant of the pA mapping class $[F]$.
\end{corollary}
\begin{proof}
Let $\tilde F: \tilde S \to \tilde S$ be a lift of the pA map
$F:S\to S$.
By Theorem~\ref{thm:involution} we have
$\chi(f_\ast|_{W(G,f)}) = \chi(\tilde F_\ast|_{E^-})$.
Since the eigenspace $E^-$ is an invariant of $[F]$, so is our polynomial $\chi(f_\ast|_{W(G,f)})$.
\end{proof}
This concludes the proof of Part (1) of Theorem~\ref{thm:summarize}.
\section{Proof of Parts (2) and (3) of Theorem~\ref{thm:summarize}}\label{sec:2nd polynomial invariant}
Having established the meaning of $W(G,f)$ and the invariance of the homology polynomial, our next goal is to understand whether it is irreducible, and if not to understand its factors. At the same time, we will investigate its symmetries.
With those goals in mind, we show that there is a well-defined and $f_*$-invariant skew-symmetric form on the space $W(G,f)$. See Proposition~\ref{prop:properties of skew-sym. form}. We define the subspace $Z\subset W(G,f)$ to be the space of degeneracies of this skew-symmetric form. We are able to interpret the action of $f_\ast$ on $Z$ geometrically, as being a permutation of certain punctures on $S$.
In Theorem~\ref{thm:2nd decomposition} we will prove that the space $W(G,f)$ has a decomposition into summands that are invariant under the action of $f_\ast$, and that as a consequence the homology polynomial decomposes as a product of two polynomials, $ \chi(f_\ast|_Z)$ and $\chi(f_\ast|_{W(G, f)/Z})$. We call them the {\it puncture} and {\it symplectic} polynomials. Like the homology polynomial, both are invariants of $[F]$ in Mod$(S)$. We also establish their symmetries in Theorem~\ref{thm:2nd decomposition}, and understand the precise meaning of the puncture polynomial. The symplectic polynomial contains $\lambda$ as its largest real root. When irreducible, it coincides with the minimum polynomial of $\lambda$, but in general it is not irreducible. At this writing we do not understand when it is or is not reducible.
\subsection{Lifting the basis elements for $W(G,f)$ to $W(\tau)$}\label{subsec:transitional & terminal}
Our work begins with a brief diversion, to establish a technical result that will be needed in the sections that follow.
We have shown how to construct basis elements $\eta_{e_1},\dots,\eta_{e_l}$ for $W(G,f)$. We now build on this construction to give an explicit way to lift each $\eta_e \in W(G,f)$ to an element $\eta_e' \in W(\tau)$ in such a way that $\pi_\ast(\eta_e')=\eta_e$, where $\pi_* : W(\tau) \to W(G,f)$ is the natural surjection.
Although there are infinitely many lifts of any given basis element $\eta_e$, our construction of a specific $\eta_e'$ will be useful later. The issue to be faced in lifting $\eta_e$ to $\eta_e'$ is the assignment of weights to the infinitesimal edges.
\begin{definition}
Suppose that $v \in G$ is a vertex with $k$ gates numbered $0, \cdots, k-1,$ counterclockwise.
For $i= 0, \cdots, k-1$, we define a {\em transitional} element $\sigma_i \in V(\tau)$ that assigns $1$ to the $i$-th infinitesimal edge for the vertex $v$, and $0$ to the remaining branches of $\tau$.
In other words, in Figure~\ref{fig:vertex-types}, $x_i = 1$ and $x_j = 0$ for $j\neq i$.
Suppose that $v \in G$ is an odd vertex with $k$ gates.
For $i= 0, \cdots, k-1$,
we define a {\em terminal} element $\omega_i \in V(\tau)$ which assigns $\pm \frac12$ to the incident infinitesimal edges for $v$
so that the $i$-th gate has weight $w_i = 1$ and the $j$-th gate $(j \neq i)$ has weight $w_j = 0$, cf.\ Figure~\ref{fig:weightedpath}; and assigns $0$ to the rest of the branches of $\tau$.
\end{definition}
\begin{figure}[htpb!]
\begin{center}
\psfrag{h}{$\frac12$}
\psfrag{mh}{$-\frac12$}
\psfrag{o}{$1$}
\psfrag{mo}{$-1$}
\psfrag{z}{$0$}
\includegraphics[width=.7\textwidth]{weightedpath-copy}
\end{center}
\caption{Transitional and terminal elements.}
\label{fig:weightedpath}
\end{figure}
For both the orientable and the non-orientable case, our basis element $\eta_e \in W(G,f)$ is a vector whose entries are $\pm 1$ or $0$.
Recall that the edges whose weights are $\pm 1$ form a loop or an arc, denoted by $L_e$ in $\S$~\ref{sssec:basis1}, \ref{sssec:basis2} and \ref{sssec:basis3}.
At an even or a partial vertex of $L_e$, suppose $L_e$ goes through the $i$-th and $j$-th gates ($i \leq j$).
To $\eta_e$ we add consecutive transitional elements $\sigma_i, \sigma_{i+1}, \cdots, \sigma_{j-1}$ with alternating signs so that the switch condition is satisfied (cf. the upper middle circle in Figure~\ref{fig:weightedpath}).
Repeat this procedure for all the non-odd vertices of $L_e$.
If $L_e$ is a loop, it yields an element $\eta_e'$ of $W(\tau)$.
When $L_e$ is an arc (i.e., $\tau$ is non-orientable with odd vertices), the two endpoints of $L_e$ are odd.
Suppose $L_e$ enters the $i$-th gate of an odd vertex.
After adding transitional elements as above, we further add the terminal element $\omega_i$ or $-\omega_i$ so that the switch condition is satisfied at all the incident gates of the odd vertex.
Proceeding in this way for the other odd vertex as well, we obtain an element $\eta_e'$ of $W(\tau)$.
\subsection{A skew-symmetric form on $W(G,f)$}\label{subsec:skew-symmetric form on W(G,f)}
In this section we define a skew-symmetric form $\langle \cdot , \cdot \rangle$ on $W(G,f)$.
To get started, we slightly modify the skew-symmetric form on $W(\tau)$ introduced by Penner-Harer~\cite[p.182]{PH}:
For a branch $b \subset \tau$ and $\eta \in W(\tau)$, let $\eta(b)$ denote the weight that $\eta$ assigns to $b$.
At a switch of valence $k$ ($k \geq 3$), label the branches $a, b_1, \cdots, b_{k-1}$ as in Figure~\ref{fig:switch}.
The cyclic order of $a, b_1, \cdots, b_{k-1}$ is determined by the orientation of the surface and the embedding of $\tau$ in $S$.
We define a skew-symmetric form:
$$
\langle \eta, \zeta \rangle_{W(\tau)} :=
\frac12
\sum_{\substack{
\text{switches} \\
\text{in } \tau}
} \
\sum_{i < j}
\begin{vmatrix}
\eta(b_i) & \eta(b_j)\\
\zeta(b_i) & \zeta(b_j)
\end{vmatrix},
\quad
\mbox{ for } \eta, \zeta \in W(\tau).
$$
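In computations the form is assembled switch by switch. The following is a minimal sketch of the contribution of a single switch; the function name is hypothetical, and the weight lists are assumed to enumerate the branches $b_1, \cdots, b_{k-1}$ in the cyclic order of Figure~\ref{fig:switch}.
\begin{verbatim}
# Sketch: contribution of one switch of valence k to the form above.
# eta_w and zeta_w list the weights eta(b_i) and zeta(b_i), i = 1..k-1,
# in the cyclic order of the figure.
def switch_contribution(eta_w, zeta_w):
    total = 0
    for i in range(len(eta_w)):
        for j in range(i + 1, len(eta_w)):
            total += eta_w[i] * zeta_w[j] - eta_w[j] * zeta_w[i]
    return total / 2   # sum this over all switches of tau
\end{verbatim}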
Recall the surjective map $\pi : \tau \to G$ collapsing the infinitesimal (partial) polygons to vertices.
\begin{definition}
For $\eta, \zeta\in W(G,f)$ there exist $\eta', \zeta'\in W(\tau)$ so that $\pi_\ast(\eta') = \eta$ and $\pi_\ast(\zeta')=\zeta$. We define a skew-symmetric form on $W(G,f)$ by:
$$\langle \eta, \zeta \rangle_{W(G,f)} := \langle \eta', \zeta' \rangle_{W(\tau)}$$
\end{definition}
\begin{proposition} \label{prop:properties of skew-sym. form}
The skew-symmetric form $\langle \cdot , \cdot \rangle_{W(G,f)}$ has the following properties:
\begin{enumerate}
\item It is well-defined.
\item When $\tau$ is orientable, $\langle \eta, \zeta \rangle_{W(G,f)} $ is the homology intersection number of $1$-cycles associated to $\eta$ and $\zeta$.
\item When $\tau$ is non-orientable, recall that $E^{\pm}$ are the eigenspaces of the deck transformation $\iota : \tilde S \to \tilde S$ for the orientation cover studied in Theorem~\ref{thm:involution}.
Since $p: \tilde S \to S$ is a {\em double} branched cover, we have the following results, to be compared with \cite[p.187]{PH}:
\begin{enumerate}
\item The restriction of the intersection form on $H_1(\tilde S; \mathbb{R})$ to $E^+$ is twice the intersection form on $H_1(S; \mathbb{R})$.
\item The restriction of the intersection form on $H_1(\tilde S; \mathbb{R})$ to $E^-$ is twice the skew-symmetric form $\langle \cdot, \cdot \rangle_{W(G,f)}$.
\end{enumerate}
\item For all $\eta, \zeta\in W(G,f)$, we
have $\langle f_\ast\eta, f_\ast\zeta \rangle_{W(G,f)}
= \langle \eta, \zeta \rangle_{W(G,f)}.$
\end{enumerate}
\end{proposition}
\begin{proof}
(1)
It suffices to show that for any $\eta' \in \ker \pi_\ast \subset W(\tau)$ and $\zeta' \in W(\tau)$, the product $\langle \eta' , \zeta' \rangle_{W(\tau)}=0$.
Since $\pi_\ast(\eta') = \vec{0}$, $\eta'$ assigns $0$ to every real edge $b \subset \tau$; it follows that the weight of $\eta'$ at every gate is $0$.
If a vertex $v\in G$ is odd or partial, then $\eta'$ assigns $0$ to every infinitesimal edge associated to the vertex $v$.
Therefore, $v$ does not contribute to $\langle \eta' , \zeta' \rangle_{W(\tau)}$.
If a vertex $v\in G$ is even with $k$ gates, the weights $x_0, \cdots, x_{k-1}$ that $\eta'$ assigns to the infinitesimal edges for $v$ form an alternating sequence: $x_i = (-1)^i x_0$.
The contribution of the even vertex $v$ to $\langle \eta' , \zeta' \rangle_{W(\tau)}$ is:
$$
\frac{1}{2} \sum_{i=0}^{k-1}
\begin{vmatrix}
x_i & x_{i-1}\\
y_i & y_{i-1}\\
\end{vmatrix}
= \frac{x_0}2 \sum_{i=0}^{k-1}
\begin{vmatrix}
(-1)^i & (-1)^{i-1}\\
y_i & y_{i-1}\\
\end{vmatrix}
= \frac{x_0}2 \sum_{i=0}^{k-1}
(-1)^i(y_i+y_{i-1})
=0,
$$
where $y_i$ are the weights assigned by $\zeta'$ and indices are modulo $k$.
(2)
Assertion (2) is established in Lemma 3.2.2 of \cite{PH}.
(3)
Assertion (3) follows directly from Theorem~\ref{thm:involution}.
(4)
If $\tau$ is orientable, then we identify $W(G,f)\cong H_1(S; \mathbb{R})$.
Since $F: S\to S$ is a homeomorphism, the homology intersection number is preserved under $f_\ast : H_1(S ; \mathbb{R}) \to H_1(S ; \mathbb{R})$ and the assertion follows.
If $\tau$ is non-orientable, then, passing to the orientation cover, $\tilde F_\ast : H_1(\tilde S ; \mathbb{R}) \to H_1(\tilde S ; \mathbb{R})$ preserves the homology intersection number.
The assertion then follows from (i) $\tilde F_\ast |_{E^-} = f_\ast|_{W(G,f)}$ and (ii) assertion (3) of this proposition.
\end{proof}
Knowing that $\langle \cdot, \cdot \rangle_{W(G,f)}$ is well-defined,
we can compute $\langle \eta_1, \eta_2 \rangle_{W(G,f)}= \langle \eta_1', \eta_2' \rangle_{W(\tau)}$
by using the basis elements $\eta_1, \eta_2 \in W(G,f)$ discussed in $\S$~\ref{sssec:basis1}, \ref{sssec:basis2} and \ref{sssec:basis3}, and their particular extensions $\eta_1', \eta_2' \in W(\tau)$ introduced in $\S$~\ref{subsec:transitional & terminal}.
For this, it is convenient to study how transitional and terminal elements contribute to the skew-symmetric form.
Straightforward calculation of determinants at the incident gates yields the following:
\begin{proposition}\label{localcomputation}
Let $v$ be a vertex with $k$ incident gates, numbered $0,\ldots, k-1,$ counterclockwise.
We have
$\langle \sigma_i, \sigma_j \rangle = \langle \sigma_0, \sigma_{j-i} \rangle$,
$\langle \omega_i, \sigma_j \rangle = \langle \omega_0, \sigma_{j-i} \rangle$
and
$\langle \omega_i, \omega_j \rangle = \langle \omega_0, \omega_{j-i} \rangle$ for $0\leq i\leq j\leq k-1$.
Moreover,
$$
\langle \sigma_0, \sigma_i \rangle =
\begin{cases}
-\frac12 & \text{if $i=1$}\\
\frac12 & \text{if $i=k-1$}\\
0 & \text{otherwise}
\end{cases}
\qquad
\langle \omega_0, \sigma_i \rangle =
\begin{cases}
-\frac12 & \text{if $i=0$}\\
\frac12 & \text{if $i=k-1$}\\
0 & \text{otherwise}
\end{cases}
$$
$$
\text{ and } \quad
\langle \omega_0, \omega_i \rangle =
\begin{cases}
\frac{(-1)^i}2 & \text{if $i\neq 0$}\\
0 & \text{if $i=0$.}
\end{cases}
$$
\end{proposition}
\subsection{Degeneracies of the skew-symmetric form and the second decomposition}
\label{subsec:radical}
In this section, we investigate the totally degenerate subspace of $W(G,f)$,
$$
Z := \{ \eta \in W(G,f) \ | \
\langle \zeta, \eta \rangle = 0 \
\mbox{ for all }
\zeta \in W(G,f) \},
$$
the {\it radical} of the skew-symmetric form. It will lead us, almost immediately, to the second decomposition theorem and another new invariant of pA maps. We begin by showing how $Z$ has already appeared in our work, in a natural way.
\begin{proposition}\label{prop:dim Z}
Let $s$ be the number of punctures of $S$.
\begin{enumerate}
\item If $\tau$ is orientable, $\dim Z=s-1$.
\item If $\tau$ is non-orientable, then
\begin{eqnarray*}
\dim Z & = &
\# ( \mbox{punctures of $S$ that correspond to two punctures in } \tilde S) \\
& = &
\#(\mbox{punctures of $S$ represented by loops in $\tau$} \\
&& \quad \mbox{with even numbers of corners}).
\end{eqnarray*}
\end{enumerate}
\end{proposition}
\begin{proof}
In the orientable case, recall Lemma~\ref{lem:dimW(G,f)} which states $W(G,f) \cong H_1(S; \mathbb{R})$, and Lemma 3.2.2 of \cite{PH} which shows that our skew-symmetric form agrees with the homology intersection form.
Hence the space $Z$ is generated by the homology classes of $s$ loops around the punctures.
Because their sum is null-homologous, they are linearly dependent and $\dim Z = s-1$.
In the non-orientable case, let $\tilde S$ be the orientation cover of $S$ (Definition~\ref{def:orientation cover}). Recall the eigenspaces $E^{\pm}$ for the deck transformation $\iota: \tilde S \to \tilde S$, cf.\ Theorem~\ref{thm:involution}.
Let $s$ (resp.\ $r$) be the number of punctures of
$S$ that lift to two (resp.\ single) punctures in $\tilde S$, and
let $\alpha_1, \beta_1, \ldots, \alpha_s, \beta_s, \gamma_1, \ldots,
\gamma_r$ be the homology classes of loops around the punctures of
$\tilde S$, chosen so that $\iota_\ast\alpha_i=\beta_i$ for all $i=1,
\ldots, s$ and $\iota_\ast\gamma_j = \gamma_j$ for all $j=1, \ldots, r$
and oriented so that their sum is zero.
The radical of the homology intersection form on $H_1(\tilde S; \mathbb{R})$ is
\begin{eqnarray*}
&&\spn{\alpha_1, \beta_1, \ldots,
\alpha_s, \beta_s, \gamma_1, \ldots, \gamma_r} \\
& = & \spn{\alpha_1-\beta_1, \ldots, \alpha_s-\beta_s, \alpha_1+\beta_1,
\ldots, \alpha_s+\beta_s, \gamma_1, \ldots, \gamma_r}.
\end{eqnarray*}
Note that $\alpha_i-\beta_i\in E^-$ and $\alpha_i+\beta_i, \gamma_j \in E^+$ for all $i=1, \ldots, s$ and $j=1, \ldots, r$.
By Theorem~\ref{thm:involution} and assertion (3) of Proposition~\ref{prop:properties of skew-sym. form}, we obtain that $Z = \spn{\alpha_1-\beta_1, \cdots, \alpha_s-\beta_s}$.
Clearly, $\alpha_1-\beta_1, \cdots, \alpha_s-\beta_s$ are linearly independent, i.e., $\dim Z = s$.
\end{proof}
\begin{corollary}\label{rem:onepuncture}
Assume that $S$ is once punctured and that $\tau$ is non-orientable. Then $\dim Z =1$ if and only if $\dim W(G,f)$ is odd.
\end{corollary}
\begin{proof}
The induced skew-symmetric form on $W(G, f)/Z$
is non-degenerate, and so the dimension of $W(G, f)/Z$ is even.
Thus, $\dim Z$ is odd if and only if $\dim W(G, f)$ is odd.
Since $S$ has exactly one puncture, Proposition~\ref{prop:dim Z} yields that $\dim Z\leq 1$, and the corollary follows.
\end{proof}
\begin{remark}\label{rem:completeness}
Straightforward modifications of our arguments show that if $\tau$ is a
non-orientable train track (not necessarily induced by a train track
map), then the dimension of ${\rm rad}\, W(\tau)$ is the number of
complementary regions of $\tau$ with even numbers of corners. In
particular, if $\tau$ is complete, then the complement of $\tau$
consists of triangles and monogons (\cite[Theorem~1.3.6]{PH}), and
so the skew-symmetric form on $W(\tau)$ is non-degenerate in this case.
Although not explicitly stated, this is the case covered by \cite[Theorem~3.2.4]{PH}.
\end{remark}
\begin{theorem}\label{thm:2nd decomposition}
\textbf{\em (Second Decomposition Theorem)}
The map $f_\ast$ preserves the decomposition
$W(G,f) \cong (W(G,f)/Z) \oplus Z,$ so that
$\chi(f_\ast|_{W(G,f)}) = \chi(f_\ast|_{W(G,f)/Z})\chi(f_\ast|_Z).$
Moreover, we have:
\begin{enumerate}
\item
The polynomial $\chi(f_\ast|_Z)$ is an invariant of the pA mapping class $[F] \in {\rm Mod}(S)$.
The restriction $f_\ast|_Z$ encodes how $F$ permutes the punctures whose projections to $\tau$ have even numbers of corners.
In particular, $f_\ast|_Z $ is a periodic map, so that all the roots of $\chi(f_\ast|_Z)$ are roots of unity and the polynomial $\chi(f_\ast|_Z)$ is palindromic or anti-palindromic.
\item
The polynomial $ \chi(f_\ast|_{W(G,f)/Z})$ is an invariant of $[F]$.
The skew-symmetric form $\langle \cdot, \cdot \rangle_{W(G,f)}$ naturally induces a symplectic form on $W(G,f)/Z$.
The map $f_\ast$ induces a symplectomorphism of $W(G,f)/Z$. Hence the polynomial $\chi(f_\ast|_{W(G,f)/Z})$ is palindromic.
\item
The homology polynomial $\chi(f_\ast |_{W(G,f)})$ is either palindromic or anti-palindromic.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose $\eta \in Z.$
By assertion (4) of Proposition~\ref{prop:properties of skew-sym. form}, we have $0 =
\langle \eta, \zeta \rangle = \langle f_\ast(\eta), f_\ast(\zeta) \rangle$ for
all $\zeta\in W(G,f)$.
By Corollary~\ref{cor:isomorphism of f_ast}, $f_\ast|_{W(G,f)}$ is surjective, and so $f_\ast(\eta)\in Z$.
Thus $f_\ast$ preserves the decomposition $(W(G,f)/Z)\oplus Z$.
(1)
The restriction $f_\ast|_Z$ is periodic because of Proposition~\ref{prop:dim Z} and the fact that $[F]$ permutes the punctures of $S$.
Hence all the roots of $ \chi(f_\ast|_Z)$ are roots of unity.
Moreover, if $\mu$ is a root of $ \chi(f_\ast|_Z)$, then $\frac1\mu =
\bar{\mu}$ is also a root of $ \chi(f_\ast|_Z)$ because $ \chi(f_\ast|_Z)\in \mathbb{R}[x]$. This implies
that $\chi(f_\ast |_Z)$ is palindromic or anti-palindromic.
(2)
By the definition of $Z$, the skew-symmetric form induces a non-degenerate form on $W(G,f)/Z$.
This together with assertion (4) of Proposition~\ref{prop:properties of skew-sym. form} implies that the polynomial $\chi(f_\ast|_{W(G,f)/Z})$ is palindromic. It is an invariant of $[F]$ because it is the quotient of two polynomials, both of which have been proved to be invariants.
(3)
The homology polynomial $\chi(f_\ast |_{W(G,f)} )$ is either palindromic or anti-palindromic because it is a product of two polynomials, one of which is palindromic and the other of which is either palindromic or anti-palindromic.
\end{proof}
This concludes the proof of parts (2) and (3) of Theorem~\ref{thm:summarize}. Since the proof of part (1) was completed in
$\S$\ref{sec:1st polynomial invariant}, it follows that Theorem~\ref{thm:summarize} has been proved.
\section{Applications}\label{sec:applications} In this section we give several applications of Theorem~\ref{thm:summarize}. Corollary~\ref{cor:combinatorial invariants of [F]} summarizes the numerical class invariants of $[F]$ that, as a consequence of Theorem~\ref{thm:summarize}, can be computed from the train track $\tau$ by simple counting arguments.
Corollary~\ref{cor on p(x)} is an application to fibered hyperbolic knots in 3-manifolds.
Corollary~\ref{cor:powers} shows that our three polynomials behave very nicely under the passage $[F] \to [F^n]$.
\begin{cor} \label{cor:combinatorial invariants of [F]}
Under the same notation as in Theorem~\ref{thm:summarize},
let $n, v, v_o$ be the number of edges, vertices, odd vertices respectively in the
graph $G$.
Let $s$ $($resp. $r)$ be the number of punctures of $S$ which are represented by loops in $\tau$ with even $($resp. odd$)$ numbers of corners.
Let $g$ $($resp. $\tilde{g})$ be the genus of $S$ $($resp. its orientation
cover $\tilde{S})$.
\begin{enumerate}
\item The orientability $($or non-orientability$)$ of $\tau$ is a class invariant of $[F]$.
\item The degree of the homology polynomial is a class invariant. It is:
$$\deg \chi(f_\ast |_{W(G,f)}) =
\begin{cases}
n - v + 1 & \text{if $\tau$ is orientable,}\\
n - v + v_o & \text{if $\tau$ is not orientable.}
\end{cases}
$$
\item The degree of the puncture polynomial is a class invariant. It is:
$$\deg \chi(f_\ast |_Z) =
\begin{cases}
s-1 & \text{if $\tau$ is orientable,}\\
s & \text{if $\tau$ is not orientable.}
\end{cases}
$$
\item The degree of the symplectic polynomial is a class invariant. It is:
$$\deg \chi(f_\ast |_{W(G,f)/Z}) =
\begin{cases}
2g & \text{if $\tau$ is orientable,}\\
2(\tilde g - g)& \text{if $\tau$ is not orientable.}
\end{cases}
$$
\end{enumerate}
\end{cor}
\begin{remark}
Assertion (4) implies that the dilatation of $[F]$ is the largest real root of a polynomial of degree $2d$, where $2d \leq 2g$ $($resp. $2(\tilde g -g))$. However, this bound is not sharp because, as will be seen in Example~\ref{ex:k89}, the symplectic polynomial is not necessarily irreducible.
\end{remark}
\begin{proof}
Assertion (1) is clear. See Lemma~\ref{lem:dimW(G,f)} for (2).
In the orientable case each puncture is represented by a loop in $\tau$ with an even number of corners. This, together with Proposition~\ref{prop:dim Z}, implies (3).
To prove (4), let $v_e, v_p$ be the number of even vertices, partial vertices in $G$, respectively.
If $\tau$ is orientable, the assertion is clear.
If $\tau$ is non-orientable, the Euler characteristics of $S$ and $\tilde S$ are
\begin{eqnarray*}
\chi(S) &=& v_o + v_e + v_p - n = 2-2g-(r+s), \\
\chi(\tilde S) &=& v_o + 2v_e + 2v_p -2n=2-2 \tilde g - (r+2s).
\end{eqnarray*}
From Lemma~\ref{lem:dimW(G,f)} and Proposition~\ref{prop:dim Z}, we have:
$$\dim (W(G, f) / Z)= (n -v_e - v_p) -s = 2 (\tilde g - g).$$
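Indeed, the first equality holds because $\dim W(G,f)=n-v+v_o=n-v_e-v_p$ and $\dim Z=s$, while subtracting the first Euler characteristic identity from the second gives
$$
\chi(\tilde S)-\chi(S)=v_e+v_p-n=2g-2\tilde g-s,
$$
which rearranges to $(n-v_e-v_p)-s=2(\tilde g - g)$.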
Note that the sum of the degrees of the symplectic and puncture polynomials is the degree of the homology polynomial.
\end{proof}
\begin{cor}\label{cor on p(x)}
$(1) $
The symplectic polynomial $\chi(f_\ast|_{W(G,f)/Z})$ is an invariant of fibered hyperbolic links in $3$-manifolds.
$(2)$
Assume that $M$ is a homology $3$-sphere and $K \subset M$ is a fibered hyperbolic knot whose monodromy admits an orientable train track.
Let
$\chi(x)=\chi(f_\ast|_{W(G,f)/Z})$ and $\Delta_K(x)$ denote the Alexander polynomial of $K$. Then
$$
\chi(x) = \left\{
\begin{array}{ll}
\Delta_K(x) & \mbox{ if } f \mbox{ is orientation preserving, } \\
\Delta_K(-x) & \mbox{ if } f \mbox{ is orientation reversing. }
\end{array}
\right.
$$
\end{cor}
\begin{proof}
(1)
Let $L\subset M$ be a link.
Thurston proved that a $3$-manifold $M \setminus L$ which is fibered over $S^1$ is hyperbolic if and only if its monodromy $[F]\in {\rm Mod}(S)$ is pA. Combining his result with assertion (2) of Theorem~\ref{thm:2nd decomposition}, we obtain the first claim.
(2)
Let $F_\ast: H_1(S; \mathbb{R}) \to H_1(S; \mathbb{R})$ be the induced map.
By assertion (1) of Proposition~\ref{prop:dim Z}, the space $Z$ is trivial.
Lemma~\ref{lem:dimW(G,f)} tells us that $W(G,f)/Z \cong W(G,f) \cong H_1(S; \mathbb{R})$. Since ($\pm$)-gate is mapped to a ($\mp$)-gate if and only if $f : G \to G$ is orientation reversing, we have
$$f_\ast|_{W(G,f)/Z} = \left\{
\begin{array}{ll}
F_\ast & \mbox{ if } f \mbox{ is orientation preserving, } \\
-F_\ast & \mbox{ if } f \mbox{ is orientation reversing. }
\end{array}
\right. $$
The fact that $\Delta_K( \pm x)=\chi( \pm F_\ast)$ yields the statement.
\end{proof}
\begin{rem}
Corollary~\ref{cor on p(x)}-(2) can be seen as a refinement of Rykken's \cite[Theorem 3.3]{rykken}.
\end{rem}
Our final application is to prove that our three polynomials, $\chi(f_\ast |_{W(G,f)})$, \\ $ \chi(f_\ast|_{W(G,f)/Z})$ and
$\chi(f_\ast|_Z)$ behave in a very nice way under the passage $[F] \to [F^n]$:
\begin{corollary}\label{cor:powers}
Let $n > 0.$
If $f: G \to G$ represents a pA mapping class $[F]$, then $f^n$ represents $[F^n]$.
Suppose $\chi(f_\ast|_{W(G,f)/Z}) = \prod_i (x-z_i)$ and $\chi(f_\ast|_Z) = \prod_j (x-w_j)$, where $z_i, w_j \in \mathbb{C}$. Then
\begin{eqnarray*}
\chi(f^n_\ast|_{W(G,f^n)/Z}) &=& \prod_i (x-z_i^n), \\
\chi(f^n_\ast|_Z) &=& \prod_j (x-w_j^n),\\
\chi(f^n_\ast |_{W(G,f)}) &=& \prod_i (x-z_i^n)\prod_j (x-w_j^n).
\end{eqnarray*}
\end{corollary}
\begin{proof}
Note that the pA maps $[F]$ and $[F^n]$ act on the same surface and share the same graph $G$ and the associated train track $\tau$.
The direct sum decomposition in Theorem~\ref{thm:2nd decomposition} tells us that
$f^n_\ast|_{W(G,f^n)/Z} = (f_\ast|_{W(G,f)/Z})^n$ and $f^n_\ast|_Z = (f_\ast|_Z)^n.$
Since $z_i$ and $w_j$ are eigenvalues of $f_\ast|_{W(G,f)/Z}$ and $f_\ast|_Z$ respectively, the desired equations follow. The product decomposition for the homology polynomial follows from the fact that it is a product of the other two polynomials.
\end{proof}
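For instance, a minimal {\tt numpy} sketch checking this root-power behavior on the companion matrix of the sample polynomial $x^2-38x+1$ (which reappears in Example~\ref{ex:filling-curves} below):
\begin{verbatim}
# Check: if p(x) has roots z_i and p is the characteristic polynomial
# of M, then the characteristic polynomial of M^n has roots z_i^n.
import numpy as np

M = np.array([[0., -1.], [1., 38.]])  # companion matrix of x^2 - 38x + 1
n = 3
z = np.linalg.eigvals(M)              # the roots z_i
lhs = np.sort(np.linalg.eigvals(np.linalg.matrix_power(M, n)))
rhs = np.sort(z ** n)
assert np.allclose(lhs, rhs)          # eigenvalues of M^n are the z_i^n
\end{verbatim}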
\section{Examples}\label{sec:examples}
All of our examples were analyzed with the software package XTrain
\cite{pbexp}, with some help from Octave \cite{octave}. This package is an adaptation of the Bestvina-Handel algorithm to once-punctured surfaces. Our illustrations
show a train track $\tau$ (in the sense of \cite{BH}) embedded in
a once-punctured surface. Regardless of whether $\tau$ admits an orientation,
we equip individual edges of the graph $G$ with a direction for the purpose of
specifying the map $f: G\to G$; these directions coincide with an orientation when $\tau$ is
orientable. We remark that the limitation of the available software, at this time, to once-punctured surfaces means that the puncture polynomial in our examples is always either 1 or $x-1.$
A surface is shown as a fundamental domain in the
Poincar\' e model for $\mathbb H^2$, with the identification pattern on the boundary given
by the labels on edges. For example, a side crossed by an edge labeled
$a$ will be identified with the side crossed by the edge labeled
$\bar{a}$. These labels also indicate the direction of an edge; $a$ is
the first half, $\bar{a}$ the second one. The shaded regions in the pictures
contain all the infinitesimal edges associated with a vertex, as in
Figure~\ref{fig:vertex-types}. To recover the graph $G$, collapse each shaded
region to a point. In each example, we give the associated train track map on
edges of $G$. That map determines the maps on the vertices.
\begin{ex} \label{ex:filling-curves}
We illustrate Corollary~\ref{cor:homology polynomial is invariant} with a triplet of examples.
Sketches (1), (2) and (3) of Figure~\ref{fig:filling-curves} show three copies of a once-punctured genus 2 surface $S= S_{2, 1}$, each containing two simple closed curves $u_i$ and $v_i$. In all three cases $u_i\cup v_i$ fills $S$, that is, the complement of the union of the two curves is a disjoint union of discs. In all three cases the geometric intersection number is $i(u_i, v_i) = 6.$
\begin{figure}[htpb!]
\centerline{\includegraphics[scale=.5] {filling-curves.eps}}
\caption {Curves on a surface of genus 2.} \label{fig:filling-curves}
\end{figure}
We use our curves to define three diffeomorphisms of $S$ by the formula $F_i = T_{v_i}^{-1} T_{u_i}$ where $T_c$ denotes a Dehn twist about a simple closed curve $c$.
By a theorem of Thurston (Theorem 14.1 of \cite{FM}) there is a representation of the free subgroup of $\rm{Mod}(S)$ generated by $T_{u_i}$ and $T_{v_i}$ in ${\rm PSL}(2, \mathbb R)$ which sends the product
$F_i$ to the matrix $\scriptsize \left(\begin{matrix} 1 & 0 \\ -6 & 1 \end{matrix}\right)^{-1} \left(\begin{matrix} 1 & 6 \\ 0 & 1 \end{matrix}\right) =
\left(\begin{matrix} 1 & 6 \\ 6 & 37 \end{matrix}\right).$ By Thurston's theorem, $F_i$ is pA and its dilatation is the largest real root of the characteristic polynomial $x^2-38x+1$ of this matrix, that is $37.9737\dots$ in all three cases.
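This matrix arithmetic is easily reproduced, e.g. with a few lines of {\tt numpy} (a verification sketch using only the matrices displayed above):
\begin{verbatim}
# Product of the two twist matrices and its largest eigenvalue.
import numpy as np

T_v = np.array([[1, 0], [-6, 1]])    # image of the twist about v_i
T_u = np.array([[1, 6], [0, 1]])     # image of the twist about u_i
F = np.linalg.inv(T_v) @ T_u
print(F)                     # [[ 1.  6.]  [ 6. 37.]]
print(np.poly(F))            # [1. -38. 1.], i.e. x^2 - 38x + 1
print(np.linalg.eigvals(F))  # 37.9737... and its reciprocal
\end{verbatim}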
We ask two questions: Are $F_1,F_2,F_3$ conjugate in ${\rm Mod}(S)$, and if they are not conjugate can our invariants distinguish them?
See \cite{pbexp} for a choice of {\it standard curves} $a_0,d_0,c_0,d_1,c_1$ on $S$. To obtain the needed input data for the computer software XTrain \cite{pbexp} we must express our maps as products of Dehn twists about these curves. For simplicity, we denote the Dehn twist $T_{a_0}$ by the same symbol $a_0$. Using $A_i, B_i, C_i, D_i$ for the inverse Dehn twists of $a_i, b_i, c_i, d_i$, we find, after a small calculation, that:
\begin{eqnarray*}
F_1
& = &
c_0 d_0 d_1 A_0 C_1 c_1 d_1 c_0 d_0 A_0 D_0 C_0 D_1 C_1 c_1 a_0 D_1 D_0 C_0 \\
&&
c_1 b_0 (C_1 D_1)^6
c_1 d_1 c_0 d_0 a_0 D_0 C_0 D_1 C_1 (d_1 c_1)^6
B_0 C_1 \\
F_2
& = &
c_1 (d_1 c_1)^6 a_0
c_1 d_1 c_0 d_0 A_0 D_0 C_0 D_1 C_1
A_0 (C_1 D_1)^6 C_1
c_1 d_1 c_0 d_0 a_0 D_0 C_0 D_1 C_1 \\
F_3
& = & A_0D_0c_0d_0a_0B_1D_1c_0d_1b_1A_0D_0B_1D_1C_0d_1b_1d_0a_0B_1D_1 C_0d_1b_1A_0D_0C_0 d_0a_0(d_1c_1)^6
\end{eqnarray*}
Focussing on $F_1$ and $F_2$ first, XTrain tells us that the associated transition matrices are:
\begin{equation*}
T_1 =
\begin{bmatrix}
15&7&14&23&16 \\
10&6&10&16&11 \\
4&2&5&7&5 \\
2&1&2&3&1 \\
10&5&10&16&12
\end{bmatrix}
\ {\rm and} \ \
T_2 =
\begin{bmatrix}
6 & 10 & 5 & 6 & 10\\
5 & 11 & 5 & 7 & 10\\
5 & 10 & 6 & 6 & 10\\
6 & 12 & 6 & 7 & 12\\
5 & 10 & 5 & 5 & 11
\end{bmatrix}
\end{equation*}
\noindent In both cases $\chi(f_\ast) = {\det}(xI-T_i) = x^5 -41x^4 + 118 x^3 -118 x^2 + 41 x -1,$
with largest real root $37.9737\dots$, as expected.
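These determinants are easily checked by machine; for instance, with {\tt sympy} (a sketch for $T_1$; $T_2$ is handled identically):
\begin{verbatim}
# Characteristic polynomial of the transition matrix T_1.
import sympy as sp

x = sp.symbols('x')
T1 = sp.Matrix([[15, 7, 14, 23, 16],
                [10, 6, 10, 16, 11],
                [ 4, 2,  5,  7,  5],
                [ 2, 1,  2,  3,  1],
                [10, 5, 10, 16, 12]])
p = T1.charpoly(x).as_expr()
print(sp.expand(p))  # x**5 - 41*x**4 + 118*x**3 - 118*x**2 + 41*x - 1
print(sp.factor(p))  # (x - 1)**3*(x**2 - 38*x + 1)
\end{verbatim}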
XTrain tells us that the train tracks $\tau_1,\tau_2$ for our two examples are the ones illustrated in Figure~\ref{fig:filling-curves-tt}.
\begin{figure}[htpb!]
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{A}{$\overline a$}
\psfrag{B}{$\overline b$}
\psfrag{C}{$\overline c$}
\psfrag{D}{$\overline d$}
\psfrag{E}{$\overline e$}
\psfrag{i}{(1) $\tau_1$, orientable}
\psfrag{ii}{(2) $\tau_2$, non-orientable}
\psfrag{0}{$v_0$}
\psfrag{1}{$v_1$}
\centerline{\includegraphics[scale=.6]{filling-curves-tt-copy.eps}}
\caption {Train tracks for the maps $F_1,F_2$ of Example~\ref{ex:filling-curves}.} \label{fig:filling-curves-tt}
\end{figure}
With the train tracks $\tau_1,\tau_2$ in hand we can see, immediately, that $F_1$ and $F_2$ are inequivalent, because $\tau_1$ is orientable and $\tau_2$ is not. Also, by Lemma~\ref{lem:dimW(G,f)} the dimension of $W(G_i, f_i)$, which is the degree of the homology polynomial, is $4$ (resp. $3$) when $i=1$ (resp. $2$).
We compute the homology and symplectic polynomials of $F_1$ and $F_2$ explicitly.
For $F_1$, a basis of $W(G_1, f_1)$, $\{\eta_b,\eta_c,\eta_d,\eta_e\}$, was computed in Example~\ref{ex-of-basis-orientable}.
Set $e_1 = b, e_2=c, e_3=d, e_4=e, e_5=a$ and follow the instructions in $\S$\ref{sssec:basis4} for finding the matrix $A_1$ representing $(f_1)_\ast |_{W(G_1, f_1)}$. We obtain:
\begin{equation*}
A_1=
\scriptsize{
\begin{bmatrix}
1&0&0&0&0 \\
0&1&0&0&0 \\
0&0&1&0&0 \\
0&0&0&1&0
\end{bmatrix}
\begin{bmatrix}
6&10&16&11&10 \\
2&5&7&5&4 \\
1&2&3&1&2 \\
5&10&16&12&10 \\
7&14&23&16&15
\end{bmatrix}
\begin{bmatrix}
1&0&0&0 \\
0&1&0&0 \\
0&0&1&0\\
0&0&0&1 \\
1&-1&-1&1
\end{bmatrix}
}
=
\scriptsize{
\begin{bmatrix}
16& 0& 6& 21\\
6 & 1 & 3 & 9 \\
3 & 0 & 1 & 3\\
15& 0& 6& 22
\end{bmatrix}
}
\end{equation*}
Its characteristic polynomial is $1 - 40 x + 78 x^2 - 40 x^3 + x^4 = (-1+x)^2 (1-38x +x^2),$
which is the homology polynomial.
Corollary~\ref{cor:combinatorial invariants of [F]}-(4) tells us that the symplectic polynomial has degree $4$, hence it coincides with the homology polynomial.
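The product $A_1$ and the factorization of its characteristic polynomial can be verified mechanically (the analogous check applies to $A_2$ below); a {\tt sympy} sketch:
\begin{verbatim}
# Verify A_1 = P * T * B and the homology polynomial of F_1.
import sympy as sp

x = sp.symbols('x')
P = sp.Matrix([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0]])
T = sp.Matrix([[6,10,16,11,10],[2, 5, 7, 5, 4],[1, 2, 3, 1, 2],
               [5,10,16,12,10],[7,14,23,16,15]])
B = sp.Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1],[1,-1,-1,1]])
A1 = P * T * B   # [[16, 0, 6, 21], [6, 1, 3, 9], [3, 0, 1, 3], [15, 0, 6, 22]]
print(sp.factor(A1.charpoly(x).as_expr()))  # (x - 1)**2*(x**2 - 38*x + 1)
\end{verbatim}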
For $F_2$, we apply $\S$\ref{sssec:basis3} and set the non-orientable loop $\mathcal L_0 = a \cup v_0$ and subgraph $\mathcal L = \mathcal L_1= a \cup b \cup v_0 \cup v_1$. Edges $c, d, e$ are not in $\mathcal L$.
We reorder the edges and call $e_1=c, e_2=d, e_3=e, e_4=a, e_5=b$. With this order, basis vectors of $W(G_2, f_2)$ are
$\eta_a = (1, 0, 0, 1, 2)'$;
$\eta_c = (0, 1, 0, 0, 0)'$;
$\eta_e = (0, 0, 1, 0, -1)'$;
where ``prime'' means the transpose. Following the instructions in $\S$\ref{sssec:basis4}, we obtain the matrix $A_2$ representing $(f_2)_*|_{W(G_2, f_2)}$.
\begin{equation*}
A_2 =
\scriptsize{
\begin{bmatrix}
1&0&0&0&0 \\
0&1&0&0&0 \\
0&0&1&0&0
\end{bmatrix}
\begin{bmatrix}
6 & 6 & 10 & 5 & 10\\
6 & 7 & 12 & 6 & 12\\
5 & 5 & 11 & 5 & 10\\
5 & 6 & 10 & 6 & 10\\
5 & 7 & 10 & 5 & 11
\end{bmatrix}
\begin{bmatrix}
1&0&0 \\
0&1&0 \\
0&0&1 \\
1&0&0\\
2&0&-1
\end{bmatrix}
}
=
\scriptsize{
\begin{bmatrix}
31 & 6 & 0\\
36 & 7 & 0\\
30 & 5 & 1
\end{bmatrix}
}
\end{equation*}
The homology polynomial is
$\det(xI-A_2)=-1 + 39 x - 39 x^2 + x^3 =(-1 + x) (1 - 38 x + x^2)$, which means $\dim W(G_2, f_2)=3$. By Corollary~\ref{rem:onepuncture}, the symplectic polynomial has degree $2$, so it is $1 - 38 x + x^2$.
We turn to $F_3$. From XTrain, we learn that the homology polynomials for $F_2$ and $F_3$ are the same. However, observe that the curves $u_1,v_1,u_2,v_2,v_3$ are all non-separating on $S$, but $u_3$ is separating. From this it follows that there cannot be an element $F'\in {\rm Mod}(S)$ that maps $u_3$ to either $u_i$ or $v_i, \ i=1,2$. Thus $[F_3]$ is very likely not conjugate to either $[F_1]$ or $[F_2]$ in ${\rm Mod}(S)$.
\end{ex}
\begin{ex}[Figure~\ref{fig:k89}]\label{ex:k89}
This example shows that the symplectic polynomial need not be irreducible over the rationals. The monodromy of the hyperbolic knot $8_9$ \cite{knotsandlinks} is represented by the following train track map:
\begin{figure}[htpb!]
\begin{center}
\input{k8_9.psfrag}
\includegraphics[width=0.5\textwidth]{k8_9}
\end{center}
\caption{Example~\ref{ex:k89}: The knot $8_9$. Orientable train track.}
\label{fig:k89}
\end{figure}
$$
\begin{array}{lll}
a: (v_{3}, v_{2}) \mapsto e &
b: (v_{2}, v_{0}) \mapsto g &
c: (v_{1}, v_{3}) \mapsto b\\
d: (v_{1}, v_{2}) \mapsto bi &
e: (v_{0}, v_{1}) \mapsto hed &
f: (v_{2}, v_{1}) \mapsto d\\
g: (v_{1}, v_{3}) \mapsto fgh&
h: (v_{3}, v_{0}) \mapsto ic &
i: (v_{0}, v_{1}) \mapsto a
\end{array}
$$
We have that $\chi(f_\ast) = x^9 -2x^8 +x^7 -4x^5 +4x^4 -x^2 +2x -1$.
Fix a basis of ${\rm im}\,\delta$, $\{ v_0 - v_1, v_2-v_0, v_3-v_0\}.$ With respect to this basis, $f|_{\rm im \delta}$ is represented as:
$$\scriptsize
\begin{bmatrix}
0 & -1 & 0 \\
-1 & 0 & 0 \\
1 & -1 & -1
\end{bmatrix}
$$
whose characteristic polynomial is $x^3+x^2-x-1$.
Therefore $\chi(f_\ast |_{W(G,f)})$ is:
$$(x^9 -2x^8 +x^7 -4x^5 +4x^4 -x^2 +2x -1)/(x^3+x^2-x-1) = x^6-3x^5+5x^4-7x^3+5x^2-3x+1.$$
Proposition~\ref{prop:dim Z}-(1)
tells us that $\chi(f_\ast |_Z) = 1$, so that $\chi(f_\ast |_{W(G,f)/Z}) = x^6-3x^5+5x^4-7x^3+5x^2-3x+1$.
Since $f:G\to G$ is orientation preserving, by Corollary~\ref{cor on p(x)}-(2), we have $\Delta_{8_9}=x^6-3x^5+5x^4-7x^3+5x^2-3x+1$.
It further factors as $(x^3-2x^2+x-1)(x^3-x^2+2x-1)$.
It is interesting that these factors are no longer palindromic, and one contains the dilatation $\lambda$ as a root while the other contains $1/\lambda$.
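Both the division and the factorization over $\mathbb{Q}$ are routine to check; a {\tt sympy} sketch:
\begin{verbatim}
# Division by chi(f_*|im delta) and factorization of the quotient.
import sympy as sp

x = sp.symbols('x')
chi  = x**9 - 2*x**8 + x**7 - 4*x**5 + 4*x**4 - x**2 + 2*x - 1
chiD = x**3 + x**2 - x - 1
q, r = sp.div(chi, chiD, x)
print(q, r)          # x**6 - 3*x**5 + 5*x**4 - 7*x**3 + 5*x**2 - 3*x + 1, 0
print(sp.factor(q))  # (x**3 - 2*x**2 + x - 1)*(x**3 - x**2 + 2*x - 1)
\end{verbatim}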
\end{ex}
A similar analysis based on the hyperbolic knot $8_{10}$ shows that $\chi(f_\ast|_{W(G, f)/Z})$ need not even be symplectically irreducible.
It has a non-orientable train track and
$$
\chi(f_\ast|_{W(G,f)/Z})= (x+1)^2
(x^{10}-3x^9+3x^8-4x^7+5x^6-5x^5+5x^4-4x^3+3x^2-3x+1).
$$
The train tracks in the remaining examples are all non-orientable.
\begin{ex}[Figure~\ref{fig:penner}]\label{ex:penner}
This example was suggested to us by Robert Penner.
Figure~\ref{fig:pennersub1}
\begin{figure}[htpb!]
\begin{center}
\psfrag{c1}{$c_1$}
\psfrag{c2}{$c_2$}
\psfrag{c3}{$c_3$}
\psfrag{c4}{$c_4$}
\psfrag{c5}{$c_5$}
\subfloat[Positive (inverse) Dehn twists about $c_1, c_2, c_3$, ($c_4, c_5$).]{\includegraphics[width=0.45\textwidth]{penner_twists}
\label{fig:pennersub1}
}
\input{penner.psfrag}
\subfloat[Train track $\tau$]{\includegraphics[width=0.45\textwidth]{penner}
\label{fig:pennersub2}
}
\end{center}
\caption{Example~\ref{ex:penner}: Penner's pA map, non-orientable train track.}
\label{fig:penner}
\end{figure}
shows five curves $c_1, \ldots, c_5$ on the genus $3$ surface.
The example is the product of positive Dehn twists about $c_1, c_2, c_3$ and inverse twists about $c_4, c_5$.
The map $f$ acts as follows:
\input{penner.gr}
There is exactly one vertex (which is partial) and the train track is
non-orientable. The only vertex is fixed by the map and $\chi(f_\ast|_{{\rm im}\,\delta}) = x-1$.
We have $\dim W(G,f) = 5$; since a non-degenerate skew-symmetric form exists only in even dimensions, the form is
degenerate, and by Corollary~\ref{rem:onepuncture} $\dim Z=1$, which means $\chi(f_\ast|_Z) = x-1$.
The characteristic polynomial factors as
$\chi(f_*) = (x^4-11x^3+22x^2-11x+1)(x-1)(x-1).$
The symplectic polynomial $\chi(f_\ast|_{W(G,f)/Z}) = x^4-11x^3+22x^2-11x+1$ is irreducible, hence in this example it is necessarily the minimal polynomial of its dilatation. (Note that this was not the case in Example~\ref{ex:k89} above.)
If one orients the real and infinitesimal edges of the train track $\tau$ in Figure~\ref{fig:pennersub2} locally so that all orientations are consistent around the single partial vertex $v_0$, one sees that the loops $a,d$ and $f$ do not have globally consistent orientations, whereas the loops $b,c$ and $e$ do.
Since $G$ has no odd vertices, the orientation cover is just an ordinary double cover as illustrated in Figure~\ref{fig:penner-cover}.
\begin{figure}[htpb!]
\begin{center}
\input{mixed.psfrag}
\psfrag{a'}{$a'$}
\psfrag{b'}{$b'$}
\psfrag{c'}{$c'$}
\psfrag{d'}{$d'$}
\psfrag{e'}{$e'$}
\psfrag{f'}{$f'$}
\psfrag{1}{$\overline a'$}
\psfrag{2}{$\overline b'$}
\psfrag{3}{$\overline c'$}
\psfrag{4}{$\overline d'$}
\psfrag{5}{$\overline e'$}
\psfrag{6}{$\overline f'$}
\psfrag{0}{$v_0$}
\psfrag{!}{$v_1$}
\includegraphics[width=1\textwidth]{penner-Cover2}
\end{center}
\caption{ The orientation cover $\tilde \tau$ of $\tau$ in Figure~\ref{fig:penner}.}
\label{fig:penner-cover}
\end{figure}
Each edge, say $a \subset G$, lifts to two copies $a, a' \subset \tilde G$ and the vertex $v_0 \in G$ lifts to $v_0, v_1 \in \tilde G$.
We choose an orientation for $\tilde \tau$.
As claimed in Proposition~\ref{prop: orientation cover is orientable}, twin edges have opposite orientations.
Proposition~\ref{or-cover-matrix} implies that there are two covering maps: one orientation preserving and one orientation reversing.
The following is the orientation preserving train track map $\tilde f_{\rm op} : \tilde G \to \tilde G$:
$$
\begin{array}{lcllcl}
a: (v_1, v_0) &\mapsto & ada
&
a': (v_1, v_0) & \mapsto & a' d' a'
\\
b: (v_0, v_0) & \mapsto & dac'd'a'b
&
b': (v_1, v_1) & \mapsto & b' a d c a' d'
\\
c: (v_1, v_1) & \mapsto & c e' b' a d c
&
c' : (v_0, v_0) & \mapsto & c'd' a' bec'
\\
d: (v_0, v_1) & \mapsto & dac' d' a' b efc e' b'ad
&
d': (v_0, v_1) & \mapsto & d' a' bec' f' e' b'adca'd'
\\
e: (v_0, v_0) & \mapsto & efce'b'ad a dac'd' a' b e
&
e': (v_1, v_1) & \mapsto & e' b'adca'd' a' d' a' bec' f' e'
\\
f: (v_0, v_1) & \mapsto & efce'b'ad a dac'd' a' b e f
&
f': (v_0, v_1) & \mapsto & f' e' b'adca'd' a' d' a' bec' f' e'
\end{array}
$$
We order the edges $a, b, \cdots, f, a', b', \cdots, f'$. Then the transition matrix of $\tilde f_{\rm op}$ has the form
$\scriptsize
\begin{bmatrix}
A & B\\
B & A
\end{bmatrix}$
where
\begin{equation*}
A=
{\scriptsize
\begin{bmatrix}
2 & 1 & 1 & 2 & 3 & 3 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 2 & 1 & 1 & 1 \\
1 & 1 & 1 & 2 & 2 & 2 \\
0 & 0 & 0 & 1 & 2 & 2 \\
0 & 0 & 0 & 1 & 1 & 2
\end{bmatrix}}
\qquad \mbox{and} \qquad
B=
{\scriptsize
\begin{bmatrix}
0 & 1 & 0 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 1 & 1\\
0 & 1 & 0 & 1 & 1 & 1\\
0 & 1 & 0 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}}.
\end{equation*}
Note that $A+B$ is the transition matrix for $f : G\to G$.
The characteristic polynomial of the transition matrix of $\tilde f_{\rm op}$ is
$$(-1 + x)^4 (1 - 4 x + x^2) (1 - 3 x + x^2) (1 - 11 x + 22 x^2 -
11 x^3 + x^4),$$
and the symplectic polynomial is
$$
\chi(\tilde f_{\rm op} {}_\ast |_{W(\tilde G,\tilde f)/\tilde Z})=
(-1 + x)^2 (1 - 4 x + x^2) (1 - 3 x + x^2) (1 - 11 x + 22 x^2 -11 x^3 + x^4).
$$
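This factorization reflects a general fact: conjugating by $\left[\begin{smallmatrix} I & I\\ I & -I \end{smallmatrix}\right]$ turns $\left[\begin{smallmatrix} A & B\\ B & A \end{smallmatrix}\right]$ into ${\rm diag}(A+B,\,A-B)$, so the characteristic polynomial of the transition matrix is the product of those of $A+B$ and $A-B$ (for $\tilde f_{\rm or}$ below, of $B+A$ and $B-A$). A {\tt sympy} check:
\begin{verbatim}
# charpoly([[A,B],[B,A]]) = charpoly(A+B) * charpoly(A-B)
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2,1,1,2,3,3],[0,1,0,1,1,1],[0,0,2,1,1,1],
               [1,1,1,2,2,2],[0,0,0,1,2,2],[0,0,0,1,1,2]])
B = sp.Matrix([[0,1,0,1,1,1],[0,0,1,1,1,1],[0,1,0,1,1,1],
               [0,1,0,1,1,1],[0,0,1,1,1,1],[0,0,0,0,0,0]])
M = A.row_join(B).col_join(B.row_join(A))  # transition matrix of f_op
lhs = sp.expand(M.charpoly(x).as_expr())
rhs = sp.expand((A + B).charpoly(x).as_expr()
                * (A - B).charpoly(x).as_expr())
assert lhs == rhs
print(sp.factor(lhs))
\end{verbatim}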
The following is the orientation reversing train track map $\tilde f_{\rm or} : \tilde G \to \tilde G$.
$$
\begin{array}{lcllcl}
a: (v_1, v_0) &\mapsto & \overline a' \overline d'\overline a'
&
a': (v_1, v_0) & \mapsto & \overline a \overline d \overline a
\\
b: (v_0, v_0) & \mapsto & \overline d' \overline a' \overline c \overline d \overline a \overline b'
&
b': (v_1, v_1) & \mapsto & \overline b \overline a' \overline d' \overline c' \overline a \overline d
\\
c: (v_1, v_1) & \mapsto & \overline c' \ \overline e \overline b \overline a' \overline d' \overline c'
&
c': (v_0, v_0) & \mapsto & \overline c \overline d \overline a \overline b' \overline e' \overline c
\\
d: (v_0, v_1) & \mapsto & \overline d' \overline a' \overline c \overline d \overline a \overline b' \overline e' \overline f' \overline c' \overline e \overline b \overline a' \overline d'
&
d': (v_0, v_1) & \mapsto & \overline d \overline a \overline b' \overline e' \overline c \overline f \overline e \overline b \overline a' \overline d' \overline c' \overline a \overline d
\\
e: (v_0, v_0) & \mapsto & \overline e' \overline f' \overline c' \overline e \overline b \overline a' \overline d' \overline a' \overline d' \overline a' \overline c \overline d \overline a \overline b' \overline e'
&
e': (v_1, v_1) & \mapsto & \overline e \overline b \overline a' \overline d' \overline c' \overline a \overline d \overline a \overline d \overline a \overline b' \overline e' \overline c \overline f \overline e
\\
f: (v_0, v_1) & \mapsto & \overline e' \overline f' \overline c' \overline e \overline b \overline a' \overline d' \overline a' \overline d' \overline a' \overline c \overline d \overline a \overline b' \overline e' \overline f'
&
f': (v_0, v_1) & \mapsto & \overline f \overline e \overline b \overline a' \overline d' \overline c' \overline a \overline d \overline a \overline d \overline a \overline b' \overline e' \overline c \overline f \overline e
\end{array}
$$
We observe that the transition matrix for $\tilde f_{\rm or}$ is
$
\scriptsize
\begin{bmatrix}
B & A\\
A & B
\end{bmatrix}.
$
Its characteristic polynomial is
$$(-1 + x)^2 (1 + x)^2 (1 + 3 x + x^2) (1 + 4 x + x^2) (1 - 11 x +
22 x^2 - 11 x^3 + x^4), $$
and the symplectic polynomial is
$$
\chi(\tilde f_{\rm or} {}_\ast|_{W(\tilde G,\tilde f)/\tilde Z})=
(1 + x)^2 (1 + 3 x + x^2) (1 + 4 x + x^2) (1 - 11 x + 22 x^2 - 11 x^3 + x^4).
$$
The dilatation cannot distinguish the pA maps $F:S\to S$, $\tilde F_{\rm op}: \tilde S\to \tilde S$ and $\tilde F_{\rm or}: \tilde S\to \tilde S$, but our symplectic polynomial can distinguish the three.
It seems to be an open question to describe all the ways of constructing pA maps with a given dilatation.
\end{ex}
\begin{ex}[Figure~\ref{fig:evenodd}]\label{ex:evenodd}
The following map shows that even, odd, and partial vertices can
coexist in the same (non-orientable) train track:
\begin{figure}[htpb!]
\begin{center}
\input{mixed.psfrag}
\includegraphics[width=0.5\textwidth]{mixed}
\end{center}
\caption{Example~\ref{ex:evenodd}: A train track with even, odd, and partial vertices.}
\label{fig:evenodd}
\end{figure}
\input{mixed.gr}
The characteristic polynomial factors as
$$
\chi(f_\ast|_{W(G,f)/Z}) \chi(f_\ast|_Z) \chi(f_\ast|_{{\rm im}\,\delta})=
(x^6 - 3 x^5 + x^4 - 5 x^3 + x^2 - 3 x + 1) (x-1) (x^3-x^2-x+1).
$$
The factor $\chi(f_\ast|_{{\rm im}\, \delta})=(x^3-x^2-x+1)$ is the characteristic
polynomial of the matrix
$$
\begin{bmatrix}
1&0&0\\
0&0&1\\
0&1&0
\end{bmatrix},
$$
which describes how non-odd vertices $v_1, v_2, v_3$ are permuted by $[F]$.
\end{ex}
\bibliographystyle{amsplain}
The notion of integrability arose in celestial mechanics and referred to
systems for which the equation of motion can be solved in closed analytic form
without the necessity to resort to controlled approximations (perturbation
theory). The prototype model was the Kepler two-body system, whereas more than
two celestial bodies lead to non-integrable situations which can only be
approximated (in principle with unlimited numerical accuracy). For the
non-integrable case the terminology does not just mean that no analytic
solution was found, but rather points to the existence of a proof that such a
solution does not exist.
As the mathematical sophistication evolved, physicists and mathematicians
developed model-independent criteria for integrability. A modern definition
which is sufficiently general to cover classical mechanics is in terms of a
\textit{complete set of conservation laws in involution} \cite{Arn}.
This definition was extended from mechanics to include \textit{classical field
theory} where, according to Noether's theorem, a symmetry in the Lagrangian
setting leads to a conserved current and integrability means that there exists
an infinite complete set of conserved currents in involution. Quantum
mechanics is basically what is obtained from classical mechanics by
"quantization"; the fact that this process is not an isomorphism but a more
artistic kind of correspondence (the problem of ordering of operator products)
did not affect the inference of quantum integrability via quantization from
its classical counterpart. The best known illustration is the quantum analog
of the Kepler problem i.e. the hydrogen atom. In this case the conservation
laws which lead to integrability can be elegantly presented in terms of a
spectrum-setting $O(4,2)$ group symmetry. Anomalies which could prevent
conservation laws from being inherited from their classical counterparts usually
need the presence of infinitely many degrees of freedom and hence occur
predominantly in quantum field theory (QFT).
There are many models of QFT which have remained outside the range of
Lagrangian quantization because no Lagrangian which fits them has been found;
in particular most of the so-called d=1+1 factorizing models, for which
explicit expressions for formfactors of quantum fields were constructed within
the bootstrap-formfactor program, remained without a classical Lagrangian name.
Their given name refers to internal symmetries or to analogies with lattice
models. An example for such a situation is the scaling Z(N) Ising model in
\cite{BFK}, as the authors emphasize in the introduction of their paper.
Factorizing models constitute a nontrivial class of models with generally
noncanonical short distance behavior which owe their existence proof to
operator-algebraic methods. These methods are significantly different from the
quantum mechanical and measure-theoretical functional methods used by Glimm
and Jaffe \cite{G-J} in the 60s for establishing the existence of certain
canonical (superrenormalizable) models in the Lagrangian setting; in fact the
new method is based on \textit{modular localization} \cite{Lech2}.
Among the few integrable QFT which allow a Lagrangian presentation, the
Sine-Gordon model is the most prominent. The first indication about its
integrability came from the famous quasiclassical observations on the
Sine-Gordon particle spectrum by Dashen-Hasslacher and Neveu \cite{DHN}. But
even in cases like this, where the Lagrangian quantization setting even
provides a renormalized perturbation series, the latter still carries the
stain of divergence of all QFT perturbation series; Lagrangian quantization
"baptizes" a model with a name from classical field theory and permits a
perturbative expansion, but it does not lead to a proof of the existence of a
QFT behind this formalism, so that the problem of mathematically controlled
approximations cannot even be formulated.
Despite the observational success of the lowest terms in powers of the
coupling strength, one does not even know whether the power series is at least
an asymptotic approximand in the limit of vanishing coupling; the numerical
successes of renormalized QED and the subsequent observational achievements of
the standard model have no direct bearing on the mathematical consistency of
what is being approximated. Compared with other areas of theoretical physics
this is quite unique and shows, that in spite of almost 90 years which passed
since it was discovered, QFT still is not anywhere near its closure.
Recent insight into integrable d=1+1 factorizing models did not result from
refinements of the classical parallelism of the Lagrangian- or functional-
quantization setting. Rather it has been revealed through representation
theory, a method which was first introduced in 1939 by Wigner as a means to
obtain an intrinsic systematic classification of wave function spaces of
relativistic particles instead of having to cope with an ever-growing
confusing zoo of field equations, many leading to equivalent descriptions. In
the context of QFT, representation theoretical methods were first used in
connection with current algebras and chiral conformal QFTs. The discovery of
integrable QFTs with a particle interpretation started with
Dashen-Hasslacher-Neveu's \cite{DHN} quasiclassical observations about the
Sine-Gordon particle spectrum and took an interesting turn after it was
realized that it could be viewed as an exact realization of the "nuclear
democracy" particle spectrum which originated in the bootstrap setting of
scattering functions in d=1+1 \cite{STW}. The second important step was the
discovery of the representation theoretical bootstrap-formfactor setting
\cite{K-W}. This project reached its present perfection after it was realized that:
\begin{itemize}
\item Unitary and crossing symmetric elastic two-particle scattering functions
which obey the Yang-Baxter consistency relations can be classified
\cite{BKTW}\cite{K} and lead to combinatorial factorization formulas for
n-particle elastic S-matrices. Zamolodchikov's formal algebraization added a
useful tool \cite{Zam} to the implementation of this (originally analytic)
classification project.
\item The bootstrap-formfactor program associates to each such scattering
function\footnote{In the presence of backward scattering and/or inner symmetry
indices the scattering function is a matrix function which fulfills the
Yang-Baxter equation \cite{Ba-Ka}.} explicitly computable formfactors of local
covariant fields from the local equivalence class (Borchers class) of
(composite) fields. The relation of the scattering function to an associated
QFT is unique (uniqueness of the solution of the inverse scattering problem
within the bootstrap-formfactor setting).
\item The creation and annihilation operators of the Zamolodchikov-Faddeev
(Z-F) algebra turned out to be the Fourier components of covariant
vacuum-polarization-free generators (PFGs) of interacting wedge-localized
algebras \cite{Sch1}\cite{Sch2}. They are special objects within the theory of
"modular localization" which permits to "emulate" wedge-localized products of
free fields \textit{inside the associated wedge-localized interacting algebra}.
\item The action of translations on a wedge-localized algebra together with
that of the modular reflection (in d=1+1 the TCP operation) generate a net of
right and left directed wedge algebras whose double cone intersections are
compactly localized algebras which act cyclic and separating on the vacuum
\cite{Lech1}\cite{Lech2}. In these papers the new constructive setting based
on modular localization obtained its first mathematical formulation. Combined
with the control of phase space degrees of freedom in the form of
\textit{modular nuclearity,} this approach led to the first existence proof of
QFTs with non-canonical short distance behavior within the algebraic setting
of QFT.
\item A sharp division between \textit{temperate} and \textit{non-temperate}
vacuum-polarization-free generators (PFGs) of wedge-localized algebras gives
rise to a dichotomy of integrable and non-integrable QFTs.
\end{itemize}
The integrable PFGs with their Z-F algebraic structure, apart from being
nonlocal (i.e. wedge- instead of point- local), relate particles and fields in
the standard way. Non-integrable PFGs on the other hand have translation
non-invariant domain properties which are radically different from those which
one meets in the standard formulation of fields or even with operators used in
nonlocal or noncommutative extensions \cite{BBS}. The much weaker relation of
particles with localized algebras and fields resulting from non-temperate PFGs
accounts for the difficulties one faces in studying non-integrable models of
QFT. It explains in particular why even 8 decades after its discovery, a
mathematically controlled construction of non-integrable models of QFT
remains an open problem \cite{E-J}. Although our presentation in section 5
will not solve these problems, it does try to place them into sharper focus
which may be useful for future constructive attempts.
The DHR superselection theory \cite{Haag}, which constructs a full QFT from
its local observables algebras, leads to a different kind of "partial"
integrability which is of a more kinematical kind. In theories which are
non-integrable in the previous (dynamical) sense, the superselection strucure
of observable algebras can be described in terms of an exactly computable
combinatorial algebraic structure, even though the observable algebra itself
remains non-integrable. With some hindsight one can find out which integrable
algebraic superselection structure belongs to a specific non-integrable net of
local observable algebras. Since Lagrangian quantization leads directly to the
perturbation theory of the full field algebra (whose fixed-point algebra under
a compact inner symmetry group action defines the observable algebra), this
distinction between observables and their superselection structure is mainly
of conceptual, but hardly of practical interest.
The situation becomes more interesting for conformal QFT, in which case the
separation of observables from the superselection charge-carrying fields
arises in a natural way through the Huygens principle, according to which
observables commute for space- \textit{and time-like separations}. In the
context of QFT this limits the notion of observable algebras to algebras which
are generated by pointlike fields with integer scale dimension, whereas fields
which carry anomalous dimensions are considered as superselected charge
carriers similar to the DHR superselection of massive QFT. This attaches to
the anomalous dimensions a "Huygens superselection" aspect.
Indeed, the anomalous scale spectrum appears in the spectrum of the center of
the universal covering of the conformal group, and unlike the anomalous spin
spectrum in the Bargman-Wigner representation theory of the Lorentz group in
d=1+2, it is accompanied by the geometric covering of the (compactified)
Minkowski space. The latter consists of infinitely many "heavens" above and
"hells" below \cite{Lu-Ma} and hence the anomalous dimensions which control
the short distance behavior also show up in phase factors which arise from the
\textit{conformal rotation} in passing from one Minkowski spacetime copy to
the next.
In an appropriately extended terminology the anomalous dimensions together
with the braid-permutation group $\mathbf{BP}_{\infty}~$\cite{Fenn} structure
belong to the kinematical aspects of a conformal theory. In d=1+1 the
conformal group and the observable algebra split into right and left parts and
the $\mathbf{BP}_{\infty}~$group simplifies and becomes the braid group
$\mathbf{B}_{\infty},$ whereas the observable algebras (current,
energy-momentum) reduce to well-studied infinite Lie-algebras. The
combinatorial braid group algebra has interesting connections with Vaughan
Jones' \cite{Jones} mathematical work on subfactors. In the last section this
situation will be presented in more detail.
The natural setting for the modular theory is that of \textit{local quantum
physics} (LQP) \cite{Haag} which appeared in a rudimentary form already in
1957 in Haag's first attempt to formulate QFT in an intrinsic way \cite{Lille}
i.e. without leaning on the parallelism to the classical world of Lagrangian
fields. This formulation competed with the (at that time already existing)
framework by Wightman \cite{Wight} in which for the first time quantum fields
were identified with operator-valued Laurent Schwartz distributions. The
understanding of the inherently singular behavior of quantum fields as
compared to their classical counterparts was the key for "taming" the
ultraviolet divergence problem, which before almost caused the abandonment of
QFT. Haag's LQP approach attributes to these fields the role of (singular)
generators of a net of localized operator algebras; another helpful analogy is
that to coordinatizations in geometry.
Both settings were strongly influenced by Wigner's 1939 particle
classification in terms of group representation concepts for the inhomogeneous
Lorentz group (the Poincar\'{e} group $\mathcal{P}$) as an alternative to the
quantization of classical field equations. Haag's basic idea was simple and
almost naive: measurements of local observables in a spacetime region
$\mathcal{O}$ which have a certain duration in time (the duration of the
activation of a particle counter) and a spatial extension within $\mathcal{O}$
should be members of an ensemble of operators forming an operator algebra
$\mathcal{A(O}).$ An experimentalist does not have to know the internal
structure of his particle- and radiation- counters, his only means to increase
precision is to improve their spacetime localizing sensitivity as well as
using several of them in coincidence and anti-coincidence arrangements. For
extracting scattering data from QFT it is not necessary to know the detailed
properties of an individual $\mathcal{O}$-localized observable; the
information that it belongs to a localized ensemble $\mathcal{A(O})$, together with
the identification of its superselected charge, suffices \cite{Haag}.
The first test of this idea came from the derivation of the
Lehmann-Symanzik-Zimmermann (LSZ) scattering formula from the large-time
behavior of operators used in scattering theory in which only their
affiliation to a localized ensemble is used; differences between individual
operators in $\mathcal{A(O})$ only show up in adjustable numerical
normalization factors of their asymptotic limits (the \textit{insensitivity of
the S-matrix against local changes} \cite{Haag}). This viewpoint of
\textit{placing ensembles in form of localized operator algebras into the
center stage} has shown its soundness in numerous special situations; last not
least it accounts for the fact that there is no necessity for adding the
concept of probability to QFT since, in contrast to Heisenberg's QM to which
Born had to add the probability interpretation for individual events, the
thermal aspect together with its associated stochastic probability is
intrinsic in QFT. In fact it is a consequence of the mathematical description
of causal localization in the form of \textit{modular localization} which is
the foundational principle of LQP.
Born's spatial localization in terms of projectors from the spectral
decomposition of the selfadjoint position operator leads to a localization for
which the spatial restriction of the vacuum state (in the second quantized
description of Schr\"{o}dinger's QM) to the observables inside a localization
region does not acquire any additional property beyond those which it had as
an expectation value on the global observables. This is radically different in
LQP where the restriction of the vacuum to a spacetime localized algebra
$\mathcal{A(O})$ acquires a thermal manifestation which the \textit{global
vacuum} as an expectation value \textit{on all observables} did not have
\cite{E-J}. In mathematical terms, the restriction of the pure global vacuum
to localized variables is an impure thermal state on the causally closed
"world" associated with a local subalgebra of observables. This phenomenon is
best described in the setting of modular operator theory which among other
things attaches a \textit{modular Hamiltonian} $H_{mod}~$which is uniquely
determined by the pair ($\mathcal{A(O)},\Omega$)\ where $\Omega$ denotes the
global vacuum. The restricted vacuum is a thermal state which obeys the
KMS\footnote{The KMS relation is an analytic relation which is fulfilled by
tracial Gibbs states. It survives in the thermodynamic limit when the tracial
characterization is lost \cite{Haag}.} relation with respect to $H_{mod}.$
This property shows most clearly the conceptual rift between QM and QFT.
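For orientation we recall the form of this relation in one standard convention (see \cite{Haag}): with $\tau_{t}={\rm Ad~}e^{iH_{mod}t}$ the evolution generated by the modular Hamiltonian and $\omega$ the restricted vacuum state,
$$
\omega\left( A\tau_{t+i}(B)\right) =\omega\left( \tau_{t}(B)A\right)
,\ \ A,B\in\mathcal{A(O)}
$$
i.e. $\omega$ is a KMS state at inverse temperature $\beta=1$ in the modular "time"; for a wedge, where $H_{mod}$ is $2\pi$ times the boost generator, this is the thermal statement behind the Unruh effect.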
In the \textit{Einstein-Jordan conundrum} Einstein and Jordan \cite{E-J} came
close to this insight; the observation that Jordan's subvolume
fluctuations in QFT are indistinguishable from Einstein's thermal black body
system as a consequence of "localization-thermality" which results from the
restriction of the pure vacuum state could have clinched the conundrum; but
the situation in 1925 was too much under the spell of the conceptual
revolution caused by the freshly discovered QM to permit the perception of
this subtle difference; so its full understanding had to await more than 8
decades. The phenomenon of vacuum polarization, which was discovered some
years later by Heisenberg, has a strong relation to the E-J
conundrum\footnote{In fact in a private correspondence Heisenberg challenged
Jordan to account for a logarithmic divergence resulting from the vacuum
polarization cloud at the endpoints of his localization interval \cite{Du-Ja}.} since the formation of vacuum polarization clouds at the boundary of a
localization region (leading to area proportionality of \textit{localization
entropy}) and the thermal property of the reduced vacuum state are two sides
of the same coin.
There can be no doubt that Einstein, who had a lifelong philosophical problem
with Born's assignment of probabilities to individual events in QM, would have
embraced the ensemble probability from thermalization of a subvolume
restricted vacuum state in QFT. After all it was the subvolume fluctuations in
the statistical mechanics at finite temperature which led him to the
corpuscular aspect of light \cite{E-J}. But time was not yet ripe to
understand that subvolume restriction in QFT, in contrast to QM, leads to a loss
of purity of the restricted vacuum state which results in thermal behavior
without a "heat bath" being present.
The existence of a thermal manifestation of causal localization is an
unavoidable consequence of Haag's quantum adaptation of the Faraday-Maxwell
"Nahewirkungsprinzip" \cite{Haag} (together with Einstein's refinement of
causality in Minkowski spacetime). In fact causal locality and its more recent
mathematical formulation as \textit{modular localization} became the
cornerstone of Haag's local quantum physics (LQP) and the thermal
manifestation of the subvolume restricted vacuum is a consequence.
The perception of the presence of this intrinsic ensemble probability could
have vindicated Einstein's philosophical resistance against Born's
probability, so that many physicists nowadays would not think of Einstein's
reluctance as being the stubborn resistance of an old man against the tides of
the time. Einstein may have even accepted Born's assignment of probability to
individual events if it were possible to show that within a better conceptual
understanding of the limiting relation of QM with the more fundamental local
QFT, the probabilistic manifestation of a localized ensemble passes to Born's
quantum mechanical probability assigned to individual\footnote{QM lacks the
ensemble aspect which results from modular localization; The Born-localization
resulting from the projectors of the spectral decomposition of the has does
not have an intrinsic ensemble-probabiliy interpretation; Born had to
postulate it..} events even though causal localization and vacuum polarization
disappear in such a limit.
As with most ideas which do not result from the extension of an already existing
formalism but rather emerge from philosophical contemplations about physical
principles, their content is usually more subtle than that of the supporting
intuitive arguments. Already for the localization inside a non-compact
spacetime region as big as a Rindler wedge only a uniform acceleration in the
wedge direction can keep a particle counter (observer) inside a non-compact
wedge region (the Unruh Gedankenexperiment \cite{Unruh}); and when it comes to
the localization inside a compact causally closed region (e.g. the double cone
completion of a spatial ball) it becomes impossible in models with a mass-gap
to visualize the realization of modular localization in spacetime
geometric/physical terms since the region preserving modular group becomes an
abstract automorphism of the localized operator algebra.
The thermal manifestation of modular localization is an unavoidable
consequence of Haag's adaptation of classical causality (Maxwell equations) to
the requirements of quantum theory, and hence it is fully present in QFT. The
impurity of the state resulting from the restriction of the pure global vacuum
state to the ensemble of observables contained in a localized (without loss of
generality causally closed) algebra and its description in terms of a KMS
state associated with a "modular Hamiltonian" is a mathematical fact,
notwithstanding the difficulties to find direct observational
verifications\footnote{There is an ongoing discussion whether thermal
radiation effects in Unruh situations can be seen in subnuclear laboratory
experiments.} which underlines the subtlety of causal localization in QT.
It is not important whether physical principles can be directly verified in terms
of existing hardware or whether their intuitive support under closer
conceptual scrutiny appears somewhat metaphoric; what is important is primarily the
mathematical precision of the formulation and the wealth of the theoretical
and observational consequences. For the Unruh Gedankenexperiment, the
\textit{dependence of the state of the observer} attributes a certain aura of
\textit{fleetingness}, but this is to some extent overcome if the horizon is
an \textit{intrinsic property of the spacetime metric in form of an event
horizon}. The curvature contained in the spacetime metric does not create the
thermal aspects of Hawking radiation, it rather replaces the
observer-dependent fleeting aspect of the causal horizon by a more robust
observer-independent \textit{event horizon} and in this way favors the
macroscopic detection of localization-thermality in astrophysical observations.
In contrast to the volume-proportional heat bath entropy, the
\textit{localization entropy~of quantum matter }is proportional to the
dimensionless area $A/\varepsilon^{2}\rightarrow\infty,~\varepsilon
\rightarrow0$ where $\varepsilon$\textit{~}is the surface "roughness" i.e. the
thickness of the layer which is conceded to the attenuation of the vacuum
polarization cloud. In the Bekenstein conjecture the finite black hole entropy
(formally obtained by replacing the parameter $\varepsilon~$by the Planck
distance) refers to the hypothetical degrees of freedom of a future quantum
theory of gravity. We believe that this is related to 't Hooft's brickwall
picture \cite{brick}. Such a surface divergence in the limit of sharp
localization appeared for the first time in Heisenberg's treatment of vacuum
polarization caused by localization of conserved (dimensionless) charges
\cite{E-J}.
It is not surprising that the subtlety of the principle of causal localization
has led to misunderstandings in particular in string theory (last section).
Although the correction of deep errors stemming from misunderstandings is a
powerful method to highlight the importance of a principle, we will not follow
such a path here and instead refer to other publications \cite{cau}\cite{foun}\cite{response}.
There remains the question why these important localization properties of QFT
have not been seen already long ago. The answer is that they did not play an
important role in the Tomonaga-Schwinger-Feynman-Dyson discovery of
renormalized perturbation theory; the essential step from the old perturbation
theory (in textbooks by Heitler and Wentzel), which had insurmountable
problems with vacuum-polarization, to the modern formulation was the
\textit{implementation of a covariant formulation}. Covariance under the
Poincar\'{e} automorphisms of Minkowski spacetime is closely related to causal
localization, but it generally leads to stronger results.
Sections 2 and 4 prepare the ground for "modular localization" which is
used in section 5 for a characterization of integrability/non-integrability in
terms of properties of generators of wedge-localized operator algebras. Since
modular localization is a quite subtle recent concept, and not every reader is
prepared to invest in conceptual/mathematical ideas without some indication
about their physical relevance for important open problems of particle
physics, it may be helpful to indicate what one hopes to achieve from a
reformulation of the BRST gauge theory in terms of string-localized
vector potentials (section 3); this new view may even lead to revisions of the
Higgs issue.
\section{The modular localization approach of QFT}
There are two routes to modular localization, a mathematical one and a more
physical-conceptual path. The mathematical route starts from the
Tomita-Takesaki modular theory of operator algebras and makes contact with QFT
by applying this to Haag's LQP algebraic formulation of QFT in terms of
spacetime-indexed nets of operator subalgebras \cite{Haag}. An important step
was the recognition by Bisognano and Wichmann \cite{Bi-Wi}\cite{Mund} that the
abstract Tomita-Takesaki modular group $\Delta_{\mathcal{A}}^{it}$ and the
modular reflection $J$ acquire a direct geometric-physical meaning in terms of
particle physics concepts in case of a wedge-localized operator
subalgebra\footnote{A general wedge results from $W_{0}=\left\{ \left\vert
x_{0}\right\vert <x_{1}\right\} ~$by applying Poincar\'{e} transformations.}
$\mathcal{A(}W),$ whereas the modular objects for compact localized algebras
$\mathcal{A(O})$ can in principle be determined by representing the region of
interest (rather, its causal completion) and the associated algebras using
intersections of wedges and wedge-localized algebras \cite{Summers}.
Modular theory began in the middle of the 60s as a joint venture between
mathematics and physics when, at a conference in Baton Rouge \cite{Borch},
mathematicians interested in operator algebras (Kadison, Tomita, Takesaki) met
physicists (Haag, Hugenholz, Winnink) who just had finished work on an
intrinsic formulation of statistical \textit{quantum mechanics of open
systems} which avoids the non-covariant box-quantized Gibbs states by starting
directly in the thermodynamic infinite volume limit \cite{Haag}.
In this work they used an older analytic trick which Kubo, Martin and
Schwinger had introduced to avoid computing Gibbs traces. In the open system setting this
acquired a fundamental conceptual significance; in this way the KMS property
became part of the joint mathematics/physics heritage, combining modular
theory of operator algebras (where it led to Connes famous classification work
about von Neumann algebras) with statistical mechanics of open systems.
Whereas the box-quantized thermal Gibbs states always stay within the setting
of the standard (type I$_{\infty})~$quantum mechanical algebras $B(H)$ of all
bounded operator on a Hilbert space, the thermodynamic limit\footnote{The
tensor factorization of type I$_{\infty}~$"thermofield theory" breaks down and
the algebra changes its type.} changes the algebraic type into what afterwards
in Connes classification was called the unique hyperfinite type III$_{1}$ von
Neumann factor algebra.
At the beginning of the 60s Araki \cite{Haag} had already shown that such
algebras, which have quite different properties from those met in QM, occur in
the form of local operator algebras in QFT. Together with the statistical work
on open systems this was suggestive of a conceptual connection of thermal
behavior and localization but at that time this remained unnoticed. It came as
quite a surprise when a decade afterwards Bisognano and Wichmann \cite{Bi-Wi}
discovered that the \textit{monad} \footnote{The short name which we will use
for the (up to isomorphism unique) aforementioned operator algebra. Besides
the standard algebra $B(H)$ of all bounded operators this is the only type of
operator algebra which one encounters in continuous quantum systems
\cite{Jakob}.}, realized as a wedge-localized subalgebra $\mathcal{A}(W),$ has
modular data which have well-known physical/geometrical significance,
including the KMS property. As a result it became clear that localization,
thermalization, and the generation of vacuum polarization clouds are
inexorably intertwined.
Using the representation theoretical access to modular localization, we begin
our presentation with extracting modular objects from Wigner's classification
of irreducible positive energy representations of the Poincar\'{e} group. This
concept was not available to Wigner who realized that the Born localization
based on adapting the position operator to the relativistic inner product (the
\textit{Newton-Wigner localization}) was not the right concept to connect his
particle representation theory with QFT. This may explain why Wigner, after
his important contribution to QFT immediately after its discovery, maintained
a lifelong critical distance with respect to its later developments. Whereas
the Born localization is extrinsic\footnote{Born localization entered QM
through his famous probabilistic interpretation of (the Born approximation of)
the scattering amplitude i.e. the cross section. This was afterwards extended
to the position operator and its associated wave functions.} to QM, the
modular localization and its thermal-probabilistic aspect is intrinsic, i.e.
it only uses concepts from the representation theory of the Poincar\'{e}
group. For matters of notational simplicity we restrict our presentation to
the case of a scalar massive particle.
It has been realized, first in a special context in \cite{Sch1} and then in a
general mathematically rigorous setting which covers all positive energy
representations in \cite{BGL} (see also \cite{Fa-Sc}\cite{MSY}), that there
exists a natural localization structure on the Wigner representation space for
any positive energy representation of the proper Poincar\'{e} group. The
starting point is an irreducible $(m>0,s=0)$ one-particle representation of
the Poincar\'{e} group on a Hilbert space $H$\footnote{Since positive energy
representations are completely reducible this works for all such
representations, not only irreducible ones.} of wave functions with the inner
product
\begin{equation}
\left( \varphi_{1},\varphi_{2}\right) =\int\bar{\varphi}_{1}(p)\varphi
_{2}(p)\frac{d^{3}p}{2p_{0}}
\end{equation}
For other (higher spin, massless) representations the relation between the
momentum space wave function on the mass shell (or light cone) and the
covariant wave functions is more involved as a consequence of the presence of
intertwiners $u(p,s)$ which connect the unitary with the covariant
representations. Selecting a wedge region $W_{0}=\{x\in\mathbb{R}^{d}:x^{d-1}>\left\vert x^{0}\right\vert \}$ one notices that the unitary
wedge-preserving boost $U(\Lambda_{W}(\chi=-2\pi t))=\Delta^{it}$ commutes
with the antiunitary reflection $J_{W}$ on the edge of the wedge (i.e. along
the coordinates $x^{d-1}-x^{0}$). The distinguished role of the wedge region
is that it produces a commuting pair of boost and antiunitary reflection. This
has the unusual (and perhaps even unexpected) consequence that the closed,
antiunitary operator (the Tomita S-operator)
\begin{align}
S_{W} & :=J_{W}\Delta^{\frac{1}{2}},~~S_{W}^{2}\subset1\\
& since~~J\Delta^{\frac{1}{2}}J=\Delta^{-\frac{1}{2}}\nonumber
\end{align}
which is intrinsically defined in terms of Wigner representation data, is
\textit{involutive on its dense domain} and has a unique closure (unchanged
notation) with $ranS=domS.$
The involutivity means that the s-operator has $\pm1$ eigenspaces; since it is
antilinear, the $+$ space multiplied with $i$ changes the sign and becomes the $-$
space; hence it suffices to introduce a notation for just one eigenspace
\begin{align}
K(W) & =\{\psi\in domain~of~\Delta_{W}^{\frac{1}{2}}:~S_{W}\psi=\psi\},~K(W)~closed\label{K}\\
& J_{W}K(W)=K(W^{\prime})=K(W)^{\prime},\text{ }duality\nonumber\\
& \overline{K(W)+iK(W)}=H,\text{ }K(W)\cap iK(W)=0\nonumber
\end{align}
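For the scalar massive particle at hand these objects can be made completely explicit on the momentum space wave functions. A minimal sketch (up to convention-dependent signs, written for d=1+3): with $(\mathfrak{u}(\Lambda)\varphi)(p)=\varphi(\Lambda^{-1}p)$ and $\delta^{it}=\mathfrak{u}(\Lambda_{W_{0}}(-2\pi t))$ as above, one has
\begin{align*}
\left( \delta^{it}\varphi\right) (p) & =\varphi(\Lambda_{W_{0}}(2\pi t)p)\\
\left( j\varphi\right) (p^{0},p_{\perp},p^{d-1}) & =\overline{\varphi(p^{0},-p_{\perp},p^{d-1})}
\end{align*}
where $p_{\perp}$ denotes the momentum components transverse to the $x^{0}$-$x^{d-1}$ plane of $W_{0}$; in d=1+1 the reflection $j$ reduces to plain complex conjugation. The lower case notation $\delta,j,s$ for one-particle modular objects will reappear below in the functorial relations $\Delta=\Gamma(\delta),~J=\Gamma(j),~S=\Gamma(s)$.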
It is important to be aware that, unlike QM, we are dealing here with real
(closed) subspaces $K$ of the complex one-particle Wigner representation space
$H$. An alternative is to directly work with complex dense subspaces
$K(W)+iK(W)$ (third line). Introducing the \textit{graph norm} in terms of the
positive operator$~\Delta,$ the dense complex subspace becomes a Hilbert space
in its own right. The second and third line require some more explanation: the
upper dash on regions denotes the causal disjoint (the opposite wedge),
whereas the dash on real subspaces stands for the symplectic complement with
respect to the symplectic form $Im(\cdot,\cdot)$ on $H.$
The two properties in the third line are the defining relations of what is
called the \textit{standardness property} of a real
subspace\footnote{According to the Reeh-Schlieder \cite{Haag} theorem a local
algebra $\mathcal{A(O})$ in QFT is in standard position with respect to the
vacuum i.e. it acts on the vacuum in a cyclic and separating manner. The
spatial standardness, which follows directly from Wigner representation
theory, is just the one-particle projection of the Reeh-Schlieder property.};
any standard $K$ space permits one to define an abstract s-operator
\begin{align}
S(\psi+i\varphi) & =\psi-i\varphi\label{inv}\\
S & =J\Delta^{\frac{1}{2}}\nonumber
\end{align}
whose polar decomposition (written in the second line) yields two modular
objects, a unitary modular group $\Delta^{it}$ and an antiunitary reflection
$J,$ which generally have however no geometric significance. The domain of the
Tomita $S$-operator is the same as the domain of $\Delta^{\frac{1}{2}},$
namely the real sum of the $K$ space and its imaginary multiple. In our case
this domain is determined solely in terms of Wigner group representation theory.
Coming from QFT, the complex $domS_{W}$ can be understood as the complex dense
space which is provided by projecting the dense Reeh-Schlieder domain
(obtained by applying fields smeared with W-supported test functions to the
vacuum \cite{Haag}) to the one-particle space, and the closed $K(W)$ space
results from projecting only Hermitian operators. The modular localization
approach provides a constructive access to QFT without quantization by
avoiding any parallelism with the less fundamental classical field
theory\footnote{In Jordan's terminology "without classical crutches"
\cite{Jor}.}. As will be demonstrated in section 4, this is quite easy in the
absence of interactions, but leads to new concepts which pose novel problems
in the presence of interactions (section 5). The $J~$transformed $K$-spaces
(\ref{K}) permit a more direct description in terms of symplectic complements
\begin{equation}
K^{\prime}:=JK=\{\chi|~Im(\chi,\varphi)=0,~all\text{ }\varphi\in K\}
\end{equation}
It is easy to obtain a net of K-spaces by $U(a,\Lambda)$-transforming the
K-space of a particular $W_{0}.$ A bit more tricky is the construction of
sharper localized subspaces via intersections
\begin{equation}
K(\mathcal{O})=
{\displaystyle\bigcap\limits_{W\supset\mathcal{O}}}
K(W)
\end{equation}
where $\mathcal{O}$ denotes a causally complete smaller region (e.g.
non-compact spacelike cone, compact double cone). Intersections may not be
standard; in fact they could even be zero, in which case the theory allows
localization in $W$ (it always does), but not in $\mathcal{O}.$ One can show
that the intersection for non-compact \textit{spacelike cones} $\mathcal{O=C}$
is always standard for all positive energy representations \cite{BGL}.
Standardness for compact double cone regions $\mathcal{O=D}$ leads to
pointlike localized generating wave functions (wave-function-valued Schwartz
distributions). This applies to$\ (m>0,s)$ and to massless finite helicity
representations, whereas the Wigner massless infinite spin family with
$K(\mathcal{D})=0,\ K(\mathcal{C})~$standard, requires semiinfinite spacelike
string generating wave functions. In the functorial relation between Wigner
wave functions and quantum fields this leads to pointlike/stringlike localized
generating free fields. The positive energy Wigner representations fall into 3
families: positive mass, zero mass finite helicity and zero mass infinite
helicity. Only the third class, for which the two-dimensional Euclidean little
group is faithfully represented, requires stringlike generating wave functions.
Since all states in QFT carry a unitary representation of the Poincar\'{e}
group which permits a (discrete or continuous) decomposition into irreducible
components, this closes the issue of modular state-localization. Modular
operator localization of free fields follows state-localization (section 4),
however modular localization of interacting operator algebras requires a
subtle refinement which accounts for new conceptual problems posed by
interactions (section 5).
For the explicit construction of the pointlike free fields of arbitrary mass
and finite spin it is somewhat easier to follow Weinberg \cite{Weinbook} and
compute covariant \textit{intertwiners} which map momentum space creation and
annihilation operators into covariant fields. This is possible because in the
absence of interactions modular localization and covariance are equivalent
requirements and the differences between Wigner's "little group" and its
representation account fully for the differences in localization between the
three representation families.
Leaving out the string-localized infinite spin family, the result of the
"covariantization" associates one unitary (m,s) Wigner representation with an
infinite family of generating covariant spinorial wave functions
$\Psi^{(A,\dot{B})}$ whose undotted/dotted spinorial indices are
related to the physical spin $s$ through the following inequalities\footnote{For
convenience of notation our spinorial indices are half of the standard ones.}
\begin{align}
\left\vert A-\dot{B}\right\vert & \leq s\leq\left\vert A+\dot{B}\right\vert
,\text{ }m>0\label{line1}\\
s & =\left\vert A-\dot{B}\right\vert ,~m=0 \label{line2}
\end{align}
One notices that in the zero mass case the vector representation
($A=1/2,\dot{B}=1/2$) for $s=1$ and the ($A=1,\dot{B}=1$) for $s=2$ are missing, i.e. precisely
those fields which correspond to the classical electromagnetic vectorpotential
and for $s=2$ to the metric tensor. These gaps in the massless case have
important physical consequences.
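A worked illustration for $s=1$ makes the gap explicit: for $m>0$ the inequality (\ref{line1}) admits, besides the field strength representations, the vectorpotential $(\frac{1}{2},\frac{1}{2})$ (the Proca field), whereas for $m=0$ the rule (\ref{line2}) requires $\left\vert A-\dot{B}\right\vert =1$, so that only the field strength survives:
\begin{align*}
m>0,~s=1 & :~(A,\dot{B})=(\tfrac{1}{2},\tfrac{1}{2}),~(1,0),~(0,1),...\\
m=0,~s=1 & :~(A,\dot{B})=(1,0),~(0,1),...~\text{(no }(\tfrac{1}{2},\tfrac{1}{2})\text{)}
\end{align*}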
Wigner's representation theory for positive energy representations of the
Poincar\'{e} group combined with the calculation of intertwiners via
covariantization represents a completely intrinsic quantum path to free
fields. The passing from generating covariant wave functions to covariant
quantum fields only requires reinterpreting the momentum space wave functions
$a^{\ast}(p)$ and their antiparticle counterpart $b(p)$ as canonical
creation/annihilation operators
\begin{equation}
\Psi^{(A,\dot{B})}(x)=\frac{1}{\left( 2\pi\right) ^{3/2}}\int\frac{d^{3}p}{2p_{0}}\{e^{ipx}u^{(A,\dot{B})}(p)\cdot a^{\ast}(p)+e^{-ipx}v^{(A,\dot{B})}(p)\cdot b(p)\} \label{cov}
\end{equation}
Here the dot stands for the summation over physical spin components, and the
dependence of the $\Psi$ and of the $u$, $v$ intertwiners on the spinorial
components of the ($A,\dot{B})$ representation (ranging over $2A+1$
respectively $2\dot{B}+1$ values) has been omitted. The covariantization leading to the
intertwiners uses only group theory, it can be found in the first volume of
Weinberg's well-known book \cite{Weinbook}.
There is a subtle consequence of modular localization which one encounters in
the second ($m=0,s\geq1$) representation class of massless finite helicity
representations (the photon-graviton family). Whereas in the massive case the
relation of the physical spin $s$ with the formal spin in the spinorial fields
follows the angular momentum composition rules which leads to the spinorial
restrictions (\ref{line1}) \cite{Weinbook}, the zero mass finite helicity
family in the second line has a significantly reduced number of spinorial
descriptions. Different from classical Maxwell theory, where pointlike
vectorpotentials are perfectly acceptable (constrained) classical fields,
their quantum counterparts do not appear in the covariantized massless list
(\ref{line2}).
The explanation of this dilemma, which also leads to its cure, is that the
loss of pointlike quantum potentials is the result of a \textit{clash between
the Hilbert space structure (positivity) and pointlike localization}. The
missing spinorial fields in (\ref{line2}), as compared to (\ref{line1}),
reappear after relaxing the localization from pointlike to stringlike
\cite{MSY}. Both kinds of fields are singular limits of operators localized in
causally closed regions; pointlike fields in case of double cone localized and
semi-infinite stringlike fields in case of spacelike cone localized operators.
Once one allows stringlike covariant fields i.e. $\Psi^{(A,\dot{B})}(x,e)$
localized on spacelike half-lines $x+\mathbb{R}_{+}e,$ the full range of
spinorial realizations (\ref{line2}) is available. These generating free
fields are covariant and "string-local"
\begin{align}
U(\Lambda)\Psi^{(A,\dot{B})}(x,e)U^{\ast}(\Lambda) & =D^{(A,\dot{B})}(\Lambda^{-1})\Psi^{(A,\dot{B})}(\Lambda x,\Lambda e)\label{string}\\
\left[ \Psi^{(A,\dot{B})}(x,e),\Psi^{(A^{\prime},\dot{B}^{\prime})}(x^{\prime},e^{\prime})\right] _{\pm} & =0,~x+\mathbb{R}_{+}e><x^{\prime}+\mathbb{R}_{+}e^{\prime}\nonumber
\end{align}
although the Wigner representation itself (and its functorially related local
operator algebra, see next section) remains pointlike generated, since the
field strengths suffice for its generation. But for other purposes the
potentials are indispensable (see below). The use of covariant string fields
also facilitates the construction of finite "gauge bridges" (using the
terminology of gauge theory) between two matter fields with opposite charges.
The third Wigner family, the \textit{infinite spin representations}, resisted
all attempts to understand their localization properties for a very long time.
Its local generators have no analog in classical Lagrangian field theory and
also did not appear in Weinberg's intertwiner formalism \cite{Weinbook}. Only
after the concept of modular localization was applied to Wigner's
representation theoretical construction \cite{BGL}\cite{MSY} did it become clear
that this is a case of a string-like generated Wigner representation. The
differences in localization properties can be traced back to the different
representations of the Wigner "little groups". For m=0 representations the
little group is the two-dimensional Euclidean group, but only for the
\textit{faithful} $E(2)$ representation the non-compactness of the group makes
itself felt in the necessity to introduce stringlike generators. The
intertwiners do not have spinorial indices; instead they depend on a string
direction $e$ and the dot in (\ref{cov}) stands for an infinite sum reflecting
the infinite dimensional nature of the $E(2)$ representation space.
The conventional description of vectorpotentials, which one obtains from
quantization of their classical counterpart, maintains the pointlike
formalisms at the expense of Hilbert space structure. The application of the
BRST formalism gives the physically correct results only for gauge invariant
quantities, which are automatically pointlike localized \cite{nonlocal}. But
the physical electric charge-carrying operators are known to allow no
localization which is sharper than an arbitrarily thin spacelike cone with
stringlike generating fields which remain outside the BRST formalism. The use
of the string-localized vectorpotentials exposes the origin of their
string-like localization by coupling the quantum matter to string-like
vectorpotentials. The result is that their stringlike localization is exported
to the matter field, which in zeroth order perturbation was pointlike
localized. Whereas the vectorpotential continues to lead to pointlike field
strengths, there is no linear operation which undoes the string localization
of the charge-carrying field.
Even in the case of free fields the use of stringlike vectorpotentials
protects against incorrect application of the gauge formalism. A well-known
illustration is the Aharonov-Bohm effect in QFT\footnote{The standard A-B
effect refers to QM in an external magnetic field.}. It is not necessary to
use vectorpotentials, but if one decides to use them it is important to work
with the string-localized potential since the pointlike indefinite metric
potential (the Feynman gauge) gives a wrong answer \cite{charge}\cite{nonlocal}.
Stringlike massless higher spin "potentials" have a better short distance
behavior than their associated field strength (whose short distance dimension
increases with $s)$; in fact in all cases one finds potentials with $d_{sd}=1$
which is the smallest dimension allowed by Hilbert space positivity and also
the largest for satisfying the power counting renormalization requirement in
up to quadrilinear polynomial couplings. As a result there are candidates for
renormalizable interactions for any spin. Of course renormalization theory is
more than power-counting; one also has to show that the Epstein-Glaser
iterative implementation of causal commutativity can be extended to
stringlike fields.
Stringlike localized covariant fields can also be constructed in massive
higher spin theories. In this case the pointlike potentials exist, but their
dimension increases with $s$. The string-like description and the BRST
formalism both use massive potentials with $d_{sd}=1.~$Even the relation to
the pointlike physical Proca potential is formally similar \cite{Rio}
\begin{align}
& A_{\mu}(x,e)=A_{\mu}^{P}(x)+\partial_{\mu}\phi(x,e),~A_{\mu}^{BRST}(x)=A_{\mu}^{P}(x)+\partial_{\mu}\phi^{S}(x)\label{proca}\\
& \left\langle A_{\mu}(x,e)A_{\nu}(x^{\prime},e^{\prime})\right\rangle
=\frac{1}{\left( 2\pi\right) ^{3}}\int\frac{d^{3}p}{2p_{0}}e^{-ip(x-x^{\prime})}\{-g_{\mu\nu}-\nonumber\\
& -\frac{p_{\mu}p_{\nu}(e,e^{\prime})}{(pe)_{-i\varepsilon}(pe^{\prime})_{-i\varepsilon}}+\frac{p_{\mu}e_{\nu}}{(pe)_{-i\varepsilon}}+\frac{p_{\mu}e_{\nu}^{\prime}}{(pe^{\prime})_{-i\varepsilon}}\}\nonumber
\end{align}
where the $\varepsilon$-prescription refers to the way in which the real
boundary in $e$ has to be approached. The difference in (\ref{proca}) is that
the scalar St\"{u}ckelberg field $\phi^{S}(x)$ has the opposite metric (it is
related to $A_{\mu}^{BRST}(x)$ via the s-operation of the BRST formalism
$sA_{\mu}^{BRST}=\partial_{\mu}u,~s\phi^{S}=u$). The use of the directional
derivative $\partial_{e}=\sum e^{\alpha}\partial_{e^{\alpha}}$ leads to an
even stronger formal connection
\begin{align*}
\partial_{e}A_{\mu}(x,e) & =\partial_{\mu}v(x),~\partial_{e}\phi(x,e)=v(x)\\
sA_{\mu}^{BRST}(x) & =\partial_{\mu}u(x),~s\phi^{S}(x)=u(x)
\end{align*}
where $v,~$the counterpart of $u,$ is pointlike. This stringlike description
is reminiscent of Mandelstam's attempt \cite{Mand} to avoid indefinite metric
by expressing the dynamics in terms of field strength only. Indeed a
stringlike potential is uniquely determined by the field strength and a
spacelike direction $e;$ but the introduction of a separate operator $A_{\mu
}(x,e)$ which fluctuates in both $x$ and $e$ is preferable, because the
improvement of the short distance $x$-fluctuation which is crucial for
renormalization, and its price in terms of the appearance of infrared
fluctuations, is placed in evidence.
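A one-line consequence of (\ref{proca}) shows why the field strength stays pointlike while the potential fluctuates in $e$: the gradient term drops out of the curl,
\[
F_{\mu\nu}(x)=\partial_{\mu}A_{\nu}(x,e)-\partial_{\nu}A_{\mu}(x,e)=\partial_{\mu}A_{\nu}^{P}(x)-\partial_{\nu}A_{\mu}^{P}(x)
\]
so that the entire $e$-dependence resides in the scalar $\phi(x,e)$.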
The more surprising consequences of modular localization are certainly those
which appear in the title of this paper; they will be presented in section 5.
\section{Some expected consequences}
The stringlike reformulation of gauge theories leads to a setting which has
some formal similarities with the BRST formalism\footnote{Concerning the
application of the BRST formalism to massive vectormesons we follow Scharf's
book \cite{Scharf}.} without suffering from its limitations. Its use for
massive vectormesons relates their string-localized description, which has the
mild short distance behavior needed for renormalizability, with the standard
physical description in terms of pointlike Proca field, similar to the
relation between the pointlike indefinite metric BRST vectorpotential with the
Proca field (\ref{proca}). There are reasons to believe that such a relation
has its multiplicative counterpart in a relation between a stringlike
interacting matter field (unavoidable as the result of its coupling with the
string-localized potential) and a multiplicatively corresponding pointlike
matter field which together with the pointlike vectormeson leads to a better
description (\textit{after} the issue of renormalization has been settled).
Both sets of fields belong to the same theory; they are relatively local
generating fields acting in the same Hilbert space. The pointlike fields
permit one to present the result in the standard way, but in contradistinction to
interactions between fields with $s<1,$ they are \textit{not} suitable for
staying within the power-counting limit required by renormalization. In
contrast to the BRST description one does not need Krein spaces; the role of
the short distance-improving indefinite metric $A_{\mu}^{BRST~}$is now taken
over by the stringlike vectorpotential which, apart from being
string-localized, is expected to belong to the same QFT as the Proca field.
The conceptual advantage of the stringlike formulation emerges in a much
stronger form in the zero mass limit. The Proca potential cannot have a zero
mass limit since a physical (Hilbert space) pointlike zero mass
vectorpotential does not exist. In fact there are two different BRST settings
\cite{Scharf}; one for massive vectormesons, in which case the BRST formalism
is only used on the level of the free asymptotic fields, and the other for the
massless gauge theories, where the BRST s-operation has a perturbative
dependence on the coupling strength. In the stringlike formulation everything
is under one conceptual roof, the pointlike potentials are only lost in the
zero mass limit. Computation with stringlike fields should be done for massive
vectormesons; one hopes that the connection of string-localization and
infrared properties is more amenable than in the massless limit.
For the matter fields, the main change between the stringlike matter fields
and its would-be multiplicative related pointlike counterpart is expected to
occur in the long-distance regime, whereas for short distances there should be
essentially no change. For nonabelian zero mass models one naturally expects
that the linear field strength remains stringlike and only appropriate
composites maintain their pointlike localization i.e. the logic of classical
gauge invariance is replaced by the identification of a pointlike generated
observable substrate embedded in a string-generated QFT.
Approaching QED from the massive string-localized side has the advantage that
physical charged fields (infraparticle fields), which have been known for a
long time to acquire a non-compact localization \cite{Haag}, are now part of
the perturbative formalism. Whereas in the existing formulation they only
appear as a computational device in a prescription for computing on-shell
photon inclusive scattering cross sections, their expected new role is to
represent electric charge-carrying physical "infraparticle" fields which admit
(off-shell) correlation functions. The largest gain of insight from
string-localization is expected to come from approaching Yang-Mills theories
via massive self-interacting vectormesons. Whereas the physical origin of
infrared singularities of the unphysical pointlike matter fields in the BRST
setting remains unexplained (or is blamed on a not yet understood
nonperturbative regime), the stringlike formulation may reveal a different story.
The string-localization plays a pivotal role for the understanding of the
off-shell infrared divergencies of self-interacting massive vectormesons in
the zero mass limit. In the nonabelian case only color-neutral composites of
the field strengths are expected to maintain their pointlike localization in
the massless limit. It is perfectly conceivable that gluon strings exist in
the Hilbert space but that the expectation of the energy-momentum operator in
a state created from the vacuum by a gluon operator leads to a diverging
global energy.
This could have interesting consequences for the asymptotic freedom issue and
for the confinement idea. A beta-function computed in the dimensional
regularization formalism without the Callan-Symanzik equation from which it
originates remains incomplete. One needs to represent the infrared-divergent
massless Yang-Mills theory as the massless limit of a perturbatively
accessible massive theory\footnote{The prototype illustration is the
derivation of the C-S equation of the massive Thirring model. In this case the
vanishing of beta to all orders ensures the existence of the massless limit as
a conformal QFT \cite{Lo-Go}.} in order to derive a C-S equation. The existing
derivation of asymptotic freedom is tantamount to a consistency argument: the
inverse sign of the beta function is consistent with the expected perturbative
short distance behavior of a nonexisting renormalized perturbation theory.
The reformulation of the Y-M interaction in terms of stringlike vectorpotentials
permits one to question this scenario; massive stringlike selfinteracting
vectormesons may have a massless limit, a situation which cannot be achieved
in the pointlike BRST formalism. Assuming that Callan-Symanzik equations also
can be established in the stringlike setting, such a step would combine
massive $s=1$ fields and their massless limit under one roof, just as in the case
of interactions with $s<1$ for which this limit exists for pointlike fields. Such a
scenario, if it can be made to work, could change the whole perspective on QFT
and its use in particle physics. String-localized potentials with their
string-improved short distance dimension $d_{sd}=1$ are potential candidates
for renormalizable interactions between \textit{all} $(m,s\geq1)$ quantum fields.
The second area which could suffer significant modifications is the issue
around the Higgs boson. From a "Schwingerian" point of view what remains of
the complex matter field in the screened "phase" of scalar QED (which is a
three-parametric renormalizable QFT in terms of $m_{sc},g_{sc},e$) is a real
scalar matter field, after its remaining real partner participated in the
conversion of the photon into a massive vectormeson. The screened phase of QED
is one in which the \textit{Maxwell charge} (related to the identically
conserved current in $\partial^{\mu}F_{\mu\nu}=j_{\nu})$ is "screened" i.e.
the integral over $j_{0}$ vanishes. Schwinger's screening
idea\footnote{Schwinger's attempt to find a perturbative realization in spinor
QED failed and he instead used two-dimensional QED which became known as the
"Schwinger model". He was apparently not aware that by replacing spinors by
complex scalar fields he could have had a perturbative realization of his idea
\cite{Swieca}.} is a QFT extension of Debye screening in QM, except that in
QFT this is more radical (change of particle spectrum). This idea was later
backed up by a theorem due to Swieca \cite{Sw}\cite{Swieca} which states that
this situation permits precisely two alternative realizations: either the
charge is $\neq0,~$in which case the mass-shell is converted into the milder
cut singularity of infraparticles, or there is a mass gap which leads to a zero
charge (the screening phase).
Its discoverers, Higgs \cite{Higgs} and the authors of \cite{Eng}\cite{Gur}, presented the
result as a two step process in which a spontaneous symmetry breaking is
followed by a transfer of the massless Goldstone boson to the vectorpotential
which, as a result of gaining an additional degree of freedom, becomes a
massive vectormeson. This idea was helpful from a computational viewpoint
(since the perturbative Goldstone spontaneous symmetry breaking mechanism was
well-known at the time of Higgs). The result is the same as that computed from
Schwinger-Higgs screening except that there is no intermediate spontaneous
symmetry breaking. Unfortunately the terminology "broken gauge invariance" led
to a rather widespread incorrect understanding of the Higgs mechanism. There
is \textit{no symmetry breaking}, unless one wants to consider the appearance
of odd terms in the self-interaction of the remaining screened (real) scalar
field as a spontaneous $\mathbb{Z}_{2}$ symmetry breaking.
The renormalization theory of massive vectormesons appeared first in the form
of massive QED \cite{L-S}. Some loose ends of that treatment concerning the
passing from the indefinite metric description to the unitary gauge were
overcome in the BRST formalism. In recent work \cite{Due}\cite{Ga} which
extends Scharf's \cite{Scharf} pathbreaking work\footnote{Besides his
formulation of operator gauge invariance Scharf uses only the intrinsic logic
of the BRST formalism.} on the BRST setting of massive vectormesons, the
absence of any symmetry breaking is emphasized \textit{against the mainstream
view}. To the extent that the mainstream pays attention to this work, also the
interest in Schwinger-Higgs screening and the screening theorem by Swieca
(which had a foundational impact on the development of LQP\footnote{It led
Buchholz and Fredenhagen to discover the connection of spectral gaps with the
stringlike generating property of superselected charge-carrying fields (as the
only possibility which replaces the pointlike localization in case of
non-existence of pointlike generators) \cite{Haag}.} but never made it into
the mainstream) may receive the attention it deserves.
The conclusion of absence of symmetry breaking in the massive BRST formulation
is based on the observation that its typical formal indicators do not appear
in this setting \cite{Scharf}. The intrinsic (independent of the computational
method) argument for the presence of the Schwinger-Higgs screening is however
the vanishing of the Maxwell-charge associated with the identically conserved
(massive) Maxwell current. In contrast the conserved current in the Goldstone
spontaneous symmetry breaking model \textit{diverges} as the result of its
coupling to the massless Goldstone boson. The screening picture may not be
helpful in perturbative computations, but it is quite easy to verify that the
mass gap in the BRST setting leads to the screened Maxwell charge.
The most exciting application of the stringlike formalism is certainly its use
for the clarification of the Higgs issue: are self-interactions of
vector-mesons only possible in the presence of a Higgs particle? The BRST
formalism leads to an affirmative answer \cite{Scharf} but it would be better
to understand this from the localization principle instead of leaving it to
the consistency of the indefinite metric BRST formalism. If this result is not
confirmed, all the hype about the world of QFT breaking down without a Higgs
particle was irrelevant and the presentation of LHC results with two competing
theoretical results would liberate particle physics from its self-generated
religious-ideological freight. If on the other hand the stringlike setting
confirms the necessity of the Higgs, it will present the best starting point
to find out whether the presence of \textit{low spin satellites} of
(self)interacting higher spin massive particles is a general consequence of
modular localization or whether this is a peculiarity of selfinteracting
massive vectormesons.
Apart from some mathematical problems as adjusting the Epstein-Glaser
renormalization framework to the needs of string-localization \cite{Mund}, the
calculations for most of the mentioned problems are still in progress
\cite{col}. Stringlike perturbation theory requires new concepts and is more
involved; the pointlike massive fields in terms of which one hopes to express the
result of massive vectormeson interactions are not the same fields as
those which are used to stay below the power-counting limit.
As one does not have to understand details about causal localization for
handling the renormalization theory of pointlike fields, one expects to find
rules for perturbing couplings involving stringlike fields which avoid the
intricacies of modular localization and lead to a simple relation between
stringlike fields needed for renormalization and their pointlike counterparts.
Since in \cite{MSY} the idea of their use originated from the desire for a more
detailed understanding of the work on modular localization \cite{BGL} in the
setting of Wigner's representation theory, this connection with problems of
"real particle physics" alluded to in this section may also raise the readers
interest in the more abstract presentation of modular localization in the
following sections.
It is somewhat harder to speculate whether there could be some physical use
for the "kinematical" strings which are the generator of the string-localized
Wigner infinite spin representation. It is hard to imagine that any kind of
compactly localized counter can detect them or that they can be produced from
local interactions of ordinary particles. Weinberg \cite{Weinbook} dismissed
the infinite spin representations on the grounds that "nature does not use
them", but it is questionable whether in times of dark matter one can uphold
this dismissal.
A yet different kind of spacelike strings arises in d=1+2 Wigner
representations with anomalous spin \cite{Mu1}. The modular localization
approach preempts the spin-statistics connection already in the one-particle
setting, namely if s is the spin of the particle (which in d=1+2 may take on
any real value) then one finds for the connection of the symplectic complement
with the causal complement the generalized duality relation
\begin{equation}
ZK(\mathcal{O}^{\prime})=K(\mathcal{O})^{\prime}
\end{equation}
where the square of the twist operator $Z=e^{\pi is}~$is easily seen (by the
connection of Wigner representation theory with the two-point function) to
lead to the statistics phase $=Z^{2}$ \cite{Mu1}. Here the strings have a more
virtual existence, they serve to keep track of localization of plektons when
the representation of the universal covering group is activated by anomalous
spin but differential geometry does not provide a spacetime covering.
Different from all previous cases these representations have no functorially
associated free fields; surprisingly the QFT of anyons and plektons
(nonabelian representations of the braid group) do not even permit integrable
models \cite{B-M}.
\section{Algebraic aspects of modular theory}
A net of real subspaces $K(\mathcal{O})$ $\subset$ $H_{1}$ for a finite spin
(helicity) Wigner representation can be "second quantized"\footnote{The
terminology "second quantization" is a misdemeanor since one is dealing with a
rigorously defined functor within QT which has little in common with the
artful use of that parallelism to classical theory called "quantization".}
via the CCR (Weyl) respectively the CAR quantization functor; in this way one
obtains a covariant $\mathcal{O}$-indexed net of von Neumann algebras
$\mathcal{A(O)}$ acting on the bosonic or fermionic Fock space $H=Fock(H_{1})$
built over the one-particle Wigner space $H_{1}.$ For integer spin/helicity
values the modular localization in Wigner space implies the identification of
the \textit{symplectic} complement with the \textit{geometric} complement in
the sense of relativistic causality, i.e. $K(\mathcal{O})^{\prime
}=K(\mathcal{O}^{\prime})$ (spatial Haag duality in $H_{1}$). The Weyl functor
takes this spatial version of Haag duality into its algebraic counterpart,
whereupon the symplectic complement passes to the von Neumann commutant subalgebra.
One proceeds as follows: for each Wigner wave function $\varphi\in
K(\mathcal{O})\subset H_{1}$ the associated (unitary) Weyl operator is defined as
\begin{align}
& Weyl(\varphi):=\exp i\{a^{\ast}(\varphi)+a(\varphi)\}\in B(H)\\
\mathcal{A(O}) & :=alg\{Weyl(\varphi)|\varphi\in K(\mathcal{O})\}^{\prime\prime},~~\mathcal{A(O})^{\prime}=\mathcal{A(O}^{\prime})\nonumber
\end{align}
where $a^{\ast}(\varphi)$ and $a(\varphi)$ are the usual Fock space creation
and annihilation operators of a Wigner particle with wave function $\varphi$.
This is a functorial relation between localization subspaces of the
one-particle space and localized subalgebras. Defining the algebra in terms of
the double commutant converts it into a von Neumann algebra i.e. a weakly
closed operator algebra.
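The causal localization properties are encoded in the Weyl relation which, up to conventions in the exponent, reads
\[
Weyl(\varphi)Weyl(\psi)=e^{-iIm(\varphi,\psi)}Weyl(\psi)Weyl(\varphi)
\]
two Weyl operators therefore commute if $Im(\varphi,\psi)=0$, and commutativity for all real multiples of the wave functions forces the vanishing of the symplectic form. In this way the spatial duality $K(\mathcal{O})^{\prime}=K(\mathcal{O}^{\prime})$ passes to the algebraic Haag duality $\mathcal{A(O})^{\prime}=\mathcal{A(O}^{\prime})$.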
This functorial relation between real subspaces and von Neumann algebras via
the Weyl functor preserves the causal localization and commutes with the
improvement of localization through intersections $\cap$ according to
\[
K(\mathcal{O})=\cap_{W\supset O}K(W),~\mathcal{A(O})=\cap_{W\supset
O}\mathcal{A}(W)
\]
The functorial relation can be conveniently expressed in the commuting diagram
\begin{align}
& \left\{ K(W)\right\} _{W}\longrightarrow\left\{ \mathcal{A}(W)\right\}
_{W}\label{cd}\\
& \ \ \downarrow\cap~~~\ \ \ \ \ \ \ \ \ \ ~\ ~\downarrow\cap\nonumber\\
~~ & \ \ \ K(\mathcal{O})\ \ \ \longrightarrow\ \ ~\mathcal{A(O})\nonumber
\end{align}
Here the vertical arrows denote the tightening of localization by
intersections whereas the horizontal ones stand for the action of the Weyl
functor. This commuting diagram expresses the functorial relation between
particles and fields in the absence of interactions and represents a
straightforward functorial extension of Wigner's representation
theory\footnote{Here we consider modular localization as part of Wigner's
theory because the modular localization is constructed within positive energy
representations of the Poincar\'{e} group \cite{BGL}.}. In the interacting
case this functorial connection is lost and the particle-field relations
becomes significantly more subtle. Its conceptual/mathematical complexity in
the case of non-integrable interactions is to blame for the 80 years lasting
lack of progress in proving even the mathematical existence of a model behind
its Lagrangian description, not to mention the problem of controlled
approximations. A new attempt concerning this age-old problem which is based
on the distinguished role of wedge-localized subalgebras will be presented in
the next section.
The case of half-integer spin representations is analogous \cite{Fa-Sc}, apart
from the fact that there is a mismatch between the causal and symplectic
complements which must be taken care of by a \textit{twist operator }$Z$ ; as
a result one has to use the CAR functor instead of the Weyl functor. In d=1+2
one encounters an exception; the Bargman-Wigner representation theory permits
anomalous spin which turns out to be connected with braid group statistics. As
already mentioned, this is the only known case for which (for $s\neq$
semiinteger) there is no functorial relation between localized subspaces
and localized von Neumann subalgebras.
In case of the large family of irreducible zero mass "infinite spin"
representations, for which the lightlike "little group" is faithfully
represented, the functorial relation leads to string-localized generating
fields which generate the Fock space and the non-compact localized subalgebra
acting in it. There is an argument which excludes the existence of
compact-localized (generated by pointlike composites) subalgebras \cite{MSY},
but unfortunately it is not conclusive.
A discrete basis of local covariant field coordinatizations is defined by Wick
composites of the free fields. The case which deviates furthest from classical
behavior is the pure stringlike infinite spin representation for which the
class of relative string-localized fields forms a \textit{continuous} family of
composites. Its non-classical aspects, in particular the absence of a
Lagrangian, are the reason why the spacetime description in terms of
semiinfinite string fields was discovered only more than 60 years after
it appeared in Wigner's classification \cite{BGL}\cite{MSY}.
Using the standard notation $\Gamma$ for the second quantization functor which
maps real localized (one-particle) subspaces into localized von Neumann
algebras, and extending this functor in a natural way to include the images of
the $K(\mathcal{O})$-associated modular objects (for which we use
the same notation $S,\Delta,J),$ one arrives at a special case of the Tomita
Takesaki modular theory for the interaction-free standard pair $(\mathcal{A(O}),\Omega)$\footnote{The second quantization functor $\Gamma$ preserves
standardness, i.e. it maps the spatial one-particle standardness into its
algebraic counterpart.}
\begin{align}
& H_{Fock}=\Gamma(H_{1})=e^{H_{1}},~\left( e^{h},e^{k}\right)
=e^{(h,k)}\label{mod}\\
& \Delta=\Gamma(\delta),~J=\Gamma(j),~S=\Gamma(s)\nonumber\\
& SA\Omega=A^{\ast}\Omega,~A\in\mathcal{A}(\mathcal{O}),~S=J\Delta^{\frac
{1}{2}}\nonumber
\end{align}
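On the exponential (coherent) vectors appearing in (\ref{mod}) the functor acts in a particularly transparent way; a minimal sketch:
\[
\Gamma(u)e^{h}=e^{uh},~~\Delta^{it}e^{h}=e^{\delta^{it}h},~~Je^{h}=e^{jh},~~\Omega=e^{0}
\]
from which the relations $\Delta=\Gamma(\delta),~J=\Gamma(j),~S=\Gamma(s)$ in (\ref{mod}) and the transfer of standardness from $K(\mathcal{O})$ to the pair $(\mathcal{A(O}),\Omega)$ follow by continuity.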
The Tomita-Takesaki theorem is about the action of the two modular objects
$\Delta^{it}$ and $J$ on the algebra
\begin{align}
\sigma_{t}(\mathcal{A(O})) & \equiv\Delta^{it}\mathcal{A(O})\Delta
^{-it}=\mathcal{A(O})\\
J\mathcal{A(O})J & =\mathcal{A(O})^{\prime}=\mathcal{A(O}^{\prime})\nonumber
\end{align}
in words: the reflection $J$ maps an algebra (in standard position) into its
von Neumann commutant and the unitary group $\Delta^{it}$ defines a
one-parametric automorphism group $\sigma_{t}$ of the algebra. In this form
(but without the last geometric statement involving the geometrical causal
complement $\mathcal{O}^{\prime})$ the theorem holds in complete mathematical
generality for "standard pairs" ($\mathcal{A},\Omega$). The free fields and
their Wick composites are "coordinatizing" singular generators of this
$\mathcal{O}$-indexed net of operator algebras in the sense that the smeared
fields $A(f)$ with $suppf\subset\mathcal{O}$ are unbounded operators
affiliated with $\mathcal{A(O})$ which generate $\mathcal{A(O})$ in an
appropriate mathematical sense.
Within the classification of von Neumann algebras these local algebras are of
a very different type than their global counterpart. The latter is of the same
type as quantum mechanical algebras, namely an algebra of all bounded
operators on a Hilbert space $B(H)$. The local subalgebras are however
isomorphic to the aforementioned monad \cite{Haag}. More important than
understanding its position within Connes type classification for our purpose
is to characterize a monad by its use in physics. This is facilitated by the
fact that there are only two types which one needs for the formulation of
continuous\footnote{For discrete combinatorial algebraic structures, as one
encounters them in lattice theories, also type II$_{1}~$enters, see remarks in
last section.} quantum physics: type $I_{\infty}$ (or $B(H)$, the algebra of
all bounded operators on a Hilbert space) in QM, and monads as localized
subalgebras in QFT \cite{Jakob}, where only the global algebra is a $B(H).$
The relevance of the T-T modular theory for interacting QFT is based on the
standardness of ($\mathcal{A(O}),\Omega$) (more generally for all finite
energy states) which is a consequence of the Reeh-Schlieder theorem
\cite{Haag}. The definition of the Tomita involution $S$ through its action on
the dense set of states (guaranteed by the standardness of $\mathcal{A}$) as
$SA\Omega=A^{\ast}\Omega,$ and the action of the two modular objects
$\Delta,J$ (\ref{mod}) is part of the general setting of the modular
Tomita-Takesaki theory of abstract operator algebras in "standard position";
standardness is the mathematical terminology for the physicist's
Reeh-Schlieder property, i.e. the property that the action of a localized
subalgebra on the vacuum vector\footnote{In QFT any finite energy vector
(which of course includes the vacuum) has this property, as well as any
nondegenerated KMS state.} $\Omega\in H$ is cyclic, $\overline{\mathcal{A(O})\Omega}=H$, and $\mathcal{A(O})$ contains no "annihilators" of $\Omega.$
The important property which renders this useful as a new constructive tool in
the presence of interactions, is that for $\left( \mathcal{A}(W),\Omega
\right) ~$ the antiunitary involution $J$ depends on the interaction, whereas
$\Delta^{it}$ continues to be uniquely fixed by the representation of the
Poincar\'{e} group i.e. by the particle content. In fact it has been known
since Jost's seminal work on TCP \cite{Jost} (including the TCP covariance of
the S-matrix) that the interacting TCP operator is related to its free
(incoming) counterpart through the S-matrix; the $J,$ which represents the
reflection on the edge of the wedge, only differs from TCP by a $\pi
$-rotation. Rewritten in terms of the reflection $J$ on the edge of a wedge
this reads as
\begin{equation}
J=J_{0}S_{scat} \label{scat}
\end{equation}
In this form it attributes the role of a relative modular invariant (between
the interacting and free wedge-localized algebra) to the S-matrix and as a
result became a constructive tool of QFT \cite{Sch1}.
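Spelled out for the standard wedge in d=1+3, with $\Theta$ the TCP operator and $U(R_{W}(\pi))$ the rotation by $\pi$ around the wedge axis, these statements combine into (schematically, suppressing conventions)
\[
J_{W}=\Theta U(R_{W}(\pi)),~~\Theta=\Theta_{0}S_{scat}~\Rightarrow~J_{W}=J_{W}^{0}S_{scat}
\]
where the last step uses the Poincar\'{e} invariance of $S_{scat}$.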
It is precisely this "semilocal" property of $S_{scat}$ in connection with
wedge-localization which opened the way for the inverse scattering
construction within the setting of the bootstrap-formfactor project.
The physically relevant facts emerging from modular theory in the general
setting can be condensed into the following statements:
\begin{itemize}
\item \textit{The domain of the unbounded operators }$S(\mathcal{O})$\textit{
is fixed in terms of intersections of the wedge localized algebras
}$\mathcal{A(O})=\cap_{W\supset\mathcal{O}}\mathcal{A}(W).$\textit{ The
domain associated to }$S(W)$\textit{, }$domS(W)$\textit{, is}
\textit{determined by the representation of the Poincar\'{e} group (and hence by
the particle content alone). These dense domains change with }$\mathcal{O}$\textit{,
i.e. the dense set of localized states has a bundle structure.}
\item \textit{The complex domains }$DomS(\mathcal{O})=K(\mathcal{O})+iK(\mathcal{O})$\textit{ in }$H_{Fock}~$\textit{decompose into real
subspaces }$K(\mathcal{O})=\overline{\mathcal{A(O})^{sa}\Omega}.$\textit{ This
decomposition contains dynamical information which in case }$\mathcal{O}=W$\textit{ includes the }$S_{scat}$\textit{-matrix (\ref{scat}). In the next
section arguments will be presented which suggest that with the help of a new
emulation formalism, which extends Wigner's representation approach to the
presence of interactions, the }$S_{scat}$\textit{-matrix under appropriate
conditions may fix }$\mathcal{A}(W)$\textit{ uniquely.}
\item \textit{ The restriction of the vacuum state to a local operator algebra
}$\mathcal{A(O})$\textit{ leads to a KMS relation at inverse modular
temperature }$\beta=1$\textit{ (see the wedge specialization after this list)}
\[
\left\langle AB\right\rangle =\left\langle Be^{-H_{mod}}A\right\rangle
,~e^{-itH_{mod}}:=\Delta^{it}
\]
\textit{This localization-caused thermal behavior is accompanied by an area
proportional localization-entropy (section 1). It has a variety of important
physical consequences.}
\end{itemize}
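Specialized to the wedge, where by Bisognano-Wichmann $H_{mod}=2\pi K_{W}$ with $K_{W}$ the generator of the wedge-preserving boost, the abstract KMS relation of the last item takes the familiar thermal form (a sketch in units $\hbar=c=k_{B}=1$)
\[
\left\langle AB\right\rangle =\left\langle Be^{-2\pi K_{W}}A\right\rangle ,~~A,B\in\mathcal{A}(W)
\]
for an observer of uniform acceleration $a$, whose proper time evolution rescales the boost by $a$, this is a thermal state at the Unruh temperature $T=a/2\pi$.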
Modular localization is intimately connected to the holistic
aspect\footnote{For emphasising the importance of this property for the issue
of the cosmological constant we refer to the paper: "Quantum Field Theory Is
Not Merely Quantum Mechanics Applied to Low Energy Effective Degrees of
Freedom" by Hollands and Wald\ \cite{Ho-Wa}.} of QFT which places this theory
into a sharp contrast with QM (even in relativistic QM in the form of the
\textit{direct particle theory} (DPI) \cite{interface}). A one-dimensional
quantum mechanical chain or string of oscillators can be embedded into a space
of arbitrary dimensions since quantum mechanical (Born) localization is not an
intrinsic aspect of QT. On the other hand an embedding of a lower-dimensional
into a higher dimensional QFT is not possible; or to phrase it the other way
around: the restriction of a QFT to a lower dimensional submanifold
"remembers" (as a result of its holistic nature) that it is the restriction of
a more complete theory. In particular it is not possible to embed a
one-dimensional chiral theory into a higher dimensional QFT; a fact which is
overlooked in the incorrect picture of an embedding of a chiral theory into
its inner symmetry space ("target space") used in string theory (see the last
section for more remarks).
The holistic intrinsic nature of QFT presents itself most forcefully in the
possibility of characterizing a quantum field theory by the positioning of a
finite number of copies of an abstract monad in a shared Hilbert space. A
"modular inclusion" of one monad into another defines a chiral QFT, for a
3-dimensional theory one needs 4 modular-positioned monads, and placing 7
monads into a specific modular position leads to d=1+3 QFTs \cite{Kaehler}\cite{interface}. \textit{The positioning in Hilbert space determines not only
the algebraic substrate (the kind of quantum matter) of a QFT, but also
reveals its Minkowski spacetime localization properties and the action of the
Poincar\'{e} group on it}. The interpretation of a modular inclusion of two
monads is context-dependent; if one does not place additional monads into the
same Hilbert space, it defines a chiral theory on which the M\"{o}bius group
acts; a larger number of appropriately placed monads leads to higher
dimensional local nets of operator algebras. It is this intrinsic relation of
the abstract algebraic modular positioning of a finite number of monads in a
Hilbert space\ to the concrete localization of quantum matter in spacetime
(GPS positioning for LQP) that provides the strongest illustration of
"holistic"; there is no other theory of quantum matter which is "relational"
in this extreme way. As mentioned it forbids embedding of a lower into a
higher dimensional QFT and places severe restrictions on "dimensional
reduction" in QFT. Quantization is not a boundless game in which classical
manipulations can be interchanged with quantization; the holistic nature of
QFT shows its limitation and at the same time opens new perspectives. The
problem is that one cannot see these limitations on the level of Lagrangian
quantization; they would become visible if one tries to \textit{"curl up"
extra dimensions in explicitly computed correlation functions} by a
mathematically controlled operation on their correlation functions which
maintains the holistic nature of QFT\footnote{If the model has sufficient
analyticity properties which allow real/imaginary time Wick-rotations, one can
"curl up" a time component by taking the high temperature limit in a KMS state
and create a new time direction by Wick rotation. But this is not what the
proponents of Kaluza-Klein reductions in QFT have in mind.} instead of
manipulating Lagrangians, which is the way the Kaluza-Klein dimensional
reduction works in classical field theory.
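For completeness we recall the definition behind this modular positioning; a minimal formulation (sign conventions differ across the literature): an inclusion $\mathcal{N}\subset\mathcal{M}$ of two monads with a common cyclic and separating vector $\Omega$ is modular if the modular group of the larger algebra compresses the smaller one for one sign of the parameter,
\[
\Delta_{\mathcal{M}}^{it}\mathcal{N}\Delta_{\mathcal{M}}^{-it}\subset\mathcal{N},~~t\leq0
\]
the chiral net mentioned above can then be constructed from the data of the pair $(\mathcal{N}\subset\mathcal{M},\Omega)$ alone.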
It is interesting to briefly look at the difficulties which our QFT ancestors
encountered with these holistic aspects. From the time of the "Einstein-Jordan
conundrum" \cite{hol} through Jordan's subsequent discovery of QFT,
Heisenberg's discovery of vacuum polarization, Unruh's Gedankenexperiment and
Hawking's radiation up to the problem of the origin of the cosmological
constant, in all those cases the holistic nature of QFT asserts itself.
\section{"Emulation" as an extension of Wigner's representation theoretical
setting}
The \textit{bootstrap-formfactor program} \cite{Kar2} was based on the
assumption that the multiparticle components of a localized excitation of the
vacuum $A\left\vert 0\right\rangle ,$ $A\in\mathcal{A(O})$ in theories in
d=1+1 with an "integrable" S-matrix (purely elastic 2-particle
scattering\footnote{The elastic two-particle amplitude in d=1+1 is the only
scattering amplitude which cannot be distinguished from the identity
contribution by cluster factorization (equality of the product of two particle
plane wave inner products with the energy-momentum delta function), by which
factorizing models indicate their kinematical proximity to free fields.})
written in terms of rapidity variables are meromorphic functions which,
besides the degenerate representation of the permutation group from particle
statistics, possess a different nontrivial representation of this group under
\textit{analytic exchanges} (through analytic continuations) of rapidities.
This led the above authors to a change in notation in which the statistics
degeneracy is removed by encoding it into the left to right decreasing order
of the numerical values of rapidities
\begin{equation}
\left\langle 0\left\vert A\right\vert \theta_{1},\theta_{2},..\theta
_{k}\right\rangle ,~\theta_{1}>\theta_{2}>..>\theta_{k} \label{va}
\end{equation}
so that other orderings can be used to represent the result of analytic
changes of ordering. The basic analytic change, the analytic transposition
between two adjacent $\theta,$ is defined in terms of a crossing symmetric
unitary two-particle S-matrix\footnote{These elastic S-matrices were obtained
from the classification of solutions of a "bootstrap" project for scattering
functions (solutions of the Yang-Baxter equations in case of matrix valued
scattering functions) \cite{Kar2}.}. From this analytic transposition one then
constructs an analytic representation of the permutation group. In contrast to
the degeneracy of the statistics representation which can be "dumped" into the
ordering prescription, this analytic representation is encoded into the
changes of orders of rapidities in the vacuum formfactor (\ref{va}).
Together with the crossing property which connects the multi-particle
components of local excitations of the vacuum $A\left\vert 0\right\rangle $
with particle formfactors between arbitrary multi-particle states, and with
some general wisdom from LSZ scattering theory, this was the basis of the
bootstrap-formfactor construction of a QFT. Within these rules the so called
inverse scattering problem admits a unique solution. In contrast to the
perturbative approach based on Lagrangian or functional quantization, the only
name for most of the models constructed in this way comes from their
scattering function; a few S-matrices (the most prominent is the Sine-Gordon
model) permit a perturbative relation with a Lagrangian from which these
models inherit their classical name.
This analytic representation of the permutation group called for an
algebraization in terms of a new kind of noncommuting particle operators. This
was formally achieved in \cite{Zam} and the associated algebra became known as
the Zamolodchikov-Faddeev (Z-F) algebra (after Faddeev added a missing
c-number term). For a long time the physical interpretation of these operators
remained a mystery. Since the central property of QFT is \textit{causal
localizability,} it was natural to look for a relation of these generating
operators to the localization properties of QFT. The deviation of the Z-F
commutation relations from those of the standard creation/annihilation
operators excludes pointlike localization of their Fourier transforms.
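For one scalar particle species the Z-F relations take the form (matrix-valued scattering functions in addition have to satisfy the Yang-Baxter equation)
\begin{align*}
Z(\theta_{1})Z(\theta_{2}) & =S(\theta_{1}-\theta_{2})Z(\theta_{2})Z(\theta_{1})\\
Z(\theta_{1})Z^{\ast}(\theta_{2}) & =S(\theta_{2}-\theta_{1})Z^{\ast}(\theta_{2})Z(\theta_{1})+\delta(\theta_{1}-\theta_{2})
\end{align*}
where the $\delta$-contribution is Faddeev's c-number term; for $S\equiv1$ one recovers the standard bosonic creation/annihilation operators.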
As explained in the first two sections, a particular well-suited formulation
of QFT for unravelling the spacetime significance of operators is Haag's
"local quantum physics" (LQP) which places a net of localized observable
algebras and their "charged" representation sectors (from which together with
the observables one can construct a charge-carrying "field-net") into the
center stage and assigns to pointlike covariant fields the role of generators
of localized algebras without attaching a preferential status to a particular
field (apart from conserved currents which arise from the local implementation
of global symmetries). An important step in the development of an intrinsic
algebraic description was the aforementioned observation by Bisognano and
Wichmann \cite{Bi-Wi} (and its subsequent application to the Unruh effect and
Hawking's black thermal radiation by Sewell \cite{Sew}) that localization
within a spacetime wedge has a deep relation to the Tomita-Takesaki modular
theory of operator algebras including its thermal (KMS) aspects.
The application of these ideas to the bootstrap-formfactor project resulted
\cite{AOP} in the identification of the Fourier transforms of the Z-F
operators with generators of wedge algebras for integrable models. As for free
fields, a single application of them to the vacuum state creates a one-particle
state; but different from the former, the presence of infinite vacuum
polarization clouds in interacting QFTs can in general not be avoided in their
iterative application to the vacuum. The Z-F operators turned out to be
special realizations of \textit{polarization-free-generators} (PFGs) in which
the relation between particles and fields resembles that in free field models.
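In the simplest scalar case this identification takes the schematic form (conventions and normalizations vary)
\[
\phi(x)=\frac{1}{2\pi}\int d\theta\left\{ Z^{\ast}(\theta)e^{ip(\theta)x}+Z(\theta)e^{-ip(\theta)x}\right\} ,~~p(\theta)=m(\cosh\theta,\sinh\theta)
\]
smeared with test functions supported in a wedge, $\phi(f)$ is affiliated with the corresponding wedge algebra, and a single application to the vacuum creates a polarization-free one-particle state.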
A subsequent more foundational study based on "modular localization"
\cite{BBS} revealed that there exist two types of PFGs, temperate ones, whose
domains are translation invariant and whose particlelike Fourier transforms
fulfill Z-F algebra commutation relations, and nontemperate PFGs which exist
as operator-valued distributions only on wedge-localized test
functions\footnote{Unlike quantum fields they are not (operator-valued)
Schwartz distributions.} whose analytic on-shell restrictions are only dense in
the space of all $L^{2}$-integrable wave functions. It turns out that
temperate PFGs only exist in d=1+1 and correspond to the family of integrable
models (factorizing models) of the bootstrap-formfactor program. Within this
restricted set of temperate wedge-generators, the so-called inverse problem
($S_{scat}\rightarrow QFT$) has a unique solution in that the (necessarily)
elastic $S_{scat}$ determines precisely one net of operator algebras (or a
Borchers equivalence class of local fields \cite{Haag}). The simpler
particle-field connection, which comes with temperateness and accounts for all
models which are integrable in the sense of the bootstrap-formfactor program,
led to view temperateness of PFG generators of wedge-localized algebras as
\textit{the~}foundational characterization of integrability of the associated QFT.
This new understanding of integrability in terms of localization reached its
present final touch in the work of Lechner \cite{Lech1}\cite{Lech2} who, by
solving the existence problem (the existence of a nontrivial net of compact
localized operators by intersecting wedge algebras) for many integrable
models, contributed an important constructive step to the almost 90-year
search for the mathematical control of QFT.
The resulting setting of integrable QFTs should be viewed as the
generalization of Wigner's representation theoretical approach for the
classification of one-particle spaces and a functorial construction of free
field algebraic nets in the presence of interactions (see previous section).
Wigner's motivation was that local quantum physics, as being more fundamental
than classical physics, should not be subjected to the parallelism to
classical theory implied in Lagrangian or functional quantization; in this he
echoed Jordan's 1929 dictum \cite{Jor} that QFT has to be formulated "without
classical crutches"; in the present context this is done in terms of a
concrete representation theoretical construction which is reminiscent of
Wigner's method in the absence of interactions.
The reformulation of the analytic bootstrap-formfactor project for integrable
models into a representation theoretical setting in terms of one-particle PFGs
as generators of wedge-localized operator algebras constitutes a successful
step in the adaptation of Wigner's idea to the realm of interactions. Although
nothing is known about the solution of the inverse problem within the vast
area of nontemperate (=nonintegrable) models (see later), one may ask whether
among all the QFTs which can possibly be connected with a given unitary
crossing symmetric S-matrix there is one for which the above idea of an
\textit{analytic change of order} permits a generalization.
To formulate such a problem one must first generalize the situation in
\cite{BBS} from one-particle PFGs to multi-particle states. This can be done
by using the same concepts from modular localization, since the dense set of
states of the modular Tomita operator $S$ also includes a dense set of
multi-particle states of arbitrarily high particle number which form a basis in
the Wigner-Fock Hilbert space of a QFT with a complete particle
interpretation. Whereas in a later section we will also consider interacting
conformal QFTs which are known to have no particle interpretation and lead to
a different kind of only \textit{partial} integrability, in the present
context "integrability" (without an additional specification) will always
refer to the integrability in the sense of temperate particle wedge-localized
PFGs in QFTs \textit{describing particles}. In any interacting QFT with a
complete particle interpretation in terms of an incoming free field algebra
$\mathcal{A}_{in}$ there holds the following
\begin{lemma}
(\cite{BBS}) Any state $\left\vert \psi\right\rangle \in domS_{\mathcal{A}(W)}=domS_{\mathcal{A}_{in}(W)}=dom\Delta^{\frac{1}{2}}~$can be generated from
the vacuum by operators $A~$and $A_{\mathcal{A}(W)}~$in each of the two
algebras
\begin{align}
& \left\vert \psi\right\rangle =A\left\vert 0\right\rangle =A_{\mathcal{A}(W)}\left\vert 0\right\rangle ,~A\in\mathcal{A}_{in}(W),~A_{\mathcal{A}(W)}\in\mathcal{A}(W)\label{lem}\\
& A\overset{bijec}{\leftrightarrow}A_{\mathcal{A}(W)}~closed,~affiliated~to~\mathcal{A}(W)\nonumber
\end{align}
\end{lemma}
Here the second line defines the "emulated" operators $A_{\mathcal{A}(W)}$ as
the image of wedge-localized free field operators under the bijection; the
Reeh-Schlieder property of the commutant algebras together with modular
operator theory (previous section) ensures the denseness of their domains;
their closability (for their closure the same notation will be maintained) and
the unique affiliation with $\mathcal{A}(W)$ is clinched by the appropriately
formulated commutativity of the closed $A_{\mathcal{A}(W)}$ with the Haag-dual
commutant $\mathcal{A}(W)^{\prime}=\mathcal{A}(W^{\prime}).$ It is precisely
this properly defined \textit{commutativity with the commutant of the
interacting algebras} which ensures the affiliation of the
emulats to the interacting algebra of the chosen model and not to any
other interacting algebra which happens to have the same particle content.
Modular localization of states is much weaker than localization of operators;
the two concepts only meet in the absence of interactions.
The crucial question is: how much does one have to know about the structure of
the interacting algebra $\mathcal{A}(W)$ in order to construct the emulats; is
the knowledge of the S-matrix in $J=S_{scat}J_{0}$ already enough for their
explicit construction? We know that for integrable models the particle
conserving two-particle scattering functions determine the Z-F algebra and
hence the QFT model, but can one also expect such a situation for
nonintegrable models when $S_{scat}$ describes particle creation? We will
present some consistency arguments which support such an idea, but are still
far away from a proof. These arguments involve quite novel ideas and so we
hope that the reader will find it sufficiently interesting to follow them.
Note that these unbounded emulats, whose unique existence in a given
interacting theory with a complete particle interpretation is guaranteed, do
not by themselves form an algebra, since they can neither be multiplied (even
in cases where this is possible $\left( AB\right)_{\mathcal{A}(W)}\neq
A_{\mathcal{A}(W)}B_{\mathcal{A}(W)}$), nor does the formation of emulats
commute with taking adjoints, $(A^{\ast})_{\mathcal{A}(W)}\neq(A_{\mathcal{A}(W)})^{\ast}$\footnote{In fact it is easy to see (\cite{BBS}) that $\left(
A_{\mathcal{A}(W)}\right)^{\ast}\left\vert 0\right\rangle =S_{scat}A\left\vert 0\right\rangle $}. Although the original algebra cannot be
reconstructed directly, the objects obtained from polar and spectral
decompositions of the $\mathcal{A}(W)$ affiliated emulats can be used for its construction.
The hope that $\mathcal{A}(W)$ can be reconstructed solely from $S_{scat}$ is
based on the aforementioned idea of an analytic ordering change. An
important preparatory step is the formulation of the KMS property in terms of
emulats and its relation to the crossing property of formfactors. For this it
is convenient to introduce the following notation (for simplicity we stay in
d=1+1). With
\begin{align}
& A(f_{1},..,f_{n})\equiv\,:A(f_{1})...A(f_{n}):~,\qquad A(f_{1},..,f_{n})\in\mathcal{A}_{0}(W),~supp\,f_{i}\subset W\\
& A(f_{1},..,f_{n})\left\vert 0\right\rangle =\left\vert \check{f}_{1},..\right\rangle _{in},~~A(f)=\int a^{\ast}(\theta)\check{f}(\theta)d\theta+h.c.,~~p=m(\cosh\theta,\sinh\theta)\nonumber
\end{align}
denoting Wick-products of wedge-smeared free fields, these operators applied
to the vacuum create n-particle states in momentum space wave functions
$\check{f}$ which are the mass shell restrictions of the Fourier transform of
the $f$ in terms of rapidity variables $\theta.$ With $B$ denoting a generic
operator in the free field algebra $B\in\mathcal{A}_{0}(W)$\footnote{For free
fields the operator algebra is generated by exponential Weyl operators.}$~$or
an affiliated Wick-ordered composite, the KMS relation (section 4) in a (for
later purposes) useful form for the product of three operators reads
\begin{equation}
\left\langle BA(f_{n},..,f_{l+1})A(f_{l},..,f_{1})\right\rangle \overset{KMS}{=}\left\langle A(f_{l},..,f_{1})\Delta\,BA(f_{n},..,f_{l+1})\right\rangle ,~B~composite\in\mathcal{A}_{0}(W) \label{r}
\end{equation}
where the existence of the analytically continued boost $\Delta=e^{-2\pi
K_{W}},$ ($K_{W}=$ generator of W-preserving Lorentz boost) inside the right
hand correlation functions is guaranteed by modular theory or by explicit
calculation within the free field Wick ordering formalism. Applying the
$A(f_{l},..f_{1})$ on the right hand side of (\ref{r}) to the bra vacuum, the
wave functions inside the bra vector are the complex conjugate of the
$\check{f},$ which in turn (using the analyticity resulting from the wedge
localization) are identical to the analytically continued original wave functions.
Finally, by absorbing $\Delta^{\frac{1}{2}}$ of the $\Delta$ into the analytic
continuation of the complex conjugate antiparticle wave functions (here the
localization is important), we arrive again at the original wave functions:
\begin{align}
& \int\int\check{f}_{1}(p_{1})..\check{f}_{n}(p_{n})\left\{ \left\langle 0\left\vert B\right\vert p_{1},..,p_{n}\right\rangle -\left\langle p_{1},..,p_{l}\left\vert \Delta^{\frac{1}{2}}B\right\vert p_{n},..,p_{l+1}\right\rangle \right\} +contr.=0\label{f}\\
& \left\langle 0\left\vert B\right\vert p_{1},..,p_{n}\right\rangle =\left\langle p_{1},..,p_{l}\left\vert \Delta^{\frac{1}{2}}B\right\vert p_{n},..,p_{l+1}\right\rangle =:\left\langle -p_{1},..,-p_{l}\left\vert B\right\vert p_{n},..,p_{l+1}\right\rangle ,~p_{i}\neq p_{k}\nonumber
\end{align}
where the contraction terms arise from Wick-contractions between the two
Wick-products on the left hand side of (\ref{r}). The free field distributions
inside the curly bracket are actually square integrable; so the first line
can be extended to all square integrable wave functions which then results in
the second line. It is easy to see that in the presence of antiparticles the
only change in the above relation is the assignment of the bra-momenta to
antiparticles, in short $p\rightarrow\bar{p}.$
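As a consistency check of this absorption step, consider the one-particle kinematics (a sketch for a single scalar particle): since $\Delta^{\frac{1}{2}}=e^{-\pi K_{W}}$ continues the rapidity by $-i\pi$, one has
\[
p(\theta-i\pi)=m(\cosh(\theta-i\pi),\sinh(\theta-i\pi))=-m(\cosh\theta,\sinh\theta)=-p(\theta)
\]
which is precisely the sign change of the bra-momenta appearing in the second line of (\ref{f}).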
The interacting analog of the free KMS relation for modular wedge localization is
\begin{align}
& \left\langle BA^{(1)}{}_{\mathcal{A}(W)}A^{(2)}{}_{\mathcal{A}(W)}\right\rangle \overset{KMS}{=}\left\langle A^{(2)}{}_{\mathcal{A}(W)}\Delta~BA^{(1)}{}_{\mathcal{A}(W)}\right\rangle \label{KMS}\\
& \left\langle BA^{(1)}{}_{\mathcal{A}(W)}A^{(2)}\right\rangle =\left\langle A^{(2)}{}_{\mathcal{A}(W)}\Delta~BA^{(1)}\right\rangle =\nonumber\\
& =(S_{scat}A^{(2)\ast}\Omega,\Delta^{\frac{1}{2}}BA^{(1)}\Omega),~~since~(A^{(2)}{}_{\mathcal{A}(W)})^{\ast}\Omega=S_{scat}A^{(2)\ast}\Omega\nonumber
\end{align}
Rewriting this relation in terms of emulats for multiparticle states and using
the fact that the emulats acting on the vacuum are multiparticle states, one
obtains a pre-form of a particle crossing relation in which the particle
content of the emulat in the middle of the left hand side is still unknown
\begin{align}
& \left\langle 0\left\vert B(A^{(1)}(\check{f}_{1},..,\check{f}_{l}))_{\mathcal{A}(W)}\right\vert \check{f}_{l+1},..,\check{f}_{n}\right\rangle =\int\check{f}_{1}(p_{1}),..,\check{f}_{l}(p_{l})~_{out}\left\langle \bar{p}_{n},..,\bar{p}_{l+1}\left\vert \Delta^{\frac{1}{2}}B\right\vert \check{f}_{1}..\check{f}_{l}\right\rangle _{in}~\label{c}\\
& \int..\int\frac{d^{3}p_{1}}{2p_{10}}..\frac{d^{3}p_{n}}{2p_{n0}}\check{f}_{1},..,\check{f}_{n}\{\left\langle 0\left\vert B(A^{(1)}(p_{1},..,p_{l}))_{\mathcal{A}(W)}\right\vert p_{l+1},..,p_{n}\right\rangle -\nonumber\\
& -\left\langle \bar{p}_{n},..,\bar{p}_{l+1}\left\vert \Delta^{\frac{1}{2}}B\right\vert p_{1}..p_{l}\right\rangle \}~=0\nonumber
\end{align}
In the sequel it will be shown how an extension of the idea of "analytic
ordering change" solves this problem and also suggests a formula for
formfactors of emulats as bilinear forms between multi-particle states. As in
the integrable case, we assume that the vacuum formfactor is locally analytic.
The presence of cuts resulting from inelastic multiparticle thresholds
prevents such formfactors from being boundary values of meromorphic functions as in
the integrable case, but these threshold cuts do not force them to be worse
than locally square integrable. In this case we may pass from wedge-localized
analytic wave functions to n-particle wave functions with ordered supports and
obtain
\begin{align}
& \left\langle 0\left\vert B(A^{(1)}(\theta_{1},..,\theta_{l}))_{\mathcal{A}(W)}\right\vert \theta_{l+1},..,\theta_{n}\right\rangle _{in}\equiv\left\langle 0\left\vert B\right\vert \theta_{1},...,\theta_{n}\right\rangle _{in}\label{ana}\\
& for~~\theta_{1}>\theta_{2}>....>\theta_{n}\nonumber
\end{align}
without having to know anything about the result of analytic order changes
within the $\theta_{1},..\theta_{l}~$cluster and relative to the remaining
$n$-$l$ particle cluster. In contradistinction to the integrable case,
analytic changes through the multi-particle threshold cuts will inevitably be
path-dependent, in particular there will be no analytic representation of the
permutation group\footnote{A similar case in x-space occurs if one extends a
d=1+2 Wightman setting to fields with braid group statistics.}.
Our free field illustration (\ref{f}) suggests that the desired algebraic
structure should reduce to the Wick-formalism in case $S_{scat}=1$ and to the
integrable analytic formalism for purely elastic 2-particle scattering. So we
are looking for a formula which describes the action of a PFG (a one-particle
emulat for simplicity) on an n-particle state in terms of a sum of terms in
which the $\theta$-dependent creation- and annihilation- components pass
through a particle cluster in order to arrive at its natural ordered position.
Assuming $...\theta_{k-1}>\theta_{k}>\theta>\theta_{k+1}>...$ we seek a
formula for the action of the distributional rapidity space creation component
$C(\theta)$ of the PFG $A(\check{f})_{\mathcal{A}(W)}$ of the type
\begin{align}
& C(\theta)\left\vert \theta_{1},..,\theta_{k},\theta_{k+1},..,\theta_{n}\right\rangle =\label{action}\\
& \sum_{l}\int d\vartheta_{1}..\int d\vartheta_{l}F_{\theta_{1}..\theta_{k},\theta}(\vartheta_{1},..,\vartheta_{l})\left\vert \vartheta_{1},..,\vartheta_{l},\theta,\theta_{k+1},..,\theta_{n}\right\rangle \nonumber
\end{align}
where the $F$ only depends on the $\theta_{i}$ which are bigger than $\theta$
(i.e. which have been passed to achieve the natural order). If $\theta$ is
already larger than all the other $\theta_{i}$ in the state, the creation
part simply adds a particle. If $C(\theta)$ has to pass through a k-cluster to
arrive at its ordered position the result is required to be of the above form
whereas the annihilation component $C(\theta+i\pi)$ of $A(\check{f})_{\mathcal{A}(W)}$ leads to a delta function $\delta(\theta-\theta_{k})$
multiplied with an integrand $F_{\theta_{1}..\theta_{k-1},\theta+i\pi}(\vartheta_{1},..,\vartheta_{l})\left\vert \vartheta_{1},..,\vartheta_{l},\theta_{k+1},..,\theta_{n}\right\rangle .$ The sum over $l$ takes into
consideration that processes of passing through particle clusters create and
annihilate particle states of arbitrarily high particle number, whereas the
$\theta_{k+1}..\theta_{n}$ on the right of the k-cluster remain unchanged.
The second requirement is that $F$ should not contain more detailed
information about the interacting algebra $\mathcal{A}(W)$ than those
contained in $S_{scat}$ which enters the wedge localization as a relative
modular invariant. If the emulats depend on more detailed properties of
$\mathcal{A}(W)$, they remain outside the range of the present method.
The still undetermined $F$ will now be specified in terms of the "grazing
shot" amplitude
\begin{align}
F_{\theta_{1}..\theta_{k},\theta}(\vartheta_{1},..,\vartheta_{l}) & =\sum_{s}\int d\chi_{1}..\int d\chi_{s}S^{\ast}(\chi_{1},..,\chi_{s}\rightarrow\vartheta_{1},..,\vartheta_{l})\cdot\\
& \cdot S(\theta_{1},..,\theta_{k},\theta\rightarrow\chi_{1},..,\chi_{s},\theta)\nonumber
\end{align}
which consists of a product of a scattering amplitude, in which one particle
with rapidity $\theta~$scatters together with $k$ other particles in such a way
that $\theta$ itself remains unchanged (second line). This is multiplied with the complex conjugate
of a second amplitude whose purpose is to compensate all processes which would
have occurred even in the absence of the grazing shot rapidity $\theta.$ Hence
without the presence of $\theta$ nothing happens, i.e. $F$ reduces to (particle
matrix elements of) the unit operator.\ This construction allows an extension
from one-particle PFGs to multi-particle emulats.
In case of integrable models the S-matrix does not change the particle number
and the grazing shot S-matrix $F$ (\ref{action}) reduces to a multiplication
with a product of elastic S-matrices \cite{Kar2},
\begin{equation}
S(\theta-\theta_{1})..S(\theta-\theta_{k})
\end{equation}
where in the case of the action of the annihilation components one S-factor is
replaced by a delta contraction. In fact the whole idea behind the
construction is to obtain a formula for a general S-matrix which in the
integrable case passes to the known expression.
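Concretely, in the natural ordering $\theta_{1}>..>\theta_{k}>\theta>\theta_{k+1}>..>\theta_{n}$ the creation component then acts as (a sketch consistent with (\ref{action}) and the product formula above):
\[
C(\theta)\left\vert \theta_{1},..,\theta_{n}\right\rangle =\prod_{i=1}^{k}S(\theta-\theta_{i})\left\vert \theta_{1},..,\theta_{k},\theta,\theta_{k+1},..,\theta_{n}\right\rangle
\]
which is the familiar Z-F type action; for $S\equiv1$ it degenerates to the free creation operator.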
In the nonintegrable case this presentation is extremely formal since we know
from \cite{BBS} that nontemperate PFGs are only meaningful as wedge-localized
$A(f)_{\mathcal{A}(W)}$ acting on wedge-localized states. Hence it is better
to think of the PFGs as bilinear forms between particle states
\begin{equation}
\left\langle \theta_{1}^{\prime},..\theta_{m}^{\prime}\right\vert
C(\theta)\left\vert \theta_{1},..\theta_{n}\right\rangle
\end{equation}
and restore the localizing wave functions which one needs for passing to
operators $A(f)_{\mathcal{A}(W)}$ after having computed these formfactors of
the $C(\theta)$'s.
The rapidities are uniformization variables which remove the elastic
threshold, so that formfactors of integrable models are meromorphic functions
in the multi-$\theta$ plane. The general analytic ordering picture assumes
that the only singularities are cuts from higher inelastic thresholds which
are locally square integrable. But if locally square integrable rapidity wave
functions are admitted, we may extend the validity of the KMS relation from
wedge localized wave functions to wave functions with ordered square
integrable supports. In this way the analytic ordering picture connects the
formfactor crossing identity (including its analytic property) to the KMS
crossing identity for formfactors (\ref{ana})
\begin{align}
\left\langle 0\left\vert B\right\vert p_{1}...p_{n}\right\rangle _{in} & =~_{out}\left\langle -\bar{p}_{l+1},..,-\bar{p}_{n}\left\vert B\right\vert p_{1},..,p_{l}\right\rangle _{in}\\
for~(\theta_{1},..,\theta_{l}) & >(\theta_{l+1},..,\theta_{n})\nonumber
\end{align}
It is remarkable that the interacting crossing identity just looks like its
free field counterpart, apart from the fact that the bra-vectors refer to
outgoing particles. Its derivation is not affected by the fragile conceptual
status of our grazing shot construction for the action of emulats on
multi-particle states.
Its origin from the KMS property of wedge-localization shows that this
property in the center of particle theory shares its conceptual roots with
those of the thermal manifestation of localization (the Einstein-Jordan
conundrum of subvolume fluctuations, the Unruh Gedankenexperiment and Hawking
radiation from black hole event horizons). In particular it has no relation to
the crossing in the dual model and string theory which results from crossing
properties of (Mellin transforms of) conformal correlation functions (section
4). String theory is the result of a fundamental misunderstanding of the
subtleties of causal localization.
Although the grazing shot Ansatz reduces to the Wick contraction formula in
the absence of interactions and to relations obtained from the Z-F algebra in
the integrable case, these are only rather weak consistency requirements. The
crucial step in establishing its correctness is the verification of
wedge-locality
\[
\left[ JA(f)_{\mathcal{A}(W)}J,A(g)_{\mathcal{A}(W)}\right] =0,~~supp\,f,g\subset W
\]
which, apart from the trivial case of vacuum and one-particle matrix elements,
the author was unable to do. This property may also turn out to be useful for a
perturbative determination of PFGs $A(f)_{\mathcal{A}(W)}$, which one
expects to be analogous to the iterative use of causality in the Epstein-Glaser
iteration for pointlike fields. If the divergence of the on-shell perturbation
series would be related to the pointlike singular character of fields, one
expects that on-shell perturbation theory for wedge-local operators should converge.
Hence the grazing shot construction still hangs in the air and it is presently
not worthwhile to present this idea in more detail. In fact the reason why
this is mentioned here at all is that the author firmly believes that even
incomplete or failed attempts on important problems in particle physics should
not go unmentioned; reporting on them is not less important than presenting
established facts. The proof of mathematical existence of nontrivial models
outside the narrow setting of integrability and the discovery of controlled
approximations remains the paramount problem of QFT, even almost 90
years after its inception. The resounding \textit{observational success}
resulting from low orders of the (unfortunately) \textit{diverging}
perturbative series has in no way disburdened QFT from its
conceptual-mathematical fragility; the idea that the low orders are an
asymptotic solution in the limit of vanishing interaction strength remains an
unproven conjecture, even after Dyson pointed to this problem more than half a
century ago.
In the integrable case the inverse problem has a unique solution, but there is
no argument that excludes the possibility that those elastic S-matrices also
admit \textit{non-integrable solutions}. The lack of a uniqueness proof even
includes the absence of interactions in the sense of $S_{scat}=1,$ i.e. the
question whether besides the massive free field, the Hilbert space can
accommodate other local nets with the same representation of the Poincar\'{e}
group and the TCP operator (and hence the same modular data)\footnote{In
\cite{1977} it was shown that as a consequence of the Huygens principle, conformal
QFT leads to the uniqueness of the inverse problem for $S_{scat}=1.$}.
Without additional restrictions it is in fact quite easy to construct a
continuous infinity of covariant wedge-localized algebras. Any unitary
operator in $H$ which preserves the vacuum and commutes with the
wedge-preserving Lorentz boost and the reflection $J_{0}$ on the edge of the
wedge, of the form
\begin{equation}
V=e^{i\eta},~~\eta=\sum\frac{1}{n!}\int\tilde{\eta}(x_{1},..,x_{n}):A(x_{1})..A(x_{n}):dx_{1}..dx_{n}
\end{equation}
will lead to a net of wedge algebras with $S_{scat}=1\ $which, apart from the
special case that $V$ implements an automorphism of $\mathcal{A}_{0}(W),$ is
inequivalent to the net generated by a free field. The restrictions on the
coefficient functions in the rapidity parametrization $\tilde{\eta}(\theta
_{1},..\theta_{n})$ for d=1+1 resulting from the shared modular data are very
mild: the coefficient functions $\tilde{\eta}~$can only depend on $\theta
$-differences, the $J_{0}$-invariance leads to a reality condition and
$V\left\vert 0\right\rangle =\left\vert 0\right\rangle $ requires the absence
of terms with only creation operators.
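A minimal illustration of such a generator satisfying all three constraints (a sketch; this is only the simplest quadratic entry of the series, not a characterization of the general case):
\[
\eta_{2}=\int\int\tilde{\eta}(\theta_{1}-\theta_{2})\,a^{\ast}(\theta_{1})a(\theta_{2})\,d\theta_{1}d\theta_{2},\qquad\tilde{\eta}(\theta)=\overline{\tilde{\eta}(-\theta)}
\]
It depends only on the rapidity difference, annihilates the vacuum (no purely creative term), and the stated reality condition renders $\eta_{2}$ selfadjoint and hence $V=e^{i\eta_{2}}$ unitary.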
The passing from a fixed $W$ to the net of wedges $W_{x}$ ($x=$ apex) is
achieved by applying translations $U(x)$
\begin{equation}
\mathcal{A}(W_{x})\equiv V_{x}\mathcal{A}_{0}(W_{x})V_{x}^{\ast},~~V_{x}=U(x)VU(x)^{\ast}
\end{equation}
Unlike the previous method which was based on the temperate/nontemperate
dichotomy of emulats, it would be very difficult to separate integrable models
with $S_{scat}=1$ from this huge set of possibilities\footnote{The consistency
of this method with the emulation construction may lead to further
restrictions.}.
This argument only concerns the connection between $S_{scat}=1$ and the family
of wedge-localized local nets. In order to arrive at the full net, one still
has to intersect wedge algebras in order to obtain compact localized double
cone algebras $\mathcal{A(D)}$ and one knows from the work of Lechner
\cite{Lech2} that the requirements on the \textit{cardinality of phase
space degrees of freedom}, which insure the nontriviality of intersections,
are extremely restrictive; in fact most wedge nets will not possess nontrivial
double cone nets. This still leaves the possibility that the looked-for
uniqueness may arise in the problem of forming intersections.
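For orientation, the double cone algebras referred to here arise as intersections of wedge algebras (the standard definition; $\mathcal{D}$ a double cone):
\[
\mathcal{A}(\mathcal{D})=\bigcap_{W\supset\mathcal{D}}\mathcal{A}(W)
\]
so the nontriviality of these intersections is precisely what the nuclearity-type requirements are designed to control.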
The aim of the present approach based on wedge-localization is closely related
to the old S-matrix setting; it shares with the ideas behind the abandoned
S-matrix bootstrap and Mandelstam's later attempts (to use spectral
representations for the description of analytic properties of elastic
scattering amplitudes\footnote{It is however incompatible with dual model and
string theory constructions and their S-matrix interpretations.}) its
proximity to laboratory observables. The S-matrix (formfactor of the identity)
and formfactors are on-shell objects (correlations restricted to the
mass-shell) which are directly accessible to experiments. They contain much
more information than Lagrangians which lead to a unique perturbative series.
The constructions in this section are thought of as a first step to construct
the full (off-shell) QFT.
This makes them \textit{top-to-bottom approaches} in the sense that one starts
with a list of well-understood properties, which one expects a QFT to
describe, and then sets up the mathematics to understand their consequences.
Pointlike localized fields and their correlations are far removed from
particles and their on-shell manifestations; they form the
\textit{bottom-to-top} setting of Lagrangian or functional quantization; in a
mass-shell based top-to-bottom approach they are only expected to appear at
the end (in case they are still needed). In quantization approaches one
follows a translation dictionary which forces a more fundamental QFT to follow
formal rules which correspond to those of a less fundamental classical theory.
As a result, the physical interpretation and content only emerges at the end
of perturbative calculations and there is hardly any mathematical control
over what one is doing. A top-to-bottom approach such as the present attempt cuts
all classical connections and replaces them by a tighter
conceptual-mathematical control.
Perhaps the conceptually most useful analogy to what is done here is to
consider it as an extension of the Wigner representation theoretical approach
to the presence of interactions. The functorial connection between Wigner
particle and free fields breaks down in the presence of any
interaction\footnote{This is different in QM where the interacting
Schr\"{o}dinger equation remains functorially related to its operator Fock
space formulation.}. The relation between particles and their free fields to
interacting wedge-localized emulats can be viewed as its substitute. This much
weaker particle-field connection just exposes the intrinsic difficulties of
interactions which prevented the construction of d=1+3 interacting fields. It
is certainly premature to think of attaching a catchy name to this still very
tentative attempt at getting a nonperturbative grip on non-integrable
QFTs\footnote{A warning example is string theory whose name has nothing to do
with its content.}.
\section{Conformal integrability}
There exists a different notion of "kinematic" integrability which is not
directly related to the dynamics of a model, but rather refers to a discrete
combinatorial structure of its countable superselected localizable charge
sectors which the DHR superselection theory \cite{Haag} uniquely associates to
a local (neutral and invariant under inner symmetries) observable algebra.
More explicitly, it refers to the structure of the set of equivalence classes
of localizable representations of the observable net $\{\mathcal{A}(\mathcal{O})\}_{\mathcal{O}\in\mathbb{R}^{4}}$ given in its vacuum representation. In the case of
massive theories with Bose/Fermi statistics this structure turns out to have
the form of a tracial state on a discrete (type II$_{1}$) operator algebra
which contains the infinite permutation group algebra whose representation
theory is responsible for the particle/field statistics. In fact this algebra
turns out to be the dual of a compact "internal symmetry" algebra which
commutes with the Poincar\'{e} group \cite{Haag} \cite{DR}. The internal
symmetry group acts on a larger uniquely determined "field algebra" which
contains the observable algebra as a fixed point algebra under the action of
the symmetry group \cite{DR}. This construction demystifies the concept of
inner symmetries, which dates back to Heisenberg's phenomenological
introduction of isospin into nuclear physics. Its conceptual origin in QT is
the principle of causal localization of quantum observables and their
superselected localized representations which can be combined into a field
algebra on which a compact group acts\footnote{To put it bluntly: compact
groups arise from quantum causal localization in the presence of mass
gaps.}. Although it has no counterpart in classical physics, for the application
of the method of Lagrangian or functional quantization it is necessary to read
this property back into the classical setting.
In previous sections we have seen that causal localizability results in vacuum
polarization, thermal manifestations and associated intrinsic ensemble
probabilities. To this list the DHR theory adds the superselection structure and the inner group
symmetries of field algebras, in which the observable algebras are embedded as
fixed-point algebras under the action of the internal symmetry.
The relation between local observables and their extension into a field
algebra becomes particularly interesting in theories which cannot be described
in terms of Lagrangian quantization; the most prominent family of such
theories are conformally invariant QFTs\footnote{The indirect way of
interpreting conformal theories as massless limits of perturbatively
accessible massive models was only successful in the case of the massive
Thirring model for which the perturbative Callan-Symanzik equation comes with
a vanishing beta function \cite{Lo-Go}.}. In that case the observable
algebras obey the Huygens principle i.e. the vanishing of (graded) commutators
of pointlike fields also for timelike distances. Apart from chiral conformal
theories which live on lightlike lines so that the distinction between space-
and time-like disappears and for which the observable algebras (current or
energy-momentum algebras) as well as the generators of their representations
(braid group commutation relations) in many typical cases can be explicitly
constructed, the "Huygens" observable algebras in higher dimensions are more
complicated and do not seem to be integrable.
However the anomalous spectrum of scale dimensions in conformal theories seems
to be susceptible to systematic classification. This is because it is defined
in terms of the phases of a unitary operator, the generator of the center $Z$
of the conformal covering group. Whereas the "Huygens observable algebra"
lives on the compactified Minkowski spacetime $M^{c}$, the field algebra,
which is generated by pointlike fields with anomalous dimensions, is localized
on its universal covering $\widetilde{M^{c}}$\footnote{Equivalently they can
be interpreted as operator-valued distributional sections on $M^{c}.$}$.$ As
in the simpler case of chiral models, where the nontrivial center is closely
related to the issue of plektonic statistics (braid-group commutation
relations of fields), one is accustomed to view such properties as being more
on the kinematic than on the dynamic side, although such distinctions become
somewhat blurred outside that kind of QFT which is used for the description of
particles to which conformal QFT definitely does not belong. In the sequel we
will argue that the spectrum of anomalous dimensions is indeed accessible
to rigorous classification and that therefore the terminology kinematical (or
better "partial") integrability of conformal QFTs is quite appropriate.
Integrability without additional specification will be reserved for the full
dynamical integrability which is limited to d=1+1 as explained in the previous section.
The method to investigate partial integrability in conformal QFT is
nonperturbative and, as the constructive approach to integrable massive
models, uses representation theory, in particular the representation
theoretical methods of the DHR\ superselection theory. Already for the
low-dimensional chiral models the role of the infinite permutation group
$\mathbf{P}_{\infty}$ is taken over by the much richer representation theory
of the braid group $\mathbf{B}_{\infty}.$ As the DHR theory led to tracial
states ("Markov traces") on the infinite permutation group, chiral conformal
theories require a classification of representations associated with tracial
states on $\mathbf{B}_{\infty}.$
Tracial states on combinatorial algebras in a much more general context are an
important tool in Vaughan Jones' subfactor theory \cite{Jones}. The partial
integrability of the braid group representation structure, even in cases where
it was not possible to compute n-point correlation functions of observable
fields\footnote{A notable exception is the chiral Ising QFT \cite{Re-Sch}.}
led to a mutually interesting and fruitful connection with subfactor theory
(which may be viewed as a vast extension of group representation theory).
Whereas $d\geq1+2$ interacting models with a complete particle interpretation
are always non-integrable (previous section), their superselection structure
which forms a discrete tracial algebra, is by definition integrable since
tracial states on words in the extended group algebra of the permutation group
$\mathbf{P}_{\infty}$ or braid group $\mathbf{B}_{\infty}$ are computable by
combinatorial methods. This is of particular interest in higher dimensional
conformal theories for which the DHR superselection setting suggests the relevance
of a braid-permutation group $\mathbf{PB}_{\infty}$ whose
representation-theoretical studies are still in their infancy \cite{Fenn}.
To investigate this partial integrability one needs to know some structural
properties of conformal QFTs, in particular that anomalous dimensions are
labels of superselection sectors of observable algebras. The braid group and
the spacetime covering aspect arise through the time-like Huygens structure,
whereas the permutation group enters as usual through spacelike commutativity;
but the $\mathbf{PB}_{\infty}$ group is not simply a product of its two
subgroups! A detailed knowledge about local observables is not required; it is
not necessary to know what is "inside" each superselection sector, the rules
of their compositions and decompositions suffice. The combinatorial algebras
of the Hecke or Birman-Wenzl type are typical algebraic structures which arise
in this context \cite{R-S}.
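As an illustration of the combinatorial structures meant here: the Hecke algebra is generated by the Artin braid relations together with a quadratic relation (a sketch in one common convention; normalizations of the parameter $q$ vary in the literature):
\[
b_{i}b_{i+1}b_{i}=b_{i+1}b_{i}b_{i+1},\qquad b_{i}b_{j}=b_{j}b_{i}~~(\left\vert i-j\right\vert \geq2),\qquad(b_{i}-q)(b_{i}+1)=0
\]
tracial (Markov) states on such algebras are then computable purely combinatorially, which is the precise sense of "partial integrability" used here.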
There have been some misconceptions in the recent literature about the status
of conformal QFT within particle theory\footnote{In many contemporary articles
the fact that the tree-approximation of conformal theory (isomorphic to the
classical structure) allows a restriction to a zero mass shell has been used to
incorrectly allege that they can describe quantum particles in the sense of
scattering theory and the S-matrix.}. Conformal QFT was first proposed in the
beginning of the 60s, but as a result of their remoteness from particle theory,
in particular scattering theory, the interest in them waned quickly. Most of
the intuitive arguments against their direct use in particle theory were later
made rigorous. Here are some of them:
\begin{enumerate}
\item A conformal field with canonical (free field) short distance behavior is
inevitably identical to a free field theory \cite{old}.
\item A conformal QFT cannot be perturbatively constructed from free massless
fields and the perturbative behavior of massive renormalizable d=1+3 models
(contrary to some models in d=1+1\footnote{The most prominent exception is the
massive Thirring model. In fact the suspicion that $\beta(g)\equiv0,$ which led
to the derivation of the Callan-Symanzik equation to all orders \cite{Lo-Go}, came
from the observation of softness in the limit m$\rightarrow0.$}) is not "soft" in a
sense which would allow to take a massless limit within the perturbative
Lagrangian setting.
\item The LSZ scattering limits of interacting conformal fields
vanish\footnote{The Hilbert space positivity forces the K\"{a}ll\'{e}n-Lehmann
spectral measure to have a singularity which is milder than a mass-shell delta
function.}.
\end{enumerate}
The proof for 1. and 3. is actually quite simple; the first follows from the
fact that the canonicity of the scale dimension requires free field behavior at
short distances, which in conformal theories implies the freeness of the field
itself. The third is a consequence of the fact that the increase of the short
distance dimension above its smallest possible value allowed by positivity
(that for a free field) automatically reduces the singularity at the zero mass
shell $p^{2}=0$ which then is too weak to match the dissipating behavior of
wave packets which would be necessary in order to arrive at a nontrivial LSZ
limit. The QED prescription, which interprets photon-inclusive cross sections
as the observable manifestations of charged particles, does not work for
assigning a particle interpretation to conformal theories.
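The quantitative content of point 3 can be sketched as follows (assuming a scalar field of scale dimension $d$ in d=1+3; constants and possible logarithms at integer $d$ are suppressed):
\[
\left\langle \phi(x)\phi(0)\right\rangle \sim\frac{1}{(x^{2})^{d}}\quad\Longrightarrow\quad\rho(p^{2})\sim\theta(p^{0})(p^{2})^{d-2}
\]
where the limiting free field case $d=1$ corresponds to the mass-shell $\delta(p^{2})$; for $d>1$ the singularity at $p^{2}=0$ is too weak to yield a nonvanishing LSZ limit.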
The airy use of conformal QFTs in many recent publications shows that particle
physics is in the process of losing its history, since none of the old
arguments showing the problematic relation of conformal QFT with particle
theory has been addressed. It is not forbidden to think of conformal
theories as resulting from massive theories in a hypothetical limit in which
all particle creation thresholds fall on top of each other, but as long as
there is no such massive theory with a perturbative Callan-Symanzik equation
with $\beta=0$ (see remarks in section 3) this is not of much use.
Although conformal theories play no direct role in particle theory, their
apparent mathematical simplicity makes them ideal "theoretical laboratories"
for the study of structural problems of QFT. Conformal transformations relate
compact to non-compact regions and in this way extend the concept of modular
localization. Low dimensional chiral theories were the first theories for
which representation theoretical nonperturbative methods led to proofs of
existence as part of their explicit construction, before such methods were also
successfully applied to massive integrable models (previous section). They
played an important role in the adaptation of the Tomita-Takesaki modular
theory of operator algebras to problems of modular localization, and led to a
fruitful meeting of minds between algebraic QFT and subfactor theory.
An important step in the history of conformal QFT was the understanding of the
role of the Huygens principle in the definition of conformal observables and
anomalous dimension-carrying charged fields which led in 1975 to a conformal
decomposition theory \cite{S-S}\cite{S-S-V}\cite{Lu-Ma}. There were two
viewpoints about conformal invariance; one can either say that conformal
fields "live" (are univalued) on \textit{the universal covering of the
compactified Minkowski spacetime }$\widetilde{M^{c}},$ or that they are
distribution-valued sections on $M^{c}.$ In the first case \cite{Lu-Ma} (which
probably goes back to Irving Segal) one encounters infinitely many "heavens"
above and "hells" below $M^{c}~$and there exists a \textit{generator of the
center} of the universal conformal covering group $Z\in\widetilde{SO(4,2)
$\ (for d=1+3) such that $Z^{n},$ $n~integer~$numbers those heavens and hells
and $n=0$ corresponding to the compactification $M^{c}$ of our living
spacetime. The center is a certain \textit{conformal rotation} at the angle
$2\pi$ whose spectrum results in the formula $specZ=\left\{ e^{i2\pi
d_{\alpha}}\right\} $ where $d_{\alpha}$ runs over the (anomalous)~conformal
field dimensions.
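Two standard illustrations (not specific to the cited construction): a free massless scalar has $d=1$ and is therefore blind to the center, living on $M^{c}$ itself, whereas a free Dirac field has $d=3/2$ and picks up the phase
\[
e^{i2\pi\cdot\frac{3}{2}}=-1
\]
so that it is univalued only on the double covering of $M^{c}$, in accordance with the remark on semiinteger dimensions further below.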
There is an analogy of this situation to the physics of plektons in d=1+2. In
this case the Poincar\'{e} group $\mathcal{P}$ has an infinite covering
$\widetilde{\mathcal{P}},$ but the spacetime has none. The Wigner-Bargmann
representation theory of positive energy representations in d=1+2 however
creates a kind of covering due to the semiinfinite string-like nature of the
plektonic wave functions \cite{Mund3}. Apart from that difference the
anomalous spatial spin corresponds to the anomalous dimension and the
plektonic statistics (anyons are abelian plektons) resembles an imagined
"timelike braided exchange", with the resulting statistical phase \cite{Haag}
corresponding to the eigenvalue of $Z.$ For a long time it was suspected that
there is some kind of free or at least integrable plektonic QFT associated
with the Wigner-Bargmann representation, but meanwhile this idea has been
disproven \cite{B-M}.
The prerequisite for conformal observables is that their scale dimension is
integer\footnote{For semiinteger dimension as they already occur for free
spinors it is necessary to take the double covering of $M_{c}.$ These fields
fulfill an extended Huygens principle on the double covering.}; typically
their pointlike generators are conserved currents or the energy-momentum
tensor which result from the "localization" of global symmetries; but in
principle any field with integer dimension satisfies the Huygens property and
hence can be included into the observables. Such local fields live on $M^{c}$
and commute with the center $Z~$of the conformal group $\widetilde{SO(4,2)}.$
As a result their commutators are concentrated on the mantle of the light cone
which in turn implies that their correlation functions are multivariable
rational analytic functions \cite{Ni-Re-To}. Despite their simple appearance,
nontrivial d=1+3 Huygens fields have not yet been constructed and the problem
of their integrability remains unresolved.
The anomalous dimensions play the role of generalized superselected charges
carried by the anomalous dimensional fields\footnote{The analogy works better
with squares of charges since the matter-antimatter charge compensation has no
counterpart in the composition of anomalous dimensions.}. The application of the
spectral decomposition theory with respect to the center $Z$ leads to the
following decomposition of fields
\begin{align}
A(x) & \rightarrow A_{\alpha,\beta}(x)\equiv P_{\alpha}A(x)P_{\beta},~~Z=\sum_{\alpha}e^{i2\pi d_{\alpha}}P_{\alpha}\\
& A_{\alpha,\beta}(x)B_{\beta,\gamma}(y)=\sum_{\beta^{\prime}}R_{\beta,\beta^{\prime}}^{(\alpha,\gamma)}(x,y)B_{\alpha,\beta^{\prime}}(y)A_{\beta^{\prime},\gamma}(x)~\nonumber
\end{align}
The R-matrices depend discontinuously on spacetime; they are locally constant
but different for time- and space-like separations. For time-like separations
the distinction positive/negative timelike is topologically similar to the
left/right distinction in chiral theories.
These decompositions appeared first for abelian (anyonic) R-matrices in
\cite{S-S}; only after the path-breaking work in the 80s by Belavin, Polyakov
and Zamolodchikov \cite{BPZ} were they generalized to the nonabelian braid
group (plektonic) representations which appear in the exchange algebras of
chiral models \cite{R-S}\cite{F-G}. Although (apart from free fields) chiral
conformal field theories do not describe particles, it is customary to refer
to the quanta, which carry a discrete representation of the conformal rotation
and lead to R-matrix commutation relations of the Artin braid group, as "plektons".
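In the abelian (anyonic) special case the R-matrices of the exchange algebra reduce to statistics phases, schematically (a sketch; the sign depends on the orientation of the exchange):
\[
\Psi_{a}(x)\Psi_{b}(y)=e^{\pm2\pi i\theta_{ab}}\,\Psi_{b}(y)\Psi_{a}(x)
\]
the nonabelian (plektonic) case replaces the phase by the R-matrices appearing in the decomposition above.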
The topological similarity of the positive and negative time-like Huygens
region leads one to expect the anomalous dimension spectra to be connected
with braid group representations. But since there is also the requirement from
spacelike commutation which leads to the nontrivially combined $\mathbf{BP}_{\infty}$ group (the "words" in $\mathbf{B}_{\infty}$ intertwine nontrivially
with those in $\mathbf{P}_{\infty}$), the kind of $\mathbf{B}_{\infty}$
representations as they occur in chiral theories (Hecke-, Birman-Wenzl
algebras,..) are not expected to re-appear in this form in higher dimensional
CFT. Some of the new problems were mentioned in \cite{old}.
It is not difficult to write down the defining relations between the $b_{i},t_{i}$
$~$generators of $\mathbf{BP}_{\infty}~$\cite{Fenn}$.$ The $Z$ spectrum of any
4-dim. conformal model should belong to one representation of these relations.
But unlike in the chiral case, where the exponential Bose field model was
available a long time before the later systematic construction of families of
chiral models in the work of \cite{BPZ}, there exists presently no
illustrative nontrivial example; the conformal invariant \textit{generalized
free field} (which results from the AdS free field by applying the AdS-CFT
correspondence) is too far away from physical fields\footnote{Its abundance of
degrees of freedom leads to the before-mentioned pathological timelike
causality properties and the absence of reasonable thermodynamic behavior.} in
order to be of much interest. Recently proposed dimensional spectra on the
basis of analogies with those of transfer matrices of Ising lattice models
\cite{Bei} are not supported by the Huygens structure of conformal observables
and their expected dimensional spectra from the $\mathbf{BP}_{\infty}$
representation structure.
The group theoretical origin of the simplification in d=1+1 is the
factorization of its conformal group $SO(2,2)=SL(2,R)\times SL(2,R)~$which
leaves the 3-parametric Moebius group as the space-time symmetry of a chiral
theory on $\mathbb{R}$ or its compactification $S^{1}~$(with the possibility
to extend it to Diff(S$^{1}$)). The group theoretical factorization is
followed by a decomposition of the d=1+1 conformal observables into its chiral
components living on separate light rays. The chiral theories have proven to
be the most susceptible to the classification and construction of their
superselected representation sectors and sector-creating plektonic fields; in
particular for "rational" models (i.e. models with a finite number of
representations) many explicit constructions are available \cite{Ka-Lo}.
The first illustrative model for the decomposition theory was the exponential
Boson field \cite{S-S}. In this case the analogy of anomalous dimension with
superselecting charges takes a very concrete form. In a somewhat formal way of
writing:
\begin{align}
j(x) & =\partial_{x}V(x),~~\left\langle j(x)j(x^{\prime})\right\rangle \sim\frac{1}{(x-x^{\prime}+i\varepsilon)^{2}}\\
\Psi^{(q)}(x) & =e^{iqV(x)},~[Q,\Psi^{(q)}(x)]=q\Psi^{(q)}(x),~Q=\int j(x^{\prime})dx^{\prime}\nonumber\\
\Psi^{(q)}(x) & =\sum_{q^{\prime}-q^{\prime\prime}=q}\Psi_{q^{\prime},q^{\prime\prime}}^{(q)}(x),~~\Psi_{q^{\prime},q^{\prime\prime}}^{(q)}(x)\equiv P_{q^{\prime}}\Psi^{(q)}(x)P_{q^{\prime\prime}}\nonumber
\end{align}
In the last line the $P_{q^{\prime}}$ are the projectors onto the continuous
subspaces $H_{q^{\prime}}$ where $q^{\prime}~$runs over all continuous
superselected charge values in a nonseparable Hilbert space. This model
belongs to the class of non-rational chiral models, but by enlarging the
observable algebra to include $\Psi^{(q)}(x)$ fields with $q$'s leading to
integer scale dimensions $q^{2}$
\begin{equation}
U(\lambda)\Psi^{(q)}(x)U(\lambda)^{\ast}=\lambda^{q^{2}}\Psi^{(q)}(\lambda x)
\end{equation}
the number of superselected sectors becomes finite (quantized charges) and the
resulting model is "rational" and integrable.
The situation becomes especially interesting for an n-component current
algebra
\[
j_{k}(x)=\partial_{x}V_{k}(x),~k=1,..,n,~~\Psi^{(\vec{q})}(x)=e^{i\vec{q}\cdot\vec{V}(x)}
\]
In that case the maximal local extensions of the current algebra are
classified in terms of integer n-dimensional lattices and their superselected
sectors are characterized in terms of their dual lattices. In this setting the
selfdual lattices correspond precisely to situations with a trivial
superselection structure (the vacuum sector is the only sector). Such
selfdual lattices are related to the largest exceptional finite
groups, the most mysterious among them being known as the "moonshine" group. The
fact that quantum localization leads to such subtle group theoretic properties
gives an impression of its conceptual depth and its many unexpected relations
to other mathematical and physical concepts.
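In lattice terms this classification can be summarized schematically (a sketch paraphrasing the results just cited; $L$ denotes the integral charge lattice of the maximal extension):
\[
sectors(\mathcal{A}_{L})\longleftrightarrow L^{\ast}/L,\qquad L^{\ast}=L\;\Longrightarrow\;\text{only the vacuum sector}
\]
so selfduality of the lattice is exactly the statement that the superselection structure trivializes.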
But conceptual subtleties can also lead to pertinacious misunderstandings. The
most prominent arose from the picture of an "embedding" of this n-component
current theory into its n-component inner symmetry space called
\textit{target} space of string theory. The idea was to convert the
n-component inner symmetry space of the charge-carrying \textit{sigma-model
}$\Psi^{(\vec{q})}(x)$\textit{ fields} into a "target" space which carries a
non-compact group representation, specifically a positive energy
representation of the Poincar\'{e} group.
It turns out that the infinitely many oscillators of a (supersymmetrically
extended) current theory permit precisely one solution in the form of the
10-parametric highly reducible \textit{superstring} representation. But in
order to achieve this one has to organize these oscillator degrees of freedom
in a different way from that required by the modular localization on a chiral
lightlike line. In fact already the spectrum of the multicomponent
superselected chiral charge does not match the mass spectrum of the
superstring representation (different zero modes) so that the chiral "source"
theory and the "target representation" of the Poincar\'{e} group live in different
Hilbert spaces. The localization of a positive energy representation is a fait
accompli and it is point- and not stringlike\footnote{There are no
string-localized infinite spin representation components in the reducible
superstring representation.}. The relation of the chiral model and the
Poincar\'{e} group acting in its putative target space supports neither an
embedding of a lower dimensional QFT into a higher dimensional one (which
according to the holistic aspects of different modular localizations is
\textit{never possible in QFT}) nor a stringlike localization.
In fact in the sense of localization in Minkowski spacetime all the nonzero
oscillator degrees of freedom form a quantum mechanical oscillator chain in
the inner space "on top of a localization point". One may call this a string
(at the risk of creating confusion), except that in QM localization is not a
spacetime-related intrinsic property but depends on what the working physicist
wants to make out of it. Given the 50 year domination of string theory and the
foundational role of causal localization in QFT, these still ongoing
misunderstandings certainly represent the deepest schism which ever occurred
within particle physics.
The source-target terminology incorrectly anticipated that the relation
between the chiral conformal QFT and ST can be understood as an embedding of
the chiral source into the n-component target. Whereas such embeddings (and
their Kaluza-Klein inversions) are perfectly possible in classical field
theories and even in QM, this is not possible in QFT. The reason is that
quantum causal localization is "holistic" whereas localization in classical
field theory or quantum mechanics is not. The holistic organization of
oscillators for implementing the localization of the currents and their
associated sigma model fields on the lightlike line is simply not the same as
that which comes with a positive energy representation and the related
localization on the target.
The only memory which the target space use of the oscillators has about the
chiral current model is that the mass spectrum from the superstring
representation is (up to a scale-setting numerical factor) equal to the
anomalous dimension spectrum of the conformal composites which appear in the
converging global operator expansion of the operator product of two sigma
model fields. This is connected to Mack's observation \cite{Mack1}\cite{Mack2}
that the dual model masses are obtained from the Mellin transform of global
operator product expansions in conformal QFT. Would-be particle spectra in
string theory and the dual model have their origin in anomalous dimension spectra
of conformal QFT, and the conformal origin of dual model crossing has no
relation with the crossing in particle physics (section 5). It is not
comprehensible why the rarity of finding representations of the Poincar\'{e}
group in a target space construction of a chiral theory (the superstring and
its M-theoretic modifications are the only representations) should be given a
fundamental significance, which led string theorists to claim that we are
living in a dimensionally reduced 10 dimensional spacetime. The reductionist
trend in QFT supports the idea that a foundational theory should not admit
alternative realizations of the same underlying principle, but it does not
suggest that the rarity of a construction should attach a foundational
significance to it.
One can learn a lot from corrections of incorrect ideas about the meaning and
consequences of causal localization in ST. More extensive presentations of
these misunderstandings about localization can be found in \cite{response}.
There are good reasons to also maintain a skeptical attitude with respect to all
ideas which originated from string theory, even if afterwards they were
presented within the setting of QFT, as in the case of the AdS-CFT
correspondence. In relations between QFTs in different dimensions the touchy
point is the degrees of freedom issue and its implications for causal
localization; this has simply no counterpart in classical field theory nor in
quasiclassical approximations. A QFT may fulfill local commutativity
(Einstein causality) but violate the causal completion property as a result of
having more degrees of freedom in the causal completion $\mathcal{A}(\mathcal{O}^{\prime\prime})$ than there were in the original region
$\mathcal{A}(\mathcal{O})\varsubsetneq\mathcal{A}(\mathcal{O}^{\prime\prime}).$ This
contradicts our ideas of causal propagation; "poltergeist" degrees of freedom
which enter the region of causal determination from nowhere should not occur
in a physically acceptable theory. Their absence is a property of relativistic
propagation of classical Euler-Lagrange equations and enters QFT formally
through Lagrangian quantization; but in a general setting of QFT it has to
be separately added (the time-slice property in \cite{H-S}).
A typical example of a generating field which violates this causality
requirement as a consequence of too many phase space degrees of freedom is
the previously mentioned appropriately chosen generalized free field (a free field
with continuous mass distribution). But precisely such fields appear when one
computes the conformal field which according to the AdS-CFT correspondence
results from an AdS free field \cite{Du-Re}. The algebraic derivation of the
AdS-CFT correspondence shows that this phenomenon is intrinsic to this kind of
correspondence. It has its counterpart in the opposite direction
$CFT\rightarrow AdS$, where one finds that there are not enough degrees of
freedom in order to support a physical causal AdS theory; as a result only
non-compact localized algebras such as $\mathcal{A}(W)$ are nontrivial, whereas
double cone algebras remain trivial, $\mathcal{A}(\mathcal{C})=\left\{
\mathbb{C}\mathbf{1}\right\} .$ The preservation of phase space degrees of
freedom is closely related to the shared spacetime group symmetry. This
physical shortcoming in no way has any influence on the mathematical existence
of both sides.
These are verifiable structural facts; they do not permit any exception, just
like TCP or spin\&statistics. The AdS-CFT Maldacena conjecture, to the extent
that it places a physically viable theory on both sides of the correspondence,
contradicts these facts. Even worse, results which once were known at times
when the particle theory community was much smaller seem to have been
irrevocably lost. No wonder that incorrect ideas about embeddings of QFTs and
dimensional reduction of extra dimensions enjoy a widespread popularity. In
most publications the awareness is missing that such problems cannot be addressed by
manipulating Lagrangians but need structural knowledge about interacting
fields and their correlations. Often these misunderstandings arise from
quasiclassical approximations, thus overlooking the fact that such
approximations unfortunately do not support modular localization and destroy
quantum degrees of freedom properties. This is especially evident in the
notion of "branes". Their quasiclassical constructions do not show what really
happens namely that \textit{all the degrees of freedom which were contained in
the original physical QFT are compressed into the brane} which, as a result of
overpopulation, becomes unphysical \cite{Mack2}.
One setting in which the relation between localization and the cardinality of
phase space degrees of freedom has been used to prove the existence of certain
d=1+1 integrable models is Lechner's work \cite{Lech2}, in particular his
theorem about the nontriviality of double cone intersections of wedge algebras,
based on the \textit{modular nuclearity property} of the degrees of freedom
resulting from the Z-F algebra structure of the wedge generators.
Acknowledgment: I am indebted to Jens Mund and Jakob Yngvason for innumerable
discussions on various topics which entered this work. Special thanks go to
Detlev Buchholz for a critical reading of section 5 which led to its reformulation.
\section{Introduction}
Witten's twistor string theory proposal~\cite{Witten:2003nn}
launched a series of developments
which have greatly expanded our understanding of the mathematical
structure of scattering amplitudes over the past
several years, particularly in maximally supersymmetric
Yang-Mills theory (SYM).
The most computationally useful technology to have emerged
from subsequent developments is the
Britto-Cachazo-Feng-Witten (BCFW) on-shell
recursion relation~\cite{Britto:2004ap,Britto:2005fq}, the discovery
of which initiated a vast new industry for the computation
of amplitudes.
Building
on~\cite{Hodges},
two recent papers~\cite{Mason:2009sa,ArkaniHamed:2009si}
have paved the way for a return to twistor
space by showing that the BCFW recursion has a natural
formulation there.
Here we bring this set of developments full circle by demonstrating
a beautiful connection between
the original twistor string proposal and
the dual formulation for the $S$-matrix of SYM recently proposed
by Arkani-Hamed et~al.~\cite{ArkaniHamed:2009dn}.
In particular we show a concrete
relation between the former and the BCFW representation of
amplitudes.
Our specific focus is on the connected
prescription~\cite{Roiban:2004yf} due to Roiban and the authors
(see also~\cite{Roiban:2004vt,Roiban:2004ka,
Spradlin:2005hi,Vergu:2006np}),
a fascinating but mysterious formula which has been
conjectured to encode
the entire tree-level $S$-matrix of SYM:
\begin{equation}
\label{eq:Tformula}
{\cal T}_{n,k}({\cal Z}) = \int [d {\cal P}]_{k-1}
d^n \sigma \prod_{i=1}^n \frac{\delta^{3|4}({\cal Z}_i - {\cal P}(\sigma_i))}
{\sigma_i - \sigma_{i+1}}.
\end{equation}
Here ${\cal P}(\sigma)$ denotes a ${\mathbb{P}}^{3|4}$-valued polynomial
of degree $k-1$ in $\sigma$ and $[d{\cal P}]_{k-1}$ is the natural
measure on the space of such polynomials.
We review further details shortly but pause to note that
this formula simply expresses the content of
Witten's twistor string theory: the N${}^{k-2}$MHV superamplitude
is computed as the integral of an
open string current algebra correlator over the moduli space of
degree $k-1$ curves in supertwistor space ${\mathbb{P}}^{3|4}$.
The formula~(\ref{eq:Tformula})
manifests several
properties which scattering amplitudes must possess, including
conformal invariance and cyclic
symmetry of the superamplitude, both of which are hidden
in other representations such as BCFW. It is also relatively easy
to show that it possesses the correct soft and collinear-particle
singularities, as well as (surprisingly) parity
invariance~\cite{Roiban:2004yf,Witten:2004cp}.
Despite these conceptual strengths the connected prescription has
received relatively little attention
over the past five years because
it has resisted attempts to relate it directly to the more
computationally useful BCFW recursion relation.
Here we remedy this situation by showing for the first time
a direct and beautiful relation between the connected
prescription~(\ref{eq:Tformula}) and the BCFW recursion.
Specifically we demonstrate explicitly
for $n=6,7$
(and expect a similar though more intricate story for general $n$)
that different choices of integration contour
in~(\ref{eq:Tformula})
compute different, but equivalent, representations of tree-level
amplitudes~\footnote{It has been argued
in~\cite{Gukov:2004ei} that the connected prescription
can also be related to the CSW representation~\cite{Cachazo:2004kj}
by a contour deformation in the moduli space of curves.
}. The privileged contour singled out by the
delta-functions appearing in~(\ref{eq:Tformula}) computes the
connected prescription representation in which the $n$-particle
N${}^{k-2}$MHV amplitude is expressed as a sum of residues of the integrand
over
the roots of a
polynomial of degree $\genfrac{<}{>}{0pt}{}{n-3}{k-2}$
(where $\genfrac{<}{>}{0pt}{}{a}{b}$ are Eulerian numbers).
Different representations of tree-level amplitudes, including
BCFW representations as well as intermediate prescriptions
similar to those of~\cite{Gukov:2004ei,Bena:2004ry}, are all
apparently encoded in various residues of the integrand ${\cal T}_{n,k}$ and
are computed by choosing various appropriate contours.
The equivalence of different representations follows from the global
residue theorem, a multidimensional analogue of Cauchy's theorem.
The integrand ${\cal T}$
has many residues in common with
\begin{equation}
\label{eq:Lformula}
{\cal L}_{n,k}({\cal W}) = \int
[dC]_{k \times n}
\prod_{i=1}^n
\frac{\delta^{4|4}(C_{\alpha i} {\cal W}_i)}{(i,i+1,\ldots,i+k-1)}
\end{equation}
recently written down by
Arkani-Hamed et~al.~\cite{ArkaniHamed:2009dn}.
Here $[dC]_{k \times n}$ is the measure on the space of $k \times n$
matrices modulo left-multiplication by $GL(k)$ and
$(m_1,\cdots,m_k)$ denotes
the minor obtained from $C$ by keeping only columns $m_1,\ldots,m_k$.
Residues of both ${\cal T}$ and ${\cal L}$ compute BCFW representations of
tree amplitudes. In addition,
${\cal T}$ also computes various other tree-level
representations while ${\cal L}$ evidently computes parity-conjugate
P(BCFW) representations at tree-level as well as leading singularities
of loop amplitudes.
It is natural to wonder whether there exists some
richer object
${\cal D}$ (for ``dual'') which contains information
about various connected and disconnected representations of amplitudes
at tree level and at all loops.
This could help shed further light on twistor string theory at the loop level.
It is not yet known which contour computes which object from the integrand
${\cal L}$.
In contrast, as mentioned above, the connected prescription ${\cal T}$
comes equipped with
a certain privileged contour which calculates the tree
amplitude. Various other contours which compute different
representations of the same amplitude
can be easily determined by applying
the global residue theorem.
We hope that a better understanding of the relation between ${\cal L}$
and ${\cal T}$ may allow us to transcribe information about the privileged
contour from the latter to the former.
\ifpreprint
In section 2 we review the connected prescription for computing
scattering amplitudes and derive its link representation by Fourier
transforming it to mixed ${\cal Z}$, ${\cal W}$ variables.
In section 3 we demonstrate the precise relation between the connected
prescription, BCFW and intermediate representations of all
six- and seven-particle
amplitudes.
\fi
\section{Linking The Connected Prescription}
Let us begin by reviewing some details of the connected
prescription formula~(\ref{eq:Tformula}) for the color-stripped $n$-particle
N${}^{k-2}$MHV scattering amplitude.
The $4|4$ component homogeneous coordinates for the $i$-th
particle in ${\mathbb{P}}^{3|4}$ are
${\cal Z}_i = (\lambda_i^\alpha, \mu_i^{\dot{\alpha}}, \eta_i^A)$ with
$\alpha,\dot{\alpha} = 1,2$ and $A =1,2,3,4$.
In split signature $--++$ the spinor helicity variables
$\lambda_i^\alpha, \widetilde{\lambda}_i^{\dot{\alpha}}$ can be
taken as independent real variables and the twistor transform realized
in the naive way as a Fourier transform from
$\widetilde{\lambda}_i^{\dot{\alpha}}$ to $\mu_i^{\dot{\alpha}}$.
As emphasized in~\cite{Roiban:2004yf}
(see also~\cite{Vergu:2006np})
the integral~(\ref{eq:Tformula}) must be
interpreted as a contour integral in a multidimensional
complex space. The delta functions specify
the contour of integration according to the usual rule
\begin{equation}
\int d^m z \ h(\vec{z}) \prod_{i=1}^m \delta(f_i(\vec{z}))
= \sum_{\vec{z}_*}
h(\vec{z}_*)
\left[ \det \frac{\partial f_i}{\partial z_j} \right]^{-1}_{\vec{z}=\vec{z}_*}
\end{equation}
with the sum taken over the set of $\vec{z}_*$ satisfying
$f_1(\vec{z}_*) = \cdots = f_m(\vec{z}_*) = 0$.
In practice the calculation of any $n$-particle
N${}^{k-2}$MHV amplitude therefore reduces to the problem
of solving certain polynomial equations which appear to have
$\genfrac{<}{>}{0pt}{}{n-3}{k-2}$ roots in general.
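For orientation, $\genfrac{<}{>}{0pt}{}{3}{1} = 4$ and
$\genfrac{<}{>}{0pt}{}{4}{1} = 11$, which match respectively the quartic and
degree 11 polynomials governing the six- and seven-particle NMHV ($k=3$)
amplitudes considered below.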
To write the connected formula slightly more explicitly we
first express the delta functions on ${\mathbb{P}}^{3|4}$
in terms of homogeneous coordinates via the contour
integral
\begin{equation}
\delta^{3|4}({\cal Z} - {\cal Z}') =
\int \frac{d\xi}{\xi} \delta^{4|4}({\cal Z} - \xi
{\cal Z}').
\end{equation}
Next we parameterize the degree $k-1$ polynomial
${\cal P}$ in terms of its $k$ ${\mathbb{C}}^{4|4}$-valued
supercoefficients
${\cal A}_d$ as
\begin{equation}
{\cal P}(\sigma) = \sum_{d=0}^{k-1} {\cal A}_d \sigma^d.
\end{equation}
Using these ingredients~(\ref{eq:Tformula}) may be expressed as
\begin{equation}
\label{eq:integralone}
{\cal A}({\cal Z}) = \int \frac{d^{4k|4k} {\cal A}\, d^n \sigma\,
d^n \xi}{{\rm vol}\,GL(2)}
\prod_{i=1}^n \frac{ \delta^{4|4}({\cal Z}_i -
\xi_i {\cal P}(\sigma_i))}{\xi_i(\sigma_i-\sigma_{i+1})},
\end{equation}
where we have indicated that the integrand and measure
are invariant under a GL(2) acting as M\"obius
transformations of the $\sigma_i$ combined with a simultaneous
compensating reparameterization of the curve ${\cal P}(\sigma)$.
This symmetry must be gauge-fixed in the usual way.
Motivated by~\cite{ArkaniHamed:2009si}
we now consider expressing the connected
prescription~(\ref{eq:Tformula}) in a mixed representation where
some of the particles are specified in terms of the ${\cal Z}$ variables
as above while others are specified in terms of
the variables ${\cal W} = (\widetilde{\mu}^{\alpha},
\widetilde{\lambda}^{\dot{\alpha}}, \widetilde{\eta}_{A})$ related by Fourier transform
\begin{equation}
{\cal F}({\cal W}) = \int d^{4|4} {\cal Z} \ F({\cal Z})
\,e^{i {\cal W} \cdot {\cal Z}},
\end{equation}
where
${\cal W} \cdot {\cal Z} =\widetilde{\mu}\cdot \lambda - \mu \cdot\widetilde{\lambda} + \eta \cdot \widetilde{\eta}$.
A particularly convenient choice for the N${}^{k-2}$MHV amplitude
is to leave precisely $k$ particles in terms of ${\cal Z}$
and transform the rest
to ${\cal W}$. This replaces the $4n|4n$ delta-functions
in~(\ref{eq:integralone}) with
\begin{equation}
\prod_i \exp \left( i \xi_i {\cal W}_i \cdot {\cal P}(\sigma_i) \right)
\prod_J \delta^{4|4}({\cal Z}_J - \xi_J {\cal P}(\sigma_J)).
\end{equation}
Here and in all that follows it is implicit that sums or products over $i$
run over the subset of the $n$ particles expressed in the ${\cal W}$
variables while sums or products over $J$ run over the particles
expressed in terms of ${\cal Z}$'s.
The utility of our choice is that there are now precisely as many
delta functions as supermoduli ${\cal A}$, which moreover can be
integrated out trivially since they appear
linearly inside delta functions.
This operation sets
\begin{equation}
{\cal P}(\sigma) = \sum_J \frac{{\cal Z}_J}{\xi_J} \prod_{K \ne J}
\frac{\sigma_K - \sigma}{\sigma_K - \sigma_J}
\end{equation}
which is easily seen to satisfy ${\cal P}(\sigma_J) = {\cal Z}_J/\xi_J$.
The resulting expression for the integral may be cleaned up
with the help of the change of variables
\begin{equation}
x_i = \xi_i \prod_K (\sigma_K - \sigma_i), \qquad
x_J^{-1} = \xi_J \prod_{K \ne J} (\sigma_K - \sigma_J).
\end{equation}
which (ignoring for the moment overall signs)
transforms~(\ref{eq:integralone})
into
an integral which can be put
into the form of a link representation
\begin{equation}
\label{eq:link}
{\cal A}({\cal W}_i, {\cal Z}_J) = \int dc_{iJ}\ U(c_{iJ})
\,e^{i c_{iJ} {\cal W}_i \cdot {\cal Z}_J}
\end{equation}
(as introduced in~\cite{ArkaniHamed:2009si}) with the integrand
given by
\begin{equation}
\label{eq:Udef}
U(c_{iJ}) = \int \prod_{a=1}^n
\frac{d\sigma_a\,dx_a}{x_a(\sigma_a - \sigma_{a+1})} \prod_{i,J}
\delta\left( c_{iJ} - \frac{x_i x_J}{\sigma_J - \sigma_i}\right).
\end{equation}
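As a quick check of this change of variables (up to the overall signs noted
above), substituting the solution for ${\cal P}(\sigma)$ into the exponential
factors gives
\begin{equation}
\xi_i {\cal W}_i \cdot {\cal P}(\sigma_i)
= \sum_J \frac{\xi_i}{\xi_J} \prod_{K \ne J}
\frac{\sigma_K - \sigma_i}{\sigma_K - \sigma_J}\,
{\cal W}_i \cdot {\cal Z}_J
= \sum_J \frac{x_i x_J}{\sigma_J - \sigma_i}\,
{\cal W}_i \cdot {\cal Z}_J,
\end{equation}
using $\xi_i \prod_{K \ne J}(\sigma_K - \sigma_i) = x_i/(\sigma_J - \sigma_i)$
and $x_J = \big[\xi_J \prod_{K \ne J}(\sigma_K - \sigma_J)\big]^{-1}$, which is
precisely the combination of variables fixed by the delta functions
in~(\ref{eq:Udef}).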
Note that this expression still requires $GL(2)$
gauge fixing.
Usually this is accomplished by freezing four variables
$\sigma_1,\sigma_2,\sigma_3,x_1$
to arbitrary values with the Jacobian
\begin{equation}
\int d\sigma_1\,d\sigma_2\,d\sigma_3\,dx_1 =
x_1 (\sigma_1-\sigma_2)(\sigma_2-\sigma_3)(\sigma_3-\sigma_1).
\end{equation}
Consequently in~(\ref{eq:Udef}) there are effectively only $2n-4$ integration
variables and $k (n-k)$ delta functions, so that after integrating
out the $x$'s and $\sigma$'s there remain in $U$ a net $(k-2) (n-k-2)$
delta functions.
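As a check of this counting in the simplest nontrivial case $n=6$, $k=3$:
there are $k(n-k)=9$ link variables and $2n-4=8$ effective integration
variables, leaving $(k-2)(n-k-2)=1$ net delta function, in agreement with the
single factor $\delta(S_{135:246})$ found in the six-point example below.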
As emphasized in~\cite{ArkaniHamed:2009si}
an important feature of the link representation is that
returning physical space is simple because the Fourier transforms
$\mu_i^{\dot{\alpha}} \to \widetilde{\lambda}_i^{\dot{\alpha}}$,
$\widetilde{\mu}_i^\alpha \to \lambda_i^\alpha$ turn the exponential
factors
in~(\ref{eq:link}) into
\begin{equation}
\label{eq:physdelta}
\prod_i \delta^2 (\lambda_i^\alpha - c_{iJ} \lambda_J^\alpha)
\prod_J \delta^2 (\widetilde{\lambda}_J^{\dot{\alpha}} + c_{iJ}
\widetilde{\lambda}_i^{\dot{\alpha}}).
\end{equation}
For given kinematics $(\lambda_i^\alpha,
\widetilde{\lambda}_i^{\dot{\alpha}})$ these equations
fix the $k(n-k)$ $c_{iJ}$ as linear functions of $(k-2)(n-k-2)$
remaining free parameters denoted $\tau_\gamma$.
Finally we obtain the physical space
amplitude in terms of $U$ as
\begin{equation}
\label{eq:pstransform}
{\cal A}(\lambda,\widetilde{\lambda})
=
J\, \delta^4( {\textstyle{\sum}} p_i )
\int d^{(k-2)(n-k-2)} \tau\ U(c_{iJ}(\tau_\gamma)),
\end{equation}
where $J$ is the Jacobian from integrating out~(\ref{eq:physdelta}).
We will always implicitly choose for simplicity a parameterization
of $c_{iJ}(\tau_\gamma)$ for which $J=1$.
Before proceeding let us again emphasize
that each $c_{iJ}(\tau_\gamma)$ is linear in the $\tau$'s.
\section{Examples}
For the trivial case of MHV amplitudes ($k=2$) the remaining integrations
are easily carried out, leading to
\begin{equation}
U^{--+\cdots+} =
\frac{1}{c_{31} c_{n2}} \prod_{i=3}^{n-1}
\frac{1}{c_{i,i+1:1,2}},
\end{equation}
in terms of
$c_{ij:KL} = c_{iK} c_{jL} - c_{iL} c_{jK}$.
The $\overline{\rm MHV}$ case $k=n-2$ yields the same result with
$c_{ab} \to c_{ba}$. When transformed to physical space
using~(\ref{eq:pstransform})
these yield respectively the Parke-Taylor formula and its conjugate.
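For reference (we quote the standard form, stripped of the overall
momentum-conserving delta function), with the negative-helicity gluons on legs
$1$ and $2$ the former is
\begin{equation}
A^{--+\cdots+} = \frac{\bra{1}{2}^4}{\bra{1}{2}\bra{2}{3}\cdots\bra{n}{1}}.
\end{equation}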
\subsection{6-Point Amplitudes}
Next we consider
the six-particle alternating helicity amplitude,
for which we find from~(\ref{eq:Udef})
the representation
\begin{equation}
\label{eq:uuu}
U^{+-+-+-} = \frac{1}{c_{14} c_{36} c_{52}} \delta(S_{135:246})
\end{equation}
where $S$ refers to the sextic polynomial
\ifpreprint
\begin{equation}
S_{ijk:lmn} =
c_{im} c_{jm} c_{kl} c_{kn} c_{ij:ln}
- c_{in} c_{jn} c_{kl} c_{km} c_{ij:lm}
- c_{il} c_{jl} c_{km} c_{kn} c_{ij:mn}.
\end{equation}
\else
\begin{multline}
S_{ijk:lmn} =
c_{im} c_{jm} c_{kl} c_{kn} c_{ij:ln}
\\
- c_{in} c_{jn} c_{kl} c_{km} c_{ij:lm}
- c_{il} c_{jl} c_{km} c_{kn} c_{ij:mn}.
\end{multline}
\fi
In this example the
appearance of $\delta(S_{135:246})$ can be understood as follows:
we are trying to express nine variables $c_{iJ}$ in terms of
eight variables (the $x$'s and $z$'s) by solving the delta-function
equations
\begin{equation}
c_{iJ} = \frac{x_i x_J}{\sigma_J - \sigma_i}.
\end{equation}
A solution to this overconstrained set of equations for the $c_{iJ}$
exists if and only
if the sextic $S_{135:246}$ vanishes.
{}From~(\ref{eq:uuu}) we arrive at the expression
\begin{equation}
\label{eq:six}
A^{+-+-+-} = \int d\tau\
\frac{1}{c_{14} c_{36} c_{52}} \delta(S_{135:246})
\end{equation}
for the physical space amplitude.
In this case $S_{135:246}$ is quartic in the single $\tau$ parameter.
By choosing numerical values for the external kinematics and summing
over the four roots of $S_{135:246}$ one can verify that~(\ref{eq:six})
reproduces the correct amplitude.
Now consider more generally the object
\begin{equation}
\label{eq:integrand}
\frac{1}{c_{14} c_{36} c_{52}} \frac{1}{S_{135:246}}
\end{equation}
as a function of $\tau$. The contour integral of this object around
the four zeroes of $S_{135:246}$ evidently computes the alternating
helicity six-particle amplitude. But~(\ref{eq:integrand})
has three other poles located at the vanishing of
$c_{14}$, $c_{36}$
or $c_{52}$. By Cauchy's theorem
we know that the sum of these three residues
computes minus the amplitude,
\begin{equation}
A^{+-+-+-} = - \int d\tau \frac{1}{S_{135:246}} \delta(c_{14} c_{36} c_{52}).
\end{equation}
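This is justified because the integrand~(\ref{eq:integrand}) has no pole at
infinity: the product $c_{14} c_{36} c_{52}$ has degree 3 and $S_{135:246}$
degree 4 in $\tau$, so the integrand falls off like $\tau^{-7}$ at large
$\tau$ and the sum over all seven of its residues vanishes.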
Since the $c_{iJ}$ are linear in $\tau$ it is simple to calculate the
corresponding residues analytically, and one obtains
\ifpreprint
\begin{equation}
\frac{\bra{1}{3}^4 \ket{4}{6}^4}{\bra{1}{2} \bra{2}{3} \ket{4}{5} \ket{5}{6} s_{123}
\langle 6 | 5 + 4|3 ] \langle 4|5 + 6| 1]}
+ (i \to i + 2) + (i \to i + 4)
\end{equation}
\else
\begin{multline}
\frac{\bra{1}{3}^4 \ket{4}{6}^4}{\bra{1}{2} \bra{2}{3} \ket{4}{5} \ket{5}{6} s_{123}
\langle 6 | 5 + 4|3 ] \langle 4|5 + 6| 1]}
\\
+ (i \to i + 2) + (i \to i + 4)
\end{multline}
\fi
which is the BCFW representation for the amplitude!
Analysis of the other two independent six-particle helicity configurations
proceeds along the same lines with link representations obtained
from~(\ref{eq:Udef}):
\begin{eqnarray}
U^{+++---} &=& \frac{c_{25}}{
c_{12:45} c_{23:56}
} \delta(S_{123:456}),
\\
U^{++-+--} &=& \frac{c_{16}}{
c_{13} c_{46} c_{12:56}
} \delta(S_{124:356}).
\end{eqnarray}
In each case the connected presentation expresses the amplitude as
a sum over the four roots of the quartic
$S_{ijk:lmn}$ in the $\tau$-plane, which
a simple application of Cauchy's theorem relates to
a sum over simple
linear roots which compute the BCFW representation of the amplitude.
\subsection{7-Point Amplitudes}
For the seven-particle split helicity amplitude we find
\begin{equation}
\label{eq:sevenlink}
U^{++++---} =
\frac{c_{25} c_{26} c_{36} c_{37}}{c_{12:56} c_{34:67}}
\delta(S_{123:567}) \delta(S_{234:567}).
\end{equation}
There are now two $\tau$ variables, and the locus
where both of the delta functions vanish consists of 14 isolated
points in ${\mathbb{C}}^2$. The coordinates of these
points are determined by the vanishing of a polynomial which
is a product of one of degree 11 and three of degree 1.
The three linear roots do not contribute because the numerator
factors in~(\ref{eq:sevenlink}) vanish there.
Therefore~(\ref{eq:sevenlink}) represents the amplitude as a sum over the roots of
a degree 11 polynomial, as expected for the connected prescription
for $n=7$, $k=3$.
To proceed
we must use the multidimensional
analog of Cauchy's theorem known as the global residue theorem:
\begin{equation}
\label{eq:grt}
\oint_{f_1 = \cdots = f_n = 0} d^nz\ \frac{h(z)}{f_1(z) \cdots f_n(z)} = 0
\end{equation}
when $h(z)$ is a polynomial of degree
less than $\sum \deg f_i - (n + 1)$, so that it has no poles at finite $z$
and the integrand falls off sufficiently fast to
avoid a pole at infinity.
To apply~(\ref{eq:grt}) to~(\ref{eq:sevenlink})
we consider the integrand
\begin{equation}
\frac{c_{25} c_{26} c_{36} c_{37}}{c_{12:56} c_{34:67}}
\frac{1}{S_{123:567} S_{234:567}}.
\end{equation}
There are seven independent ways of grouping the terms in the denominator
into a product $f_1 f_2$.
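(Indeed, with four factors in the denominator there are $(2^4-2)/2 = 7$ ways
to distribute them into two nonempty groups.)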
The choice
\begin{equation}
f_1 = c_{12:56} S_{234:567}, \qquad
f_2 = c_{34:67} S_{123:567}
\end{equation}
is particularly nice: in this application of
the global residue theorem all 11 poles at the locus
$S_{123:567} = S_{234:567} = 0$ contribute as do the roots located at
\begin{eqnarray}
c_{12:56} = S_{123:567} &=& 0, \\
c_{34:67} = S_{234:567} &=& 0, \\
c_{12:56} = c_{34:67} &=& 0,
\end{eqnarray}
which amazingly turn out to each consist of a single linear root.
The global residue theorem expresses the connected representation of the
amplitude as (minus) the sum of these three linear roots, which a simple
calculation reveals as precisely the three terms contributing to the
BCFW representation of the amplitude.
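As a consistency check (our own counting): since each $c_{iJ}$ is linear in
the two $\tau$ variables, the numerator $c_{25} c_{26} c_{36} c_{37}$ has
degree 4, while $f_1 f_2$ has total degree $2+6+2+6=16$, so the falloff
condition of~(\ref{eq:grt}) is comfortably satisfied.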
Equally amazing is the choice
\begin{equation}
f_1 = S_{123:567}, \qquad
f_2 = c_{12:56} c_{34:67} S_{234:567}.
\end{equation}
This contour computes the sum of residues at 16 poles; 11 of those
are the connected prescription poles which we know compute the correct
physical amplitude, while the others consist of a single linear root
together with four quartic roots. Schematically then this global
residue theorem identity expresses
\begin{equation}
A^{++++---} = \sum {\rm 11~roots} = - \sum {\rm 4~roots} - {\rm 1~root}.
\end{equation}
We interpret the right-hand side of this equation as an `intermediate'
prescription~\cite{Gukov:2004ei,Bena:2004ry}, obtained by BCFW decomposing
$A^{++++---}$ once into the product of a 3-particle
amplitude with a split-helicity six-particle amplitude, and then computing
the latter via the connected prescription as a sum over four roots.
We end by tabulating link representations for the remaining
independent seven-particle helicity amplitudes
\begin{equation}
\begin{split}
U^{+++-+--} &=
\frac{c_{26} c_{27} c_{25:46}}{c_{12:46} c_{23:67}}
\delta(S_{125:467}) \delta(S_{235:467}),
\\
U^{++-++--} &=
\frac{c_{23} c_{56} c_{57} c_{25:36}}{c_{53} c_{12:36} c_{45:67}}
\delta(S_{125:367}) \delta(S_{245:367}),
\\
U^{++-+-+-} &=
\frac{c_{17} c_{43} c_{14:57}}{c_{47} c_{63} c_{12:57}}
\delta(S_{124:357}) \delta(S_{146:357}).
\end{split}
\end{equation}
As usual we interpret $\delta(u) = 1/u$ in the integrand
with the delta functions indicating the preferred contour
which computes the connected prescription representation of the
amplitude.
\section*{Acknowledgments}
We are grateful to N.~Arkani-Hamed and F.~Cachazo
for extensive discussions and
enormous encouragement and to C.~Vergu and C.~Wen for helpful comments.
This work was supported in part by the
Department of Energy under contract DE-FG02-91ER40688 Task J OJI (MS)
and Task A (AV), the National Science Foundation under grants
PHY-0638520 (MS), PECASE PHY-0643150 (AV) and ADVANCE 0548311 (AV).
\section{Introduction} \label{sec:intro}
In recent years, researchers have become increasingly interested in dynamic domination processes on graphs and {\it eternal domination} problems on graphs have been particularly well-studied (see the survey~\cite{survey}, for example). In the all-guards move model for eternal domination, a set of vertices are occupied by ``guards" and the vertices occupied by guards form a dominating set on a graph. At each step, an unoccupied vertex is attacked and then each guard may remain at their current vertex or move along an edge to a neighbouring vertex. The guards aim to occupy a dominating set that contains the attacked vertex and such a movement of guards is said to ``defend against an attack''. The {\it eternal domination number} of a graph, denoted $\gamma_{all}^\infty$ is the minimum number of guards required to defend against any sequence of attacks, where the subscript and superscript indicate that {\it all} guards can move in response to an attack and the sequence of attacks is infinite. Given the complexity of determining the eternal domination number of a graph for the all-guards move model, recent work such as~\cite{BKV,FMvB,GHHKM,HKM,LMSS,vBvB,finbowetal}, has focused primarily on bounding or determining the parameter for particular classes of graphs.
In this paper, we extend the notion of eternal domination to that of eternal $k$-domination in the most natural way: suppose at time $t=0$, the guards occupy a set of vertices that form a $k$-dominating set. At each time step $t>0$ an unoccupied vertex is attacked and every guard moves distance at most $k$ so that the guards occupy the vertices of a $k$-dominating set that contains the attacked vertex. We note that for $k=1$, the process is equivalent to the all-guards move model for eternal domination as described above. Hence, we focus on results for $k \geq 2$.
We present preliminary results in Section~\ref{sec:2} that provide general bounds, the complexity of the associated decision problem, and determine the eternal $k$-domination number exactly for some small classes of graphs. In Section~\ref{section_superstar}, we show that the eternal $k$-domination number of a graph is bounded above by the eternal $k$-domination number of a spanning tree of the graph, which motivates us then to focus on trees in Section~\ref{sec:trees}. The eternal domination number of a tree was characterized in~\cite{KlosterM} by using two reductions. We extend the concepts of these reductions to the eternal $k$-domination model, providing reductions that, informally speaking, ``trim branches" of trees in such a way that the change in the eternal $k$-domination number is controlled. However, for the eternal $k$-domination model such reductions are insufficient to characterize the eternal $k$-domination number of all trees and we state some resulting open problems. In Section~\ref{section_trees}, we provide an upper bound for the eternal $2$-domination number of a tree, which can be extended to an upper bound for the eternal $k$-domination number of a tree. We also determine exactly, the eternal $2$-domination number for perfect $m$-ary trees. Since this paper introduces the concept of eternal $k$-domination, we conclude with a series of open questions in Section~\ref{section_Conclusion}.
We conclude this section with some formal definitions. Recall, in a graph $G$, the open distance-$k$ neighbourhood of $x \in V(G)$ is $N_k(x) = \{ v \in V(G)~:~d(x,v)=k\}$, and the closed distance-$k$ neighbourhood of $x \in V(G)$ is $N_k[x] = \{ v \in V(G)~:~d(x,v)\leq k\}$.
\begin{definition}\label{defn:kdom} Let $G$ be a graph and $k\geq 1$ an integer. A set $D \subseteq V(G)$ is a {\bf $k$-dominating set} if every vertex of $V(G) \backslash D$ is at most distance $k$ from a vertex in $D$. The minimum cardinality of a $k$-dominating set in graph $G$ is the {\bf $k$-domination number}, denoted $\gamma_k(G)$.
\end{definition}
\begin{definition} Let $G$ be a graph. Let $\mathbb{D}_{k,q}(G)$ be the set of all $k$-dominating sets of $G$ which have cardinality $q$. Let $D,D' \in \mathbb{D}_{k,q}(G)$. We will say $D$ {\bf transforms} to $D'$ if $D=\{v_1,v_2,\dots,v_q\}$, $D'=\{u_1,u_2,\dots,u_q\}$, and $u_i \in N_k[v_i]$, the closed distance-$k$ neighbourhood, for all $i \in [q]$. If we consider $D$ as the placement of all the guards in a $k$-dominating set, then the set $D'$ is a permissible movement of all the guards after some attack.\end{definition}
\begin{definition} An {\bf eternal $k$-dominating family} of $G$ is a subset $\mathcal{E} \subseteq \mathbb{D}_{k,q}(G)$ for some $q$ so that for every $D \in \mathcal{E}$ and each possible attack $v \in V(G)$, there is a $k$-dominating set $D' \in \mathcal{E}$ so that $v \in D'$ and $D$ transforms to $D'$.
A set $D \in \mathbb{D}_{k,q}(G)$ is an {\bf eternal $k$-dominating set} if it is a member of some eternal $k$-dominating family. Eternal domination is a discrete time-process, so at each iteration of an ``attack'' the $k$-dominating set $D$ transforms into some other set $D'$ within the family.
The {\bf eternal $k$-domination number} of a graph $G$, denoted $\gamma_{all,k}^\infty(G)$, is the minimum $q$ for which an eternal $k$-dominating family of $G$ exists. We use this notation to indicate that all guards are allowed to move a distance of at most $k$.
\end{definition}
\section{Eternal $k$-domination on general graphs}\label{sec:2}
\subsection{Preliminary Results}\label{sec:prelim}
In this section, we relate the eternal $k$-domination number to known graph parameters in order to obtain bounds, as well as determine the eternal $k$-domination number for well-known families of graphs. We then determine the complexity of computing this number.
By definition, any eternal $k$-dominating set is also a $k$-dominating set, giving us the lower bound in Observation~\ref{eqn:upper}. However, we can also bound the eternal $k$-domination number of a graph by its $\lfloor k/2\rfloor$-domination number:
\begin{observation}\label{eqn:upper}For any graph $G$ and integer $k \geq 2$, $$\gamma_k(G)\leq \gamma_{all,k}^\infty(G) \leq \gamma_{\lfloor\frac{k}{2}\rfloor}(G).$$ \end{observation}
To demonstrate the upper bound, consider a minimum $\lfloor k/2 \rfloor$-dominating set $D = \{v_1,v_2,\dots,v_{\ell}\}$ on graph $G$, where $\ell = \gamma_{\lfloor k/2 \rfloor}(G)$. For each $j \in \{1,2,\dots,\ell \}$, the maximum distance between any two vertices in $N_{\lfloor k/2 \rfloor}[v_j]$ is at most $k$. A guard $g_j$ is placed on an arbitrary vertex of $N_{\lfloor k/2 \rfloor}[v_j]$ for each $j \in \{1,2,\dots,\ell \}$, and the guard $g_j$ will only ever occupy vertices of $N_{\lfloor k/2 \rfloor}[v_j]$. Then, given an attack at a vertex $x$ in $N_{\lfloor k/2 \rfloor}[v_j]$, guard $g_j$ can move to the attacked vertex and no other guard moves. We note that it is possible that the attacked vertex $x$ is within distance $\lfloor k/2 \rfloor$ of multiple vertices in $D$.
The bounds in Observation~\ref{eqn:upper} are tight. It is easy to see that if graph $G$ has a universal vertex, then $\gamma_{all,k}^\infty(G) = \gamma_k(G)=\gamma_{\lfloor\frac{k}{2}\rfloor}(G) = 1$. However, it is worth noting that the difference between $\gamma_k(G)$ and $\gamma_{\lfloor \frac{k}{2}\rfloor}(G)$ can be arbitrarily large. Consider $K_{1,n}$ where each edge is subdivided $k-1$ times and call this graph $S_{n,k}$. Then clearly $\gamma_k(S_{n,k})=1$, since the centre $k$-dominates every vertex, but $\gamma_{\lfloor k/2\rfloor}(S_{n,k})\geq n$, as each of the $n$ arms requires its own vertex in any $\lfloor k/2\rfloor$-dominating set.
Though the bounds of Observation~\ref{eqn:upper} are tight, it is important to note that for some graphs, such as cycles, $\gamma_{all,k}^\infty$ can be much smaller than $\gamma_{\lfloor k/2 \rfloor}$.
\begin{theorem}\label{obs:cycle}
For $n \geq 3$ and $k \geq 1$, \[\gamma_{all,k}^\infty(C_n) = \gamma_k(C_n) = \Big \lceil \frac{n}{2k+1}\Big \rceil\]
\end{theorem}
\begin{proof} Observe $C_n$ can be decomposed into $\lceil \frac{n}{2k+1}\rceil$ vertex-disjoint paths, each on at most $2k+1$ vertices. Since a center vertex of a path on at most $2k+1$ vertices will $k$-dominate the path, $\gamma_k(C_n) = \lceil \frac{n}{2k+1}\rceil$.
Next we show that this minimum $k$-dominating set is indeed a minimum eternal $k$-dominating set. Place guards on the vertices of the minimum $k$-dominating set described above. Suppose vertex $v$ is attacked and let $u$ be a vertex within distance $k$ of $v$ that contains a guard. The guard at $u$ moves distance $d(u,v)=x$ to occupy $v$ and all other guards move exactly distance $x$ in the same direction, so the guards again occupy a rotation of the original $k$-dominating set.\end{proof}
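For example, in $C_9$ with $k=2$ we have $\gamma_{all,2}^\infty(C_9) = \lceil 9/5 \rceil = 2$: guards on $v_2$ and $v_6$ form a $2$-dominating set, and after an attack at $v_0$ both guards shift two vertices in the same direction, arriving at $v_0$ and $v_4$, which again form a $2$-dominating set.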
As a consequence of Theorem~\ref{obs:cycle}, whenever a graph $G$ is Hamiltonian, we obtain the following upper bound, by simply considering the guards moving strictly along the Hamilton cycle.
\begin{corollary}\label{cor:Ham}
Let $G$ be a Hamiltonian graph on $n$ vertices. Then for $k \geq 1$, $\gamma_{all,k}^\infty(G) \leq \lceil \frac{n}{2k+1}\rceil$.
\end{corollary}
Although it is easy to see that $\gamma_k(P_n)=\lceil \frac{n}{2k+1}\rceil$, we next show the eternal $k$-domination number of a path is larger.
\begin{theorem}\label{thrm:paths} For $n \geq 1$ and $k \geq 1$, $\gamma_{all,k}^\infty(P_n)=\lceil \frac{n}{k+1} \rceil$.
\end{theorem}
\begin{proof} Let $V(P_n)=\{v_0,v_1,...,v_{n-1}\}$ where $v_i$ is adjacent to $v_{i+1}$ for $0 \leq i \leq n-2$. By partitioning the path into vertex-disjoint sub-paths, each of length at most $k$ and assigning one guard to each such sub-path, it is easy to see that $\lceil \frac{n}{k+1}\rceil$ guards will suffice to form an eternal $k$-dominating family on $P_n$, by using the reasoning following Observation~\ref{eqn:upper}.
Next we prove the lower bound. There must always be a guard within distance $k$ of $v_0$. Thus a guard is always required to be located on the sub-path $v_0, v_1,\ldots, v_k$. We partition the graph into sub-paths of length at most $k$: for $0\leq \ell \leq \lceil \frac{n}{k+1}\rceil - 1$, let $P_{\ell}$ be the sub-path induced by the vertices $v_{\ell(k+1)},\ldots, v_{\min\{(\ell+1)(k+1)-1,\,n-1\}}$.
For some fixed $j$, assume the placement of the guards on $P_n$ is such that each sub-path $P_i$ with $i<j$ always contains at least one guard; i.e. every eternal $k$-dominating set contains at least one vertex from $P_i$ for each $i<j$. Thus $P_j$ is the lowest-indexed sub-path that does not always contain a guard. Let time $t$ be the first time there is no guard in $P_j$ and assume an attack happens on this sub-path. No guard from $P_i$ with $i<j$ can move into $P_j$, since each lower-indexed sub-path must have a guard on it at all times. If there are no guards in $P_{j+1}$ that can move to the attacked vertex in $P_j$, then the guards do not form a $k$-dominating set.
Thus, suppose a guard in $P_{j+1}$ moves to the attacked vertex.
If the guard that moves from the sub-path $P_{j+1}$ to $P_j$ leaves the sub-path $P_{j+1}$ without a guard, a guard from $P_{j+2}$ must move onto $P_{j+1}$, and so on. If there is some sub-path $P_\ell$ such that the guard on that path cannot be replaced by a guard in $P_{\ell +1}$, then the guards do not form an eternal $k$-dominating set, as a subsequent attack can occur within this sub-path and not be defended. Otherwise, each guard in sub-path $P_{\ell+1}$ moves to sub-path $P_{\ell}$, eventually leaving $v_{n-1}$ unable to be defended should an attack occur there at time $t+1$. Thus at the end of each time step, there must be a guard in each $P_i$: so at least $\lceil \frac{n}{k+1} \rceil$ guards are required and the result follows.\end{proof}
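For instance, for $P_7$ and $k=2$ we obtain $\gamma_{all,2}^\infty(P_7)=\lceil 7/3\rceil = 3$, while $\gamma_2(P_7)=\lceil 7/5 \rceil = 2$, so the eternal and static parameters already differ on small paths.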
The previous results show that for some families of graphs, the eternal $k$-domination number grows linearly with the order of the graph. On the other end of the spectrum, we can easily characterize graphs with eternal $k$-domination number $1$: $\gamma_{all,k}^\infty(G)=1$ if and only if the diameter of graph $G$ is at most $k$.
An important question to ask when investigating a graph parameter is, how difficult is it to compute? To answer this, we will look at the relationship between the eternal $k$-domination number of a graph and its graph power. For a graph $G$, the $k^{\textrm{th}}$ power of the graph, $G^k$, is formed on the vertex set $V(G)$ by adding an edge $uv$ whenever $\textrm{dist}_G(u,v)\leq k$. Thus, if there exists a path in $G$ from $u$ to $v$ of length at most $k$, then $uv \in E(G^k)$.
\begin{theorem}\label{gpower}
If $G$ is a graph and $k \in \mathbb{N}$, then \[\gamma_{all,k}^\infty(G) = \gamma_{all,1}^{\infty}(G^k).\]
\end{theorem}
\begin{proof}
Let $S$ be an eternal $k$-dominating set of $G$ for which $|S|=\gamma_{all,k}^\infty(G)$. Suppose a sequence of attacks, $\{a_1,a_2,\ldots a_\ell\} \subseteq V(G)$ occur. For each guard $g_i$, let $\mathcal{G}_i~=~\{d_1,d_2,\ldots d_\ell\} \subseteq V(G)$ be the set of corresponding defending moves the guard makes, that is guard $g_i$ moves from $d_{j-1}$ to $d_j$ after attack $a_j$. We now consider the eternal 1-domination of $G^k$ by using corresponding moves of the eternal $k$-domination of $G$. Place the $\gamma_{all,k}^\infty(G)$ guards in $S$ on the vertices of $G^k$, since $V(G^k)=V(G)$. Suppose in $G^k$ the same sequence of attacks occur on vertices $\{a_1,a_2,\ldots a_\ell\}$.
For any guard $g_i$ and their sequence $\mathcal{G}_i$, moving from $d_{j-1}$ to $d_j$ in $G^k$ is permissible as these vertices have distance at most $k$ in $G$, and thus will be adjacent in $G^k$. Each guard $g_i$, can use the same sequence of moves and still guard $G^k$, thus $\gamma_{all,k}^\infty(G)\geq \gamma_{all,1}^{\infty}(G^k)$.
Similarly, let $S'$ be an eternal $1$-dominating set of $G^k$ for which $|S'|=\gamma_{all,1}^{\infty}(G^k)$. Suppose a sequence of attacks, $\{a_1,a_2,\ldots a_\ell\}$, occurs. For each guard $g_i$, we define $\mathcal{G}_i$ analogously as above. We then consider the eternal $k$-domination of $G$, using moves from the eternal $1$-domination in $G^k$. Place the $\gamma_{all,1}^{\infty}(G^k)$ guards in $S'$ on the vertices of $G$ and consider the eternal $k$-domination process, and again suppose in $G$ the vertices $\{a_1,a_2,\ldots a_\ell\}$ are attacked in that order. Any guard $g_i$ that moves from $d_{j-1}$ to $d_j$ after an attack $a_j$ in $G^k$ can also move from $d_{j-1}$ to $d_j$ in $G$ since these two vertices are adjacent in $G^k$ and thus are at distance at most $k$ in $G$. Thus, $\gamma_{all,k}^\infty(G)\leq \gamma_{all,1}^{\infty}(G^k)$, giving the desired result.
\end{proof}
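Theorem~\ref{gpower} and the preceding results can also be verified computationally on small graphs. The following sketch (written in Python with the networkx package; the function name and all implementation choices are ours, not taken from the literature) computes $\gamma_{all,k}^\infty$ by a greatest-fixed-point computation: starting from all $k$-dominating guard configurations of a given size, it repeatedly discards configurations from which some attack cannot be defended, and reports the smallest size for which a nonempty family survives.
\begin{verbatim}
import itertools
import networkx as nx

def eternal_k_domination_number(G, k):
    # Brute force; only feasible for very small graphs.
    nodes = list(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    within = lambda u, v: dist[u].get(v, float("inf")) <= k

    def k_dominates(D):
        return all(any(within(u, v) for v in D) for u in nodes)

    def transforms(D, E):
        # D transforms to E if the guards can be paired up so that
        # each moves distance at most k (checked by brute force).
        return any(all(within(d, e) for d, e in zip(D, p))
                   for p in itertools.permutations(E))

    for q in range(1, len(nodes) + 1):
        # Guard configurations are multisets, stored as sorted tuples.
        configs = {C for C in
                   itertools.combinations_with_replacement(nodes, q)
                   if k_dominates(C)}
        changed = True
        while changed:
            changed = False
            for D in list(configs):
                # D survives only if every attack on an unoccupied
                # vertex can be defended by moving to a surviving
                # configuration containing the attacked vertex.
                if any(not any(v in E and transforms(D, E)
                               for E in configs)
                       for v in nodes if v not in D):
                    configs.discard(D)
                    changed = True
        if configs:
            return q
\end{verbatim}
Under these assumptions, \texttt{eternal\_k\_domination\_number(nx.path\_graph(7), 2)} should return $3$, in agreement with Theorem~\ref{thrm:paths}.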
In \cite{chipcomplexity} and the subsequent errata \cite{chipwebsite}, it was shown that deciding if a set of vertices of a graph is an eternal dominating set is in EXP. Thus, taken with Theorem~\ref{gpower} we have the following complexity result.
\begin{corollary}\label{complexity}
Let $G$ be a graph of order $n$, $k$ a positive integer with $k\leq n$ and $S\subseteq V(G)$. Deciding if $S$ is an eternal $k$-dominating set for $G$ is in EXP.
\end{corollary}
\subsection{Using trees to bound $\gamma_{all,k}^\infty$}\label{section_superstar}
In this section, we provide insight into the vertices ``covered'' or ``guarded'' by a single guard or a pair of guards, and present bounds on the eternal $k$-domination number for arbitrary graphs based on a partitioning into vertex-disjoint trees.
A subgraph $H$ of a graph $G$ is a {\it retract} of $G$ if there is a homomorphism $f$ from $G$ to $H$ so that $f(x)=x$ for $x \in V(H)$. The map $f$ is called a {\it retraction} and we note that since this is an edge-preserving map to an induced subgraph, the distance between any two vertices does not increase in the image.
\begin{lemma}\label{lemma:subgraph}Let $H$ be a retract of graph $G$. Then $\gamma_{all,k}^\infty(H) \leq \gamma_{all,k}^\infty(G)$. \end{lemma}
\begin{proof} Let $H$ be a retract of graph $G$ and $f:G \rightarrow H$ be a retraction. We consider two parallel incidences of eternal $k$-domination: one on $G$ and one on $H$. The process on $H$ can be thought of as taking place on $G$, as $H$ is an induced subgraph of $G$. We will restrict the attacks in $G$ to vertices that are also in $H$. Initially, if there is a guard at vertex $v \in V(G)$, then we place a guard at vertex $f(v) \in V(H)$. If a guard in $G$ moves from $x$ to $y$ in response to an attack at vertex $z$ in $H$, we observe that guard may move from $f(x)$ to $f(y)$ in $H$ in response to the attack. Thus, $\gamma_{all,k}^\infty(H) \leq \gamma_{all,k}^\infty(G)$.\end{proof}
Below we consider another subgraph that will also prove useful in obtaining an upper bound for $\gamma_{all,k}^\infty(G)$ for an arbitrary graph $G$.
\begin{lemma}\label{Lem:removeedge}
Let $G$ be a graph, $k$ a positive integer and $e\in E(G)$, then \[\gamma_{all,k}^\infty(G-e)\geq \gamma_{all,k}^\infty(G)\] where $G-e$ is the subgraph of $G$ obtained by deleting edge $e$.
\end{lemma}
\begin{proof}
Let $G$ be a graph and $e\in E(G)$. Consider $G-e$, the graph $G$ with edge $e$ removed. Let $S$ be an eternal $k$-dominating set of $G-e$ of minimum cardinality and suppose a sequence of attacks, $\{a_1,a_2,\ldots a_\ell\}$, occurs. For each guard $g_i$, let $\mathcal{G}_i~=~\{d_1,d_2,\ldots d_\ell\}$ be the set of corresponding defending moves the guard makes; that is, after the attack $a_j$, the guard $g_i$ moves from $d_{j-1}$ to $d_j$.
Now place guards on the vertices of $S$ in $G$ and consider the same sequence of attacks $\{a_1,a_2,\ldots a_\ell\}$. Each guard can respond appropriately, moving to $d_j$ after attack $a_j$. Since every edge of $G-e$ is an edge of $G$, the result follows.\end{proof}
From repeated applications of Lemma~\ref{Lem:removeedge} we obtain the following.
\begin{theorem}\label{thm:span}
Let $G$ be a graph and $T$ a spanning tree of $G$, then
\[ \gamma_{all,k}^\infty(T)\geq \gamma_{all,k}^\infty(G).\]
\end{theorem}
Theorem~\ref{thm:span} suggests that understanding the eternal $k$-domination process on trees is important, as it provides an upper bound on the eternal $k$-domination number of a general graph. We next consider covering a graph $G$ with sub-trees with a particular structure, each of which can be guarded by a single guard. By cover, we mean that a guard is assigned to a particular sub-tree and they can respond to any sequence of attacks that occur within that particular sub-tree.
\begin{definition} A {\bf $k$-rooted tree} with $k \in \mathbb{Z}^+$, is a rooted tree where the eccentricity of the root is at most $k$.\end{definition}
\begin{definition} Given a graph $G$, we define a {\bf $k$-rooted tree decomposition} to be a partition of the vertices into sets $S_i$ for $1 \leq i \leq \ell$ such that $G[S_i]$ contains a spanning subgraph that is a $k$-rooted tree.
The {\bf $k$-rooted tree decomposition number} of a graph $G$, denoted $\mathfrak{T}_k(G)$, is the minimum number of sets $S_i$ over all possible decompositions.\end{definition}
An easy bound comes from partitioning graph $G$ into $\lfloor \frac{k}{2}\rfloor$-rooted trees as one guard can cover the vertices of a $\lfloor \frac{k}{2}\rfloor$-rooted tree.
\begin{corollary}
For any graph $G$ and $k\geq 2$
\[\gamma_{all,k}^\infty(G) \leq \mathfrak{T}_{\lfloor \frac{k}{2}\rfloor} (G). \]
\end{corollary}
For some graphs, a better bound can be achieved by partitioning graph $G$ into $k$-rooted trees and recognizing that $2$ guards can protect the vertices of each $k$-rooted tree.
\begin{proposition}\label{propkroot} If $T$ is a $k$-rooted tree for some $k \geq 2$, then $\gamma_{all,k}^\infty(T) \leq 2$.\end{proposition}
\begin{proof}Let $T$ be a $k$-rooted tree with root $r$. Initially place one guard at $r$ and the second guard at an arbitrary vertex $u$. After a vertex $v$ is attacked, the guard at $r$ moves to $v$ and the guard at $u$ moves to $r$. The resulting configuration again contains a guard on the root, so the guards can respond to attacks in this manner indefinitely.\end{proof}
\begin{corollary} For any graph $G$ and $k \geq 2$, $$\gamma_{all,k}^\infty(G) \leq \min\{ 2\cdot\mathfrak{T}_{k}(G), \mathfrak{T}_{\lfloor k/2 \rfloor} (G) \}.$$ \end{corollary}
In light of Theorem~\ref{thm:span} and the fact that we can partition a graph $G$ into $k$-rooted trees in order to find an upper bound for $\gamma_{all,k}^\infty(G)$, the next two sections will focus on trees.
\section{Eternal $k$-domination on trees}\label{sec:trees}
In this section, we consider the conditions under which the eternal $k$-domination number of a tree equals, or is one greater than, that of a sub-tree, with the aim of working towards determining $\gamma_{all,k}^\infty(T)$ for any tree $T$.
In~\cite{KlosterM}, the authors provide a linear-time algorithm for determining the eternal $1$-domination number of a tree. Their algorithm consists of repeatedly applying two reductions, {\bf R1} and {\bf R2}, which we restate here. \medskip
\noindent {\bf R1}: Let $x$ be a vertex of $T$ incident to at least two leaves and to exactly one vertex of degree at least two. Delete all leaves incident to $x$. \medskip
\noindent {\bf R2}: Let $x$ be a vertex of degree two in $T$ such that $x$ is adjacent to exactly one leaf, $y$. Delete both $x$ and $y$. \medskip
\noindent If $T'$ is the result of applying either {\bf R1} or {\bf R2} to tree $T$, then $\gamma_{all,1}^\infty(T')=\gamma_{all,1}^\infty(T)-1$~\cite{KlosterM}. With an aim to characterize the eternal $k$-domination number of trees for $k>1$, Propositions~\ref{propR1} and~\ref{propR2} generalize the reductions of~\cite{KlosterM} to arbitrary $k\geq 1$. Figure~\ref{fig:R1R2} (a) and (b) provide a visualization of the sub-trees described in Propositions~\ref{propR1} and~\ref{propR2}, respectively.
\begin{figure}[htbp]
\[ \includegraphics[width=0.75\textwidth]{fig3}\]
\caption{Examples illustrating the reductions of Proposition~\ref{propR1} and~\ref{propR2}.}
\label{fig:R1R2}
\end{figure}
\begin{proposition}\label{propR1}
Let $x$ and $y$ be neighbours in $T=(V,E)$ and let $T_x$ be the component of $T$ induced by the deletion of edge $xy$, that contains $x$. If every leaf in $T_x$ is within distance $k$ of $x$ and $diam(T_x)=2k$, then $$\gamma_{all,k}^\infty(T[V\backslash D]) = \gamma_{all,k}^\infty(T) -1$$ where $D = V(T_x)\backslash \{x\}$.\end{proposition}
\begin{proof} It is easy to see $\gamma_{all,k}^\infty(T) \leq \gamma_{all,k}^\infty(T[V\backslash D])+1$:
initially place $\gamma_{all,k}^\infty(T[V\backslash D])$ guards on an eternal $k$-dominating set of sub-tree $T[V\backslash D]$ and place an additional guard at $x$. Whenever a vertex of $D$ is attacked, a guard moves from $x$ to the attacked vertex and the guards in $T[V\backslash D]$ move as they would if $x$ was attacked in $T[V\backslash D]$. Whenever a vertex of $T[V\backslash D]$ is attacked, the guards currently occupying vertices of $T[V\backslash D]$ move to form an eternal $k$-dominating set on $T[V\backslash D]$ that contains the attacked vertex and the guard in $D$ moves to $x$ (it is possible that there is no guard in $D$, in which case there are two guards on $x$ and when the guards move in response to the attack, one of the two guards remains on $x$).
We next prove that $\gamma_{all,k}^\infty(T[V\backslash D]) < \gamma_{all,k}^\infty(T)$. Assume, for the sake of contradiction, that $\gamma_{all,k}^\infty(T[V\backslash D])=\gamma_{all,k}^\infty(T)$. Place the $\gamma_{all,k}^\infty(T[V\backslash D])$ guards on $T$. Let $\ell_1$ and $\ell_2$ be leaves of $D$ at distance $k$ from $x$ and distance $2k$ from each other. Note that a guard must be on $x$ to ensure that $\ell_1$ and $\ell_2$ are $k$-dominated. We will consider what the eternal $k$-domination process looks like on $T$ and on a copy of $T[V\backslash D]$.
First, any attack in $T[V\backslash D]$ corresponds to the guards moving as required, ensuring a guard is on $x$ after the attack is defended. In $T$ this placement of guards is the same, as vertices in $D$ do not require guards, as $x$ has a guard.
Assume a vertex on the path $x\ldots \ell_1$ is attacked. In $T$ a guard on $x$ must move to the attacked vertex (note: if another guard moves to the attack, it must pass through $x$, so we can assume this is the guard that moves). We then need to replace that guard, otherwise $\ell_2$ is not $k$-dominated.
On the tree $T[V\backslash D]$ this is equivalent to two guards moving onto $x$, since attacks in $D$ and guards on vertices of $D$ correspond to attacks and guards on $x$ in $T[V\backslash D]$.
So we have two situations to consider. The first is if in $T$ an attack occurs at a vertex of $D$. In $T[V\backslash D]$ it corresponds to $x$ having two guards, but one guard is superfluous (as it is not really in this sub-tree).
The second situation is if in $T$ an attack occurs at a vertex of $T[V\backslash D]$. Then a guard on $x$ moves within $T[V\backslash D]$ as required and either a guard moves from $D$ to $x$, or if $x$ had more than one guard, at least one of these guards do not move.
In both situations, the attacks lead to a response in $T[V\backslash D]$ that requires one fewer guard to defend that sub-tree. This means that any sequence of attacks in $T$ results in $T[V\backslash D]$ requiring fewer than $\gamma_{all,k}^\infty(T[V\backslash D])$ guards to eternally $k$-dominate, a contradiction, so $\gamma_{all,k}^\infty(T[V\backslash D]) = \gamma_{all,k}^\infty(T) -1$.\end{proof}
A {\it suspended $i$-end-path} in a graph $G$ is a path of length at least $i \geq 2$ such that at least one endpoint of the path is a leaf and all internal vertices of the path have degree exactly $2$. The next result generalizes the {\bf R2} reduction in \cite{KlosterM} for eternal $1$-domination. See Figure~\ref{fig:R1R2} (b).
\begin{proposition}\label{propR2}
Suppose $T=(V,E)$ contains a suspended $(k+1)$-end-path $P$ and label the vertices of $P$ as $x_0,x_1,\dots,x_{k+1}$ where $x_0$ is a leaf and $x_i$ is adjacent to $x_{i+1}$ for $0 \leq i \leq k$. Then $$\gamma_{all,k}^\infty(T[V\backslash \{x_0,\dots,x_k\}]) = \gamma_{all,k}^\infty(T) - 1.$$ \end{proposition}
\begin{proof}
Clearly $\gamma_{all,k}^\infty(T) \leq \gamma_{all,k}^\infty(T[V\backslash \{x_0,\dots,x_k\}]) +\gamma_{all,k}^\infty(T[\{x_0,\dots,x_k\}])$ and $\gamma_{all,k}^\infty(T[\{x_0,\dots,x_k\}]) =1$ since it is a path of length $k$. We next show that $ \gamma_{all,k}^\infty(T[V\backslash \{x_0,\dots,x_k\}])$ guards do not suffice to eternally $k$-dominate $T$.
First, we consider eternal $k$-domination on the graph $T[V\backslash \{x_0,\dots,x_k\}]$. Place the guards and consider a minimal finite sequence of attacks $\mathcal{A}=\{a_1,a_2,\dots,a_q\}$ that requires all $\gamma_{all,k}^\infty(T[V\backslash \{x_0,\dots,x_k\}])$ guards to defend this sequence of attacks. That is, with at least one fewer guard, this sequence of attacks is not able to be defended.
Now consider the graph $T$. Suppose $\gamma_{all,k}^\infty(T)=\gamma_{all,k}^\infty(T[V\backslash\{x_0,\dots,x_k\}])$ and place the guards on the vertices of an eternal $k$-dominating set of $T$; note that there is at least one guard on $P$. Consider the sequence of attacks $\{x_0,a_1,x_0,a_2,x_0,a_3,\dots,x_0,a_q\}$ on tree $T$. Then there is always a guard on $P$, and that guard is never able to defend a vertex in $T[V\backslash\{x_0,\dots,x_k\}]$; hence, there are not enough guards to eternally $k$-dominate $T$, a contradiction.\end{proof}
The two previous results provided a means to ``trim branches" off a tree to reduce the eternal $k$-domination number by $1$. In each of these propositions, the diameter of $T_x$ is either exactly $k$ or $2k$. When $k < \textrm{diam}(T_x) <2k$, there are more interesting interactions between the guards in $T$ and in $D$. It may be possible for guards to move in and out of $D$ while guarding $T$, so $\gamma_{all,k}^\infty(T[V\backslash D])$ may or may not decrease by one. In fact, for every diameter in $[k+1, 2k-1]$, we next provide a construction where there exists a tree with $\gamma_{all,k}^\infty (T[V\backslash D]) = \gamma_{all,k}^\infty(T)$ and a tree with $\gamma_{all,k}^\infty (T[V\backslash D])+1 = \gamma_{all,k}^\infty(T)$.
\begin{example}
We first consider a tree $T_x$, rooted at a vertex $x$, with one leaf at distance $k$ from $x$, with $ \textrm{diam}(T_x) = k+\ell$ for some $\ell \in [1,k-1]$. In our first example, to define $T$ we consider $T_x$ and add a star centered at a vertex $y$, with each leaf at distance $k-1$ from $y$, and add the edge $xy$. In this case, $\gamma_{all,k}^\infty(T[(V\backslash V(T_x)) \cup \{x\}]) = \gamma_{all,k}^\infty(T) = 2$, since a guard on a leaf in the star can move to cover $x$, and we can ensure a guard occupies $x$ at all times. Then, whenever there is an attack on the leaf at distance $k$ from $x$, and then a subsequent attack on the vertex in $T_x$ at distance $k+\ell$ from this leaf, we require the guard on $x$ to move into $T_x$ to the new attack, and the previous guard can move back up to $x$.
\end{example}
\begin{example}
For the second example, we define a tree $T$ by considering $T_x$ and adding a star centered at a vertex $y$, with each leaf at distance $k$ from $y$, and add the edge $xy$. In this case, $\gamma_{all,k}^\infty(T[(V\backslash V(T_x)) \cup \{x\}]) = 2 < 3= \gamma_{all,k}^\infty(T) $. A guard on a leaf of the star will not be able to guard $x$, and we have one guard always on $y$ in $T\backslash T_x$; however, in order to guard the leaf at distance $k$ from $x$ in $T_x$, we require a third guard in $T_x$ in order to ensure there is always a guard on $x$ available to guard the leaves as above.
\end{example}
Although the previous results provide insight into a reduction on trees, the two examples above show that additional conditions are needed to characterize sub-trees with diameters between $k$ and $2k$.
Our next result provides a way to ``trim branches" without changing the eternal $k$-domination number. For $k=2$, for example, if a vertex is adjacent to at least two leaves, then a leaf can be deleted without changing the eternal $2$-domination number. An example of Theorem~\ref{thm:sup} having been applied seven times to a tree is shown in Figure~\ref{fig:treeillustration}.
\begin{theorem}\label{thm:sup}
Let $T'=(V',E')$ be an induced subgraph of a tree $T=(V,E)$ where $T'$ is a $\lfloor \frac{k}{2} \rfloor$-rooted tree with root $x$ and some leaf $\ell$ at distance $\lfloor \frac{k}{2}\rfloor$ from $x$ in $T'$, and $T[(V\backslash V')\cup \{x\}]$ is connected. Then $\gamma_{all,k}^\infty(T) = \gamma_{all,k}^\infty(T[(V\backslash V')\cup Q])$ where $Q$ is the set of vertices on the $x\ell$-path.
\end{theorem}
\begin{proof} By Lemma~\ref{lemma:subgraph}, $\gamma_{all,k}^\infty(T[(V\backslash V') \cup Q]) \leq \gamma_{all,k}^\infty(T)$. It is easy to see that $g = \gamma_{all,k}^\infty(T[(V\backslash V') \cup Q])$ guards suffice to eternally $k$-dominate $T$. The guards on $T$ move as they would on graph $T[(V\backslash V') \cup Q]$, with a few exceptions: when a vertex of $T[V'\backslash Q]$ is attacked in $T$, the guards of $T$ move to the same vertices guards of $T[(V\backslash V') \cup Q]$ would move to in response to an attack at a vertex on $Q$, with the exception of one guard who moves to the attacked vertex on $V' \backslash Q$, rather than a vertex on $Q$. \end{proof}
Lemma~\ref{lemA} describes another set of vertices that can be deleted from a tree without changing the eternal $k$-domination number. We first identify two leaves $\ell_1,\ell_2$ that are distance $2k$ apart and let $x$ be the centre of the $\ell_1\ell_2$-path. Informally, we identify all ``branches'' from $x$ whose leaves are all within distance $k$ of $x$ and remove all vertices on these branches, apart from those on the $\ell_1\ell_2$-path. Lemma~\ref{lemA} proves the sub-tree has the same eternal $k$-domination number as the original tree. An example is shown in Figure~\ref{fig:scribbles}, where the vertices removed (defined as set $D$ in the theorem) are striped.
\begin{figure}[ht]
\[\includegraphics[width=0.45\textwidth]{fig1.png}\]
\caption{An example pertaining to Lemma~\ref{lemA} where the vertices in $D$ are coloured striped.}
\label{fig:scribbles}
\end{figure}
The {\it eccentricity} of vertex $u$ in graph $G$, denoted $\epsilon_G(u)$, is the maximum distance between $u$ and any other vertex in $G$. More formally, $\epsilon_G(u) = \max_{v \in V(G)} d(u,v)$.
\begin{lemma}\label{lemA} Suppose $\ell_1,\ell_2$ are two leaves in tree $T=(V,E)$ such that $d(\ell_1,\ell_2)=2k$. Let $Q$ denote the $\ell_1\ell_2$-path and let $x$ be the center of $Q$.
Let $N(x) = \{a_1,a_2,\dots,a_p\}$ for some integer $p$. Let $A_1,A_2,\dots,A_p$ be the components of $T$ induced by the deletion of $x$ where $a_i \in A_i$. Suppose $\epsilon_{T[A_i]}(a_i) \leq k-1$ for $i \in \{1,2,\dots,q\}$ and $\epsilon_{T[A_i]}(a_i) \geq k$ for $i \in \{q+1,q+2,\dots,p\}$. If $$D = \Big(\bigcup_{1\leq i \leq q} A_i\Big) \backslash Q$$ then $\gamma_{all,k}^{\infty}(T) = \gamma_{all,k}^{\infty}(T[V\backslash D])$.\end{lemma}
\begin{proof} By Lemma~\ref{lemma:subgraph}, $\gamma_{all,k}^{\infty}(T[V\backslash D])\leq \gamma_{all,k}^{\infty}(T)$. Suppose $\gamma_{all,k}^{\infty}(T[V\backslash D])=s$. We simply modify the movements of the $s$ guards on $T[V\backslash D]$ to defend against attacks on vertices of $D$ in $T$.
We first point out the existence of a particular eternal $k$-dominating family on $T[V\backslash D]$. Each eternal $k$-dominating set on $T[V \backslash D]$ must contain at least one vertex on the $x\ell_1$-path (otherwise $\ell_1$ is not $k$-dominated). Suppose that in response to an attack at a vertex on the $x\ell_1$-path, the guards move to form a $k$-dominating set that does not contain $x$. Clearly, the guards could alternately have moved to form a $k$-dominating set that does contain $x$: after the guards move in response to the attack, there must be a guard on the $x\ell_2$-path (otherwise $\ell_2$ is not $k$-dominated), and this guard could have moved to $x$. Thus, there exists an eternal $k$-dominating family on $T[V\backslash D]$ where each eternal $k$-dominating set is of cardinality $s$ and contains $x$. We now exploit this eternal $k$-dominating family $\mathcal{E}_x$ in order to create an eternal $k$-dominating family for $T$ of cardinality $s$.
Initially, place $s$ guards on the vertices of $T$ that correspond to the vertices of some eternal $k$-dominating set $S \subseteq \mathcal{E}_x$ on $T[V \backslash D]$. Since this results in a guard on vertex $x$ on $T$, we know $T$ is initially $k$-dominated. If a vertex $y \in V\backslash D$ is attacked in $T$, the guards of $T$ mirror the movements of guards in graph $T[V\backslash D]$ in response to an attack at $y$. If a guard in graph $T[V\backslash D]$ moves from vertex $b$ to $c$, then a guard in graph $T$ will move from vertex $b$ to $c$, with one exception: if there is a guard in $D$, the guard will move to $x$. Note that in $T$, this results in the vertices of $T$ remaining $k$-dominated.
Now suppose that on $T$, vertex $z \in D$ is attacked. If the previous vertex attacked was also in $D$, then a guard moves from $D$ to $x$ and a guard moves from $x$ to $z$. Otherwise, we consider an attack at $\ell_1$ in graph $T[V\backslash D]$. On graph $T[V\backslash D]$, a guard can move from $x$ to $\ell_1$ and the remaining guards move accordingly, to form an eternal $k$-dominating set containing both the attacked vertex and $x$. In $T$, a guard moves from $x$ to $z$ instead of $\ell_1$ and the remaining guards move the same as their counterparts in $T[V\backslash D]$. Thus, the guards on $T$ form a $k$-dominating set.\end{proof}
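The reduction of Lemma~\ref{lemA} is entirely mechanical, so it can be automated. The following is a minimal sketch (the adjacency-dictionary representation and all function names are our own, not part of the formal development) that, given a tree and $k$, searches for a pair of leaves at distance $2k$ and returns the corresponding set $D$ of vertices to delete:
\begin{verbatim}
# A sketch of the reduction in Lemma lemA. The tree is given as an
# adjacency dict {vertex: set_of_neighbours}; names are illustrative only.
from collections import deque

def bfs_dist(adj, src, blocked=None):
    """Distances from src, never passing through the vertex 'blocked'."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v != blocked and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def lemA_reduction(adj, k):
    leaves = [v for v in adj if len(adj[v]) == 1]
    for l1 in leaves:
        d1 = bfs_dist(adj, l1)
        for l2 in leaves:
            if d1.get(l2) != 2 * k:
                continue
            d2 = bfs_dist(adj, l2)
            # centre x of the l1l2-path, and the path Q itself
            x = next(v for v in adj if d1[v] == k and d2[v] == k)
            Q = {v for v in adj if d1[v] + d2[v] == 2 * k}
            D = set()
            for a in adj[x]:
                # component A_i of T - x containing a_i = a
                comp = bfs_dist(adj, a, blocked=x)
                if max(comp.values()) <= k - 1:  # ecc_{T[A_i]}(a_i) <= k-1
                    D |= set(comp) - Q
            return D  # for the first suitable pair; T[V \ D] has the same
                      # eternal k-domination number by Lemma lemA
    return set()      # no leaves at distance 2k: nothing to trim
\end{verbatim}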
For $k=2$, we next consider two examples which illustrate that for some trees, removing $a_1,a_2,\ell_1,\ell_2$ will not change the eternal $2$-domination number, but for other trees, it will. Thus, Examples~\ref{ExA} and~\ref{ExB} illustrate that Lemma~\ref{lemA} cannot be improved by removing vertices of $Q\backslash \{x\}$.
\begin{example}\label{ExA}
For $k=2$, consider the tree $T$ given in Figure~\ref{fig:LemA} (a), where vertex $x$ has been identified. Using Lemma~\ref{lemA}, we can ``trim branches'' of the tree without changing the eternal $2$-domination number. The tree $T[V\backslash D]$ shown in Figure~\ref{fig:LemA} (b) shows the result of applying Lemma~\ref{lemA} to tree $T$ using the vertex identified as $x$ in the figure. We note that though $\gamma_{all,2}^\infty(T) = \gamma_{all,2}^\infty(T[V\backslash D]) = 6$, it is also the case that $\gamma_{all,2}^\infty(T[V\backslash (D \cup \{a_1,a_2,\ell_1,\ell_2\})]) = 6$. (In Figure~\ref{fig:LemA} (a) and (b), this is the part of the graph outside of the bubble.) Thus, for this example, ``trimming off'' $a_1,a_2,\ell_1,\ell_2$ does not change the eternal $2$-domination number.\end{example}
\begin{figure}[htbp]
\[ \includegraphics[width=1\textwidth]{fig2.png}\]
\caption{With $k=2$, a tree $T$ in (a); tree $T[V\backslash D]$ in (b) was obtained using Lemma~\ref{lemA}.}
\label{fig:LemA}
\end{figure}
\begin{example}\label{ExB}
For $k=2$, consider the tree $T'$ given in Figure~\ref{fig:LemA} (c). We may assume that this tree is the result of having applied Lemma~\ref{lemA} to a larger tree. Although $\gamma_{all,2}^\infty(T') = 4$, we also note that $\gamma_{all,2}^\infty(T'[V\backslash \{a_1',a_2',\ell_1',\ell_2'\}])=3$. Thus, for this example, ``trimming'' $a_1',a_2',\ell_1',\ell_2'$ does change the eternal $2$-domination number.\end{example}
In Section~\ref{section_treedecomposition}, we present further tree reductions for the case where $k=2$.
\section{Eternal $2$-domination on trees}\label{section_trees}
\subsection{General Results for Trees and $m$-ary Trees}
In this section, we first describe an eternal $2$-dominating set for any tree $T$, which yields an upper bound for $\gamma_{all,2}^\infty(T)$, and second, determine $\gamma_{all,2}^\infty(T)$ exactly when $T$ is a perfect $m$-ary tree.
\begin{lemma}\label{thm:maryUP} Let $T$ be a rooted tree and place guards according to 1.-3. below. The vertices occupied by guards form an eternal $2$-dominating set on $T$.
\begin{enumerate}
\item Initially place a guard on each vertex for which the distance to the nearest leaf is even and positive.
\item If 1. results in no guards being placed on the root, then place one guard on the root.
\item Place one guard on an arbitrary leaf.
\end{enumerate}
\end{lemma}
\begin{proof}It is easy to see that the result holds for any rooted tree of depth $2$ or $3$. We prove the result by inducting on the depth of the tree. Let $T'$ be a tree of depth $d$. Delete all leaves of $T'$ to obtain $T''$, and then delete all leaves of $T''$ to obtain the sub-tree $T$ of depth $d-2$ (note: leaves of $T''$ are the stems in $T'$ with at most one non-leaf neighbour).
Let $f:V(T') \rightarrow V(T)$ where $$f(v') = \begin{cases} v & \text{if $v'$ is not a leaf in either $T'$ or $T''$;} \\ z & \text{if $v'$ is a leaf in $T'$ or $T''$, where $z'$ is the nearest vertex to $v'$ that is} \\ ~ & \text{~~a grandparent of some leaf in $T'$ and $z$ is its copy in $T$.}\end{cases}$$
The map above preserves distances in the sub-tree $T \subseteq T'$ and maps leaves in $T'$ and $T''$ to the nearest vertex that is distance $2$ from a leaf, noting that such a vertex may also happen to be a leaf of $T''$.
Let $D$ be any eternal $2$-dominating set on $T$. We create a $2$-dominating set $D'$ on $T'$ that ``mirrors'' $D$ on $T$. Let $D'$ be the set of vertices on $T'$ where
\smallskip
(a) if $v \in D$ then $f^{-1}(v) = v'$ is in $D'$; and
(b) for each vertex $z'$ of $T'$ that is a grandparent of a leaf, if $z \notin D$ then $z' \in D'$ and if $z \in D$, then choose an arbitrary child of $z'$ to be in $D'$. \smallskip
From this construction, $D'$ is a $2$-dominating set on $T'$.
Suppose vertex $u' \in V(T')$ is attacked. We consider two cases: (1) when $u'$ is neither a leaf in $V(T'')$ nor a leaf in $V(T')$ and (2) when $u'$ is a leaf in $V(T'')$ or $V(T')$.
(1) Suppose $u'$ is neither a leaf in $V(T'')$ nor a leaf in $V(T')$. Then $f(u') = u \in V(T)$. In this situation, we consider an attack at vertex $u$ in $T$. The guards of $T$ move from an eternal $2$-dominating set $D$ to an eternal $2$-dominating set $D_1$ that contains $u$. We move the guards in $T'$ according to how the guards move in $T$. That is, if a guard in $T$ moves from $x \in V(T)$ to $y \in V(T)$, then in $T'$ the guard at $f^{-1}(x) = x'$ moves to $f^{-1}(y)=y'$. Additionally, any guard in $V(T')$ or $V(T'')$ that is located at a leaf (that is not already a grandparent of a leaf) moves to the nearest vertex $z'$ that is a grandparent of a leaf. Observe that $f(z')=z \in V(T)$. Let $D_1'$ be the set of vertices now occupied by guards in $T'$. Observe that $D_1'$ is a $2$-dominating set on $T'$ that contains $u'$. Further observe that $D_1'$ mirrors $D_1$, just as $D'$ mirrored $D$.
(2) Suppose $u'$ is a leaf in $V(T')$ or $V(T'')$. Then $f(u')=z$, where $z'$ is the nearest (to $u'$) grandparent of a leaf in $T'$ and $z$ is its copy in $T$.
If there is a guard on a vertex that is a child or grandchild of $z'$, then this guard moves to $z'$ while the guard at $z'$ moves to the attacked vertex. Call the set of vertices now occupied by the guards $D''$ and note that it is a $2$-dominating set containing the attacked vertex. Observe that $D''$ mirrors $D$ just as $D'$ mirrored $D$.
Next, suppose there is no guard on a vertex that is a child or grandchild of $z'$. Then we consider an attack at $z \in V(T)$ and the resulting movements of guards in $T$. In $T$, suppose the guards move from $D$ to the eternal $2$-dominating set $D_1$ that contains $z$.
If a guard in $T$ moves from $x \in V(T)$ to $y \in V(T)$, then the guard at $f^{-1}(x) = x'$ moves to $f^{-1}(y)=y'$. The guard at $z'$ moves to the attacked vertex. Finally, if there is a guard on a vertex that is a child or grandchild of some $w' \in V(T')$ where $w' \neq z'$ and $w'$ is the grandparent of a leaf, then that guard moves to $w'$. Call the set of vertices now occupied by the guards $D_1'$ and note that it is a $2$-dominating set containing the attacked vertex. Observe that $D_1'$ mirrors $D_1$ just as $D'$ mirrored $D$.
We have seen that for any attack, the movements of guards on $T'$ can be guided by the movements of guards on the sub-tree $T$. After each attack, if the guards on $T$ can move to an eternal $2$-dominating set $D_*$, then the guards on $T'$ can move to a $2$-dominating set $D_*'$. \end{proof}
We note that although the result of Lemma~\ref{thm:maryUP} is expressed for $k=2$, the result and proof can easily be extended to arbitrary $k$:
\begin{enumerate}
\item Initially place a guard on each vertex for which the distance to the nearest leaf is a positive multiple of $k$.
\item If 1. results in no guards being placed on the root, then place one guard on the root.
\item Place one guard on an arbitrary leaf.
\end{enumerate}
In this paper, we only use Lemma~\ref{thm:maryUP} for the $k=2$ case, so we do not prove the result for arbitrary $k$. However, combining Lemma~\ref{thm:maryUP} (or the above extension) with Theorem~\ref{thm:span} yields an upper bound for the eternal $2$-domination number (or eternal $k$-domination number) of any graph.
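As an illustration, the guard placement of Lemma~\ref{thm:maryUP}, in the generalized form just described, is easy to compute. The sketch below is our own and assumes the tree is given as an adjacency dictionary together with a chosen root; it returns the set of vertices that initially receive guards:
\begin{verbatim}
# A sketch of the guard placement rules 1.-3. above, for arbitrary k.
from collections import deque

def initial_guards(adj, root, k=2):
    # Distance from every vertex to its nearest leaf, by multi-source BFS.
    leaves = [v for v in adj if len(adj[v]) == 1 and v != root]
    dist = {l: 0 for l in leaves}
    q = deque(leaves)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    # Rule 1: guards where the distance is a positive multiple of k.
    guards = {v for v in adj if dist[v] > 0 and dist[v] % k == 0}
    # Rule 2: guard the root if rule 1 left it unguarded.
    if root not in guards:
        guards.add(root)
    # Rule 3: one guard on an arbitrary leaf.
    guards.add(leaves[0])
    return guards
\end{verbatim}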
\medskip
An {\bf $m$-ary tree} is a rooted tree where every vertex has at most $m$ children. The {\bf depth} $d$ of an $m$-ary tree is the eccentricity of the root. A {\bf perfect $m$-ary tree} is an $m$-ary tree in which every non-leaf vertex has exactly $m$ children and every leaf is distance $d$ from the root where $d$ denotes the depth of the tree.
We next present a lemma that will be helpful in determining the eternal $2$-domination number for perfect $m$-ary trees. Recall that an eternal $2$-dominating family $\mathcal{E}$ is a set in which the elements are eternal $2$-dominating sets, all of the same cardinality, so that: if the guards occupy eternal $2$-dominating set $D \in \mathcal{E}$ and there is an attack at vertex $v$, the guards can move from set $D \in \mathcal{E}$ to a set $D' \in \mathcal{E}$ that contains $v$ (i.e. $v \in D'$ and $D$ transforms to $D'$). A {\it minimal} eternal $2$-dominating family is minimal in terms of the number of eternal $2$-dominating sets in the family.
\begin{lemma}\label{ClaimX} Let $T$ be a perfect $m$-ary tree of depth $d \geq 2$ for $m \geq 2$. There exists a minimal eternal $2$-dominating family in which each eternal $2$-dominating set contains the grandparent of every leaf and has cardinality $\gamma_{all,2}^\infty(T)$.\end{lemma}
\begin{proof}
Let $T$ be a perfect $m$-ary tree of depth $d \geq 2$. Suppose there exists no minimal eternal $2$-dominating family in which each eternal $2$-dominating set contains the grandparent of every leaf and has cardinality $\gamma_{all,2}^\infty(T)$. Let $\mathcal{E}$ be a minimal eternal $2$-dominating family in which each eternal $2$-dominating set has cardinality $\gamma_{all,2}^\infty(T)$. Then there exists some eternal $2$-dominating set $D \in \mathcal{E}$ that does not contain the grandparent of some leaf. Suppose $\ell$ is a leaf and $D$ does not contain the grandparent $v$ of $\ell$.
Let $T_v$ be the sub-tree of $T$ rooted at $v$. Observe that every set in $\mathcal{E}$ must contain at least one vertex of $T_v$; else $T_v$ is not $2$-dominated. Since $m \geq 2$ and $D$ does not contain $v$, it must be that $D$ contains at least two vertices of $T_v$; otherwise the vertices of $T_v$ are not $2$-dominated. Although one of these vertices may be the most recently attacked vertex (so a guard moved to the attacked vertex in $T_v$), let $u \neq v$ be the other vertex on $T_v$ that is in $D$. Since $N(u)\cup N_2(u) \subseteq N(v) \cup N_2(v)$, in response to the last attack, we could have moved a guard to $v$ instead of $u$. So instead of moving to $D$, the guards could have moved to $(D \backslash \{u\}) \cup \{v\}$. We observe that if $D$ transforms to $D' \in \mathcal{E}$, then by construction, $(D \backslash \{u\}) \cup \{v\}$ will also transform to $D' \in \mathcal{E}$. Thus, $\Big(\mathcal{E} \backslash \{D\}\Big) \cup \Big\{(D \backslash \{u\}) \cup \{v\}\Big\}$ is an eternal $2$-dominating family. Applying this argument repeatedly results in the desired contradiction.\end{proof}
We conclude this subsection with the following characterization for $m$-ary trees.
\begin{theorem}\label{thm:mary} Let $T$ be a perfect $m$-ary tree of depth $d \geq 2$ and $m\geq 2$. Then $$\gamma_{all,2}^\infty(T)=1+\frac{m^d-\delta_{oe}}{m^2-1}$$ where $\delta_{oe} = \begin{cases} 1 & \text{ if $d$ is even} \\ m & \text{ if $d$ is odd.}\end{cases}$\end{theorem}
\begin{proof} The upper bound follows from Lemma~\ref{thm:maryUP}. It is easy to see that the above formula holds for perfect $m$-ary trees of depth $2,3$, and $4$. For the sake of contradiction, let $T'$ be a minimum depth perfect $m$-ary tree such that the above formula does not hold. Then $\gamma_{all,2}^\infty(T') \leq (m^{d}-\delta_{oe})/(m^2-1)$ where $T'$ has depth $d$. In $T'$, let $Z_1$ be the set of leaves, $Z_2$ be the set of parents of leaves, and $Z_3$ be the set of vertices that are grandparents of leaves. We first observe that in any minimum eternal $2$-dominating set on $T'$, there is always at least one guard in each sub-tree of $T'$ rooted at a vertex at depth $d-2$; otherwise, some leaf is not within distance $2$ of a guard. Thus, in every minimum eternal $2$-dominating set on $T'$, there are at least $m^{d-2}$ guards on the vertices $Z_1\cup Z_2\cup Z_3$ in $T'$.
Let $T$ be a perfect $m$-ary tree of depth $d-2$. Notice that $T$ is the sub-tree of $T'$ with the leaves and the parents of the leaves of $T'$ removed. Since $T$ has a smaller depth than $T'$, the formula for $\gamma_{all,2}^\infty(T)$ holds. The parity of $d$ does not matter since the parity of the depth of $T$ and $T'$ will always match. We next map the movements of guards in response to attacks at vertices in $T'$ to the movements of guards in $T$.
We will show that $\gamma_{all,2}^\infty(T')-m^{d-2}$ guards suffice to defend against any sequence of attacks on $T$.
Consider any minimum eternal $2$-dominating set on $T'$. If a guard occupies $v' \in V(T')\backslash (Z_1\cup Z_2\cup Z_3)$, there will be a shadow guard on the corresponding vertex $v \in V(T)$. If there are $q > 1$ guards in the sub-tree rooted at $z_3' \in Z_3$, then in $T$, $q-1$ guards occupy the corresponding vertex $z_3 \in V(T)$, since $Z_3 \subseteq V(T)$. If there is exactly one guard in the sub-tree rooted at $z_3' \in Z_3$ then no corresponding guard is needed in $T$. We observe, however, that $z_3$ in $T$ will be $2$-dominated by a guard in $V(T)$: in $T'$, if the guard in $Z_3$ moved to a grandchild of $z_3'$ in response to an attack, a guard from $V(T)$ must move to $z_3'$. It is clear that the $\gamma_{all,2}^\infty(T')-m^{d-2}$ vertices occupied by guards in $T$ form a $2$-dominating set.
We now consider the subsequent attack in two cases.
(1) If a vertex in $V(T')\backslash (Z_1\cup Z_2 \cup Z_3)$ is attacked, the guards in $T$ consider an attack at the corresponding vertex in $T$.
(2) If a vertex in $Z_1 \cup Z_2 \cup Z_3$ is attacked, let $z_3' \in Z_3$ be the vertex in $Z_3$ closest to the attacked vertex. By Lemma~\ref{ClaimX}, when the guards in $T'$ move in response to the attack, their positions will form an eternal $2$-dominating set that contains $z_3'$, and the attacked vertex. The guards in $T$ consider an attack at the corresponding vertex $z_3 \in V(T)$.
If a guard at vertex $u' \in V(T')\backslash (Z_1\cup Z_2 \cup Z_3)$ moves to $w' \in V(T')\backslash (Z_1\cup Z_2 \cup Z_3)$ then a guard moves from $u \in V(T)$ to $w \in V(T)$.
If a guard moves from vertex $z_3' \in Z_3$ to $y' \in V(T')\backslash (Z_1\cup Z_2 \cup Z_3)$ then by Lemma~\ref{ClaimX}, there must be another guard in $Z_1\cup Z_2\cup Z_3$ that moves to $z_3'$. Then in $T$, there is a guard at $z_3$ before the attack and this guard moves to $y \in V(T)$ after the attack.
We note that a guard $g$ will not move from a vertex of $Z_1\cup Z_2$ to a vertex in $V(T') \backslash (Z_1 \cup Z_2)$ because if there is a guard at a vertex of $Z_1 \cup Z_2$, then by Lemma~\ref{ClaimX}, there is also a guard at the closest vertex $z_3' \in Z_3$. We may assume the guard at $z_3'$ moves to the vertex $y' \in V(T')\backslash (Z_1 \cup Z_2)$, leaving guard $g$ in $Z_1 \cup Z_2$ to move to $z_3'$. In $T$, the guard at $z_3$ moves to vertex $y \in V(T)$.
If a guard $g$ moves from a vertex of $Z_1 \cup Z_2$ to a vertex of $Z_1 \cup Z_2$ then we observe that by Lemma~\ref{ClaimX}, there must be $q > 1$ guards in the sub-tree rooted at the nearest vertex of $Z_3$, at least one of whom is located on $z_3'$. We therefore may assume guard $g$ is not needed in the eternal $2$-dominating set on $T$.
After the guards have all moved on $T$, they still form an eternal $2$-dominating set in $T$. Thus, $\gamma_{all,2}^\infty(T')-m^{d-2}$ guards are sufficient to defend against any sequence of attacks in $T$; that is, $\gamma_{all,2}^\infty(T) \leq \gamma_{all,2}^\infty(T')-m^{d-2}$. Thus, $$1+\frac{m^{d-2}-\delta_{oe}}{m^2-1} = \gamma_{all,2}^\infty(T) \leq \gamma_{all,2}^\infty(T')-m^{d-2} \leq \frac{m^d-\delta_{oe}}{m^2-1} - m^{d-2} = \frac{m^{d-2}-\delta_{oe}}{m^2-1},$$ which is a contradiction. Thus, the lower bound has been proven.\end{proof}
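The closed form of Theorem~\ref{thm:mary} is easy to tabulate for small trees. The snippet below (ours) evaluates it, using the fact that $m^2-1$ divides $m^d-\delta_{oe}$ for every $d \geq 2$:
\begin{verbatim}
# Evaluate the formula of Theorem thm:mary for a perfect m-ary tree.
def eternal_2_dom_perfect_mary(m, d):
    delta = 1 if d % 2 == 0 else m        # delta_{oe}
    assert (m**d - delta) % (m**2 - 1) == 0
    return 1 + (m**d - delta) // (m**2 - 1)

for m in (2, 3):
    for d in (2, 3, 4, 5):
        print(m, d, eternal_2_dom_perfect_mary(m, d))
\end{verbatim}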
\subsection{Reductions on trees for $k=2$} \label{section_treedecomposition}
The results in Section~\ref{sec:trees} provide reductions for trees that control the change in eternal $k$-domination number. In this section, we restrict ourselves to $k=2$. We show that for trees with certain structure, deleting a portion of the tree results in the eternal $2$-domination number decreasing by $1$. We then consider situations where the eternal $2$-domination number does not decrease when removing another part of the graph, given further structure.
For the duration of this section, we require the following particular description of tree structure.
\begin{definition}\label{def:allthesets}
Let $x$ be a vertex in tree $T$ that is not a leaf and is distance exactly two from a leaf. Define $L$ to be the set of leaves of $T$ in $N_2(x)$. Let $X$ be the set of all vertices that are distance two from vertices of $L$, that is, $\displaystyle{X = \cup_{\ell \in L} N_2(\ell)}$.
Let $S$ be the set of vertices adjacent to $x$ that are either leaves themselves or adjacent to a vertex in $L$. We further partition $S$ into two sets: $A \subseteq S$ is the set of vertices with at least two neighbours in $X$; $B \subseteq S$ is the set of vertices with exactly one neighbour in $X$.
\end{definition}
As seen in Figure~\ref{fig:treeillustration}, the sets $L,S,A$ and $B$ all depend on the vertex $x$, and we can consider these sets defined for any eligible vertex $x \in V(T)$. Though $L,S,A,B$ each depend on the choice of $x$, we omit any subscripts in order to keep the presentation readable.
\begin{figure}[htbp]
\[\includegraphics[width=1\textwidth]{fig4.png} \]
\caption{Before and after applying Theorem~\ref{thm:LASBX}.}
\label{fig:treeillustration}
\end{figure}
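Since the sets of Definition~\ref{def:allthesets} drive the reductions that follow, it may help to see them computed explicitly. The sketch below is our own; the tree is an adjacency dictionary mapping each vertex to the set of its neighbours, and the function returns $L$, $X$, $S$, $A$ and $B$ for a given eligible vertex $x$:
\begin{verbatim}
# A sketch computing L, X, S, A, B from Definition def:allthesets.
def n2(adj, v):
    """Vertices at distance exactly two from v."""
    out = set()
    for u in adj[v]:
        out |= adj[u]
    return out - adj[v] - {v}

def sets_for(adj, x):
    L = {l for l in n2(adj, x) if len(adj[l]) == 1}  # leaves in N_2(x)
    X = set().union(*(n2(adj, l) for l in L)) if L else set()
    S = {s for s in adj[x] if len(adj[s]) == 1 or adj[s] & L}
    A = {s for s in S if len(adj[s] & X) >= 2}       # >= 2 neighbours in X
    B = {s for s in S if len(adj[s] & X) == 1}       # exactly 1 neighbour in X
    return L, X, S, A, B
\end{verbatim}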
\begin{theorem} \label{thm:LASBX} Let $T$ be a tree and suppose there exists a vertex $x \in V(T)$ that generates sets $L,S,A$ and $B$ as defined in Definition~\ref{def:allthesets} such that $A = \emptyset$ and $|B| > 1$. Then $\gamma_{all,2}^\infty(T) = \gamma_{all,2}^\infty(T')+1$ where $T'$ is the graph induced by the deletion of $L\cup B$ from $T$.\end{theorem}
\begin{proof}First, we show $\gamma_{all,2}^\infty(T) \geq \gamma_{all,2}^\infty(T')+1$. For the sake of contradiction, suppose $\gamma_{all,2}^\infty(T) = \gamma_{all,2}^\infty(T')$. Observe that there must always be at least one guard on the vertices $L\cup B \cup \{x\}$, and if there is only one guard on this set of vertices, it must be located at $x$. This implies there must always be a guard at vertex $x$ in $T$. Suppose a leaf of $L$ is attacked in $T$; then a guard $g$ moves from $x$ to the leaf. However, if another leaf in $L$ is attacked subsequently, the guard $g$ cannot reach this other leaf, so in $T$ some other guard must move onto $x$ when $g$ initially moves to protect the leaf from the attack. Thus, there is an eternal $2$-dominating family for $T'$ wherein every eternal $2$-dominating set contains $x$. Thus, there is an unnecessary guard in $T'$, so $\gamma_{all,2}^\infty(T') < \gamma_{all,2}^\infty(T)$, contradicting our supposition.
It is easy to see $\gamma_{all,2}^\infty(T) \leq \gamma_{all,2}^\infty(T')+1$: we create a guard strategy for $T$, based on the movements of guards in $T'$. Initially, suppose guards occupy an eternal $2$-dominating set on $T'$ that contains $x$. Place guards on the vertices of $V(T')$ in $T$ and place an additional guard on an arbitrary vertex of $B \cup L$. Suppose there is an attack on a vertex of $V(T')$ in $T$: if there is a guard in $B \cup L$, that guard moves to $x$. The remaining guards move as their counterparts in $T'$ would move in response to such an attack (this may result in two guards simultaneously occupying $x$). Alternately, suppose there is an attack at a vertex of $B \cup L$ in $T$. In $T'$, we consider an attack at $x$. The guards in $T'$ move to occupy an eternal $2$-dominating set containing $x$ (which may result in no guard moving). In $T$, the guard at $x$ moves to the attacked vertex in $B \cup L$; if there is a guard already in $B \cup L$, that guard moves to $x$, and the remaining guards move like their counterparts in $T'$. Thus, the guards form a $2$-dominating set on $T$. In this manner, $\gamma_{all,2}^\infty(T')+1$ guards can defend against any sequence of attacks on $V(T)$.\end{proof}
Notice that in Theorem~\ref{thm:LASBX}, we exclude the case where $|B|=1$. This is because there exist trees with $|B|=1$, such as $T_1$, for which removing the set $L \cup B$ reduces the eternal $2$-domination number, and others, such as $T_2$, for which it leaves the eternal $2$-domination number unchanged; two such trees are illustrated in Figure~\ref{fig:remark}.
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{fig5.png}
\caption{Trees $T_1$ and $T_2$.}
\label{fig:remark}
\end{figure}
In Theorem~\ref{thm:LASBX}, we provided a reduction for the case where $A=\emptyset$. The following result tells us when $A$ is non-empty.
\begin{theorem}\label{thrm:A}
Let $T$ be a tree and $x$ a non-leaf vertex that is a grandparent of a leaf. Then $A \neq \emptyset$ if and only if there exists a leaf $\ell$ that has two distinct grandparents, one of which is $x$ and the other some vertex $x'$.
\end{theorem}
\begin{proof}
Let $T$ be a tree and $x$ a grandparent of a leaf, and create the sets as defined in Definition~\ref{def:allthesets}.
Suppose $A\not = \emptyset$. This means $|X|\geq 2$ and there exists a vertex $a \in A$ that is adjacent to $x$ and to another element $x'$ of $X$. Since $x'$ is a member of $X$, it is within distance two of a leaf $\ell'$ that is distance two away from $x$ (note it is possible that $\ell=\ell'$ for a leaf $\ell$ whose grandparent is $x$). Thus $\ell'$ has two distinct grandparents, one of which is $x$.
Now suppose that there is some leaf $\ell$ that has two distinct grandparents, one of which is $x$, the other $x'$. Note that both $x,x'\in X$. Since $\ell$ is a leaf, it has a unique neighbour $a$, and $a$ is adjacent to all of $\ell$, $x$, and $x'$. From the definition of set $A$, $a\in A$, so $A\not = \emptyset$.
\end{proof}
Though we have settled how to identify and apply reductions to the case where $A = \emptyset$, the example below demonstrates that the case where $A\not = \emptyset$ is not straightforward.
\begin{figure}[htbp]
\[ \includegraphics[width=0.4\textwidth]{fig7.png} \]
\caption{Tree $T$ with $A\not = \emptyset$ }
\label{fig:ex}
\end{figure}
Consider the tree $T$ in Figure~\ref{fig:ex}. The eternal $2$-domination number for this tree is $3$. When $x$ is $x_2,x_5$ or $x_9$, creating the sets as defined in Definition~\ref{def:allthesets} and removing the vertices of $L\cup B$ does not change the eternal $2$-domination number, but when $x$ is $x_3$ or $x_4$, the eternal $2$-domination number decreases by $1$.
Thus, the characterization of $\gamma_{all,2}^\infty(T)$ for all trees remains incomplete and we leave this as an open problem. In Section~\ref{section_Conclusion} we present further open problems and concluding remarks.
\section{Conclusion and Open Problems}\label{section_Conclusion}
We conclude this paper with a discussion of open problems. In Section~\ref{sec:prelim}, we discussed some graphs for which the parameters $\gamma_k$ and $\gamma_{all,k}^\infty$ are equal, and others for which the two parameters are not equal; however, the question of exactly which graphs have equal parameters remains open:
\begin{question}\label{q1}
Can we describe the classes of graphs $\mathcal{G}$ for which $\gamma_k(G) = \gamma_{all,k}^\infty(G)$ for all $G \in \mathcal{G}$? \end{question}
\begin{question}
Let $\mathcal{G}_{n,m}$ be the family of simple graphs on $n$ vertices and $m$ edges. For a fixed $n$ and $m$, what are the optimal families of graphs, that is, what are the graphs with the smallest eternal $k$-domination number? Also what are the least optimal families, that is, the graphs with the largest eternal $k$-domination number?
\end{question}
\begin{question} Given a value of $k$ and an eternal $k$-domination number what is the spectrum of graph orders $n$ that satisfy the given constraints? \end{question}
For any given $k$, if we fix the eternal $k$-domination number to be $1$, the size of the vertex set, $n$, can take on any value: just consider the star graph. One interesting result we can obtain from Theorem~\ref{thrm:paths} is the following. Let $P_{n,\ell}$ be $P_{n}$ with $\ell$ leaves added to a vertex adjacent to one of the leaves of $P_{n}$.
\begin{corollary}
For any given positive integers $k,z$ and $n$, with $n\geq z(k+1)$ there exists a graph on $n$ vertices whose eternal $k$-domination number is $z$, namely $P_{z(k+1),\ell}$, where $\ell=n-z(k+1)$.
\end{corollary}
\begin{proof}
For a given $k$, positive integer $z$, and positive integer $n\geq z(k+1)$, consider the path $P_{z(k+1)}$. This has $z(k+1)$ vertices and from Theorem~\ref{thrm:paths} we know that the eternal $k$-domination number is $z$. Place guards on this path so that it is eternally $k$-dominated. Label the vertices of the path $v_1,v_2,...,v_{z(k+1)}$, with $v_1$ and $v_{z(k+1)}$ being the leaves. Add $n-z(k+1)$ leaves to vertex $v_{z(k+1)-1}$. The guard that $k$-dominates $v_{z(k+1)}$ also $k$-dominates these new leaves. Thus, this new graph has $n$ vertices and eternal $k$-domination number $z$.
\end{proof}
Though we provide reductions in working towards determining $\gamma_{all,k}^\infty(T)$ and further reductions in working towards determining $\gamma_{all,2}^\infty(T)$ for any tree $T$, we were unable to complete the characterizations and leave them as open problems. We additionally state a few related questions.
\begin{question} Which graphs $G$ have the property that $\gamma_{all,2}^\infty (G) = \gamma(G)$? Can we characterize the trees with this property? \end{question}
Clearly $\gamma_{all,2}^\infty(G) = 1$ if and only if $\gamma(G)=1$ (i.e. $G$ has a universal vertex). If $\gamma(G) = 2$ then $\gamma_{all,2}^\infty(G)=2$, but the converse is not always true. For example, $\gamma_{all,2}^\infty(C_{10})=2 < 4 = \gamma(C_{10})$. In considering trees, we observe that $\gamma_{all,2}^\infty(K_{1,n})=\gamma(K_{1,n})$, but for caterpillar graphs with no degree $2$ vertices, $\gamma_{all,2}^\infty$ and $\gamma$ are not equal. Similarly, for every tree $T$ formed from a path on $n\geq 3$ vertices by attaching a leaf to every internal vertex, we have $\gamma_{all,2}^\infty (T) \neq \gamma(T)$.
Certainly, if there exists a minimum dominating set in which each vertex has at least two private neighbours, then $\gamma_{all,2}^\infty(T)\leq \gamma(T)$; the converse remains open.
\begin{question}\label{q2} Suppose that for every minimum dominating set of a tree $T$, each vertex in the dominating set has at least two private neighbours. Then is $\gamma_{all,2}^\infty(T)=\gamma(T)$? \end{question}
\subsection*{Acknowledgements}
M.E. Messinger acknowledges research support from NSERC (grant application 2018-04059). D. Cox acknowledges research support from NSERC (2017-04401) and Mount Saint Vincent University. E. Meger acknowledges research support from Universit\'e du Qu\'ebec \`a Montr\'eal and Mount Allison University.
\section{Appendix}
\subsection{Properties of Differential Privacy}
\label{subsec:dp_props}
Differentially private computations enjoy two nice properties:
\begin{theorem}[Post Processing \cite{DMNS06}]
Let $A:\mathcal{X}^*\rightarrow \mathcal{O}$ be any $(\varepsilon,\delta)$-differentially private algorithm, and let $f:\mathcal{O}\rightarrow \mathcal{O}'$ be any function. Then the algorithm $f \circ A: \mathcal{X}^*\rightarrow \mathcal{O}'$ is also $(\varepsilon,\delta)$-differentially private.
\end{theorem}
Post-processing implies that, for example, every \emph{decision} process based on the output of a differentially private algorithm is also differentially private.
\begin{theorem}[Basic Composition \cite{DMNS06}]
\label{thm:composition}
Let $A_1:\mathcal{X}^*\rightarrow \mathcal{O}$, $A_2:\mathcal{O}\times \mathcal{X}^*\rightarrow \mathcal{O}'$ be such that $A_1$ is $(\varepsilon_1,\delta_1)$-differentially private, and $A_2(o,\cdot)$ is $(\varepsilon_2,\delta_2)$-differentially private for every $o \in \mathcal{O}$. Then the algorithm $A:\mathcal{X}^*\rightarrow \mathcal{O}'$ defined as $A(x) = A_2(A_1(x),x)$ is $(\varepsilon_1+\varepsilon_2,\delta_1+\delta_2)$-differentially private.
\end{theorem}
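As a toy illustration of Theorem~\ref{thm:composition} (the example, data format, and function names below are our own assumptions), two Laplace releases at privacy parameter $\varepsilon/2$ each compose to a single $(\varepsilon,0)$-differentially private pair of statistics:
\begin{verbatim}
# A toy sketch of basic composition with the Laplace mechanism.
import numpy as np

def laplace_release(value, sensitivity, eps):
    return value + np.random.laplace(scale=sensitivity / eps)

def two_releases(data, eps):
    # data: numpy array of values in [0,1], one entry per individual;
    # each statistic below has sensitivity 1/len(data).
    n = len(data)
    mean_hat = laplace_release(np.mean(data), 1.0 / n, eps / 2)
    frac_hat = laplace_release(np.mean(data > 0.5), 1.0 / n, eps / 2)
    return mean_hat, frac_hat  # (eps/2 + eps/2, 0)-DP by basic composition
\end{verbatim}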
\subsection{Proof of Theorem~\ref{thm:hp_ub}}
\label{sec:app_hpub}
\begin{proof}
For $j \in \{0,1\}$, let $N_j$ denote the (unknown) true count of 0 and 1 responses, i.e. $N_j = |\{x_i \mid \arg \max_{j' \in \{0,1\}} P_{j'}(x_i) = j\}|$. Then for both $j$, $\E{}{\hat N_j} = \frac{N_j(e^\varepsilon-1) + n}{e^\varepsilon + 1}$. By a Chernoff bound, with high probability $|\hat N_j - \tfrac{N_j(e^\varepsilon-1) + n}{e^\varepsilon+1}| = O(\sqrt{n})$. Then since $\hat N_j' = \tfrac{e^\varepsilon+1}{e^\varepsilon-1} \cdot \left(\hat N_j - \tfrac{n}{e^\varepsilon+1} \right)$ we get $|\hat N_j' - N_j| = O\left(\frac{e^{\varepsilon}+1}{e^\varepsilon-1} \cdot \sqrt{n}\right) = O\left(\frac{\sqrt{n}}{\varepsilon}\right)$. It is therefore sufficient that $\tfrac{1}{\varepsilon\sqrt{n}} = O(\alpha)$ to distinguish between $P_0$ and $P_1$, which implies the claim.
\end{proof}
\subsection{High Probability Sample Complexity from Theorem~\ref{thm:main}}
\label{sec:app_highprob}
We first prove a multiplicative Azuma-Hoeffding Inequality which will drive the high probability bound.
\begin{lemma} [Multiplicative Azuma-Hoeffding Inequality]
\label{lem:ah}
Let $(\gamma_t)_{t=1}^{T}$, with $\gamma_{t} \in [0,1]$, be a collection of (possibly dependent) random variables, and let $(\mathcal{F}_t)_{t=1}^{T}$ be a filtration such that $\sigma(\gamma_1, \ldots \gamma_{t-1}) \subset \mathcal{F}_{t-1}$. Suppose $\forall t, \E{}{\gamma_t \mid \mathcal{F}_{t-1}} \leq \mu_t$. Then if, with probability $1$, $\sum_{t}\mu_t \leq \mu$, we have for any $\delta \in [e^{-3\mu/4}, 1]$:
$$\P{}{\sum_{t=1}^{T}\gamma_t > \sqrt{3\mu \log(1/\delta)} + \mu} \leq \delta$$
\end{lemma}
\begin{proof}
Since $\gamma_t \in [0,1]$ and $\E{}{\gamma_t \mid \mathcal{F}_{t-1}} \leq \mu_t$ for all $t$, convexity of $z \mapsto e^{lz}$ gives, for all $t$ and all $l > 0$:
$$ \E{}{e^{l \gamma_t}\mid \mathcal{F}_{t-1}} \leq 1 + (e^{l}-1)\E{}{\gamma_t|\mathcal{F}_{t-1}} \leq 1 + (e^{l}-1)\mu_t \leq e^{(e^{l}-1)\mu_t}$$
If we define $S_j = \sum_{t=1}^{j}\gamma_t$, then:
$$\E{}{e^{lS_j}} = \E{}{\E{}{e^{lS_j} \mid \mathcal{F}_{j-1}}} = \E{}{e^{lS_{j-1}}\,\E{}{e^{l\gamma_j} \mid \mathcal{F}_{j-1}}} \leq \E{}{e^{lS_{j-1}}}e^{(e^{l}-1)\mu_j} $$
Inducting on $j$, we have:
$$ \E{}{e^{lS_T}} \leq e^{(e^l-1)\sum_t \mu_t} \leq e^{(e^l-1)\mu} $$
For $\varepsilon > 0$, taking $l = \log(1+\varepsilon)$ and $a = (1+\varepsilon)\mu$, and using Markov's inequality:
$$\P{}{S_{T} \geq a} \leq e^{-(1+\varepsilon)\mu l }\E{}{e^{lS_T}} \leq e^{-(1+\varepsilon)\mu l + \mu(e^{l}-1)} = e^{-\mu \phi(\varepsilon)},$$
where $\phi(z) = z - (1+z)\log(1+z)$. Since $\phi(z) \leq -z^2/3$ for $z \in [0, 3/2]$, we get:
$$\P{}{S_{T} \geq (1+\varepsilon)\mu} \leq e^{-\mu \varepsilon^2/3}$$
Setting $\varepsilon = \sqrt{\frac{3\log(1/\delta)}{\mu}}$ gives the desired bound. Note that the condition $\varepsilon \in [0, 3/2]$ forces $\delta \geq e^{-3\mu/4}$.
\end{proof}
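As a quick, informal sanity check of Lemma~\ref{lem:ah} (entirely our own; it covers only the special case of i.i.d. Bernoulli variables, for which $\mu = \sum_t \mu_t$ is deterministic), one can estimate the exceedance probability by simulation:
\begin{verbatim}
# Empirical exceedance of the Lemma lem:ah threshold for Bernoulli sums.
import math, random

T, mu_t, delta = 200, 0.1, 0.05
mu = T * mu_t                        # here sum_t mu_t = mu exactly
thresh = math.sqrt(3 * mu * math.log(1 / delta)) + mu
trials = 10000
exceed = sum(
    sum(random.random() < mu_t for _ in range(T)) > thresh
    for _ in range(trials))
print(exceed / trials, "should be at most", delta)
\end{verbatim}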
\begin{proof}
There are at most $n$ users drawn in line $18$ of $\mathsf{Reduction}$, hence it suffices to bound with high probability the number of users drawn during rejection sampling steps in line $13$. For a given user $i$ drawn during a rejection sampling step, the sample complexity over the rounds where $i$ is selected can be written as $\sum_{t: i_t = i}\gamma_tN_t$, where $\gamma_t \sim \mathsf{Ber}(\frac{e^{-\varepsilon_{t}}-1}{e^{-\varepsilon}-1})$ and $N_t \stackrel{ind}{\sim} \mathsf{Geom}(p_{t})$, and where $p_{t} \geq \frac{e^{-\varepsilon}}{2}$ and $\varepsilon_t \leq \varepsilon$ are random variables that depend on $\Pi_{< t}$. Hence the total sample complexity over the rejection sampling rounds can be written as:
$$S = \sum_{t=1}^{T}\gamma_tN_t $$
First consider $\sum_{t = 1}^{T}\gamma_t$. Let $\mathcal{F}_{t}$ be the $\sigma$-algebra $\mathcal{F}_t = \sigma(\Pi_{< t}, \varepsilon_t, (\gamma_l)_{l=1}^{t-1})$. Then $\E{}{\gamma_t \mid \mathcal{F}_{t-1}} = \frac{e^{-\varepsilon_t}-1}{e^{-\varepsilon}-1} = \mu_t$, and with probability $1$, $\sum_t \mu_t \leq \frac{n k \varepsilon}{1-e^{-\varepsilon}} =: \mu$. Hence by Lemma~\ref{lem:ah}, for $\delta \geq 2e^{-3\mu/4}$:
$$\P{}{\sum_{t=1}^{T}\gamma_t > \sqrt{3\mu \log(2/\delta)} + \mu} \leq \frac{\delta}{2}$$
Let $E_{\gamma}$ be the above event $\big\{\sum_{t}\gamma_t \leq \sqrt{3\mu \log(2/\delta)} + \mu\big \}$. Then for any $t$, $$\mathbb{P}[S \geq t| E_{\gamma}] \leq \mathbb{P}[Z \geq t],$$
where $Z = \sum_{t = 1}^{K}N_t', K = \sqrt{3\mu \log(2/\delta)} + \mu$, and $N_t' \stackrel{iid}{\sim} \mathsf{Geom}(\frac{e^{-\epsilon}}{2})$. Let $\mu' = \mathbb{E}[Z] = 2e^{\epsilon}(\sqrt{3\mu \log(2/\delta)} + \mu)$. By Theorem $2.1$ in \cite{boundexp} for any $t \geq 1$:
$$ \mathbb{P}[Z \geq t\mu'] \leq e^{\frac{-e^{-\epsilon}}{2}\mu' (t-1-\log t)}$$
Setting $t = 2(\frac{\log(2/\delta)2e^{\epsilon}}{\mu'} + 1)$ gives $Z \leq 2(\log(2/\delta)2e^{\epsilon} + \mu')$ with probability at least $1-\delta/2$. Hence $\P{}{S \geq 2(\log(2/\delta)2e^{\epsilon} + \mu')|E_{\gamma}} \leq \frac{\delta}{2}$.
Finally,
$$\P{}{S \geq 2(\log(2/\delta)2e^{\epsilon}+ \mu')} \leq \P{}{S \geq 2(\log(2/\delta)2e^{\epsilon}+ \mu') \mid E_{\gamma}}\P{}{E_{\gamma}} + (1-\P{}{E_{\gamma}}) \leq \frac{\delta}{2} + \frac{\delta}{2} = \delta$$
Substituting in the expression for $\mu'$ gives $S = O(nk + \sqrt{nk\log \frac{1}{\delta}})$ with probability $1-\delta$,
as desired.
\end{proof}
\section{Hypothesis Testing}
\label{sec:hyp}
We now turn our attention to the role of interactivity in hypothesis testing. We first show that for the simple hypothesis testing problem, there exists a non-interactive $(\epsilon, 0)$-LDP protocol that achieves optimal sample complexity. This result extends to the compound hypothesis testing case, when we make the additional assumption that the sets of distributions are convex and compact.
\subsection{Simple Hypothesis Testing}
Let $P_0$ and $P_1$ be two known distributions such that $\tv{P_0}{P_1} \geq \alpha$, and suppose one of $P_0$ and $P_1$ generates $n$ i.i.d. samples $x_1, \ldots, x_n$. The goal in \emph{simple hypothesis testing} is to determine whether the samples are generated by $P_0$ or $P_1$. The Neyman-Pearson lemma~\cite{NP33} establishes that the likelihood ratio test is optimal for this problem absent privacy, and recent work~\cite{CKMSU18} extends this idea to give an optimal (up to constants) private simple hypothesis test in the centralized model of differential privacy. We recall a simple folklore non-interactive hypothesis test in the local model, and then prove that it is optimal even among the set of all fully interactive locally private tests.
\subsubsection{(Folklore) Upper Bound}
Consider the following simple variant $\mathcal{A}$ of the likelihood ratio test: each user $i$ with input $x_i$ outputs $\RR{\varepsilon}{\arg \max_{j \in \{0,1\}} P_j(x_i)}$. For $j \in \{0,1\}$, let $\hat N_j$ denote the resulting count of $j$ responses and let $\hat N_j' = \tfrac{e^\varepsilon+1}{e^\varepsilon-1} \cdot \left(\hat N_j - \tfrac{n}{e^\varepsilon+1} \right)$ be the corresponding de-biased count. The analyst computes both quantities $\hat N_j'$ and outputs $P_{\arg \max_j \hat N_j'}$.
\begin{algorithm}
\caption{Locally Private Simple Hypothesis Tester $\mathcal{A}$}\label{alg:lph}
\begin{algorithmic}[1]
\Procedure{Noninteractive Protocol}{$\{x_i\}_{i=1}^n$}
\For{$i = 1 \ldots n$}
\State User $i$ publishes $y_i \gets \RR{\varepsilon}{\arg \max_{j \in \{0,1\}} P_j(x_i)}$
\EndFor
\For{$j = 0, 1$}
\State Analyst computes $\hat N_j \gets |\{y_i \mid y_i = j\}|$
\State Analyst computes $\hat N_j' \gets \tfrac{e^\varepsilon+1}{e^\varepsilon-1} \cdot \left(\hat N_j - \tfrac{n}{e^\varepsilon+1} \right)$
\EndFor
\State Analyst outputs $P_{\arg \max_j \hat N_j'}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
It is immediate that $\mathcal{A}$ is noninteractive and, since it relies on randomized response, satisfies $(\varepsilon,0)$-local differential privacy. Since we can bound its sample complexity by simple concentration arguments, we defer the proof to Appendix~\ref{sec:app_hpub}.
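For intuition, Algorithm~\ref{alg:lph} takes only a few lines to simulate. The following sketch is ours (distributions are represented as dictionaries from outcomes to probabilities; nothing below is part of the formal protocol):
\begin{verbatim}
# A simulation sketch of the locally private simple hypothesis tester.
import math, random

def tester(samples, p0, p1, eps):
    n = len(samples)
    keep = math.exp(eps) / (math.exp(eps) + 1)  # RR truth probability
    counts = [0, 0]
    for x in samples:
        bit = 0 if p0.get(x, 0.0) >= p1.get(x, 0.0) else 1  # argmax_j P_j(x)
        y = bit if random.random() < keep else 1 - bit      # randomized resp.
        counts[y] += 1
    scale = (math.exp(eps) + 1) / (math.exp(eps) - 1)
    debiased = [scale * (c - n / (math.exp(eps) + 1)) for c in counts]
    return 0 if debiased[0] >= debiased[1] else 1  # index of the output P_j
\end{verbatim}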
\begin{theorem}
\label{thm:hp_ub}
With probability at least $2/3$, $\mathcal{A}$ distinguishes between $P_0$ and $P_1$ given $n = \Omega\left(\tfrac{1}{\varepsilon^2\alpha^2}\right)$ samples.
\end{theorem}
\subsubsection{A Lower Bound for Arbitrarily Adaptive $(\varepsilon,\delta)$-Locally Private Tests}
\label{subsubsec:lb}
We now show that the folklore $\epsilon$-private non-interactive test is optimal amongst all $(\epsilon,\delta)$-private fully interactive tests. First, combining (slightly modified versions of) Theorem 6.1 from~\citet{BNS18} and Theorem A.1 in~\citet{CSUZZ18}, we get the following result\footnote{~\citet{BNS18} and~\citet{CSUZZ18} prove their results for noninteractive protocols. However, their constructions both rely on replacing a single $(\varepsilon,\delta)$-local randomizer call for each user with an $(O(\varepsilon),0)$-local randomizer call and proving that these randomizers induce similar output distributions. Since each user still makes a single randomizer call in sequential interactive protocols, essentially the same argument applies. For fully interactive protocols, a naive modification of the same result forces a stronger restriction on $\delta$, roughly $\delta = \tilde o\left(\frac{\varepsilon \beta}{\max(n,T)}\right)$.}
\begin{lemma}
\label{lem:app_to_pure}
Given $\varepsilon > 0$, $\delta < \min\left(\tfrac{\epsilon\beta}{48n\ln(2n/\beta)}, \frac{\beta}{64n\ln(n/\beta)e^{7\varepsilon}}\right)$ and sequentially interactive $(\varepsilon,\delta)$-locally private protocol $\mathcal{A}$, there exists a sequentially interactive $(10\varepsilon,0)$-locally private protocol $\mathcal{A}'$ such that for any dataset $U$, $\tv{\mathcal{A}(U)}{\mathcal{A}'(U)} \leq \beta$.
\end{lemma}
Lemma~\ref{lem:app_to_pure} enables us to apply existing lower bound tools for $\varepsilon$-locally private protocols to (sequentially interactive) $(\varepsilon,\delta)$-locally private protocols. At a high level, our proof relies on controlling the Hellinger distance between transcript distributions induced by an $(\varepsilon,\delta)$-locally private protocol when samples are generated by $P_0$ and $P_1$. We borrow a simulation technique used by~\citet{BGMNW16} for a similar (non-private) problem and find that we can control this Hellinger distance by bounding the KL divergence between a simpler, \emph{noninteractive} pair of transcript distributions. We accomplish this last step using existing tools from~\citet{DJW13}.
\begin{theorem}
\label{thm:hypothesis_lb}
Let $\tv{P_0}{P_1} = \alpha$ and let $\Pi$ be an arbitrary (possibly fully interactive) $(\varepsilon,\delta)$-locally private simple hypothesis testing protocol distinguishing between $P_0$ and $P_1$ with probability $\geq 2/3$ using $n$ samples where $\varepsilon > 0$ and $\delta < \min\left(\tfrac{\varepsilon^3\alpha^2}{48n\ln(2n/\beta)}, \frac{\varepsilon^2\alpha^2}{64n\ln(n/\beta)e^{7\varepsilon}}\right)$. Then $n = \Omega\left(\tfrac{1}{\varepsilon^2 \alpha^2}\right).$
\end{theorem}
\begin{proof}
Let $\Pi_{\vec 0}, \Pi_{\vec 1}$, and $\Pi_{\vec e_i}$ respectively denote the distribution over transcripts induced by protocol $\Pi$ when samples are drawn from $P_0$, $P_1$, and $x_i \sim P_1$ but the remaining $x_{i'} \sim P_0$. Let $h^2$ denote the square of the Hellinger distance, $\sqhel{f}{g} = 1 - \int_{\mathcal{X}} \sqrt{f(x)g(x)}dx$. We begin with Lemma~\ref{lem:bgmnw}, originally proven as Lemma 2 in~\citet{BGMNW16}.
\begin{lemma}
\label{lem:bgmnw}
$\sqhel{\Pi_{\vec 0}}{\Pi_{\vec 1}} = O\left(\sum_{i=1}^n \sqhel{\Pi_{\vec 0}}{ \Pi_{\vec e_i}}\right)$.
\end{lemma}
Since our goal is now to bound these squared Hellinger distances, we will use a few facts collected below.
\begin{fact}
\label{fact:hell}
For any distributions $f$, $g$, and $h$,
\begin{enumerate}
\item $\sqhel{f}{g} \leq 2(\sqhel{f}{h} + \sqhel{h}{g})$.
\item $\sqhel{f}{g} \leq d_{TV}(f,g) \leq \sqrt{2}\hel{f}{g}$.
\item $\sqhel{f}{g} \leq \tfrac{1}{2}\kl{f}{g}$.
\end{enumerate}
\end{fact}
Choose an arbitrary term $i$ of the sum in Lemma~\ref{lem:bgmnw}. Suppose we have user $i$ simulate $\Pi$ using draws from $P_0$ for the inputs of other users and their input $x_i$ for input $i$. Since $\Pi$ is $(\varepsilon,\delta)$-locally private, this simulation can be viewed as a single $(\varepsilon,\delta)$-local randomizer applied to $x_i$. We can therefore use Lemma~\ref{lem:app_to_pure} to get a $(10\varepsilon,0)$-local randomizer $\Pi'$ inducing distributions $\Pi_{\vec 0}'$ and $\Pi_{\vec e_i}'$ such that $\tv{\Pi_{\vec 0}'}{\Pi_{\vec 0}} \leq \varepsilon^2\alpha^2$ and $\tv{\Pi_{\vec e_i}'}{\Pi_{\vec e_i}} \leq \varepsilon^2\alpha^2$. Then,
\begin{align*}
\sqhel{\Pi_{\vec 0}}{\Pi_{\vec e_i}} \leq&\; 2(\sqhel{\Pi_{\vec 0}}{\Pi_{\vec e_i}'} + \sqhel{\Pi_{\vec e_i}'}{\Pi_{\vec e_i}}) \\
\leq&\; 4(\sqhel{\Pi_{\vec 0}}{\Pi_{\vec 0}'} + \sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'}) + 2\sqhel{\Pi_{\vec e_i}'}{\Pi_{\vec e_i}} \\
\leq&\; 4(\tv{\Pi_{\vec 0}}{\Pi_{\vec 0}'} + \sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'}) + 2\tv{\Pi_{\vec e_i}'}{\Pi_{\vec e_i}} \\
\leq&\; 6\varepsilon^2\alpha^2 + 4\sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'}
\end{align*}
where the first two inequalities follow from item 1 in Fact~\ref{fact:hell}, the third inequality follows from item 2, and the last inequality follows from our use of Lemma~\ref{lem:app_to_pure} above.
It remains to bound $\sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'}$. By item 3 in Fact~\ref{fact:hell}, $4\sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'} \leq 2\kl{\Pi_{\vec 0}'} {\Pi_{\vec e_i}'}$. Since the transcript distributions $\Pi_{\vec 0}'$ and $\Pi_{\vec e_i}'$ can be simulated by noninteractive $(10\varepsilon,0)$-local randomizers, we can apply Theorem 1 from~\citet{DJW13}, restated for our setting as Lemma~\ref{lem:djw}.
\begin{lemma}
\label{lem:djw}
Let $Q$ be an $(\varepsilon,0)$-local randomizer and let $P_0$ and $P_1$ be distributions defined on common space $\mathcal{X}$. Let $x_0 \sim P_0$ and $x_1 \sim P_1$. Then $$\kl{Q(x_0)}{Q(x_1)} + \kl{Q(x_1)}{Q(x_0)} \leq \min\{4,e^{2\varepsilon}\}(e^\varepsilon - 1)^2\tv{P_0}{P_1}^2.$$
\end{lemma}
\noindent Thus $$\kl{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'} + \kl{\Pi_{\vec e_i}'}{\Pi_{\vec 0}'} = O(\varepsilon^2 \cdot \tv{P_0}{P_1}^2) = O\left(\varepsilon^2\alpha^2\right).$$ It follows that $\sqhel{\Pi_{\vec 0}'}{\Pi_{\vec e_i}'} = O(\varepsilon^2\alpha^2)$. Moreover, since our original choice of $i$ was arbitrary, tracing back to Lemma~\ref{lem:bgmnw} yields $\sqhel{\Pi_{\vec 0}}{\Pi_{\vec 1}} = O(n\varepsilon^2\alpha^2)$. By Fact~\ref{fact:hell}, $\sqhel{\Pi_{\vec 0}}{ \Pi_{\vec 1}} \geq \tfrac{1}{2}\tv{\Pi_{\vec 0}}{\Pi_{\vec 1}}^2 = \Omega(1)$. Thus $n = \Omega\left(\tfrac{1}{\varepsilon^2\alpha^2}\right)$.
\end{proof}
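Lemma~\ref{lem:djw} is easy to check numerically when $Q$ is binary randomized response and $P_0, P_1$ are Bernoulli distributions, for which $\tv{P_0}{P_1} = |q - p|$. The snippet below (with our own toy parameters) compares the two sides of the bound:
\begin{verbatim}
# Numerically checking Lemma lem:djw for binary randomized response.
import math

def kl(a, b):  # KL divergence between Bernoulli(a) and Bernoulli(b)
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

eps, p, q = 0.5, 0.2, 0.7                   # toy parameters
keep = math.exp(eps) / (1 + math.exp(eps))  # RR truth probability
r0 = p * keep + (1 - p) * (1 - keep)        # output law for x0 ~ Bern(p)
r1 = q * keep + (1 - q) * (1 - keep)        # output law for x1 ~ Bern(q)
lhs = kl(r0, r1) + kl(r1, r0)
rhs = min(4, math.exp(2 * eps)) * (math.exp(eps) - 1) ** 2 * (q - p) ** 2
print(lhs, "<=", rhs)
\end{verbatim}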
\subsection{Compound Hypothesis Testing}
We now extend the reasoning of Section~\ref{sec:hyp} to \emph{compound} hypothesis testing. Here $P_0$ and $P_1$ are replaced by (disjoint) collections of discrete hypotheses $H_0$ and $H_1$ such that $$\inf_{(P,Q) \in H_0 \times H_1} \tv{P}{Q} \geq \alpha.$$ The goal is to determine whether samples are generated by a distribution in $H_0$ or one in $H_1$.
\begin{theorem}
\label{thm:compound}
Let $H_0$ and $H_1$ be convex and compact sets of distributions over ground set $X$ such that $\inf_{(P,Q) \in H_0 \times H_1} \tv{P}{Q} \geq \alpha$. Then there exists a noninteractive $(\varepsilon,0)$-locally private protocol $\mathcal{A}$ that with probability at least $2/3$ distinguishes between $H_0$ and $H_1$ given $n = \Omega\left(\tfrac{1}{\varepsilon^2\alpha^2}\right)$ samples.
\end{theorem}
\begin{proof}
Let $X$ be the ground set for distributions in $H_0$ and $H_1$, and consider the two-player zero-sum game $$\sup_{S \in \Delta(2^X)} \inf_{(P,Q) \in H_0 \times H_1} \E{E \sim S}{P(E) - Q(E)}.$$ Here, the sup player chooses a distribution over events, and the inf player chooses distributions in $H_0$ and $H_1$. We will use (a simplified version of) Sion's minimax theorem~\cite{S58}.
\begin{lemma}[Sion's minimax theorem]
\label{lem:sion}
For $f \colon A \times B \to \mathbb{R}$, if
\begin{enumerate}
\item for all $a \in A$ $f(a, \cdot)$ is continuous and concave on $B$,
\item for all $b \in B$ $f(\cdot,b)$ is continuous and convex on $A$, and
\item $A$ and $B$ are convex and $A$ is compact,
\end{enumerate}
then $$\sup_{b \in B} \inf_{a \in A} f(a,b) = \inf_{a \in A} \sup_{b \in B} f(a,b).$$
\end{lemma}
We first verify that the three conditions of Lemma~\ref{lem:sion} hold. Let $$f(S,(P,Q)) = \E{E \sim S}{P(E) - Q(E)}.$$ Linearity of expectation implies that $f(\cdot, (P,Q))$ is linear in $\Delta(2^X)$ and $f(S, \cdot)$ is linear in $H_0 \times H_1$. Therefore conditions 1 and 2 hold. Moreover, since $\Delta(2^X)$ is convex and we assumed $H_0$ and $H_1$ to be convex and compact --- properties which are both closed under Cartesian product --- condition 3 holds as well. As a result, $$\sup_{S \in \Delta(2^X)} \inf_{(P,Q) \in H_0 \times H_1} \E{E \sim S}{P(E) - Q(E)} = \inf_{(P,Q) \in H_0 \times H_1} \sup_{S \in \Delta(2^X)} \E{E \sim S}{P(E) - Q(E)} \geq \alpha$$ and there exists fixed distribution $S$ over events such that for all $(P,Q) \in H_0 \times H_1$, $$\E{E \sim S}{P(E) - Q(E)} \geq \alpha.$$
This leads to the following hypothesis testing protocol $\mathcal{A}$: for each $i \in [n]$, user $i$ computes $y_i = \E{E \sim S}{\textbf{1}_{x_i \in E}}$ and publishes $y_i + \Lap{\tfrac{1}{\varepsilon}}$. This protocol is immediately noninteractive, and since $y_i \in [0,1]$, this protocol is $(\varepsilon,0)$-locally private over $\{x_i\}_{i=1}^n$. Finally, by the same analysis used to prove Theorem~\ref{thm:hp_ub} (replacing concentration of randomized responses with concentration of $\Lap{1}$ noise~\cite{CSS11}) it distinguishes between $H_0$ and $H_1$ with probability at least $2/3$ using $n = \Omega\left(\tfrac{1}{\varepsilon^2\alpha^2}\right)$ samples.
\end{proof}
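A minimal sketch of the resulting noninteractive protocol follows (ours; we assume the distribution $S$ has been precomputed and is represented as a list of (event, weight) pairs with each event a set of outcomes, and that a suitable decision threshold separating the two means is supplied):
\begin{verbatim}
# A sketch of the noninteractive compound test built from S.
import numpy as np

def compound_test(samples, S, eps, threshold):
    reports = []
    for x in samples:
        y = sum(w for (event, w) in S if x in event)  # E_{E~S}[1{x in E}]
        reports.append(y + np.random.laplace(scale=1.0 / eps))
    # By the minimax argument, the mean of y under any P in H0 exceeds
    # the mean under any Q in H1 by at least alpha.
    return 0 if np.mean(reports) >= threshold else 1
\end{verbatim}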
Since Theorem~\ref{thm:hypothesis_lb} still applies, this establishes that the above non-interactive protocol is also optimal.
\section{Introduction}
In the last several years, differential privacy in the \emph{local} model has seen wide adoption in industry, including at Google \cite{EPK14, BEMMR+17}, Apple \cite{A17}, and Microsoft \cite{DKY17}. The choice of adopting the \emph{local} model of differential privacy --- in which privacy protections are added at each individual's device, before data aggregation --- instead of the more powerful \emph{central} model of differential privacy --- in which a trusted intermediary is allowed to first aggregate data before adding privacy protections --- is driven by practical concerns. Local differential privacy frees the data analyst from many of the responsibilities that come with the stewardship of private data, including liability for security breaches, and the legal responsibility to respond to subpoenas for private data, amongst others. However, the local model of differential privacy comes with its own practical difficulties. The most well known of these is the need to have access to a larger number of users than would be necessary in the central model. Another serious obstacle --- the one we study in this paper --- is the need for \emph{interactivity}.
There are two reasons why interactive protocols --- which query users adaptively, as a function of the answers to previous queries --- pose practical difficulties. The first is that communication with user devices is slow: the communication in noninteractive protocols can be fully parallelized, but for interactive protocols, the number of rounds of interactivity becomes a running-time bottleneck. The second is that user devices can go offline or otherwise become unreachable --- and so it may not be possible to return to a previously queried user and pose a new query. The first difficulty motivates the study of noninteractive protocols. The second difficulty motivates the study of \emph{sequentially interactive} protocols \cite{DJW13} --- which may pose adaptively chosen queries --- but must not pose more than one query to any user (and so in particular never need to return to a previously queried user).
It has been known since \cite{KLNRS11} that there can be an exponential gap in the sample complexity between noninteractive and interactive protocols in the local model of differential privacy, and that this gap can manifest itself even in natural problems like convex optimization \cite{STU17, U18}. However, it was not known whether the full power of the local model could be realized with only \emph{sequentially interactive} protocols. Almost all known lower bound techniques applied only to either noninteractive or sequentially interactive protocols, but there were no known fully interactive protocols that could circumvent lower bounds for sequential interactivity.
\subsection{Our Results}
We present two kinds of results, relating to the power of sequentially adaptive protocols and non-adaptive protocols respectively. Throughout, we consider protocols operating on datasets that are drawn i.i.d. from some unknown distribution $\mathcal{D}$, and focus on the \emph{sample complexity} of these protocols: how many users (each corresponding to a sample from $\mathcal{D}$) are needed in order to solve some problem, defined in terms of $\mathcal{D}$.
\paragraph{Sequential Interactivity}{We classify locally private protocols in terms of their \emph{compositionality}. Informally, a protocol is $k$-compositional if the privacy costs $\{\epsilon^i_j\}_{j=1}^r$ of the local randomizers executed by any user $i$ over the course of the protocol sum to at most $k\epsilon$, where $\epsilon$ is the overall privacy cost of the protocol: $\sum_j \epsilon^i_j \leq k\epsilon$. When $k = 1$, we say that the protocol is compositional. Compositional protocols capture most of the algorithms studied in the published literature, and in particular, any protocol whose privacy guarantee is proven using the composition theorem for $\epsilon$-differential privacy\footnote{Not every protocol is $1$-compositional: exceptions include RAPPOR~\cite{EPK14} and the evolving data protocol of Joseph et al. \cite{JRUW18}.}.
\begin{enumerate}
\item \textbf{Upper Bounds}: For any (potentially fully interactive) compositional protocol $M$, we give a generic and efficient reduction that compiles it into a sequentially interactive protocol $M'$, with only a constant factor blow-up in privacy guarantees and sample complexity, while preserving (exactly) the distribution on transcripts generated. This in particular implies that up to constant factors, sequentially adaptive compositional protocols are as powerful as fully adaptive compositional protocols. More generally, our reduction compiles an arbitrary $k$-compositional protocol $M$ into a sequentially interactive protocol $M'$ with the same transcript distribution, and a blowup in sample complexity of $O(k)$.
\item \textbf{Lower Bounds}: We show that our upper bound is tight by proving a separation between the power of sequentially and fully interactive protocols in the local model. In particular, we define a family of problems (Multi-Party Pointer Jumping) such that for any $k$, there is a fully interactive $k$-compositional protocol which can solve the problem given sample complexity $n = n(k)$, but such that no sequentially interactive protocol with the same privacy guarantees can solve the problem with sample complexity $\tilde o(k\cdot n)$. Thus, the sample complexity blowup of our reduction cannot be improved in general.
\end{enumerate}}
\paragraph{Noninteractivity}{
We then turn our attention to the power of noninteractive protocols. We consider a large class of compound hypothesis testing problems --- those such that both the null hypothesis $H_0$ and the alternative hypothesis $H_1$ are closed under mixtures. For every problem in this class, we show that the optimal locally private hypothesis test is noninteractive. We do this by demonstrating the existence of a simple hypothesis test for such problems. We then prove that this test's sample complexity is optimal even among the set of all fully interactive tests by extending information theoretical lower bound techniques developed by~\citet{BGMNW16} and first applied to local privacy by~\citet{JKMW18} and~\citet{DR19} to the fully interactive setting.
}
\subsection{Related Work}
The local model of differential privacy was introduced by ~\citet{DMNS06} and further formalized by \citet{KLNRS11}, who also gave the first separation between noninteractive locally private protocols and interactive locally private protocols. They did so by constructing a problem, Masked Parity, that requires exponentially larger sample complexity without interaction than with interaction.~\citet{DF18} later expanded this result to a different, larger class of problems.~\citet{STU17} proved a similar separation between noninteractive and interactive locally private convex optimization protocols that use neighborhood-based oracles.
Recent work by Acharya et al.~\cite{ACFT19, ACT19} gives a qualitatively different separation between the private-coin and public-coin models of noninteractive local privacy. Informally, the public-coin model allows for an additional ``half step'' of interaction over the private-coin model in the form of coordinated local randomizer choices across users. In this paper, we use the public-coin model of noninteractivity.
Duchi et al.~\cite{DJW13} introduced the notion of sequential interactivity for local privacy. They also provided the first general techniques for proving lower bounds for sequentially interactive locally private protocols by bounding the KL-divergence between the output distributions of $\varepsilon$-locally private protocols with different input distributions as a function of $\varepsilon$ and the total variation distance between these input distributions. Bassily and Smith~\cite{BS15} and Bun et al.~\cite{BBNS19} later generalized this result to $(\varepsilon,\delta)$-locally private protocols, and Duchi et al.~\cite{DJW18} obtained an analogue of Assouad's method for proving lower bounds for sequentially interactive locally private protocols.
More recently, Duchi and Rogers~\cite{DR19} showed how to combine the above analogue of Assouad's method with techniques from information complexity~\cite{GMN14, BGMNW16} to prove lower bounds for estimation problems that apply to a restricted class of \emph{fully} interactive locally private protocols. A corollary of their lower bounds is that several known \emph{noninteractive} algorithms are optimal minimax estimators within the class they consider. However, their results do not imply any separation between sequential and full interaction. Moreover, our reduction implies that every (arbitrarily interactive) compositional locally private algorithm can be reduced to a sequentially interactive protocol with only constant blowup in sample complexity, and as a result all known lower bounds for sequentially interactive protocols also hold for arbitrary compositional protocols.
\citet{CKMSU18} study simple hypothesis testing under the centralized model of differential privacy, and Theorem 1 of~\citet{DJW13} implies a tight lower bound for sequentially interactive locally private simple hypothesis testing. We extend this lower bound to the fully interactive setting and match it with a noninteractive upper bound for a more general class of compound testing problems that includes simple hypothesis testing as a special case.
Finally, recent subsequent work~\cite{JMR20} gives a stronger exponential sample complexity separation between the sequentially and fully interactive models. It does so through a general connection between communication complexity and sequentially interactive sample complexity. Applying this connection to a communication problem similar to the ``multi-party pointer jumping'' described in Section~\ref{sec:lb} completes the result.
\section{Separating Full and Sequential Interactivity}
\label{sec:lb}
We now prove that our reduction in Section \ref{sec:simulation} is tight in the sense that any generic reduction from a fully interactive protocol to a sequentially interactive protocol must have a sample complexity blowup of $\tilde \Omega(k)$ when applied to a $k$-compositional protocol. Specifically, we define a family of problems such that for every $k$, there is a fully interactive $k$-compositional protocol that can solve the problem with sample complexity $n = n(k)$, but such that \emph{any} sequentially interactive protocol solving the problem must have sample complexity $\tilde \Omega(k \cdot n)$.
Informally, the family of problems (Multi-Party Pointer Jumping, or $\mpj(d)$) we introduce is defined as follows. An \emph{instance} of $\mpj(d)$ is given by a complete tree of depth $d$. Every vertex of the tree is labelled by one of its children. By following the labels down the tree, starting at the root, an instance defines a unique root-to-leaf path. Given an instance of $\mpj(d)$, the data distribution is defined as follows: to sample a new user, first select a level $\ell \in [d]$ of the tree uniformly at random, and provide that user with the vertex-labels corresponding to level $\ell$ (note that, fixing an instance of the problem, every user corresponding to the same level of the tree has the same data). The problem we wish to solve privately is to identify the unique root-to-leaf path specified by the instance.
We first show that there is a fully interactive protocol which can solve this problem with sample complexity $n = \tilde O(d^2/\epsilon^2)$. The protocol is $k$-compositional for $k = \Theta(d)$. Roughly speaking, the protocol works as follows: it identifies the path one vertex at a time, starting from the root, and proceeding to the leaf, in $d$ rounds. In each round, given the most recently identified vertex $v_i$ in level $\ell$, it attempts to identify the child that vertex $v_i$ is labelled with. It queries every user with the same local randomizer, which asks them to use randomized response to identify the labelled child of $v_i$ if their data corresponds to level $\ell$, and to respond with a uniformly random child otherwise (recall that the level that a user's data corresponds to is itself private, and hence is not known to the protocol). Since roughly $n/(2d) = \tilde \Theta(d/\epsilon^2)$ of the $n$ users hold data for the relevant level, it is possible to identify the child in question subject to local differential privacy. Although every user applies an $\epsilon$-local randomizer $d$ times in sequence, because each user's data corresponds to only a single level in the tree, the protocol is still $(\epsilon,0)$-locally private. Note that this privacy analysis mirrors the ``histogram'' structure of the non-compositional protocol in Example \ref{ex:comp}.
Informally, the reason that any sequentially interactive protocol must have sample complexity that is larger by a factor of $d$ is that even to identify the child of a single vertex in the local model, $\Omega(d^2/\epsilon^2)$ datapoints are required (this is exactly what our randomized response protocol achieves). But a sequentially interactive protocol cannot re-use these datapoints across levels of the tree, and so must expend $\Omega(d^2/\epsilon^2)$ samples for \emph{each} of the $d$ levels of the tree. This intuition is formalized in a delicate and technical induction on the depth of the tree, using information-theoretic tools to bound the success probability of any protocol as a function of its sample complexity. The precise definition of $\mpj(d)$ is somewhat more involved: half of the weight of the underlying distribution is assigned to ``level 0'' dummy agents, whose purpose is to break correlations between levels of the tree in the argument.
\def \mathcal{D}{\mathcal{D}}
\def \mathcal{A}{\mathcal{A}}
\def \textbf{1}{\textbf{1}}
\subsection{The Multi-Party Pointer Jumping Problem}
We now formally define the \emph{Multi-party Pointer Jumping} ($\mpj$) problem.
\begin{definition}
Given integer parameter $d > 1$, an instance of Multi-party Pointer Jumping $\mpj(d)$ is defined by a vector $Z = Z_1 \circ \cdots \circ Z_d$, a concatenation of $d$ vectors of increasing length. Letting $s = d^4$, for each $i \in [d]$, $Z_i$ is a vector of $s^{i-1}$ integers in $\{0,1,\ldots,s-1\}$. For each $Z_i$, $Z_{i,j}$ is its $j^{th}$ coordinate.
Viewed as a tree, $Z$ is a complete $s$-ary tree of depth $d$ where each $Z_{i,j}$ marks a child of the $j$-th vertex at depth $i$. $P = P(Z)$ then denotes the vector of $d$ integers representing the unique root-to-leaf \emph{path} down this tree through the children marked by $Z$. Formally, $P$ is defined recursively: $P_1 = Z_{1,1}$, and for each $i = 2,\ldots,d$, $$P_i = Z_{i,\; P_1 \cdot s^{i-2} + P_2 \cdot s^{i-3} + \cdots + P_{i-1}+1}.$$
Finally, an instance $\mpj(d)$ defines a data distribution $\mathcal{D}$. For each $x \sim \mathcal{D}$, with probability $1/2$, $x = (0,\emptyset)$ is a ``dummy datapoint'', and with the remaining probability $x = (\ell, Z_{\ell})$ where $\ell$ is a level drawn uniformly at random from $[d]$. A protocol solves $\mpj(d)$ if it recovers $P$ using samples from $\mathcal{D}$.
\end{definition}
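The recursive indexing is easy to misread, so the following short Python sketch (purely illustrative; the parameters $d$ and $s$ are kept tiny, and all names are ours rather than the paper's) builds an instance $Z$, extracts the path $P(Z)$, and samples a user's datum from $\mathcal{D}$.
\begin{verbatim}
import random

def make_instance(d, s):
    # Z[i] holds the labels for level i+1; level i+1 has s**i vertices,
    # each labelled with one of its s children (an integer in {0,...,s-1}).
    return [[random.randrange(s) for _ in range(s ** i)] for i in range(d)]

def path(Z, s):
    # Follow the labels from the root down. idx is the 0-indexed position
    # of the current vertex within its level.
    P, idx = [], 0
    for level in Z:
        p = level[idx]        # label of the current vertex
        P.append(p)
        idx = idx * s + p     # position of the chosen child one level down
    return P

def sample_user(Z, d):
    # The data distribution D: a dummy with probability 1/2, otherwise
    # the full label vector of a uniformly random level.
    if random.random() < 0.5:
        return (0, None)
    ell = random.randrange(1, d + 1)
    return (ell, Z[ell - 1])

Z = make_instance(d=3, s=2)
print(path(Z, s=2))           # the hidden root-to-leaf path P(Z)
print(sample_user(Z, d=3))    # one draw from the data distribution
\end{verbatim}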
A graphical representation of $\mpj(d)$ appears in Figure~\ref{fig:mpj}; the figure uses $s=2$ for easier graphical representation.
For notational convenience, given a partial path $p_1,\ldots,p_l$ of depth $l$ starting at the root, we write $Z_{|p_1,...,p_l}$ for the concatenation of those coordinates $Z_{i,j}$ with $l+1 \leq i \leq d$ that lie in the subtree rooted at the vertex reached by following $p_1,\ldots,p_l$. This notation is used throughout the lower bound argument below.
\begin{figure}[H]
\begin{tikzpicture}[level/.style={sibling distance=50mm/#1}]
\node [circle,draw,fill=green!20!white,minimum size=1cm] (z){$\emptyset$}
child{node [circle,draw,fill=green!20!white,minimum size=1cm] (a) {0}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (b) {00}
child {node {$\vdots$}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (d) {}}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (e) {}
edge from parent node[xshift=0.2cm,yshift = 0.1cm]{$1$}}
edge from parent node[xshift=-0.2cm,yshift = 0.1cm]{$0$}
}
child {node {$\vdots$}}
}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (g) {01}
child {node {$\vdots$}}
child {node (cc){$\vdots$}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (aa) {}
child [grow=down] {node (aaa) {$P = 011...0$} edge from parent [draw=none]}
edge from parent node[xshift=-0.2cm,yshift = 0.1cm]{$0$}}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (bb) {}}
edge from parent node[xshift=0.2cm,yshift = 0.1cm]{$1$}}
edge from parent node[xshift=0.2cm,yshift = 0.1cm]{$1$}
}
edge from parent node[above]{$0$}
}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (j) {1}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (k) {10}
child {node {$\vdots$}}
child {node {$\vdots$}
edge from parent node[xshift=0.2cm,yshift = 0.1cm]{$1$}}
edge from parent node[above]{$0$}
}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (l) {11}
child {node {$\vdots$}
edge from parent node[xshift=-0.2cm,yshift = 0.1cm]{$0$}}
child {node (c){$\vdots$}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (o) {}
edge from parent node[xshift = -0.2cm,yshift = 0.1cm]{$0$}}
child {node [circle,draw,fill=green!20!white,minimum size=1cm] (p) {}
child [grow=right] {node (q) {} edge from parent[draw=none]
child [grow=right] {node (q) {Depth $d+1$} edge from parent[draw=none]
child [grow=up] {node (r) {$\vdots$} edge from parent[draw=none]
child [grow=up] {node (s) {Depth 3, $Z_3 = 0110$} edge from parent[draw=none]
child [grow=up] {node (t) {Depth 2, $Z_2 = 10$} edge from parent[draw=none]
child [grow=up] {node (u) {Depth 1, $Z_1 =0$} edge from parent [draw=none]}
}
}
}
}
}
}
}
}
};
\path[draw,color=red,line width=1mm] (z)--(a);
\path[draw,color=blue,line width=1mm] (a)--(g);
\path[draw,color=blue,line width=1mm] (g)--(cc);
\path[draw,color=red,line width=1mm] (cc)--(aa);
\end{tikzpicture}
\caption{Multi-party Pointer Jumping}
\label{fig:mpj}
\end{figure}
\begin{algorithm}
\caption{A fully interactive $(\varepsilon,0)$-locally private protocol for $\mpj(d)$}
\label{alg:mpj}
\begin{algorithmic}[1]
\State Divide users into $u = \lceil \log(s)/\log(2) \rceil$ groups each of $m = 512d^2\log(d) \cdot\frac{(e^{\varepsilon}+1)^2}{(e^{\varepsilon}-1)^2}$ users.
\State Initialize $Q \gets 0$
\For{$r=1, 2, \ldots, d$}
\State $Q_r \gets 0$
\For{each group $g = 1, 2, \ldots, u$}
\For{each user $i = 1, 2, \ldots, m$}
\State $\ell_i \gets$ level of user $x_i$
\If{$\ell_i = r$}
\State $b_{i,r} \gets g$-th bit of binary representation of $Z_{r,Q+1}$
\State User $i$ publishes randomized response $y_i \sim \RR{b_{i,r},\varepsilon}$
\Else
\State User $i$ publishes $y_i \sim \Ber{0.5}$
\EndIf
\EndFor
\State $g$-th bit of $Q_r \gets$ majority bit of $\{y_i\}_{i=1}^m$
\EndFor
\State $Q \gets s \cdot Q + Q_r$
\EndFor
\State Output $Q_1 \circ \cdots \circ Q_d$
\end{algorithmic}
\end{algorithm}
\subsection{An Upper Bound for Fully Interactive Mechanisms}
\begin{theorem}
There exists a fully interactive $(\varepsilon,0)$-locally private protocol (Algorithm \ref{alg:mpj}) with sample complexity $n = O(d^2 \log^2(d)(e^{\varepsilon}+1)^2/(e^\varepsilon-1)^2)$ that, on any instance $Z$ of $\mpj(d)$, correctly identifies $P(Z)$ with probability at least $1 - 1/d$.
\end{theorem}
\begin{proof}
First, it is easy to check that the total sample complexity of Algorithm \ref{alg:mpj} is $n = u \cdot m = O\left(d^2\log^2(d) \cdot\frac{(e^{\varepsilon}+1)^2}{(e^{\varepsilon}-1)^2}\right).$ Privacy follows from the same line of logic used in Example~\ref{ex:comp}: each agent sends $d$ bits in total, and at most one of these bits is not sampled uniformly at random. Therefore, the probability of an agent sending any binary string of $d$ bits is bounded between $\frac{1}{2^{d-1}} \cdot \frac{1}{1 + e^{\varepsilon}}$ and $\frac{1}{2^{d-1}} \cdot \frac{e^{\varepsilon}}{1 + e^{\varepsilon}}$, for any datapoint that they might hold. Algorithm~\ref{alg:mpj} is therefore $(\varepsilon,0)$-locally private.
It remains to prove correctness. We first show that each group contains enough users from each level. For each group $g \in [u]$, define $X_{i,g,r}$ to be 1 if the $i$-th user in group $g$ has level $r$ and 0 otherwise. By definition, for any $r \in [d]$, $\P{}{X_{i,g,r}=1} = 1/(2d)$. Therefore we have $\E{}{\sum_{i=1}^m X_{i,g,r}} = m / (2d)$, and by a Chernoff bound
\[
\P{}{\sum_{i=1}^m X_{i,g,r} < m/(4d)} \leq \exp\left(-\frac{m}{16d}\right) \leq 1/(d^4).
\]
Define $W$ to be the event that for every $r \in [d]$ and $g \in [u]$, there are at least $m/(4d)$ users in group $g$ with level $r$. By a union bound, we know $\P{}{W} \geq 1- (ud)/d^4\geq 1-1/d^2$, so with high probability we have enough users in each level in each group.
We now analyze the quantities $Q_r$. For each $r \in [d]$, we want to show $$\P{}{Q_r = P_r |Q_1 = P_1,...,Q_{r-1} = P_{r-1},W} \geq 1-1/d^3,$$ i.e. that the output $Q$ actually matches $P$. Conditioning on $Q_1 = P_1, \ldots, Q_{r-1} = P_{r-1}$ and $W$, $Z_{r,Q+1} = P_r$. Define $Y_{i,g,r}$ to be 1 if the bit sent by the $i$-th user of group $g$ is equal to the $g$-th bit of $P_r$ and 0 otherwise. If the $i$-th user has level $r$ then they send their bit using randomized response, so $\P{}{Y_{i,g,r} =1} = \frac{e^{\varepsilon}}{e^{\varepsilon}+1}$. If the $i$-th user's level is not $r$ then they send a uniformly random bit, so $\P{}{Y_{i,g,r}=1}=1/2$. Since we conditioned on $W$, there are at least $m/(4d)$ users in group $g$ with level $r$. Thus
\begin{align}
\E{}{\sum_{i=1}^m Y_{i,g,r}} \geq&\; \frac{m}{4d} \cdot \frac{e^\varepsilon}{e^\varepsilon+1} + \left(m - \frac{m}{4d}\right) \cdot \frac{1}{2} \label{eq:y_sum}.
\end{align}
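In particular, the right-hand side of Equation~\ref{eq:y_sum} exceeds the majority threshold $m/2$ by exactly
\begin{align*}
\frac{m}{4d} \cdot \frac{e^\varepsilon}{e^\varepsilon+1} + \left(m - \frac{m}{4d}\right) \cdot \frac{1}{2} - \frac{m}{2} = \frac{m}{4d}\left(\frac{e^\varepsilon}{e^\varepsilon+1} - \frac{1}{2}\right) = \frac{m}{8d}\cdot \frac{e^{\varepsilon}-1}{e^{\varepsilon}+1},
\end{align*}
and it is exactly this margin that the concentration step below needs to overcome.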
Then we have
\begin{align*}
&\P{}{Q_r, P_r \text{ have the same }g\text{-th bit} |Q_1 = P_1,...,Q_{r-1} = P_{r-1},W}\\
=& \P{}{ \sum_{i=1}^m Y_{i,g,r} > \frac{m}{2}} \\
\geq & \P{}{\sum_{i=1}^m Y_{i,g,r} > \E{}{\sum_{i=1}^m Y_{i,g,r}} + \frac{m}{2} - \frac{m}{4d} \cdot \frac{e^\varepsilon}{e^\varepsilon+1} - \left(m - \frac{m}{4d}\right) \cdot \frac{1}{2}} ~~~~~~\text{(Equation~\ref{eq:y_sum})}\\
\geq& \P{}{\sum_{i=1}^m Y_{i,g,r} > \E{}{\sum_{i=1}^m Y_{i,g,r}} - \frac{m}{8d}\cdot \frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}}\\
\geq & 1 - \exp\left( - \frac{1}{2m} \cdot \left(\frac{m}{8d}\cdot \frac{e^{\varepsilon}-1}{e^{\varepsilon}+1} \right)^2\right) ~~~~~~\text{(Chernoff bound)}\\
= & 1- \exp\left(- m \cdot \frac{1}{128d^2} \cdot \frac{(e^{\varepsilon}-1)^2}{(e^{\varepsilon}+1)^2} \right) \\
\geq & 1- \exp(-4\log(d)) = 1 -1/d^4.
\end{align*}
Union bounding over all $u$ groups yields $$\P{}{Q_r = P_r |Q_1 = P_1,...,Q_{r-1} = P_{r-1},W} \geq 1- u/d^4 \geq 1- 1/d^3.$$
Putting this all together, Algorithm \ref{alg:mpj} outputs $P(Z)$ with probability at least
\begin{align*}
\P{}{Q_1 = P_1,...,Q_d = P_d} \geq&\; \P{}{W} \cdot \P{}{Q_1 = P_1,...,Q_d = P_d|W} \\
\geq&\; \P{}{W} \prod_{r=1}^d \P{}{Q_r = P_r |Q_1 = P_1,...,Q_{r-1} = P_{r-1},W} \\
\geq&\; (1-1/d^2)(1-1/d^3)^d \\
\geq&\; 1-1/d.
\end{align*}
\end{proof}
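To make the constants concrete, the following minimal Python simulation (an illustration only; the group size is lifted from Algorithm~\ref{alg:mpj}, and the parameter choices are ours) decodes a single bit by majority vote over one group, and can be run to observe the empirical error probability.
\begin{verbatim}
import math, random

def rr_bit(b, eps):
    # Binary randomized response with parameter eps.
    keep = math.e ** eps / (math.e ** eps + 1)
    return b if random.random() < keep else 1 - b

def decode_one_bit(d, eps, true_bit):
    # One group in one round of Algorithm 1: each of m users holds the
    # relevant level with probability 1/(2d); everyone else contributes
    # an unbiased coin flip.
    ratio = (math.e ** eps + 1) / (math.e ** eps - 1)
    m = int(512 * d * d * math.log(d) * ratio ** 2)
    votes = sum(
        rr_bit(true_bit, eps) if random.random() < 1 / (2 * d)
        else random.randrange(2)
        for _ in range(m)
    )
    return int(votes > m / 2)

# The majority recovers the bit in (nearly) every trial.
print(sum(decode_one_bit(d=4, eps=1.0, true_bit=1) for _ in range(20)))
\end{verbatim}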
Note that Algorithm \ref{alg:mpj} is $k$-compositional only for $k \geq \Omega(d)$. The lower bound that we prove next (Theorem \ref{thm:lower}) shows that any sequentially interactive protocol for the same problem must have a larger sample complexity by a factor of $\tilde \Omega(d) = \tilde \Omega(k)$, showing that in general, the sample-complexity dependence that our reduction (Theorem \ref{thm:main}) has on $k$ cannot be improved.
\subsection{A Lower Bound for Sequentially Interactive Mechanisms}
We prove our lower bound for sequentially interactive $(\varepsilon,0)$-locally private protocols. As previous work~\cite{BNS18,CSUZZ18} has established that $(\varepsilon,0)$- and $(\varepsilon,\delta)$-local privacy are approximately equivalent for reasonable parameter ranges, our lower bound also holds for sequentially interactive $(\varepsilon,\delta)$-locally private protocols. For an extended discussion of this equivalence, see Section~\ref{subsubsec:lb}.
\begin{theorem}
\label{thm:lower}
Let $\mathcal{A}$ be a sequentially interactive $(\varepsilon,0)$-locally private protocol that, for every instance $Z$ of $\mpj(d)$, correctly identifies $P(Z)$ with probability $\geq 2/3$. Then $\mathcal{A}$ must have sample complexity $n \geq d^3/(216(e^\varepsilon-1)^2\log(d))$.
\end{theorem}
\begin{proof}
We will prove that any sequentially interactive $(\varepsilon,0)$-locally private protocol with $n=d^3/(216(e^\varepsilon-1)^2\log(d))$ samples fails to solve $\mpj(d)$ correctly with probability $> 1/3$ when $Z$ is sampled uniformly at random. This is a distributional lower bound which is only stronger than the theorem statement (a worst-case lower bound). For notational simplicity, we assume in this argument that all local randomizers have discrete message spaces. However, this assumption is without loss of generality and can be removed (e.g. using the rejection sampling technique from~\citet{BS15}).
We will prove our lower bound even for protocols to which we ``reveal'' some information about the hidden instance $Z$ and the users' inputs. This only makes our lower bound stronger, as the mechanism can ignore this information if desired. Before the protocol starts, each user $i$ publishes a quantity $R_i$. $R_i = \ell_i$, user $i$'s level, if $\ell_i \neq 0$ (i.e., user $i$ is not a ``dummy'' user). Otherwise $R_i$ is set to be $\lfloor \frac{i -1 }{n/d}\rfloor+1$. At a high level, we reveal these $\{R_i\}_{i=1}^n$ to break the dependence between $Z_i$'s during the execution of the protocol (see Claim~\ref{clm:lb_prod} for a formalization of this intuition). Throughout the proof and its claims, we fix realizations $R_1 = r_1, R_2 = r_2, \ldots, R_n = r_n$. We will show that even given such $r_1,...,r_n$, any sequentially interactive $(\varepsilon,0)$-locally private protocol with $n$ users fails with probability more than $1/3$.
For each $i \in [n]$, denote by $\Pi_i$ the message --- i.e., portion of the transcript --- sent by user $i$ via their local randomizer. Note that there is at most one such message since the protocol is sequentially interactive. We begin with a result about how conditioning on messages and revealed values affects the distribution of $Z$.
\begin{claim}
\label{clm:lb_prod}
Suppose $Z_1,...,Z_d$ are sampled from a product distribution. Conditioned on the messages $\Pi_1,...,\Pi_i$ of the first $i$ users and the revealed values $R_1,\ldots,R_n$, $Z_1,...,Z_d$ are still distributed according to a product distribution.
\end{claim}
\begin{proof}
We proceed via induction on the number of messages $i$. The base case $i=0$ is immediate from the assumption. Now suppose the statement of the claim is true for $i-1$. Use $\mathcal{D}_1\times \mathcal{D}_2 \times \cdots \times \mathcal{D}_d$ to denote the product distribution of $Z_1,...,Z_d$ conditioned on $\Pi_1,...,\Pi_{i-1}$ and $R_1,\ldots,R_n$ (all quantities that follow are conditioned on $R_1,\ldots,R_n$, and so for notational simplicity we elide the explicit conditioning).
Since the protocol is sequentially interactive, conditioned on $\Pi_1,...,\Pi_{i-1}$, $\Pi_i$ depends only on $Z_{r_i}$, user $i$'s internal randomness, and their level $\ell_i$ (recall that when $r_i = \lfloor \frac{i -1 }{n/d}\rfloor+1$, it may be that $\ell_i = 0$ or $\ell_i = r_i$). Therefore, conditioned on $\Pi_1,...,\Pi_i$, $Z_1,...,Z_d$ are distributed according to $$\mathcal{D}_1\times \mathcal{D}_2 \times \cdots \times (\mathcal{D}_{r_i} | \Pi_i) \times \cdots \times \mathcal{D}_d,$$ a product distribution.
\end{proof}
We also use induction to prove the overall theorem. It has $d$ steps. For step $\ell \in [d]$, we let $\Delta_\ell$ be the following set of distributions on $Z$.
\begin{definition}[$\Delta_\ell$]
For each $\ell \in [d]$, the set $\Delta_\ell$ is composed of distributions $\mathcal{D}$ on $Z$ such that
\begin{enumerate}
\item $\mathcal{D}$ is a product distribution on $Z_1,...,Z_d$,
\item for each $i = 1,...,d-\ell$, $Z_i$ is deterministically fixed to be $z_i$, and
\item since $Z_1,...,Z_{d-\ell}$ are fixed, by the definition of $\mpj$, $P_1,...,P_{d-\ell}$ are also fixed to some $p_1,...,p_{d-\ell}$. The marginal distribution on $Z_{|p_1,...,p_{d-\ell}}$ is the uniform distribution.
\end{enumerate}
\end{definition}
In the induction step, we consider locally private sequentially interactive protocols with fewer users. The idea is that for any sequentially interactive $(\varepsilon,0)$-locally private protocol of $n$ users, if we fix the messages of the first $i$ users, then what remains is a sequentially interactive $(\varepsilon,0)$-locally private protocol on $n-i$ users. Accordingly, we want to lower bound the failure probability of this remaining protocol. More concretely:
\paragraph{Inductive Statement} Any sequentially interactive $(\varepsilon,0)$-locally private protocol with $n\cdot \tfrac{\ell}{d}$ users (the $\left( n\cdot \tfrac{d-\ell}{d}+1\right)$-th user to the $n$-th user) fails to solve $\mpj(d)$ correctly with probability $> 2/3 - \ell/(3d)$ when $Z$ is sampled from a distribution in $\Delta_\ell$.
\\\\It will be slightly more convenient to establish the inductive case first and then establish the base case second.
\paragraph{Induction step ($\ell>1$):} Assume the above statement is true for $\ell-1$.
In this induction, let $\mathcal{A}$ be a sequentially interactive $(\varepsilon,0)$-locally private protocol with $n\cdot \tfrac{\ell}{d}$ users and let $\mathcal{D}$ be the distribution generating $Z$ before $\mathcal{A}$ starts. Let $\Pi$ be the messages sent by the first $n/d$ users of $\mathcal{A}$ (the $\left( n\cdot \frac{d-\ell}{d}+1\right)$-th user to the $\left( n\cdot \frac{d-\ell+1}{d}\right)$-th user) and let $\mathcal{A}_{\pi}$ be the sequentially interactive $(\varepsilon,0)$-locally private protocol with $n\cdot \frac{\ell-1}{d}$ users conditioned on $\Pi =\pi$. For notational convenience, define $n_\ell =n\cdot \frac{d-\ell}{d}$, $\Pi_{<i} = \Pi_{n_\ell+1},...,\Pi_{i-1}$ and $\Pi_{\leq i} =\Pi_{n_\ell+1},...,\Pi_i$.
For each prefix of messages, $\pi$, let $\mathcal{D}'(\pi)$ be some mixture of distributions in $\Delta_{\ell-1}$ (to be specified later). By the induction hypothesis on $\ell-1$, $$\P{Z\sim \mathcal{D}'(\pi)}{\mathcal{A}_{\pi} \text{ outputs } P(Z)} < 1/3 + (\ell-1)/(3d).$$
Thus
\begin{align*}
\P{Z \sim \mathcal{D}}{\mathcal{A} \text{ outputs } P(Z)} =&\; \sum_{\pi} \P{}{\Pi=\pi} \cdot \P{Z\sim (\mathcal{D}|\Pi=\pi)}{\mathcal{A}_{\pi} \text{ outputs } P(Z)}\\
\leq&\; \sum_{\pi} \P{}{\Pi=\pi} \cdot \left( \P{Z\sim \mathcal{D}'(\pi)}{\mathcal{A}_{\pi} \text{ outputs } P(Z)} + \|(\mathcal{D}|\Pi=\pi) - \mathcal{D}'(\pi)\|_1\right) \\
<&\; 1/3 + (\ell-1)/(3d) + \sum_{\pi} \P{}{\Pi=\pi}\cdot \|(\mathcal{D}|(\Pi=\pi)) - \mathcal{D}'(\pi)\|_1.
\end{align*}
It therefore suffices to show that $$\sum_{\pi} \P{}{\Pi=\pi}\cdot \|(\mathcal{D}|(\Pi=\pi)) - \mathcal{D}'(\pi)\|_1 \leq 1/(3d).$$ We show this via a sequence of three claims (Claims \ref{clm:lb_maxp}, \ref{clm:lb_info}, and~\ref{clm:lb_l1}), where $\mathcal{D}'(\pi)$ is defined in Claim~\ref{clm:lb_l1}.
First we define some notation for the path we need to reason about. Since $\mathcal{D} \in \Delta_\ell$, by the definition of $\Delta_\ell$ we know that for $Z \sim \mathcal{D}$, the first $d-\ell$ levels of the tree $Z_1,...,Z_{d-\ell}$ deterministically take fixed values $z_1,...,z_{d-\ell}$. Thus, the first $d-\ell$ nodes in the path $P_1,...,P_{d-\ell}$ are also fixed to take particular values $p_1,...,p_{d-\ell}$. For the induction step, we write $P=P_1,...,P_{d-\ell+1}$ to denote the first $d-\ell+1$ vertices of the path. Since $P_{d-\ell+1}$ is the only value that is not fixed, and the path is through an $s$-ary tree, $P$ can take on at most $s$ different possible values and is determined by $Z_{d-\ell+1}$.
In the first claim, we show that after observing the messages sent by $n/d$ agents, there remains substantial uncertainty about $P$.
\begin{claim}
\label{clm:lb_maxp}
For $i \in \{ n_\ell+1,...,n_\ell + n/d\}$, $$\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left( \max_p \P{}{P = p|\Pi_{\leq i} = \pi_{\leq i}} \right) \leq 3/d^4.$$
\end{claim}
\begin{proof}
Denoting by $\textbf{1}_E$ the indicator function for event $E$,
\begin{align}
&\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left( \max_p \P{}{P = p|\Pi_{\leq i} = \pi_{\leq i}} \right) \nonumber\\
\leq &\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot\left(\textbf{1}_{\max_p \P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}}>2/s} \cdot 1 + \textbf{1}_{\max_p \P{}{P = p|\Pi_{\leq i} = \pi_{\leq i}}\leq 2/s} \cdot \frac{2}{s} \right) \nonumber \\
\leq &\frac{2}{s} + \sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left(\textbf{1}_{\max_p \P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}}>2/s} \right) \nonumber \\
\leq & \frac{2}{s} + \sum_p\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left(\textbf{1}_{ \P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}}>2/s} \right). \label{eq:triangle}
\end{align}
Now consider some specific $p$. We know that
\begin{align*}
\P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}} =&\; \frac{\P{}{P= p, \Pi_{\leq i}= \pi_{\leq i}}}{\P{}{\Pi_{\leq i}=\pi_{\leq i}}} \\
=&\; \P{}{P=p} \cdot \frac{\P{}{\Pi_{\leq i} = \pi_{\leq i}|P=p}}{\P{}{\Pi_{\leq i}=\pi_{\leq i}}}~~~~~~\text{(Bayes' rule)} \\
=&\; \frac{1}{s} \cdot \frac{\P{}{\Pi_{\leq i} = \pi_{\leq i}|P=p}}{\P{}{\Pi_{\leq i} = \pi_{\leq i}}}~~~~~~\text{(uniformity of $P$)}.
\end{align*}
For $j = n_\ell+1,...,i$, define random variable $$X_j = \log \left(\frac{\P{}{\Pi_j|\Pi_{<j} , P=p}}{\P{}{\Pi_j|\Pi_{<j}}}\right).$$ As we ultimately want to upper bound the quantity in Equation~\ref{eq:triangle}, we now focus on bounding these $X_j$.
Recall that $r_j$ is user $j$'s level if that level is non-zero (i.e. user $j$ is not a ``dummy'' user). Otherwise $r_j$ is $d-\ell+1$ for $j = n_\ell+1,...,n_\ell+ n/d$. If $r_j \neq d-\ell+1$, by Claim \ref{clm:lb_prod}, we know that conditioned on $\Pi_{<j}$, $\Pi_j$ is independent of $P$. Therefore when $r_j \neq d - \ell + 1$, $X_j = \log(1) = 0$.
If instead $r_j = d-\ell+1$, we know the level $\ell_j$ of user $j$ is $0$ with probability $d/(d+1)$ and $d-\ell+1$ with probability $1/(d+1)$. If $\ell_j = 0$, then the user is a ``dummy'' and since the user has no private data about $P$, $\Pi_j$ is independent of $P$ conditioned on $\Pi_{<j}$. Call the input distribution of the $j$-th user $q_j$. Here, we recall Lemmas 3 and 4 from~\citet{DJW13} and restate a simplified version as Lemma~\ref{lem:djws}.
\begin{lemma}
\label{lem:djws}
Let $m_1$ and $m_2$ be the output distributions of an $(\varepsilon,0)$-local randomizer in a sequentially interactive protocol given, respectively, input distributions $q_j \mid \Pi_{<j}, P = p$ and $q_j \mid \Pi_{<j}$. Then $$\left| \log\left(\frac{m_1(z)}{m_2(z)}\right)\right| \leq \min(2,e^\varepsilon)(e^\varepsilon-1) \cdot \tv{(q_j \mid \Pi_{<j}, P = p)}{(q_j \mid \Pi_{<j})}.$$
\end{lemma}
We know that $\tv{(q_j|\Pi_{<j} = \pi_{<j},P=p)}{(q_j|\Pi_{<j} = \pi_{<j})} \leq 1/(d+1)$, as the difference stems from the event that $\ell_j = d - \ell + 1$. Thus, by Lemma~\ref{lem:djws} $$|X_j| \leq 2(e^{\varepsilon}-1)/(d+1) < 2(e^{\varepsilon}-1)/d.$$ Next, we bound the conditional expectation of $X_j$:
\begin{align*}
\E{}{X_j |\Pi_{<j} = \pi_{<j}} =&\; \sum_{\pi_j} \P{}{\Pi_j = \pi_j|\Pi_{<j} = \pi_{<j}}\cdot \log \left(\frac{\P{}{\Pi_j=\pi_j|\Pi_{<j}=\pi_{<j} , P=p}}{\P{}{\Pi_j=\pi_j|\Pi_{<j}=\pi_{<j}}}\right)\\
=&\; -\kl{(\Pi_j|\Pi_{<j} = \pi_{<j},P=p)}{(\Pi_j|\Pi_{<j} = \pi_{<j})} \leq 0.
\end{align*}
Therefore $X_{n_\ell+1}, X_{n_\ell+1}+X_{n_\ell+2},...,X_{n_\ell+1}+ \cdots +X_{i}$ form a supermartingale. Next, we use the above bounds on these $X_j$ to control their sum using the Azuma-Hoeffding inequality:
\begin{align*}
\P{}{ X_{n_\ell+1}+ \cdots + X_i >\log(2)} \leq&\; \exp\left(-\frac{\log^2(2)}{2(2(e^{\varepsilon}-1)/d)^2(i-n_\ell)}\right)\\
\leq&\; \exp\left(-\frac{\log^2(2)}{2(2(e^{\varepsilon}-1)/d)^2(n/d)}\right)\\
\leq&\; \frac{1}{d^8} = \frac{1}{sd^4}.
\end{align*}
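Here the last inequality follows by substituting the sample complexity $n = d^3/(216(e^{\varepsilon}-1)^2\log(d))$ fixed at the start of the proof into the exponent:
\begin{align*}
\exp\left(-\frac{\log^2(2)}{2(2(e^{\varepsilon}-1)/d)^2(n/d)}\right) =&\; \exp\left(-\frac{\log^2(2)\, d^3}{8(e^{\varepsilon}-1)^2\, n}\right) = \exp\left(-27\log^2(2)\log(d)\right)\\
\leq&\; \exp(-8\log(d)) = \frac{1}{d^8},
\end{align*}
since $27\log^2(2) > 8$.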
By the chain rule and Bayes' rule, we know $$X_{n_\ell+1}+ \cdots + X_i = \log\left(\frac{\P{}{\Pi_{\leq i}|P=p}}{\P{}{\Pi_{\leq i}}}\right) = \log\left(s\cdot \P{}{P=p|\Pi_{\leq i}}\right).$$ Therefore
\begin{align*}
\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left(\textbf{1}_{ \P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}}>2/s} \right) =&\; \sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left(\textbf{1}_{s\cdot \P{}{P=p|\Pi_{\leq i}= \pi_{\leq i}}>2} \right) \\
=&\; \P{}{ X_{n_\ell+1} + \cdots + X_i >\log(2)} \\
\leq&\; \frac{1}{sd^4}.
\end{align*}
Tracing the above inequality back through Equation~\ref{eq:triangle}, we have
\begin{align*}
\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left( \max_p \P{}{P = p|\Pi_{\leq i} = \pi_{\leq i}} \right) \leq&\; \frac{2}{s} + \sum_p\sum_{\pi_{\leq i}} \P{}{\Pi_{\leq i}=\pi_{\leq i}} \cdot \left(\textbf{1}_{ \P{}{P=p|\Pi_{\leq i} = \pi_{\leq i}}>2/s} \right)\\
\leq&\; \frac{2}{s} + s \cdot \frac{1}{sd^4} = \frac{3}{d^4}.
\end{align*}
\end{proof}
In Claim~\ref{clm:lb_info}, we bound the information $\Pi$ contains about $Z_{|P}$ (for a primer on information theory, see Appendix~\ref{sec:info}). Intuitively, by Claim~\ref{clm:lb_maxp} users have little information about $P$, and as a result they cannot know which potential subtree $Z_{|p}$ to focus their privacy budget on.
\begin{claim}
\label{clm:lb_info}
$$\sum_{p} \P{}{P=p} \cdot I(\Pi;Z_{|p}|P=p) \leq 1/(18d^2).$$
\end{claim}
\begin{proof}
By the inductive hypothesis, $Z$ is sampled from $\mathcal{D} \in \Delta_\ell$. Define $Z_{|<p}$ to be $Z_{|p_1,...,p_{d-\ell},0},...,Z_{|p_1,...,p_{d-\ell}, p_{d-\ell+1}-1}$. By the definition of $\Delta_\ell$, we know $Z_{|<p}$ and $Z_{|p}$ are independent given $P$, so $I(Z_{|<p};Z_{|p}|P=p) = 0$. Therefore by the chain rule for mutual information, we get
\begin{align*}
I(\Pi;Z_{|p}|P=p) \leq &I(\Pi,Z_{|<p};Z_{|p}|P=p) \\
=& I(Z_{|<p};Z_{|p}|P=p) + I(\Pi;Z_{|p}|P=p,Z_{|<p})\\
=&0 + I(\Pi;Z_{|p}|P=p,Z_{|<p}).
\end{align*}
The main step of the proof of this claim is to compare $I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p})$ and $ I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p})$. First, by Claim \ref{clm:lb_prod}, conditioning on $\Pi_{<i} = \pi_{<i}$ induces a product distribution for $Z_1,...,Z_d$. We also know that (as mentioned in the proof of Claim~\ref{clm:lb_prod}) conditioned on $\Pi_{<i} = \pi_{<i}$, $\Pi_i$ only depends on $Z_{r_i}$, the internal randomness of the user $i$, and their level $\ell_i$. By item 3 in the definition of $\Delta_\ell$, $P$ only depends on $Z_{d-\ell+1}$. We prove
\begin{equation}
I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p}) = I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p}). \label{eq:square}
\end{equation}
There are two cases depending on $r_i$.
\begin{itemize}
\item When $r_i \leq d-\ell+1$, user $i$ either has $\ell_i \leq d - \ell + 1$ or is a ``dummy'' user. Therefore, whether or not we condition on $P=p$, $\Pi_i$ is independent of $Z_{|p},Z_{|<p}$. Thus $$I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p}) =0 = I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p}).$$
\item When $r_i > d-\ell+1$, once we've conditioned on $\Pi_{<i} = \pi_{<i}$, additionally conditioning on $P=p$ does not change the joint distribution of $Z_{d-\ell+2},...,Z_d$. This is because $P = P_1, \ldots, P_{d - \ell + 1}$ depends only on $Z_1,\ldots,Z_{d-\ell+1}$, and, as shown above, conditioning on $\Pi_{<i} = \pi_{<i}$ induces a product distribution on $Z_1, \ldots, Z_d$ (and in particular on $Z_{d-\ell+2},...,Z_d$). It follows that conditioning on $P=p$ does not change the joint distribution of $Z_{|p},Z_{|<p},\Pi_i$. Thus $$I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p}) = I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p}).$$
\end{itemize}
Putting things together, we have
\begin{align}
&\sum_{p} \P{}{P=p} \cdot I(\Pi;Z_{|p}|P=p) \nonumber \\
\leq&\sum_p \P{}{P=p} \cdot I(\Pi;Z_{|p}|P=p,Z_{|<p}) \nonumber \\
=& \sum_p \sum_{i =n_\ell+1}^{n_\ell+n/d} \P{}{P=p} \cdot I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}, Z_{|<p}) \nonumber \\
=&\sum_{i =n_\ell+1}^{n_\ell+n/d} \sum_{\pi_{<i}}\sum_p\P{}{P=p} \cdot\P{}{\Pi_{<i} = \pi_{<i}|P=p}\cdot I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p}) \nonumber \\
=&\sum_{i =n_\ell+1}^{n_\ell+n/d}\sum_{\pi_{<i}}\sum_p\P{}{\Pi_{<i} = \pi_{<i}} \cdot\P{}{P=p|\Pi_{<i} = \pi_{<i}}\cdot I(\Pi_i;Z_{|p}|P=p, \Pi_{<i}=\pi_{<i}, Z_{|<p})~~~~~~\text{(Bayes' rule)} \nonumber \\
=&\sum_{i =n_\ell+1}^{n_\ell+n/d}\sum_{\pi_{<i}}\sum_p\P{}{\Pi_{<i} = \pi_{<i}} \cdot\P{}{P=p|\Pi_{<i} = \pi_{<i}}\cdot I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p})~~~~~~\text{(Equation~\ref{eq:square})} \nonumber \\
\leq&\sum_{i =n_\ell+1}^{n_\ell+n/d}\sum_{\pi_{<i}}\left(\sum_p\P{}{\Pi_{<i} = \pi_{<i}} \cdot I(\Pi_i;Z_{|p}| \Pi_{<i}=\pi_{<i}, Z_{|<p})\right)\cdot\left(\max_p\P{}{P=p|\Pi_{<i} = \pi_{<i}}\right) \nonumber \\
\leq&\sum_{i =n_\ell+1}^{n_\ell+n/d} \sum_{\pi_{<i}}\P{}{\Pi_{<i} = \pi_{<i}} \cdot I(\Pi_i;Z| \Pi_{<i}=\pi_{<i})\cdot\left(\max_p\P{}{P=p|\Pi_{<i} = \pi_{<i}}\right) \label{eq:bookmark}.
\end{align}
We now bound $I(\Pi_i;Z| \Pi_{<i}=\pi_{<i})$ using Theorem 1 from~\citet{DJW13}, simplified here as Lemma~\ref{lem:djw_2}.
\begin{lemma}
\label{lem:djw_2}
Let $\Pi$ be the distribution over randomizer outputs for an $\varepsilon$-local randomizer with inputs drawn from a distribution family parametrized by $\mathcal{V}$. Then $I(\Pi; \mathcal{V}) \leq 4(e^\varepsilon-1)^2$.
\end{lemma}
In particular, the proof of Lemma~\ref{lem:djw_2} implies that $I(\Pi_i;Z| \Pi_{<i}=\pi_{<i}) \leq 4(e^\varepsilon-1)^2.$ We continue our chain of inequalities.
\begin{align*}
(\ref{eq:bookmark}) \leq&\; \sum_{i =n_\ell+1}^{n_\ell+n/d} \sum_{\pi_{<i}}\P{}{\Pi_{<i} = \pi_{<i}} \cdot 4(e^\varepsilon-1)^2 \cdot\left(\max_p\P{}{P=p|\Pi_{<i} = \pi_{<i}}\right)\\
\leq&\; \frac{n}{d} \cdot (e^\varepsilon-1)^2 \cdot \frac{12}{d^4} ~~~\text{(Claim \ref{clm:lb_maxp})}\\
\leq&\; \frac{1}{18d^2}.
\end{align*}
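The final inequality substitutes the sample complexity $n = d^3/(216(e^{\varepsilon}-1)^2\log(d))$ fixed at the start of the proof:
\begin{align*}
\frac{n}{d} \cdot (e^\varepsilon-1)^2 \cdot \frac{12}{d^4} = \frac{12\, n\, (e^{\varepsilon}-1)^2}{d^5} = \frac{12}{216\, d^2\log(d)} = \frac{1}{18\, d^2 \log(d)} \leq \frac{1}{18d^2}.
\end{align*}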
\end{proof}
In our last claim, we convert the bound on mutual information from Claim \ref{clm:lb_info} into a bound on the $L_1$ distance between distributions.
\begin{claim}
\label{clm:lb_l1}
There exists a distribution $\mathcal{D}'(\pi)$ which is a mixture of distributions in $\Delta_{\ell-1}$ for each $\pi$ such that $$\sum_{\pi} \Pr[\Pi=\pi] \cdot \|( \mathcal{D}|(\Pi=\pi)) -\mathcal{D}'(\pi) \|_1 \leq 1/(3d).$$
\end{claim}
\begin{proof}
By the definition of mutual information in terms of KL-divergence, $$I(\Pi; Z_{|p}|P=p) = \kl{\P{}{\Pi,Z_{|p}|P=p}}{\P{}{\Pi|P=p} \times \P{}{Z_{|p}|P=p}}.$$ Next, by Pinsker's inequality,
\begin{align*}
&\;\sum_{\pi,z_{|p}} \left|\P{}{\Pi=\pi,Z_{|p}=z_{|p}|P=p} - \P{}{\Pi=\pi|P=p} \times \P{}{Z_{|p}=z_{|p}|P=p}\right| \\
\leq&\; \sqrt{2 \kl{\P{}{\Pi,Z_{|p}|P=p}}{\P{}{\Pi|P=p} \times \P{}{Z_{|p}|P=p}}}
\end{align*}
\noindent so we may upper bound
\begin{align*}
&\sum_p \P{}{P=p} \cdot \sum_{\pi,z_{|p}} \left|\P{}{\Pi=\pi,Z_{|p}=z_{|p}|P=p} - \P{}{\Pi=\pi|P=p} \times \P{}{Z_{|p}=z_{|p}|P=p}\right|\\
\leq &\sum_p \P{}{P=p} \cdot \sqrt{2 \kl{\P{}{\Pi,Z_{|p}|P=p}}{\P{}{\Pi|P=p} \times \P{}{Z_{|p}|P=p}}}\\
= &\sum_p \P{}{P=p} \cdot \sqrt{2I(\Pi; Z_{|p}|P=p)}~~~~~~\text{(definition of mutual information)}\\
\leq & \sqrt{2\sum_p \P{}{P=p} \cdot I(\Pi; Z_{|p}|P=p)} ~~~~~\text{(Jensen's inequality and concavity of $\sqrt{\cdot}$)}\\
\leq & 1/(3d) ~~~~~\text{(Claim \ref{clm:lb_info})}.
\end{align*}
On the other hand, we can also lower bound
\begin{align}
&\;\sum_p \P{}{P=p} \cdot \sum_{\pi,z_{|p}} \left|\P{}{\Pi=\pi,Z_{|p}=z_{|p}|P=p} - \P{}{\Pi=\pi|P=p} \times \P{}{Z_{|p}=z_{|p}|P=p}\right| \nonumber \\
=&\sum_p \P{}{P=p} \cdot \sum_{\pi} \P{}{\Pi=\pi|P=p} \cdot \sum_{z_{|p}} \left| \P{}{Z_{|p}=z_{|p}|\Pi=\pi,P=p}-\P{}{Z_{|p}=z_{|p}|P=p}\right| \nonumber \\
=&\sum_{\pi} \P{}{\Pi=\pi} \cdot \sum_p \P{}{P=p|\Pi=\pi} \cdot \sum_{z_{|p}} \left| \P{}{Z_{|p}=z_{|p}|\Pi=\pi,P=p}-\P{}{Z_{|p}=z_{|p}|P=p}\right| \nonumber \\
=&\sum_{\pi} \P{}{\Pi=\pi} \cdot \sum_p \P{}{P=p|\Pi=\pi} \cdot \nonumber \\
& \sum_{z} \left| \P{}{Z = z|\Pi=\pi,P=p}-\P{}{Z_{|p}=z_{|p}|P=p} \cdot \P{}{Z=z|\Pi=\pi,P=p,Z_{|p} = z_{|p}}\right| \nonumber \\
\geq&\sum_{\pi} \P{}{\Pi=\pi} \cdot \sum_{z} \big| \P{}{Z=z|\Pi=\pi}- \nonumber\\
&\sum_p \P{}{P=p|\Pi=\pi} \cdot \P{}{Z_{|p}=z_{|p}|P=p}\cdot \P{}{Z=z|\Pi=\pi,P=p,Z_{|p} = z_{|p}}\big| \label{eq:swirl}
\end{align}
\noindent where the last equality comes from multiplying by $$1 = \sum_z \P{}{Z = z \mid \Pi = \pi, P = p, Z_{\mid p} = z_{\mid p}}$$ and the inequality uses $$\P{}{Z = z \mid \Pi = \pi} = \sum_p \P{}{Z = z \mid \Pi = \pi, P = p} \cdot \P{}{P = p \mid \Pi = \pi}$$ and the triangle inequality. With the preceding upper bound, the quantity in Equation~\ref{eq:swirl} is $\leq 1/(3d)$.
Now, define $\mathcal{D}'(\pi)$ to be the distribution on $Z$ such that for all $z$,
$$\P{Z\sim \mathcal{D}'(\pi)}{Z=z}=\sum_p\P{}{P=p|\Pi=\pi} \cdot \P{}{Z_{|p}=z_{|p}|P=p}\cdot \P{}{Z=z|\Pi=\pi,P=p,Z_{|p} = z_{|p}}.$$
Equivalently, $Z\sim \mathcal{D}'(\pi)$ is sampled through the following procedure: (1) sample $P$ according to $P \mid \Pi = \pi$, (2) sample $Z_{|p}$ according to $Z_{\mid p} \mid P = p$, and (3) sample $Z$ according to $Z \mid \Pi = \pi, P = p, Z_{\mid p} = z_{\mid p}$.
Noting that $\P{Z\sim \mathcal{D}|(\Pi=\pi)}{Z=z} = \P{}{Z=z|\Pi=\pi}$ for all $z$, since the quantity in Equation~\ref{eq:swirl} is $\leq 1/(3d)$ we get $$\sum_{\pi} \Pr[\Pi=\pi] \cdot \|( \mathcal{D}|(\Pi=\pi)) -\mathcal{D}'(\pi) \|_1 \leq 1/(3d).$$
It remains to show that $\mathcal{D}'(\pi)$ is a mixture of distributions in $\Delta_{\ell-1}$; doing so will complete our proof of the original inductive step. We will show for any $z_1,...,z_{d-\ell+1}$ such that $\P{Z\sim\mathcal{D}'(\pi)}{Z_1,...,Z_{d-\ell+1}=z_1,...,z_{d-\ell+1} }\neq 0$, $\mathcal{D}'(\pi) \mid (Z_1,...,Z_{d-\ell+1}=z_1,...,z_{d-\ell+1})$ is a distribution in $\Delta_{\ell-1}$. Recalling that membership in $\Delta_{\ell-1}$ requires meeting three conditions, we verify these conditions below.
\begin{enumerate}
\item By Claim \ref{clm:lb_prod}, we know $\mathcal{D}|(\Pi=\pi)$ is a product distribution on $Z_1,...,Z_d$. It is easy to check that as $\mathcal{D}'(\pi)$ is sampled according to $\mathcal{D}|(\Pi=\pi)$, $\mathcal{D}'(\pi)$ is also a product distribution on $Z_1,...,Z_d$, and after the conditioning, $\mathcal{D}'(\pi)|(Z_1,...,Z_{d-\ell+1}=z_1,...,z_{d-\ell+1})$ remains a product distribution on $Z_1,...,Z_d$.
\item Since we draw the final $Z$ conditioned on $Z_{\mid p} = z_{\mid p}$, $Z_i$ is deterministically fixed for $i = 1, \ldots, d - \ell$.
\item First, note that the marginal distribution of $\mathcal{D}|(P=p)$ on $Z_{|p}$ is uniform since $\mathcal{D} \mid (\Pi = \pi)$ induces a product distribution on $Z_1, \ldots, Z_d$, and conditioning on $P = p$ only fixes $Z_{\leq d- \ell + 1}$ and leaves $Z_{d - \ell + 2} \times \cdots \times Z_d$ as a product distribution. Thus $$\P{Z \sim \mathcal{D}'(\pi)|(Z_{\leq d - \ell + 1} = z_{\leq d - \ell + 1})}{Z_{|p} = z_{|p}} = \P{}{Z_{|p}=z_{|p} \mid P=p}$$
so the marginal distribution of $\mathcal{D}'(\pi)|(Z_1,...,Z_{d-\ell+1}=z_1,...,z_{d-\ell+1})$ on $Z_{|p}$ is also the uniform distribution. Therefore $\mathcal{D}'(\pi)|(Z_1,...,Z_{d-\ell+1}=z_1,...,z_{d-\ell+1})$ is a distribution in $\Delta_{\ell-1}$ and $\mathcal{D}'(\pi)$ is a mixture of distributions in $\Delta_{\ell-1}$.
\end{enumerate}
\end{proof}
\paragraph{Base case ($\ell = 1$):} We finally discuss the base case of our induction. Define $\mathcal{A}$, $\Pi$ and $P$ as in the induction step. Since the output of $\mathcal{A}$ is a function of $\Pi$, $$\P{}{\mathcal{A} \text{ outputs } P(Z)} \leq \sum_{\pi} \P{}{\Pi = \pi} \cdot \max_p \P{}{P=p | \Pi = \pi}.$$
Since Claim \ref{clm:lb_maxp} also applies to the base case, we get $$
\P{}{\mathcal{A} \text{ outputs } P(Z)} \leq 3/d^4 < 1/3 < 1/3 + 1/(3d).$$
\end{proof}
\section{Information Theory}
\label{sec:info}
We briefly review some standard facts and definitions from information theory,
starting with entropy. Throughout, our $\log$ is base $e$.
\begin{definition}
The \emph{entropy} of a random variable $X$, denoted by $H(X)$, is defined as $H(X) = \sum_x \Pr[X = x] \log(1 / \Pr[X = x])$, and the \emph{conditional entropy} of random variable $X$ conditioned on random variable $Y$ is defined as $H(X|Y) = \mathbb{E}_y[H(X|Y = y)]$.
\end{definition}
Next, we can use entropy to define the mutual information between two random
variables.
\begin{definition}
\label{def:muinfo}
The \emph{mutual information} between two random variables $X$ and $Y$ is defined as $I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)$, and the \emph{conditional mutual information} between $X$ and $Y$ given $Z$ is defined as $I(X;Y|Z) = H(X|Z) - H(X|YZ) = H(Y|Z) - H(Y|XZ)$.
\end{definition}
\begin{fact}\label{fact:cr}
Let $X_1,X_2,Y,Z$ be random variables. Then $I(X_1X_2;Y|Z) = I(X_1;Y|Z) + I(X_2;Y|X_1Z)$.
\end{fact}
\begin{definition}
The \emph{Kullback-Leibler divergence} between two random variables $X$ and $Y$ is defined as $\kl{X}{Y} = \sum_x \Pr[X = x] \log(\Pr[X = x] / \Pr[Y = x])$.
\end{definition}
\begin{fact}
\label{fact:div}
Let $X,Y,Z$ be random variables. Then $$I(X;Y|Z) = \mathbb{E}_{x,z}[\kl{(Y| X = x, Z = z)}{(Y| Z = z)}].$$
\end{fact}
\begin{lemma}[Pinsker's inequality]
Let $X,Y$ be random variables. Then
$$\sqrt{2\kl{X}{ Y}} \geq \sum_x |\Pr[X= x] - \Pr[Y=x]|.$$
\end{lemma}
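As a quick numerical sanity check on these definitions (a self-contained sketch, independent of the arguments above), the following Python snippet verifies Pinsker's inequality for a pair of Bernoulli distributions:
\begin{verbatim}
import math

def kl(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), natural log.
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def l1(p, q):
    # L1 distance between Bernoulli(p) and Bernoulli(q).
    return abs(p - q) + abs((1 - p) - (1 - q))

p, q = 0.6, 0.4
assert math.sqrt(2 * kl(p, q)) >= l1(p, q)   # Pinsker's inequality
print(math.sqrt(2 * kl(p, q)), l1(p, q))     # approx 0.403 vs 0.400
\end{verbatim}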
\section{Preliminaries}
\label{sec:prelims}
We begin with the definition of approximate differential privacy. Given data domain $\mathcal{X}$, two data sets $S, S' \in \mathcal{X}^n$ are \emph{neighbors} (denoted $S \sim S'$) if they differ in at most one coordinate: i.e. if there exists an index $i$ such that for all $j \neq i$, $S_j = S'_j$. A differentially private algorithm must have similar output distributions on all pairs of neighboring datasets.
\begin{definition}[\cite{DMNS06}]
Let $\varepsilon,\delta \geq 0$. A randomized algorithm $\mathcal{M}:\mathcal{X}^n\rightarrow \mathcal{O}$ is \emph{$(\varepsilon,\delta)$-differentially private} if for every pair of neighboring data sets $S \sim S' \in \mathcal{X}^n$, and every event $\Omega \subseteq \mathcal{O}$, $$\P{\mathcal{M}}{\mathcal{M}(S) \in \Omega} \leq \exp(\varepsilon)\P{\mathcal{M}}{\mathcal{M}(S') \in \Omega} + \delta.$$ When $\delta = 0$, we say that $\mathcal{M}$ satisfies (pure) $\varepsilon$-differential privacy.
\end{definition}
Differential privacy has two nice properties. First, it composes neatly: the composition of algorithms $\mathcal{M}_1, \ldots, \mathcal{M}_n$ that are respectively $(\varepsilon_1, \delta_1), \ldots, (\varepsilon_n, \delta_n)$-differentially private is $(\sum_i \varepsilon _i, \sum_i \delta_i)$-differentially private. For pure differential privacy, this is tight in general. Second, differential privacy is resilient to post-processing: given an $(\varepsilon,\delta)$-differentially private $\mathcal{M}$ and any function $f$, $f \circ \mathcal{M}$ is still $(\varepsilon,\delta)$-differentially private (see Appendix~\ref{subsec:dp_props} for details). For brevity, we often abbreviate ``differential privacy'' as ``privacy''.
As defined, the constraint of differential privacy is on the \emph{output} of an algorithm $\mathcal{M}$, not on its internal workings. Hence, it implicitly assumes a trusted data curator, who has access to the entire raw dataset. This is sometimes referred to as differential privacy in the central model. In contrast, this paper focuses on the more restrictive \emph{local} model~\cite{DMNS06} of differential privacy. In the local model, the private computation is an interaction between $n$ users, each of whom holds exactly one dataset record, and is coordinated by a protocol $\mathcal{A}$. We assume throughout this paper that each user's datum is drawn i.i.d. from some unknown distribution: $x_i \sim_{iid} \mathcal{D}$\footnote{Roughly speaking, this corresponds to a setting in which users are ``symmetric'' and in which nothing differentiates them a priori. All of our results generalize to the setting in which there are different ``types'' of users, known to the protocol up front.}. Informally, at each round $t$ of the interaction, a protocol $\mathcal{A}$ observes the transcript of interactions so far, selects a user, and assigns the user a randomizer. The user then applies the randomizer to their datum, using fresh randomness for each application, and publishes the output. In turn, the protocol observes the updated transcript, selects a new user-randomizer pair, and the process continues. We define these terms precisely below.
\begin{definition}
An \emph{$(\varepsilon,\delta)$-randomizer} $R \colon \mathcal{X} \to \mathcal{Y}$ is an $(\varepsilon,\delta)$-differentially private function taking a single data point as input.
\end{definition}
A simple, canonical, and useful randomizer is \emph{randomized response}~\cite{W65, DMNS06}.
\begin{example}[Randomized Response]
\label{ex:rr}
Given data universe $\mathcal{X} = [k]$ and datum $x_i \in \mathcal{X}$, $\varepsilon$-randomizer $\RR{x_i, \varepsilon}$ outputs $x_i$ with probability $\tfrac{e^\varepsilon}{e^\varepsilon + k-1}$ and otherwise outputs a uniformly random element of $\mathcal{X} - \{x_i\}$.
\end{example}
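A minimal Python rendering of this randomizer (illustrative; we encode the universe $[k]$ as $\{0,\ldots,k-1\}$) might look as follows.
\begin{verbatim}
import math, random

def randomized_response(x, eps, k):
    # Report x with probability e^eps / (e^eps + k - 1); otherwise
    # report a uniformly random *other* element of {0, ..., k-1}.
    if random.random() < math.e ** eps / (math.e ** eps + k - 1):
        return x
    other = random.randrange(k - 1)
    return other if other < x else other + 1
\end{verbatim}
Any two inputs induce output distributions within a multiplicative $e^\varepsilon$ factor of one another, which is exactly the $\varepsilon$-randomizer condition.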
Next, we formally define transcripts and protocols.
\begin{definition}
A \emph{transcript} $\pi$ is a vector consisting of 5-tuples $(i^t, R_t, \varepsilon_t, \delta_t, y_t)$ --- encoding the user chosen, randomizer assigned, randomizer privacy parameters, and randomized output produced --- for each round $t$. $\pi_{<t}$ denotes the transcript prefix before round $t$. Letting $S_\pi$ denote the collection of all transcripts and $S_R$ the collection of all randomizers, a \emph{protocol} is a function $\mathcal{A} \colon S_\pi \to \left([n] \times S_R \times \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0}\right) \cup \{\perp\}$ mapping transcripts to users, randomizers, and randomizer privacy parameters ($\perp$ is a special character indicating a protocol halt).
\end{definition}
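In code, one might render this formalism roughly as follows (a hypothetical Python sketch of the types involved, not part of the formal model):
\begin{verbatim}
from typing import Callable, List, Optional, Tuple

# A randomizer maps (datum, eps, delta) to a randomized message.
Randomizer = Callable[[object, float, float], object]

# One transcript entry per round:
# (user index, randomizer, eps, delta, published output).
TranscriptEntry = Tuple[int, Randomizer, float, float, object]
Transcript = List[TranscriptEntry]

# A protocol maps the transcript so far to the next assignment
# (user, randomizer, eps, delta), or to None -- the halt symbol.
Protocol = Callable[
    [Transcript], Optional[Tuple[int, Randomizer, float, float]]
]
\end{verbatim}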
The transcript that results from running a locally private computation will often be post-processed to compute some useful function of the data. However, the privacy guarantee must hold even if the entire transcript is observed. Hence, in this paper we abstract away the task that the computation is intended to solve, and view the output of a locally private computation as simply the transcript it generates.
To clarify the role of interaction in these private computations --- especially when analyzing reductions between computations with different kinds of interactivity --- it is often useful to speak separately of protocols and \emph{experiments}. While the protocol $\mathcal{A}$ is a function mapping transcripts to users and randomizers, the experiment is the interactive process that maps a protocol and collection of users drawn from a distribution $\mathcal{D}$ to a finished transcript. In the simplest case, \ensuremath{\mathsf{FollowExpt}}~(Algorithm~\ref{alg:fexpt}), the experiment exactly follows the outputs of its protocol.
\begin{algorithm}
\caption{The $\ensuremath{\mathsf{FollowExpt}}$ experiment}\label{alg:fexpt}
\begin{algorithmic}[1]
\Procedure{$\ensuremath{\mathsf{FollowExpt}}$}{$\mathcal{A}, \mathcal{D}, n$}
\State Draw $n$ users $\{x_i\} \sim \mathcal{D}^{n}$
\State Initialize transcript $\pi_0 \gets \emptyset$
\For{$t = 1, 2, \ldots $}
\If{$\mathcal{A}(\pi_{<t}) = \perp$}
\State Output transcript $\pi_{<t}$
\Else
\State $(i^t, R_t, \varepsilon_t, \delta_t) \gets \mathcal{A}(\pi_{<t})$
\State User $i^t$ publishes $y_t \sim R_t(x_{i^t}, \varepsilon_t, \delta_t)$
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
However, experiments may in general heed, modify, or ignore the outputs of their input protocol. We delineate the privacy characteristics of experiment-protocol pairs and protocols in isolation below. Here and throughout, the dataset is not viewed as an input to an experiment, but is drawn from $\mathcal{D}$ by the experiment-protocol pair. Drawing a fresh user $\sim \mathcal{D}$ corresponds to adding an additional data point, and so the sample complexity of an experiment-protocol pair is the number of draws from $\mathcal{D}$ over the run of the algorithm. For the simple algorithm $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$ defined above, the sample complexity is always $n$. Finally we remark that although the distribution $\mathcal{D}$ and the sample complexity $n$ are inputs to the experiment, for brevity we typically omit them and focus on the protocol $\mathcal{A}$; e.g. writing $\ensuremath{\mathsf{Expt}}(\mathcal{A})$ rather than $\ensuremath{\mathsf{Expt}}(\mathcal{A}, \mathcal{D}, n)$.
\begin{definition}
Experiment-protocol pair $\ensuremath{\mathsf{Expt}}(\mathcal{A})$ satisfies \emph{$(\varepsilon,\delta)$-local differential privacy} (LDP) if it is $(\varepsilon,\delta)$-differentially private in its transcript outputs. A protocol $\mathcal{A}$ satisfies $(\varepsilon,\delta)$-local differential privacy (LDP) if experiment-protocol pair $\ensuremath{\mathsf{FollowExpt}} (\mathcal{A})$ is $(\varepsilon,\delta)$-locally differentially private.
\end{definition}
Experiment-protocol pairs can be, by increasing order of generality, \emph{noninteractive}, \emph{sequentially interactive}, and \emph{fully interactive}.
\begin{definition}
An experiment-protocol pair $\ensuremath{\mathsf{Expt}}(\mathcal{A})$ is \emph{noninteractive} if, at each round $t$, as random variables, $(i^t, R_t, \varepsilon_t, \delta_t) \perp\!\!\!\!\perp \Pi_{<t} \mid t$.
\end{definition}
In other words, noninteractivity forces nonadaptivity, and all user-randomizer assignments are made before the experiment begins. In contrast, in sequentially interactive experiment-protocol pairs, users may be queried adaptively, but only once.
\begin{definition}
An experiment-protocol pair $\ensuremath{\mathsf{Expt}}(\mathcal{A})$ is \emph{sequentially interactive} if, at each round $t$, $i^t \neq i^{t-1}, \ldots, i^1$.
\end{definition}
Finally, in fully interactive experiments, the experiment-protocol may make user-randomizer assignments adaptively, and each user may receive arbitrarily many randomizer assignments. Along the same lines, we say a protocol $\mathcal{A}$ is noninteractive (respectively sequentially and fully interactive) if $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$ is a noninteractive (respectively sequentially and fully interactive) experiment-protocol pair. This experiment-protocol formalism will be useful in constructing the full-to-sequential reduction in Section~\ref{sec:simulation}; elsewhere, we typically elide the distinction and simply reason about $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$ as ``protocol $\mathcal{A}$''. For any locally private protocol, we refer to the number of users $n$ that it queries as its \emph{sample complexity}. For fully interactive protocols, the total number of rounds --- which we denote by $T$ --- may greatly exceed $n$. In contrast, for both non-interactive and sequentially interactive protocols, the number of rounds $T \leq n$.
At each round $t$ of a fully interactive $\epsilon$-locally private protocol, we know that $\epsilon_t \leq \epsilon$. For many protocols, we can say more about how the $\epsilon_t$ parameters relate to $\epsilon$:
\begin{definition}
Consider an $\epsilon$-locally private protocol $\mathcal{A}$. Let $\{\epsilon_t\}_{t=1}^T$ denote the minimal privacy parameters of the local randomizers $R_t$ selected at round $t$ considered as random variables. We say the protocol $\mathcal{A}$ is \emph{$k$-compositionally private} if for all $i \in [n]$, with probability $1$ over the randomness of the transcript, $$\sum_{t \colon i_t = i}\epsilon_t \leq k\epsilon.$$ If $k = 1$, a protocol is simply \emph{compositionally private}.
\end{definition}
\begin{remark}
In fact, all of our results hold without modification even under the weaker condition of \emph{average} $k$-compositionality. For a protocol $\mathcal{A}$ with sample complexity $n$, $\mathcal{A}$ is $k$-compositional on average if
$$\sum_{t} \epsilon_t \leq k\epsilon n.$$ For brevity, we often shorthand ``$k$-compositionally private'' as simply ``$k$-compositional''.
\end{remark}
Informally, a compositionally private protocol is one in which the privacy parameters for each user ``just add up.'' Almost every locally private protocol studied in the literature (and in particular, every protocol whose privacy analysis follows from the composition theorem for pure differential privacy) is compositionally private\footnote{This simple compositionality applies even if $\{\varepsilon_t\}_{t=1}^T$ are chosen adaptively in each round (see Theorem 3.6 in~\citet{RRUV16}).}. They are so ubiquitous that it is tempting to guess that all $(\epsilon,0)$-locally private protocols are compositional. However, this is false: for every $k$ and $\epsilon$, there are $\epsilon$-locally private protocols that fail to be $k$-compositionally private. The following example shows that by taking advantage of special structure in the data domain and choice of randomizers it is possible to achieve $(\varepsilon,0)$-local privacy, even as the sum of the round-by-round privacy parameters greatly exceeds $\epsilon$.
\begin{example}[Informal]
\label{ex:comp}
Let the data universe $\mathcal{X}$ consist of the canonical basis vectors $e_1, \ldots, e_d \in \{0,1\}^{d}$, and let each $x_1, \ldots, x_n$ be an arbitrary element of $\mathcal{X}$. Consider the $d$ round protocol where, for each round $j \in [d]$, every user $i$ with $x_i = e_j$ outputs a sample from $\RR{1, \varepsilon}$, and the remaining users output a sample from $\Ber{0.5}$. As $\mathsf{RR}(\cdot,\varepsilon)$ is an $\varepsilon$-local randomizer which each user employs only once, and remaining outputs are data-independent, this protocol is $\varepsilon$-locally private. But the protocol fails to be $k$-compositionally private for $k < d/2$.
\end{example}
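The privacy claim in this example can be checked mechanically. The sketch below (illustrative Python, for a small $d$; the encoding of data values as indices is ours) enumerates all $d$-bit output strings, computes their probabilities under two different data values, and confirms that the worst-case likelihood ratio is $e^\varepsilon$ rather than $e^{d\varepsilon}$.
\begin{verbatim}
import itertools, math

def string_prob(y, j, eps):
    # Probability that a user holding e_j outputs the bit string y:
    # round j uses RR(1, eps); every other round is an unbiased coin.
    p = 1.0
    q1 = math.e ** eps / (math.e ** eps + 1)
    for r, bit in enumerate(y):
        if r == j:
            p *= q1 if bit == 1 else 1 - q1
        else:
            p *= 0.5
    return p

d, eps = 4, 1.0
ratios = [string_prob(y, 0, eps) / string_prob(y, 1, eps)
          for y in itertools.product([0, 1], repeat=d)]
print(max(ratios), math.e ** eps)  # equal: e^eps, despite d rounds of RR
\end{verbatim}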
The preceding example demonstrates that the careful choice of local randomizers based on the data universe structure can strongly violate compositional privacy. Seen another way, when multiple queries are asked of the same user, there are situations in which the correlation in privatized responses induced by being run on the same data element can lead to arbitrarily sub-compositional privacy costs. The main result of our paper is that the additional power of a fully interactive protocol, on top of sequential interactivity, is characterized by its compositionality.
\section{From Full to Sequential Interactivity} \label{sec:simulation}
We show that any $(\varepsilon, 0)$-locally private compositional protocol is ``equivalent'' to a sequentially interactive protocol with sample complexity that is larger by only a small constant factor. By equivalent, we mean that for any $(\varepsilon, 0)$-locally private compositional protocol, we can exhibit a sequentially interactive $(3\varepsilon, 0)$-locally differentially private protocol with only a constant factor larger sample complexity that induces exactly the same distribution on transcripts. Thus for any task for which the original protocol was useful, the sequentially interactive protocol is just as useful\footnote{Formally, for any loss function defined over a data distribution $\mathcal{D}$ and a transcript $\Pi$, when data points $x_i$ are drawn i.i.d. from $\mathcal{D}$, the two protocols induce exactly the same distribution over transcripts, and hence the same distribution over losses. Once one restricts attention to locally private protocols with privacy parameter $\epsilon \leq 1$ that take as input points drawn i.i.d. from a distribution $\mathcal{D}$, it is without loss of generality to measure the success or failure of a protocol with respect to the underlying distribution $\mathcal{D}$, rather than with respect to the sample. This is because such protocols are $\approx \epsilon/\sqrt{n}$ differentially private when viewed in the central model of differential privacy (in which the input may be permuted before being used in the protocol) \cite{EFMRTT18,BBGN19}, and hence the distribution on transcripts would be almost unchanged even if the entire dataset was \emph{resampled} i.i.d. from $\mathcal{D}$~\cite{CLNRW16,NRW18}. Thus, for such protocols, the transcript distribution is governed by the data distribution $\mathcal{D}$, but not (significantly) by the sample.}.
More generally, we give a generic reduction under which any $(\varepsilon, 0)$-private $k$-compositional protocol can be compiled into a sequentially interactive protocol with an $e^{\epsilon}k$-factor increase in sample complexity.
Our proof is constructive; given an arbitrary $k$-compositional $(\varepsilon, 0)$-locally differentially private protocol we show how to simulate it using a sequentially interactive protocol that induces the same joint distribution on transcripts. The ``simulation'' is driven by three main ideas:
\begin{enumerate}
\item \textbf{Bayesian Resampling}: The dataset used in a locally differentially private protocol is static once the protocol begins. However, we consider the following thought experiment: each user's datum is \emph{resampled} from the posterior distribution on their datum, conditioned on the transcript thus far, before every round in which they are given a local randomizer. We observe that the mechanism from this thought experiment induces exactly the same joint distribution on datasets and transcripts upon completion of the mechanism. Thus, for the remainder of the argument, we can seek to simulate this ``Bayesian Resampling'' version of the mechanism.
\item \textbf{Private Rejection Sampling}: Because of the local differential privacy guarantee, at any step of the algorithm, the posterior on a user's datum conditioned on the private transcript generated so far must be close to their prior. Thus, it is possible to sample from this posterior distribution by first sampling from the prior, and then applying a rejection sampling step that is both a) likely to succeed, and b) differentially private. Sampling from the prior simply corresponds to querying a new user. At first glance, applying rejection sampling as needed seems to require information that the users will not have available, because they do not know the underlying data distribution $\mathcal{D}$. But an application of Bayes' rule, together with a data-independent rescaling, can be used to re-write the required rejection probability using only quantities that each user can compute from her own data point and the transcript. A similar use of rejection sampling appears in the simulation of locally private algorithms by statistical query algorithms given by~\citet{KLNRS11}. (A minimal code sketch of this rejection-sampling step appears after this list.)
\item \textbf{Data Independent Decomposition of Local Randomizers}: The two ideas above suffice to transform a fully interactive mechanism into a sequentially interactive mechanism, with a blowup in sample complexity from $n$ to $T$ (because in the sequentially interactive protocol that results from rejection sampling, each user applies only one local randomizer instead of an average of $T/n$). However, we generalize a recent result of \cite{BBGN19} to show that any $\epsilon_i$-private local randomizer can be described as a mixture between a \emph{data independent} distribution and an $(\varepsilon, 0)$-private local randomizer for any $\epsilon > \epsilon_i$, where the weight on the data independent distribution is roughly (for small constant $\epsilon$) $1 - \epsilon_i/\epsilon$. Thus we can simulate each local randomizer while only needing to query a new user with probability $\epsilon_i/\epsilon$. As a result, for any compositional mechanism, 1 user in the sequential setting suffices (in expectation) to simulate the entire transcript of a single user in the fully interactive setting. More generally, if the mechanism is $k$-compositional, then $k$ users are required in expectation to carry out the simulation. The realized sample complexity concentrates sharply around its expectation.
\end{enumerate}
\subsection{Step 1: A Bayesian Thought Experiment}
The first step of our construction is to observe that for any locally private protocol $\mathcal{A}$, $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$ induces exactly the same distribution over transcripts as $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$. The difference is that in $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$, between each interaction with a given user $i$, their datum $x_i$ is \emph{resampled} from the posterior distribution on user $i$'s data conditioned on the portion of the transcript generated thus far. We prove in Lemma~\ref{lem:bayes} that the two experiments produce exactly the same transcript distribution. Once we establish this, our goal will be to simulate the transcript distribution induced by $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$.
\begin{algorithm}
\caption{$\ensuremath{\mathsf{BayesExpt}}$}\label{alg:fipbayes}
\begin{algorithmic}[1]
\Procedure{$\ensuremath{\mathsf{BayesExpt}}$}{$\mathcal{A}, \mathcal{D}, n$ }
\State Initialize transcript $\pi_0 = \emptyset$
\For{$t = 1,2, \ldots$}
\If{$\mathcal{A}(\pi_{<t}) = \perp$}
\State Output transcript $\pi_{<t}$
\Else
\State $(i^{t}, R_t, \varepsilon_t,\delta_t) \gets \mathcal{A}(\pi_{<t})$
\State Redraw $x_{i^t} \sim Q_{i,t}$ \Comment{$Q_{i,t}$ is the posterior on $x_{i^t}$ given $\pi_{ < t}$}
\State User $i^{t}$ publishes $y_t \sim R_t(x_{i^t})$
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
Note that when $i^{t}$ is selected for the first time, $Q_{i,t} = \mathcal{D}$, and so the sample complexity (i.e., the number of draws from $\mathcal{D}$) of $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$ is bounded by $n$.
\begin{lemma}
\label{lem:bayes}
For any protocol $\mathcal{A}$, let $\Pi^f$ be the transcript random variable that is output by $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$ and let $\Pi^b$ be the transcript output by $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$. Then $$ \Pi^f \stackrel{d}{=} \Pi^b$$ where $\stackrel{d}{=}$ denotes equality of distributions.
\end{lemma}
\begin{proof}
We show this by (strong) induction on rounds in the transcript. The base case $t=1$ is immediate: for any index $i^{1}$ selected by $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$, the posterior distribution $Q_{i,1} $ is the same as the prior $\mathcal{D}$.
Now suppose it is true up to time $t+1$, i.e. $\Pi^f_{< t+1} \stackrel{d}{=} \Pi^b_{< t+1}$. Then since the joint distributions $\Pi_{ < t +2}$ factor as $(i^{t+1}, R_{t+1}, \epsilon_{t+1}, \delta_{t+1}, Y_{t+1}|\Pi_{ < t +1}) \cdot \Pi_{ < t +1}$, it suffices to show that the conditional distributions on $i^{t+1}, R_{t+1}, \epsilon_{t+1}, \delta_{t+1}, Y_{t+1}|\Pi_{ < t +1}$ coincide. Moreover, the conditional distribution on $i^{t+1}, R_{t+1}, \epsilon_{t+1}, \delta_{t+1}|\Pi_{ < t +1}$ is given by $\mathcal{A}(\Pi_{<{t+1}})$ under both algorithms, and so it remains only to show that $Y_{t+1}|i^{t+1}, R_{t+1}, \epsilon_{t+1}, \delta_{t+1},\Pi_{ < t +1}$ is the same distribution under both algorithms.
Under $\ensuremath{\mathsf{FollowExpt}}(\mathcal{A})$,
$$Y_{t+1}|i^{t+1}, R_{t+1}, \epsilon_{t+1}, \delta_{t+1},\Pi_{ < t +1} \sim R_{t+1}(x_{i^{t+1}}, \epsilon_{t+1}, \delta_{t+1}|\Pi_{ < t +1}) \stackrel{d}{=} R_{t+1}(u, \epsilon_{t+1}, \delta_{t+1}),$$
where $u \stackrel{d}{=} x_{i^{t+1}}|\Pi_{< t+1} \stackrel{d}{=} Q_{i, t+1}$ by definition, and we use the fact that after conditioning on $\Pi_{<t+1}$, $x_{i^{t+1}}$ is independent of $\varepsilon_{t+1}$ and $\delta_{t+1}$. Redrawing $u \sim Q_{i, t+1}$ does not change the marginal distribution of $R_{t+1}(u, \epsilon_{t+1}, \delta_{t+1})$, which is exactly the distribution under $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$, as desired.
\end{proof}
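As a quick sanity check, the equality of transcript distributions can also be verified numerically on a toy protocol. The sketch below is our own illustration (not code from any referenced implementation): a single user answers two $\varepsilon$-randomized-response queries, once with a static datum as in $\ensuremath{\mathsf{FollowExpt}}$, and once with the datum resampled from its posterior after the first answer as in $\ensuremath{\mathsf{BayesExpt}}$; the two empirical transcript distributions agree up to Monte Carlo error.
\begin{verbatim}
# Monte Carlo sanity check of the lemma on a toy two-round protocol.
# All names and parameters here are illustrative.
import math, random
from collections import Counter

EPS, P1, N = 1.0, 0.3, 200000   # privacy parameter, prior P(x=1), trials
KEEP = math.exp(EPS) / (1 + math.exp(EPS))

def rand_resp(x):
    """eps-DP randomized response on a bit."""
    return x if random.random() < KEEP else 1 - x

def lik(y, x):
    """P[rand_resp(x) = y]."""
    return KEEP if y == x else 1 - KEEP

def follow():                    # FollowExpt: the datum is static
    x = int(random.random() < P1)
    return rand_resp(x), rand_resp(x)

def bayes():                     # BayesExpt: resample x from the posterior
    x = int(random.random() < P1)
    y1 = rand_resp(x)
    post1 = lik(y1, 1) * P1 / (lik(y1, 1) * P1 + lik(y1, 0) * (1 - P1))
    x2 = int(random.random() < post1)
    return y1, rand_resp(x2)

cf = Counter(follow() for _ in range(N))
cb = Counter(bayes() for _ in range(N))
for key in sorted(cf):
    print(key, cf[key] / N, cb[key] / N)  # the two columns should agree
\end{verbatim}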
\subsection{Step 2: Sequential Simulation of Algorithm \ref{alg:fipbayes} via Rejection Sampling}
\label{subsec:rej}
We now show how to replace step $8$ in Algorithm \ref{alg:fipbayes} by selecting a new datapoint (drawn from $\mathcal{D}$) at every round and using rejection sampling to simulate a draw from $Q_{i,t}$. The result is a sequentially interactive mechanism that preserves the transcript distribution of Algorithm \ref{alg:fipbayes} (and, by Lemma \ref{lem:bayes}, of Algorithm \ref{alg:fexpt}), albeit one with a potentially very large increase in sample complexity (from $n$ to $T$). The rejection sampling step increases the privacy cost of the protocol by at most a factor of $2$.
We first review why it is non-obvious that rejection sampling can be performed in this setting. We want to sample from the target distribution $Q_{i,t}$, the posterior $x_i^{t}|\pi_{< t}$, using samples from the proposal distribution $\mathcal{D}$. Let $p_\pi$ denote the density function of $Q_{i,t}$ and let $p$ denote the density function of $\mathcal{D}$. In rejection sampling, we would typically sample $u \sim \mathcal{D}$, and with probability $\propto \frac{p_\pi(u)}{p(u)}$ we would accept $u$ as a sample drawn from $Q_{i,t}$, or else redraw another $u$ and continue.
This is not immediately possible in our setting, since the individuals (who must perform the rejection sampling computation) do not know the prior density $p$ and hence do not know the posterior $p_\pi$. As a result, they cannot compute either the numerator or the denominator of the expression for the acceptance probability. We solve this problem by using the fact that the proposal distribution is exactly the prior from which the posterior is derived: an application of Bayes' rule lets us rewrite the acceptance probability as a quantity depending only on a user's private data point and the transcript. Users may then compute this quantity themselves.
To define our transformed rejection sampler we set up some new notation: given a user $i$ and round $t$, let $\pi_{< t, i}$ denote the subset of the realized transcript up to time $t$ that corresponds to user $i$'s data, i.e. $\pi_{< t, i} = \{(i^{t'}, R_{t'}, \varepsilon_{t'}, \delta_{t'}, y_{t'}): t' < t, i^{t'}= i\}$. Let $\P{x_i}{\pi_{< t, i}}$ denote the conditional probability of the messages corresponding to user $i$ given the choices of privacy parameters and randomizers up to time $t$: $$\P{x_i}{\pi_{< t, i}} = \prod_{t' \colon i^{t'} = i}\P{R_{t'}}{R_{t'}(x_i, \epsilon_{t'}, \delta_{t'}) = y_{t'}}.$$ Using this notation, we define our rejection sampling procedure $\mathsf{RejSamp}$ in Algorithm~\ref{alg:RS}.
\begin{algorithm}
\caption{Rejection Sampling}\label{alg:RS}
\begin{algorithmic}[1]
\Procedure{$\mathsf{RejSamp}$}{$i, \pi_{ < t}, \varepsilon, \varepsilon_t, R_t(\cdot), \mathcal{D}$} \Comment{Publishing $\Pi_{ < t}$ is $(\varepsilon, 0)$-private}
\State Initialize indicator $\mathsf{accept} \gets 0$
\While{$\mathsf{accept} = 0$}
\State Draw a new user $x \sim \mathcal{D}$
\State User $x$ computes $p_x \gets \frac{\P{x}{\pi_{< t, i}}}{\max_{x^*}\P{x^*}{\pi_{< t, i}}}$
\State User $x$ publishes $\mathsf{accept} \sim \Ber{p_x/2}$
\If{$\mathsf{accept} = 1$}
\State User $x$ outputs $Y_t' \sim R_t(x, \varepsilon_t)$
\EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
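The following sketch restates Algorithm~\ref{alg:RS} as executable pseudocode. The helpers \texttt{draw\_user}, \texttt{transcript\_likelihood}, \texttt{max\_likelihood}, and \texttt{local\_randomizer} are placeholders for quantities supplied by the surrounding protocol (they are not part of any real API); the key point is that the acceptance probability depends only on the sampled user's own datum and the public transcript.
\begin{verbatim}
# Hedged sketch of RejSamp; the helper callables are placeholders
# supplied by the surrounding protocol, not a real library API.
import random

def rej_samp(draw_user, transcript_likelihood, max_likelihood,
             local_randomizer):
    """Simulate one draw from the posterior Q_{i,t} using fresh users.

    draw_user():              samples a fresh datum x ~ D (one new user).
    transcript_likelihood(x): P_x[pi_{<t,i}], computable from x alone.
    max_likelihood:           max over x* of P_{x*}[pi_{<t,i}].
    local_randomizer(x):      a sample from R_t(x, eps_t).
    Returns the simulated output and the number of fresh users consumed.
    """
    num_users = 0
    while True:
        x = draw_user()                   # fresh sample from the prior D
        num_users += 1
        p_x = transcript_likelihood(x) / max_likelihood
        if random.random() < p_x / 2:     # accept w.p. p_x/2, an eps-DP bit
            return local_randomizer(x), num_users
\end{verbatim}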
We now prove that $\mathsf{RejSamp}$ is private and does not need to sample many users.
\begin{lemma}
\label{lem:rs}
Let $Y_t \stackrel{d}{=} R_t(x'),$ where $x' \sim Q_{i,t}$ and let $Y_t'$ be defined by the rejection sampling algorithm $\mathsf{RejSamp}$ above. Let the sample complexity $N$ be the total number of new users $x$ drawn in step 4 of $\mathsf{RejSamp}$. Then $\mathsf{RejSamp}$ is $(\varepsilon + \varepsilon_t, 0)$-locally private, $Y_t \stackrel{d}{=} Y_t'$, and $\mathbb{E}[N] \leq 2e^{\varepsilon}$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:rs}]
\begin{claim}
$\mathsf{RejSamp}$ is $(\varepsilon+ \varepsilon_t)$-locally private.
\end{claim}
We first show that publishing a draw from $\Ber{p_x/2}$ is $(\varepsilon, 0)$-locally private. By assumption publishing $\pi_{< t}$, and hence publishing $\pi_{< t, i}$ (by post-processing), is $(\varepsilon, 0)$-private. Hence for any $x \in \mathcal{X}$ $$\P{}{1 \mid x} = p_x/2 = \frac{\P{x}{\pi_{< t, i}}}{2\max_{x^*}\P{x^*}{\pi_{< t, i}}} \in [1/(2e^{\varepsilon}),1/2].$$ Therefore for any $x, x'$, $\P{}{1 \mid x} \leq e^{\varepsilon} \P{}{1 \mid x'}$. Similarly, $$\P{}{0 \mid x} = (1 - p_x/2) \in [1/2, (2e^{\varepsilon} - 1)/(2e^{\varepsilon})],$$ and since $e^{\varepsilon} \geq 1 + \varepsilon \geq 2 - e^{-\varepsilon}$ (using $1 - \varepsilon \leq e^{-\varepsilon}$), multiplying through by $e^{\varepsilon}$ gives $e^{2\varepsilon} \geq 2e^{\varepsilon} - 1$, i.e., $e^{\varepsilon}/2 \geq (2e^{\varepsilon} - 1)/(2e^{\varepsilon})$. Thus for any $x, x'$, $\P{}{0 \mid x} \leq e^{\varepsilon} \P{}{0 \mid x'}$.
Releasing $R_t(x, \epsilon_t)$ is $\varepsilon_t$-locally private, so by composition the whole process is $(\varepsilon + \varepsilon_t)$-locally private.
\begin{claim}
$Y_t \stackrel{d}{=} Y_t'$
\end{claim}
It suffices to show that $x |\{\mathsf{accept} = 1\} \stackrel{d}{=} Q_{i,t}$. Fix any $x_0 \in \mathcal{X}$. Then by Bayes' rule
\begin{align*}
\P{}{x = x_0 \mid \mathsf{accept} = 1} =&\; \P{}{\mathsf{accept} = 1 \mid x = x_0} \cdot \frac{\P{}{x = x_0}}{\P{}{\mathsf{accept} = 1}} \\
=& \; \frac{\P{x_0}{\pi_{< t, i}}}{\max_{x^*}\P{x^*}{\pi_{< t, i}}} \cdot \frac{\P{}{x = x_0}} {\sum_{x'} \P{}{x = x'}\frac{\P{x'}{\pi_{< t, i}}}{\max_{x^*}\P{x^*}{\pi_{< t, i}}}} \\
=& \; \frac{\P{x_0}{\pi_{< t, i}}\P{}{x = x_0}} {\sum_{x'} \P{}{x = x'}\P{x'}{\pi_{< t, i}}} \\
=&\; \frac{\P{x_0}{\pi_{< t, i}}\P{}{x = x_0}} {\P{}{\pi_{< t, i}}} \\
=& \; \P{}{x = x_0 | \pi_{ < t, i}} \stackrel{d}{=} Q_{i,t},
\end{align*}
as desired. Finally, since $p_x/2 \geq \frac{1}{2e^{\varepsilon}}$, the expected number of samples until $\mathsf{accept} = 1$ is $\leq 2e^{\varepsilon}$.
\end{proof}
\subsection{Step 3: Data Independent Decomposition of Local Randomizers}
\label{subsec:decomp}
The preceding sections enable us to simulate a fully interactive $k$-compositional $(\varepsilon,0)$-locally private protocol with a sequentially interactive $(2\varepsilon,0)$-locally private protocol. However, our solution so far may require sampling a new user for each query in the original protocol. Since a fully interactive protocol's query complexity may greatly exceed its sample complexity, this is undesirable. To address this problem, we \emph{decompose} each local randomizer in a way that substantially reduces the number of queries that actually require samples.
Let $R: \mathcal{X} \to \mathcal{Y}$ be an $\epsilon'$ local randomizer, fix an arbitrary element $x_0 \in \mathcal{X}$, and let $x$ be a private input to $R$. Then Lemma 5.2 in \citet{BBGN19} shows that we can write $R(x)$ as a mixture $\gamma w + (1-\gamma)d_x$, where $w$ is a data-independent distribution, $d_x$ is a data-dependent distribution, and $\gamma \geq e^{-\epsilon'}$. This suggests that decomposition --- by answering a proportion of queries from data-independent distributions --- can reduce the sample complexity of our solution. Unfortunately, the data dependent distribution need not be differentially private (in fact, it often corresponds to a point mass on the private data point), so the privacy of the overall mechanism crucially relies on not releasing \emph{which} of the two mixture distributions the output was sampled from.
We first generalize this result, showing that for any $\epsilon \geq \epsilon'$, we can write $R(x)$ as $(1-\gamma) w + \gamma \tilde{R}(x)$ where $\tilde{R}$ is a $2\epsilon$-differentially private local randomizer, and $\gamma = \frac{e^{-\epsilon'}-1}{e^{-\varepsilon}-1}$ (Lemma~\ref{lem:decomp}). The upshot of this generalization is that even if we make public which part of the mixture distribution was used, the resulting privacy loss is still bounded by $2\epsilon$. Larger values of $\epsilon$ increase our chance of sampling from a data-independent distribution when simulating a local randomizer, while increasing the privacy cost incurred by a user in the event that we sample from the data-dependent mixture component. This tradeoff will be crucial for us in the proof of our main result.
\begin{lemma}[Data Independent Decomposition]
\label{lem:decomp}
Let $R: \mathcal{X} \to \mathcal{Y}$ be an $\epsilon'$-differentially private local randomizer and let $\varepsilon \geq \varepsilon'$. Then there exists a mapping $\tilde{R}$ and fixed data-independent distribution $\mu$ such that $\tilde{R}(\cdot)$ is a $2\varepsilon-$differentially private local randomizer and $$R(x) \stackrel{d}{=} \gamma \tilde{R}(x) + (1-\gamma)\mu, $$
where $\gamma = \frac{e^{-\epsilon'}-1}{e^{-\varepsilon}-1}$.
\end{lemma}
\begin{proof}
Let $0 < \epsilon' \leq \epsilon$, fix any $x_0 \in \mathcal{X}$, let $\gamma = \frac{e^{-\epsilon'}-1}{e^{-\epsilon}-1}$, and let $r(x)$ denote the density function of the local randomizer $R$ with input $x$ implicitly evaluated at some arbitrary point in the range, which we suppress. Since $\epsilon \geq \epsilon' > 0$, $\gamma \in (0, 1]$ is a valid mixture probability. Thus we can write $$r(x) = (r(x) - (1-\gamma)r(x_0)) + (1-\gamma)r(x_0)$$ and, rewriting the first term, $$r(x) - (1-\gamma)r(x_0) = \gamma (r(x_0) + \frac{1}{\gamma}(r(x)-r(x_0))) = \gamma \tilde{r}(x).$$ $\tilde r$ defines a new mapping $\tilde{R}(\cdot)$ by mapping $x$ to the random variable $\tilde{R}(x)$ with density function $\tilde{r}(x) = (r(x_0) + \frac{1}{\gamma}(r(x)-r(x_0)))$. Thus, it suffices to show that the mapping $\tilde{R}(x)$ is a $2\epsilon$-private local randomizer.
We first show that for any $x$, $\tilde{r}(x)$ is a well-defined density function. Since $R$ is an $\epsilon'$-private local randomizer, $r(x)-r(x_0) \geq (e^{-\epsilon'}-1)r(x_0)$, and so $$r(x_0) + \frac{1}{\gamma}(r(x)-r(x_0)) \geq r(x_0)\left(1 + \frac{e^{-\epsilon'}-1}{\gamma}\right) = r(x_0)e^{-\epsilon}.$$ This establishes that $\tilde{r}(x)$ is non-negative. Then since $$\int_{\Omega}\tilde{r}(x) = \int_\Omega r(x_0) + \frac{1}{\gamma}\int_\Omega(r(x)-r(x_0)) = 1 + \frac{1}{\gamma}(1-1) = 1,$$ $\tilde{r}(x)$ defines a valid density function for any $x$.
To see that $\tilde r$ is also a $2\epsilon$-private local randomizer, fix any outcome $o \in \mathcal{Y}$ and any other $x' \in \mathcal{X}$. Since $r$ is an $\varepsilon'$-local randomizer, $r(x) - r(x_0) \leq r(x_0)(e^{\varepsilon'}-1)$ and we get
\begin{align*}
\tilde r(x) =&\; r(x_0) + \frac{1}{\gamma}(r(x) - r(x_0)) \\
\leq&\; r(x_0) \left[1 + \frac{1}{\gamma}\left(e^{\varepsilon'}-1\right)\right] \\
=&\; r(x_0)\left[1 + \frac{1 - e^{-\varepsilon}}{1 - e^{-\varepsilon'}}\left(e^{\varepsilon'}-1\right)\right] \\
=&\; r(x_0)\left[1 + e^{\varepsilon'} \cdot \left(1 - e^{-\varepsilon}\right)\right] \\
\leq&\; r(x_0)\left[1 + e^{\varepsilon} \cdot \left(1 - e^{-\varepsilon}\right)\right] = r(x_0)e^\varepsilon.
\end{align*}
We already showed $\tilde r(x') \geq e^{-\varepsilon}r(x_0)$, so $$\frac{\tilde{r}(x)(o)}{\tilde{r}(x')(o)} \leq \frac{e^{\epsilon}r(x_0)(o)}{e^{-\epsilon}r(x_0)(o)} \leq e^{2\epsilon}.$$
\end{proof}
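Lemma~\ref{lem:decomp} can be checked numerically for a concrete randomizer. The snippet below is an illustration under our own parameter choices, with binary randomized response standing in for $R$; it verifies both the mixture identity $r(x) = \gamma \tilde{r}(x) + (1-\gamma) r(x_0)$ and the $2\varepsilon$ likelihood-ratio bound on $\tilde{R}$.
\begin{verbatim}
# Numerical check of the decomposition lemma for binary randomized
# response; parameters are illustrative.
import math

def rr_density(x, eps):
    """Output density of eps-randomized response on bit x, over {0, 1}."""
    keep = math.exp(eps) / (1 + math.exp(eps))
    return {x: keep, 1 - x: 1 - keep}

eps_p, eps, x0 = 0.3, 1.0, 0
gamma = (math.exp(-eps_p) - 1) / (math.exp(-eps) - 1)
r0 = rr_density(x0, eps_p)

def r_tilde(x):
    rx = rr_density(x, eps_p)
    return {o: r0[o] + (rx[o] - r0[o]) / gamma for o in (0, 1)}

for x in (0, 1):
    rx, tx = rr_density(x, eps_p), r_tilde(x)
    for o in (0, 1):  # mixture identity: r(x) = g*r~(x) + (1-g)*r(x0)
        assert abs(rx[o] - (gamma * tx[o] + (1 - gamma) * r0[o])) < 1e-12
for o in (0, 1):      # r~ is a 2*eps-private local randomizer
    ratio = r_tilde(0)[o] / r_tilde(1)[o]
    assert math.exp(-2 * eps) - 1e-12 <= ratio <= math.exp(2 * eps) + 1e-12
print("gamma =", gamma)
\end{verbatim}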
\subsection{Putting it All Together: The Complete Simulation}
Finally, we combine rejection sampling and decomposition to give our complete reduction, Algorithm~\ref{alg:red}. We use rejection sampling to convert from a fully interactive mechanism to a sequentially interactive one and use our data-independent decomposition of local randomizers to reduce the sample complexity of the converted mechanism.
\begin{algorithm}
\caption{$\ensuremath{\mathsf{Reduction}}$}\label{alg:red}
\begin{algorithmic}[1]
\Procedure{$\ensuremath{\mathsf{Reduction}}$}{Fully interactive $(\varepsilon, 0)-$LDP Protocol $\mathcal{A}, \mathcal{D}, n$}
\State Initialize $s_1, \ldots, s_n \gets 0$. \Comment{indicator if user $i$ has been selected yet}
\For{$t = 1 \ldots $}
\If{$\mathcal{A}(\pi_{<t}) = \perp$}
\State Output transcript $\pi_{<t}$
\Else
\State $(i^{t}, R_t, \varepsilon_t) \gets \mathcal{A}(\pi_{<t})$
\If{$s_{i^{t}} = 1$}
\State Let $\gamma \gets \frac{e^{-\epsilon_t}-1}{e^{-\varepsilon}-1}$
\State Let $R_t = \gamma \tilde{R}_t + (1-\gamma)R_t(x_0)$ \Comment{Data Decomposition}
\State Draw $\rho \sim \mathsf{Unif}(0,1)$
\If{$\rho \leq \gamma$}
\State Draw $Y_t \sim \mathsf{RejSamp}(i^{t}, \pi_{ < t}, \varepsilon, 2\varepsilon, \tilde{R}_t(\cdot), \mathcal{D})$
\Else
\State Draw $Y_t \sim R_t(x_0, \epsilon_t)$ \Comment{Data independent distribution}
\EndIf
\Else
\State Draw $x_{i^{t}} \sim Q_{i,t} = \mathcal{D}$, then draw $Y_t \sim R_t(x_{i^{t}}, \epsilon_t)$ \Comment{$Q_{i,t} = \mathcal{D}$ since $s_{i^{t}} = 0$}
\State Let $s_{i^{t}} \gets 1$
\EndIf
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
We now prove that $\ensuremath{\mathsf{Reduction}}$ has the desired interactivity, privacy, transcript, and sample complexity guarantees. We again denote by $N$ the sample complexity of $\ensuremath{\mathsf{Reduction}}$, i.e. the number of samples drawn from the prior $\mathcal{D}$ over the run of the algorithm, either in line $18$ (which is bounded by $n$), or over the runs of $\mathsf{RejSamp}$ in line $13$. We observe that sampling from the prior $\mathcal{D}$ simply corresponds to using a new datapoint drawn from $\mathcal{D}$. Fixing a protocol $\mathcal{A}$, let $\Pi^r$ denote the transcript random variable generated by $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$, and let $\Pi^b$ denote the transcript random variable generated by $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$.
\begin{theorem}
\label{thm:main}
Let $\mathcal{A}$ be a fully-interactive $k$-compositional $(\varepsilon, 0)$-locally private protocol. Then
\begin{enumerate}
\item $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ is sequentially interactive,
\item $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ is $(3\varepsilon,0)$-locally private,
\item $\Pi^r \stackrel{d}{=} \Pi^b$,
\item $\mathbb{E}[N] \leq n(\frac{2e^{\varepsilon}\cdot \varepsilon}{1-e^{-\varepsilon}}k + 1)$, and with probability $1-\delta$, $N = O(nk + \sqrt{nk\log \frac{1}{\delta}})$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
\textbf{1. Interactivity:} Since each user $i$'s data is only used once (before $s_i$ is set to $1$), $\ensuremath{\mathsf{Reduction}}$ is sequentially interactive.
\textbf{2. Privacy:} Consider a data point $x$ corresponding to an arbitrary user over the run of $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$. Then either $x$ is drawn in line $18$, or $x$ is drawn during a rejection sampling step. In the first case, $x$ is used only once in line $18$, as input to an $\varepsilon_t$-local randomizer, preserving $(\varepsilon, 0)$-LDP, since $\varepsilon_t \leq \varepsilon$. If $x$ is drawn during the rejection sampling step, then it is used in a rejection sampling step that simulates a draw from a $(2\varepsilon, 0)$-local randomizer $\tilde{R}(\cdot)$, where the input transcript $\pi_{< t}$ has been generated $(\varepsilon, 0)$-privately. The privacy of the input transcript is relevant because it bounds the privacy of the user's rejection sampling step. By Lemma~\ref{lem:rs}, this is $(3\varepsilon, 0)$-private.
\textbf{3. Transcripts:} We prove this claim by an argument similar to that of Lemma~\ref{lem:bayes}: we show by induction that the transcript distribution at each step $t$ is the same for $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ and $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$. This is trivially true at $t = 1$. Now suppose it is true up to time $t+1$, i.e. $\Pi^r_{< t+1} \stackrel{d}{=} \Pi^b_{< t+1}$. Then since the joint distributions $\Pi_{ < t +2}$ factor as $(i^{t+1}, R_{t+1}, \epsilon_{t+1}, Y_{t+1}|\Pi_{ < t +1}) \cdot \Pi_{ < t +1}$, it suffices to show that the conditional distributions on $i^{t+1}, R_{t+1}, \epsilon_{t+1}, Y_{t+1}|\Pi_{ < t +1}$ coincide.
Note that under both $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ and $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$, protocol $\mathcal{A}$ is used to select $i^{t+1}, R_{t+1}, \epsilon_{t+1}$ as a function of $\Pi_{ < t+1}$, so we can condition on $i^{t+1}, R_{t+1}, \epsilon_{t+1}$ as well, and need only show that the distribution on $Y_{t+1}$ is the same. Under $\ensuremath{\mathsf{BayesExpt}}(\mathcal{A})$, $Y_{t+1}$ is drawn from $R_{t+1}(u, \epsilon_{t+1}), u \sim Q_{i, t+1}$. There are two cases for $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$:
\begin{itemize}
\item If $s_{i^{t+1}} = 0$, then under $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$, $Y_{t+1}$ is drawn in line $18$ from $R_{t+1}(u, \epsilon_{t+1}), u \sim Q_{i, t+1}$, as desired.
\item If $s_{i^{t+1}} = 1$, then $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ uses Lemma~\ref{lem:decomp} to write $R_{t+1}(\cdot)$ as a mixture. Hence if we sample from the mixture with input $u \sim Q_{i, t+1}$, we sample from $R_{t+1}(u)$, which is the desired sampling distribution. To see that $\ensuremath{\mathsf{Reduction}}(\mathcal{A})$ does sample from the target, we need only show that $Y_{t+1}$ drawn in line $13$ is sampled from $\tilde{R}_{t+1}(u)$ where $u \sim Q_{i, t+1}$. This is true by Lemma~\ref{lem:rs}.
\end{itemize}
\textbf{4. Sample Complexity:} Here we bound the expected sample complexity, deferring the high probability bound to Section~\ref{sec:app_highprob} in the Appendix. Let $N_i$ be the number of fresh samples drawn over all rounds $t$ where $i^t = i$, i.e. the number of samples drawn when simulating follow-up queries to $i$. Let $N_i^t$ be the number of samples drawn during rejection sampling in round $t$; we imagine that regardless of the coin-flip in line $11$ of the pseudocode of \ensuremath{\mathsf{Reduction}}, $N_i^t$ is always drawn. Then the total number of samples is $N_i = \sum_{t =1}^{T}B_tN_i^t$, where $B_t \sim \Ber{\gamma_t}$ is the independent coin-flip of line $11$. (Note that for simplicity, we are summing over all rounds $T$, since equivalently we may imagine that each user is given a local randomizer at each round, with privacy cost $0$ in any round in which $i_t \neq i$.) Then by Lemma~\ref{lem:rs} $$\mathbb{E}[N_i] = \sum_{t =1}^{T}\gamma_t \mathbb{E}[N_i^t] \leq \sum_{t =1}^{T}\gamma_t 2e^{\varepsilon} = \frac{2e^{\varepsilon}}{1 - e^{-\varepsilon}}\sum_{t =1}^{T} (1 - e^{-\varepsilon_t}).$$ Since $1-x \leq e^{-x}$, we get that $1 - \varepsilon_t \leq e^{-\varepsilon_t}$ and so $1 - e^{-\varepsilon_t} \leq \varepsilon_t$. Hence $$\mathbb{E}[N_i] \leq \frac{2e^{\varepsilon}}{1-e^{-\varepsilon}}\sum_{t=1}^{T}\epsilon_t \leq \left(\frac{2e^{\varepsilon}\cdot \varepsilon}{1-e^{-\varepsilon}}\right)k.$$ Summing over $i$, and including the at most $n$ samples drawn in line $18$, bounds the expected sample complexity by $((\frac{2e^{\varepsilon}\cdot \varepsilon}{1-e^{-\varepsilon}})k + 1)n$, as desired.
\end{proof}
\section{Appendix}
\section{Additional Information about Data}
\label{app:dictionary}
Figure~\ref{fig:tok_per_lang} displays the dictionary coverage for each of our 100 languages.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{dictionary_coverage.pdf}
\caption{\textbf{Dictionary Coverage} per Language}
\label{fig:tok_per_lang}
\end{figure}
\section{Model Architectures}
\label{app:dense}
Table~\ref{tbl:wide_deep} shows the various model configurations considered in our experiments when scaling dense models.
\begin{table}[h!]
\centering
\begin{tabular}{l c c c }
\toprule
\bf Size & \bf Embed & \bf FFN & \bf Layers \\
\midrule
1B Wide & 1024 & 16K & 14 \\
1B Deep & 1024 & 4K & 38 \\
\midrule
2B Wide & 2048 & 16K & 11 \\
2B Deep & 1024 & 8K & 48 \\
\midrule
10B Wide & 4096 & 16K & 24 \\
10B Deep & 3072 & 12K & 36 \\
\bottomrule
\end{tabular}
\caption{\textbf{Architecture of Wide and Deep Models.}
}
\label{tbl:wide_deep}
\end{table}
\section{Exploiting Multilinguality at Inference Time with Multi-source Self-Ensembles}
Throughout the paper, we explored how to improve the performance of single models, scaling the amount of data as well as the model size, but there remain numerous directions for future investigation of multilinguality. One direction is understanding how to exploit the nature of multilingual translation at inference time as well.
A known, effective strategy to improve accuracy is to ensemble multiple models at inference time.
However, this requires training multiple models which substantially increases the training compute requirements.
Instead, we suggest exploring self-ensembles, created by applying the multilingual model to the same source sentence in different languages.
For example, if we wish to translate Galician to English, then instead of directly translating between the two, we ensemble the translation of Spanish to English with the translation of Galician to English, using the same multilingual model for both directions and averaging the predicted token log-probabilities, as in standard multi-model ensembles.
The additional source is obtained by translating the input to another \textit{intermediary} language.
After this, we ensemble the translations of both sources into the target language.
This uses the same multilingual model for all steps.
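To make the procedure concrete, the sketch below shows one way a greedy decoder could average token log-probabilities across the two sources. It is our own schematic, not fairseq code: \texttt{logprobs} and \texttt{translate} are hypothetical helpers wrapping the multilingual model.
\begin{verbatim}
# Schematic multi-source self-ensembling with a single multilingual
# model. `logprobs` and `translate` are hypothetical model wrappers.
def self_ensemble_decode(model, src, src_lang, inter_lang, tgt_lang,
                         logprobs, translate, weight=0.5, max_len=200,
                         eos="</s>"):
    # Obtain the additional source by translating the input to the
    # intermediary language with the same model.
    inter_src = translate(model, src, src_lang, inter_lang)
    prefix = []
    for _ in range(max_len):
        lp_a = logprobs(model, src, src_lang, tgt_lang, prefix)
        lp_b = logprobs(model, inter_src, inter_lang, tgt_lang, prefix)
        # weighted average of the predicted token log-probabilities
        scores = {t: weight * lp_a[t] + (1 - weight) * lp_b[t]
                  for t in lp_a}
        token = max(scores, key=scores.get)
        if token == eos:
            break
        prefix.append(token)
    return prefix
\end{verbatim}
In practice, beam search replaces the greedy loop, and the weight balancing the two directions is tuned on validation data.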
\begin{table}
\centering
\small
\begin{tabular}{lr}
\toprule
\bf Model & \bf BLEU \\
\midrule
Multilingual & 17.3 \\
Multi-Model Ensemble & 17.5 \\
Pivoting with Multilingual & 17.0 \\
Multi-source Self-Ensemble & 17.5 \\
\bottomrule
\end{tabular}
\caption{\textbf{Results on zero-shot language pairs for Multi-Source Self-Ensemble} compared to various baselines. We report the average test BLEU score on 100 randomly sampled pairs.}
\label{tab:multisource_zeroshot_results}
\end{table}
We evaluate both pivoting and self-ensembling on zero-shot directions as these can benefit from better accuracy.
We report results on 100 randomly sampled zero-shot translation directions which have at least 1000 examples in the validation and test set.
Next, for each translation direction, we choose the intermediary language that resulted in the highest BLEU on the validation set; the same is done to choose the intermediary language for pivoting.
We also tune a weight to balance the two language directions~\citep{garmash2016ensemble}.
Table~\ref{tab:multisource_zeroshot_results} shows that multi-source self-ensembling improves the single model result by 0.2 BLEU on average.
It also performs as well as standard multi-model ensembling but requires training only a single model.
This is particularly relevant for large models trained on vast quantities of data, which require a lot of compute to be able to perform standard ensembling.
\section{Preliminaries}
\label{sec:prelim}
In this work, we investigate how we can best translate from 100 languages to 100 languages, or 9900 directions, using a single model.
We describe our starting point in this section, and provide preliminary context on Transformer-based neural machine translation models.
Sequence-to-sequence models are trained on pairs of sequences, conditioning on an input sequence to produce an output sequence.
Each sentence is split into tokens, that can be words or characters, resulting in pairs of sequences $(w_1,\dots,w_S)$ and $(v_1,\dots,v_T)$.
Most machine translation systems are trained by maximizing the probability of the target sequence, given the source sentence and the target language $\ell_t$:
$$P(v_1,\ \dots,\ v_T~|~w_1,\ \dots,\ w_S,\ \ell_t)$$
Modern neural machine translation systems are based on several standard components, namely a subword segmentation method and an encoder-decoder architecture called a Transformer.
We describe these components in the context of multilingual translation.
\paragraph{Segmentation with SentencePiece.}
The input and output of translation systems are sequences of tokens.
These tokens are units from a dictionary built with the goal to reconstruct any sentence in any language.
Using words as base units is challenging, as it leads either to vocabularies with poor coverage or to large vocabularies.
This is especially true in the multilingual setting.
Another limitation of word-based systems is that some languages, such as Thai, are not naturally split into words.
An alternative approach is to use \textit{subword} units, which are learned directly from data~\citep{sennrich2015neural,kudo2018sentencepiece}.
We use SentencePiece\footnote{\url{https://github.com/google/sentencepiece}} as it was designed to work with languages with no segmentation, making it particularly suited to our setting.
We train a model with 0.9995 character coverage to have sufficient representation of character-based languages.
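For concreteness, a SentencePiece training invocation with this character coverage looks roughly as follows. The corpus path is a placeholder, and the \texttt{model\_type} shown is an assumption rather than a confirmed detail of our pipeline.
\begin{verbatim}
# Illustrative SentencePiece training call; the input path is a
# placeholder and model_type is an assumption, not a confirmed setting.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="sampled_multilingual_corpus.txt",  # temperature-sampled text
    model_prefix="multilingual_spm",
    vocab_size=128000,            # size of the multilingual dictionary
    character_coverage=0.9995,    # keep rare characters for all scripts
    model_type="unigram",
)
sp = spm.SentencePieceProcessor(model_file="multilingual_spm.model")
print(sp.encode("Je ne parle pas anglais.", out_type=str))
\end{verbatim}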
\paragraph{Creating a Multilingual Dictionary.}
SentencePiece produces subword units depending on their frequency in the training dataset.
Naively applying it to our corpora would result in low resource languages and languages written in less frequent scripts being underrepresented in the resulting dictionary.
Randomly sampling data favors overrepresented languages because the probability of picking language $\ell$ is proportional to its number of sentences, $D_\ell$, i.e., $p_\ell = \frac{D_\ell}{\sum_i{D_i}}$.
We circumvent this problem by adding monolingual data for low resource languages and by using temperature sampling with $T=5$.
More precisely, the probability $p_\ell$ is rescaled to be proportional to $p_\ell^{\frac{1}{T}}$ and renormalized, where the temperature $T$ controls the smoothness of the distribution.
For example, setting $T$ to $1$ gives the original data distribution.
The resulting dictionary contains 128k unique tokens that are well distributed across languages, as shown in Appendix~\ref{app:dictionary}.
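The rescaling described above is simple to state in code. The snippet below, with invented corpus sizes, shows how $T=5$ flattens the distribution in favor of low resource languages.
\begin{verbatim}
# Temperature rescaling of language sampling probabilities; the corpus
# sizes here are made up for illustration.
def temperature_probs(num_sentences, T=5.0):
    total = sum(num_sentences.values())
    p = {l: n / total for l, n in num_sentences.items()}
    q = {l: pl ** (1.0 / T) for l, pl in p.items()}
    z = sum(q.values())
    return {l: ql / z for l, ql in q.items()}  # renormalized p_l^(1/T)

sizes = {"en": 1000000, "fr": 200000, "yo": 1000}
print(temperature_probs(sizes, T=1.0))  # original data distribution
print(temperature_probs(sizes, T=5.0))  # flatter: low resource upweighted
\end{verbatim}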
\subsection{Transformers}
Our multilingual machine translation model is based on the Transformer sequence-to-sequence architecture, which is composed of two modules: the encoder and the decoder~\citep{vaswani2017attention}.
The encoder transforms the source token sequence into a sequence of embeddings of the same length.
Then, the decoder sequentially produces the target sentence token by token, i.e., autoregressively.
More precisely, the encoder takes the sequence of tokens $W=(w_1,\dots,w_S)$ and the source language $\ell_s$, and produces a sequence of embeddings $H=(h_1,\dots,h_S)$, which are then fed to the decoder with the target language $\ell_t$ to produce the sequence of target tokens $V=(v_1,\dots,v_T)$ sequentially, i.e.,
\begin{eqnarray}
H &=& \texttt{encoder} (W,\ \ell_s),\\
\forall i\in[0,\dots,T-1],~v_{i+1} &=& \texttt{decoder} (H,\ \ell_t,\ v_1,\ \dots,\ v_i).
\end{eqnarray}
Both the encoder and decoder are composed of the same type of layers, called Transformer layers.
Each Transformer layer takes a sequence of vectors as input and outputs a sequence of vectors.
In the encoder, transformer layers are composed of two sublayers, a self-attention and a feed-forward layer.
These are applied sequentially and are both followed by a residual connection~\citep{he2015deep} and layer normalization~\citep{ba2016layer}:
\begin{eqnarray}
Z &=& \texttt{norm}\left(X + \texttt{self-attention}(X) \right),\\
Y &=& \texttt{norm}\left(Z + \texttt{feed-forward}(Z) \right).
\end{eqnarray}
The self-attention layer is an attention layer that updates each element of the sequence by looking at the other elements, while the feed-forward layer (FFN) passes each element of the sequence independently through a 2-layer MLP.
In the decoder, there is an additional third sublayer, between the self-attention and the feed-forward, which computes attention over the output of the encoder.
We refer the reader to \citet{vaswani2017attention} for details of these layers.
\paragraph{Target language token.}
The Transformer architecture has been designed for the bilingual case, where the target language is fixed.
In the case of multilingual machine translation, the target language is not fixed, and several strategies can be applied to condition the network to produce a sentence in the desired target language.
Similarly to~\citet{ha2016toward} and~\citet{johnson2017google}, we add a special token in the encoder indicating the source language and a special token in the decoder indicating the target language.
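Concretely, the conditioning amounts to splicing special tokens into each sequence. The helper below is illustrative; the \texttt{\_\_lang\_\_} token format is a common convention, not necessarily our exact implementation.
\begin{verbatim}
# Illustrative splicing of language tokens; the __lang__ format is a
# common convention, not necessarily the exact one used here.
def add_language_tokens(src_tokens, tgt_tokens, src_lang, tgt_lang):
    enc_input = ["__%s__" % src_lang] + src_tokens  # source language token
    dec_input = ["__%s__" % tgt_lang] + tgt_tokens  # target language token
    return enc_input, dec_input

enc, dec = add_language_tokens(["Guten", "Tag"], ["Good", "day"],
                               "de", "en")
print(enc)  # ['__de__', 'Guten', 'Tag']
print(dec)  # ['__en__', 'Good', 'day']
\end{verbatim}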
\paragraph{Training.}
Our starting point for improving massively multilingual translation models is a large Transformer model, with 12 encoder and 12 decoder layers, with 8192 FFN size and 1024 embedding dimension. We share the weight matrices of the input and output embeddings. The total parameter count is 1.2B. We train with the Adam optimizer~\citep{kingma2015adam}, warming up the learning rate for the first 4000 updates, with label smoothing $0.1$~\citep{szegedy:inception:2015,pereyra:regularize:2017}. For regularization, we tune the dropout parameter between $\{0.1, 0.2, 0.3\}$. To stabilize the training of deeper Transformers, we train with LayerDrop~\citep{fan2019reducing} 0.05 and pre-normalization~\citep{nguyen2019transformers}.
To train with billions of sentences, we split the training data into 256 different shards to manage memory consumption. However, directly dividing the data of mid and low resource languages across all shards would leave very little data per shard for these languages. Imagine the case where there are only $100$ sentences of a language direction per shard --- the model would easily overfit. Thus, each language is divided into a different number of shards based on resource level, such that high resource languages have more shards and the lowest resource languages only have one shard. Subsequently, lower resource shards are replicated until the full number of shards is reached.
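A minimal sketch of this resource-dependent sharding is shown below; the thresholds are invented for illustration, and only the overall scheme (more shards for high resource languages, replication for low resource ones) follows the text.
\begin{verbatim}
# Sketch of resource-dependent sharding; thresholds are illustrative.
NUM_SHARDS = 256

def shards_for_language(num_sentences):
    """How many distinct shards a language's data is split into."""
    if num_sentences > 100000000:
        return NUM_SHARDS
    if num_sentences > 10000000:
        return 64
    if num_sentences > 1000000:
        return 8
    return 1                       # lowest resource: a single shard

def assign_to_shards(sentences):
    k = shards_for_language(len(sentences))
    shards = [sentences[i::k] for i in range(k)]
    # replicate lower resource shards until all 256 slots are filled
    return [shards[i % k] for i in range(NUM_SHARDS)]
\end{verbatim}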
\paragraph{Generation.}
Unless otherwise specified: for all results, we report single models with no checkpoint averaging, use beam search with beam 5, and do not tune length penalty.
\section{Many-to-Many Compared to English Centric}
\label{sec:comp}
In this section, we first present an experiment to better understand the performance improvements of English-Centric systems and to compare them to our Many-to-Many setting.
\paragraph{Experimental Setting.}
We train our $1.2$B model on the full $100$ language Many-to-Many dataset and compare it to the same model trained only on data through English.
We use the same vocabulary built with SentencePiece on the full dataset in both cases.
Each model has a different dataset size and we train for 500K updates.
This number of updates corresponds to one pass over the entire Many-to-Many dataset and $3.5$ passes on the English-centric data.
We tune the dropout rate for each model over the values $\{0.1, 0.2, 0.3\}$.
\subsection{Main Result}
\begin{table}[t]
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{l ccc}
\toprule
\bf Setting & \bf To English & \bf From English & \bf Non-English \\
\midrule
Bilingual baselines & 27.9 & \bf 24.5 & 8.3 \\
English-Centric & 31.0 & 24.2 & 5.7 \\
English-Centric with Pivot & --- & --- & 10.4 \\
\midrule
Many-to-Many & \bf 31.2 & 24.1 & \bf 15.9 \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of Many-to-Many and English-Centric Systems.} Many-to-Many matches the performance of English-centric on evaluation directions involving English, but is significantly better on non English directions.
}
\label{tab:m2m_english}
\end{table}
\begin{table}[t]
\begin{minipage}{0.47\textwidth}
\centering
\small
\begin{tabular}{l cc}
\toprule
\bf Setting & \bf w/ bitext & \bf w/o bitext\\
\midrule
En-Centric & 5.4 & 7.6 \\
En-Centric Piv. & 9.8 & 12.4 \\
\midrule
M2M & \bf 12.3 & \bf 18.5 \\
\bottomrule
\end{tabular}
\caption{
\textbf{Many-to-Many versus English-Centric on zero-shot directions.}
We report performance on language pairs with and without bitext in the Many-to-Many training dataset.
}
\label{tab:m2m_english2}
\end{minipage}
\hfill
\begin{minipage}{0.47\textwidth}
\centering
\small
\begin{tabular}{l ccc}
\toprule
\bf Setting & \bf $\rightarrow$En & \bf En$\rightarrow$ & \bf Non-En \\
\midrule
En-Centric & \bf 26.4 & 17.8 & 2.4 \\
En-Centric Piv. & --- & --- & 5.1 \\
\midrule
M2M & 25.7 & \bf 18.1 & \bf 9.4 \\
\bottomrule
\end{tabular}
\caption{
\textbf{Many-to-Many versus English-Centric on one pass of data.}
We report performance for models after a number of updates equivalent to the size of the English-centric dataset.
}
\label{tab:m2m_english_onepass}
\end{minipage}
\end{table}
In Table~\ref{tab:m2m_english}, we compare the performance of both models on different types of directions, namely, any language to English (To English), English to any language (From English), and all the directions not involving English (Non-English).
Performance is aggregated over $150$ directions for To English and From English, and over 2500 directions for Non-English.
On the pairs including English, both models achieve similar performance, suggesting that a $1.2$B model does not underfit even though the additional non-English data represents $98\%$ of the directions and 74\% of the data.
For the non-English pairs, we consider two translation strategies for the English-Centric model: directly translating as if the model was trained on the pair -- by using the corresponding language tokens -- or by pivoting through English.
Our model outperforms direct translation with the English-Centric model by $10.2$ BLEU and when the English-Centric model uses pivoting by $5.5$ BLEU.
While this result is not surprising, it confirms that a purely English-Centric model has limited potential on non-English pairs, and there is a fundamental need for training on Many-to-Many data.
\subsection{Understanding the Source of Improvement}
The main impact of adding Many-to-Many data is on the directions that do not include English.
In this section, we provide a detailed study of where we observe the largest improvements with the additional data.
\paragraph{Impact on Zero-shot.}
Many non-English pairs are not covered by our Many-to-Many model, and we can thus study if the improvements we observe originate primarily from directions associated with bitext data or if we observe the same improvement on directions where the Many-to-Many model generates translations in a zero-shot fashion.
In Table~\ref{tab:m2m_english2}, we show the performance if the evaluation is split between the Non-English pairs \textit{with} and \textit{without} bitext.
On directions with bitext, the Many-to-Many model outperforms the English-Centric model by $7$ BLEU for direct translation, and by $3.5$ BLEU for English-Centric with pivoting.
This shows the importance of diverse data.
Not surprisingly, this gap is even bigger on pairs without bitext.
Many-to-Many performs nearly $11$ BLEU better than the English-Centric model for direct translation, and with pivoting the gain remains over $6$ BLEU.
\paragraph{Impact of the quantity of training data.}
A hypothesis to explain the gain between English-Centric and Many-to-Many models is the effect of additional source and target side training data.
Even if the Many-to-Many system has never seen a direction at training time, it benefits from additional source and target side data available through other training pairs.
As mining non-English language pairs creates more training data compared to English-centric datasets, the Many-to-Many model benefits from a larger training set.
In Table~\ref{tab:m2m_english_onepass}, we compare both models after seeing the same quantity of data.
We train both models for the number of updates corresponding to one pass over the English-Centric dataset. The English-Centric model performs better on To English directions, likely because it only has one output language to learn, but the Many-to-Many model outperforms on From English directions and Non-English directions.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{improve_v_data.png}
\caption{\textbf{Improvement of Many-to-Many over English-centric} with respect to the amount of mined training data (left) and the amount of target side language data (right). Improvement from a Many-to-Many model correlates with greater amounts of bilingual training data with Pearson correlation 0.38 (left) and greater amounts of target language data with Pearson correlation 0.32 (right).
}
\label{fig:bt_fig3}
\end{figure}
\paragraph{Which Pairs Improve the Most?}
The main factor for improvement is the quantity of data associated with either a pair or a language.
Pairs that have a large quantity of mined data, such as Spanish-Portuguese, greatly benefit from our Many-to-Many dataset.
We show this effect in the left panel of Figure~\ref{fig:bt_fig3}.
A second source of improvement is observed on languages for which the Many-to-Many dataset contains a large amount of data across many pairs.
This data benefits the decoder-side language model in a way that is comparable with BT.
In the right panel of Figure~\ref{fig:bt_fig3}, we show the impact of this cumulative monolingual data on the average performance per language.
Finally, we observe a third type of improvement, which stems from the similarity in vocabulary and syntax of related languages.
A striking example is the quality of translation between English and Belarusian, where the Many-to-Many model achieves 12.7 BLEU on the TED evaluation set, compared to 3.2 BLEU for a bilingual model.
The number of bitexts for Belarusian is small, but Belarusian is related to Russian, and the Many-to-Many model transfers its knowledge from Russian to Belarusian.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{mining_v_bt.pdf}
\caption{\textbf{Performance of many-to-English multilingual translation compared to bilingual baselines trained on mined data and bilingual + backtranslation.}
The average performance of many-to-English is 25.1 BLEU compared to 25.2 BLEU for back-translation while the bilingual system achieves 23.1.
}
\label{fig:bt_fig2}
\end{figure}
\subsection{Understanding the Performance of English-Centric Systems}
In Table~\ref{tab:m2m_english}, we confirm an observation made in~\citet{arivazhagan2019massively} that an English-Centric model improves the most over bilingual models on the directions into English, while improvement in the other directions (From English) remain more modest.
A hypothesis to explain this discrepancy between directions from and to English is that the decoder of an English-Centric model learns a better English language model by leveraging the aggregated English data across all through-English directions.
\paragraph{Result.}
We test this hypothesis with a controlled experiment where we compare a Many-to-English model with bilingual models using backtranslated English data (\autoref{sec:bt}).
The experiment is based on 11 Slavic languages and we backtranslate the exact same English data as was used to train the Many-to-English model so that both models are trained on the same English data.
Figure~\ref{fig:bt_fig2} shows that backtranslation performs comparably to the Many-to-English approach.
While this improves our understanding of Many-to-English translation, a multilingual approach nevertheless retains the advantage of combining many directions into a single model which greatly simplifies modeling.
\section{Building a Many-to-Many Parallel Dataset for 100 Languages}
\label{sec:data}
In this section, we provide an overview of our Many-to-Many setting: the selection of the $100$ languages, the evaluation benchmarks, and the construction of a large-scale training set through data mining~\citep{artetxe2018margin} and backtranslation~\citep{sennrich2016backtranslation} that provides training data for thousands of directions.
\subsection{Creating a Multilingual Benchmark}
The first step of establishing a Many-to-Many dataset is to select $100$ languages for which there already exist high-quality, annotated datasets that can be used for model evaluation.
\subsubsection{Language Selection}
\newcommand{\tabtl}[1]{\begin{tabular}[h]{@{}l@{}} #1 \end{tabular}}
\newcommand{\multicolumn}{\multicolumn}
\newcommand{\multirow}{\multirow}
\begin{table}[]
\footnotesize
\scriptsize
\centering
\begin{tabular}{l@{\,}l l l | l@{\,}l l l }
\toprule
\bf ISO & \bf Language & \bf Family & \bf Script & \bf ISO & \bf Language & \bf Family & \bf Script \\
\midrule
af & Afrikaans & Germanic & Latin & ja & \textbf{Japanese} & Japonic & Kanji; Kana \\
da & Danish & Germanic & Latin & ko & \textbf{Korean} & Koreanic & Hangul \\
nl & \textbf{Dutch} & Germanic & Latin & vi & \textbf{Vietnamese} & Vietic & Latin \\
de & \textbf{German} & Germanic & Latin & zh & \textbf{Chinese Mandarin} & Chinese & Chinese \\
\cmidrule{5-8} \\[-10pt]
en & \textbf{English} & Germanic & Latin & bn & \textbf{Bengali} & Indo-Aryan & Eastern-Nagari \\
is & Icelandic & Germanic & Latin & gu & Gujarati & Indo-Aryan & Gujarati \\
lb & Luxembourgish & Germanic & Latin & hi & \textbf{Hindi} & Indo-Aryan & Devanagari \\
no & Norwegian & Germanic & Latin & kn & Kannada & Dravidian & Kannada \\
sv & \textbf{Swedish} & Germanic & Latin & mr & Marathi & Indo-Aryan & Devanagari \\
fy & Western Frisian & Germanic & Latin & ne & Nepali & Indo-Aryan & Devanagari \\
yi & Yiddish & Germanic & Hebrew & or & Oriya & Indo-Aryan & Odia \\
\cmidrule{1-4} \\[-10pt]
ast & Asturian & Romance & Latin & pa & Panjabi & Indo-Aryan & Gurmukhi \\
ca & Catalan & Romance & Latin & sd & Sindhi & Indo-Aryan & \tabtl{Persian\\ Devanagari} \\
fr & \textbf{French} & Romance & Latin & si & Sinhala & Indo-Aryan & Sinhala \\
gl & Galician & Romance & Latin & ur & Urdu & Indo-Aryan & Arabic \\
it & Italian & Romance & Latin & ta & \textbf{Tamil} & Dravidian & Tamil \\
\cmidrule{5-8} \\[-10pt]
oc & Occitan & Romance & Latin & ceb & Cebuano & Malayo-Polyn. & Latin \\
pt & \textbf{Portuguese} & Romance & Latin & ilo & Iloko & Philippine & Latin \\
ro & Romanian & Romance & Latin & id & \textbf{Indonesian} & Malayo-Polyn. & Latin \\
es & \textbf{Spanish} & Romance & Latin & jv & Javanese & Malayo-Polyn. & Latin \\
\cmidrule{1-4} \\[-10pt]
be & Belarusian & Slavic & Cyrillic & mg & Malagasy & Malayo-Polyn. & Latin \\
bs & Bosnian & Slavic & Latin & ms & Malay & Malayo-Polyn. & Latin \\
bg & Bulgarian & Slavic & Cyrillic & ml & Malayalam & Dravidian & Malayalam \\
hr & Croatian & Slavic & Latin & su & Sundanese & Malayo-Polyn. & Latin \\
cs & \textbf{Czech} & Slavic & Latin & tl & Tagalog & Malayo-Polyn. & Latin \\
\cmidrule{5-8} \\[-10pt]
mk & Macedonian & Slavic & Cyrillic & my & Burmese & Sino-Tibetan & Burmese \\
pl & \textbf{Polish} & Slavic & Latin & km & Central Khmer & Khmer & Khmer \\
ru & \textbf{Russian} & Slavic & Cyrillic & lo & Lao & Kra-Dai & Thai; Lao \\
sr & Serbian & Slavic & Cyrillic; Latin & th & Thai & Kra-Dai & Thai \\
sk & Slovak & Slavic & Latin & mn & Mongolian & Mongolic & Cyrillic \\
\cmidrule{5-8} \\[-10pt]
sl & Slovenian & Slavic & Latin & ar & \textbf{Arabic} & Semitic & Arabic \\
uk & Ukrainian & Slavic & Cyrillic & he & \textbf{Hebrew} & Semitic & Hebrew \\
\cmidrule{1-4} \\[-10pt]
et & Estonian & Uralic & Latin & ps & Pashto & Iranian & Arabic \\
fi & \textbf{Finnish} & Uralic & Latin & fa & \textbf{Farsi} & Iranian & Arabic \\
\cmidrule{5-8} \\[-10pt]
hu & \textbf{Hungarian} & Uralic & Latin & am & Amharic & Ethiopian & Ge'ez \\
lv & Latvian & Baltic & Latin & ff & Fulah & Niger-Congo & Latin \\
lt & \textbf{Lithuanian} & Baltic & Latin & ha & Hausa & Afro-Asiatic & Latin \\
\cmidrule{1-4} \\[-10pt]
sq & Albanian & Albanian & Latin & ig & Igbo & Niger-Congo & Latin \\
hy & Armenian & Armenian & Armenian & ln & Lingala & Niger-Congo & Latin \\
ka & Georgian & Kartvelian & Georgian & lg & Luganda & Niger-Congo & Latin \\
el & \textbf{Greek} & Hellenic & Greek & nso & Northern Sotho & Niger-Congo & Latin \\
\cmidrule{1-4} \\[-10pt]
br & Breton & Celtic & Latin & so & Somali & Cushitic & Latin \\
ga & Irish & Celtic & Latin & sw & \textbf{Swahili} & Niger-Congo & Latin \\
gd & Scottish Gaelic & Celtic & Latin & ss & Swati & Niger-Congo & Latin \\
cy & Welsh & Celtic & Latin-Welsh & tn & Tswana & Niger-Congo & Latin \\
\cmidrule{1-4} \\[-10pt]
az & Azerbaijani & Turkic & \tabtl{Latin; Cyrillic\\ Persian} & wo & Wolof & Niger-Congo & Latin \\
ba & Bashkir & Turkic & Cyrillic & xh & Xhosa & Niger-Congo & Latin \\
kk & Kazakh & Turkic & Cyrillic & yo & Yoruba & Niger-Congo & Latin \\
tr & \textbf{Turkish }& Turkic & Latin & zu & Zulu & Niger-Congo & Latin \\
\cmidrule{5-8} \\[-10pt]
uz & Uzbek & Turkic & Latin; Cyrillic & ht & Haitian Creole & Creole & Latin \\
\bottomrule
\end{tabular}
\caption{\textbf{100 Languages grouped by family.} For each language, we display the ISO code, language name, language family, and script. Languages in bold are \textit{bridge languages} (\textit{Malayo-Polyn.} stands for \textit{Malayo-Polynesian}).}
\label{tab:all_languages}
\end{table}
We consider several factors to select which languages to focus on.
First, we include widely-spoken languages from geographically diverse language families.
We cover a diversity of scripts and resource levels (as shown in Table~\ref{tab:all_languages}) to have high coverage of languages worldwide.
Second, we use languages for which public evaluation data exists, as we must be able to quantify model performance.
Lastly, we only use languages for which monolingual data is available, as monolingual data is a critical resource for large-scale mining.
Combining these three criteria results in our full list of 100 languages, summarized in Table~\ref{tab:all_languages}.
\subsubsection{Evaluation Benchmarks}
\label{sec:eval_data}
We use publicly available evaluation datasets to evaluate the performance of all of our models.
To cover our set of $100$ languages and $2200$ directions, we bring together data from a variety of sources. We describe each evaluation dataset below.
\begin{itemize}
\item \textbf{WMT} --- The majority of language pairs from WMT go through English and the data is from the news domain. We consider data for $13$ languages~\citep{ondrej2017findings,bojar-etal-2018-findings,barrault2019findings}.
\item \textbf{WAT} --- The WAT competition covers Asian languages paired with English. We consider data for Burmese-English~\citep{riza2016introduction}, which contains news articles. WAT contains many other evaluation directions, but many of those are covered by WMT or in a specific domain, so we focus on Burmese-English for WAT only.
\item \textbf{IWSLT} --- The IWSLT translation competition contains data from TED talks paired with English translations. We use data for $4$ languages~\citep{cettolo2017overview}.
\item \textbf{FLORES} --- FLORES\footnote{\url{https://github.com/facebookresearch/flores}}~\citep{flores2019} pairs two low resource languages, Sinhala and Nepali, with English in the Wikipedia domain.
\item \textbf{TED} --- The TED Talks dataset\footnote{\url{https://github.com/neulab/word-embeddings-for-nmt}}~\citep{Ye2018WordEmbeddings} contains translations between more than $50$ languages; most of the pairs do not include English. The evaluation data is n-way parallel and contains thousands of directions.
\item \textbf{Autshumato} --- Autshumato\footnote{\url{https://repo.sadilar.org/handle/20.500.12185/506}, CTexT® (Centre for Text Technology, North-West University), South Africa; Department of Arts and Culture, South Africa} is an $11$-way parallel dataset comprising $10$ African languages and English from the government domain. There is no standard valid/test split, so we use the first half of the dataset for valid and second half for test.
\item \textbf{Tatoeba} --- Tatoeba Challenge\footnote{\url{https://tatoeba.org/eng/}} covers $692$ test pairs from mixed domains where sentences are contributed and translated by volunteers online. The evaluation pairs we use from Tatoeba cover 85 different languages.
\end{itemize}
We evaluate the quality of translations with BLEU~\citep{papineni2002bleu}. We first detokenize all data, then apply standard tokenizers for each language before computing BLEU. For most languages, we use the \texttt{moses} tokenizer~\citep{koehn2007moses}.\footnote{\url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl}} For Chinese we use the SacreBLEU tokenizer (\texttt{tok zh})~\citep{post2018sacrebleu} and convert all traditional characters generated by the model to simplified characters using HanziConv\footnote{\url{https://github.com/berniey/hanziconv}}.\footnote{The evaluation datasets for Chinese usually contained simplified characters. However, our training data contains a mix of simplified and traditional characters, and thus the model could generate either. We convert the generated traditional Chinese characters to simplified for consistency.} For Indian languages we use the Indic NLP library~\citep{kunchukuttan2020indicnlp},\footnote{\url{https://github.com/anoopkunchukuttan/indic_nlp_library}} for Japanese we use Kytea,\footnote{\url{https://github.com/neubig/kytea}} for Thai we use PyThaiNLP~\citep{pythainlp},\footnote{\url{https://github.com/PyThaiNLP/pythainlp}} for Arabic we use the QCRI Arabic Normalizer,\footnote{\url{http://alt.qcri.org/tools/arabic-normalizer/}} for Korean we use Mecab,\footnote{\url{https://pypi.org/project/python-mecab-ko/}} and for Burmese we use the official segmentation tool provided by \citet{ding2019towards}. For Romanian we follow \citet{sennrich2016edinburgh} and apply Moses tokenization, special normalization, and diacritic removal,\footnote{\url{https://github.com/rsennrich/wmt16-scripts/blob/master/preprocess/}} and finally for Serbian we transliterate the output to Latin characters before computing BLEU.\footnote{In Serbian, both Latin script and Cyrillic script are used, and often intermixed within a sentence in the evaluation data. As the target sentence could be in either script and it is not possible to predict the target script from the input, we transliterate before computing BLEU.}
We release the tokenization and evaluation scripts for reproducibility \href{https://github.com/pytorch/fairseq/tree/master/examples/m2m_100}{\texttt{here}}\footnote{\url{https://github.com/pytorch/fairseq/tree/master/examples/m2m_100}}. We remove all data from all evaluation sets from our training sets.
\subsection{Covering the Language Matrix by Mining Relevant Parallel Data}
Supervised translation systems rely on large quantities of parallel sentences, which we refer to as bitext data, which are traditionally derived from human translations.
Most existing bitext datasets go through English, with a few domain specific exceptions such as proceedings from international organizations~\citep{Koehn:2005:mtsummit_eurparl,ziemski:2016:un_corpus}.
These corpora are limited in size and domain, and an alternative is to \textit{mine} parallel data~\citep{resnik1999mining,utiyama2003reliable} in large collections of monolingual data~\citep{conneau2019unsupervised,wenzek2019ccnet}.
In this work, we leverage and extend the corpus provided by two of these mining projects: CCMatrix~\citep{schwenk2019ccmatrix} and CCAligned\footnote{\url{http://www.statmt.org/cc-aligned}}~\citep{elkishky2020ccaligned}.
In the following, we describe our mining strategy and summarize the main ideas of CCMatrix and CCAligned. We refer the reader to the references for a detailed description of the approaches.
\paragraph{Mining parallel data with LASER.}
Mining parallel data consists of searching for sentences that could be potential translations in large monolingual corpora.
This search requires a measure that captures the semantic similarity between sentences in different languages.
Most recent methods build this similarity by comparing the embeddings from a neural network trained on multilingual data~\citep{artetxe2018margin,chen:2020:acl_mine,bojar:2020:acl_mine}.
We focus on the embeddings generated by the LASER encoder, which enables the comparison of sentences in $94$ different languages~\citep{Artetxe:2018:arxiv_massive_ml}. We then retrieve parallel corpora efficiently using FAISS indexing~\citep{johnson2019billion}.
LASER embeddings generalize to unseen languages, like Asturian, allowing us to mine bitexts for $100$ languages.
The generic data mining pipeline consists of several steps: \textbf{(1)} a large corpus of text is preprocessed and divided into different languages, \textbf{(2)} candidate sentences are embedded and stored in an index, \textbf{(3)} indexed sentences are compared to form potential pairs, \textbf{(4)} the resulting candidate pairs are filtered in post-processing.
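The sketch below condenses steps \textbf{(2)}--\textbf{(4)} into a small FAISS-based routine. Here \texttt{embed} stands in for a multilingual sentence encoder such as LASER, the scoring follows the margin (ratio) criterion of \citet{artetxe2018margin}, and the threshold value is illustrative.
\begin{verbatim}
# Condensed sketch of steps (2)-(4) with FAISS; `embed` stands in for a
# multilingual encoder such as LASER, and the threshold is illustrative.
import faiss

def mine(src_sents, tgt_sents, embed, k=4, threshold=1.06):
    xs = embed(src_sents).astype("float32")   # embed returns numpy arrays
    xt = embed(tgt_sents).astype("float32")
    faiss.normalize_L2(xs)
    faiss.normalize_L2(xt)             # cosine similarity = inner product
    idx_tgt = faiss.IndexFlatIP(xt.shape[1]); idx_tgt.add(xt)
    idx_src = faiss.IndexFlatIP(xs.shape[1]); idx_src.add(xs)
    s_fwd, ids = idx_tgt.search(xs, k)    # nearest targets per source
    s_bwd, _ = idx_src.search(xt, k)      # nearest sources per target
    fwd_avg, bwd_avg = s_fwd.mean(axis=1), s_bwd.mean(axis=1)
    pairs = []
    for i in range(len(src_sents)):
        j = int(ids[i, 0])
        # ratio margin: cos(x, y) over the mean of the two neighborhoods
        margin = s_fwd[i, 0] / ((fwd_avg[i] + bwd_avg[j]) / 2)
        if margin > threshold:            # step (4): filter candidates
            pairs.append((src_sents[i], tgt_sents[j], float(margin)))
    return pairs
\end{verbatim}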
\paragraph{CCMatrix Mining Approach.}
CCMatrix takes a global approach: all unique sentences in one language are compared with all unique sentences in another language.
This \textit{global mining} approach has the advantage of considering all possible documents when searching for the translation of a sentence.
CCMatrix works on the large monolingual corpora in the $91$ languages of CCNet~\citep{wenzek2019ccnet}, but at this scale, the global search is computationally demanding even with fast indexing from FAISS~\citep{johnson2019billion}.
Thus, we apply it to a selected subset of relevant pairs, as detailed in \autoref{sect:bridge}.
\paragraph{CCAligned Mining Approach.}
CCAligned avoids the scaling challenges of global mining by pre-selecting documents to compare.
This \textit{local mining} follows a hierarchical approach: first, document-level language identification along with various rules is applied to find whole documents that are likely to contain mutual translations~\citep{elkishky2020ccaligned}. Parallel sentences are then mined using LASER-based alignment within the paired documents only. Filtering~\citep{chaudhary2019low} is performed to remove unaligned data that exists because the original webpage did not have any parallel data, only partial parallel data, or other processing failures.
One advantage of this approach is that it is very fast, scalable, and retrieves parallel sentences with high precision. Another is that each English document is aligned to many non-English documents --- thus, mining non-English pairs can be quickly performed by joining non-English documents paired to the same source.
\paragraph{Postprocessing.}
We apply a filtering step to remove sentences consisting of more than 50\% punctuation.
The data is then deduplicated, and we remove any sentence that appears in any validation or test dataset -- even if it is associated with another language pair.
Finally, we apply length and language-specific filtering.
The length filtering removes sentences that are too long -- more than $250$ subwords after segmentation with SPM -- or with a length mismatch between the sentence and its translation -- if the length ratio is greater than $3\times$.
The language-specific filtering removes sentences that contain more than $50\%$ of characters that have not been marked as core to the identified language -- specifically, characters that are commonly used in the identified language with the exception of white space, numbers, punctuation, and Latin characters for non-Latin script languages.
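A minimal sketch of these filters follows; \texttt{core\_chars} is assumed to be the set of characters considered core to the identified language (plus whitespace, digits, punctuation, and Latin characters for non-Latin scripts), and the token inputs are assumed to be SPM-segmented.
\begin{verbatim}
import string

MAX_LEN = 250      # subword tokens after SPM segmentation
MAX_RATIO = 3.0    # length-ratio cutoff between source and target

def keep_pair(src_toks, tgt_toks, src_raw, tgt_raw, core_chars):
    # Punctuation filter: drop sentences that are >50% punctuation.
    for raw in (src_raw, tgt_raw):
        n_punct = sum(c in string.punctuation for c in raw)
        if raw and n_punct / len(raw) > 0.5:
            return False

    # Length filter: too long, or mismatched lengths.
    if len(src_toks) > MAX_LEN or len(tgt_toks) > MAX_LEN:
        return False
    ratio = len(src_toks) / max(len(tgt_toks), 1)
    if ratio > MAX_RATIO or ratio < 1.0 / MAX_RATIO:
        return False

    # Language-specific filter: >50% of characters outside core set.
    for raw in (src_raw, tgt_raw):
        n_foreign = sum(c not in core_chars for c in raw)
        if raw and n_foreign / len(raw) > 0.5:
            return False
    return True
\end{verbatim}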
\subsubsection{Bridge Language Group Mining Strategy}
\label{sect:bridge}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{mining_setting.png}
\caption{\textbf{Depiction of an English-Only data mining setting compared to the Bridge Language Mining Strategy}. We display a data matrix, where languages are shown on the X and Y axes. Data is mined in one direction (such as Hindi to Marathi) and used to train bidirectionally.}
\label{fig:data_fig0}
\end{figure}
Mining data for each and every language pair is prohibitive --- previous work circumvents this issue by focusing only on the $99$ pairs that go through English~\citep{zhang2020improving}. One alternative to the extensive computation required to mine all possible combinations of pairs is \textit{sparse mining}, or mining only a select subset of pairs. A straightforward strategy is to \textit{randomly} select pairs to mine, but this does not use any linguistic information on how languages are related and spoken around the world.
In this work, we propose an alternative based on language families and bridge languages that avoids exhaustively mining every possible pair.
Our goal is to reduce the number of bitext pairs while preserving translation directions of practical interest.
We first group all the $100$ languages into $14$ \textit{language groupings}.
All languages within a grouping are mined against each other.
For instance, within the Indic language grouping, we mine all pairs of Bengali, Hindi, Marathi, Tamil, Urdu, and so on.
The motivation for this strategy is two-fold.
First, people living in areas that speak multiple languages in the same grouping tend to communicate a lot with each other and would benefit from high quality direct translation.
Second, systematically mining languages of the same grouping is helpful for training language-specific parameter models (see \autoref{sec:lang_specific}).
For the most part, languages are grouped by linguistic similarity, e.g. Germanic, Slavic, or Malayo-Polynesian languages.
However, the size of the resulting groupings varies greatly, resulting in less mined data for the languages in the smallest groupings.
We further group languages by geographic and cultural proximity to reduce this discrepancy.
For example, Uralic and Baltic languages are gathered into a single group to increase the quantity of mined data.
The resulting groupings are shown in Table~\ref{tab:all_languages}.
To connect languages across groupings, we define 1--3 \textit{bridge languages} in each grouping, usually those with the most resources, such as
Bengali, Hindi, and Tamil for the $12$ languages in the Indo-Aryan family.
All $26$ bridge languages are highlighted in Table~\ref{tab:all_languages}.
These bridge languages are mined against all other bridge languages.
Finally, all $100$ languages are mined against English.
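To make the construction concrete, here is a minimal sketch that enumerates the set of pairs mined under this strategy; \texttt{groups} and \texttt{bridges} are assumed inputs mapping grouping names to language codes and listing the $26$ bridge languages.
\begin{verbatim}
from itertools import combinations

def pairs_to_mine(groups, bridges, english="en"):
    pairs = set()
    # (1) All pairs within each language grouping.
    for langs in groups.values():
        pairs.update(combinations(sorted(langs), 2))
    # (2) All bridge languages against each other.
    pairs.update(combinations(sorted(bridges), 2))
    # (3) Every language against English.
    for langs in groups.values():
        for lang in langs:
            if lang != english:
                pairs.add(tuple(sorted((lang, english))))
    return pairs
\end{verbatim}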
We illustrate this mining strategy in Figure~\ref{fig:data_fig0}.
On the left, we depict what many current approaches model: data only through English.
On the right, we depict our Many-to-Many language matrix for several example languages.
Compared to English-Centric, our dataset has far greater coverage of non-English, direct translation directions.
\paragraph{Training Data Statistics.}
In total, our final training dataset contains 7.5B parallel sentences, corresponding to $2200$ directions.
In Figure~\ref{fig:data_statistics}, we show all bridge languages and demonstrate how their associated training data is divided between translations with English, within a language grouping, or with bridge languages across language groupings.
Of particular interest is the comparison between the additional Many-to-Many data and the data through English.
We observe that 5--10 times more parallel data can be mined if using a Many-to-Many strategy, compared to an English-Centric one. This is particularly beneficial for mid- and low-resource languages.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{amt_data.pdf}
\caption{\textbf{Total Amount of Data through Bridge Languages on our 100x100 Training Corpus}. We depict the amount of data through English (gray), amount of data through a bridge language not counting English (orange), and amount of data through the language grouping not counting bridge languages (blue).}
\label{fig:data_statistics}
\end{figure}
\subsubsection{Results}
\begin{table}[t]
\begin{minipage}{0.44\textwidth}
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{l | c | c c c }
\toprule
& \bf All & \multicolumn{3}{c}{\bf Supervised} \\
\bf Model& \bf Avg & \bf Low & \bf Mid & \bf High \\
\midrule
Random 80\% & 11.9 & 3.6 & 16.1 & 31.5 \\
Random 80\% w/ En & 16.3 & 8.9 & 22.4 & 36.6 \\
\midrule
Bridge Language, 80\% & \bf 17.2 & \bf 10.4 & \bf 23.2 & \bf 37.4 \\
\bottomrule
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}{0.44\textwidth}
\centering
\includegraphics[width=\linewidth]{sparsity}
\end{minipage}
\caption{\textbf{(left) Comparison of Sparse Mining Strategies}. We first hold the sparsity level fixed at 80\% --- compared to randomly selecting pairs to mine, the Bridge Language mining strategy performs very well. \textbf{(right) Bridge Language Strategy at Different Sparsity Levels}. We analyze different levels of sparsity in the language matrix to understand how many pairs to mine. Based on these results, we take 60\% sparse as a tradeoff point between strong performance and reasonable quantity of mining. 0\% indicates no sparsity, or a fully mined language matrix.}
\label{tab:mining_strategy1}
\end{table}
We validate the impact of several decisions made during data construction.
First, we study the impact of our bridge language strategy compared to English-Centric mining augmented by other random pairs, as well as fully random mining.
Second, we investigate the impact of the level of sparsity chosen in our bridge strategy, focusing on a subset of 50 languages.
\paragraph{Bridge Language strategy versus Random and English-Centric Random.}
We experimentally evaluate the impact of our bridge language mining strategy on the performance of our baseline model in Table~\ref{tab:mining_strategy1} (left).
We consider two additional baselines at the same $80\%$ sparsity level: a fully random mining strategy (Random 80\%) and an \textit{English-Centric + Random} strategy (Random 80\% w/ En).
In the Random strategy, mined pairs are chosen uniformly at random, while in the \textit{English-Centric + Random} strategy, we retain all pairs through English and select the remaining pairs randomly.
We show that fully random mining has a substantial negative impact on performance, as a lot of high quality data is aligned through English, so sampling fully randomly eliminates a large portion of the potential training set.
Random 80\% w/ En is worse as well. Upon examination, we find that randomly sampling pairs to mine often selects pairs that do not produce much data, as the pairs may not include high resource languages. In contrast, the bridge language strategy ensures that high resource languages are mined, and then focuses on mining languages within related families. This produces a large amount of bitext while covering many language directions.
\paragraph{Impact of Sparsity.}
We control the sparsity of our language matrix using the number of bridge languages. In Figure~\ref{tab:mining_strategy1} (right), we show the impact of sparsity on the performance of our baseline model compared to a fully mined language matrix (0\% sparse).
We observe that increasing the amount of mined data to make the matrix less sparse is helpful, but fully mining the matrix is not substantially better.
The main reason is that our mining strategy prioritizes frequently used pairs which are often associated with the largest bitext, while the discarded pairs are often associated with small bitext. For example, fully mining the matrix would mine a pair such as Icelandic to Chinese, but the amount of data produced by mining this pair is quite low. This case is representative of what occurs as the full matrix is mined --- as increasingly more data is mined, the additional pairs begin to add less data which in turn leads to diminishing quality improvements.
\subsection{Augmenting Bitext Data with Backtranslation}
\label{sec:bt}
Backtranslation (BT) creates synthetic bitexts from unaligned monolingual data~\citep{Schwenk:2008:unsup,bojar2011bt_pbmt,sennrich2016backtranslation,edunov2018bt,hoang2018iterative}.
The core idea is to translate monolingual sentences in the backward direction, and add the obtained synthetic translations to the training set.
More precisely, when training a model to translate from a source language to a target language, backtranslation generates additional data by translating monolingual target sentences into the source language.
Using backtranslation thus requires the ability to translate in both directions, which fits well into the setting of multilingual machine translation ~\citep{zhang2020improving,siddhant2020leveraging}.
However, generating these backtranslations is time consuming even for a single direction, which is compounded in the Many-to-Many case.
We thus focus on applying backtranslation to specific pairs to supplement mined data where needed.
\paragraph{Selecting Backtranslation Pairs.}
Our goal is to translate between 100 languages and to provide good translation quality for as many translation directions as possible.
To this end, we use BT to improve directions which have initially lower translation quality.
We identify these language directions by measuring the quality of our 1.2B parameter multilingual model before applying BT.
Since backtranslation is computationally intensive, we focus on $100$ directions with a BLEU score between $2$ and $10$.
For $50$ of these directions, we do not have any bitext at all as we did not mine all 4,450 possible language pairs.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{bt_improvement}
\caption{\textbf{Improvements from Adding Backtranslated Data.} For each of the 100 language directions explored, we show the effect of adding backtranslation. The blue line indicates the original model, where directions were selected if they had between 2 and 10 BLEU. The orange scatter indicates performance after adding backtranslation. Languages are ordered by their original BLEU scores before backtranslation.}
\label{fig:bt_fig1}
\end{figure}
\paragraph{Training a Multilingual Model with Additional Backtranslations.}
For the selected pairs, we first generate synthetic translations that are added to the training set without upsampling.
Following~\citet{chelba2019tagged}, we add a special encoder-side BT token to these translations to indicate to the model that they are synthetic.
For each of the $100$ target languages, we randomly sample $50$ million unique monolingual sentences from the cleaned CommonCrawl corpus of \citet{wenzek2019ccnet}.
The synthetic translations are then generated with our $1.2$B MMT model.
We use beam search with a beam size of $5$ and fix all hyper-parameters, including the length penalty, to the same values for all directions.
We apply the same filtering to the backtranslations as the original mined training data, which substantially reduces the size of the resulting synthetic bitexts.
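A schematic view of this procedure is sketched below; \texttt{model.translate} is a hypothetical stand-in for beam-search generation with the MMT model, and the \texttt{<BT>} token string is illustrative.
\begin{verbatim}
BT_TOKEN = "<BT>"  # illustrative encoder-side tag for synthetic data

def backtranslate(monolingual_tgt, model, src_lang, tgt_lang):
    # Translate monolingual target sentences in the backward
    # direction (target -> source) to create synthetic sources.
    synthetic_bitext = []
    for tgt_sent in monolingual_tgt:
        src_sent = model.translate(tgt_sent, src=tgt_lang,
                                   tgt=src_lang, beam=5)
        # Tag the synthetic side so the model can distinguish it
        # from real bitext during training.
        synthetic_bitext.append((BT_TOKEN + " " + src_sent, tgt_sent))
    return synthetic_bitext
\end{verbatim}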
\paragraph{Impact of Backtranslated Data.}
Results are shown in Figure~\ref{fig:bt_fig1}, where we compare the original Many-to-Many model used to create the backtranslations (blue line) with the improvements after training a multilingual model with the backtranslation added (orange scatter).
Backtranslation almost always improves performance for any direction, regardless of the original BLEU score.
As the amount of data generated with BT correlates with the length of training time, we decide to focus on applying BT on directions with low performance (BLEU between 2 and 10) to improve our MMT system where it underperforms.
\subsection{Balancing Languages in a Many-to-Many Setting}
\begin{table}[t]
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{l c cccc c c c c}
\toprule
&~~& \multicolumn{4}{c}{\bf Supervised} && \bf Zero-Shot && \bf All\\
\cmidrule{3-6}
\bf Data sampling && \bf Low & \bf Mid & \bf High & \bf Avg && \bf Avg && \bf Avg\\
\midrule
Uniform && 6.1 & 20.4 & 38.4 & 19.0 && 11.8 && 15.7 \\
Temperature Rescaling && 10.2 & 23.7 & 38.0 & 21.8 && 13.0 && 18.1 \\
\midrule
Sinkhorn Temp. Rescaling && \bf 10.9 & \bf 24.1 & \bf 38.3 & \bf 22.2 && \bf 13.5 && \bf 18.6 \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of Various Sampling Strategies.}
We report BLEU on the validation set of our $1.2$B base multilingual model trained with different data sampling schemes.
Performance is broken down into different resource-setups (low, mid, high) where bitext data exists (supervised) or in the zero-shot setting for pairs without data.}
\label{tab:temp_ablation}
\end{table}
The data distribution produced by large-scale mining is not balanced between languages, so training a model would favor over-represented languages.
A standard solution is to rebalance each language independently with Temperature Sampling~\citep{arivazhagan2019massively}, e.g. replacing the probability $p_\ell$ of a language by $p_\ell^{\frac{1}{T}}$.
In the English-centric case, changing the probability of sampling a language changes the probability of the other languages only through the normalization.
However, in the Many-to-Many case, language distributions are more interdependent.
For example, some languages are only paired with a subset of other languages or to an overrepresented language.
Thus, changing the sampling probability of one language affects the probability of the languages it is paired with.
As a result, this strategy is not guaranteed to produce the target balanced distribution between languages.
We describe \textit{Sinkhorn Temperature Sampling}, which extends the temperature sampling strategy to the Many-to-Many setting.
Our goal is to design a sampling technique such that the distribution of languages on the source \textit{and} target sides is equal to a given target distribution.
Unfortunately, sequentially sampling the source language and then the target would not work, as some languages are only paired with a subset of languages --- making it impossible to sample the target language according to a given distribution.
Moreover, the sizes and distributions of bitexts vary greatly from one language to another.
Instead, we propose directly sampling a pair of languages from a matrix of pair probabilities such that the marginal distributions of languages correspond to our target distribution.
In practice, this means that each row and column of the matrix should sum to the probability of the corresponding language.
More precisely, we estimate a square matrix $\mathbf{P}^*$ such that:
$$\max_\mathbf{P} ~\text{tr}~ \left( \mathbf{P Q} \right) ~~~~~~\text{s.t.}~~~\mathbf{P}1_L = \mathbf{p}^{\frac{1}{T}},~\mathbf{P}^\top1_L = \mathbf{p}^{\frac{1}{T}},$$
where $\mathbf{p}$ is the vector stacking the probabilities of the $L$ languages and $\mathbf{Q}$ is the matrix of pair probabilities.
This problem can be solved exactly with the Sinkhorn-Knopp algorithm.
The matrix $\mathbf{Q}$ has entries equal to $0$ for pairs with no bitext and this algorithm preserves them in the solution $\mathbf{P}^*$,
hence adding no probability mass to missing bitexts.
We calculate this once before training and set the temperature $T$ to $5$.
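A minimal sketch of the procedure follows, where \texttt{p} is the vector of language probabilities and \texttt{Q} the matrix of pair probabilities as numpy arrays; the iteration count and the normalization of the target marginals are illustrative.
\begin{verbatim}
import numpy as np

def sinkhorn_sampling_matrix(Q, p, T=5, n_iters=100):
    # Temperature-rescaled target marginals.
    target = p ** (1.0 / T)
    target = target / target.sum()

    P = Q.copy()  # zero entries (missing bitexts) stay zero
    for _ in range(n_iters):
        # Scale rows to match the target marginals ...
        row_sums = P.sum(axis=1, keepdims=True)
        P = P * (target[:, None] / np.maximum(row_sums, 1e-12))
        # ... then columns; alternate until convergence.
        col_sums = P.sum(axis=0, keepdims=True)
        P = P * (target[None, :] / np.maximum(col_sums, 1e-12))
    return P
\end{verbatim}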
In Table~\ref{tab:temp_ablation}, we show the benefits of this strategy over temperature sampling with a constant improvement of $0.5$ in BLEU.
\section{Introduction}
\input{intro.tex}
\input{background.tex}
\input{data.tex}
\input{comp.tex}
\input{model.tex}
\section{Bringing it all Together}
\label{sec:combine}
We have explored the creation of a true many-to-many dataset for the multilingual translation of 100 languages, as well as how to effectively scale Many-to-Many models through a mix of dense and sparse scaling. In this section, we summarize our final results, compare to existing published work --- both multilingual benchmarks and competitive directions in WMT --- and end with a human evaluation of the overall quality of our translations.
\subsection{Real-world Settings for Many-to-Many Translation}
We highlight that there are several real-world use cases for translation directions not involving English. Many countries have official and regional languages that are not English, which are natural candidates for direct translation: for example, it is intuitive to translate Kazakh directly to Russian in Kazakhstan. In Table~\ref{tab:languages_by_country}, we compare English-Centric models to Many-to-Many on a variety of non-English directions. Across the board, our M2M-100 model has drastically better performance, improving by over 7 BLEU on average across these directions.
\begin{table}
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{l l l | l | c c l }
\toprule
& \bf Source & \bf Target & \bf Test Set & \multicolumn{3}{c}{\bf BLEU} \\
& & && English-Centric & M2M-100 & \multicolumn{1}{c}{$\Delta$} \\
\midrule
India & Hindi & Bengali & TED & 3.9 & 8.7 & +4.8 \\
& Hindi & Marathi & TED & 0.4 & 8.4 & +8.0 \\
& Hindi & Tamil & TED & 1.1 & 7.5 & +6.4 \\
\midrule
South Africa & Afrikaans & Xhosa & Autshumato & 0.1 & 3.6 & +3.5 \\
& Afrikaans & Zulu & Autshumato & 0.3 & 3.6 & +3.3 \\
& Afrikaans & Sesotho & Autshumato & 0.0 & 2.1 & +2.1 \\
& Xhosa & Zulu & Autshumato & 0.1 & 3.6 & +3.5\\
& Sesotho & Zulu & Autshumato & 0.1 & 1.2 & +1.1 \\
Chad & Arabic & French & TED & 5.3 & 20.8 & +15.5 \\
DR Congo & French & Swahili & Tatoeba & 1.8 & 5.7 & +3.9 \\
\midrule
Kazakhstan & Kazakh & Russian & TED & 0.5 & 4.5 & +4.0 \\
Singapore & Chinese & Tamil & TED & 0.2 & 8.0 & +7.8 \\
\midrule
Austria & German & Croatian & TED & 9.6 & 21.3 & +11.7 \\
& German & Hungarian & TED & 11.3 & 17.4 & +6.1 \\
Belgium & Dutch & French & TED & 16.4 & 25.8 & +9.4 \\
& Dutch & German & TED & 18.1 & 26.3 & +8.2 \\
Belarus & Belarusian & Russian & TED & 10.0 & 18.5 & +8.5 \\
Croatia & Croatian & Serbian & TED & 22.4 & 29.8 & +7.4 \\
& Croatian & Hungarian & TED & 12.4 & 17.5 & +5.1 \\
& Croatian & Czech &TED & 15.2 & 22.5 & +7.3 \\
& Croatian & Slovak & TED & 13.8 & 24.6 & +10.8 \\
Cyprus & Greek & Turkish & TED & 4.8 & 12.6 & +7.8 \\
Czechia & Czech & Slovak & TED & 9.5 & 28.1 & +18.6 \\
Finland & Finnish & Swedish & TED & 7.9 & 19.2 & +11.3 \\
Italy & Italian & French & TED & 18.9 & 28.8 & +9.9 \\
& Italian & German & TED & 18.4 & 25.6 & +7.2 \\
Moldova & Romanian & Russian & TED & 8.0 & 19.0 & +11.0 \\
& Romanian & Ukrainian & TED & 8.7 & 17.3 & +8.6 \\
Montenegro & Albanian & Croatian & TED & 3.0 & 20.7 & +17.7 \\
& Albanian & Serbian & TED & 7.8 & 20.6 & +12.8 \\
Romania & Romanian & German & TED & 15.0 & 24.7 & +9.7 \\
& Romanian & Hungarian & TED & 11.0 & 16.3 & +4.3 \\
& Romanian & Turkish & TED & 5.1 & 12.0 & +6.9 \\
& Romanian & Armenian & TED & 0.4 & 8.2 & +7.8 \\
Russia & Bashkir & Russian & Tatoeba & 0.1 & 4.3 & +4.2 \\
& Ukrainian & Russian & TED & 18.0 & 23.7 & +5.7 \\
\midrule
\multicolumn{3}{c}{} & Average & 8.0 & 15.6 & \bf +7.6 \\
\bottomrule
\end{tabular}
\caption{\textbf{Performance translating between official and official regional languages} of several nations, focusing on non-English directions.}
\label{tab:languages_by_country}
\end{table}
\subsection{Comparison on Various Translation Benchmarks}
Next, we compare our M2M-100 model to various existing work on different benchmarks. While the training data is not the same, we conduct this comparison to provide a reference point for the overall strength of our model. An important note is that each of these benchmarks uses a different tokenizer, which affects BLEU --- we follow the tokenization and BLEU calculation of each benchmark, rather than the evaluation methodology of our previous results. Thus, the numbers in this subsection are not comparable to the rest of the paper, as they use the tokenization of each benchmark. Further, this comparison was prepared in advance, so all sentences appearing in these evaluation sets were removed from the training data we used.
\paragraph{Comparison on WMT.} First, we compare our Many-to-Many model to submissions to WMT, the premier translation competition. We display results on a variety of different language directions, some of which are standard natural language processing machine translation benchmarks, such as English-French, English-German, and English-Russian. Results are shown in Table~\ref{tab:m2m_wmt}.
\footnote{
$\textbf{En} \leftrightarrow{} \textbf{De/En} \leftrightarrow{} \textbf{Ru:}$ we evaluated publicly available single model checkpoints prior to finetuning from~\citet{ng2019facebook} on WMT2019.
$\textbf{En} \leftrightarrow{} \textbf{Zh:}$ we report results from~\citet{li2019niutrans} which contains single model BLEU results on WMT2019.
$\textbf{En} \leftrightarrow{} \textbf{Lt:}$ we report results from~\citet{pinnis-etal-2019-tildes} on WMT2019; both directions are the best single model systems which use unconstrained training data.
$\textbf{En} \rightarrow{} \textbf{Fr:}$ we report results from~\citet{edunov2018bt}. $\textbf{Fr} \rightarrow{} \textbf{En:}$ we report results from~\citet{johnson2017google} on WMT2014.
$\textbf{En} \leftrightarrow{} \textbf{Lv:}$ we report results from~\citet{pinnis-etal-2017-tildes} on WMT2017.
$\textbf{En} \leftrightarrow{} \textbf{Tr:}$ we report results from~\citet{sennrich-etal-2017-university} on WMT17.
$\textbf{En} \leftrightarrow{} \textbf{Et:}$ we report results from~\citet{pinnis-etal-2018-tildes} on WMT18.
$\textbf{En} \leftrightarrow{} \textbf{Fi:}$ we report results from~\citet{talman-etal-2019-university} on WMT17.
}
Many submissions to the WMT shared task use ensembling, in-domain finetuning, or reranking methods, which are standard techniques to improve quality. As these could be added to our system at inference time as well, we focus instead on comparing single model results. To identify comparisons, we examine the WMT Shared Task proceedings as well as the submissions at \url{http://matrix.statmt.org/}.
As seen in Table~\ref{tab:m2m_wmt}, our M2M-100 system can achieve very competitive performance compared to bilingual models tuned especially for individual WMT translation directions.
This shows that our model maintains strong translation quality on individual directions.
\begin{table}[t]
\setlength{\tabcolsep}{5.5pt}
\centering
\footnotesize
\begin{tabular}{c l|l| cccc}
\toprule
& & & \multicolumn{3}{c}{\bf BLEU} \\
\multicolumn{2}{l|}{\bf Direction} & \bf Test Set & \bf Published & \bf M2M-100 & \bf $\Delta$\\
\midrule
\multicolumn{2}{l}{\bf Without Improvement} \\
& English-Chinese \citep{li2019niutrans} & WMT'19 & 38.2 & 33.2 & -5.0 \\
& English-Finnish \citep{talman-etal-2019-university} & WMT'17 & 28.6 & 28.2 & -0.4 \\
& English-Estonian \citep{pinnis-etal-2018-tildes} & WMT'18 & 24.4 & 24.1 & -0.3\\
& Chinese-English \citep{li2019niutrans} & WMT'19 & 29.1 & 29.0 & -0.1\\
\midrule
\multicolumn{2}{l}{\bf With Improvement} \\
& English-French \citep{edunov2018bt} & WMT'14 & 43.8 & 43.8 & 0 \\
& English-Latvian \citep{pinnis-etal-2017-tildes} & WMT'17 & 20.0 & 20.5 & +0.5 \\
& German-English \citep{ng2019facebook} & WMT'19 & 39.2 & 40.1 & +0.9 \\
& Lithuanian-English \citep{pinnis-etal-2019-tildes}~~~~~ & WMT'19 & 31.7 & 32.9 & +1.2\\
& English-Russian \citep{ng2019facebook} & WMT'19 & 31.9 & 33.3 & +1.4\\
& English-Lithuanian \citep{pinnis-etal-2019-tildes} & WMT'19 & 19.1 & 20.7 & +1.6 \\
& Finnish-English \citep{talman-etal-2019-university} & WMT'17 & 32.7 & 34.3 & +1.6 \\
& Estonian-English \citep{pinnis-etal-2018-tildes} & WMT'18 & 30.9 & 33.4 & +2.5 \\
& Latvian-English \citep{pinnis-etal-2017-tildes} & WMT'17 & 21.9 & 24.5 & +2.6 \\
& Russian-English \citep{ng2019facebook} & WMT'19 & 37.2 & 40.5 & +3.3 \\
& French-English \citep{edunov2018bt} & WMT'14 & 36.8 & 40.4 & +3.6 \\
& English-German \citep{ng2019facebook} & WMT'19 & 38.1 & 43.2 & +5.1 \\
& English-Turkish \citep{sennrich-etal-2017-university} & WMT'17 & 16.2 & 23.7 & +7.5 \\
& Turkish-English \citep{sennrich-etal-2017-university} & WMT'17 & 20.6 & 28.2 & +7.6 \\
\midrule
\multicolumn{3}{r}{Average} & 30.0 & 31.9 & \bf +1.9 \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of Many-to-Many and public results on WMT datasets.} We compare M2M-100 to published work (best single models) on WMT.
To identify previous work, we examine the WMT Shared Task proceedings for the top performing models and check reported results on \url{http://matrix.statmt.org/}.
For these comparisons, we report detokenized BLEU with sacrebleu~\citep{post2018sacrebleu} on the test set.}
\label{tab:m2m_wmt}
\end{table}
Next, we compare our models to other multilingual translation work. Table~\ref{tab:comparison_results} displays several previously published results on different sets of benchmarks. Note that for each comparison, we follow the published setting in tokenization, evaluation, and whether or not the BLEU is tuned on the validation set to maximize comparability.
\paragraph{Bilingual Models.} We first compare to mBART~\citep{liu2020multilingual}, which creates bilingual models based on finetuning a pretrained model on individual language directions. After pretraining as a denoising autoencoder, publicly available bitext data is used to create various bilingual models, one for each evaluation direction. \citet{liu2020multilingual} tune the test set BLEU on the validation set. Following their setting, we tune the generation beam size over \{5, 10\}, the length penalty over \{0.5, 1.0, 1.5\}, and the number of checkpoints to average over \{1, 5, 10\}. Our model provides a +0.7 BLEU improvement.
We then compare to the bilingual baselines provided in CCMatrix~\citep{schwenk2019ccmatrix}, which trained individual models for each direction. As these models generate with no tuning, we generate on all pairs with beam size 5 and length penalty 1, using only the best checkpoint. Our one Many-to-Many multilingual model achieves a 2 BLEU point gain on average compared to training hundreds of individual models.
\paragraph{Multilingual Models.} We next compare the performance of our multilingual system to other published multilingual systems. We compare to the English-Centric multilingual model from~\citet{zhang2020improving} on the OPUS100 corpus. Their model is trained with noisily aligned through-English data from OPUS~\citep{tiedemann2012opus,zhang2020improving}, with online backtranslation to improve the performance of non-English pairs. Note that \citet{zhang2020improving} train on 100 directions that only partially overlap with ours; however, we cover their full set of non-English evaluation pairs. Finally, the OPUS100 non-English directions come only with a test set, so we generate with beam size 5, length penalty 1, and use the best checkpoint. As shown in Table~\ref{tab:comparison_results}, we improve by more than 4 BLEU.
\begin{table}[t]
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{ll cc }
\toprule
\bf Benchmark & \bf Model && \bf BLEU \\
\midrule
\multirow{2}{*}{\textbf{mBART}} & Previous Work~\citep{liu2020multilingual} && 23.9 \\
& M2M-100 && \bf 24.6 \\
\midrule
\multirow{2}{*}{\textbf{CCMatrix}} & Previous Work~\citep{schwenk2019ccmatrix} && 16.3 \\
& M2M-100 && \bf 18.7\\
\midrule
\multirow{2}{*}{\textbf{OPUS100}} & Previous Work~\citep{zhang2020improving} && 14.1 \\
& M2M-100 && \bf 18.4 \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison on various evaluation settings from previous work}. We display the best performing model from the published work and report average BLEU on the test set. For these comparisons, we use the tokenization and BLEU evaluation script used by each work for comparability. \citet{liu2020multilingual} report Low/Mid resource directions into and out of English and High resource directions into English, we average across all. \citet{schwenk2019ccmatrix} report the full matrix on 28 languages, we average across all. \citet{zhang2020improving} report results on non-English directions, we average across all.}
\label{tab:comparison_results}
\end{table}
\subsection{Human Evaluation}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{human_eval.pdf}
\caption{
\textbf{Human Evaluation of Translation Accuracy of M2M-100 on Non-English Directions.} Evaluators are asked to score the semantic accuracy of translations on a scale of 1 to 10.
}
\label{fig:human_eval}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{human_eval_comparative.pdf}
\caption{
\textbf{Human Evaluation of Translation Accuracy of M2M-100 compared to English-Centric on 10 Non-English Directions.} Evaluators are asked to score the semantic accuracy of translations on a scale of 1 to 10.
}
\label{fig:human_eval_2}
\end{figure}
We end with a human evaluation study to understand the quality of our model translations. We focus on 20 different directions, none of them involving English. We include languages commonly spoken in the same region, such as Japanese-Chinese, Hindi-Tamil, and Russian-Ukrainian, as well as directions that cross language families, such as Chinese-French, French-Arabic, and Russian-Spanish. We also include several very low resource directions, such as French-Wolof, Hindi-Marathi, and Japanese-Mongolian. All of our evaluators are native speakers in one of the languages and fluent in the other.
Each evaluator rates 50 different translations for semantic accuracy on a scale of 1 to 10. Results are shown in Figure~\ref{fig:human_eval}. On semantic accuracy, most of our evaluations score between 8.5 and 9.5 (with 10 being the best possible score). For lower resource directions, the scores remain reasonable. Hindi to Tamil and Wolof to French score around 7-8. The most challenging direction based on human evaluation is French into Wolof (fr-wo), likely because there is not sufficient target-side Wolof data.
Next, we compare our model with an English-Centric model on 10 directions in Figure~\ref{fig:human_eval_2}. Each evaluator is asked to rate 100 sentences, 50 from each model, in a blind test. Across the board, we find that our Many-to-Many system scores better in translation accuracy, both for related and unrelated languages.
\subsection{Discussion}
\paragraph{Curating High-Quality Training Data.}
Creating high quality datasets to train translation models has been a long-standing area of research. For example, previous work has explored how to best filter noisy datasets~\citep{koehn2018findings,koehn2019findings}. Our use of large-scale mined training data presents large quantities of data to train multilingual models on, but brings challenges as well. For example, our mining methods mine both simplified and traditional Chinese text, tokenized and untokenized text, and many examples with code switching. We apply several data filtering methods, but the cleanliness and quality of alignment is critical for training high-quality translation systems. Further, multilingual translation can be affected by domain mismatch, as people in different parts of the world discuss different topics~\citep{shen2019source}, which presents additional challenges for curating good training sets. Thus, we see the continued improvement of data quality as an important direction for multilingual translation systems, which require a lot of data to train well.
\paragraph{Improvements on Very Low-Resource Languages.}
Strong performance for low-resource languages remains a critical area for future improvement~\citep{gu2018universal,sennrich-zhang-2019-revisiting}. For many languages, our system still requires substantial improvements. Examples include African languages such as Xhosa and Zulu, European languages such as Catalan and Basque, and Southeast Asian languages such as Iloko and Cebuano. For many of these, even monolingual resources on the internet are limited, which strongly affects the quantity and quality of mined data. Using curated data, possibly supplemented by mining, may provide a starting point for future improvement. For example, several resources for African languages exist, including JW300~\citep{agic-vulic-2019-jw300} used in the \textsc{masakhane} machine translation effort~\citep{orife2020masakhane} and datasets for Nigerian Pidgin~\citep{ahia2020towards}, Wolof~\citep{alla2020using}, Fon~\citep{emezue2020ffr}, Igbo~\citep{ezeani2020igbo}, Amharic, Tigrigna, Afan-Oromo, Wolaytta, and Ge'ez~\citep{abate2018parallel}. Other lines of work present resources for low-resource Asian languages, such as the ALT project~\citep{riza2016introduction,ding2016similar}, Mongolian, Uyghur, and Tibetan~\citep{anonymous2020}, or strategies for improvement on specific directions~\citep{chen2019facebook}. Further research is required to bring together small datasets of higher quality translations, mined data, and monolingual resources to create improved translation systems for very low resource languages.
\section{Conclusion}
We introduced M2M-100, a new Many-to-Many multilingual translation model that can translate between the 9,900 directions of 100 languages.
The underlying dataset was mined from CommonCrawl using a novel strategy which exploits language groupings to avoid mining every possible direction while maintaining good accuracy.
Such a large dataset requires models with increased capacity; to this end, we explored scaling the number of parameters both densely and sparsely, the latter by introducing language-specific parameters trained with a novel random re-routing scheme.
Results show that M2M-100 outperforms English-Centric multilingual models trained on data where either the source or target language is English.
The system improves over 10 BLEU on average compared to an English-Centric baseline when translating directly between non-English directions.
M2M-100 is competitive to bilingual models from WMT and improves over existing publicly available multilingual translation systems.
Human judges indicate that our model translates fluently with high semantic accuracy.
\input{results.tex}
\acks{We thank Yuqing Tang and Peng-Jen Chen for their work on the multilingual translation infrastructure in fairseq. We thank Xian Li, Chau Tran, Yuqing Tang, Peng-Jen Chen, and Marc'Aurelio Ranzato for insightful conversations. We thank our volunteer human evaluators for closely examining the translation quality of our models through various directions.}
\section{Components for Scaling Multilingual Translation Models}
\label{sec:model}
Our goal is to build a single model capable of translating $9,900$ language directions covering $100$ languages.
This creates several challenges, as models need sufficient capacity to adequately capture that many languages and scripts.
To this end, previous MMT work has considered different types of large capacity models~\citep{arivazhagan2019massively,lepikhin2020gshard}.
In this section, we investigate different ways to add capacity to an MMT model:
we first investigate dense scaling, where we increase the depth and width of standard Transformer architectures.
Then, we identify disadvantages of dense scaling, and propose an alternative to effectively add \textit{language-specific} parameters and exploit the nature of language similarities within the task of multilingual machine translation.
\subsection{Dense Scaling}
\subsubsection{Background: Model Parallel Training}
During the training of a neural network, we need to fit its weights, activations, gradients, and optimizer state in memory.
This restricts the maximum capacity of a network that we can train on a single accelerated device such as a GPU.
In this section, we describe two directions to circumvent this limitation.
The first direction focuses on fitting a larger model on a single device by reducing the memory required by activations and optimizer states during the training process. The second direction focuses on efficient training of even larger models through model parallelism, i.e., splitting a model across multiple devices.
In this work, we pursue both techniques to densely scale the capacity of Transformers.
\paragraph{Reducing Memory Consumption on a GPU.} To reduce the amount of memory, we consider optimizer state sharding and gradient checkpointing.
Optimizer state sharding~\citep{rajbhandari2019zero} divides the optimizer state across distributed data parallel workers so that each worker only needs to store a fraction of the optimizer state.
We also apply gradient checkpointing, which saves memory by discarding intermediate activations before the forward pass finishes~\citep{chen2016grad}.
During the backward pass, these activations are recomputed again as required.
This trades time for memory.
In the case of a Transformer based architecture, applying gradient checkpointing at pipeline parallel model partition boundaries reduces the memory used by activations by almost 50\%.
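As an illustration, here is a minimal PyTorch sketch of gradient checkpointing applied layer by layer; the module structure is hypothetical and stands in for a Transformer encoder stack.
\begin{verbatim}
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(torch.nn.Module):
    def __init__(self, layers):
        super().__init__()
        self.layers = torch.nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            # Activations inside `layer` are discarded after the
            # forward pass and recomputed during the backward pass,
            # trading compute time for memory.
            x = checkpoint(layer, x)
        return x
\end{verbatim}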
\paragraph{Models Sharded across Multiple GPUs.}
Reducing the memory consumption enables fitting greater model capacity on a single GPU, but the physical limitations of a single device still apply.
A solution is to split the model into separate components that are dispatched across different GPUs and trained in parallel.
This type of solution scales model capacity with the number of GPUs.
There are two broad paradigms to split a model: along the width or along the depth.
Tensor parallelism~\citep{shoeybi2019megatron,shazeer2018mesh} splits by width, while pipeline parallelism~\citep{huang2019gpipe,kim2020torchgpipe} splits by depth, placing different layers on different GPUs.
We use pipeline parallelism, but both methods work equally well with Transformers.
We use the implementation from \texttt{fairscale}\footnote{\url{https://github.com/facebookresearch/fairscale}}.
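A minimal sketch of depth-wise splitting with \texttt{fairscale}'s \texttt{Pipe} wrapper on a toy stack of layers follows; the layer sizes, partitioning, and micro-batch count are illustrative.
\begin{verbatim}
import torch
from fairscale.nn import Pipe

# Toy model split by depth across two GPUs: the first two modules
# form one partition, the last two another.  `chunks` splits each
# batch into micro-batches so both devices stay busy.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
)
model = Pipe(model, balance=[2, 2], chunks=4)
\end{verbatim}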
\begin{table}[t]
\centering
\includegraphics[width=0.98\linewidth]{bleu_by_size.pdf}
\captionof{figure}{
\textbf{(left) Comparison between deep versus wide models.}
We compare the performance in BLEU for different wide and deep models as a function of their words per second (WPS) at training time, evaluating on a subset of 38 directions.
\textbf{(right) Performance of wide models for different parameter sizes.}
We compare the performance of different wide models on different pairs at low, mid, and high resource levels, evaluating on all supervised evaluation pairs.
The white lines indicate comparisons between the different models at the same resource level.
\label{fig:dense}
}
\end{table}
\subsubsection{Training Large Dense Models}
We investigate several strategies to increase the capacity of a sequence-to-sequence Transformer model in the context of multilingual machine translation.
\paragraph{How to Scale: Wide or Deep?}
We consider increasing the capacity of a Transformer by either increasing the number of layers (depth axis) or the dimensions of each layer, including the feedforward (width axis).
On the left panel of Figure~\ref{fig:dense}, we analyze which axis to prioritize by comparing models with different sizes, $1$B, $2$B, and $10$B, obtained by growing their depth or width (see Appendix~\ref{app:dense} for model configurations and dimensions).
We report their performance in BLEU and their inference speed measured in words per second (WPS).
We train these models on a dataset that covers $80$ languages and evaluate them on $38$ different benchmark directions with more than $1$k parallel sentences per direction.
The main result of this study is that wider models scale better than deeper models in terms of performance and WPS.
In the rest of this paper, we thus focus on wider models.
\paragraph{Performance as a function of scale.}
In the right panel of Figure~\ref{fig:dense}, we compare the performance of wide models as we increase their capacity from $418$M to $12$B parameters.
We train these models on the full set of $100$ languages and evaluate them on all supervised evaluation pairs.
We report their performance in BLEU for pairs with either low, mid or high resource training data.
First, as we increase the number of parameters, we observe that the performance increases, \textit{even on low-resource pairs}.
This suggests that even a $12$B parameter model could be underfitting on our many-to-many multilingual dataset.
However, improvements increase roughly logarithmically in the number of parameters, and we need to scale model size by an order of magnitude to improve by a few BLEU points, e.g., $+1.5$ BLEU from $1.2$B to $12$B.
As we scale models densely, their runtime and memory usage become too prohibitive to justify the gain in performance, and so we consider alternatives to increase the capacity of our models more efficiently.
\subsection{Scaling Model Capacity with Language-Specific Parameters}
\label{sec:lang_specific}
In this section, we introduce a layer whose parameters are split by language or language group based on similarity in vocabulary.
Each translation direction only accesses a subset of these parameters, allowing the model capacity to scale without significantly affecting the training and inference time.
The layer is trained with a novel re-routing scheme to improve generalization which we detail below. Compared to previous work~\citep{wang-etal-2018-three,bapna2019simple,zhang2020improving}, we focus on allocating entire language-specific layers and using this to scale model size while maintaining training speed.
\paragraph{Parallel Transformer Layer.}
We follow the sequence-to-sequence Transformer architecture and replace some of its layers by a set of parallel Transformer layers, one for each pre-defined group of languages.
More precisely, assuming we have split the languages into $K$ fixed groups, this parallel layer is composed of $K$ parallel Transformer sublayers, one per language group.
For each translation, we then select the corresponding sublayer among the $K$ possibilities depending on the language direction.
If the parallel layer is in the encoder, we select the sublayer according to the source language, while if it is in the decoder, we select according to the target language.
In practice, we only add these layers to either the encoder or decoder, not both.
This enables us to split translations along with their sublayers per GPU, leading to faster training and efficient memory usage.
Figure~\ref{fig:langspec_fig0} shows an example of the resulting \textit{trunk-and-branch} architecture when the parallel layer is in the decoder.
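A minimal PyTorch sketch of such a parallel layer follows; \texttt{make\_layer} is an assumed factory for a standard Transformer sublayer, and routing by \texttt{group\_id} stands in for selection by source or target language.
\begin{verbatim}
import torch

class ParallelDecoderLayer(torch.nn.Module):
    """One Transformer sublayer per language group."""
    def __init__(self, make_layer, num_groups):
        super().__init__()
        self.sublayers = torch.nn.ModuleList(
            [make_layer() for _ in range(num_groups)]
        )

    def forward(self, x, group_id):
        # Deterministic routing: each direction uses exactly one
        # sublayer, so inference cost does not grow with the
        # number of groups.
        return self.sublayers[group_id](x)
\end{verbatim}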
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{lang_spec.png}
\caption{\textbf{Language-Specific Parameters} provide specialized capacity to an otherwise fully shared multilingual encoder and decoder.}
\label{fig:langspec_fig0}
\end{figure}
\paragraph{Grouping Languages by Frequency and Similarity.}
We group languages based on two criteria: the amount of training data and their vocabulary.
The motivation for these criteria is that we can learn a specific layer for a language with enough data, and for the rest, overlapping vocabulary is a good proxy for similar languages.
First, each language with more than $100$M sentences forms its own group and hence has its own sublayer.
We have $28$ languages that meet this criterion: hu, ru, hi, ro, fr, nl, fi, pt, ar, el, vi, en, ms, tr, he, id, pl, cs, sv, fa, zh, bg, de, es, ko, ja, it, da.
Second, we group the remaining languages by vocabulary overlap, leading to $18$ additional groups. To create these groupings, we calculate the vocabulary overlap between the training data of different languages and cluster those that have high overlap together. Note that some low resource languages have their own script --- such as Kannada --- and are not clustered with any similar languages as the script is unique. However, to maintain balance between groups~\citep{wang2020balancing}, we cluster the remaining languages together and roughly balance the amount of training data for each group.
In total, we form $46$ groups, each with its own sublayer in a language-specific layer.
\paragraph{Random Re-Routing between Sublayers.}
During training and inference, a sublayer is deterministically selected according to its language direction.
This guarantees that our model always uses the same memory and time during inference, regardless of the translation pair.
However, during training, this deterministic routing does not share information between similar languages that are not associated with the same sublayer.
For example, the sublayer associated with Ukrainian does not benefit from the large quantity of Russian training data, since Russian has its own isolated sublayer.
We mitigate this shortcoming by \textit{random re-routing} of translations, i.e., randomly picking another sublayer instead of the designated one.
This shares information between languages associated with different sublayers, benefiting low resource languages by training on similar high resource languages. The re-routing is completely random, though it could be restricted to re-route only to similar languages.
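A sketch of the routing rule with random re-routing follows; the $20\%$ default rate mirrors the setting studied below and is otherwise illustrative.
\begin{verbatim}
import random

def route(group_id, num_groups, rerouting_rate=0.2, training=True):
    # With probability `rerouting_rate` during training, route to a
    # random sublayer instead of the assigned one, so sublayers
    # share information across language groups.
    if training and random.random() < rerouting_rate:
        return random.randrange(num_groups)
    return group_id
\end{verbatim}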
\paragraph{Adding Language-Specific layers to Pre-Trained Transformers.}
We can integrate a language-specific layer into an already pre-trained Transformer by adding it either at the end of the decoder or at the beginning of the encoder.
We can then freeze the parameters of the pre-trained Transformer and learn the language-specific components. These additional language-specific layers train rapidly as the rest of the model already has strong performance.
This strategy means it is straightforward to adapt pre-trained networks to a new domain or language by training a small number of dedicated parallel layers, and could easily be extended to various other applications.
\subsubsection{Evaluation of the Language-Specific Layer}
We experiment with different scenarios by adding a language-specific layer to the encoder or decoder, or to a pre-trained densely scaled model. We demonstrate the importance of random re-routing.
Finally, we validate this strategy by comparing it to scaling models densely.
\paragraph{Parallel layer in Encoder or Decoder?}
The trunk-and-branch architecture for language-specific layers is general and can be used to specialize capacity for any neural architecture. We explore adding language-specific capacity in the encoder or decoder using a smaller setting of 10 high-resource languages. Table~\ref{tab:langspec_scaling_encdec} shows that language-specific parameters are generally more effective when applied to the decoder network.
Recent studies show that encoders are more important for bilingual machine translation~\citep{wu2019pay,kasai2020deep}; however, these studies consider systems that model only a single language direction, unlike our setting.
In our case, adding capacity to the encoder or the decoder does not impact performance significantly, and we focus on the decoder for the rest of this paper.
\begin{table}[t]
\begin{minipage}{0.47\textwidth}
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{lc c}
\toprule
\bf Model & \bf Params & \bf BLEU\\
\midrule
Language Specific Enc & 540M & 17.1 \\
& 920M & 17.5 \\
\midrule
Language Specific Dec & 540M & 17.3 \\
& 920M & 17.8 \\
\bottomrule
\end{tabular}
\caption{
\textbf{Comparing Language-Specific Encoders and Decoders}.
We add parallel language-specific layers to either the encoder or decoder, with different sizes.
}
\label{tab:langspec_scaling_encdec}
\end{minipage}
\hfill
\begin{minipage}{0.5\textwidth}
\vspace{-0.15cm}
\centering
\includegraphics[width=\linewidth]{rerouting.pdf}
\captionof{figure}{\textbf{Impact of Re-routing Rate} on the performance of high resource and low/mid resource languages.}
\label{tab:rerouting}
\end{minipage}
\end{table}
\paragraph{Random Re-routing.}
Figure~\ref{tab:rerouting} shows the impact of the re-routing strategy on performance as we increase the number of training samples routed to random parallel layers as opposed to their assigned layers.
With a re-routing rate of 20\%, an improvement of about 0.8 BLEU can be achieved over no re-routing for low and mid resource languages, without affecting the performance of high resource languages.
Too much stochasticity leads to performance similar to no random re-routing for high resource languages, but still improves mid to low resource performance compared to no re-routing.
\begin{table}[t]
\setlength{\tabcolsep}{5.5pt}
\centering
\small
\begin{tabular}{lll c ccc c c c}
\toprule
& & &~~& \multicolumn{3}{c}{\bf Supervised} && \bf All\\
\cmidrule{5-7}
\bf Model & \bf Params & \bf WPS && \bf Low & \bf Mid & \bf High && \bf Avg\\
\midrule
Dense Transformer & 1.2B & 40K && 10.1 & 23.4 & 37.5 && 17.5 \\
Dense Transformer & 3B & 20K && 10.3 & 23.8 & 38.0 && 17.9 \\
Dense Transformer & 12B & 3.5K && \bf 11.8 & 24.2 & 39.9 && 18.6 \\
\midrule
Dense Transformer 1.2B \\
~~~with 1 Language-Specific Layer & 1.9B & 38K && 10.7 & 24.1 & 38.5 && 18.1 \\
~~~with 3 Language-Specific Layers & 3.5B & 34K && 10.6 & \bf 24.7 & 39.5 && 18.8 \\
~~~with 6 Language-Specific Layers & 10B & 26K && 10.5 & 24.7 & \bf 40.3 && \bf 19.2 \\
\bottomrule
\end{tabular}
\caption{\textbf{Scaling Model Size with Language-Specific Parameters}.
We start with a $1.2$B parameter baseline with $24$ encoder layers and $24$ decoder layers.
We add increasingly more decoder layers to language specific layers.
For example, in the case of 1 language-specific decoder layer, the decoder has 23 shared layers and 1 language-specific layer.
We demonstrate the effect of using 1, 3, and 6 language specific layers. The additional parameters for language-specific layers are split across all language groups. We report WPS at training time holding the batch size fixed on 8 GPUs. The 12B baseline uses model parallelism.
}
\label{tab:langspec_scaling}
\end{table}
\paragraph{Comparison with Large Dense Models.}
We compare adding language specific capacity with densely scaling model size in Table~\ref{tab:langspec_scaling} on $100$ languages.
As language-specific layers add many parameters, we compare to baseline models at various sizes for multiple points of comparison.
Our conclusion is that language-specific layers improve results compared to baselines of similar parameter size, particularly for mid and high resource languages where there is sufficient data to train the language-specific sublayers. Further, compared to dense scaling, sparse scaling only uses a fraction of the parameters in each forward pass, which maintains fast training speed despite large total model size.
\paragraph{Adding Language-Specific Layers to a Pre-Trained Model.}
We demonstrate the impact of adding language-specific layers to the decoder of a pre-trained $12$B parameter Transformer in Figure~\ref{fig:dense_sparse}. We show that adding language-specific layers for five languages improves results on the WMT evaluation datasets.
The language-specific layer adds $3.4$B parameters and we train it for $20$K updates with the rest of the network frozen. The total size of this model is $15.4$B parameters.
For several directions, we observe gains of more than 1 BLEU, which validates this strategy. On average, we observe gains of 0.9 BLEU.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{bleu_improvement.pdf}
\caption{\textbf{BLEU Improvement using Dense + Sparse} over Dense alone. We display evaluation on a subset of pairs using the WMT evaluation datasets.}
\label{fig:dense_sparse}
\end{figure}
\section{Introduction}
\subsection{Background and Our Results}
The population recovery problem was introduced by Dvir et al \cite{DRWY} and also studied by Wigderson and Yehudayoff \cite{WY}. To describe this basic statistical problem, we will borrow an example from \cite{WY}:
\begin{quote}
Imagine that you are a paleontologist, who wishes to determine the population of dinosaurs that roamed the Earth before the hypothesized meteor made them extinct. Typical observations of dinosaurs consist of finding a few teeth of one here, a tailbone of another there, perhaps with some more luck a skull and several vertebrae of a third, and rarely a near complete skeleton of a fourth....Using these fragments, you are supposed to figure out the population of dinosaurs, namely a complete description of (say) the bone skeleton of each species and the fraction of each species occupied in the entire dinosaur population.
\end{quote}
To make this precise, suppose there is an unknown distribution $\pi$ over binary strings of length $n$.
We are given samples from the following model:
\begin{itemize}
\item Choose a string $a$ according to $\pi$
\item Replace each coordinate with a ``?'' independently with probability $1 - \mu$
\end{itemize}
\noindent The goal is to reconstruct the distribution up to an additive error $\varepsilon>0$.
We would like to output a set of strings $S$ and for each string $a$ in $S$, an estimate $\widetilde{\pi}(a)$
of $\pi(a)$ with the requirement that each of these estimates is within $\varepsilon$ of $\pi(a)$,
and for every string $a \not\in S$, $\pi(a)$ must be at most $\varepsilon$. This formulation of the problem is adapted from \cite{BIMPS}; in
the original version in \cite{DRWY} the support size of the distribution (which we will denote by $k$) is also a parameter.
We remark that the maximum likelihood estimator can be computed efficiently using a convex program \cite{BRW}. Yet the challenge is in showing that few samples are needed {\em information theoretically}. We will see another instance of this type of issue in our paper: our approach is based on the notion of a `robust local inverse' (defined later) \cite{DRWY}, which is easy to compute and the challenge is in showing that a good robust local inverse {\em exists} for any fixed $\mu > 0$.
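As an illustration, here is a minimal sketch of the observation model, where \texttt{pi} is assumed to be a dictionary mapping binary strings to their probabilities.
\begin{verbatim}
import random

def lossy_sample(pi, mu):
    # Draw a string according to pi, then erase each coordinate
    # independently with probability 1 - mu.
    strings, weights = zip(*pi.items())
    a = random.choices(strings, weights=weights)[0]
    return "".join(c if random.random() < mu else "?" for c in a)

# Example: lossy_sample({"0101": 0.5, "1110": 0.5}, mu=0.3)
\end{verbatim}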
Dvir et al \cite{DRWY} gave a polynomial time algorithm for lossy population recovery for any $\mu \gtrapprox 0.365$; their analysis was improved
by Batman et al. \cite{BIMPS}, who showed that the same algorithm works for any $\mu > 1 - 1/\sqrt{2} \approx 0.30$.
Wigderson and Yehudayoff \cite{WY} gave an alternate approach based on a method termed ``partial identification" that runs in time quasi-polynomially in the support size $k$ for any fixed $\mu > 0$.
Interestingly, Wigderson and Yehudayoff \cite{WY} show that their framework cannot be used to get a polynomial time algorithm (and the number of samples needed is at least $k^{\log\log k}$). In fact, their algorithm works even in the presence of corruptions, not just erasures (whereas ours does not).
A generalization of the population recovery problem was introduced in the seminal work of Kearns et al \cite{KMRRSS}, which they called the problem of learning mixtures of Hamming balls: Again, we choose a string $a$ according to $\pi$ but now each bit in $a$ is flipped with probability $\eta_a < 1/2$ and this probability is allowed to depend on $a$. Kearns et al \cite{KMRRSS} give algorithms for the special case in which each flip probability is the same (which is exactly the noisy population recovery problem) and their algorithms run in time exponential in the support size $k$. This is an interesting phenomenon in learning distributions, that for many problems we do not know how to achieve a running time that is sub-exponential in the number of components. For example, this is the case when learning mixtures of product distributions \cite{FOS}, learning juntas \cite{MOS}, learning decision trees \cite{EH}, and learning mixtures of Gaussians \cite{MV}, \cite{BS}. In fact, many of these problems are inter-reducible \cite{FGKP}, so with this context it is interesting that the population recovery problem is one positive example where we can avoid exponential dependence on $k$. But is there a polynomial time algorithm?
Here we give an estimator that solves the population recovery problem, such that for any fixed $\mu>0$,
the running time and number of samples needed is polynomial in $n$ and $1/\varepsilon$.
\begin{theorem}
\label{pop rec alg}
There is an efficient algorithm for the population recovery problem whose running time and number of samples needed is
$O((n/\varepsilon )^{2f(\mu)})$ where $f(\mu) = \frac{1}{\mu} \log \frac{2}{\mu} + O(1)$.
\end{theorem}
The population recovery problem arose naturally from the investigation of one of the central problems in
learning theory,
learning DNFs. The best known algorithm for learning DNFs in Valiant's PAC learning model \cite{V} runs in time (roughly) $2^{n^{1/3}}$ \cite{KS}.
Dvir et al \cite{DRWY} introduced a new model called restriction access that can be thought of as an interpolation between black box and white box access to the function: Each example consists of a restriction of the unknown DNF obtained by fixing a random $1-\mu$
fraction of the input variables. Dvir et al \cite{DRWY} showed how to reduce the problem of learning a
$k$-term DNF on $n$ variables to solving an instance of the population recovery problem on strings of length $n$ and support size $k$. Previous
algorithms for population recovery yield a polynomial time algorithm for learning DNFs in the restriction access model for any $\mu > 1-1/\sqrt{2}$ \cite{DRWY,BIMPS}, and Wigderson and Yehudayoff \cite{WY} obtain a quasi-polynomial time algorithm that runs in time $k^{\log k}$ (where $k$ is the number of clauses). Combining the reduction of \cite{DRWY} with
Theorem \ref{pop rec alg} immediately gives:
\begin{theorem}
\label{dng alg}
There is an efficient algorithm to PAC learn DNFs in the restriction access model for any $\mu > 0$. The running time and number of samples needed is $O((n/\varepsilon)^{2f(\mu)} \mathrm{poly}(n,k))$ where $f(\mu) = \frac{1}{\mu} \log \frac{2}{\mu}$ and the algorithm succeeds with high probability.
\end{theorem}
\noindent The main open question in this paper is whether there is a polynomial time algorithm for {\em noisy} population recovery (see Section~\ref{sec:open}). If the goal was to learn a distribution that is close to the true distribution (rather than more stringent goal of learning its parameters), the maximum likelihood estimator would suffice. But the main obstacle here is showing that for two distributions whose parameters do not match, their statistical distance is noticeably large.
\subsection{The Robust Local Inverse}
\label{RLI}
At a more philosophical level, what makes the population recovery problem particularly interesting is that in order to give an efficient estimator we need to solve a certain inverse problem despite the fact that the corresponding matrix has many exponentially small eigenvalues.
As will be reviewed in the next section, Dvir et al \cite{DRWY} showed that the population
recovery problem can be reduced to a problem of the following form: We have two unknown probability distributions
$\pi$ and $\phi$ over the domain $\{0,\ldots,n\}$, which when viewed as vectors indexed by
$\{0,\ldots,n\}$ are related by the equation:
$$\phi^T=\pi^TA$$
where $A$ is a known (row) stochastic invertible matrix indexed by $\{0,\ldots,n\}$.
We want to estimate $\pi(0)$ but we only have access to samples chosen according to the
distribution $\phi$. We would like the running time and number of samples needed to be (at most) a polynomial in $n$ and $1/\varepsilon$.
Let $u$ denote the first column of $A^{-1}$. Then: $$\pi(0)=\phi^T u$$
So, if we knew the vector $\phi$ {\em exactly}, we could use it to recover $\pi(0)$ exactly.
But we do not know $\phi$. We can estimate $\phi$ from random samples in the obvious way: let $\widetilde{\phi}(j)$ be the fraction of observed samples that are equal to $j$.
We might then hope that $\widetilde{\pi}(0)=\widetilde{\phi}^T u$ is a good estimate to $\pi(0)$. We will refer to this
as the {\em natural estimator} for $\pi(0)$.
The
error $|(\widetilde{\phi}-\phi)^T u|$ of this estimator is at most $n\|\widetilde{\phi}-\phi\|_{\infty} \|u\|_{\infty}$.
Thus to obtain estimation error $\varepsilon$, it is enough that
\[
\|\widetilde{\phi}-\phi\|_{\infty} \leq \frac{\varepsilon}{n\|u\|_{\infty}}
\]
and the Chernoff-Hoeffding bound says that $C\log(n)(\|u\|_{\infty}n /\varepsilon)^2$ samples are enough so that
the probability of exceeding the desired error is less than $e^{-C}$.
To ensure that this is not too many samples, we want that $\|u\|_{\infty}$ is polynomially bounded.
For the estimation problem that is derived from population recovery, it turns out that $\|u\|_{\infty} \leq 1$ provided that $\mu \geq 1/2$
and so in this case the natural estimator yields an accurate estimate from a polynomial number of samples. But if $\mu < 1/2$, then $\|u\|_{\infty}$ is
exponentially large in $n$ and this estimator requires exponentially many samples to be at all accurate.
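This blow-up is easy to see numerically. The following sketch computes $u=A^{-1}e_0$ for the particular binomial matrix $A$ that arises from population recovery (derived in Section~\ref{sec:rli}); the dimensions and values of $\mu$ are illustrative:
\begin{verbatim}
import numpy as np
from math import comb

def first_column_of_A_inverse(n, mu):
    # A[i, j] = C(i, j) mu^j (1 - mu)^(i - j); u = A^{-1} e_0 is the
    # vector defining the natural estimator.
    A = np.array([[comb(i, j) * mu**j * (1 - mu)**(i - j) if j <= i
                   else 0.0 for j in range(n + 1)] for i in range(n + 1)])
    e0 = np.zeros(n + 1); e0[0] = 1.0
    return np.linalg.solve(A, e0)

for mu in (0.6, 0.5, 0.4, 0.3):
    u = first_column_of_A_inverse(20, mu)
    print(f"mu={mu}: ||u||_inf = {np.abs(u).max():.3g}")
\end{verbatim}
The sup-norm stays at most $1$ down to $\mu=1/2$ and then grows exponentially in $n$, exactly as described above.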
What else can we do? The vector $u$ has entries that are too large, so Dvir et al \cite{DRWY} suggested replacing $u$ by another vector $v$ whose entries are not too
large and such that $\pi^TA v $ is close to $\pi^TA u$
for all distributions $\pi$. Remarkably, Dvir et al \cite{DRWY} managed to construct such a $v$ which works for $\mu \gtrapprox 0.365$ (the analysis was subsequently improved to $\mu \gtrapprox 0.3$ \cite{BIMPS}), which in turn yields a polynomial time algorithm
for the population recovery problem even in cases when the natural estimator fails!
Since $\pi^T A u = \pi(0)$, it follows that what we really want is to find a vector $v$ so that
$$\|A v - e_0\|_{\infty} \leq \varepsilon$$
where $e_0$ is the indicator vector for zero (i.e. its first entry is one and the rest are all zero). And furthermore we want $\|v\|_\infty$ to be as small as possible.
A vector satisfying the above condition is called an {\em $\varepsilon$-local inverse} for $A$ at $e_0$, and we will refer to $\|v\|_{\infty}$ as the
{\em sensitivity} of $v$.
If we can find a $v$ whose sensitivity is at most $\sigma$, then $\mathrm{poly}(n,1/\varepsilon,\sigma)$ samples
suffice to get an estimate ($\widetilde{\phi}^T v$) to $\pi(0)$ that is within an additive $\varepsilon$.
Geometrically, a local inverse is obtained by taking $A^{-1}e$ where $e$ is a small perturbation of the vector $e_0$, which
is chosen so that $A^{-1}e$ has small norm even though $A^{-1} e_0$ does not. What controls the behavior of
$A^{-1}e$ is the representation of $e$ in the basis of singular vectors of $A$. In choosing $e$ we want to
remove from $e_0$ the components corresponding to tiny singular values, which will ensure that the sensitivity of $v=A^{-1}e$ is not too large.
We are hoping that the weight on these deleted components is small so that
the result is a good local inverse.
The problem of finding the $\varepsilon$-local inverse of minimum sensitivity for a particular matrix $A$ can be expressed directly as a linear program whose variables are the vector $v$ and the sensitivity $\sigma$:
\begin{eqnarray}
\label{LP}
& \min \sigma & \\
\nonumber
Av & \geq & e_0-\varepsilon \bold{1} \\
\nonumber
-Av & \geq & -e_0-\varepsilon \bold{1}\\
\nonumber
v+\sigma \bold{1} & \geq & \bold{0}\\
\nonumber
-v + \sigma \bold{1} & \geq & \bold{0}
\end{eqnarray}
The solution $v$ can be used to estimate $\pi(0)$ from $\widetilde{\phi}$, where the number of samples depends on $\sigma$, as above. Note that the matrix $A$ depends on $\mu$. Our main contribution is to prove that there is a good solution to the above linear program for any $\mu > 0$.
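The linear program (\ref{LP}) is small enough to hand directly to an off-the-shelf solver. The following is a minimal sketch (solver choice and variable ordering are illustrative), instantiated with the binomial matrix $A$ derived in Section~\ref{sec:rli}:
\begin{verbatim}
import numpy as np
from math import comb
from scipy.optimize import linprog

def local_inverse(n, mu, eps):
    m = n + 1
    A = np.array([[comb(i, j) * mu**j * (1 - mu)**(i - j) if j <= i
                   else 0.0 for j in range(m)] for i in range(m)])
    e0 = np.zeros(m); e0[0] = 1.0
    I, col = np.eye(m), np.ones((m, 1))
    # Variables x = (v_0, ..., v_n, sigma); minimize sigma.
    c = np.zeros(m + 1); c[-1] = 1.0
    # linprog expects A_ub @ x <= b_ub, so each >= constraint is negated.
    A_ub = np.block([[ A, np.zeros((m, 1))],   #  A v <=  e0 + eps*1
                     [-A, np.zeros((m, 1))],   # -A v <= -e0 + eps*1
                     [ I, -col],               #  v - sigma*1 <= 0
                     [-I, -col]])              # -v - sigma*1 <= 0
    b_ub = np.concatenate([e0 + eps, -e0 + eps, np.zeros(2 * m)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)])
    return res.x[:m], res.x[m]

v, sigma = local_inverse(n=20, mu=0.4, eps=0.05)
print("sensitivity:", sigma)
\end{verbatim}
Of course, solving the linear program was never the issue; the content of this paper is in proving that its optimum is small for every fixed $\mu>0$.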
The approach in Dvir et al \cite{DRWY} and in Batman et al \cite{BIMPS} was to guess a solution to the above linear program and bound its sensitivity. Instead, we consider the dual (maximization) problem
and prove an upper bound on its maximum.
After some work, the dual problem becomes a problem of finding a polynomial $p$ of degree $n$ so as to maximize $p(0)-\varepsilon\|p\|_1$, where $\|\cdot\|_1$ denotes the sum of the absolute values of the coefficients,
subject to the constraint that the translated polynomial $q(x)=p(1+\mu(x-1))$ has $\|q\|_1 \leq 1$. Bounding this maximum from above is then reduced to a problem of
showing that if $p$ is a polynomial (indeed any holomorphic function)
on the complex plane and there exists a disk of nontrivial diameter where $|p(z)|$ is much smaller than $|p(0)|$ then
the maximum of $|p(z)|$ on the unit circle must be much larger than $|p(0)|$. This final result can be viewed as a kind of uncertainty principle
and is proved using tools from complex analysis (the Hadamard 3-circle theorem, and the M\"obius transform).
\section{Reductions for Population Recovery}\label{sec:rli}
Here we describe (informally) the reduction of Dvir, et al. \cite{DRWY} from the population recovery problem to the problem of constructing
a robust local inverse for a certain matrix $A$ (whose entries depend on $\mu$): Recall that if we choose a string $a \in \{0,1\}^n$ (according to $\pi$), the
observation is a (random) string in $\{0,1,?\}^n$ obtained from $a$ by
replacing each $a_i$ with `?' independently with probability $1-\mu$.
The first observation of Dvir et al \cite{DRWY} is that we may as well assume that we know all of the strings $a$ whose probability $\pi(a)$ is at least $\Omega(\varepsilon)$.
Of course, in the population recovery problem both the strings and their probabilities are unknown, so how can we
reduce from the case when everything is unknown to the case where at least the set of strings with large probability is known?
Suppose we ignore all but the first $n'$ coordinates; then we get an instance of the population recovery problem on length $n'$ strings. In particular, the
probability $\pi(a')$ of a length $n'$ string is the total probability of all length $n$ strings $a$ whose first $n'$ coordinates are exactly $a'$.
Now the rough idea is that we can incrementally solve the population recovery problem on longer and longer prefixes, each time we increase the length of the
prefix by one we at most double the number of candidate strings. The crucial insight is that we can always prune the set of strings because we never need to
keep a prefix whose total probability is less than $\varepsilon$.
The second observation of Dvir et al \cite{DRWY} is that if all the strings are known, then it suffices to estimate $\pi(0)$ within an additive $\varepsilon$. This type of reduction
is standard: given a string $a$, we can take each observation and XOR it with $a$ but keeping the symbol `?' unchanged. The samples we are given can be thought of
as samples from an instance of population recovery where every string is mapped to its XOR with $a$, and so we can recover $\pi(a)$ by finding the probability of the all zero
string in this new instance of the problem.
The final simplification is: suppose we ignore the locations of the ones, zeros and question marks in the samples but only recover the number of ones. Then we can map the probability
distribution $\pi$ to a length $n + 1$ vector where $\pi(i)$ is the total probability of all strings with exactly $i$ ones. What is the probability that we observe $j$ ones (and the remaining symbols are
zeros or question marks) given that the sample $a$ had $i$ ones? This quantity is exactly:
$$
A_{i, j} = {i \choose j} \mu^{j} (1-\mu)^{i-j}
$$
So if we only count the number of ones in each observation, we are given random samples from the distribution $\pi^T A$. Hence, if our goal is to recover the probability $\pi(0)$ assigned to the all zero string, and we ignore {\em where} the zeros, ones and question marks occur in our samples, we are faced with a particular matrix $A$ (whose entries depend on $\mu$) for which we would like to construct a robust local inverse.
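A quick simulation confirms this binomial form (the values of $\mu$ and $i$ below are arbitrary):
\begin{verbatim}
import numpy as np
from math import comb

rng = np.random.default_rng(1)
mu, i = 0.4, 4                        # a sample string with i ones
counts = np.zeros(i + 1)
for _ in range(200_000):
    j = (rng.random(i) <= mu).sum()   # each '1' survives erasure w.p. mu
    counts[j] += 1
empirical = counts / counts.sum()
predicted = [comb(i, j) * mu**j * (1 - mu)**(i - j) for j in range(i + 1)]
print(np.round(empirical, 3))
print(np.round(predicted, 3))
\end{verbatim}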
\begin{definition}
Let $\sigma_n(\mu,\varepsilon)$ denote the minimum sensitivity of an $\varepsilon$-local
inverse (i.e. the optimum value of (\ref{LP})).
\end{definition}
The following family of vectors will play a crucial role in our analysis:
$$v^{\alpha}= \Big [1, \alpha, \alpha^2, \ldots, \alpha^{n}\Big ]$$
Then it can be checked that setting $\alpha = -\frac{1-\mu}{\mu}$ is the natural estimator (i.e. $v^{\alpha} = A^{-1}e_0$) and the sensitivity of this estimator is exponentially large
for $\mu < 1/2$. We prove:
\begin{theorem}
\label{sigma bound}
For all positive integers $n$ and $\mu,\varepsilon >0$ we have $\sigma_n(\mu,\varepsilon) \leq (1/\varepsilon)^{f(\mu)}$
where $f(\mu)=\frac{1}{\mu} \log \frac{2}{\mu}$.
\end{theorem}
Theorem \ref{pop rec alg} follows since as discussed in Section \ref{RLI}, the number of samples we
need to obtain the desired approximation with high probability when using the best local
inverse is $\sigma_n(\mu,\varepsilon)^2 \mathrm{poly}(n,1/\varepsilon)$.
\section{A Transformed Linear Program}
As outlined earlier, the problem of finding an $\varepsilon$-local inverse can be expressed
as a linear programming problem whose objective is to minimize the sensitivity. We want to prove an upper bound
on the value of the solution, and we will accomplish this by instead bounding the maximum objective function of the dual.
However, before passing to the dual we will apply a crucial change of basis to the linear program. The reason we do this is so that the dual can then be interpreted
as a certain maximization problem over degree $n$ polynomials.
We will choose $n+1$ values $\alpha_0, \alpha_1, \ldots, \alpha_n$ (as we'll see the particular values won't matter)
and we will consider the estimators $v^{\alpha_i}$ defined in the previous section. We will abuse notation and refer to this estimator
as $v^i$. Since this family forms a basis, we can write any local inverse $v$ in the form $v = \sum_{i=0}^n \lambda_i v^i$.
Let $V$ be the matrix whose columns are $v^0,\ldots,v^n$ and let $B=AV$. Then our new linear program is:
\begin{eqnarray}
\label{LP2}
& \min \sigma & \\
\nonumber
B \lambda & \geq & e_0-\varepsilon \bold{1}\\
\nonumber
-B \lambda & \geq & - e_0 -\varepsilon \bold{1} \\
\nonumber
V \lambda + \sigma \bold{1} & \geq & \bold{0} \\
\nonumber
-V\lambda + \sigma \bold{1} & \geq & \bold{0}\\
\nonumber
\sigma & \geq & 0
\end{eqnarray}
The final constraint is superfluous, but is helpful in formulating the dual linear program.
The coefficient matrix $V$ is a Vandermonde matrix (i.e. each column has the form $v^{\alpha}$ for some $\alpha$)
with the entry in row $i$ and column $j$ given by
$V_i^j=(\alpha_j)^i$ (with $V_0^0=1$). In fact, it turns out that $B$ is also a Vandermonde matrix whose $j^{th}$ column
is exactly $v^{1+\mu(\alpha_j-1)}$:
$$B_i^j = \sum_{k \leq i} {i \choose k} \mu^k (1-\mu)^{i-k} (\alpha_j)^k= (1-\mu)^i \sum_{k \leq i} {i \choose k} \Big (\frac{\alpha_j \mu}{1 - \mu} \Big )^k = (1-\mu)^i (1 + \frac{\alpha_j \mu}{1 - \mu})^i = (1 + \mu(\alpha_j-1))^i$$
Indeed, this simple form for $B$ is precisely the reason we chose this basis transformation.
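The identity $B = AV$ is also easy to verify numerically for an arbitrary choice of nodes $\alpha_j$ (the values below are illustrative):
\begin{verbatim}
import numpy as np
from math import comb

n, mu = 8, 0.3
alphas = np.linspace(-1, 1, n + 1)
A = np.array([[comb(i, j) * mu**j * (1 - mu)**(i - j) if j <= i
               else 0.0 for j in range(n + 1)] for i in range(n + 1)])
V = np.array([[a**i for a in alphas] for i in range(n + 1)])
B = np.array([[(1 + mu * (a - 1))**i for a in alphas] for i in range(n + 1)])
print(np.allclose(A @ V, B))   # True
\end{verbatim}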
The new linear program has $n+2$ variables and $4(n+1)$ constraints (consisting of four groups of $n+1$ constraints each)
so the dual will have $4(n+1)$ variables consisting of four vectors, denoted by $p^+,p^-,q^-,q^+$ each
indexed by $\{0,\ldots,n\}$. The resulting dual program is:
\begin{eqnarray}
\label{DLP}
\max p^+_0-p^-_0-\varepsilon \sum_i (p^+_i+p^-_i) && \\
\nonumber
(p^+ - p^-)^T B + (q^- -q^+) V & = & \bold{0}\\
\nonumber
\sum_i (q^+_i +q^-_i) & \leq & 1\\
\nonumber
p^+,p^-,q^+,q^- &\geq& \bold{0}
\end{eqnarray}
We can now make some simplifying observations.
If for any $i$, both $p^+_i$ and $p^-_i$ are positive, we can decrease them
each by their minimum without violating the constraints, and only increasing the objective function.
So we may assume that at least one of them is zero.
Similarly for $q^+_i$ and $q^-_i$. Then we can define $p=p^+ - p^-$ and $q=q^+-q^-$ to simplify the dual linear program
to:
\begin{eqnarray}
\label{DLP1}
\max p_0-\varepsilon \sum_i |p_i| && \\
\nonumber
p^T B & = & q^T V \\
\nonumber
\sum_i |q_i| & \leq & 1.
\end{eqnarray}
Define the polynomials $p(x)=\sum_{j=0}^n p_j x^j$ and $q(x)=\sum_{j=0}^n q_j x^j$.
The equality constraint gives $n+1$ equations indexed from 0 to $n$
where the $j^{th}$ constraint is that $p(1+\mu(\alpha_j-1))=q(\alpha_j)$. Since $p(1+\mu(x-1))$ and $q(x)$ agree
on $n+1$ values they must be the same polynomial. This leads to the following formulation:
\begin{quote}
The optimal sensitivity $\sigma_n(\mu,\varepsilon)$ is equal to the maximum of $p(0)-\varepsilon \|p\|_1$ over all
degree $n$ polynomials for which the translated polynomial
$q(x)=p(1+\mu(x-1))$
satisfies $\|q\|_1 \leq 1$.
\end{quote}
Recall that $\| p \|_1$ denotes the sum of the absolute values of the coefficients.
So now our goal is to prove an upper bound on the maximum of this linear program.
We can think of this as trying to show a type of {\em uncertainty principle} for the coefficients of a polynomial when applying an affine change of variables. There is a considerable amount of literature on establishing uncertainty principles for functions and their Fourier transforms (see e.g. \cite{DS}), but there seems to be no literature concerning other affine changes of variables (i.e. $p(1+\mu(x-1)) = q(x)$). In fact, here we will establish such an uncertainty principle via the Hadamard three circle theorem in complex analysis.
\section{Sup Relaxations}
The quantities $\|p\|_1$ and $\|q\|_1$ are unwieldy; e.g., given just the graph of the polynomial, what can we say about its coefficients? Here we will relax the constraints on these norms by instead considering the maximum of the polynomial over certain domains.
\begin{definition}
({\em Restricted $\sup$-norm}.)
For a subset $W$ of $\mathbb R$, let $\|q\|_{\sup}^{W} \stackrel{\small \mathrm{def}}{=} \sup_{x \in W} |q(x)|$.
\end{definition}
Recall that we used the notation $\|q\|_1$ to denote the sum of the absolute values of the coefficients of $q$. Then it is easy to see that:
\begin{claim}
$\|q\|_1 \geq \|q\|_{\sup}^{[-1,1]}$
\end{claim}
\begin{proof}
For each $x \in [-1,1]$,
$|q(x)| \leq \sum_{i = 0}^n |q_i| |x|^i \leq \sum_{i = 0}^n |q_i| = \|q\|_1$.
\end{proof}
In the polynomial formulation of $\sigma_n(\mu,\varepsilon)$ replacing the objective
function by $p(0)-\varepsilon \|p\|_{\sup}^{[-1,1]}$ only increases the value of the objective function.
Similarly, replacing the constraint $\|q\|_1 \leq 1$
by $\|q\|_{\sup}^{[-1,1]} \leq 1$ can only increase the objective function.
Since $q(x)=p(1+\mu(x-1))$ and the transformation $x \longrightarrow 1+\mu(x-1)$ maps
the interval $[-1,1]$ to the interval $[1-2\mu,1]$ we have $\|q\|_{\sup}^{[-1,1]}=\|p\|_{\sup}^{[1-2\mu,1]}$.
This leads to a relaxation of the polynomial formulation:
\begin{quote}
The optimal sensitivity $\sigma_n(\mu,\varepsilon)$ is at most the maximum of $p(0)-\varepsilon \|p\|_{\sup}^{[-1,1]}$
over all degree $n$ polynomials $p(x)$ for which $\|p\|_{\sup}^{[1-2\mu,1]} \leq 1$.
\end{quote}
For this relaxation to be useful to us we will need to prove that the new objective
function cannot be too large if $p$ satisfies
the constraints of the relaxation.
Informally, we will say that a polynomial is bad if it satisfies the constraints of the relaxation
and makes the objective function very large. If $\mu \geq 1/2$ then $0 \in [1-2\mu,1]$
and so $|p(0)| \leq 1$ and no polynomial can be bad. So assume $\mu < 1/2$.
A bad polynomial
must be bounded between $-1$ and $1$ on the interval $[1-2\mu,1]$, must be very large at the origin,
and must have $|p(x)|$ at most $|p(0)|/\varepsilon$ for all $x \in [-1,1]$.
Is there any polynomial that satisfies these conditions? Unfortunately for this approach, there is. The polynomial
$(1-x^2)^{n/2}$ has its maximum on $[-1,1]$ at the origin, where it is $1$, and its maximum on $[1-2\mu,1]$ is at $1-2\mu$
where its value is $C=(4\mu-4\mu^2)^{n/2}$, which is exponentially small for $\mu < 1/2$. Thus the polynomial $p(x)=\frac{1}{C}(1-x^2)^{n/2}$
satisfies the constraints and has objective function value that is exponentially large
in $n$.
To salvage this approach we move to complex numbers. The definition of the restricted $\sup$-norm
extends directly to subsets $W$ of the complex numbers.
For $\beta \in \mathbb C$ and positive real number $\gamma$ let $D_{\gamma}(\beta)$ be the closed disk in the complex plane
of radius $\gamma$ centered at $\beta$. Let $C_{\gamma}(\beta)$ be the circle bounding
$D_{\gamma}(\beta)$. If $\beta=0$ we write simply $D_{\gamma}$ and $C_{\gamma}$.
As with the real interval, we have:
\begin{claim}
$\|q\|_1 \geq \|q\|_{\sup}^{D_1}$.
\end{claim}
Observe that the image of the disk $D_1$ under the transformation $x \longrightarrow (1+\mu(x-1))$
is $D_{\mu}(1-\mu)$. Just as before we obtain the following relaxation:
\begin{quote}
The optimal sensitivity $\sigma_n(\mu,\varepsilon)$ is at most the maximum of $p(0)-\varepsilon \|p\|_{\sup}^{D_1}$
over all degree $n$ polynomials $p(x)$ such that $\|p\|_{\sup}^{D_{\mu}(1-\mu)} \leq 1$.
\end{quote}
As we will see, there are no bad polynomials for this relaxation.
In hindsight, it is not surprising that the values of the polynomial $p(x)$ over the whole complex disk reveal
much more information than just the values on $[-1,1]$; in particular, we can recover the values of a polynomial by integrating around the circle, so a polynomial cannot stay too small on the boundary of the disk if it is large at the origin.
In particular the polynomial $(1-x^2)^{n/2}$ that was bad for the
$\| \cdot \|^{[-1, 1]}_{\sup}$ relaxation is no longer bad because its maximum modulus on $D_1$ is attained at $x=\imath$ and is exponentially large.
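Numerically (with arbitrary illustrative parameters):
\begin{verbatim}
import numpy as np

n, mu = 40, 0.2
p = lambda x: (1 - x**2) ** (n // 2)
print(abs(p(1 - 2 * mu)))  # max on [1-2mu,1]: (4mu-4mu^2)^(n/2), tiny
print(abs(p(1j)))          # value at x = i on C_1: 2^(n/2), huge
\end{verbatim}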
In the next section we will prove:
\begin{lemma}
\label{holo}
Let $h$ be a holomorphic function and suppose $D=D_{\rho}(\beta)$ is a disk contained in $D_1$
such that $\|h\|_{\sup}^D \leq 1$. Then there is a point $x \in C_1$
such that $|h(x)| \geq |h(0)|^{1+d}$, where $d=(1-|\beta|)/\log(2/\rho)$.
\end{lemma}
From this uncertainty principle, we can now prove Theorem \ref{sigma bound}.
\vspace{0.5pc}
\noindent
{\bf Proof of Theorem \ref{sigma bound}. }
We use the bound from the $D_1$-sup relaxation.
Let $p$ be a polynomial
satisfying the constraints and let $s=|p(0)|$. Then $p$ satisfies the conditions of Lemma \ref{holo} with $\beta=1-\mu$ and $\rho=\mu$.
Therefore $\|p\|_{\sup}^{D_1} \geq |s|^{d+1}$, where $d$ is as in the lemma.
From this we conclude that the
objective function in the $\| \cdot \|^{D_1}_{\sup}$ relaxation is at most $s-\varepsilon s^{d+1}$, which is maximized when
$s=(1/((d+1)\varepsilon))^{1/d}$, and this quantity is itself an upper bound on the objective function.
We can therefore conclude that $\sigma_n(\mu,\varepsilon) \leq (1/\varepsilon)^{1/d}$ where the exponent
is equal to $\frac{1}{\mu}\log\frac{2}{\mu}$.
\section{Proof of Lemma \ref{holo}}
Here we will prove the uncertainty principle stated in the previous section using tools from complex analysis. Perhaps one of the most useful theorems in understanding the rate of growth of holomorphic functions in the complex plane is Hadamard's Three Circle Theorem (and the related Three Lines Theorem):
\begin{theorem} \cite{H}
\label{Hadamard}
Let $0 < a \leq b \leq c$ and let $g$ be a holomorphic function on $D_{c}$.
Then
$$\log \frac{c}{a}\log \|g\|_{\sup}^{C_b} \leq \log \frac{c}{b} \log \|g\|_{\sup}^{C_a}+ \log \frac{b}{a} \log \|g\|_{\sup}^{C_c}.$$
\end{theorem}
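As a sanity check, the inequality can be verified numerically for any sample holomorphic function by estimating the sup-norms on finely sampled circles (the function and radii below are arbitrary):
\begin{verbatim}
import numpy as np

g = lambda z: (z - 0.5) ** 3 * np.exp(z)  # holomorphic on D_1
theta = np.linspace(0, 2 * np.pi, 4000)
sup = lambda r: np.abs(g(r * np.exp(1j * theta))).max()

a, b, c = 0.1, 0.4, 1.0
lhs = np.log(c / a) * np.log(sup(b))
rhs = np.log(c / b) * np.log(sup(a)) + np.log(b / a) * np.log(sup(c))
print(lhs <= rhs)   # True
\end{verbatim}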
In Lemma \ref{holo} we do {\em not} have three concentric circles but we can apply a M\"obius transformation to
put the problem in the right form. Let $\beta$ be the center of the disk $D$ in the lemma
and consider the transformation $\phi(x)=\phi_{\beta}(x)=\frac{\beta+x}{1+ \beta^* x}$, where $(\cdot)^*$ denotes
complex conjugate. The following fact is well known and easy to check:
\begin{fact}
\label{mobius}
For $|\beta| < 1$, $\phi_{\beta}$ is a holomorphic function which maps $D_1$ to itself.
\end{fact}
We will use the following three properties of $\phi$:
\begin{enumerate}
\item $\phi(C_1)=C_1$.
\item $0 \in \phi(C_{|\beta|})$.
\item $\phi(C_{\rho/2}) \subseteq D= D_{\rho}(\beta)$.
\end{enumerate}
The first claim is a standard fact about M\"obius transformations. The second follows from $\phi(-\beta)=0$.
For the third,
$$|\phi(x)-\beta| = \Big |\frac{\beta+x}{1+\beta^* x}-\beta \Big | = \Big |\frac{x(1-|\beta|^2)}{1+\beta^* x} \Big | \leq |x|\frac{1-|\beta|^2}{1-|\beta|}=|x|(1+|\beta|)\leq 2|x|.
$$
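These three properties can also be checked numerically for sample parameter values (a real $\beta$ is used below purely for illustration):
\begin{verbatim}
import numpy as np

beta, rho = 0.7, 0.2
phi = lambda x: (beta + x) / (1 + np.conj(beta) * x)
theta = np.linspace(0, 2 * np.pi, 2000)

print(np.allclose(np.abs(phi(np.exp(1j * theta))), 1))  # phi(C_1) = C_1
print(abs(phi(-beta)))                                  # 0 in phi(C_|beta|)
print(np.abs(phi(0.5 * rho * np.exp(1j * theta)) - beta).max() <= rho)
\end{verbatim}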
Now consider the function $g$ defined on $D_1$ by $g(x)=h(\phi(x))$. From the three previous observations we have:
\begin{enumerate}
\item $g(C_1)=h(C_1)$ and so $\|g\|_{\sup}^{C_1}=\|h\|_{\sup}^{C_1}$.
\item $h(0) \in g(C_{|\beta|})$ so $\|g\|_{\sup}^{C_{|\beta|}} \geq |h(0)|$.
\item $g(C_{\rho/2}) \subseteq h(D)$ so $\|g\|_{\sup}^{C_{\rho/2}} \leq \|h\|_{\sup}^{D} \leq 1$, by
the hypothesis of the lemma.
\end{enumerate}
Applying Theorem \ref{Hadamard} with
$a=\rho/2$, $b=|\beta|$ and $c=1$ we get:
$$\log \frac{2}{\rho} \log \|g\|_{\sup}^{C_{|\beta|}} \leq \log \frac{1}{|\beta|} \log \|g\|_{\sup}^{C_{\rho/2}}+
\log \frac{2|\beta|}{\rho} \log \|g\|_{\sup}^{C_1},$$
which when combined with the three previous bounds gives:
$$\log \frac{2}{\rho} \log |h(0)| \leq
\log \frac{2|\beta|}{\rho} \log \|h\|_{\sup}^{C_1},$$
from which we conclude:
$$
\|h\|_{\sup}^{C_1} \geq |h(0)|^t,
$$
where $t= \log(\frac{2}{\rho})/\log \frac{2|\beta|}{\rho} = 1+ \log(1/|\beta|)/\log(2|\beta|/\rho) \geq 1+ (1-|\beta|)/\log(2/\rho) = 1+d$,
where $d$ is the parameter defined in the lemma.
\section{Open Question}\label{sec:open}
Is there a polynomial time algorithm for noisy population recovery -- i.e. when attributes are not deleted, but are flipped (with probability $\eta < 1/2$)? It seems that new ideas are needed to handle this case, in part because if we try the same method of writing a linear program over a basis of estimators, then instead of two polynomials related by an affine change of variables, we get two polynomials $p(x)$ and $q(x)$ for which $p(x) = \ell(x)^n q(\phi(x))$ where $\ell(x)$ is a linear function and $\phi(x)$ is a M\"obius transformation. However, this damping term $\ell(x)^n$ makes it much easier for $q(x)$ to be bounded in the complex disk.
Bulk SrRuO$_3$ (SRO) is a \textit{4d} itinerant oxide metal with a ferromagnetic Curie temperature ($T_c$) of 160 K.\cite{cao1997magnetic, mazin1997electronic, koster2012structure, klein1996transport} The system has been investigated widely as an oxide electrode in all-oxide based devices and, most recently, as a possible host of magnetic skyrmions manifesting a topological Hall effect (THE).\cite{gu2019interfacial,kimbell2020two, matsuno2016interface, ziese2020topological} Research efforts to characterize the atomic-scale structure of thin SRO films are motivated by observed thickness-dependent metal-insulator and magnetic transitions. \cite{xia2009critical,chang2009fundamental, ishigami2015thickness} A thickness-dependent metal-insulator transition (MIT) occurs in epitaxial SRO thin films fabricated by molecular beam epitaxy and pulsed laser deposition below a critical thickness of 2-4 unit cells (uc). \cite{xia2009critical,ishigami2015thickness} The suppression of ferromagnetism in thin SRO layers is attributed to a decrease in the density of states at the Fermi level due to quantum confinement and the possible existence of an antiferromagnetic interfacial layer for ultrathin SRO films.\cite{chang2009fundamental} An important property of the SRO films grown on STO(100) is their exceptionally strong perpendicular magnetic anisotropy, preserved in films as thin as a few unit cells. \cite{Schultz2009AHESROfilms, Ziese2010SROanisotropy, Wakabayashi2021SROanisotropy} This magnetic anisotropy is directly related to the intriguing observation of humplike features in Hall effect resistance loops of thin SRO films and heterostructures containing SRO layers. The humps of the Hall loops have been interpreted as a signature of the THE originating from the formation of skyrmions, as a result of a surface/interface-induced or defect-induced Dzyaloshinskii-Moriya interaction. This is, however, still under debate, as other mechanisms apart from a THE contribution can explain the occurrence of humplike features in the Hall loops as well.\cite{matsuno2016interface, Ziese2018asymmetricSRO, sohn2021stable, huang2020detection, lu2021defect, wysocki2020validity} Multiple factors are known to contribute to the anomalous Hall effect in SRO, including stoichiometry, structure, temperature and layer thickness. \cite{Schultz2009AHESROfilms, Ziese2018asymmetricSRO, wysocki2020validity} Hence, a detailed understanding of the atomic-scale properties of SRO is required to understand its complex electronic and magnetic properties.
Due to the strong coupling between the lattice structure and the electronic and magnetic properties of SRO, the atomic scale structure of SRO films has been investigated theoretically and experimentally as a function of the substrate-induced strain and the coupling of oxygen octahedra across heterointerfaces, to uncover the origins of the thickness-dependent properties of the system.\cite{ziese2019unconventional, xia2009critical, lu2013control, kartik2021srotheory} For example, signatures of the THE have been attributed to a substrate-induced local orthorhombic-to-tetragonal phase transition which may stabilize a chiral spin structure \cite{gu2019interfacial} and/or magnetic inhomogeneity arising, for example, from variations in layer thickness.\cite{kimbell2020two} Thus, understanding the atomic-scale structural and interfacial interactions is crucial for decoupling intrinsic and extrinsic origins of the unique physical properties of SRO thin films.
\begin{figure*}[ht]
\includegraphics[width=1\textwidth]{Figure_1_Model.png}
\caption{Schematic of structural distortions in orthorhombic SrRuO$_3$ thin films along pseudocubic (c) directions defined by the (001)-oriented SrTiO$_3$ substrate. $\alpha_{rot}$, $\beta_{rot}$ and $\gamma_{rot}$ represent the rotation angles about the [100]$_{c}$, [010]$_{c}$ and [001]$_{c}$ axes, respectively. The orthorhombic structure is also characterized by anti-parallel displacements of the Sr planes along [001]$_{c}$ and [010]$_{c}$ directions. }
\label{fig:schematic}
\end{figure*}
In this work, we report on the layer-resolved atomic scale structure of thin SRO films as a function of temperature (10 - 300 K) and film thickness (1.2-17 nm), based on high resolution synchrotron X-ray crystal truncation rod measurements (CTR) and reciprocal space maps (RSMs) and scanning transmission electron microscopy (STEM). We observe a suppression of orthorhombic/monoclinic distortions within 2-3 interfacial SRO unit cells for all film thicknesses for films grown on (001)-oriented SrTiO$_3$ substrates. The magnitude of the oxygen octahedral distortions increases on cooling between 300 K and the ferromagnetic transition temperature (for bulk, T$_C$= 160 K) and remains constant below T$_C$ due to the Invar effect, a prominent example demonstrating the strong coupling between structure and magnetic order.\cite{kiyama1996invar}
Bulk SRO has a GdFeO$_3$-type orthorhombic crystal structure at 300 K with a$_o$=5.567 \AA{}, b$_o$= 5.530 \AA{}, c$_o$=7.845 \AA{} and $\beta=90^o$.\cite{jones1989structure} The oxygen octahedra are rotated out-of-phase along the pseudo-cubic (\textit{c}) [100]$_{c}$ and [010]$_{c}$ axes, and in-phase along the [001]$_{c}$ axis (the \textit{o} and \textit{c} subscripts refer to the orthorhombic and cubic coordinates, respectively). In addition to the oxygen octahedral rotations, anti-parallel Sr cation displacements are present. Figure \ref{fig:schematic} shows the expected distortions relative to the cubic STO lattice vectors. In Glazer notation, the oxygen octahedral rotational pattern of orthorhombic SRO is represented as $a^-a^-c^+$ where the '-' superscript represents out-of-phase rotations and the '+' superscript represents in-phase rotations.\cite{glazer1972classification, woodward2005electron} At high temperatures (above 570 K), bulk SRO transitions into a tetragonal phase with in-phase rotations about the [001]$_{c}$ axis and no tilts about the [100]$_{c}$ and [010]$_{c}$ axes. The tetragonal phase is represented in Glazer notation as $a^0a^0c^+$.
Recent reports on the structure of SRO epitaxial thin films indicate that the crystal structure of SRO can be controlled by strain, the thickness of the SRO layers, the oxygen stoichiometry and the octahedral rotational pattern of the substrate.\cite{vailionis2008room, gao2016interfacial, lu2013control, aso2013atomic, chang2011thickness} SRO films compressively strained to cubic (001) SrTiO$_3$ (STO) (a=b=c= 3.905 \AA{}) are found to have a distorted orthorhombic (monoclinic) structure with the orthorhombic c-axis, [001]$_o$, parallel to the in-plane cubic [010]$_{c}$ axis of the substrate, and the orthorhombic [110]$_o$ axis parallel to the out-of-plane [001]$_{c}$ axis.\cite{vailionis2011misfit, gao2016interfacial} The preferential orientation of the orthorhombic axis is dictated by the step terraces.\cite{maria2000origin, rao1997growth, gan1998direct}
\section{Experimental Details}
SrRuO$_3$ samples used in our experiment were grown by
pulsed laser deposition (PLD). The samples were grown at a substrate temperature of 650-700 $^o$C and oxygen pressure of 100 mTorr, with a laser fluence of 1.5-2.0 J/cm$^2$ and a pulse repetition rate of 5 Hz. Representative atomic force microscope (AFM) images of the surface morphology of the 8, 16 and 44 unit cell (uc) SRO films are shown in supplemental Figure S1. Clear one unit cell high step terraces are resolved in the AFM images, inherent to the vicinal surface morphology of the Nb-doped (0.5 wt.$\%$ Nb) STO(100) substrates. The substrates were annealed at 925$^o$C for 1 hour in air, after they had been etched for 2.5 min in buffered HF solution.
Crystal truncation rods and half-order superstructure reflections were measured between 10 K and 300 K for 3, 8, 16 and 44 uc thick SrRuO$_3$ films. The diffraction measurements were performed at the 33ID beamline at the Advanced Photon Source using a photon energy of 16 keV ($\lambda=0.7749 \AA{}$). The X-ray beam was focused to a spot size of 50 $\mu$m - 100 $\mu$m. The diffraction intensities were measured with a 2D Pilatus 100 K detector.\cite{schleputz2005improved} Samples were mounted in a Be-dome chamber with a base pressure of $<1 \times 10^{-5}$ Torr on a cryo-displex and the temperature was varied from 300 K to 10 K. The CTRs and half-order reflections were fit using the GenX genetic-based X-ray fitting algorithm to determine the structural properties of the films.\cite{bjorck2007genx, koohfar2017confinement}
The structural properties of the films were also characterized with aberration corrected STEM. Samples for STEM analysis were prepared by conventional wedge polishing followed by Ar ion milling. Imaging and spectroscopy were performed on an aberration corrected FEI Titan G2 60-300 kV STEM operated at 200 kV. By simultaneously acquiring annular dark field (ADF) and integrated differential phase contrast (iDPC) images, the structure of both cationic and oxygen sublattices can be analyzed. The revolving STEM (RevSTEM) imaging technique\cite{sang2014revolving} was used to improve the signal-to-noise ratio to allow for the resolution of the shape of the atom columns revealing structural features beyond atom column positions. The film structure was determined by fitting each atom column to a two-dimensional Gaussian function allowing for a determination of the column amplitude (point image intensity), \textit{x} (in-plane) - and \textit{y} (out-of-plane)-positions, as well as the \textit{x} and \textit{y}-components of peak widths, $\sigma_x$ and $\sigma_y$. The local distortions and tilting of oxygen octahedra are characterized by the oxygen column positions and ellipticity, defined as $ E =\frac{\sigma_x}{\sigma_y}$. With the ellipticity, we characterize the out-of-phase octahedral tilting along a column.
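As an illustration of this per-column analysis, the following sketch fits a single synthetic atom column to a two-dimensional Gaussian and reports the ellipticity; it is a simplified stand-in for the actual analysis pipeline, and all function and variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2)
                       + (y - y0)**2 / (2 * sy**2)))
    return (g + offset).ravel()

def fit_column(patch):
    # patch: 2D intensity array cropped around one atom column.
    ny, nx = patch.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = [patch.max(), nx / 2, ny / 2, 2.0, 2.0, patch.min()]
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    amp, x0, y0, sx, sy, off = popt
    return x0, y0, abs(sx) / abs(sy)   # column position and E

# Smoke test on a synthetic elongated column (E should be near 1.5).
x, y = np.meshgrid(np.arange(21), np.arange(21))
patch = gauss2d((x, y), 1.0, 10, 10, 3.0, 2.0, 0.05).reshape(21, 21)
print(fit_column(patch))
\end{verbatim}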
\section{Results and Discussion}
Figure \ref{fig:specular} shows the measured specular (00L) CTRs for the 8 and 16 uc samples. The presence of finite thickness oscillations is indicative of flat surfaces and a chemically abrupt SRO/STO interface. The out-of-plane lattice parameters determined from fits to the (00L) CTRs for the 8, 16 and 44 uc SRO films are 3.957, 3.955 and 3.948 \AA{}, respectively. The increased lattice spacing compared to the bulk pseudocubic value for SRO (a$_{c, bulk}=3.93 \AA{}$) is due to the biaxial compressive in-plane strain imposed by the STO substrate. The films are coherently strained to the STO substrate with an in-plane pseudocubic lattice constant of 3.905 \AA{} as evidenced by RSMs measured by X-ray diffraction (Figure \ref{fig:Structure_44uc}(a)).
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{Figure_2_specular.png}
\caption{Specular 00L scans for 8 and 16 unit cell thick SRO films on STO. The plots are offset vertically for clarity.}
\label{fig:specular}
\end{figure}
\begin{figure*}[ht]
\includegraphics[width=1\textwidth]{Figure_3_S44.png}
\caption{(a) Reciprocal space map around the STO [204]$_{c}$ Bragg peak showing the in-equivalent orthorhombic SRO Bragg peaks for a 44 uc SRO film (1 STO reciprocal lattice unit = 1/0.3905 nm$^{-1}$). (b) Measured (blue circle) line profiles along the STO L direction in (a) and fits (solid red lines). (c) Measured half-order rods. (d) Annular dark field images of the 44 uc SRO film along the [100]$_c$ projections with (e) corresponding iDPC images. The inset shows a magnification of the iDPC image with the XRD structure overlaid. (f) Measured in-phase tilting angle map along the [100] zone axis with respective layer-resolved (g) in-phase tilt angles and (h) ellipticity profiles.}
\label{fig:Structure_44uc}
\end{figure*}
The reduced symmetry of the thickest (44 uc) SRO film relative to the cubic \textit{Pm$\bar{3}$m} STO substrate is observed in RSMs measured around the STO (204)$_c$, (024)$_c$, (-204)$_c$ and (0-24)$_c$ Bragg peaks. The measured RSMs are compared in Figure \ref{fig:Structure_44uc}(a) where 1 STO reciprocal lattice unit (r.l.u.) = 1/0.3905 nm$^{-1}$. The different \textit{L} values for the SRO (6,2,0)$_o$ and (2,6,0)$_o$ Bragg peaks with fixed values of Q$_{in-plane}$ are due to the monoclinic distortion of the SRO lattice.\cite{vailionis2011misfit} The angle $\beta_{mon}$ between the $a_o^{SRO}$ and $b_o^{SRO}$ axes is determined from the relation $\beta_{mon}=90^o-\arctan(\frac{\Delta Q_z}{Q_{inplane}})$.\cite{gao2016interfacial} From the RSMs we determine the monoclinic SRO lattice parameters to be $\beta_{mon}=89.5^o$, $a_o^{SRO}=5.72 \AA{}$, $b_o^{SRO}=5.52 \AA{}$ and $c_o^{SRO}=7.81 \AA{}$. The lattice parameters are also confirmed from fits to the in-equivalent (04L)$_c$ CTRs and half order peaks in Figure \ref{fig:Structure_44uc}(b) and \ref{fig:Structure_44uc}(c).
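As a worked example of this relation (the peak-splitting value below is illustrative, chosen to reproduce the reported angle):
\begin{verbatim}
import numpy as np

Q_inplane = 2.0   # H = 2 (STO r.l.u.) for the {204}-type RSMs
dQz = 0.0175      # illustrative splitting between the SRO peaks, in r.l.u.
beta_mon = 90 - np.degrees(np.arctan(dQz / Q_inplane))
print(f"beta_mon = {beta_mon:.2f} deg")   # ~89.50
\end{verbatim}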
The RSMs for the 16 uc film around the STO $\{204\}_c$ Bragg peaks are shown in Figure \ref{fig:Strucutre16}(a). The film Bragg peaks for the 16 uc sample are broader along the \textit{L} direction than the 44 uc sample due to the effect of the finite thickness broadening and the increased fraction of the multiple rotational domains. Thus, care must be taken in relying solely on RSMs in verifying the orthorhombicity of the lattice.
The octahedral rotations about the out-of-plane [001]$_c$ axis and tilts about the in-plane [100]$_c$ and [010]$_c$ axes can be qualitatively predicted from the presence/absence of half-order reflections. A c$^+$ in-phase rotation along the [001]$_c$ axis results in reflections of type $\frac{1}{2}$(\textit{odd,odd,even})$_{c}$.\cite{woodward2005electron, May2010LaNiO3} The absence of peaks at integer L along the (3/2, 1/2, L)$_c$ rod for the 44 uc film in Figure \ref{fig:Structure_44uc}(c) indicates that the axis with in-phase rotations does not lie along the out-of-plane [001]$_c$ axis.
Bragg peaks are expected for an $a^+b^-c^-$ structure
for reflections of type $\frac{1}{2}$(\textit{even,odd,even})$_{c}$ and $\frac{1}{2}$(\textit{odd,odd,odd})$_{c}$. The (0 1/2 \textit{even})$_c$ peaks observed in Figure \ref{fig:Structure_44uc}(c) indicate an a$^+$ tilt. Based on the observed reflections for the 44 uc film, the tilt system of the SRO on STO is determined to be $a^+b^-c^-$. However, the presence of reflections of type $\frac{1}{2}$(\textit{even, odd, odd})$_{c}$ signifies the presence of an $a^-b^+c^-$ domain with the in-phase tilt along the [010]$_c$ direction.
\begin{figure*}[ht]
\includegraphics[width=1\textwidth]{Figure_4_S16.png}
\caption{(a) Reciprocal space maps for the 16 uc SRO/STO sample. (b) Measured (blue circle) half order rod and fits (solid red lines). (c) Intensity of half order peaks as a function of temperature. (d) Sr displacement and octahedral rotation angles as functions of temperature.}
\label{fig:Strucutre16}
\end{figure*}
Due to the four-fold symmetry of the cubic STO substrate, 4 rotational variants of the SRO unit cell are expected with the [001]$_o$ axis aligned along either the STO [100]$_c$, [-100]$_c$, [010]$_c$ or [0-10]$_c$ axis.\cite{gao2016interfacial, vailionis2011misfit, siwakoti2021abrupt} The half order peaks associated with the $a^-b^+c^-$ domain are relatively weak for the 44 uc sample and the fraction of the film with the $a^-b^+c^-$ orientation is less than 5\% of the total film volume. However, for the 16 uc sample, the ratio of the $a^+b^-c^-$ domain fraction to the $a^-b^+c^-$ domain is 3:1, suggesting that the film transitions into a single domain structure as the thickness increases or slight differences exist in the miscut angles and direction of the STO substrates. \cite{maria2000origin, wang2020magnetic, kar2021high} Indeed, topographic atomic force microscopy investigations (supplemental Fig. S1) showed that the STO(100) substrate used for the 16 uc sample has tilted terraces with respect to its edges, while the one used for the 44 uc sample has terraces running almost parallel to the substrate edges. It was shown that more tilted terraces promote the formation of crystallographic domains with different in-plane orientations of the long orthorhombic \textit{c} axis.\cite{wang2020magnetic}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Parameters & 8 uc & 16 uc & 44 uc \\
\hline
c (\AA{}) &3.957 & 3.955 & 3.948 \\ \hline
$\alpha_{rot}$ (in-phase), 300 K &0.9$^o$ & 3.4$^o$ & 5.31$^o$ \\
$\beta_{rot}$ (out-of-phase), 300 K &3.3$^o$ & 6.4 $^o$ & 5.72$^o$ \\
$\gamma_{rot}$ (out-of-phase), 300 K &9.9$^o$ & 9.7$^o$ & 5.21$^o$ \\
\hline
$\alpha_{rot}$ (in-phase),130 K &1.7$^o$ & 4.2$^o$ & - \\
$\beta_{rot}$ (out-of-phase),130 K &2.6$^o$ & 6.6$^o$ & - \\
$\gamma_{rot}$ (out-of-phase) 130 K &9.6$^o$ & 9.1$^o$ & - \\
\hline
Ru--O--Ru angle & 159$^o$ & 162$^o$ & 164$^o$ \\
Sr $\delta_z$ (\AA{}), 300 K & 0.009 & 0.040 & 0.062 \\
Sr $\delta_z$ (\AA{}), 130 K & 0.035 & 0.060 & - \\
\hline
\end{tabular}
\caption{Comparison of structural parameters of non-interfacial and non-surface layers of SRO films as a function of thickness and temperature. }
\label{tab:fitresults}
\end{table}
The local structure of the 44 uc film is investigated by STEM measurements shown in Figure \ref{fig:Structure_44uc}(d)-(h). The ADF image in Figure \ref{fig:Structure_44uc}(d) indicates a chemically abrupt interface between the SRO film and the STO substrate. The oxygen sublattice is imaged using the iDPC technique. Along the [100]$_c$ projection where the oxygen octahedra rotate in-phase, the tilt angles are determined directly from the atomic positions of the oxygen atoms in the iDPC image in Figure \ref{fig:Structure_44uc}(e). Figure \ref{fig:Structure_44uc}(f) shows a map of the unit-cell resolved oxygen octahedral tilts about the [100]$_c$ axis. A depth profile of the magnitude of the layer-averaged in-phase rotation angle is shown in Figure \ref{fig:Structure_44uc}(g). There are no in-phase rotations in the STO substrate as expected for bulk STO. The rotation angle increases gradually from 0 to 5$^\circ$ within the three interfacial SRO layers and remains uniform till the surface layer where a reduction occurs. The suppressed in-phase tilt at the film-substrate interface is consistent with fits to the X-ray diffraction data discussed below and previous reports.\cite{siwakoti2021abrupt}
A projection along the [010]$_c$ direction where the octahedra rotate out-of-phase results in an asymmetric smearing of the oxygen atomic columns. Thus, the projected out-of-phase octahedral tilting is determined by the ellipticity $E=\sigma_x/\sigma_y$ of the atomic columns, where $\sigma_x$ and $\sigma_y$ represent the horizontal and vertical peak widths measured from fitting the iDPC images. The layer-averaged ellipticity profile is shown in Figure \ref{fig:Structure_44uc}(h) along [010]$_c$ projection.
In the \textit{AO} planes, $E<1$ since the rotations lead to O displacements in the vertical direction. Conversely, $E>1$ in the \textit{$BO_2$} planes due to O displacements in the horizontal direction. $E=1$ corresponds to no rotations. The alternating values of $E$ about 1 in Figure \ref{fig:Structure_44uc}(h) correspond to successive \textit{SrO} and \textit{$BO_2$} (\textit{B}=Ti, Ru) planes. Some ellipticity is measured in the STO substrate, likely due to low order aberrations and/or rotations in the interfacial STO layers induced by the SRO adlayer.
The iDPC images provide a direct quantitative measure of the magnitude of the in-phase rotations about the [100]$_c$ axis. To quantitatively determine the layer-resolved rotations about the [010]$_c$ and [001]$_c$ axes and the orthorhombic Sr displacements, the CTRs and half-order rods measured by synchrotron X-ray diffraction are analyzed using the GenX genetic fitting algorithm.\cite{bjorck2007genx} To account for differences in the structure of the interfacial 3 unit cells, separate fit parameters (rotation angles, Sr displacements) are assigned to the interface layers and the non-interfacial layers.
Table \ref{tab:fitresults} summarizes the structural parameters determined for the SRO films as a function of thickness at 300 K and 130 K. The measured octahedral rotation angles for the non-interfacial layers of the 44 uc film about the orthogonal cubic axes are $\alpha_{rot}=5.3^o$, $\beta_{rot} = 5.7^o$, $\gamma_{rot} = 5.2^o$ in good agreement with bulk values of 6.19$^o$, 5.97$^o$ and 5.97$^o$ respectively. A suppression of tilts and Sr displacement is found at the interface due to the structural coupling to the cubic STO substrate which possesses no octahedral rotations at 300 K. This is consistent with the iDPC results in Figure \ref{fig:Structure_44uc}(g). No evidence for off-center oxygen displacements in the STO and SRO were found as has been recently reported in 4 uc SRO films. \cite{sohn2021stable}
For the 16 uc SRO film at 300 K, the in-phase rotation angle is suppressed to 3.4$^o$ while the rotation about the c-axis increases (relative to bulk) to 9.7$^o$. The distorted orthorhombic structure at 300 K is in contrast to the tetragonal a$^0$b$^0$c$^-$ structure reported for SRO films with thicknesses below 17 uc.\cite{chang2011thickness} Since oxygen vacancies can stabilize the tetragonal structure, the discrepancy is most likely related to the oxygen stoichiometry.
The evolution of the structure of the 16 uc film with temperature between 10 K and 300 K is determined from fits to the temperature-dependent half order rods. Representative measured half order rods and fits at 300 K for the 16 uc film are shown in Figure \ref{fig:Strucutre16}(b). The intensities of the (0 -0.5 L)$_c$ and (0.5 -0.5 1.5)$_c$ peaks measured as a function of temperature from 300 K to 10 K for the 16 uc film are shown in Figure \ref{fig:Strucutre16}(c). The intensities of the half order peaks associated with the in-phase octahedral tilts and Sr displacements increase as the temperature decreases to the FM-PM transition at $\approx$150 K. Below 150 K, the intensity of the (0 -0.5 L)$_c$ peaks remains constant, indicating a freezing of the octahedral distortions in the ferromagnetic phase. \cite{el2011modeling} A sharp increase in the intensity of the (0.5 -0.5 L)$_c$ peaks is observed below 105 K where the antiferrodistortive STO phase transition occurs. The STO phase transition involves rotation of the TiO$_6$ octahedra in the STO substrate leading to a doubling of the STO unit cell and the emergence of the STO substrate $\frac{1}{2}$(\textit{odd, odd,odd})$_c$ peaks. The temperature-dependent structural parameters for the 16 uc SRO film are summarized in Figure \ref{fig:Strucutre16}(d). The Sr displacements in the SRO layers increase from 0.04 \AA{} at 300 K to 0.06 \AA{} at 150 K and the in-phase rotation angle increases from 3.4$^o$ at 300 K to 4.2$^o$ at 150 K.
In contrast to the thicker films, the intensities of the half-order reflections for the 8 uc film associated with the in-phase tilts and Sr displacements are strongly suppressed at 300 K. The STEM analysis and the XRD results are described in Figures S3 and S4 of the supplemental materials. The rotation angles at 300 K away from the film-substrate interface are $\alpha_{rot}=0.9 ^o$, $\beta_{rot}=3.3 ^o$ and $\gamma_{rot}=9.9 ^o$. The suppression of the in-plane tilts and the enhancement of the rotation around the c-axis relative to bulk leads to an in-plane Ru-O-Ru bond angle of 159$^o$, which is slightly less than the value of 162$^\circ$ for bulk orthorhombic SRO. Fits to the half-order reflections at 130 K in the ferromagnetic phase show a slight increase in the in-phase tilt angle to 1.7$^o$, but still less than the expected value for bulk SRO.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{Figure_5_S3.png}
\caption{(a) Measured half-order rods as a function of temperature for a 3 uc SRO film. (b) Reciprocal space map along the H=1.5, K=0.5, L rod. (c) Model structure of rotations in the SRO film and interfacial STO layers.}
\label{fig:S3uc}
\end{figure}
Octahedral distortions are also observed for the thinnest 3 uc SRO sample. Half-order Bragg peaks are observed for $\frac{1}{2}$(\textit{odd}, \textit{odd}, \textit{odd}) reflections for the 3 uc SRO sample while peaks associated with in-phase rotations and Sr displacements are absent between 300 K and 10 K. An $a^0a^0c^-$ tetragonal film structure can be ruled out since (1/2 1/2 $odd/2$) diffraction peaks are present, as shown in Figure \ref{fig:S3uc}(a), which indicate additional tilts about the [001]$_c$ and [010]$_c$ axes. The peak intensities increase as the temperature decreases from 250 K to 150 K, indicating an increase in the magnitude of the octahedral tilts and rotations with decreasing temperature. The tilts and rotations observed in the thinnest sample are consistent with low energy electron diffraction results on single unit cell thick SRO films reported by Siwakoti \textit{et al}.\cite{siwakoti2021abrupt} The peak width along the L direction in the RSM image in Figure \ref{fig:S3uc}(b) is narrower than the width expected for a 3 uc thick film, suggesting that the octahedral rotations extend 2-3 layers into the STO substrate, as was observed for CaRuO$_3$ and (La,Sr)MnO$_3$ films on (001)-SrTiO$_3$ by STEM imaging.\cite{siwakoti2021abrupt, koohfar2017confinement} The induced rotations in the STO are found to be $<$ 1$^o$ in magnitude and out-of-phase around the [010] and [001] axes and thus will be difficult to observe directly in ABF images. However, the half-order reflections obtained by synchrotron X-ray diffraction measurements are sensitive to these distortions.
The thickness- and temperature-dependent structural results are summarized as follows: 1) a suppression of orthorhombic distortions occurs within the 2-3 SRO layers at the SRO/STO interface, consistent with theoretical predictions\cite{he2010control}; 2) reduced A-site displacements are correlated with a decrease in the magnitude of in-phase rotations and an increase in the oxygen octahedral rotations about the c-axis as the film thickness is reduced from 44 uc to 8 uc; 3) freezing of rotations and A-site displacements occurs below the paramagnetic-ferromagnetic transition temperature.
The suppression of octahedral rotations about the orthogonal in-plane axis for the SRO layers close to the SRO/STO interface is expected due to structural coupling with the cubic STO substrate and these results are consistent with previous reports on the SRO/STO interface structure. We find that the orthorhombic distortions are present, but weak in the 8 uc film, with the magnitude of the distortions increasing with decreasing temperature.
The freezing out of distortions in the 8, 16 and 44 uc films below T$_C$ arises from the coupling of the magnetic moment ordering to the lattice structure. This effect, also known as the Invar effect, is characterized by a freezing out of the unit cell volume and octahedral rotations below the ferromagnetic transition temperature, leading to an anomalously low coefficient of thermal expansion. The Invar effect was observed in bulk SRO, and our results confirm its existence in ultrathin strained SRO films.\cite{dabrowski2006magnetic, kiyama1996invar, bushmeleva2006evidence}
We find that, while the 8 uc sample is close to an orthorhombic-tetragonal transition at 300 K, the orthorhombic distortions become stronger as the temperature decreases, and the low-temperature structure, where humplike anomalies of the Hall resistance loops were observed and taken as a fingerprint of the THE \cite{gu2019interfacial}, is orthorhombic. Based on structural investigations performed at 300 K, Gu \textit{et al.} proposed that in their 8 uc SRO films, the interfacial RuO$_6$ octahedral tilting induced by a local orthorhombic-to-tetragonal structural phase transition across the SRO/STO interface resulted in breaking the inversion symmetry. This was further used to account for an interfacial DMI and to explain why a THE contribution may occur in the Hall resistance loops \cite{gu2019interfacial}. Our findings are therefore important because they provide structural information at the temperatures where the physical properties are measured. The temperature- and thickness-dependent structure of the SRO layers is highly relevant for understanding the magnetocrystalline anisotropy, the anomalous Hall effect, and the temperature dependence and sign of the anomalous Hall constant for the tetragonal and orthorhombic SRO phases. \cite{Kan2013tetragonalSRO, ziese2019unconventional, Bern2013AHE, kartik2021srotheory}
In addition to the structural distortions observed in this work, local variations in film thickness arising from the step-flow growth mechanism can lead to non-uniform coercivity and humplike features in magnetoresistance measurements which mimic the THE. \cite{malsch2020correlating, kimbell2020two} Magnetic force measurements indicate lateral variations in the coercivity of nominally 4 uc films capped with SrIrO$_3$/SrZrO$_3$ layers, which may be correlated with local variations in the film thickness. For the thicker films (8-44 uc), the surface occupation of the top 2-3 layers is slightly less than unity, indicating that the local film thickness averaged over the X-ray probe area (100 $\mu$m) fluctuates within this range. These variations become more critical as the film thickness is reduced below 8 unit cells.
While this analysis assumes that no structural or chemical changes occur upon exposure of the surface to ambient conditions, in some oxides such changes can be quite significant.\cite{Caspi2022SrVO3} SRO films exposed to ambient atmosphere and heated in vacuum may decompose.\cite{SHIN2005SROsurface-stability} However, the strong signal originating both from the half-order reflections and the RSMs (Figure \ref{fig:S3uc}) indicates that even at 3 uc, a significant portion of the film retains its high-quality crystalline structure. Overoxidation and other surface reactions could sometimes result in amorphous phases, which would not be observable in diffraction experiments. Nonetheless, the strong diffraction intensity (originating from inherently low-intensity reflections) indicates that this possibility does not play a significant role in the present case.
\section{Conclusion}
In summary, the atomic-scale structure of SRO films was investigated as a function of temperature and film thickness. The ultrathin films (3 and 8 uc thick) show the most pronounced structural changes with respect to the bulk structure. An increase of the magnitude of the oxygen octahedral rotations and Sr displacements is observed as the temperature is reduced from 300 K toward the ferromagnetic transition near 130 K. The freezing out of the octahedral distortions observed below 130 K in films as thin as 8 unit cells is associated with the Invar effect, which is known to occur in bulk ferromagnetic SRO as a consequence of the coupling between the crystal structure and the ferromagnetic ordering. The thickness-dependent structure of the SRO films may be related to kinetic and thermodynamic effects during nucleation of the orthorhombic structure of the SRO thin film on the cubic STO substrate. Further structural investigations should enable an in-depth understanding of the unique properties of SRO thin films, with the particular motivation of shedding light on the intriguing magnetotransport properties reported for ultrathin films.
\section*{Acknowledgments}
The authors acknowledge financial support by the US National Science Foundation under Grant No. NSF DMR-1751455. This work was performed in part at the Analytical Instrumentation Facility (AIF) at North Carolina State University, which is supported by the State of North Carolina and the National Science Foundation (award number ECCS-2025064). The AIF is a member of the North Carolina Research Triangle Nanotechnology Network (RTNN), a site in the National Nanotechnology Coordinated Infrastructure (NNCI). This material is based upon work supported by the National Science Foundation under Grant No. DGE-1633587. We thank Ren\'{e} Borowski and Silvia de Waal for etching the STO substrates and Regina Dittmann and Felix Gunkel for access to the PLD system at FZ Jülich. I.L.-V. acknowledges the financial support from the German Research Foundation (DFG) for project no. 403504808 within SPP2137 and for project no. 277146847 within SFB1238 (project A01). L. K. thanks German Israeli Foundation for financial support (GIF Grant no. I-1510-303.10/2019). Use of the Advanced Photon Source was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
\section{Introduction}
\subsection{History and motivation} Let $\Sigma$ be a closed oriented surface of genus at least 2. The Teichm\"uller space $\mathcal{T}(\Sigma)$ is the space of discrete faithful representations of $\pi_1(\Sigma)$ into the Lie group $\mathrm{PSL}_2(\mathbb{R})$ modulo conjugation. It forms a connected component of the space of representations
\[
\mathcal{X}(\pi_1(\Sigma), \mathrm{PSL}_2(\mathbb{R}))= \mathcal{X}_2(\pi_1(\Sigma)):=\operatorname{Hom}(\pi_1(\Sigma), \mathrm{PSL}_2(\mathbb{R})) / \mathrm{PSL}_2(\mathbb{R}).
\]
By replacing $2$ in $\mathrm{PSL}_2(\mathbb{R})$ with a general natural number $n$, we obtain the space $\mathcal{X}_n(\pi_1(\Sigma)):=\mathcal{X}(\pi_1(\Sigma), \mathrm{PSL}_n(\mathbb{R}))$. Observe that $\mathcal{T}(\Sigma)$ can be naturally embedded into $\mathcal{X}_n(\pi_1(\Sigma))$ via Fuchsian representations, that is, by definition, representations of the form $\iota_n \circ \rho$ where $\rho\in \mathcal{T}(\Sigma)$ and $\iota_n:\mathrm{PSL}_2(\mathbb{R}) \to \mathrm{PSL}_n(\mathbb{R})$ is the unique irreducible representation of $\mathrm{PSL}_2(\mathbb{R})$ into $\mathrm{PSL}_n(\mathbb{R})$. One may then expect that a connected component containing a Fuchsian representation resembles the Teichm\"uller space. The first answer to this question was given by Hitchin \cite{hitchin1992} in 1992. Indeed he shows that any component containing a Fuchsian representation is diffeomorphic to a cell of dimension $(n^2-1)(2g-2)$. We call a connected component of $\mathcal{X}_n(\pi_1(\Sigma))$ containing a Fuchsian representation the $\mathrm{PSL}_n(\mathbb{R})$-Hitchin component.
Besides Hitchin's result, it is known that the Hitchin component $\operatorname{Hit}_n(\Sigma)$ enjoys many properties that the classical Teichm\"uller space has. Labourie \cite{labourie2006}, for instance, gives a dynamical characterization of $\operatorname{Hit}_n(\Sigma)$ and shows that each Hitchin representation is discrete and faithful.
There are many known global parametrizations of the Teichm\"uller space. Because $\operatorname{Hit}_n(\Sigma)$ is also a cell, we may expect the existence of a global coordinate system for $\operatorname{Hit}_n(\Sigma)$. For the Hitchin component $\operatorname{Hit}_3(\Sigma)$, Goldman \cite{goldman1990} finds such a global coordinate system. However Goldman's argument cannot be applied to the general $\operatorname{Hit}_n(\Sigma)$ case because the construction of the Goldman coordinates relies essentially on the fact that $\operatorname{Hit}_3(\Sigma)$ represents the deformation space of convex projective structures on the surface $\Sigma$. See Choi-Goldman \cite{choi1993}. A uniform parametrization scheme for general $\operatorname{Hit}_n(\Sigma)$ is obtained by Bonahon-Dreyer \cite{bonahon2014}. Their method is based on Fock-Goncharov's theory \cite{fock2006}, or Thurston's construction of shearing coordinates. On $\operatorname{Hit}_3(\Sigma)$, the Goldman coordinates and the Bonahon-Dreyer coordinates are related, and the explicit coordinate transformation is given by Bonahon and I. Kim \cite{bonahon2018}.
It is well-known that the Teichm\"uller space $\mathcal{T}(\Sigma)$ carries a natural symplectic structure called the Weil-Petersson form $\omega_{WP}$. As a symplectic manifold, the Teichm\"uller space $(\mathcal{T}(\Sigma), \omega_{WP})$ has been studied by many mathematicians. One remarkable result, due to Wolpert \cite{wolpert1985}, states that the Fenchel-Nielsen coordinates are Darboux coordinates, namely,
\[
\omega_{WP}= \sum_{i=1} ^{3g-3} \mathrm{d} \ell_i \wedge \mathrm{d}\theta_i.
\]
$\operatorname{Hit}_n(\Sigma)$ also carries a symplectic form, as the classical Teichm\"uller space does. Indeed it is the work of Goldman \cite{goldman1984} that extends the Weil-Petersson symplectic form on $\mathcal{T}(\Sigma)$ to the Atiyah-Bott-Goldman symplectic form $\omega_G$ on $\operatorname{Hit}_n(\Sigma)$. Now, it is natural to ask whether there is any global Darboux coordinate system with respect to $\omega_G$, analogous to the Fenchel-Nielsen coordinates.
For the Hitchin component $\operatorname{Hit}_3(\Sigma)$, H. Kim \cite{kim1999} claims that the Goldman coordinates \cite{goldman1990} are indeed Darboux coordinates for $\omega_G$. He first studies $\operatorname{Hit}_3(\Sigma)$ where $\Sigma$ is a compact surface with boundary. Although $\operatorname{Hit}_3(\Sigma)$ itself is not a symplectic manifold, it admits a foliation whose leaves are of the form $\operatorname{Hit}_3 ^{\mathscr{B}}(\Sigma)$, the subspace of $\operatorname{Hit}_3(\Sigma)$ consisting of representations whose boundary holonomies lie in prescribed conjugacy classes $\mathscr{B}$. H. Kim and Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997} show that each leaf $\operatorname{Hit}_3 ^{\mathscr{B}}(\Sigma)$ can be given a symplectic form $\omega_K ^{\Sigma}$. When $\Sigma$ is a pair of pants $P$, the space $\operatorname{Hit}_3 ^{\mathscr{B}}(P)$ can be parametrized by Goldman's coordinates $s$ and $t$. H. Kim shows that Goldman's $(s,t)$ parameters on $\operatorname{Hit}_3 ^{\mathscr{B}} (P)$ form Darboux coordinates with respect to $\omega_K ^P$. H. Kim then tries to glue the various $\operatorname{Hit}_3 ^{\mathscr{B}}(P)$ as Goldman does in \cite{goldman1990}. In the smooth category, this gluing process is relatively easy. In the symplectic category, however, it is more technical and his proof misses crucial intermediate steps. One goal of this paper is to fill in the missing links and make the proof of H. Kim \cite{kim1999} complete and clear.
More recently, Sun-Zhang \cite{sz2017} and Sun-Wienhard-Zhang \cite{swz2017} construct Darboux coordinates for the $\mathrm{PSL}_n(\mathbb{R})$-Hitchin components. Their tool is the deformation theory of Frenet curves.
\subsection{Statement of results} Our first result is, roughly speaking, that the symplectic manifold $(\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma), \omega_K ^\Sigma)$ can be decomposed into a product of simpler symplectic manifolds.
Let $\Sigma$ be a compact oriented surface with negative Euler characteristic and possibly with boundary. By an \emph{essential simple closed curve}, we mean an embedded $S^1$ in $\Sigma$ that is not homotopic to a point nor a boundary component. Let $\xi$ be an essential simple closed curve. Given a path $\eta$ from a base point $p$ to a point in $\xi$, we write $\xi^\eta$ to denote the loop $\eta \ast \xi \ast \eta^{-1}$ at $p$, where $\ast$ is the concatenation. We sometimes regard $\xi$ as an element of $\pi_1(\Sigma,p)$ up to conjugation by considering $\xi^\eta$ for some implicitly chosen path $\eta$ from $p$ to a point in $\xi$. We abuse the notation $\langle \xi \rangle$ to denote the subgroup of $\pi_1(\Sigma, p)$ generated by $\xi^\eta$ when we do not care about particular choice of an element in its conjugacy class.
Throughout this paper, $G$ denotes the Lie group $\mathrm{PSL}_n(\mathbb{R})$ and $\mathfrak{g}$ its Lie algebra $\mathfrak{sl}_n(\mathbb{R})$. If we have a representation $\rho : \pi_1(\Sigma) \to G$, $\mathfrak{g}$ becomes a $\pi_1(\Sigma)$-module $\mathfrak{g}_{\rho}$ via the action $\operatorname{Ad}_{\rho(\gamma)} (X)$, $\gamma\in \pi_1(\Sigma)$, $X\in \mathfrak{g}$. We sometimes write this action simply $\gamma\cdot X$ if its meaning is clear from the context. If there is no chance of confusion, we omit the subscript $\rho$ and simply write $\mathfrak{g}$ instead of $\mathfrak{g}_\rho$.
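Concretely, the action is given by conjugation: for $\gamma\in\pi_1(\Sigma)$ and $X\in\mathfrak{g}$,
\[
\gamma\cdot X = \operatorname{Ad}_{\rho(\gamma)}(X) = \rho(\gamma)\, X\, \rho(\gamma)^{-1},
\]
which is well defined even though a matrix lift of $\rho(\gamma)$ to $\mathrm{SL}_n(\mathbb{R})$ is determined only up to a scalar, since conjugation is insensitive to scalar factors.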
We denote by $\mathcal{X}(\pi_1(\Sigma), \mathrm{PSL}_n(\mathbb{R}))=\mathcal{X}_n(\pi_1(\Sigma))$ the space of representations. Although $\mathcal{X}_n(\pi_1(\Sigma))$ itself is a singular space, it contains, as an open set, a smooth manifold
\[
\overline{\mathcal{X}}_n(\pi_1(\Sigma)):=\{\rho\in \operatorname{Hom}(\pi_1(\Sigma), G)\,|\,\rho \text{ is irreducible and }Z_G(\rho) = \{1\}\}/G
\]
where
\[
Z_G(\rho) = \{g\in G\,|\, g\rho(\gamma) g^{-1} = \rho (\gamma) \text{ for all } \gamma\in \pi_1(\Sigma)\}.
\]
We mostly focus on the smooth manifold $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ because $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ contains $\operatorname{Hit}_n(\Sigma)$ as a connected component.
Suppose that $\Sigma$ has boundary components say $\zeta_1, \cdots, \zeta_b$. Let
\[
\mathscr{B}=\{(\zeta_1,B_1), \cdots, (\zeta_b,B_b)\}
\]
be a set of pairs each of which consists of a boundary component and a conjugacy class of a purely loxodromic element. Then we can define the following subspace of $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$:
\[
\overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma))= \{ [\rho]\in \overline{\mathcal{X}}_n(\pi_1(\Sigma))\,|\, \rho(\zeta_i)\in B_i\text{, }i=1,2,\cdots, b \}.
\]
We may define similarly $\operatorname{Hit}^{\mathscr{B}} _n (\Sigma)$. $\overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma))$ and $\operatorname{Hit}^{\mathscr{B}} _n (\Sigma)$ are interesting because they admit a natural symplectic form $\omega_K ^{\Sigma}$. See Theorem \ref{Ksymp} or \cite{kim1999,guruprasad1997}.
Let $\mathcal{C}=\{\xi_1, \cdots, \xi_m\}$ be a family of mutually disjoint, non-isotopic essential simple closed curves. If we remove these curves from $\Sigma$, we get a collection of subsurfaces $\Sigma_1, \cdots, \Sigma_l$. We assume that the $\Sigma_i$ are all of hyperbolic type.
Define
\[
\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})=\{[\rho]\in \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)
\,|\,\rho(\xi_i)\in C_i\text{, }i=1,2,\cdots,m\}
\]
where $\mathscr{C}=\{(\xi_1,C_1), \cdots, (\xi_m,C_m)\}$ is a family of pairs each of which consists of an element of $\mathcal{C}$ and the conjugacy class of a purely loxodromic element. We know that there is a Hamiltonian $\mathbb{R}^{m(n-1)}$-action on $\operatorname{Hit}_n ^{\mathscr{B}} (\Sigma)$, and $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})$ is a level set of the moment map of this action over a regular value (see Goldman \cite{goldman1986}).
Now we consider the quotient $q:\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})\to \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})/\mathbb{R}^{m(n-1)}$. The restriction map $\Phi=(\iota^* _{\Sigma_1}, \cdots, \iota^* _{\Sigma_l})$ identifies this quotient space with an open subspace of the product space $\operatorname{Hit}^{\mathscr{B}_1} _n (\Sigma_1)\times \cdots \times \operatorname{Hit}^{\mathscr{B}_l} _n (\Sigma_l)$ where
\[
\mathscr{B}_i=\{(\xi,B)\,|\,\xi\text{ is a component of } \partial \overline{\Sigma_i}\text{ and } (\iota_{\Sigma_i}(\xi),B)\in \mathscr{B}\cup \mathscr{C}\}.
\]
The quotient map $q$ is not only topological but also symplectic in the following sense: $q$ pushes forward the symplectic form $\omega_K ^\Sigma$ and induces the symplectic form $\widetilde{\omega}_K ^\Sigma$ on the quotient space. On the other hand, the product space $\operatorname{Hit}^{\mathscr{B}_1} _n (\Sigma_1)\times \cdots \times \operatorname{Hit}^{\mathscr{B}_l} _n (\Sigma_l)$ carries the symplectic form $\omega^{\Sigma_1}_K \oplus \cdots \oplus \omega^{\Sigma_l} _K$. Now we can state our first main theorem. We remark that the theorem holds for general $n$.
\begin{theorem}\label{decompthmintro}
Let $\Sigma$ be a compact oriented hyperbolic surface. Then the map $\Phi$ is a symplectic diffeomorphism from $\operatorname{Hit}^{\mathscr{B}} _n(\Sigma,\mathscr{C})/\mathbb{R}^{m(n-1)}$ onto an open submanifold of $\operatorname{Hit}^{\mathscr{B}_1} _n (\Sigma_1)\times \cdots \times \operatorname{Hit}^{\mathscr{B}_l} _n (\Sigma_l)$.
\end{theorem}
For the precise statement, see Theorem \ref{gendecomp}.
Theorem \ref{decompthmintro} decomposes $\widetilde{\omega}_K ^\Sigma$ into a sum of symplectic forms and allows us to obtain some information about the symplectic structure on $\operatorname{Hit}^{\mathscr{B}} _n(\Sigma,\mathscr{C})$ by studying smaller symplectic manifolds individually. We apply Theorem \ref{decompthmintro} to the case when $\Sigma$ is closed and $\Sigma_i$'s come from a pants decomposition.
Let $\Sigma$ be a closed oriented hyperbolic surface. Take a pants decomposition of $\Sigma$, that is, a choice of a maximal collection of mutually disjoint, non-isotopic essential simple closed oriented curves $\{\xi_1, \cdots, \xi_{3g-3}\}$. Goldman \cite{goldman1990} proves that $\operatorname{Hit}_3(\Sigma)$ can be parametrized by $16g-16$ global parameters, which can be classified into three types:
\begin{itemize}
\item \emph{internal parameters} $(\mathbf{s}_i,\mathbf{t}_i)$ parametrize $\operatorname{Hit}_3 ^{\mathscr{B}}(P_i)$ for each pants $P_i$.
\item \emph{length parameters} $(\ell_i,m_i)$ are positive numbers associated to each $\xi_i$.
\item \emph{twist-bulge parameters} $(u_i,v_i)$ are dual to the length parameters.
\end{itemize}
Internal and length parameters are canonical, whereas the twist-bulge parameters $u_i$, $v_i$ are rather ambiguous. These $(u_i,v_i)$ coordinates measure the amount of twist-bulge along a curve $\xi_i$ with respect to a certain origin, and there is no canonical choice of such a reference point. To remove this ambiguity, we use the relationship between the Goldman coordinates and the Bonahon-Dreyer coordinates \cite{bonahon2018}.
After we obtain the well-defined Goldman coordinates, we prove that a canonical part of Goldman coordinates
\[
(\mathbf{s}_1, \cdots, \mathbf{s}_{2g-2}, \ell_1, m_1, \cdots, \ell_{3g-3},m_{3g-3})
\]
can be completed to a global Darboux coordinate system. A version of the action-angle principle (Theorem \ref{existenceofdarboux}) is essentially used to prove the result.
\begin{theorem}\label{globaldarbouxintro}
Let $\Sigma$ be a closed oriented surface with genus $g>1$. There is a smooth $\mathbb{R}^{8g-8}$-valued function
\[
(\overline{\mathbf{s}}_1, \cdots, \overline{\mathbf{s}}_{2g-2}, \overline{\ell}_1, \overline{m}_1, \cdots, \overline{\ell}_{3g-3}, \overline{m}_{3g-3})
\]
on $(\operatorname{Hit}_3(\Sigma),\omega_G)$ such that
\[
(\mathbf{s}_1,\cdots, \mathbf{s}_{2g-2}, \ell_1,m_1 \cdots, \ell_{3g-3},m_{3g-3}, \overline{\mathbf{s}}_1, \cdots, \overline{\mathbf{s}}_{2g-2}, \overline{\ell}_1, \overline{m}_1, \cdots, \overline{\ell}_{3g-3}, \overline{m}_{3g-3})
\]
becomes a global Darboux coordinate system.
\end{theorem}
Recently, Casella-Tate-Tillmann \cite{casella2018} show that the Goldman bracket and the Fock-Goncharov bracket on the $\mathrm{PSL}_3(\mathbb{R})$ character variety of an open surface coincide on the trace algebra. Their results may be combined with ours to give a further generalization of Theorem \ref{globaldarbouxintro}.
\subsection{About the proofs}
We first prove a variant of the action-angle principle. Suppose that we are given a Lagrangian fiber bundle $f: M\to B$ over a connected open subset $B$ of $\mathbb{R}^n$ such that $H^2(B;\mathbb{R})=0$. Under certain conditions, the bundle map $f=(f_1, \cdots, f_n)$ has complementary coordinates $g=(g_1, \cdots, g_n)$ such that $(f,g)$ forms a global Darboux coordinate system (Theorem \ref{existenceofdarboux}). Indeed if the bundle map $f$ has a global Lagrangian section, then we can find complementary coordinate functions (Lemma \ref{sectionimpliescoordinates}). So it is enough to prove the existence of a global Lagrangian section under the given conditions. We borrow the idea of Duistermaat \cite{duistermaat1980} to show this. We prove that one can find a Lagrangian section locally (Lemma \ref{local}) and then, using sheaf cohomology theory, we show that the obstruction to gluing the local Lagrangian sections vanishes.
Then we prove the decomposition theorem. We prove Theorem \ref{decompthmintro} by induction on the number of cutting curves, and we eventually end up with the situation where we cut the surface along a single simple closed curve. There are two cases depending on whether the curve separates the surface or not. We prove the decomposition formulas for each of these cases.
To this end, we first decompose the tangent space of the Hitchin component. This can be done by means of the Mayer-Vietoris sequence, which is known for the cohomology of group systems \cite{bieri1978}. We construct a similar sequence for the parabolic cohomology.
Suppose that $\xi$ is a separating essential simple closed curve in $\Sigma$, so that $\Sigma\setminus \xi = \Sigma_1 \cup \Sigma_2$. The Mayer-Vietoris sequence tells us that the natural inclusion map $\iota_{\Sigma_i}: \pi_1(\Sigma_i)\to \pi_1(\Sigma)$, $i=1,2$ induces a homomorphism between tangent spaces
\[
(\iota_{\Sigma_1} ^* , \iota_{\Sigma_2} ^*):T_{[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})\to T_{[ \rho\circ \iota_{\Sigma_1}]}\operatorname{Hit}^{\mathscr{B}_1} _n (\Sigma_1) \oplus T_{[ \rho\circ \iota_{\Sigma_2}]} \operatorname{Hit}_n ^{\mathscr{B}_2} (\Sigma_2)
\]
whose kernel is spanned by the tangent vectors along twist flows. Given two vectors $\alpha, \beta \in T_{[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})$, we prove in Theorem \ref{decomppairingsep} that
\begin{equation}\label{decompgeneralintro}
\omega^{\Sigma}_K(\alpha, \beta) = \omega_K ^{\Sigma_1}(\iota_{\Sigma_1} ^*\alpha, \iota_{\Sigma_1} ^*\beta) +\omega_K ^{\Sigma_2} (\iota_{\Sigma_2} ^* \alpha, \iota_{\Sigma_2} ^* \beta).
\end{equation}
When $\xi$ is non-separating, $\Sigma\setminus \xi = \Sigma_0$, we have a similar homomorphism
\[
\iota_{\Sigma_0} ^* : T_{[\rho]}\operatorname{Hit}_n ^{\mathscr{B}} (\Sigma) \to T_{[ \rho\circ \iota_{\Sigma_0}]} \operatorname{Hit}_n ^{\mathscr{B}_0}(\Sigma_0)
\]
induced from $\iota_{\Sigma_0} : \pi_1(\Sigma_0) \to \pi_1(\Sigma)$ whose kernel is again spanned by twist flows. Then, we show in Theorem \ref{decomppairingnonsep} that
\begin{equation}\label{decompgeneralintrononsep}
\omega_K ^{\Sigma}(\alpha, \beta) = \omega_K ^{\Sigma_0} (\iota_{\Sigma_0} ^* \alpha, \iota_{\Sigma_0} ^*\beta).
\end{equation}
In fact, (\ref{decompgeneralintro}) and (\ref{decompgeneralintrononsep}) hold under weaker assumptions on $[\rho]$. See Theorem \ref{decomppairingsep} and Theorem \ref{decomppairingnonsep} for precise statements.
We prove (\ref{decompgeneralintro}) and (\ref{decompgeneralintrononsep}) using the Fox calculus. The key point, which stems from Zocca \cite{zocca1998}, is that we can decompose a relative fundamental class (Lemma \ref{funclcpt}) into a sum of relative fundamental classes of subsurfaces together with some extra terms; roughly, we write $[\Sigma] = [\Sigma_1]+[\Sigma_2] + \text{extra}$ for the separating case and $[\Sigma] =[\Sigma_0] + \text{extra}$ for the non-separating case. Then we choose a nice representative of the given cohomology class in such a way that all the extra terms vanish. Applying the decomposition formulas inductively, we can prove Theorem \ref{decompthmintro}.
Lemma \ref{lagrangian} implies that the map $F:\operatorname{Hit}_3(\Sigma)\to \mathbb{R}^{8g-8}$ assigning to each $[\rho]$ the coordinates
\[
(\mathbf{s}_1([\rho]),\cdots,\mathbf{s}_{2g-2}([\rho]), \ell_1([\rho]), m_1([\rho]),\cdots, \ell_{3g-3}([\rho]),m_{3g-3}([\rho]))
\]
is a Lagrangian fiber bundle. Then we show that $F$ satisfies all the conditions of Theorem \ref{existenceofdarboux}. Therefore Theorem \ref{globaldarbouxintro} follows as a consequence of Theorem \ref{existenceofdarboux}.
\subsection{Acknowledgements}
Discussions with Francis Bonahon, William Goldman, Michael Kapovich, Inkang Kim, Ana Cannas da Silva, and Tengren Zhang were very helpful for us to complete our paper. Sun Zhe and Johannes Huebschmann kindly explained their work and gave many constructive comments. We especially appreciate their help. The second author would like to give special thanks to Francis Bonahon, Daniel Douglas and Hatice Zeybek for their hospitality and helpful conversations during the visit to USC. Finally, the first and second authors visited Stanford University for a GEAR retreat in 2017 and UC Davis in 2015, where we made substantial progress on this paper.
\section{The space of representations and Hitchin components}
In this section, we review basic facts on Hitchin components for both closed surfaces and compact surfaces with boundary. To describe their tangent spaces, we introduce group cohomology and parabolic group cohomology, which represent the tangent spaces of $\operatorname{Hit}_n(\Sigma)$ and $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$ respectively.
\subsection{Definition and properties}
Let $\Sigma$ denote a closed oriented hyperbolic surface. Throughout this paper, the Lie group $G$ always denotes $\mathrm{PSL}_n(\mathbb{R})$ and $\mathfrak{g}$ the Lie algebra $\mathfrak{sl}_n(\mathbb{R})$ of $\mathrm{PSL}_n(\mathbb{R})$. Let
\[
\mathcal{X}(\pi_1(\Sigma), \mathrm{PSL}_n(\mathbb{R}))=\mathcal{X}_n(\pi_1(\Sigma)):=\operatorname{Hom}(\pi_1(\Sigma),\mathrm{PSL}_n(\mathbb{R}))/\mathrm{PSL}_n(\mathbb{R})
\]
be the space of representations. We sometimes consider the GIT quotient instead of the usual one. However, we do not have to distinguish them because the two quotients coincide on the subspace $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ defined below, and we focus only on $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ throughout this paper.
\begin{definition}
A \emph{Hitchin component} $\operatorname{Hit}_n(\Sigma)$ is a connected component of $\mathcal{X}(\pi_1(\Sigma), \mathrm{PSL}_n(\mathbb{R}))$ that contains a Fuchsian representation.
\end{definition}
When $n=2$, $\operatorname{Hit}_2(\Sigma)$ coincides with the usual Teichm\"uller space. It is known that the Teichm\"uller space is homeomorphic to a cell of dimension $6g-6$ where $g$ is the genus of $\Sigma$. A similar result holds for Hitchin components. Indeed Hitchin \cite{hitchin1992} himself shows that $\operatorname{Hit}_n(\Sigma)$ is homeomorphic to a cell of dimension $(n^2 -1) \cdot (2g-2)$.
Now suppose that $\Sigma$ is a compact hyperbolic surface possibly with boundary. We can naturally generalize the notion of Hitchin components for such a non-closed surface.
\begin{definition}[Labourie-McShane \cite{labourie2009}]
Let $\Sigma$ be a compact oriented hyperbolic surface. $[\rho]\in \mathcal{X}_n(\pi_1(\Sigma))$ is said to be a \emph{Hitchin representation} if $\rho$ can be continuously deformed into a Fuchsian representation in such a way that the holonomies of the boundary components remain purely loxodromic (i.e., diagonalizable with distinct positive real eigenvalues) in the course of the deformation. A connected component of $\mathcal{X}_n (\pi_1(\Sigma))$ that consists of Hitchin representations is denoted by the same notation $\operatorname{Hit}_n(\Sigma)$.
\end{definition}
Suppose that $\Sigma_0$ is a hyperbolic incompressible subsurface of a closed hyperbolic surface $\Sigma$. Given $[\rho]\in \operatorname{Hit}_n(\Sigma)$, its restriction $\rho|_{\pi_1(\Sigma_0)}$ to the subgroup $\pi_1(\Sigma_0)$ is also in $\operatorname{Hit}_n (\Sigma_0)$. See Theorem 9.1 of Labourie-McShane \cite{labourie2009}.
The space $\mathcal{X}_n(\pi_1(\Sigma))$ contains an open subspace, the space of `good' representations
\[
\overline{\mathcal{X}}_n (\pi_1(\Sigma)):=\operatorname{Hom}_s(\pi_1(\Sigma),G)/G
\]
where
\[
\operatorname{Hom}_s(\pi_1(\Sigma),G):=\{\rho\in \operatorname{Hom}(\pi_1(\Sigma),G)\,|\,\rho\text{ is irreducible and } Z_G(\rho)=\{1\} \}.
\]
Suppose that $\rho\in \operatorname{Hom}_s(\pi_1(\Sigma),G)$ is given. Let $X \in \mathfrak{sl}_n(\mathbb{R})$ be an $\operatorname{Ad}_\rho$-invariant element. Then $\exp X$ is in $Z_G(\rho)$. So by definition of $\operatorname{Hom}_s(\pi_1(\Sigma),G)$, we have $\exp X=1$ or, equivalently, $X=0$. It follows that $\mathfrak{g}_{\rho}$ has no nontrivial $\operatorname{Ad}_\rho$-invariant element. Therefore $\operatorname{Hom}_s(\pi_1(\Sigma),G)$ is a smooth manifold. Moreover, it is proven by Johnson-Millson \cite{Johnson1987} that the $G$-action on $\operatorname{Hom}_s(\pi_1(\Sigma),G)$ is proper and free. Consequently the quotient space $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ is also a smooth manifold.
Hitchin representations for a compact surface have many interesting properties. We summarize them as the following lemma, which is implicitly used several times throughout this paper.
\begin{lemma}\label{NoInvariantElement} Let $\Sigma$ be a compact oriented hyperbolic surface. Let $[\rho]\in \operatorname{Hit}_n(\Sigma)$.
\begin{itemize}
\item $\rho$ is faithful, irreducible and discrete.
\item For each nontrivial $\gamma\in \pi_1(\Sigma)$, $\rho(\gamma)$ is purely loxodromic.
\item The centralizer of $\rho$, $Z_G(\rho)$, is trivial.
\end{itemize}
In particular $\operatorname{Hit}_n(\Sigma)$ is a connected component of $\overline{\mathcal{X}}_{n}(\pi_1(\Sigma))$.
\end{lemma}
\begin{proof}
First assume that $\Sigma$ is closed. Then the first two statements are nothing but Proposition 3.4 of Labourie \cite{labourie2006}.
Suppose that $\Sigma$ has nonempty boundary. We consider the Hitchin double $\widehat{\rho}$. It is known that $\widehat{\rho}$ is in the Hitchin component $\operatorname{Hit}_n(\widehat{\Sigma})$ of the double $\widehat{\Sigma}$ of $\Sigma$; see Corollary 9.2.2.4 of \cite{labourie2009}. As $\widehat{\Sigma}$ is closed, we know that $\widehat{\rho}$ is discrete and faithful and that $\widehat{\rho}(\gamma)$ is purely loxodromic for any nontrivial element $\gamma$. It follows that $\rho$ has the same properties.
We now show that $\rho$ is irreducible. By Theorem 9.1 of \cite{labourie2009}, $\rho$ is a positive representation. Therefore by Lemma 5.12 of Guichard-Wienhard \cite{guichard2012}, $\rho$ is irreducible.
For the third statement, suppose that $X$ centralizes $\rho(\pi_1(\Sigma))$. Since each $\rho(\gamma)$ is purely loxodromic, we observe that $X$ must be diagonal in an eigenbasis of $\rho(\gamma)$. Since $\rho$ is irreducible, Schur's lemma applies and we conclude that $X$ must be a scalar, hence trivial in $G$.
\end{proof}
Let us introduce a nice submanifold of $\operatorname{Hit}_n(\Sigma)$ which is essential for our discussion.
\begin{definition}
Let $\Sigma$ be a compact oriented hyperbolic surface with boundary components $\{\zeta_1, \cdots, \zeta_b\}$. By a \emph{boundary frame}, we mean a collection $\mathscr{B}=\{(\zeta_1,C_1), \cdots, (\zeta_b,C_b)\}$ of pairs each of which consists of a boundary component and a conjugacy class in $G$. Given a boundary frame $\mathscr{B}$, we define the following space
\[
\operatorname{Hit}^{\mathscr{B}} _n (\Sigma)= \{[\rho]\in \operatorname{Hit}_n(\Sigma)\,|\, \rho(\zeta_i) \in C_i\text{ for }i=1,2,\cdots, b\}.
\]
We also define $\overline{\mathcal{X}}_n ^{\mathscr{B}} (\pi_1(\Sigma))\subset \overline{\mathcal{X}}_n (\pi_1(\Sigma))$ in the same fashion.
Let $\mathcal{C}= \{\xi_1, \cdots, \xi_m\}$ be a collection of pairwise disjoint, non-isotopic essential simple closed curves. A $\mathcal{C}$-frame is a family $\mathscr{C}=\{(\xi_1,C_1), \cdots, (\xi_m,C_m)\}$ of pairs each of which consists of $\xi_i$ and a conjugacy class in $G$. Given a $\mathcal{C}$-frame $\mathscr{C}$, define
\begin{align*}
\operatorname{Hit}^{\mathscr{B}} _n (\Sigma,\mathscr{C})& = \{[\rho]\in \operatorname{Hit}_n ^{\mathscr{B}} (\Sigma)\,|\, \rho(\xi_i) \in C_i\text{, }i=1,2,\cdots,m\},\quad \text{and}\\
\overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma),\mathscr{C})&=\{[\rho]\in \overline{\mathcal{X}}_n ^{\mathscr{B}} (\pi_1(\Sigma))\,|\, \rho(\xi_i) \in C_i\text{, }i=1,2,\cdots,m\}.
\end{align*}
\end{definition}
To be more precise, we should understand $\zeta_i$ (and $\xi_i$) as a loop at the base point $p$ of $\pi_1(\Sigma, p)$ by choosing a path from $p$ to a point in $\zeta_i$ (and $\xi_i$). However since we are dealing with the conjugacy classes, we may ignore such a technicality.
We observe that $\operatorname{Hit}_n(\Sigma)=\bigcup \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$ where the union runs over all possible choice of boundary frames. This foliation plays the key role in the study of Poisson geometry of $\operatorname{Hit}_n(\Sigma)$.
\subsection{Group cohomology}
Group cohomology is a model for the tangent spaces of $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$. In this subsection we review the definition of group cohomology and its properties.
Let $\Gamma$ be a finitely presented group. Given a representation $\rho:\Gamma \to G$, we denote by $\mathfrak{g}_{\rho}$ the $\Gamma$-module $\mathfrak{g}$ under the action $\operatorname{Ad} _\rho$. If the action is clear from the context, we omit the subscript $\rho$ and simply write $\mathfrak{g}$ instead of $\mathfrak{g}_\rho$.
By a resolution over $\Gamma$, we mean any projective resolution of $M=\mathbb{R}$ or $\mathbb{Z}$
\[
\cdots \to R_2\to R_1 \to R_0 \to M \to 0
\]
where $M$ is regarded as a trivial $M \Gamma$-module. Then the group cohomology $H^q(\Gamma; V)$ with coefficients in an $M\Gamma$-module $V$ is the cohomology of the complex $\operatorname{Hom}_{\Gamma} (R_\ast (\Gamma), V)$. Our major concern is the case where $M=\mathbb{R}$ and $V=\mathfrak{g}_\rho$.
We mostly use the \emph{normalized bar resolution} $(\mathbf{B}_\ast(\Gamma),\mathrm{d}_\Gamma)$ throughout this paper. Recall that $\mathbf{B}_q (\Gamma)$ is a free $\Gamma$-module on symbols $[x_1|x_2| \cdots |x_q]$ where $x_1, \cdots, x_q\in \Gamma\setminus\{1\}$.
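For later use we record the differentials of this complex in low degrees. Under the identification of $\operatorname{Hom}_\Gamma(\mathbf{B}_0(\Gamma),\mathfrak{g})$ with $\mathfrak{g}$ and of $\operatorname{Hom}_\Gamma(\mathbf{B}_1(\Gamma),\mathfrak{g})$ with maps $\alpha:\Gamma\to\mathfrak{g}$ satisfying $\alpha(1)=0$, we have
\[
(\mathrm{d}_\Gamma X)(x) = x\cdot X - X, \qquad (\mathrm{d}_\Gamma \alpha)(x,y) = x\cdot\alpha(y) - \alpha(xy) + \alpha(x)
\]
for $X\in\mathfrak{g}$ and $x,y\in\Gamma$. Hence $\alpha$ is a cocycle if and only if $\alpha(xy)=\alpha(x)+x\cdot\alpha(y)$, and a coboundary if and only if $\alpha(\gamma)=\gamma\cdot X - X$ for some fixed $X\in\mathfrak{g}$; these identities are used repeatedly in the computations below.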
Now we give a relative version of group cohomology. We follow Trotter's paper \cite{trotter1962} (see also \cite[section 1]{guruprasad1997}). A group system is, by definition, a pair $(\Gamma, \mathcal{S})$ consisting of a finitely presented group $\Gamma$ and a collection $\mathcal{S}=\{\Gamma_1, \cdots, \Gamma_m\}$ of finitely presented subgroups of $\Gamma$.
\begin{definition}
Let $M=\mathbb{R}$ or $\mathbb{Z}$. An \emph{auxiliary resolution} $(R_\ast,A_\ast ^i)$ over the group system $(\Gamma, \mathcal{S})$, or simply $(\Gamma, \mathcal{S})$-resolution, consists of
\begin{itemize}
\item $R_\ast$, a resolution over $\Gamma$
\item $A^i _\ast$, a resolution over $\Gamma_{i}$
\item $A_\ast:= \bigoplus_{i=1} ^m M\Gamma \otimes_{M\Gamma_{i}} A^i _\ast$ is a direct summand of $R_\ast$.
\end{itemize}
\end{definition}
Since $A_\ast$ is a direct summand of $R_\ast$, we can form a short exact sequence of chain complexes
\begin{equation}\label{esofrelative}
0\to A_\ast \to R_\ast \to R_\ast/A_\ast \to 0.
\end{equation}
For a given $\Gamma$-module $\mathfrak{g}_{\rho} =\mathfrak{g}$, we apply the functor $\operatorname{Hom}_{\Gamma}(-,\mathfrak{g})$ to this exact sequence. Then we get the long exact sequence
\[
\cdots \to H^q (\Gamma,\mathcal{S}; \mathfrak{g}) \to H^q(\Gamma; \mathfrak{g}) \to H^q(\mathcal{S};\mathfrak{g}) \to H^{q+1}(\Gamma,\mathcal{S}; \mathfrak{g}) \to \cdots.
\]
Note that $H^q(\mathcal{S};\mathfrak{g}_\rho)\approx \bigoplus H^q(\Gamma_{i};\mathfrak{g}_{\rho|_{\Gamma_i}})$.
\begin{definition}
The parabolic cohomology of $\Gamma$ of degree $q$ with coefficient in $\mathfrak{g}_\rho$, $H^q _{\mathrm{par}} (\Gamma,\mathcal{S}; \mathfrak{g}_\rho)$, is the image of $H^q(\Gamma, \mathcal{S};\mathfrak{g}_\rho)$ in $H^q(\Gamma; \mathfrak{g}_\rho)$. In other words
\[
H^q _{\mathrm{par}} (\Gamma, \mathcal{S}; \mathfrak{g}_\rho) \approx H^q(\Gamma,\mathcal{S}; \mathfrak{g}_\rho) / H^{q-1} (\mathcal{S}; \mathfrak{g}_\rho).
\]
\end{definition}
We are interested in the case where $q=1$ and $\Gamma=\pi_1(\Sigma)$. In the appendix, we describe how to compute the (1st) parabolic cohomology by finding a nice resolution over a group system $(\Gamma, \mathcal{S})$. In terms of the normalized bar resolution $\mathbf{B}_\ast(\Gamma)$, elements of $H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g})$ can be represented by parabolic cocycles
\[
Z^1_{\mathrm{par}} (\Gamma, \mathcal{S}; \mathfrak{g}):=\{\alpha\in Z^1(\Gamma, \mathfrak{g})\,|\, \iota^\# _{\Gamma_{i}} (\alpha) \in B^1(\Gamma_i, \mathfrak{g})\}
\]
where $\iota^\# _{\Gamma_{i}}$ is the restriction defined at the end of this subsection.
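Unwinding the definition, a cocycle $\alpha\in Z^1(\Gamma,\mathfrak{g})$ is parabolic precisely when, for each $i$, there is an element $X_i\in\mathfrak{g}$ with
\[
\alpha(\gamma) = \gamma\cdot X_i - X_i \qquad\text{for all } \gamma\in\Gamma_i.
\]
These elements $X_i$ are exactly the ones entering the definition of the symplectic form $\omega_K^\Sigma$ in Theorem \ref{Ksymp} below.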
\begin{remark}\label{conj}
Consider a group system $(\Gamma, \mathcal{S}')$, $\mathcal{S}' = \{\Gamma_1' ,\cdots, \Gamma_m'\}$, which is conjugate to $(\Gamma,\mathcal{S})$ in the sense that for each $i=1,2,\cdots, m$ there is a $g_i\in \Gamma$ such that $\Gamma_{i} ' = g_i \Gamma_{i} g_i ^{-1}$. Then the parabolic cohomology $H^1_{\mathrm{par}} (\Gamma,\mathcal{S}';\mathfrak{g})$ is the same as $H^1_{\mathrm{par}} (\Gamma,\mathcal{S};\mathfrak{g})$ because $\iota^\# _\gamma (\alpha)\in B^1(\Gamma_i; \mathfrak{g})$ if and only if $\iota^\# _{g\gamma g^{-1}}(\alpha) \in B^1(\Gamma_i; \mathfrak{g})$. In other words, the parabolic cohomology $H^q _{\mathrm{par}} (\Gamma, \mathcal{S}; \mathfrak{g})$ depends only on the conjugacy classes of the $\Gamma_{i}$. So with regard to parabolic cohomology, we may define a group system as a pair consisting of a group $\Gamma$ and a family of \emph{conjugacy classes} of subgroups $\Gamma_1, \cdots, \Gamma_m$ of $\Gamma$.
\end{remark}
We finish this subsection by introducing the restriction map. Let $\mathbf{B}_\ast(\Gamma)$ be the normalized bar resolution over $\Gamma$. Suppose that we are given a subgroup inclusion $\iota_{\Gamma'} : \Gamma'\to \Gamma$. Then we have a natural chain map $\iota_{\Gamma'} ^\# : \operatorname{Hom}_{\Gamma} (\mathbf{B}_\ast (\Gamma),\mathfrak{g})\to \operatorname{Hom}_{\Gamma} (\mathbb{R} \Gamma\otimes \mathbf{B}_\ast(\Gamma') , \mathfrak{g}) $. Since $\operatorname{Hom}_{\Gamma} (\mathbb{R} \Gamma\otimes \mathbf{B}_\ast(\Gamma') , \mathfrak{g})\approx \operatorname{Hom}_{\Gamma'} (\mathbf{B}_\ast (\Gamma'), \mathfrak{g})$ as chain complexes of $\mathbb{R}$-vector spaces, this inclusion induces a homomorphism $H^1(\Gamma,\mathfrak{g}) \to H^1(\Gamma', \mathfrak{g})$ which we denote by $\iota_{\Gamma'}^*[\alpha]=[\iota_{\Gamma'}^\# (\alpha)]$ where $[\alpha]\in H^1(\Gamma, \mathfrak{g})$. Similarly, if $(\Gamma', \mathcal{S}')$ is a group subsystem of $(\Gamma, \mathcal{S})$, we have the natural restriction map $H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}) \to H^1_{\mathrm{par}} (\Gamma', \mathcal{S}';\mathfrak{g})$.
\subsection{Tangent spaces of $\overline{\mathcal{X}}_n (\pi_1(\Sigma))$}
It is well-known that the tangent space of $\overline{\mathcal{X}}_n (\pi_1(\Sigma))$ at each point $[\rho]\in \overline{\mathcal{X}}_n (\pi_1(\Sigma))$ can be identified with $H^1(\pi_1(\Sigma); \mathfrak{g}_{\rho})$. See for example \cite{weil1964}, \cite{goldman1984}, \cite{guruprasad1997} and \cite{labourie2013}. Since $\operatorname{Hit}_n(\Sigma)$ is a component of $\overline{\mathcal{X}}_n(\pi_1(\Sigma))$ (Lemma \ref{NoInvariantElement}), we can say that the tangent space of $\operatorname{Hit}_n(\Sigma)$ at $[\rho]$ is $H^1(\pi_1(\Sigma); \mathfrak{g}_{\rho})$.
As in the closed surface case, we have a nice description of local geometry for $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$. Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997} shows that the tangent space of $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$ at $[\rho]\in \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$ can be identified with the parabolic group cohomology
\begin{align*}
T_{[\rho]} \overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma)) &\approx H^1 _{\mathrm{par}} (\pi_1(\Sigma),\{\langle \zeta_1\rangle, \cdots, \langle \zeta_b \rangle\} ;\mathfrak{g}_{ \rho}).
\end{align*}
More generally, we show the following in the appendix, Proposition \ref{tangentA}.
\begin{proposition}\label{tangent}
Let $\Sigma$ be a compact oriented hyperbolic surface possibly with boundary components $\{\zeta_1, \cdots, \zeta_b\}$. Let $\{\xi_1, \cdots, \xi_m\}$ be mutually disjoint, non-isotopic essential simple closed curves. At each $[\rho]\in \overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma),\mathscr{C})$,
\[
T_{[\rho]}\overline{\mathcal{X}}_n ^{\mathscr{B}}(\pi_1(\Sigma),\mathscr{C}) \approx H^1 _{\mathrm{par}}(\pi_1 (\Sigma),\{\langle{\xi_1}\rangle, \cdots, \langle\xi_m\rangle,\langle\zeta_1\rangle, \cdots, \langle \zeta_b\rangle\};\mathfrak{g}_\rho)
\]
where $\rho$ is a representative of the class $[\rho]$.
\end{proposition}
Recall, by Remark \ref{conj}, that the particular choices of the subgroups $\langle\xi_i \rangle$ and $\langle \zeta_i \rangle$ within their conjugacy classes are not important.
\section{Aspects of symplectic geometry}
In this section, we collect elements of symplectic geometry related to our discussion. We review the construction of the Atiyah-Bott-Goldman symplectic form on $\operatorname{Hit}_n(\Sigma)$ as well as the symplectic form of H. Kim on $\operatorname{Hit}_n ^\mathscr{B}(\Sigma)$.
The key part of this section is subsection \ref{aaprinciple} where we prove a variant of the action-angle principle (Theorem \ref{existenceofdarboux}) that allows us to find global Darboux coordinates under certain conditions.
\subsection{Definitions and properties}
A symplectic manifold is a smooth manifold $M$ with a non-degenerate closed 2-form $\omega$. Given a smooth function $f\in C^\infty(M)$, the Hamiltonian vector field associated to $f$ is the unique vector field $\mathbb{X}_f$ such that
\[
\omega(\mathbb{X}_f,Y)= \mathrm{d} f (Y) = Yf
\]
for any vector field $Y$. Since $\mathrm{d} \omega =0$, we have
\[
\mathcal{L}_X \omega = \mathrm{d} \iota_X \omega + \iota_X \mathrm{d} \omega = \mathrm{d} \iota_X \omega
\]
where $\mathcal{L}$ denotes the Lie derivative. Hence a vector field $X$ preserves $\omega$ if and only if the induced 1-form $\iota_X \omega=\omega(X,-)$ is closed. In particular, $\mathcal{L}_{\mathbb{X}_f}\omega=0$. It follows that the flow $\Psi^t$ associated to the vector field $\mathbb{X}_f$ is a symplectomorphism for each $t\in\mathbb{R}$ whenever $\Psi^t$ is defined.
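As a minimal illustration, take $M=\mathbb{R}^2$ with $\omega=\mathrm{d} x\wedge \mathrm{d} y$. Writing $\mathbb{X}_f = a\,\partial_x + b\,\partial_y$, the defining relation $\omega(\mathbb{X}_f, Y)=\mathrm{d} f(Y)$ forces $a=\partial f/\partial y$ and $b=-\partial f/\partial x$, so that
\[
\mathbb{X}_f = \frac{\partial f}{\partial y}\,\partial_x - \frac{\partial f}{\partial x}\,\partial_y.
\]
For instance, $f=\frac{1}{2}(x^2+y^2)$ gives $\mathbb{X}_f = y\,\partial_x - x\,\partial_y$, whose flow is rotation of the plane; each time-$t$ map is indeed a symplectomorphism.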
Suppose that we have a symplectic manifold $(M, \omega)$. By defining the Poisson bracket of smooth functions $f,g\in C^\infty(M)$ by $\{f,g\} = \omega(\mathbb{X}_f, \mathbb{X}_g)$, we can turn $M$ into a Poisson manifold.
Let $x\in M^{2n}$ be any point of a symplectic manifold. \emph{Darboux's theorem} states that there is a coordinate neighborhood $(U, (f_1, \cdots, f_n,g_1, \cdots, g_n))$ of $x$ such that $\omega|_U = \sum_{i=1} ^n \mathrm{d} f_i \wedge \mathrm{d} g_i$. Such coordinates are called (local) \emph{Darboux coordinates}. Global Darboux coordinates are global coordinates
\[
(f_1, \cdots, f_n,g_1, \cdots, g_n):M\to \mathbb{R}^{2n}
\]
of $M$ where $\omega$ can be expressed as $\omega= \sum_{i=1} ^{n} \mathrm{d} f_i \wedge \mathrm{d} g_i$.
There is a particularly important symplectic manifold which arises naturally from any manifold. Let $M$ be any smooth $n$-manifold. The cotangent bundle $p:T^*M\to M$ has a canonical 1-form $\lambda_{\text{can}}$ which is characterized by the following property: $\sigma^* \lambda_{\text{can}} = \sigma$ for every 1-form $\sigma$. If $(U, (x_1, \cdots, x_n))$ is a local coordinate chart of $M$, then there are natural coordinates $y_i$ that parametrize each $T^* _q M$, $q\in U$, with respect to the ordered basis $\{\mathrm{d} x_1, \cdots, \mathrm{d} x_n\}$. Then we observe that $(p^{-1}(U),(x_1,\cdots, x_n, y_1, \cdots, y_n))$ is a local coordinate chart of $T^* M$. With respect to these coordinates, $\lambda_{\text{can}}$ can be written as $\sum_{i=1} ^n y_i \mathrm{d} x_i$. Define the 2-form $\omega_{\text{can}}$ by $\omega_{\text{can}} = - \mathrm{d} \lambda_{\text{can}}$. Then $(T^*M , \omega_{\text{can}})$ is a symplectic manifold of dimension $2n$. Observe that the Hamiltonian flow on $p^{-1}(U)$ associated to each coordinate function $x_i$ is linear.
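In these coordinates one checks directly that
\[
\omega_{\text{can}} = -\,\mathrm{d}\Big(\sum_{i=1}^n y_i\,\mathrm{d} x_i\Big) = \sum_{i=1}^n \mathrm{d} x_i\wedge\mathrm{d} y_i,
\]
so $(x_1,\cdots,x_n,y_1,\cdots,y_n)$ are Darboux coordinates. Moreover, solving $\omega_{\text{can}}(\mathbb{X}_{x_i},-)=\mathrm{d} x_i$ yields $\mathbb{X}_{x_i}=-\partial/\partial y_i$, whose flow translates the fibers linearly, as asserted.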
\subsection{Marsden-Weinstein quotient}
Let $(M, \omega)$ be a symplectic manifold. Suppose that a Lie group $K$ acts on $M$ as symplectomorphisms. This action is called \emph{weakly Hamiltonian} if for each $X\in \mathfrak{k}$, its fundamental vector field $\xi_X$ is a Hamiltonian vector field. That is, for each $X\in \mathfrak{k}$, there is a unique smooth function $H_X$ such that $\iota_{\xi_X}\omega = \mathrm{d} H_X$. A weakly Hamiltonian action of $K$ on $M$ is \emph{Hamiltonian} if the rule $X\mapsto H_X$ is a Lie algebra homomorphism, i.e., $H_{[X,Y]} = \{H_X , H_Y\}$ for all $X, Y \in \mathfrak{k}$. The obstruction for a weakly Hamiltonian action to be Hamiltonian lies in the Lie algebra cohomology $H^2(\mathfrak{k}; \mathbb{R})$. In particular, if $H^2(\mathfrak{k};\mathbb{R})=0$, every weakly Hamiltonian action of $K$ becomes Hamiltonian.
Suppose that we have a Hamiltonian action by a Lie group $K$. For each $x\in M$ there is a unique element $\mu(x)\in \mathfrak{k}^*$ such that $H_X (x) = \langle \mu(x) , X \rangle$ for all $X\in \mathfrak{k}$ where $\langle\cdot, \cdot\rangle$ is the canonical pairing between $\mathfrak{k}^*$ and $\mathfrak{k}$. The map $\mu: x\mapsto \mu(x)$ so defined is called the \emph{moment map}.
If the action is Hamiltonian then $\mu$ is $K$-equivariant where $\mathfrak{k}^*$ is equipped with the coadjoint action. Observe that if $z\in \mathfrak{k}^*$ is a regular value of $\mu$ and is a fixed point of the coadjoint action, then $\mu^{-1}(z)$ is an invariant coisotropic submanifold. For each $x\in \mu^{-1}(z)$, the symplectic complement $T_x \mu^{-1}(z)^\omega$ is precisely the tangent space of the orbit $K\cdot x$. Therefore we can hope that a new symplectic manifold may be constructed by collapsing these `bad' directions. That is the idea of \emph{symplectic reduction}, which we state as follows:
\begin{theorem}[Symplectic reduction or Marsden-Weinstein quotient]\label{MWq} Let $(M, \omega)$ be a symplectic manifold with a Hamiltonian action of a Lie group $K$. Let $\mu$ be the moment map. Suppose that $z\in \mathfrak{k}^*$ is a fixed point of the coadjoint action and that it is a regular value of $\mu$. If, in addition, $K$ acts properly and freely on $\mu^{-1}(z)$, then the quotient
\[
\mu^{-1}(z) /K
\]
is a smooth manifold and carries the canonical symplectic structure $\widetilde{\omega}$ which is uniquely determined by the property $\omega|_{\mu^{-1}(z)} = (q^* \widetilde{\omega})|_{\mu^{-1}(z)}$ where $q:\mu^{-1}(z) \to \mu^{-1}(z) /K$ is the quotient map.
\end{theorem}
One can find more details about symplectic reduction for example in \cite{mcduff2017, marsden2007, da2001}.
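A standard example to keep in mind: let $K=S^1$ act diagonally by rotation on $\mathbb{C}^n\approx\mathbb{R}^{2n}$ with the symplectic form $\sum_{i=1}^n \mathrm{d} x_i \wedge \mathrm{d} y_i$. With suitable sign conventions the moment map is $\mu(z)=-\frac{1}{2}\|z\|^2$ up to an additive constant, every nonzero value in the image is regular, the level sets are round spheres on which $S^1$ acts freely, and the reduced space $\mu^{-1}(z)/S^1$ is $\mathbb{CP}^{n-1}$ equipped with a multiple of the Fubini-Study form.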
\subsection{The Fox calculus and the Atiyah-Bott-Goldman symplectic form} Motivated by Atiyah-Bott \cite{atiyah1983}, Goldman \cite{goldman1984} gives an algebraic construction of the symplectic form on $\overline{\mathcal{X}}_n (\pi_1(\Sigma))$. This symplectic form is now called the Atiyah-Bott-Goldman symplectic form, which we denote by $\omega_G ^\Sigma$ or simply $\omega_G$ if the surface $\Sigma$ is understood from the context. The following theorem, one of the main results of the free differential calculus of Fox \cite{fox1953}, is the key ingredient of Goldman's construction.
\begin{theorem}[Fox \cite{fox1953}]\label{fox}
Let $\Gamma$ be a free group on free generators $s_1, \cdots, s_n$. Let $\mathbb{Z}\Gamma$ be the group ring. There is a collection of operators $\frac{\partial}{\partial s_i}: \mathbb{Z} \Gamma \to \mathbb{Z} \Gamma$, $i=1,2,\cdots, n$ having the following properties
\begin{itemize}
\item Given $x,y\in \mathbb{Z} \Gamma$,
\[
\frac{\partial xy}{\partial s_i} = x \frac{\partial y}{\partial s_i} + \frac{\partial x}{\partial s_i } \epsilon(y)
\]
where $\epsilon(x)$ denotes the sum of coefficients of $x$.
\item $\displaystyle \frac{\partial s_i } {\partial s_j } = \begin{cases}1 & i=j \\ 0 &i\ne j \end{cases}$
\item For any $x \in \mathbb{Z}\Gamma$,
\[
x = \epsilon(x)1+ \sum_{i=1} ^n \frac{\partial x} {\partial s_i} (s_i-1).
\]
\end{itemize}
\end{theorem}
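For example, let $\Gamma$ be free on $x$ and $y$, and let $R=[x,y]=xyx^{-1}y^{-1}$. Differentiating $ss^{-1}=1$ with the first property gives $\frac{\partial s^{-1}}{\partial s}=-s^{-1}$, and then a direct computation yields
\[
\frac{\partial R}{\partial x} = 1 - xyx^{-1}, \qquad \frac{\partial R}{\partial y} = x - xyx^{-1}y^{-1} = x - R.
\]
One can check the third property by hand: $1+(1-xyx^{-1})(x-1)+(x-R)(y-1)=R$ in $\mathbb{Z}\Gamma$.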
Theorem \ref{fox} allows us to construct a non-trivial homology class in $H_2(\pi_1(\Sigma), \mathbb{Z})$. To do this we use the normalized bar resolution $\mathbf{B}_\ast(\Gamma)$ over $\Gamma$. For our convenience, let us make the following convention: $[a\pm b| x] = [a|x]\pm [b|x] \in \mathbf{B}_2(\Gamma)$ for any $a,b,x\in \Gamma\setminus\{1\}$. Now, choose a presentation
\[
\langle x_1, y_1, x_2, y_2, \cdots, x_g, y_g\,|\, R\rangle
\]
for $\Gamma=\pi_1(\Sigma)$ where $R=\prod_{i=1} ^g [x_i,y_i] $. Then
\[
[\Sigma] = \sum_{i=1} ^g \left. \left[\frac{\partial R}{\partial x_i}\right| x_i\right] + \sum_{i=1} ^g \left. \left[ \frac{\partial R}{\partial y_i}\right| y_i\right]
\]
represents a generator of $H_2(\Gamma; \mathbb{Z})\approx \mathbb{Z}$. See Proposition 3.9 of Goldman \cite{goldman1984}. We call $[\Sigma]$ a \emph{fundamental class} of $\Gamma$. If we use a different relation, say $R' =hRh^{-1}$ for some $h\in \Gamma$, then the fundamental class with respect to the new relation $R'$ reads
\[
\sum_{i=1} ^g \left. \left[ h\frac{\partial R}{\partial x_i}\right| x_i\right] + \sum_{i=1} ^g \left.\left[ h \frac{\partial R}{\partial y_i}\right| y_i\right]
\]
which is homologous to the original fundamental class $[\Sigma]$.
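Since $\frac{\partial (R_1R_2)}{\partial s} = \frac{\partial R_1}{\partial s} + R_1 \frac{\partial R_2}{\partial s}$ for words $R_1,R_2$, the computation above extends to the surface relator $R=\prod_{i=1}^g [x_i,y_i]$: writing $P_j = \prod_{i<j}[x_i,y_i]$, we get
\[
\frac{\partial R}{\partial x_j} = P_j\,(1-x_jy_jx_j^{-1}), \qquad \frac{\partial R}{\partial y_j} = P_j\,(x_j - [x_j,y_j]),
\]
which makes the fundamental class $[\Sigma]$ completely explicit.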
\begin{theorem}[Goldman \cite{goldman1984}]\label{goldmansymplectic} Let $\Sigma$ be a closed oriented hyperbolic surface. Then $\overline{\mathcal{X}}_n(\Gamma)$ is a symplectic manifold with the symplectic form defined at each point $[\rho]\in \overline{\mathcal{X}}_n (\Gamma)$ by
\[
\omega_G ^\Sigma([\alpha],[\beta]) = \langle \alpha \smile \beta, [\Sigma] \rangle,
\]
where $[\alpha],[\beta]\in H^1(\Gamma, \mathfrak{g}_{\rho})$.
\end{theorem}
Explicitly, in terms of the normalized bar resolution,
\begin{equation}\label{explicitformula}
\langle \alpha \smile \beta, [\Sigma]\rangle = - \sum_{i=1} ^g \operatorname{Tr} \alpha\left(\overline{ \frac{\partial R}{\partial x_i}}\right) \beta(x_i) -\sum_{i=1} ^g \operatorname{Tr} \alpha\left(\overline{ \frac{\partial R}{\partial y_i}}\right) \beta(y_i)
\end{equation}
where $\overline{(\cdot)}: \mathbb{R} \Gamma \to \mathbb{R} \Gamma$ is the map induced from the inversion that sends an element $g$ of $\Gamma$ to $g^{-1}$.
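To make the bookkeeping concrete, note that in the notation above $\overline{\partial R/\partial x_1}=\overline{1-x_1y_1x_1^{-1}} = 1 - x_1y_1^{-1}x_1^{-1}$, and since normalized cochains vanish on $1\in\Gamma$, the corresponding term of (\ref{explicitformula}) reads
\[
-\operatorname{Tr}\alpha\!\left(1-x_1y_1^{-1}x_1^{-1}\right)\beta(x_1) = \operatorname{Tr}\alpha\!\left(x_1y_1^{-1}x_1^{-1}\right)\beta(x_1).
\]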
Suppose that $\Sigma$ has a boundary component. Then $\overline{\mathcal{X}}_n (\pi_1(\Sigma))$ is no longer symplectic. However by controlling the boundary conjugacy classes, we get a foliation each leaf of which is a symplectic manifold. See Theorem 2.2.1 of Audin \cite{audin1997} for more details. To present the result, we need a relative version of a fundamental class. Choose a path $\eta_i$ from a base point $p$ to a point in $\zeta_i$ in such a way that $z_i := \zeta_i ^{\eta_i}$ fits into a standard presentation
\begin{equation}\label{presentationforcompact}
\Gamma=\pi_1(\Sigma,p)= \langle x_1, y_1, x_2, y_2, \cdots, x_g, y_g, z_1, \cdots, z_b\,|\,R\rangle
\end{equation}
where $R= \prod_{i=1} ^g [x_i,y_i] \prod_{j=1} ^b z_j$.
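With this presentation, the product rule gives the Fox derivatives with respect to the boundary generators as
\[
\frac{\partial R}{\partial z_j} = \prod_{i=1}^g [x_i,y_i]\; z_1\cdots z_{j-1}, \qquad j=1,\cdots,b,
\]
with the empty product understood to be $1$; these are the terms entering the relative fundamental class below.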
\begin{lemma}\label{funclcpt}
Let
\[
[\Sigma] = \sum_{i=1} ^g\left(\,\left.\left[ \frac{\partial R} {\partial x_i}\right| x_i\right] + \left[\left. \frac{\partial R}{\partial y_i} \right| y_i\right]\, \right) + \sum_{j=1} ^ b \left.\left[ \frac{\partial R}{\partial z_j}\right| z_j\right]
\]
be an (absolute) 2-chain in $\mathbf{B}_2(\Gamma)\otimes \mathbb{Z}$. There is an auxiliary resolution $(R_\ast, A^i _\ast)$ over the group system $(\Gamma, \{\langle z_i \rangle\})$ with a chain equivalence $\mathbf{B}_\ast (\Gamma)\otimes \mathbb{Z} \to R_\ast \otimes \mathbb{Z}$ such that the image of $[\Sigma]$ under the map
\[
\mathbf{B}_2(\Gamma)\otimes \mathbb{Z} \to R_2 \otimes \mathbb{Z} \to (R_2/A_2) \otimes \mathbb{Z}
\]
represents a generator of $H_2(\Gamma, \{ \langle z_i\rangle\}; \mathbb{Z})\approx \mathbb{Z}$. We call $[\Sigma]$ a \emph{relative fundamental class} of $\Gamma$.
\end{lemma}
We prove Lemma \ref{funclcpt} in the appendix.
Note that if $\Sigma$ is not closed, $H_2(\Gamma;\mathbb{Z})=0$ and that $[\Sigma]$ itself is not even a 2-cycle in the absolute chain complex $\mathbf{B}_\ast (\Gamma) \otimes \mathbb{Z}$.
\begin{theorem}[Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997}, H. Kim \cite{kim1999}]\label{Ksymp} Let $\Sigma$ be a compact oriented hyperbolic surface possibly with boundary. Let $\mathscr{B}$ be a boundary frame. Fix a presentation of $\pi_1(\Sigma)$ as in (\ref{presentationforcompact}) and a representation $\rho$ such that $[\rho]\in \overline{\mathcal{X}}_{n} ^{\mathscr{B}}(\Gamma)$. Let $[\alpha],[\beta]\in H^1_{\mathrm{par}} (\Gamma,\{\langle z_i \rangle\} ; \mathfrak{g}_{\rho})$. We choose, for each boundary component $z_i$, an element $X_i \in \operatorname{Hom}_\Gamma(\mathbf{B}_0(\langle z_i\rangle),\mathfrak{g}) \approx \mathfrak{g}$ such that $\iota_{\langle z_i\rangle} ^{\#} \alpha = \mathrm{d}_{\langle z_i\rangle} X_i$. Define $\omega_{K} ^\Sigma$ to be
\[
\omega_K ^{\Sigma}([\alpha], [\beta]) = \langle \alpha \smile \beta, [\Sigma]\rangle - \sum_{i=1} ^b \operatorname{Tr} X_i \beta(z_i)
\]
where $ \langle \alpha \smile \beta, [\Sigma]\rangle$ is defined as in (\ref{explicitformula}). Then $\omega_K$ is a closed, non-degenerate 2-form and $(\overline{\mathcal{X}}_n ^{\mathscr{B}}(\Gamma),\omega_K^\Sigma)$ becomes a symplectic manifold.
\end{theorem}
\begin{remark} Unlike Theorem \ref{goldmansymplectic}, the operation $\langle \alpha \smile \beta, [\Sigma]\rangle$ in Theorem \ref{Ksymp} is defined only on the chain level and does not descend to cohomology. In fact, H. Kim \cite{kim1999} computes that for any $X\in C^0(\Gamma;\mathfrak{g})$ and $\beta \in Z^1(\Gamma; \mathfrak{g})$,
\[
\langle \mathrm{d}_\Gamma X \smile \beta, [\Sigma]\rangle = \sum_{i=1} ^b \operatorname{Tr} X \beta(z_i)\ne 0.
\]
Nevertheless, the full expression $\omega_K ^\Sigma ([\alpha], [\beta])$ does descend to a well-defined cohomological pairing.
\end{remark}
\begin{remark}
The formula given in Lemma 8.4 of Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997} is incorrect. Since (with notation in \cite{guruprasad1997}) $\langle c, u\smile v\rangle$ is not antisymmetric, we have to replace $\langle c, u\smile v\rangle$ with $\frac{1}{2}(\langle c, u\smile v\rangle-\langle c, v\smile u\rangle)$. Then the formula of Lemma 8.4 of Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997} and Theorem 5.6 of H. Kim \cite{kim1999} are identical.
\end{remark}
We state the relevant lemmas to prove Theorem \ref{Ksymp}.
\begin{lemma}[H. Kim \cite{kim1999}] Suppose that $X_i'\in \mathfrak{g}$ is another element such that $\iota_{\langle z_i\rangle} ^{\#} \alpha = \mathrm{d}_{\langle z_i\rangle} X_i'$, for $i=1,2,\cdots,b$. Then
\[
\langle \alpha \smile \beta, [\Sigma]\rangle - \sum_{i=1} ^b \operatorname{Tr} X_i \beta(z_i)= \langle \alpha \smile \beta, [\Sigma]\rangle - \sum_{i=1} ^b \operatorname{Tr} X'_i \beta(z_i).
\]
In particular, $\omega_K ^\Sigma$ is well-defined in the chain level.
\end{lemma}
\begin{proof}
This lemma is also proven in H. Kim \cite{kim1999} but is not stated in an explicit form. So we recall the proof here.
Suppose that there are two elements $X_i$ and $X_i '$ of $\mathfrak{g}=C^0(\langle z_i\rangle , \mathfrak{g})$ such that $\iota^\# _{\langle z_i\rangle} \alpha = \mathrm{d}_{\langle z_i\rangle} X_i=\mathrm{d}_{\langle z_i\rangle} X_i'$. Let $Y_i = X_i - X_i'$. Then we have $\mathrm{d}_{\langle z_i\rangle} Y_i =0$. That is $z_i \cdot Y_i - Y_i =0$. Since $\beta\in Z^1_{\mathrm{par}}(\Gamma,\{\langle z_i\rangle \};\mathfrak{g})$, we can find $Z_i\in \mathfrak{g}$ such that $\beta(z_i) = z_i \cdot Z_i -Z_i$. Then we compute
\begin{align*}
\sum_{i=1} ^b\operatorname{Tr} (X_i - X_i') \beta(z_i) & = \sum_{i=1} ^b\operatorname{Tr} Y_i \beta(z_i)\\
&= \sum_{i=1} ^b \operatorname{Tr} Y_i (z_i \cdot Z_i -Z_i) \\
&= \sum _{i=1} ^b \operatorname{Tr} (z_i ^{-1} \cdot Y_i -Y_i )Z_i = 0.
\end{align*}
Here the last line uses the invariance of the trace form, $\operatorname{Tr} Y (z\cdot Z) = \operatorname{Tr} (z^{-1}\cdot Y) Z$, and the vanishing follows from $z_i ^{-1}\cdot Y_i = Y_i$. Therefore, $\omega_K ^\Sigma$ is independent of the choice of $X_i$.
\end{proof}
\begin{lemma} Suppose that $\alpha= \mathrm{d}_{\Gamma} X$ for some $X \in C^0(\Gamma;\mathfrak{g})$. For any $\beta\in Z^1_{\mathrm{par}} (\Gamma, \{\langle z_i \rangle\};\mathfrak{g})$, we have
\[
\langle\alpha \smile \beta, [\Sigma]\rangle - \sum_{i=1} ^b \operatorname{Tr} X_i \beta(z_i)= 0.
\]
Likewise, for any $\alpha\in Z^1_{\mathrm{par}} (\Gamma, \{\langle z_i \rangle\};\mathfrak{g})$, and $\beta= \mathrm{d}_{\Gamma} X$,
\[
\langle \alpha \smile \beta, [\Sigma]\rangle - \sum_{i=1} ^b \operatorname{Tr} X_i \beta (z_i)= 0.
\]
\end{lemma}
\begin{proof}
See Proposition 5.4 of H. Kim \cite{kim1999}.
\end{proof}
In other words, $\omega^\Sigma _K$ is well-defined and descends to a pairing on parabolic cohomology groups.
\begin{lemma}For any $[\alpha],[\beta]\in H^1_{\mathrm{par}}(\Gamma, \{\langle z_i \rangle \} ;\mathfrak{g})$,
\[
\omega^\Sigma _K ([\alpha],[\beta]) = - \omega^\Sigma _K ([\beta],[\alpha]).
\]
Moreover, if $\omega^\Sigma _K ([\alpha],[\beta])=0$ for all $[\alpha]\in H^1_{\mathrm{par}}(\Gamma, \{\langle z_i \rangle \} ;\mathfrak{g})$, we have $[\beta]=0$.
\end{lemma}
\begin{proof}
See Proposition 5.5 of H. Kim \cite{kim1999}.
\end{proof}
Therefore, $\omega^\Sigma _K$ is a nondegenerate 2-form on $\overline{\mathcal{X}} ^{\mathscr{B}} _n (\Gamma)$.
It remains to show that $\omega^\Sigma _K$ is closed, so that it is indeed a symplectic form. This is not a trivial result and can be proven in various ways; see, for instance, H. Kim \cite{kim1999}, Guruprasad-Huebschmann-Jeffrey-Weinstein \cite{guruprasad1997}, and Karshon \cite{karshon1992}.
The expression of $[\Sigma]$ depends on the choice of the relation $R$, so one may wonder whether the value of $\omega_K$ changes if we use another relation. The following lemma shows that this is not the case.
\begin{lemma}\label{welldefinedness}
Let $R' = h Rh^{-1}$ for some $h\in \Gamma$. Let $[\Sigma']$ be the relative fundamental class defined as in Lemma \ref{funclcpt} with respect to $R'$. Then $\langle \alpha \smile \beta, [\Sigma]\rangle = \langle \alpha \smile \beta, [\Sigma']\rangle$ for all $\alpha, \beta \in H^1_{\mathrm{par}}(\Gamma,\{\langle z_i \rangle \}; \mathfrak{g})$.
\end{lemma}
\begin{proof}
It is straightforward to obtain
\[
[\Sigma'] = \sum_{i=1} ^g\left(\,\left.\left[ h \frac{\partial R} {\partial x_i} \right| x_i\right] + \left.\left[ h \frac{\partial R}{\partial y_i} \right| y_i\right]\,\right) + \sum_{j=1} ^ b \left.\left[ h\frac{\partial R}{\partial z_j}\right| z_j\right].
\]
Since $\alpha$ is a cocycle, $\alpha\left(\overline{ h \frac{\partial R} {\partial x_i}}\right) = \alpha\left(\overline{\frac{\partial R}{\partial x_i}} \right) + \overline{\frac{\partial R}{\partial x_i}}\cdot \alpha(h^{-1})$. Here we use the convention $(x\pm y) \cdot X = x\cdot X \pm y\cdot X$ where $x,y\in \Gamma$, $X\in \mathfrak{g}$. By definition of $\langle \alpha\smile \beta, [\Sigma']\rangle$,
\begin{multline*}
\langle \alpha\smile \beta, [\Sigma']\rangle = \langle \alpha \smile \beta, [\Sigma]\rangle \\
+ \operatorname{Tr} \alpha(h^{-1})\beta\left(\sum _{i=1} ^g \left(\frac{\partial R}{\partial x_i} (x_i-1) + \frac{\partial R}{\partial y_i} (y_i-1)\right) +\sum _{i=1} ^b\frac{\partial R}{\partial z_i} (z_i-1)\right).
\end{multline*}
By Theorem \ref{fox}, we conclude that the second term is $\operatorname{Tr} \alpha(h^{-1}) \beta(R-1) = 0$.
\end{proof}
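For the reader's convenience, here is a minimal illustration of the fundamental identity of Fox calculus invoked above (Theorem \ref{fox}), worked out for the commutator relator $R=[x,y]=xyx^{-1}y^{-1}$; this is a standard computation and is included only as an example. The Fox derivatives are
\[
\frac{\partial R}{\partial x} = 1 - xyx^{-1}, \qquad \frac{\partial R}{\partial y} = x - xyx^{-1}y^{-1},
\]
and expanding in the group ring gives
\[
\frac{\partial R}{\partial x}(x-1) + \frac{\partial R}{\partial y}(y-1) = (x - 1 - xy + xyx^{-1}) + (xy - x - xyx^{-1} + xyx^{-1}y^{-1}) = R-1.
\]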
\subsection{The existence of global Darboux coordinates}\label{aaprinciple}
Now we prove a series of results leading to the existence of a global Darboux coordinate system. The main goal of this subsection is Theorem \ref{existenceofdarboux}.
\begin{lemma}[A variation of Theorem 18.12 of da Silva \cite{da2001}]\label{sectionimpliescoordinates}
Let $f=(f_1, \cdots, f_n):M^{2n}\to B$ be a fiber bundle over a connected open subset $B$ of $\mathbb{R}^n$. Suppose that $M$ is given a symplectic structure $\omega$ such that each fiber is a simply connected Lagrangian submanifold. Suppose moreover that the Hamiltonian vector fields $\mathbb{X}_{f_1}, \cdots, \mathbb{X}_{f_n}$ are linearly independent at each point in $M$ and complete. Then the following hold:
\begin{itemize}
\item $f:M\to B$ becomes an affine bundle over $B$.
\item If $f:M\to B$ admits a global Lagrangian section, then there is a function $g=(g_1, \cdots, g_n):M \to \mathbb{R}^n$ such that $(f_1, \cdots, f_n, g_1, \cdots, g_n)$ is a global Darboux coordinate system.
\end{itemize}
\end{lemma}
\begin{proof}
Since the vector fields $\mathbb{X}_{f_1}, \cdots, \mathbb{X}_{f_n}$ are tangent to each fiber and the fibers are Lagrangian, the rule $x\mapsto (\mathbb{X}_{f_1}|_x,\cdots, \mathbb{X}_{f_n}|_x)$ defines a completely integrable distribution. Since $\mathbb{X}_{f_1}, \cdots, \mathbb{X}_{f_n}$ are linearly independent, each integral manifold is $n$-dimensional and is thus an open subset of a fiber. Since each fiber is connected, the integral manifold must be the whole fiber. We know that $\omega([\mathbb{X}_{f_i},\mathbb{X}_{f_j}],Z) = Z \omega(\mathbb{X}_{f_i},\mathbb{X}_{f_j})$ for any vector field $Z$; since $\omega(\mathbb{X}_{f_i},\mathbb{X}_{f_j})=0$, we have $[\mathbb{X}_{f_i},\mathbb{X}_{f_j}]=0$ for all $i,j$. This, together with the completeness of the $\mathbb{X}_{f_i}$, shows that the Hamiltonian flows associated to $f_1,\cdots, f_n$ induce an $\mathbb{R}^n$-action on $M$. Since the integral manifolds are the whole fibers, the action is fiberwise transitive. Since each fiber is simply connected and the $\mathbb{R}^n$-action on it is transitive, the action is free, and this gives an affine bundle structure on $M\to B$.
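A short verification of the identity used above may be helpful (a standard computation; the overall sign depends on the convention relating $\mathbb{X}_f$ and $\mathrm{d} f$, which is immaterial here since the relevant function vanishes). With the convention $\iota_{\mathbb{X}_f}\omega = \mathrm{d} f$, the commutator identity $\iota_{[X,Y]} = \mathcal{L}_{X}\iota_{Y} - \iota_{Y}\mathcal{L}_{X}$ yields
\[
\iota_{[\mathbb{X}_{f_i},\mathbb{X}_{f_j}]}\omega = \mathcal{L}_{\mathbb{X}_{f_i}}\iota_{\mathbb{X}_{f_j}}\omega - \iota_{\mathbb{X}_{f_j}}\mathcal{L}_{\mathbb{X}_{f_i}}\omega = \mathrm{d}\big(\omega(\mathbb{X}_{f_j},\mathbb{X}_{f_i})\big),
\]
since $\mathcal{L}_{\mathbb{X}_{f_i}}\omega = 0$ and $\mathcal{L}_{\mathbb{X}_{f_i}}\iota_{\mathbb{X}_{f_j}}\omega = \mathrm{d}\big(\mathrm{d} f_j(\mathbb{X}_{f_i})\big)$. As both vector fields are tangent to the Lagrangian fibers, $\omega(\mathbb{X}_{f_j},\mathbb{X}_{f_i})$ vanishes identically, and $[\mathbb{X}_{f_i},\mathbb{X}_{f_j}]=0$ follows from the non-degeneracy of $\omega$.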
Let $\sigma:B \to M$ be a Lagrangian section. Since the action is free and fiberwise transitive, for each $x\in M$, there is a unique $\mathbf{t}_x\in \mathbb{R}^n$ such that $x=\mathbf{t}_x \cdot \sigma(f(x))$. Define the smooth function $g=(g_1, \cdots, g_n):M\to \mathbb{R}^n$ by $g(x) = \mathbf{t}_x$.
We claim that $x\mapsto (f(x), g(x))$ is a global Darboux coordinate system. We first observe that $(f,g)$ is regular and one-to-one; hence $(f,g)$ is a global coordinate system. Moreover, the vectors $\frac{\partial}{\partial f_i}=\mathrm{d} \sigma\frac{\partial}{\partial x_i}$ span a Lagrangian subspace at each point $x$ of $\sigma(B)$. Then we compute
\begin{align*}
\omega_x \left(\frac{\partial}{\partial g_i}|_x, \frac{\partial}{\partial f_j}|_x\right)& =\omega_x \left(\mathbb{X}_{f_i}|_x, (\mathrm{d} \sigma \frac{\partial}{\partial x_j})|_x\right) \\
&=\mathrm{d} f_i\left( \mathrm{d} \sigma\frac{\partial}{\partial x_j}\right)\\
&= \mathrm{d} (f_i \circ \sigma) \frac{\partial}{\partial x_j} \\
&= \frac{\partial x_i}{\partial x_j}= \begin{cases} 1 & i=j \\ 0 & i\ne j\end{cases}
\end{align*}
at each $x\in\sigma(B)$. Now consider a general point $x\in M$. We may assume that $x$ can be reached from $\sigma(f(x))$ by the Hamiltonian flow $\Psi$ associated to some $f_i$, the general case being a composition of such flows. That is, $x= \Psi^t (\sigma(f(x)))$ for some $t\in \mathbb{R}$. Since $\Psi^t$ preserves $\omega$, we have
\begin{align*}
\omega_x\left(\frac{\partial}{\partial g_i}|_x, \frac{\partial}{\partial f_j}|_x\right)&=\omega_x \left(\mathbb{X}_{f_i}|_x , \mathrm{d} \Psi^t \left( \frac{\partial}{\partial f_j}|_{\sigma(f(x))}\right)\right) \\
&= ((\Psi^{-t} )^\ast \omega)_{x}\left(\mathbb{X}_{f_i}|_x, \mathrm{d} \Psi^t \left( \frac{\partial}{\partial f_j}|_{\sigma(f(x))}\right)\right) \\
&= \omega_{\sigma(f(x))}\left(\mathbb{X}_{f_i}|_{\sigma(f(x))}, {\frac{\partial}{\partial f_j}}|_{\sigma(f(x))}\right)=\begin{cases} 1 & i=j \\ 0 & i\ne j\end{cases}.
\end{align*}
Therefore $(f,g)$ is a global Darboux coordinate system.
\end{proof}
\begin{remark} Lemma \ref{sectionimpliescoordinates} looks similar to the well-known action-angle principle (see, for example, Theorem 18.12 of da Silva \cite{da2001}). One difference is that, in our result, the given ``integrals of motion'' can be taken as action coordinates without any modification. An additional condition, the existence of a global Lagrangian section, must be imposed to obtain this stronger conclusion.
\end{remark}
\begin{lemma}\label{local}
Let $(M^{2n},\omega)$ be a symplectic manifold and $f=(f_1,\cdots, f_n):M\to B\subset \mathbb{R}^n$ be as in Lemma \ref{sectionimpliescoordinates}. Then each $c \in B$ has a neighborhood $U$ such that there is a symplectomorphism $F:f^{-1}(U)\to T^*B|_U$ which is also a morphism of affine bundles.
\end{lemma}
\begin{proof}
Note that, by the first assertion of Lemma \ref{sectionimpliescoordinates}, $M$ is an affine bundle.
We first show that there is a neighborhood $U$ of $c$ over which a local Lagrangian section exists. To this end, choose a neighborhood $V_0$ of $c\in B$ over which both $M$ and $T^* B$ are trivialized, and let $T: f^{-1}(V_0) \to V_0 \times \mathbb{R}^n$ be a trivialization.
Let $x\in f^{-1}(c)$. The Carath\'eodory-Jacobi-Lie theorem states that there are a neighborhood $U_0$ of $x$ and a function $g=(g_1, \cdots, g_n)$ such that $(U_0,(f,g))$ is a local Darboux chart. We may assume that $U_0\subset f^{-1}(V_0)$ and that $T(U_0) = U \times I$ for some open box $I$ of $\mathbb{R}^n$ and some open neighborhood $U$ of $c$. So, locally, $\omega|_{U_0} = \sum_{i=1} ^n \mathrm{d} f_i \wedge \mathrm{d} g_i$. Therefore, $f:M\to B$ admits a local Lagrangian section over $U=f(U_0)\subset V_0$; let $\sigma$ be this section.
Let $z:U \to z(U)$ be the zero section of $T^* B \to B$. Define a map $F_0 : \sigma(U) \to z(U)$ by $F_0=z\circ f$. Observe that $(T^* B, \omega_{\text{can}})$ is a vector bundle carrying a fiber-preserving Hamiltonian $\mathbb{R}^n$-action which is fiberwise free, linear and transitive. We also have the fiberwise free and transitive Hamiltonian $\mathbb{R}^n$-action on $M$. So, for each $x\in f^{-1}(U)$, there is a unique $\mathbf{t}_x\in \mathbb{R}^n$ such that $\mathbf{t}_x \cdot \sigma(f(x)) = x$. Extend $F_0$ to the map $F: (f^{-1}(U) ,\sigma(U))\to (T^* B|_U,z(U))$ by $F(x)=\mathbf{t}_x \cdot z(f(x))$. Then $F$ is clearly an affine bundle map. Lastly, we have to prove that $F$ is a symplectomorphism. To this end, observe that $F$ preserves the symplectic form at each point of $\sigma(U)$. Let $\Phi^t$ and $\Psi^t$ be the Hamiltonian flows on the two bundles corresponding to the same 1-parameter subgroup of $\mathbb{R}^n$. Then $F\circ \Phi^t = \Psi^t\circ F$. Since Hamiltonian flows preserve the symplectic forms, we conclude that $F$ is a symplectomorphism.
\end{proof}
Assuming further that $B$ has trivial second cohomology, we can prove the existence of a global Lagrangian section. The proof is based on sheaf cohomology and an idea of Duistermaat \cite{duistermaat1980}.
\begin{proposition}\label{existsection}
Assume that $B$ is connected and $H^2(B;\mathbb{R})=0$. Under the assumptions of Lemma \ref{local}, $f:M\to B$ admits a global Lagrangian section.
\end{proposition}
\begin{proof}
For each $y\in B$, the vector fields $\mathbb{X}_{f_1},\cdots,\mathbb{X}_{f_n}$ are tangent to the fiber $M_y:=f^{-1}(y)$. We write $\mathbb{X}_{f_i}(M_y)$ for the vector field on $M_y$ induced by $\mathbb{X}_{f_i}$. Let $N$ be the vector bundle over $B$ whose fiber over $y$ is the $\mathbb{R}$-vector space spanned by $\mathbb{X}_{f_1}(M_y),\cdots, \mathbb{X}_{f_n}(M_y)$. Since $\mathrm{d} f_i$ annihilates $T _x M_y\subset T_x M$ for every $x\in M_y$, there is a unique \emph{closed} 1-form $\eta_i$ on $B$ such that $f^* \eta_i= \mathrm{d}_{} f_i$. The assignment $\eta: \mathbb{X}_{f_i} \mapsto \eta_i$ induces an isomorphism of vector bundles $\eta : N \to T^* B$ in the obvious way.
Note that, under the assumptions of Lemma \ref{local}, $M$ has the structure of an affine bundle over $B$ and the vector bundle $N$ acts on $M$ by fiberwise translation.
\begin{claim}
Let $\sigma_1$ be a local Lagrangian section of the affine bundle $M\to B$ over an open set $U\subset B$.
\begin{itemize}
\item If $\sigma_2$ is another local Lagrangian section over $U$, then $\sigma_1-\sigma_2$ naturally defines a local section of $N\to B$. Moreover, $\eta(\sigma_1 - \sigma_2 ) $ is a closed 1-form on $U$.
\item Conversely, let $\gamma$ be a local section of $N\to B$ over $U$ such that $\eta(\gamma)$ is a closed 1-form. Then $\sigma_1 + \gamma$ is another Lagrangian local section of $M\to B$ on $U$.
\end{itemize}
\end{claim}
\begin{proof}[Proof of the Claim]
It is clear that $\sigma_1-\sigma_2$ is naturally a section of $N$, since for each $y\in B$ there is a unique translation vector carrying $\sigma_2(y)$ to $\sigma_1(y)$.
Let $y\in B$. Lemma \ref{local} guarantees that we can find a neighborhood $V$ of $y$ such that there is a symplectomorphism $F: f^{-1}(V)\to T^* B|_V$ sending $\sigma_1(V)$ to the zero section. By the construction of $F$, we have $F(\sigma_1-\sigma_2) = \eta (\sigma_1 - \sigma_2)$. Since $F$ is a symplectomorphism, $F(\sigma_1)$ and $F(\sigma_2)$ are Lagrangian sections of $T^*B|_V$, hence closed 1-forms. Since $F$ is an affine bundle morphism, we observe that $\eta(\sigma_1 -\sigma_2) = F(\sigma_1) - F(\sigma_2)$ is also a closed 1-form. Therefore, each point $y\in U$ has a neighborhood where $\eta(\sigma_1-\sigma_2)$ is closed, which proves the first part of the claim.
If $\gamma$ is a local section of $N$ such that $\eta(\gamma)$ is closed, then $F(\sigma_1 + \gamma)=F(\sigma_1)+\eta(\gamma)=\eta(\gamma)$ is a closed 1-form, so $F(\sigma_1+\gamma)$ is a local Lagrangian section of $T^*B$. Since $F$ is a symplectomorphism, $\sigma_1+\gamma$ must be a local Lagrangian section.
\end{proof}
We can cover $B$ by open sets $\{W_i\}$ such that the affine bundle $M\to B$ is trivial over each $W_i$ and there is a local Lagrangian section $\sigma_i$ on each $W_i$. We can further assume that each finite intersection of the $W_i$ is contractible. Observe, by the above claim, that the differences $\sigma_{ij} := \sigma_i - \sigma_j$ of the sections on $W_i\cap W_j$ give a (\v Cech) 1-cocycle $\{\eta (\sigma_{ij})\}$ of the sheaf $\operatorname{Ker} \mathrm{d}^1$. By the above claim again, the cohomology class in $H^1(B, \operatorname{Ker} \mathrm{d}^1)$ represented by $\{\eta (\sigma_{ij})\}$ is the obstruction to finding a global Lagrangian section. We show that this obstruction class vanishes.
Consider an exact sequence of sheaves
\[
0\to \underline{\mathbb{R}} \to \Omega_{B} ^0 \to \operatorname{Im} \mathrm{d}^0 \to 0.
\]
Here, $\underline{\mathbb{R}}$ denotes the constant sheaf and $\Omega_B ^0$ is the sheaf of smooth functions on $B$. The above exact sequence induces the long exact sequence
\[
\cdots \to H^1(B,\Omega^0 _{B}) \to H^1(B , \operatorname{Im} \mathrm{d}^0) \to H^2 (B, \underline{\mathbb{R}}) \to H^2(B,\Omega^0 _{B}) \to \cdots.
\]
Observe that $\operatorname{Ker} \mathrm{d}^1=\operatorname{Im} \mathrm{d}^0$ as sheaves. Moreover, because $\Omega^0 _{B}$ is a soft sheaf, it follows that
\[
H^1(B,\Omega^0 _{B})=H^2(B,\Omega^0 _{B})=0.
\]
Therefore
\[
H^1(B , \operatorname{Ker} \mathrm{d}^1) \approx H^1(B , \operatorname{Im} \mathrm{d}^0) \approx H^2 (B, \underline{\mathbb{R}})\approx H^2(B;\mathbb{R}) =0.
\]
Consequently, $\{\eta (\sigma_{ij})\}$ represents the trivial class, and Proposition \ref{existsection} follows.
\end{proof}
Finally, by putting all the above results together, one can deduce the following theorem.
\begin{theorem}[A variation of Duistermaat \cite{duistermaat1980}]\label{existenceofdarboux}
Let $(M^{2n},\omega)$ be a symplectic manifold and $f=(f_1, \cdots, f_n):M^{2n}\to B$ be a fiber bundle over a connected open subset $B$ of $\mathbb{R}^n$. Suppose:
\begin{itemize}
\item $ H^2(B;\mathbb{R})=0$,
\item each fiber is a simply connected Lagrangian submanifold, and
\item the Hamiltonian vector fields $\mathbb{X}_{f_1}, \cdots, \mathbb{X}_{f_n}$ are linearly independent at each point in $M$ and complete.
\end{itemize}
Then there is a function $g=(g_1, \cdots, g_n):M \to \mathbb{R}^n$ such that
\[
(f_1, \cdots, f_n, g_1, \cdots, g_n)
\]
is a global Darboux coordinate system. \end{theorem}
\section{Decomposition formulas}
This section is devoted to the proof of our first main result, Theorem \ref{decompthmintro}. As mentioned in the introduction, we proceed by induction on the number of curves. We deal with the base cases by using Fox calculus and cocycle computations. The induction process is somewhat technical, particularly when we cut the surface along several curves. Suppose for instance that three curves $\xi_1$, $\xi_2$ and $\xi_3$ are positioned as in Figure \ref{f1}. Then $\xi_1$, $\xi_2$ and $\xi_3$ are all non-separating in $\Sigma$. However, $\xi_3$ becomes separating when seen as a curve in $\Sigma\setminus( \xi_1\cup \xi_2)$. On the other hand, $\xi_2$ becomes separating if we remove $\xi_1$ and $\xi_3$. Therefore we get at least three different decompositions of $\pi_1(\Sigma)$ depending on the order of cutting. To treat this technicality systematically, we borrow the language of graphs of groups.
We then establish a (parabolic) group-cohomology version of the Mayer-Vietoris sequence and use it to prove the decomposition formula for a single cut. This sequence decomposes the tangent space into one or two components, and our formulas show that the pairing $\omega^\Sigma _K$ is additive with respect to this decomposition.
\begin{figure}[ht]
\begin{tikzpicture}
\draw(0,0) node {\includegraphics[scale=0.3]{f1.pdf}};
\draw (0,.4cm) node {$\xi_2$};
\draw (-1.2cm, -.7cm) node {$\xi_1$};
\draw (1.7cm, -.7cm) node {$\xi_3$};
\end{tikzpicture}
\caption{Three pairwise disjoint non-separating simple closed curves $\xi_1$, $\xi_2$ and $\xi_3$.}\label{f1}
\end{figure}
\subsection{Decomposition of fundamental groups}\label{tree}
Let $\Sigma$ be a compact oriented hyperbolic surface of genus $g$ with boundary components $\zeta_1, \cdots, \zeta_b$. We denote by $\Gamma$ its fundamental group $\pi_1(\Sigma)$. Let $\{\xi_1, \cdots, \xi_m\}$ be a collection of pairwise disjoint, non-isotopic essential simple closed curves in $\Sigma$ that divide the surface into subsurfaces $\Sigma_1, \cdots, \Sigma_l$. We assume that each $\Sigma_i$ is hyperbolic.
Following Johnson-Millson \cite{Johnson1987}, we can construct a tree $\mathcal{T}$ as follows. Let $p:\widetilde{\Sigma}\to \Sigma$ be the universal cover. The set of vertices $V(\mathcal{T})$ consists of connected components of $\widetilde{\Sigma} \setminus \bigcup_{i=1} ^ m p^{-1}(\xi_i)$. Two vertices are joined by an edge in $E(\mathcal{T})$ if they are adjacent along some component of $p^{-1}(\xi_i)$. Observe that each vertex corresponds to the universal cover of some $\Sigma_i$. Johnson-Millson show in \cite[Lemma 5.3]{Johnson1987} that $\mathcal{T}$ is indeed a tree and admits a $\Gamma$-action without inversion. Hence, we have the following theorem.
\begin{theorem}[Johnson-Millson \cite{Johnson1987}, see also Serre \cite{serre2003}]\label{decomposerep}
Let $\Sigma$ be a compact oriented hyperbolic surface, $\{\xi_1, \cdots, \xi_m\}$ a collection of pairwise disjoint, non-isotopic essential simple closed curves in $\Sigma$ that divides the surface into hyperbolic subsurfaces $\Sigma_1, \cdots, \Sigma_l$. Then $\Gamma:=\pi_1(\Sigma)$ is isomorphic to the fundamental group $\pi_1(\Gamma, \mathcal{G}, \mathcal{D})$ of a graph of groups $(\Gamma,\mathcal{G})$, $\mathcal{G} = \mathcal{T}/\Gamma$ where $\mathcal{D}$ is a choice of a maximal tree of $\mathcal{G}$. We can label vertices of $\mathcal{G}$ by $\Sigma_1, \cdots, \Sigma_l$ and edges by $\xi_1, \cdots, \xi_m$. Choose a lift $j$ of $\mathcal{D}$ in $\mathcal{T}$. The vertex group at $\Sigma_i$ is $\Gamma_{\Sigma_i}=\operatorname{Stab}_{\Gamma} (j(\Sigma_i))$ and the edge group at $\xi_i$ is $\Gamma_{\xi_i} = \operatorname{Stab}_{\Gamma}(j(\xi_i))$.
\end{theorem}
Observe that $\Gamma_{\Sigma_i}$ is conjugate in $\Gamma$ to $\pi_1(\Sigma_i)$ and that $\Gamma_{\xi_i}$ is conjugate to $\pi_1(\xi_i)$.
Let us choose a base vertex of $\mathcal{D}$ and define a relation $\le$ on $V(\mathcal{D})$ by declaring that $\Sigma_i \le {\Sigma_j}$ if and only if ${\Sigma_i}$ lies on the unique path in $\mathcal{D}$ from the base vertex to ${\Sigma_j}$. It is clear that $\le$ is a partial order, so the set $V(\mathcal{D})$ becomes a poset. For each vertex $\Sigma_i$ we define the following subset
\[
S({\Sigma_i}):=\{{\Sigma_j} \in V(\mathcal{D})\,|\,{\Sigma_i}\le {\Sigma_j}\}.
\]
For an edge ${\xi_i}$ and $\gamma \in \Gamma_{\xi_i}$, denote by $\gamma ^-$ the image of $\gamma$ in the vertex group of the origin $\operatorname{o}({\xi_i})$. Similarly, $\gamma^+$ is the image of $\gamma$ in the vertex group of the terminus $\operatorname{t}({\xi_i})$. Accordingly, for each $\xi_i \in E(\mathcal{G})$,
\begin{align*}
\Gamma_{\xi_i} ^{+}&:= \{\gamma^{+} \,|\, \gamma\in \Gamma_{\xi_i}\}, \quad \text{ and}\\
\Gamma_{\xi_i} ^{-}&:= \{\gamma^{-} \,|\, \gamma\in \Gamma_{\xi_i}\}
\end{align*}
are subgroups of $\Gamma=\pi_1(\Gamma, \mathcal{G}, \mathcal{D})$.
For each $\xi_i \in E(\mathcal{G})$ and each $\gamma \in \Gamma_{\xi_i}$, we have $\gamma^+ = \gamma^-$ in the whole group $\Gamma$. If $\xi_i$ is an edge of $\mathcal{G}$ but $\xi_i\notin E(\mathcal{D})$, then we have an additional generator $\xi_i^{\perp}$ with relation $\xi_i^{\perp} \gamma^+ (\xi_i^{\perp})^{-1} = \gamma^-$ for each $\gamma \in \Gamma_{\xi_i}$ in $\Gamma$. Note that $\xi_i^{\perp}$ corresponds to a loop transverse to $\xi_i$.
Let $\rho:\Gamma \to G$ be a representation. Since each vertex group $\Gamma_{\Sigma_i}$ injects into $\Gamma$, $\rho$ induces a representation $\rho_{\Gamma_{\Sigma_i}} : \Gamma_{\Sigma_i} \to G$ for each vertex group. $\rho$ also induces a representation $\rho_{\xi_i^{\perp}}:\langle \xi_i^{\perp} \rangle \to G$ for each edge $\xi_i$ which is not in $E(\mathcal{D})$. In this way, we obtain a collection of representations $\rho_{\Gamma_{\Sigma_i}}: \Gamma_{\Sigma_i } \to G$ for each $i=1,2,\cdots,l$ and $\rho_{\xi_i^{\perp}}:\langle \xi_i^{\perp}\rangle \to G$ for each $\xi_i\in E(\mathcal{G})\setminus E(\mathcal{D})$.
Conversely, suppose that we are given a collection of representations $\rho_{\Gamma_{\Sigma_i}}: \Gamma_{\Sigma_i } \to G$ for each $i=1,2,\cdots,l$ and $\rho_{\xi_j^{\perp}}:\langle \xi_j^{\perp}\rangle \to G$ for each $\xi_j\in E(\mathcal{G})\setminus E(\mathcal{D})$, subject to relations
\begin{itemize}
\item If ${\xi_k} \in E(\mathcal{D})$ and if ${\Sigma_i}=\operatorname{o}(\Gamma_{\xi_k})$ and ${\Sigma_j}=\operatorname{t}(\Gamma_{\xi_k})$ then for each $\gamma\in \Gamma_{\xi_k}$,
\begin{equation}\label{rel1}
\rho_{\Gamma_{\Sigma_i}}(\gamma ^-) = \rho_{\Gamma_{\Sigma_j}}( \gamma ^+).
\end{equation}
\item If ${\xi_k}\notin E(\mathcal{D})$ and if ${\Sigma_i}=\operatorname{o}({\xi_k})$ and ${\Sigma_j}=\operatorname{t}({\xi_k})$ then, for each $\gamma \in \Gamma_{\xi_k}$,
\begin{equation}\label{rel2}
\rho_{\xi_k^{\perp}} (\xi_k^{\perp}) \rho_{\Gamma_{\Sigma_j}}(\gamma^+) \rho_{\xi_k^{\perp}}(\xi_k^{\perp})^{-1} = \rho_{\Gamma_{\Sigma_i}}(\gamma^-).
\end{equation}
\end{itemize}
Then there is a unique representation $\rho: \Gamma \to G$ whose restrictions are precisely the prescribed representations.
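In the simplest instance, a single separating edge in $\mathcal{D}$ joining two vertices, this reconstruction takes the following concrete form, which is a direct specialization of (\ref{rel1}): a pair of representations $\rho_1:\Gamma_{\Sigma_1}\to G$ and $\rho_2:\Gamma_{\Sigma_2}\to G$ satisfying
\[
\rho_1(\gamma^-) = \rho_2(\gamma^+)\qquad\text{for all }\gamma\in\Gamma_{\xi}
\]
glues to a unique representation $\rho$ of the amalgamated product $\Gamma \cong \Gamma_{\Sigma_1}\ast_{\Gamma_\xi}\Gamma_{\Sigma_2}$ whose restrictions to the vertex groups are $\rho_1$ and $\rho_2$. This special case is the one used in the proof of Theorem \ref{decomppairingsep} below.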
\begin{figure}[ht]
\begin{tikzpicture}
\draw(0,0) node {\includegraphics[scale=0.3]{f2.pdf}};
\end{tikzpicture}
\caption{An example of a decomposition. The curves in $\mathcal{C}$ are depicted in blue. The graph $\mathcal{G}$ is drawn in red, with a maximal tree $\mathcal{D}$ in bold.}\label{f2}
\end{figure}
\subsection{The Mayer-Vietoris sequence}\label{MVS}
Before going further, we summarize the general setting that we consider in the subsequent sections.
\begin{itemize}
\item $\Sigma$ is a compact oriented hyperbolic surface with boundary components $\{\zeta_1, \cdots, \zeta_b\}$. $\{\xi_1, \cdots, \xi_m\}$ is a collection of pairwise disjoint, non-isotopic essential simple closed curves in $\Sigma$ that divide the surface into hyperbolic subsurfaces $\Sigma_1, \cdots, \Sigma_l$. We stick to the notation $\Gamma=\pi_1(\Sigma)$ (which is also isomorphic to $\pi_1(\Gamma, \mathcal{G}, \mathcal{D})$), $\Gamma_{\Sigma_1}, \cdots, \Gamma_{\Sigma_l}$ and $\Gamma_{\xi_1}, \cdots, \Gamma_{\xi_m}$ of the previous subsection.
\item Denote by $\iota_{\Sigma_i}$ the map $\overline{\Sigma_i} \to \Sigma$, the extension of the inclusion $\Sigma_i \to \Sigma$ to the completion $\overline{\Sigma_i}$ of $\Sigma_i$. We sometimes use the same notation $\iota_{\Sigma_i}$ to denote the induced homomorphism $\iota_{\Gamma_{\Sigma_i}}:\Gamma_{\Sigma_i} \to\Gamma$.
\item Unless otherwise stated, $[\rho]$ denotes an element in $\overline{\mathcal{X}}_n ^{\mathscr{B}}(\Gamma, \mathscr{C})$ such that $[\rho_{\Gamma_{\Sigma_i}}]\in \overline{\mathcal{X}}_n ^{\mathscr{B}_i} (\Gamma_{\Sigma_i})$ for each $i=1,2,\cdots, l$ where
\[
\mathscr{B}_i=\{(\xi,B)\,|\,\xi\text{ is a component of } \partial \overline{\Sigma_i}\text{ and } (\iota_{\Sigma_i}(\xi),B)\in \mathscr{B}\cup \mathscr{C}\}.
\]
\end{itemize}
The Mayer-Vietoris sequence for the cohomology of group systems is proven in \cite{bieri1978}. Our version of the Mayer-Vietoris sequence is the following.
\begin{proposition}\label{mvs} Fix a representation $\rho$ in the class $[\rho]$. Let $(\Gamma, \mathcal{S})$ be a group system where $\Gamma=\pi_1(\Sigma)$ and $\mathcal{S} = \{\Gamma_{\xi_1}^+, \cdots, \Gamma_{\xi_m}^+, \langle \zeta_1 \rangle, \cdots, \langle \zeta_b \rangle\}$. Define for each $i=1,2,\cdots, l$, $\mathcal{S}_i =\{ \langle \zeta \rangle \subset \Gamma_{\Sigma_i}\,|\, \zeta \text{ is a component of } \partial \overline{\Sigma_i} \}$ so that $(\Gamma_{\Sigma_i}, \mathcal{S}_i)$ is a group subsystem of $(\Gamma, \mathcal{S})$.
\begin{itemize}
\item The sequence
\[
0\to \bigoplus_{i=1} ^{m} H^0(\Gamma_{\xi_i}; \mathfrak{g}_{\rho_{\Gamma_{\xi_i}^+}})\overset{\delta}{\to} H^1_{\mathrm{par}} (\Gamma,\mathcal{S}; \mathfrak{g}_{\rho})\overset{\iota^*}{\to} \bigoplus_{i=1} ^l H^1 _{\mathrm{par}} (\Gamma_{\Sigma_i},\mathcal{S}_i ;\mathfrak{g}_{\rho_{\Gamma_{\Sigma_i}}})\to 0
\]
is exact.
\item The connecting homomorphism $\delta$ sends $X\in H^0(\Gamma_{\xi_i}; \mathfrak{g})$ to the tangent cocycle of an algebraic bending by $X$ along $\xi_i$ and $\iota^*$ is induced from the inclusions $\iota_{{\Sigma_i}}:(\Gamma_{\Sigma_i},\mathcal{S}_i)\to (\Gamma,\mathcal{S})$ that is,
\[
\iota^* ([\alpha]) = \iota_{\Sigma_1} ^* [\alpha] \oplus \cdots \oplus \iota^* _{\Sigma_l}[\alpha].
\]
\end{itemize}
\end{proposition}
We do not prove the first statement at this point, because its proof is independent of the remaining parts of the paper. For the sake of completeness, however, we give a proof in the appendix.
The second assertion, concerning the map $\delta$, is shown in \cite[Lemma 5.8]{Johnson1987}. Because we need some details about the connecting homomorphism later, we give a more explicit description here.
Choose a representation $\rho$ in the class $[\rho]$. Let $\xi_{i_0}$ be an edge of $\mathcal{G}$ and let $X\in H^0(\Gamma_{\xi_{i_0}}; \mathfrak{g}_{\rho_{\Gamma_{\xi_{i_0}}^+}})$, where $H^0 (\Gamma_{\xi_{i_0}}; \mathfrak{g}_{\rho_{\Gamma_{\xi_{i_0}}^+}}) = \ker(\operatorname{Ad}_{\rho_{\Gamma_{\xi_{i_0}}^+}} - \operatorname{Id})\subset \mathfrak{g}$. We introduce a flow $\Phi^t _{X, \xi_{i_0}}$ in $\operatorname{Hit}_n(\Sigma)$ as follows. If $\xi_{i_0}$ is an edge in $E(\mathcal{D})$ joining $\Sigma_p$ and $\Sigma_q$ with $\Sigma_p<\Sigma_q$, define
\[
\Phi^t _{X, \xi_{i_0}} (\rho) (x) = \begin{cases}
\rho(x) & x\in \Gamma_{\Sigma_j} \text{, } \Sigma_j \le \Sigma_p\text{ or incomparable}\\
(\exp tX) \rho(x) (\exp - tX)& x\in \Gamma_{\Sigma_j} \text{, }\Sigma_j \ge \Sigma_q\\
(\exp tX) \rho(x) (\exp - tX) & x=\xi_k^{\perp}\text{, } \operatorname{o}({\xi_k}), \operatorname{t}({\xi_k})\in S(\Sigma_q)\\
(\exp tX) \rho(x) & x=\xi_k^{\perp} \text{, } \operatorname{o}(\xi_k )\in S(\Sigma_q)\\
\rho(x)(\exp - tX) & x=\xi_k^{\perp} \text{, } \operatorname{t}(\xi_k )\in S(\Sigma_q)
\end{cases}.
\]
For each $t$, $\Phi^t _{X, \xi_{i_0}}(\rho)$ satisfies all relations in (\ref{rel1}) and (\ref{rel2}).
If $\xi_{i_0}$ is not in $E(\mathcal{D})$, we define
\[
\Phi^t _{X, \xi_{i_0}} (\rho) (x) = \begin{cases}
\rho(x) (\exp tX) & x=\xi_{i_0}^{\perp} \\
\rho(x) & \text{otherwise}
\end{cases}.
\]
Again, $\Phi^t _{X,\xi_{i_0}} (\rho)$ fulfills the relations in (\ref{rel1}) and (\ref{rel2}) for each $t$. Therefore, in both cases, we get a flow of representations $\Phi^t _{X, \xi_{i_0}}$. We call this flow the \emph{algebraic bending} by $X$ along $\xi_{i_0}$. The last assertion of Proposition \ref{mvs} states that $\frac{\partial}{\partial t}\big|_{t=0} \Phi^t_{X,\xi_{i_0}} = \delta(0,0,\cdots,0,X,0,\cdots,0)$, where $X$ sits in the $H^0 (\Gamma_{\xi_{i_0}};\mathfrak{g})$ component of $\bigoplus_{i=1} ^m H^0(\Gamma_{\xi_i}; \mathfrak{g})$.
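As a sanity check, let us verify that the relation (\ref{rel2}) is preserved in the case $\xi_{i_0}\notin E(\mathcal{D})$; the only input is the defining property $\operatorname{Ad}_{\rho(\gamma^+)}X = X$ for $\gamma\in\Gamma_{\xi_{i_0}}$, which says that $\rho(\gamma^+)$ commutes with $\exp tX$. Writing $\rho_t = \Phi^t_{X,\xi_{i_0}}(\rho)$, we have
\begin{align*}
\rho_t(\xi_{i_0}^\perp)\,\rho_t(\gamma^+)\,\rho_t(\xi_{i_0}^\perp)^{-1} &= \rho(\xi_{i_0}^\perp)(\exp tX)\,\rho(\gamma^+)\,(\exp -tX)\,\rho(\xi_{i_0}^\perp)^{-1}\\
&= \rho(\xi_{i_0}^\perp)\,\rho(\gamma^+)\,\rho(\xi_{i_0}^\perp)^{-1} = \rho(\gamma^-) = \rho_t(\gamma^-).
\end{align*}
The verification in the tree-edge case is entirely analogous.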
\subsection{Local decomposition formula: separating case}
Assume that $\xi$ is separating, so that $\Sigma \setminus \xi$ has two components $\Sigma_1, \Sigma_2$, each of which is hyperbolic. In view of Theorem \ref{decomposerep}, $\Gamma$ is the fundamental group of \raisebox{-2ex}{\begin{tikzpicture}
\draw (0,0) node {$\bullet$} ;
\draw (0,0) -- (1,0) node {$\bullet$};
\draw[->, thick] (.5,0)--(.51,0);
\draw (0,0) node[above] {$\Sigma_1$};
\draw (1,0) node[above] {$\Sigma_2$};
\draw (.5,0) node[above]{$\xi$};
\end{tikzpicture}}.
From Proposition \ref{mvs}, we have the short exact sequence
\begin{multline*}
0\to H^0(\Gamma_\xi; \mathfrak{g}_{\rho_{\Gamma_\xi ^+}}) \overset{\delta}{\to} H^1 _{\mathrm{par}} (\Gamma,\mathcal{S} ; \mathfrak{g}_\rho) \\ \overset{(\iota_{\Sigma_1} ^*,\iota_{\Sigma_2} ^* )}{\to} H^1_{\mathrm{par}} (\Gamma_{\Sigma_1},\mathcal{S}_1; \mathfrak{g}_{\rho_{\Gamma_{\Sigma_1}}}) \oplus H^1 _{\mathrm{par}}(\Gamma_{\Sigma_2},\mathcal{S}_2; \mathfrak{g}_{\rho_{\Gamma_{\Sigma_2}}})\to 0
\end{multline*}
where, as before, $\mathcal{S}= \{\Gamma_\xi ^+, \langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle \}$ is a collection of subgroups of $\Gamma$ that forms a group system $(\Gamma, \mathcal{S})$ and $\mathcal{S}_i = \{\langle \zeta \rangle \subset \Gamma_{\Sigma_i} \,|\, \zeta \text{ is a component of } \partial \overline{\Sigma_i}\}$. We also abbreviate the inclusion $\iota_{\Gamma_{\Sigma_i}}:\Gamma_{\Sigma_i} \to \Gamma$ to $\iota_{\Sigma_i}$.
Now we can state the decomposition formula.
\begin{theorem}\label{decomppairingsep} Let $\Sigma$ be a compact oriented hyperbolic surface possibly with boundary components $\{\zeta_1, \cdots, \zeta_b\}$ and $\xi$ an essential simple closed curve that separates $\Sigma$ into two hyperbolic subsurfaces $\Sigma_1$ and $\Sigma_2$. Let $(\Gamma, \mathcal{S})$ be the group system where $\Gamma=\pi_1(\Sigma)$ and $\mathcal{S}= \{\Gamma_\xi^+, \langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle \}$. Choose a boundary frame $\mathscr{B}$ and a $\{\xi\}$-frame $\mathscr{C}$. Let $[\rho] \in \overline{\mathcal{X}}_n ^{\mathscr{B}}(\Gamma, \mathscr{C})$ be such that $[\rho_{\Gamma_{\Sigma_i}}] \in \overline{\mathcal{X}}_n ^{\mathscr{B}_i}(\Gamma_{\Sigma_i})$ where
\[
\mathscr{B}_i=\{ (\zeta, B)\,|\, \zeta\text{ is a component of }\partial \overline{\Sigma_i}\text{ and }(\iota_{\Sigma_i}(\zeta), B) \in \mathscr{B}\cup \mathscr{C}\}
\]
for each $i=1,2$. Fix a representation $\rho$ in the class $[\rho]$. For $[\alpha],[\beta]\in H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}_\rho)$, we have
\[
\omega_K^{\Sigma}([\alpha], [\beta]) = \omega^{\Sigma_1} _K (\iota^* _{\Sigma_1}[\alpha],\iota^* _{\Sigma_1}[\beta])+\omega^{\Sigma_2} _K(\iota^* _{\Sigma_2}[\alpha],\iota^* _{\Sigma_2}[\beta]).
\]
\end{theorem}
We prove the following lemma first.
\begin{lemma}\label{normalize}
Let $\mathcal{S}_i = \{\langle \zeta \rangle \,|\, \zeta \text{ is a component of } \partial \overline{\Sigma_i}\}$, $i=1,2$. If $\iota^*_{\Sigma_2} [\alpha]=0$ then there is a unique 1-cocycle $\widetilde{\alpha_1}\in Z^1_{\mathrm{par}}(\Gamma,\mathcal{S}; \mathfrak{g})$ such that $[\widetilde{\alpha_1}]=[\alpha]$ in $H^1_{\mathrm{par}}(\Gamma,\mathcal{S} ; \mathfrak{g})$ and that $\iota_{\Sigma_2} ^\# (\widetilde{\alpha_1} )=0$ in $Z^1_{\mathrm{par}} (\Gamma_{\Sigma_2}, \mathcal{S}_2;\mathfrak{g})$. Similarly, if $\iota^* _{\Sigma_1} [\alpha] =0$ then there is a unique 1-cocycle $\widetilde{\alpha_2}$ such that $[\widetilde{\alpha_2}]=[\alpha]$ and that $\iota_{\Sigma_1}^\# (\widetilde{\alpha_2}) =0$ in $Z^1_{\mathrm{par}} (\Gamma_{\Sigma_1}, \mathcal{S}_1;\mathfrak{g})$.
\end{lemma}
\begin{proof}
We prove the first case. Pick any representative of $[\alpha]$, say $\alpha_1'\in Z^1_{\mathrm{par}}(\Gamma,\mathcal{S};\mathfrak{g})$. Since $\iota^*_{\Sigma_2} [\alpha]=\iota^*_{\Sigma_2} [\alpha_1 ']=0$, there is $X\in \mathfrak{g}$ such that $\iota_{\Sigma_2} ^\# (\alpha_1') = \mathrm{d}_{\Gamma_{\Sigma_2}} X$. Let $\widetilde{\alpha_1} = \alpha_1 ' -\mathrm{d}_{\Gamma} X$. Then $\widetilde{\alpha_1}$ clearly satisfies the required properties. For the uniqueness, suppose that there is another cocycle $\alpha_1 ''\in Z^1_{\mathrm{par}}(\Gamma,\mathcal{S}; \mathfrak{g})$ satisfying the same properties. Then, since $[\alpha_1'']=[\widetilde{\alpha_1}]$ in $H^1_{\mathrm{par}} (\Gamma,\mathcal{S}; \mathfrak{g})$, we have $\alpha_1'' = \widetilde{\alpha_1} +\mathrm{d}_\Gamma Y$ for some $Y\in \mathfrak{g}$. Applying $\iota_{\Sigma_2}^\#$, we get $0=\iota^\# _{\Sigma_2}\mathrm{d}_\Gamma Y=\mathrm{d}_{\Gamma_{\Sigma_2}} Y$ in $Z^1_{\mathrm{par}}(\Gamma_{\Sigma_2},\mathcal{S}_2;\mathfrak{g})$. By Lemma \ref{NoInvariantElement}, $Y=0$ and the uniqueness follows.
The second case follows along the same lines.
\end{proof}
\begin{proof}[Proof of Theorem \ref{decomppairingsep}] We borrow the idea of Zocca \cite{zocca1998}. We use the following presentations
\begin{align*}
{\Gamma}_{\Sigma_1}&=\langle {x}_{1,1}, {y} _{1,1},\cdots, x_{1,g_1}, y_{1,g_1} , {z}_{1,0}, \cdots, z_{1,b_1} \,|\,{\mathbf{r}}_1\rangle,\\
{\Gamma}_{\Sigma_2}&= \langle {x}_{2,1}, {y} _{2,1} ,\cdots, x_{2,g_2}, y_{2,g_2}, {z}_{2,0}, \cdots, z_{2,b_2} \,|\,{\mathbf{r}}_2\rangle,\\
\Gamma_{\xi} &= \langle \xi \rangle\text{ with }\xi^- =z_{1,0},\, \xi^+ = z_{2,0}^{-1},\qquad \text{and}\\
\Gamma&=\pi_1(\Sigma)=\langle{x}_{1,j}, {y} _{1,j} , {z}_{1,j} ,{x}_{2,j}, {y} _{2,j} , {z}_{2,j} \,|\,{\mathbf{r}} \rangle
\end{align*}
where relations are given by
\begin{align*}
{\mathbf{r}}_1 &= (\prod_{j=1} ^{g_1}[{x} _{1,j}, {y}_{1,j}])(\prod_{j=1} ^{b_1} {z}_{1,j}){z}_{1,0},\\
{\mathbf{r}}_2 &= {z}_{2,0} (\prod_{j=1} ^{b_2} {z}_{2,j})(\prod_{j=1} ^{g_2}[{x} _{2,j}, {y}_{2,j}]),\quad\text{and}\\
{\mathbf{r}}&=(\prod_{j=1} ^{g_1}[{x} _{1,j}, {y}_{1,j}])(\prod_{j=1} ^{b_1} {z}_{1,j}) (\prod_{j=1} ^{b_2} {z}_{2,j})(\prod_{j=1} ^{g_2}[{x} _{2,j}, {y}_{2,j}]).
\end{align*}
We can decompose the relative fundamental class (Lemma \ref{funclcpt}) as
\[
[\Sigma] = [\Sigma_1]+ [\Sigma_2]- [{z}_{1,0} ^{-1}| {z}_{1,0}].
\]
We first assume $\iota^* _{\Sigma_2} [\alpha_1] =0$ and compute $\omega_K ^\Sigma([\alpha_1], [\beta])$. By Lemma \ref{normalize}, we can find a representative $\widetilde{\alpha_1}$ of $[\alpha_1]$ such that $\iota_{\Sigma_2} ^\# \widetilde{\alpha_1}=0$. We use $\widetilde{\alpha_1}$ to compute $\omega_K ^\Sigma$ as follows:
\begin{align*}
\omega_K ^\Sigma([\alpha_1],[\beta])&=\omega_K ^{\Sigma}([\widetilde{\alpha_1}],[\beta]) \\
&=\langle \widetilde{\alpha_1} \smile \beta, [\Sigma_1] \rangle+\langle \widetilde{\alpha_1} \smile \beta, [\Sigma_2] \rangle-\langle \widetilde{\alpha_1} \smile \beta, [z_{1,0}^{-1}|z_{1,0}] \rangle \\
&\qquad -\sum_{i=1} ^{b_1} \operatorname{Tr} X_{1,i} \beta(z_{1,i})-\sum_{i=1} ^{b_2} \operatorname{Tr} X_{2,i} \beta(z_{2,i})
\end{align*}
where $X_{i,j} \in \mathfrak{g}$ is such that $\mathrm{d}_{\langle z_{i,j} \rangle} X_{i,j} = \iota_{\langle z_{i,j} \rangle} ^\# \widetilde{\alpha_1}$. By the construction of $\widetilde{\alpha_1}$, we have $\langle \widetilde{\alpha_1} \smile \beta, [\Sigma_2]\rangle =0$ and $\widetilde{\alpha_1}({z}_{i,0})=\widetilde{\alpha_1}({z}_{i,0}^{-1})= 0$, for $i=1,2$. Again, by the construction of $\widetilde{\alpha_1}$, we can choose $X_{2,i}$ to be $0$ for all $i=1,2,\cdots, b_2$. Hence
\[
\omega_K ^\Sigma([\alpha_1],[\beta])=\omega_K ^{\Sigma_1}(\iota^*_{\Sigma_1}[\alpha_1],\iota_{\Sigma_1} ^*[\beta])+\operatorname{Tr} X \beta({z}_{1,0})
\]
where $X\in \mathfrak{g}$ is given by the property $\mathrm{d}_{\langle z_{1,0}\rangle} X = \iota_{\langle z_{1,0} \rangle} ^\# \widetilde{\alpha_1}$. Observe that $\iota_{\langle z_{1,0}\rangle} ^\# \widetilde{\alpha_1}=\iota_{\langle z_{2,0}\rangle} ^\# \widetilde{\alpha_1} =0$. Therefore we can take $X$ to be zero. This leads us to
\[
\omega_K ^\Sigma([\alpha_1],[\beta])=\omega_K ^{\Sigma_1}(\iota^*_{\Sigma_1}[\alpha_1],\iota_{\Sigma_1} ^*[\beta]).
\]
Similarly, if $[\alpha_2] \in H^1_{\mathrm{par}} (\Gamma_{\Sigma_2},\mathcal{S}_2;\mathfrak{g})$ is such that $\iota^*_{\Sigma_1} [\alpha_2]=0$, we choose $\widetilde{\alpha_2}$ as before to get
\[
\omega_K ^\Sigma([\alpha_2],[\beta]) =\omega_K ^{\Sigma_2}(\iota^*_{\Sigma_2}[\alpha_2],\iota_{\Sigma_2} ^* [\beta]).
\]
Now suppose that we are given a general $[\alpha]\in H^1_{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g})$. Since $H^1_{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g}) = \ker \iota^* _{\Sigma_1} + \ker \iota ^* _{\Sigma_2}$ (the sum need not be direct; this follows from the surjectivity of $\iota^*$ in Proposition \ref{mvs}), we can decompose $[\alpha]$ as a sum $[\alpha]=[\alpha_1]+[\alpha_2]$ where $[\alpha_1]\in \ker \iota^* _{\Sigma_2}$ and $[\alpha_2]\in \ker \iota^* _{\Sigma_1}$. Then by linearity we have
\begin{align*}
\omega_K ^\Sigma ([\alpha],[\beta]) & =\omega_K ^{\Sigma}([\alpha_1],[\beta])+\omega_K ^{\Sigma}([\alpha_2], [\beta])\\
&=\omega_K ^{\Sigma_1}(\iota_{\Sigma_1} ^*[\alpha_1],\iota_{\Sigma_1} ^*[\beta])+\omega_K ^{\Sigma_2}(\iota_{\Sigma_2} ^*[\alpha_2],\iota_{\Sigma_2} ^*[\beta])\\
&=\omega_K ^{\Sigma_1}(\iota_{\Sigma_1} ^*[\alpha],\iota_{\Sigma_1} ^*[\beta])+\omega_K ^{\Sigma_2}(\iota_{\Sigma_2} ^*[\alpha],\iota_{\Sigma_2} ^*[\beta]).
\end{align*}
This completes the proof of Theorem \ref{decomppairingsep}.
\end{proof}
\subsection{Local decomposition formula: non-separating case}\label{secnonsep}
Suppose that $\xi$ is non-separating and that $ \Sigma_0:=\Sigma\setminus \xi$ is hyperbolic. In this case $\Gamma$ is the fundamental group of \raisebox{-1.5ex}{\begin{tikzpicture}
\draw (0,0) node[left] {$\Sigma_0$};
\draw (0,0) node {$\bullet$};
\draw (.3,0) circle (.3cm);
\draw[->, thick] (.6,.01)--(.6,.02);
\draw (.6,0) node[right] {${\xi}$};
\end{tikzpicture}} and we have the exact sequence:
\[
0\to H^0(\Gamma_\xi, \mathfrak{g}_{\rho_{\Gamma_\xi^ +}}) \to H^1_{\mathrm{par}} (\Gamma,\mathcal{S} ; \mathfrak{g}_\rho)\overset{\iota^* _{\Sigma_0}}{\to}H^1_{\mathrm{par}} (\Gamma_{\Sigma_0},\mathcal{S}_0; \mathfrak{g}_{\rho_{\Gamma_{\Sigma_0}}})\to 0
\]
where $\mathcal{S} = \{\Gamma_\xi^+, \langle \zeta_1\rangle, \cdots, \langle \zeta_b \rangle \}$ and $\mathcal{S}_0=\{\langle \zeta \rangle \,|\, \zeta \text{ is a component of } \partial\overline{ \Sigma_0}\}$, so that $(\Gamma_{\Sigma_0}, \mathcal{S}_0)$ is a group subsystem of $(\Gamma, \mathcal{S})$. Note that the homomorphisms $\iota_{\Gamma_{\Sigma_0}}:\Gamma_{\Sigma_0} \to \Gamma$ and $\iota_{\Gamma_\xi ^\pm } : \Gamma_\xi ^{\pm} \to \Gamma_{\Sigma_0}$ are abbreviated to $\iota_{\Sigma_0}$ and $\iota_{\xi^\pm}$ respectively.
The corresponding decomposition formula is the following:
\begin{theorem}\label{decomppairingnonsep} Let $\Sigma$ be a compact oriented hyperbolic surface and $\xi$ a non-separating essential simple closed curve such that $\Sigma_0 = \Sigma\setminus \xi $ is a hyperbolic subsurface. Let $(\Gamma, \mathcal{S})$ be a group system where $\Gamma=\pi_1(\Sigma)$ and $\mathcal{S}= \{\Gamma_\xi^+, \langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle \}$. Choose a boundary frame $\mathscr{B}$ and a $\{\xi\}$-frame $\mathscr{C}$. Let $[\rho] \in \overline{\mathcal{X}}_n ^{\mathscr{B}}(\Gamma, \mathscr{C})$ be such that $[\rho_{\Gamma_{\Sigma_0}}] \in \overline{\mathcal{X}}_n ^{\mathscr{B}_0}(\Gamma_{\Sigma_0})$ where
\[
\mathscr{B}_0= \{ (\zeta, B)\,|\,\zeta\text{ is a component of }\partial\overline{ \Sigma_0}\text{ and }(\iota_{\Sigma_0} (\zeta),B)\in \mathscr{B}\cup \mathscr{C} \}.
\]
Fix a representation $\rho$ in $[\rho]$. For $[\alpha],[\beta]\in H^1 _{\mathrm{par}} (\Gamma,\mathcal{S}; \mathfrak{g}_{\rho})$, we have
\[
\omega_K ^\Sigma ([\alpha], [\beta]) = \omega_K ^{\Sigma_0} (\iota^* _{\Sigma_0} [\alpha], \iota^* _{\Sigma_0} [\beta]).
\]
\end{theorem}
As in the separating case, we start with proving the following:
\begin{lemma}\label{normalizenonsep}
Let $\mathcal{S}_0 = \{\langle \zeta \rangle\subset \Gamma_{\Sigma_0} \,|\, \zeta \text{ is a component of } \partial \overline{\Sigma_0}\}$. For any $[\alpha]\in H^1_{\mathrm{par}}(\Gamma,\mathcal{S};\mathfrak{g})$, there are 1-cocycles $\widetilde{\alpha}, \widetilde{\alpha}'$ such that $[\widetilde{\alpha}]=[\widetilde{\alpha}']=[\alpha]$ in $H^1_{\mathrm{par}} (\Gamma, \mathcal{S}; \mathfrak{g})$ and such that $\iota_{\Gamma_\xi ^-} ^{\#} \widetilde{\alpha} =0$ in $Z^1(\Gamma_\xi ^- ;\mathfrak{g})$ and $\iota_{\Gamma_\xi ^+} ^{\#} \widetilde{\alpha}' =0$ in $Z^1(\Gamma_\xi ^+ ; \mathfrak{g})$.
\end{lemma}
\begin{proof}Choose any representative $\alpha'$ of $[\alpha]$. Since $\alpha'$ is parabolic with respect to $\mathcal{S}$, we have $\iota^\# _{\Gamma_\xi ^+}(\alpha')= \mathrm{d}_{\Gamma_\xi ^+} X$ for some $X\in \mathfrak{g}$; letting $\widetilde{\alpha}' = \alpha' -\mathrm{d}_\Gamma X$ gives $\widetilde{\alpha}'(\Gamma_\xi ^+) =0$. The construction of $\widetilde{\alpha}$ is almost the same, using that the restriction of $[\alpha]$ to $\Gamma_\xi^-$ is also a coboundary, $\Gamma_\xi^-$ being conjugate to $\Gamma_\xi^+$ in $\Gamma$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{decomppairingnonsep}]
We use the following presentations
\[
\Gamma= \langle {x}_1,{y}_1, \cdots, {x}_g, {y}_g, {z}_1, \cdots, {z}_{b+1},\xi^\perp \,|\,{\mathbf{r}}\rangle,
\]
\[
\Gamma_{\xi} = \langle \xi \rangle \text{ with }\xi^+ = z_{b+1} ,\xi^- = z_{b+2}^{-1}
\]
and
\[
{\Gamma}_{\Sigma_0} = \langle {x}_1,{y}_1, \cdots, {x}_g, {y}_g, {z}_1, \cdots, {z}_{b+2}\,|\,{\mathbf{r}}_0 \rangle
\]
where
\[
{\mathbf{r}}=[{z}_{b+1},\xi^\perp](\prod_{j=1} ^g [{x}_j,{y}_j])\prod_{j=1} ^{b} {z}_j
\]
and
\[
{\mathbf{r}}_0={z}_{b+1}{z}_{b+2}(\prod_{j=1} ^g [{x}_j,{y}_j])\prod_{j=1} ^{b} {z}_j.
\]
Then we have
\[
[\Sigma] = [\Sigma_0]-[{z}_{b+1}| {z}_{b+2}]-[{z}_{b+1} \xi^\perp {z}_{b+1} ^{-1}| {z}_{b+1}] +[{z}_{b+1}|\xi^\perp]-[{z}_{b+1}{z}_{b+2}|\xi^\perp].
\]
By Lemma \ref{normalizenonsep}, we can find $\widetilde{\alpha}$ and $\widetilde{\beta}$ such that $[\alpha] = [\widetilde{\alpha}]$ ($[\beta] = [\widetilde{\beta}]$, respectively) and $\widetilde{\alpha}({z}_{b+2} ) =0$ ($\widetilde{\beta}({z}_{b+1} ) =0$, respectively). Now we have
\begin{multline*}
\omega_K ^\Sigma([\alpha], [\beta]) = \omega_K ^{\Sigma_0}(\iota^* _{\Sigma_0}[\widetilde{\alpha}], \iota^* _{\Sigma_0}[\widetilde{\beta}]) -\operatorname{Tr} (\widetilde{\alpha}({z}_{b+1}){z}_{b+1}\cdot \widetilde{\beta}({z}_{b+2}))\\ +\operatorname{Tr}( \widetilde{\alpha}({z}_{b+1}) {z}_{b+1}\cdot \widetilde{\beta}(\xi^\perp)) - \operatorname{Tr}( \widetilde{\alpha}({z}_{b+1}{z}_{b+2})({z}_{b+1}{z}_{b+2})\cdot \widetilde{\beta}(\xi^\perp))\\ +\operatorname{Tr} X_{b+1} \widetilde{\beta} (z_{b+1}) + \operatorname{Tr} X_{b+2} \widetilde{\beta}(z_{b+2})
\end{multline*}
where $X_{b+1}$ and $X_{b+2}$ are elements of $\mathfrak{g}$ such that $\iota^\#_{{\xi^+}} \widetilde{\alpha}=\mathrm{d}_{\Gamma_{\xi}^+} X_{b+1}$ and $\iota^\#_{\xi^-} \widetilde{\alpha}=\mathrm{d}_{\Gamma_{\xi}^-}X_{b+2}$. Since $\widetilde{\beta}(z_{b+1})=\widetilde{\alpha}(z_{b+2})=0$ the last two terms vanish.
We expand using $z_{b+2} = \xi^{\perp} z_{b+1} ^{-1} (\xi^{\perp} )^{-1}$,
\begin{align*}
\widetilde{\alpha}({z}_{b+1}){z}_{b+1}\cdot \widetilde{\beta}({z}_{b+2})& = \widetilde{\alpha}({z}_{b+1}) {z}_{b+1}\cdot (\widetilde{\beta}(\xi^\perp) + (\xi^\perp {z}_{b+1}^{-1})\cdot \widetilde{\beta}((\xi^\perp)^{-1}))\\
&= \widetilde{\alpha}({z}_{b+1}) {z}_{b+1}\cdot (\widetilde{\beta}(\xi^\perp) - (\xi^\perp {z}_{b+1}^{-1}(\xi^\perp)^{-1})\cdot \widetilde{\beta}(\xi^\perp))\\
&= \widetilde{\alpha}({z}_{b+1}) {z}_{b+1}\cdot (\widetilde{\beta}(\xi^\perp) - {z}_{b+2}\cdot \widetilde{\beta}(\xi^\perp)).
\end{align*}
On the other hand, since $\widetilde{\alpha}({z}_{b+2})=0$,
\[
\widetilde{\alpha}({z}_{b+1}{z}_{b+2})({z}_{b+1}{z}_{b+2})\cdot \widetilde{\beta}(\xi^\perp)= \widetilde{\alpha}({z}_{b+1})({z}_{b+1}{z}_{b+2})\cdot \widetilde{\beta}(\xi^\perp).
\]
Since ${z}_{b+1}\cdot({z}_{b+2}\cdot \widetilde{\beta}(\xi^\perp)) = ({z}_{b+1}{z}_{b+2})\cdot \widetilde{\beta}(\xi^\perp)$, the two identities above show that all terms except $ \omega_K ^{\Sigma_0}(\iota^* _{\Sigma_0}[\widetilde{\alpha}], \iota^* _{\Sigma_0}[\widetilde{\beta}]) $ cancel one another. So we get
\[
\omega_K ^\Sigma([\alpha], [\beta]) = \omega_K ^{\Sigma_0}(\iota^* _{\Sigma_0}[\widetilde{\alpha}], \iota^* _{\Sigma_0}[\widetilde{\beta}])
\]
as desired.
\end{proof}
Combining Theorem \ref{decomppairingsep} and Theorem \ref{decomppairingnonsep}, one gets the following general local decomposition theorem.
\begin{corollary}\label{localdecomp}
Let $\Sigma$ be a compact oriented hyperbolic surface and let $\mathcal{C}=\{\xi_1, \cdots, \xi_m\}$ be a collection of pairwise disjoint, non-isotopic essential simple closed curves in $\Sigma$ that divide the surface into hyperbolic subsurfaces $\Sigma_1, \cdots, \Sigma_l$. Let $(\Gamma, \mathcal{S})$ be a group system where $\Gamma=\pi_1(\Sigma)$ and $\mathcal{S}= \{\Gamma_{\xi_1}^+, \cdots, \Gamma_{\xi_m}^+, \langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle \}$. Choose a boundary frame $\mathscr{B}$ and a $\mathcal{C}$-frame $\mathscr{C}$. Let $[\rho]$ be an element in $\overline{\mathcal{X}}_n ^{\mathscr{B}}(\Gamma, \mathscr{C})$ such that $[\rho_{\Gamma_{\Sigma_i}}]\in \overline{\mathcal{X}}_n ^{\mathscr{B}_i} (\Gamma_{\Sigma_i})$ for each $i=1,2,\cdots, l$ where
\[
\mathscr{B}_i = \{ (\zeta, B)\,|\, \zeta\text{ is a component of }\partial \overline{\Sigma_i}\text{ and } (\iota_{\Sigma_i} (\zeta), B)\in \mathscr{B}\cup \mathscr{C}\}.
\]
Fix a representative $\rho$ of $[\rho]$. Then for any $[\alpha],[\beta]\in H^1 _{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g}_\rho)$, we have
\[
\omega^\Sigma _K ([\alpha], [\beta] ) = \sum_{i=1} ^l \omega^{\Sigma_i} _K (\iota^* _{\Sigma_i} [\alpha], \iota^* _{\Sigma_i} [\beta]).
\]
\end{corollary}
\begin{proof}
We use induction on the number of curves in $\mathcal{C}$. If $\mathcal{C}$ consists of a single curve $\xi$, we are done by Theorem \ref{decomppairingsep} or \ref{decomppairingnonsep} depending on whether $\xi$ is separating or not.
Suppose that a collection $\mathcal{C} = \{\xi_1, \cdots, \xi_m\}$, $m>1$, is given, where $\xi_m$ is separating. Without loss of generality, we may assume that $\Sigma_-:=\Sigma_1\cup \cdots\cup \Sigma_p$ and $\Sigma_+ :=\Sigma_{p+1} \cup \cdots \cup \Sigma _l$ are the two components of $\Sigma \setminus \xi_m$. By virtue of Theorem \ref{decomposerep}, we can identify $\Gamma$ with the fundamental group of the graph of groups \raisebox{-2ex}{\begin{tikzpicture}
\draw (0,0) node {$\bullet$} ;
\draw (0,0) -- (1,0) node {$\bullet$};
\draw[->, thick] (.5,0)--(.51,0);
\draw (0,0) node[above] {$\Sigma_-$};
\draw (1,0) node[above] {$\Sigma_+$};
\draw (.5,0) node[above]{$\xi_m$};
\end{tikzpicture}}. Let $\mathcal{S}_{\pm}=\{\langle \zeta\rangle\subset \Gamma_{\Sigma_\pm} \,|\, \zeta\text{ is a component of }\partial \overline{\Sigma_{\pm}}\}$ so that $(\Gamma_{\Sigma_{+}}, \mathcal{S}_{+})$ and $(\Gamma_{\Sigma_{-}}, \mathcal{S}_{-})$ become group subsystems of $(\Gamma, \{\Gamma_{\xi_m}^+,\langle\zeta_1\rangle, \cdots, \langle\zeta_b\rangle\})$. Then by Theorem \ref{decomppairingsep},
\[
\omega^{\Sigma}_K ([\alpha], [\beta]) = \omega^{\Sigma_-} _K (\iota^* _{\Sigma_-} [\alpha] ,\iota^* _{\Sigma_- } [\beta]) + \omega^{\Sigma_+} _K (\iota^* _{\Sigma_+} [\alpha] ,\iota^*_{\Sigma_+} [\beta]),
\]
where $\iota_{\Sigma_{+}}$ and $\iota_{\Sigma_{-}}$ are the natural maps $(\Gamma_{\Sigma_{+}}, \mathcal{S}_{+}) \to (\Gamma, \mathcal{S})$ and $(\Gamma_{\Sigma_{-}}, \mathcal{S}_{-}) \to (\Gamma, \mathcal{S})$ respectively.
Observe that the collections of curves $ \{\xi\in \mathcal{C} \,|\,\xi\cap \Sigma_- \ne \emptyset \}$ and $\{\xi\in \mathcal{C} \,|\,\xi\cap \Sigma_+ \ne \emptyset \}$ cut $\overline{\Sigma_-}$ and $\overline{\Sigma_+}$ into $\Sigma_1,\cdots,\Sigma_p$ and $\Sigma_{p+1}, \cdots, \Sigma_l$ respectively. By the induction hypothesis, we have
\[
\omega^{\Sigma_-} _K (\iota^* _{\Sigma_-} [\alpha] ,\iota^* _{\Sigma_- } [\beta])= \sum_{i=1} ^p \omega^{\Sigma_i} _K (\widebar{\iota _{\Sigma_i}}^* \iota^* _{\Sigma_-} [\alpha], \widebar{\iota _{\Sigma_i}}^* \iota^*_{\Sigma_-} [\beta]),
\]
and
\[
\omega^{\Sigma_+} _K (\iota^* _{\Sigma_+} [\alpha] ,\iota^* _{\Sigma_+ } [\beta])= \sum_{i=p+1} ^l \omega^{\Sigma_i} _K (\widebar{\iota _{\Sigma_i}}^* \iota^* _{\Sigma_+} [\alpha], \widebar{\iota _{\Sigma_i}}^* \iota^*_{\Sigma_+} [\beta])
\]
where $\widebar{\iota_{\Sigma_i}}: \Gamma_{\Sigma_i} \to \Gamma_{\Sigma_\pm}$, $i=1,2,\cdots, l$ are the natural inclusions. We observe that $\widebar{\iota _{\Sigma_i}}^*\iota^* _{\Sigma_\pm} = \iota^* _{\Sigma_i}$. Therefore, we obtain
\[
\omega^\Sigma _K ([\alpha], [\beta] ) = \sum_{i=1} ^l \omega^{\Sigma_i} _K (\iota^* _{\Sigma_i} [\alpha], \iota^* _{\Sigma_i} [\beta]).
\]
Now suppose that $\xi_m$ is non-separating. Let $\Sigma_0:=\Sigma\setminus \xi_m$. Then by Theorem \ref{decomposerep}, $\Gamma$ is the fundamental group of a graph of groups \raisebox{-1.5ex}{\begin{tikzpicture}
\draw (0,0) node[left] {$\Sigma_0$};
\draw (0,0) node {$\bullet$};
\draw (.3,0) circle (.3cm);
\draw[->, thick] (.6,.01)--(.6,.02);
\draw (.6,0) node[right] {${\xi_m}$};
\end{tikzpicture}}. By Theorem \ref{decomppairingnonsep}, we have
\[
\omega^\Sigma _K ([\alpha],[\beta]) = \omega^{\Sigma_0} _K (\iota^* _{\Sigma_0} [\alpha], \iota^* _{\Sigma_0} [\beta]).
\]
Here $\iota_{\Sigma_0}$ is the injection from $\Gamma_{\Sigma_0}$ into $\Gamma$. Since $\mathcal{C}\setminus \{\xi_m\}$ divides $\Sigma_0$ into $\Sigma_1, \cdots, \Sigma_l$, by the induction hypothesis, we obtain
\[
\omega^{\Sigma_0} _K (\iota^* _{\Sigma_0} [\alpha], \iota^* _{\Sigma_0} [\beta])= \sum_{i=1} ^{l} \omega^{\Sigma_i} _K (\widebar{\iota_{\Sigma_i}} ^* \iota^ *_{\Sigma_0} [\alpha], \widebar{\iota_{\Sigma_i}} ^* \iota^ *_{\Sigma_0}[\beta]).
\]
Since $\widebar{\iota_{\Sigma_i}}^* \iota^* _{\Sigma_0}=\iota^* _{\Sigma_i}$, we have
\[
\omega^{\Sigma_0} _K (\iota^* _{\Sigma_0} [\alpha], \iota^* _{\Sigma_0} [\beta])=\sum_{i=1} ^{l} \omega^{\Sigma_i} _K (\iota_{\Sigma_i} ^* [\alpha], \iota_{\Sigma_i} ^* [\beta]).
\]
This completes the induction and Corollary \ref{localdecomp} follows.
\end{proof}
\subsection{Global decomposition} As mentioned in the introduction, we can decompose $\pi_1(\Sigma)$ into the $\pi_1(\Sigma_i)$'s, and this decomposition allows us to construct the map
\begin{equation}\label{restmap}
\mathcal{X}_n (\Gamma) \to \mathcal{X}_n (\Gamma_{\Sigma_1}) \times \cdots \times \mathcal{X}_n (\Gamma_{\Sigma_l})
\end{equation}
induced from $[\rho]\mapsto ([\rho_{\Gamma_{\Sigma_1}}],[\rho _{\Gamma_{\Sigma_2}}],\cdots, [\rho _{\Gamma_{\Sigma_l}}] )$.
Recall that Theorem 9.1 of Labourie-McShane \cite{labourie2009} shows that if $[\rho]$ is Hitchin, then so is each factor $\rho_{\Gamma_{\Sigma_i}}$. Therefore, if we restrict (\ref{restmap}) to $ \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$ we get the map
\begin{equation}\label{rest}
\overline{\Phi}: \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C}) \to \operatorname{Hit}_n ^{\mathscr{B}_1} (\Sigma_1) \times \cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l)
\end{equation}
where
\[
\mathscr{B}_i=\{(\xi,B)\,|\,\xi\text{ is a component of } \partial \overline{\Sigma_i}\text{ and } (\iota_{\Sigma_i}(\xi),B)\in \mathscr{B}\cup \mathscr{C}\}.
\]
\begin{proposition}\label{JM2} Let $\Sigma$ be a compact oriented hyperbolic surface possibly with boundary components $\{\zeta_1, \cdots, \zeta_b\}$ and let $\{\xi_1, \cdots, \xi_m\}$ be a collection of pairwise disjoint, non-isotopic oriented essential simple closed curves in $\Sigma$ that divide the surface into hyperbolic subsurfaces $\Sigma_1, \cdots, \Sigma_l$. We have the following:
\begin{itemize}
\item Let $(\Gamma, \mathcal{S})$ be a group system where $\Gamma=\pi_1(\Sigma)$,
\[
\mathcal{S}=\{\langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle, \Gamma_{\xi_1}^+, \cdots, \Gamma_{\xi_m}^+\}
\]
and let
\[
\mathcal{S}_i = \{\langle\zeta\rangle\subset\Gamma_{\Sigma_i}\,|\, \zeta \text{ is a component of }\partial \overline{\Sigma_i}\}.
\]
Then we have identifications
\[
T_{[\rho]} \operatorname{Hit} ^{\mathscr{B}} _n (\Sigma, \mathscr{C}) = H^1_{\mathrm{par}} (\Gamma, \mathcal{S}; \mathfrak{g}_\rho)
\]
and
\[
T_{\overline{\Phi}([\rho])} \operatorname{Hit}_n ^{\mathscr{B}_1}(\Sigma_1)\times\cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l) =\bigoplus_{i=1} ^l H^1 _{\mathrm{par}} (\Gamma_{\Sigma_i } , \mathcal{S}_i; \mathfrak{g}_{\rho_{\Gamma_{\Sigma_i}}}).
\]
\item Under the above identifications, the differential $\mathrm{d} \overline{\Phi} $ fits into the Mayer-Vietoris sequence
\[
0\to \bigoplus_{i=1} ^ m H^0(\Gamma_{\xi_i}; \mathfrak{g}) \overset{\delta}{\to} H^1 _{\mathrm{par}} (\Gamma,\mathcal{S}; \mathfrak{g}) \overset{\mathrm{d} \overline{\Phi} }{\to} \bigoplus_{i=1} ^l H^1 _{\mathrm{par}} (\Gamma_{\Sigma_i},\mathcal{S}_i; \mathfrak{g}) \to 0,
\]
that is,
\[
\mathrm{d} \overline{\Phi} ([\alpha]) = \iota_{\Sigma_1} ^* [\alpha] \oplus \cdots \oplus \iota^* _{\Sigma_l}[\alpha].
\]
\end{itemize}
\end{proposition}
\begin{proof}
The first statement was already established in Proposition \ref{tangent}.
The second assertion follows from the definition of $\overline{\Phi}$ and Proposition \ref{mvs}.
\end{proof}
\begin{lemma}\label{connectedfiber}
Each fiber of $\overline{\Phi}$ is connected.
\end{lemma}
\begin{proof}
We complete $\mathcal{C}$ to a maximal geodesic lamination of $\Sigma$ and construct the Bonahon-Dreyer coordinates on $\operatorname{Hit}_n (\Sigma)$ and on $\operatorname{Hit}_n (\Sigma_i)$ (see \cite{bonahon2014} or Appendix \ref{BDreview}) with respect to this maximal lamination. In these coordinates, $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$ is the set
\begin{multline*}
\{[\rho]\in \operatorname{Hit}_n (\Sigma)\,|\, l ^{\zeta_j}(\rho) = l ^{\zeta_j} (\rho_0),\, l ^{\xi_k}(\rho) = l ^{\xi_k} (\rho_0),\\ j=1,2,\cdots, b,\text{ and }k=1,2,\cdots, m\}
\end{multline*}
for some fixed reference point $[\rho_0]\in \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$. Here $l ^{\xi}$ is defined by
\[
l ^{\xi} (\rho) = \left( \log \frac{|\lambda_1(\rho(\xi))|}{|\lambda_{2}(\rho(\xi))|}, \cdots, \log \frac{|\lambda_{n-1}(\rho(\xi))|}{|\lambda_{n}(\rho(\xi))|}\right)\in \mathbb{R}^{n-1}
\]
where $\xi$ is a closed leaf or a boundary component and $\lambda_i(g)$ is the $i$th largest eigenvalue of $g\in G$. Recall that each component of $l^{\zeta_i}$ and $l^{\xi_i}$ can be expressed as a linear combination of triangle invariants and shear invariants. Moreover one can express $\overline{\Phi}$ as
\[
\overline{\Phi}=\operatorname{pr}_{\Sigma_1}\times \cdots \times \operatorname{pr}_{\Sigma_l}
\]
where $\operatorname{pr}_{\Sigma_i}$ denotes the projection onto the triangle invariants and shear invariants associated to ideal triangles and (infinite or closed) leaves contained in the interior of $\Sigma_i$. It follows that each fiber of $\overline{\Phi}$ is parametrized by the shear invariants associated to the closed leaves in $\mathcal{C}$. Therefore the fibers of $ \overline{\Phi}$ are connected.
\end{proof}
We now introduce a Hamiltonian $\mathbb{R}^{m(n-1)}$-action that makes $\overline{\Phi}$ an affine bundle over the base space $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})/ \mathbb{R}^{m(n-1)}$. Then we prove that the base space $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})/ \mathbb{R}^{m(n-1)}$ is the symplectic reduction.
Let $\mathbf{Hyp}^+$ be the set of purely loxodromic (or positive hyperbolic) elements in $G=\mathrm{PSL}_n(\mathbb{R})$. By an invariant function we mean a smooth function $f:\mathbf{Hyp}^+ \to \mathbb{R}$ such that $f(ghg^{-1}) = f(h)$ for all $h\in \mathbf{Hyp}^+$ and $g\in G$. Given an invariant function $f$, there is an associated function $F:\mathbf{Hyp}^+ \to \mathfrak{g}$ characterized by the property that $\frac{d}{dt}|_{t=0} f (g \exp tX) = \operatorname{Tr} (F(g) X)$ for all $X\in \mathfrak{g}$. Observe that $\operatorname{Ad}_g (F(h)) =F(ghg^{-1})$.
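To illustrate how $F$ is computed from $f$, consider the elementary example $f(g)=\operatorname{tr}(g)$, where $\operatorname{tr}$ denotes the matrix trace (to avoid the sign ambiguity of the trace on $\mathrm{PSL}_n(\mathbb{R})$ for even $n$, one may work with lifts to $\mathrm{SL}_n(\mathbb{R})$; the example is meant only for orientation). For traceless $X$,
\[
\frac{d}{dt}\Big|_{t=0} f (g \exp tX) = \operatorname{tr}(gX) = \operatorname{Tr}\Big(\big(g - \tfrac{\operatorname{tr}(g)}{n}\operatorname{Id}\big)X\Big),
\]
so $F(g) = g - \frac{\operatorname{tr}(g)}{n}\operatorname{Id}\in\mathfrak{g}$, and the equivariance $\operatorname{Ad}_g(F(h)) = F(ghg^{-1})$ is immediate.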
Let $f_1, \cdots, f_{n-1}$ be invariant functions such that
\[
g\mapsto f(g):=(f_1(g),\cdots, f_{n-1}(g))
\]
is injective and that $\{F_1(g), F_2(g), \cdots, F_{n-1}(g)\}$ forms a basis of $\ker(\operatorname{Ad}_g - \operatorname{Id})$ where $g\in \mathbf{Hyp}^+$. To each oriented essential simple closed curve $\xi$, associate a map $f_\xi:\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)\to \mathbb{R}^{n-1}$ which is defined by $f_\xi ([\rho]) = f(\rho(\xi))$.
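One admissible-looking family (an illustrative choice only; we again pass to $\mathrm{SL}_n(\mathbb{R})$-lifts to make the traces well-defined) consists of the power traces $f_j(g)=\operatorname{tr}(g^j)$, $j=1,\cdots,n-1$. The computation above generalizes to
\[
F_j(g) = j\Big(g^j - \tfrac{\operatorname{tr}(g^j)}{n}\operatorname{Id}\Big),
\]
and since a purely loxodromic $g$ has pairwise distinct eigenvalues, the matrices $\operatorname{Id}, g, \cdots, g^{n-1}$ span its centralizer, so $F_1(g),\cdots,F_{n-1}(g)$ form a basis of $\ker(\operatorname{Ad}_g - \operatorname{Id})$. Whether a given family also satisfies the injectivity requirement must be checked separately; we do not pursue this point here.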
Given $\mathcal{C}=\{\xi_1, \cdots, \xi_m\}$ a family of mutually disjoint, non-isotopic oriented essential simple closed curves, let $\mathbf{T}^t_{\xi_i, j}([\rho])= [\Phi^t_{F_j(\rho(\xi_i )), \xi_i}(\rho)]$, the algebraic bending by $F_j(\rho(\xi_i))$ along $\xi_i$. Then for $(\textbf{t}_1, \cdots, \textbf{t}_m)\in \mathbb{R}^{m(n-1)}$, where $\mathbf{t}_i= (t^1_i, \cdots, t^{n-1}_i)\in \mathbb{R}^{n-1}$, we define the complete flow
\[
\mathbf{T}^{(\textbf{t}_1, \cdots, \textbf{t}_m)}([\rho]) = \mathbf{T}^{t^{n-1} _m} _{\xi_m,n-1}\circ\mathbf{T}^{t^{n-2} _m} _{\xi_m,n-2}\circ\cdots \circ \mathbf{T}^{t^2_1}_{\xi_1, 2}\circ \mathbf{T}^{t^1_1}_{\xi_1, 1} ([\rho]).
\]
The above formula is well-defined in the sense that it does not depend on the order of compositions. Hence we obtain the $\mathbb{R}^{m(n-1)}$-action on $\operatorname{Hit}^{\mathscr{B}} _n(\Sigma)$ given by
\[
(\textbf{t}_1, \cdots, \textbf{t}_m)\cdot [\rho] = \mathbf{T}^{(\textbf{t}_1, \cdots, \textbf{t}_m)}([\rho]).
\]
Recall that $\delta(F_j(\rho(\xi_i)))$ is the fundamental vector field at $[\rho]$ of the unit vector in the $t^j _i$-direction (seen as a Lie algebra element of $\mathbb{R}^{m(n-1)}$).
\begin{lemma}\label{freeaction}
The $\mathbb{R}^{m(n-1)}$-action on $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$ is free.
\end{lemma}
\begin{proof}
Choose a representative $\rho$ of $[\rho]$. We observe that, by construction of the $\mathbb{R}^{m(n-1)}$-action, $(\mathbf{t}\cdot \rho)|_{\Gamma_{\Sigma_0}}=\rho|_{\Gamma_{\Sigma_0}}$ on the vertex group $\Gamma_{\Sigma_0}$ of the base vertex. Suppose that $\mathbf{t}\cdot [\rho]=[\rho]$ for some $\mathbf{t}\in \mathbb{R}^{m(n-1)}$. Then, by Lemma \ref{NoInvariantElement}, $\mathbf{t} \cdot \rho =\rho$ as representations. Now, by induction and the definition of the action, we have $\mathbf{t}=\mathbf{0}$. Therefore, the $\mathbb{R}^{m(n-1)}$-action is free.
\end{proof}
\begin{lemma}\label{properness}
Let
\[
\mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C}):=\{\rho\in \operatorname{Hom}(\Gamma, G)\,|\,[\rho]\in\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})\}.
\]
Then the $\mathbb{R}^{m(n-1)}$-action on $\mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$ is proper.
\end{lemma}
\begin{proof}
Define
\[
\mathcal{H} (\Gamma_{\Sigma_i}):=\{\rho\in \operatorname{Hom}(\Gamma_{\Sigma_i}, G)\,|\, [\rho]\in \operatorname{Hit}_n(\Sigma_i)\}
\]
for each $i=1,2,\cdots, l$. We know, by Lemma \ref{NoInvariantElement}, that $\mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$ and $\mathcal{H}(\Gamma_{\Sigma_i})$ are subspaces of $\operatorname{Hom}_s(\Gamma, G)$ and $\operatorname{Hom}_s(\Gamma_{\Sigma_i},G)$ respectively. Let $C$ be a compact subset of $\mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$. We know that the restriction maps
\[
\iota_{\Sigma_i}: \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C}) \to \mathcal{H}(\Gamma_{\Sigma_i})
\]
and
\[
\iota_{\xi_j^\perp}:\mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C}) \to \operatorname{Hom}(\langle \xi_j ^\perp \rangle, G),\quad \xi_j \in E(\mathcal{G})\setminus E(\mathcal{D}),
\]
are continuous and equivariant with respect to the $\mathbb{R}^{m(n-1)}$-action. Let
\begin{align*}
U_i &:=\{\mathbf{t}\in \mathbb{R}^{m(n-1)}\,|\, \mathbf{t}\cdot \iota_{\Sigma_i}(C) \cap \iota_{\Sigma_i}(C) \ne \emptyset \},\\
V_j&:= \{\mathbf{t}\in \mathbb{R}^{m(n-1)}\,|\, \mathbf{t}\cdot \iota_{\xi_j^\perp}(C) \cap \iota_{\xi_j ^\perp}(C) \ne \emptyset \}.
\end{align*}
Since $\iota_{\Sigma_i}$ and $\iota_{\xi_j^ \perp}$ are equivariant,
\[
\{\mathbf{t}\in \mathbb{R}^{m(n-1)}\,|\, \mathbf{t} \cdot C \cap C \ne \emptyset\} \subset \bigcap_{i=1} ^l U_i \cap \bigcap_{j=1} ^N V_j
\]
where $N= |E(\mathcal{G})\setminus E(\mathcal{D})|$. We claim that $ \bigcap_{i=1} ^l U_i \cap \bigcap_{j=1} ^N V_j$ is compact. Since the set
\begin{equation}\label{properset}
\{\mathbf{t}\in \mathbb{R}^{m(n-1)}\,|\, \mathbf{t} \cdot C \cap C \ne \emptyset\}
\end{equation}
is closed, the claim then implies that (\ref{properset}) is compact.
It is known that the $G$-action on $\operatorname{Hom}_s(\Gamma_{\Sigma_i}, G)$ is proper; see Proposition 1.1 of Johnson-Millson \cite{Johnson1987}. Hence, for each $\mathcal{H}(\Gamma_{\Sigma_i})\subset \operatorname{Hom}_s(\Gamma_{\Sigma_i},G)$, the set
\[
D:=\{g\in G\,|\, g \iota_{\Sigma_i}(C) g^{-1} \cap \iota_{\Sigma_i}(C) \ne \emptyset \}
\]
is compact. Suppose that $\xi$ is in $E(\mathcal{D})$ and precedes $\Sigma_i$. Let
\begin{multline*}
E:=\{(t_1, \cdots, t_{n-1}) \in \mathbb{R}^{n-1}\,|\, \exp (t_1 F_1(\rho(\xi))+ \cdots+ t_{n-1}F_{n-1}(\rho(\xi)))\in D\\ \text{ for some } \rho \in C\}.
\end{multline*}
Recall that the $\mathbb{R}^{n-1}$-action on $\operatorname{Hom}_s(\Gamma_{\Sigma_i}, G)$ corresponding to the flow along $\xi$ is conjugation by $\exp (t_1 F_1 (\rho(\xi))+\cdots+t_{n-1}F_{n-1}(\rho(\xi)))$, $\rho \in C$. Hence we have
\[
\{\mathbf{t} \in \mathbb{R}^{n-1}\,|\, \mathbf{t}\cdot \iota_{\Sigma_i}(C) \cap \iota_{\Sigma_i}(C)\ne \emptyset\} \subset E.
\]
We claim that $E$ is compact, which also proves that
\[
\{\mathbf{t} \in \mathbb{R}^{n-1}\,|\, \mathbf{t}\cdot \iota_{\Sigma_i}(C) \cap \iota_{\Sigma_i}(C)\ne \emptyset\}
\]
is compact. Consider the map $k: \mathbb{R}^{n-1} \times C \to G$ given by
\[
((t_1, \cdots, t_{n-1}), \rho)\mapsto \exp (t_1 F_1 (\rho(\xi))+\cdots+t_{n-1}F_{n-1}(\rho(\xi))).
\]
This map is continuous. Moreover, if $W$ is an unbounded subset of $\mathbb{R}^{n-1}$ then so is $k(W\times C)$, where $G$ is given the operator norm. Since $C$ is compact, the projection $p_1:\mathbb{R}^{n-1}\times C \to \mathbb{R}^{n-1}$ onto the first factor is a closed map. Therefore, $E=p_1(k^{-1}(D))$ is a closed and bounded subset of $\mathbb{R}^{n-1}$, so $E$ is compact.
By simple induction, we have
\[
\bigcap_{i=1} ^l U_i = A_1 \oplus \cdots \oplus A_l \oplus \mathbb{R}^{N(n-1)}
\]
where each $A_i$ is a compact subspace of a subgroup $\mathbb{R}^{n-1}$ of $\mathbb{R}^{m(n-1)}$ corresponding to the flow along an edge in $\mathcal{D}$.
Now we claim that the set
\[
B_j:= \{\mathbf{t}\in \mathbb{R}^{n-1} \,|\, \mathbf{t}\cdot \iota_{\xi_j ^\perp}(C) \cap \iota_{\xi_j ^\perp}(C) \ne \emptyset\}
\]
is compact. Recall that the $\mathbb{R}^{n-1}$-action on $\iota_{\xi_j ^\perp}(C)$ is the right multiplication by $\exp (t_1F_1(\rho(\xi_j ))+\cdots+t_{n-1}F_{n-1}(\rho(\xi_j)))$. Let $\overline{A}^+$ be the set of diagonal matrices whose diagonal entries are sorted in decreasing order. Consider the Cartan projection $\mathfrak{a}: G \to \overline{A}^+$, which is known to be continuous and proper. We may assume that $K\cdot \iota_{\xi_j^\perp}(C)=\iota_{\xi_j^\perp}(C)\cdot K = \iota_{\xi_j^\perp}(C)$ where $K$ is a maximal compact subgroup of $G$. Then we observe that
\[
F:=\{ g\in \overline{A}^+\,|\, \iota_{\xi_j^\perp}(C) g \cap \iota_{\xi_j^\perp}(C) \ne \emptyset\}
\]
is compact. Indeed, if $F$ were not compact, there would be an unbounded sequence $\{g_i\}$ in $\overline{A}^+$ such that $ \iota_{\xi_j^\perp}(C) g_i \cap \iota_{\xi_j ^\perp}(C) \ne \emptyset$ for all $i$. Then $\iota_{\xi_j^\perp}(C)$ would be unbounded, contradicting the compactness of $\iota_{\xi_j^\perp}(C)$. Since $\mathfrak{a}$ is proper, $\mathfrak{a}^{-1}(F)=\{ g\in G\,|\, \iota_{\xi_j^\perp}(C)g \cap \iota_{\xi_j^\perp}(C) \ne \emptyset\}$ is compact in $G$. Thus $B_j=p_1(k^{-1}(\mathfrak{a}^{-1}(F)))$ is closed and bounded, and it follows that $B_j$ is compact.
Hence, topologically, $\bigcap_{i=1} ^l U_i \cap \bigcap_{j=1} ^N V_j$ is a closed subspace of $A_1\times \cdots \times A_l \times B_1 \times \cdots \times B_N$. Since each $A_i$ and each $B_j$ is compact, $\bigcap_{i=1} ^l U_i \cap \bigcap_{j=1} ^N V_j$ is also compact.
\end{proof}
Let $\mu:\operatorname{Hit}_n ^\mathscr{B}(\Sigma) \to \mathbb{R}^{m(n-1)}$ be the function defined by
\begin{equation}\label{momentmap}
\mu([\rho])= (f_{\xi_1}(\rho), \cdots, f_{\xi_m}(\rho))
\end{equation}
and let $\mathbf{L}=\operatorname{image} \mu$. Recall that $f$ is a complete invariant of conjugacy classes in $\mathbf{Hyp}^+$: the value $f(g)$ determines the conjugacy class in which $g\in \mathbf{Hyp}^+$ is contained.
\begin{theorem}[Generalization of Goldman \cite{goldman1986}]\label{twist} Keep the assumptions of Proposition \ref{JM2}. For any boundary frame $\mathscr{B}$, the $\mathbb{R}^{m(n-1)}$-action on $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma)$ is Hamiltonian with moment map given by (\ref{momentmap}). Each $y\in \mathbf{L}$ is a regular value of $\mu$ and the action is proper on $\mu^{-1}(y)$.
\end{theorem}
\begin{proof}
Theorem 4.3 of Goldman \cite{goldman1986} states that when $\Sigma$ is closed, the $\mathbb{R}^{m(n-1)}$-action on $(\operatorname{Hit}_n(\Sigma),\omega_G)$ is weakly Hamiltonian. Since the curves in $\mathcal{C}$ are pairwise disjoint and non-isotopic, Theorem 3.5 of \cite{goldman1986} implies that the Hamiltonian functions $[\rho] \mapsto f_i (\rho(\xi_j))$ commute with each other. Therefore this action is Hamiltonian.
Now we assume that $\Sigma$ has boundary. Let $(\Gamma, \mathcal{S})$ be a group system where $\Gamma=\pi_1(\Sigma)$ and $\mathcal{S}=\{\langle \zeta_1\rangle, \cdots, \langle \zeta_b\rangle, \Gamma_{\xi_1}^+, \cdots, \Gamma_{\xi_m}^+\}$. We first consider the cohomological operation
\[
H^1 (\Gamma ,\mathcal{S} ; \mathfrak{g}) \otimes H^1(\Gamma; \mathfrak{g} ) \overset{\smile}{\to} H^2(\Gamma,\mathcal{S} ; \mathbb{R} ) \overset{\cap [\Sigma]}{\to} \mathbb{R}
\]
where the first arrow is the usual cup product and the second is the cap product with a relative fundamental class $[\Sigma] \in H_2(\Gamma, \mathcal{S};\mathbb{R})$. It descends to the operation
\[
H^1 _{\mathrm{par}} (\Gamma,\mathcal{S};\mathfrak{g}) \otimes H^1_{\mathrm{par}}(\Gamma, \mathcal{S}; \mathfrak{g}) \to \mathbb{R}
\]
which is the same as the explicitly defined form $\omega^\Sigma _K$. See Lemma 8.4 of \cite{guruprasad1997}. Then, as in the proof of Proposition 3.7 of \cite{goldman1986}, we can show that the Poincar\'{e} dual of the cohomology element $\mathbb{X}_{f_{\xi_i}}|_{[\rho]}\in H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}_\rho)\subset H^1(\Gamma;\mathfrak{g}_\rho)$ is $\xi_i \otimes F_j (\rho(\xi_i))\in H_1(\Gamma,\mathcal{S};\mathfrak{g})$. This follows from the commutativity of the following diagram, the absolute version of which appears in the proof of Proposition 3.7 of \cite{goldman1986}:
\[
\xymatrix{
H^1(\Gamma;\mathfrak{g})\ar[d] ^{\tilde{\omega}_K}\ar[r]^{\cap [\Sigma]}\ar[rd]^{\theta} & H_1(\Gamma,\mathcal{S}; \mathfrak{g})\ar[d]^{\eta}\\
H^1(\Gamma,\mathcal{S};\mathfrak{g})^* & \ar[l]^{\operatorname{Tr}^*} H^1(\Gamma,\mathcal{S};\mathfrak{g}^*)^*
}.
\]
For the precise definition of each map, we refer to Goldman \cite{goldman1986}. One can also prove, by exactly the same argument as in Theorem 4.3 of \cite{goldman1986}, that the Poincar\'{e} dual of $[\frac{\partial}{\partial t^j _i } \mathbf{T}]=\delta(F_j(\rho(\xi_i)))\in H^1(\Gamma; \mathfrak{g})$ is given by $\xi_i \otimes F_j (\rho(\xi_i))$ as well. This proves that the action is weakly Hamiltonian. To prove that the action is Hamiltonian, we again use the fact that the curves $\xi_i$ are pairwise disjoint, which implies that $\{f_{\xi_i}, f_{\xi_j}\}=0$ for all $i, j$.
It remains to prove the properness of the action. We show the following claim first.
\begin{claim}
Let $\mathscr{C}$ be the $\mathcal{C}$-frame such that $\mu^{-1}(y) = \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$. For any given compact subset $C\subset \mu^{-1}(y)=\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$, there is a compact set $C' \subset \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$ satisfying the following properties:
\begin{itemize}
\item $C= p(C')$, where $p: \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C}) \to \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$ is the projection $\rho\mapsto [\rho]$.
\item If $\rho \in C'$ and $[\mathbf{t} \cdot \rho ] \in C$ for some $\mathbf{t} \in \mathbb{R}^{m(n-1)}$ then $\mathbf{t} \cdot \rho \in C'$.
\end{itemize}
\end{claim}
\begin{proof}[Proof of the Claim]
We extend $\mathcal{C}$ to a maximal geodesic lamination on $\Sigma$ and fix an ideal triangle $T$ contained in $\Sigma_0$, the origin of the tree $\mathcal{D}$. By Labourie-McShane \cite{labourie2009}, there is an equivariant flag curve $\mathcal{F}_\rho:\partial_\infty \widetilde{\Sigma} \to \mathrm{Flag}(\mathbb{R}^n)$ for each $\rho \in \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$, where $\widetilde{\Sigma}$ is the universal cover of $\Sigma$. Fix also flags $P$, $Q$ and a projective line $R$ such that $(P,Q,R)$ is generic. Denote by $p,q,r$ the three vertices of a lift $\widetilde{T}$ of $T$. Then for each $[\rho]\in \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$ there is a unique $\widetilde{\rho}\in \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$ such that $[\widetilde{\rho}] = [\rho]$, $\mathcal{F}_{\widetilde{\rho}} (p)=P,$ $\mathcal{F}_{\widetilde{\rho}} (q) = Q$ and $\mathcal{F}_{\widetilde{\rho}}(r)^{(1)}= R$. Let $s: \mu^{-1}(y) \to \mathcal{H}^{\mathscr{B}} (\Gamma,\mathscr{C})$ be the map defined by $s([\rho]) = \widetilde{\rho}$ and define $C' = s(C)$. Since the map $s$ is continuous, $C'$ is compact. Suppose that $p(\mathbf{t}\cdot \rho)=[\mathbf{t}\cdot \rho] \in C$ for some $\mathbf{t}\in \mathbb{R}^{m(n-1)}$ and some $\rho \in C'$. Since $(\mathbf{t}\cdot \rho)|_{\Gamma_{\Sigma_0}} = \rho|_{\Gamma_{\Sigma_0}}$, we have that $\mathcal{F}_{\mathbf{t}\cdot \rho} (p)= P$, $\mathcal{F}_{\mathbf{t}\cdot \rho}(q)= Q$, and $\mathcal{F}_{\mathbf{t}\cdot \rho} (r) ^{(1)}= R$. Hence $\mathbf{t} \cdot \rho = s([\mathbf{t}\cdot \rho])$. It follows that $C'$ is the desired compact set.
\end{proof}
To prove the properness, we lift the compact set $C$ to $C'$ as above. We observe that
\[
\{\mathbf{t}\in \mathbb{R}^{m(n-1)} \,|\, \mathbf{t} \cdot C\cap C \ne \emptyset\} = \{\mathbf{t}\in \mathbb{R}^{m(n-1)} \,|\, \mathbf{t}\cdot C' \cap C' \ne \emptyset\}.
\]
The right hand side is compact by Lemma \ref{properness}. Therefore, the $\mathbb{R}^{m(n-1)}$-action is proper on $\mu^{-1}(y)$.
\end{proof}
In particular, by virtue of Theorem \ref{MWq}, we can construct the Marsden-Weinstein quotient
\[
q: \mu^{-1}(y) \to \mu^{-1}(y)/ \mathbb{R}^{m(n-1)}.
\]
We denote by $\widetilde{\omega}_K ^\Sigma$ the induced symplectic form on $ \mu^{-1}(y)/ \mathbb{R}^{m(n-1)}$.
Let $\mathscr{C}$ be the $\mathcal{C}$-frame such that $\mu^{-1}(y) = \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})$. As mentioned above, the quotient space $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C}) /\mathbb{R}^{m(n-1)}$ carries the symplectic form $\widetilde{\omega}_K ^\Sigma$. On the other hand, the target of $\overline{\Phi}$, namely $\operatorname{Hit}_n ^{\mathscr{B}_1}(\Sigma_1)\times\cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l)$, also admits a symplectic form $\omega_K ^{\Sigma_1}\oplus \cdots \oplus \omega_K ^{\Sigma_l}$.
\begin{theorem}\label{gendecomp} Let $\Sigma$ be a compact oriented hyperbolic surface possibly with boundary components $\{\zeta_1, \cdots, \zeta_b\}$ and let $\{\xi_1, \cdots, \xi_m\}$ be a collection of pairwise disjoint, non-isotopic oriented essential simple closed curves in $\Sigma$ that divide the surface into hyperbolic subsurfaces $\Sigma_1, \cdots, \Sigma_l$. Let $\mathscr{B}$ and $\mathscr{C}$ be a boundary frame and $\mathcal{C}$-frame respectively. Then $\overline{\Phi}$ in (\ref{rest}) induces the natural map
\[
\Phi:\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})/\mathbb{R}^{m(n-1)} \to \operatorname{Hit}_n ^{\mathscr{B}_1}(\Sigma_1)\times\cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l)
\]
where
\[
\mathscr{B}_i=\{(\xi,B)\,|\,\xi\text{ is a component of } \partial \overline{\Sigma_i}\text{ and } (\iota_{\Sigma_i}(\xi),B)\in \mathscr{B}\cup \mathscr{C}\}.
\]
Moreover $\Phi$ is a symplectic diffeomorphism onto an open submanifold of
\[
\operatorname{Hit}_n ^{\mathscr{B}_1}(\Sigma_1)\times\cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l).
\]
\end{theorem}
\begin{proof}
Since $\mathbb{R}^{m(n-1)}$ acts by conjugation on each $\Gamma_{\Sigma_i}$, $\Phi$ is well-defined.
We now prove that $\Phi$ is symplectic. Since
\[
T_{q[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})/\mathbb{R}^{m(n-1)}\approx \mathrm{d} q ( H^1_{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g}_\rho)),
\]
each element in $T_{q[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})/\mathbb{R}^{m(n-1)}$ can be written as $ \mathrm{d} q ([\alpha])$ for some $[\alpha] \in H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}_\rho)$. Moreover, by Proposition \ref{JM2}, we have
\[
\mathrm{d} (\Phi \circ q) ([\alpha]) =\mathrm{d} \overline{\Phi}([\alpha])= \iota^* _{\Sigma_1} [\alpha]\oplus \cdots \oplus \iota^* _{\Sigma_l} [\alpha].
\]
Therefore, it follows that
\[
\Phi^* (\omega^{\Sigma_1} _K \oplus \cdots \oplus \omega^{\Sigma_l} _K ) ( \mathrm{d} q [\alpha], \mathrm{d} q [\beta]) = \sum_{i=1} ^l \omega^{\Sigma_i} _K (\iota^* _{\Sigma_i} [\alpha], \iota^* _{\Sigma_i} [\beta]).
\]
By Corollary \ref{localdecomp}, we have
\[
\sum_{i=1} ^l \omega^{\Sigma_i} _K (\iota^* _{\Sigma_i} [\alpha], \iota^* _{\Sigma_i} [\beta])= \omega^\Sigma _K ( [\alpha], [\beta]).
\]
Since $q^* (\widetilde{\omega}_K ^{\Sigma}) = \omega^\Sigma _K$ on $T_{[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})=H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}_\rho)$, and since $[\alpha]$ and $[\beta]$ were chosen in $H^1_{\mathrm{par}}(\Gamma, \mathcal{S};\mathfrak{g}_\rho)$, it follows that
\[
\omega^\Sigma _K ([\alpha], [\beta])=\widetilde{\omega}^\Sigma _K (\mathrm{d} q[\alpha], \mathrm{d} q[\beta]).
\]
Therefore, $\widetilde{\omega}^\Sigma _K=\Phi^*(\omega_K ^{\Sigma_1}\oplus \cdots \oplus \omega_K ^{\Sigma_l})$ as we wanted.
$\Phi$ is one-to-one. Indeed, by Lemmas \ref{connectedfiber} and \ref{freeaction}, $\mathbb{R}^{m(n-1)}$ acts transitively and freely on each fiber of $\overline{\Phi}$. Hence $\Phi([\rho_1])= \Phi([\rho_2])$ if and only if $[\rho_1]$ and $[\rho_2]$ lie in the same fiber of $\overline{\Phi}$, if and only if there is a (unique) $\mathbf{t}\in \mathbb{R}^{m(n-1)}$ such that $\mathbf{t}\cdot [\rho_1]=[\rho_2]$, that is, if and only if $[\rho_1]$ and $[\rho_2]$ represent the same element in $\operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})/\mathbb{R}^{m(n-1)}$.
We now show that $\Phi$ is an open embedding. Observe that $\delta( \bigoplus_{i=1} ^ m H^0(\Gamma_{\xi_i}; \mathfrak{g}))$ is tangent to the orbits of the $\mathbb{R}^{m(n-1)}$-action. Therefore we have, by Proposition \ref{JM2},
\begin{align*}
T_{q[\rho]} \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma,\mathscr{C})/\mathbb{R}^{m(n-1)}&\approx \mathrm{d} q ( H^1_{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g}_\rho))\\
&= H^1_{\mathrm{par}} (\Gamma, \mathcal{S};\mathfrak{g}_\rho)/\delta (\bigoplus_{i=1} ^m H^0(\Gamma_{\xi_i}; \mathfrak{g})) \\
&\approx \bigoplus_{i=1} ^l H^1(\Gamma_{\Sigma_i}, \mathcal{S}_i; \mathfrak{g}).
\end{align*}
Therefore, Proposition \ref{JM2} shows that $\Phi$ has full rank and that
\[
\dim \operatorname{Hit}_n ^{\mathscr{B}}(\Sigma, \mathscr{C})/\mathbb{R}^{m(n-1)} =\dim \operatorname{Hit}_n ^{\mathscr{B}_1}(\Sigma_1)\times\cdots \times \operatorname{Hit}_n ^{\mathscr{B}_l}(\Sigma_l).
\]
It follows that $\Phi$ is an open embedding.
\end{proof}
\section{Global Darboux coordinates on $\operatorname{Hit}_3(\Sigma)$}
In this section we prove Theorem \ref{globaldarbouxintro}.
We first review Goldman's construction of a global parametrization of $\operatorname{Hit}_3(\Sigma)$, where $\Sigma$ is a closed surface, and then compute $\omega_G$ between some coordinate vector fields. We then construct an $\mathbb{R}^{8g-8}$-valued function and prove that this function satisfies all the conditions of Theorem \ref{existenceofdarboux}. Corollary \ref{comm} plays an essential role in the proof.
Throughout this section, $\Sigma$ denotes a closed oriented hyperbolic surface unless otherwise stated.
\subsection{Review on the Goldman coordinates}
Choi-Goldman \cite{choi1993} show that $\operatorname{Hit}_3(\Sigma)$ can be seen as the deformation space of convex projective structures on the surface $\Sigma$. This allows Goldman \cite{goldman1990} to construct global coordinates on $\operatorname{Hit}_3(\Sigma)$ based on projective geometry. Let us briefly summarize the construction of the Goldman coordinates.
Take a maximal collection of disjoint, non-isotopic essential simple closed curves $\mathcal{C}=\{\xi_1, \cdots, \xi_{3g-3}\}$ in $\Sigma$. This collection $\mathcal{C}$ cuts the surface into $2g-2$ pairs of pants $P_1, \cdots, P_{2g-2}$. As mentioned in Lemma \ref{NoInvariantElement}, if $[\rho]\in \operatorname{Hit}_n(\Sigma)$, then each $\rho(\xi_i)$ is in $\mathbf{Hyp}^+$. Therefore, by giving an orientation to each $\xi_i$, we can associate to each oriented simple closed curve $\xi_i$ the following invariants $\ell_i$ and $m_i$:
\[
\ell_i (\rho )= \log \frac{|\lambda_1(\rho(\xi_i))|}{|\lambda_3(\rho(\xi_i))|}, \quad m_i(\rho)= 3 \log| \lambda_2(\rho(\xi_i))|.
\]
Here $\lambda_i$ denotes the $i$th largest eigenvalue.
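For concreteness, here is a short worked computation (not needed in the sequel). Choosing a lift of $\rho(\xi_i)$ of determinant one, so that $\log|\lambda_1|+\log|\lambda_2|+\log|\lambda_3|=0$, the pair $(\ell_i, m_i)$ determines all three eigenvalue moduli:
\[
\log|\lambda_1| = \frac{\ell_i}{2}-\frac{m_i}{6},\qquad \log|\lambda_2| = \frac{m_i}{3},\qquad \log|\lambda_3| = -\frac{\ell_i}{2}-\frac{m_i}{6},
\]
and the strict ordering $|\lambda_1|>|\lambda_2|>|\lambda_3|$ translates into the constraint $|m_i|<\ell_i$, which reappears in the description of the set $\mathfrak{R}$ at the end of this section.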
Recall that there is a Hamiltonian $\mathbb{R}^{6g-6}$-action on $\operatorname{Hit}_3(\Sigma)$ with moment map
\[
\mu : [\rho] \mapsto (\ell_1(\rho), m_1(\rho), \cdots, \ell_{3g-3}(\rho), m_{3g-3}(\rho)).
\]
The quotient $q: \operatorname{Hit}_3(\Sigma) \to \operatorname{Hit}_3(\Sigma)/\mathbb{R}^{6g-6}$ is an affine bundle. Recall also that $\operatorname{Hit}_3(\Sigma)$ is foliated by the fibers $\mu^{-1}(y)$, $y\in \mathbf{L}$. As each $\mu^{-1}(y)$ is invariant under the $\mathbb{R}^{6g-6}$-action, the quotient space $\operatorname{Hit}_3(\Sigma)/\mathbb{R}^{6g-6}$ is also foliated by symplectic manifolds of the form $\mu^{-1}(y)/\mathbb{R}^{6g-6}$. We have seen that each leaf $\mu^{-1}(y)/\mathbb{R}^{6g-6}$ can be identified with the symplectic manifold $\operatorname{Hit}_3 ^{\mathscr{B}^y _1}(P_1)\times \cdots\times \operatorname{Hit}_3 ^{\mathscr{B}^y _{2g-2}}(P_{2g-2})$ via the map $\Phi_y$ in Theorem \ref{gendecomp}, where the $\mathscr{B}^y _i$ are boundary frames corresponding to $y$.
Goldman \cite{goldman1990} shows that each factor $\operatorname{Hit}_3 ^{\mathscr{B}_i}(P_i)$ is parametrized by two coordinates $(s, t)$. We can therefore parametrize the quotient $\operatorname{Hit}_3(\Sigma)/\mathbb{R}^{6g-6}$ by the coordinates $\ell_i$, $m_i$ together with interior coordinates $s_i$ and $t_i$ defined by
\[
s_i:=\log s\circ \operatorname{pr}_i\circ \Phi_y,\quad \text{and}\quad t_i:=\log t\circ\operatorname{pr}_i \circ\Phi_y
\]
where $\operatorname{pr}_i$ is the projection onto the $i$th factor of $\operatorname{Hit}_3 ^{\mathscr{B}^y _1}(P_1)\times \cdots\times \operatorname{Hit}_3 ^{\mathscr{B}^y _{2g-2}}(P_{2g-2})$.
To complete our discussion, we have to parametrize the fibers of the affine bundle $q: \operatorname{Hit}_3(\Sigma) \to \operatorname{Hit}_3(\Sigma)/\mathbb{R}^{6g-6}$. To this end, we need to specify an origin in each fiber. We make the following observation first:
\begin{lemma}\label{origin}
Let $([\rho_1], \cdots, [\rho_{2g-2}]) \in \operatorname{Hit}_3 ^{\mathscr{B}_1 ^y}(P_1)\times \cdots \times\operatorname{Hit}_3^{\mathscr{B}^y _{2g-2}}(P_{2g-2})$ be in the image of $\Phi_y$. Then there is a unique $[\rho]\in \mu^{-1}(y)\subset \operatorname{Hit}_3(\Sigma)$ such that $\Phi_y (q([\rho]))=([\rho_1],\cdots,[\rho_{2g-2}])$ and that $\sigma^\rho _j(\xi_i)=0$ for each $i=1,2,\cdots, 3g-3$ and $j=1,2$. Here $\sigma^\rho _j(\xi_i) $ are shear invariants of $\xi_i$ given in (\ref{ClosedLeafInvariant}) of Appendix \ref{BDreview}.
\end{lemma}
\begin{proof}
We have to show the uniqueness of $[\rho]$. Suppose that there is another point $[\rho']\in \operatorname{Hit}_3(\Sigma)$, $[\rho']\ne[\rho]$, such that $\Phi([\rho'])=([\rho_1], \cdots, [\rho_{2g-2}])$ and that $\sigma^{\rho'}_j(\xi_i) =0$ for all $i=1,2,\cdots, 3g-3$ and $j=1,2$. Then we can find a non-zero vector $\mathbf{t}\in\mathbb{R}^{6g-6}$ such that $\mathbf{t}\cdot[\rho']=[\rho]$. Due to Proposition 5.2 of Bonahon and I. Kim \cite{bonahon2018}, there is a block-diagonal matrix
\[
A:=\begin{pmatrix}
D_1 & 0 &\cdots &0 \\
0 & D_2 & \cdots &0\\
\vdots & \vdots &\ddots&\vdots\\
0& 0& \cdots & D_{3g-3}
\end{pmatrix}, \qquad D_i = \begin{pmatrix} 1 & -3 \\ 1 & 3 \end{pmatrix}
\]
such that $A \, \mathbf{t}=(\sigma^{\rho}_1(\xi_1),\sigma^\rho _2 (\xi_1), \cdots, \sigma^\rho _1(\xi_{3g-3}),\sigma^\rho _2 (\xi_{3g-3}))^T =\mathbf{0}$. Since each block $D_i$ has determinant $6\ne 0$, the matrix $A$ is non-singular, so $\mathbf{t}$ must be the null vector, which is a contradiction.
\end{proof}
Therefore, we obtain a section $\mathfrak{s}: \operatorname{Image} \Phi_y \to \mu^{-1}(y) \subset \operatorname{Hit}_3(\Sigma)$ of $\Phi_y \circ q|_{\mu^{-1}(y)}$ by assigning to each $([\rho_1], \cdots, [\rho_{2g-2}])$ in the image of $\Phi_y$ the unique $[\rho]\in \operatorname{Hit}_3(\Sigma)$ constructed in Lemma \ref{origin}. Using the image of $\mathfrak{s}$ as the origin of the action, we obtain well-defined twist-bulge parameters $u_i$, $v_i$ associated to each $\xi_i$. In summary, the global coordinates of $\operatorname{Hit}_3(\Sigma)$ are given by
\[
\{\mathbf{s}_1,\mathbf{t}_1, \cdots, \mathbf{s}_{2g-2},\mathbf{t}_{2g-2}, \ell_1, m_1, \cdots, \ell_{3g-3},m_{3g-3},u_1, v_1, \cdots, u_{3g-3},v_{3g-3}\}
\]
where
\[
\mathbf{s}_i := s_i \circ q,\quad\text{and}\quad\mathbf{t}_i := t_i \circ q.
\]
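As a quick consistency check on the count of coordinates (not part of the construction): there are $2(2g-2)$ coordinates $\mathbf{s}_i,\mathbf{t}_i$, $2(3g-3)$ coordinates $\ell_i,m_i$ and $2(3g-3)$ coordinates $u_i,v_i$, and indeed
\[
2(2g-2)+2(3g-3)+2(3g-3)=16g-16=(2g-2)\dim \mathrm{PSL}_3(\mathbb{R})=\dim \operatorname{Hit}_3(\Sigma).
\]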
We have to remark that these coordinates may not be compatible with the symplectic form $\omega_G$.
\subsection{Proof of Theorem \ref{globaldarbouxintro}}
In this subsection we give a proof of Theorem \ref{globaldarbouxintro}. We start with some lemmas aiming to apply Theorem \ref{existenceofdarboux} at the end.
Recall that for a function $f$, we denote by $\mathbb{X}_{f}$ its Hamiltonian vector field. Recall also that
\[
\frac{\partial}{\partial \mathbf{s}_i} = \mathrm{d} \mathfrak{s} \frac{\partial}{\partial s_i},\quad\text{and}\quad \frac{\partial}{\partial \mathbf{t}_i} = \mathrm{d} \mathfrak{s} \frac{\partial}{\partial t_i}
\]
for each $i=1,2,\cdots, 2g-2$.
\begin{corollary}[See also \cite{kim1999}] \label{comm}
Let $\Sigma$ be a closed oriented hyperbolic surface. For any section $\mathfrak{s}: \operatorname{Image} \Phi \to \operatorname{Hit}_3(\Sigma)$ of $\Phi\circ q:\operatorname{Hit}_3(\Sigma) \to \operatorname{Image} \Phi$ and at any point $[\rho]\in \operatorname{Hit}_3(\Sigma)$, we have:
\[
\omega_G\left(\frac{\partial}{\partial \mathbf{s}_i},\frac{\partial}{\partial \mathbf{s}_j}\right)=\omega_G\left(\frac{\partial}{\partial \mathbf{s}_i},\frac{\partial}{\partial \mathbf{t}_j}\right)=\omega_G\left(\frac{\partial}{\partial \mathbf{t}_i},\frac{\partial}{\partial \mathbf{t}_j}\right)=0
\]
whenever $i\ne j$ and
\[
\omega_{G} \left( \frac{\partial}{\partial \mathbf{s}_i} , \frac{\partial}{\partial \mathbf{t}_i}\right) =-1.
\]
\end{corollary}
\begin{remark}
The first part of Corollary \ref{comm} is partially proven by H. Kim; see Proposition 6.4 of \cite{kim1999}. He uses Mathematica in his proof. We obtain the same result, and more, without Mathematica.
\end{remark}
\begin{proof}
It is enough to consider the case where $P_i$ and $P_j$ are adjacent.
Suppose that $[\rho]\in \operatorname{Hit}_3(\Sigma, \mathscr{C})$ for some $\mathscr{C}$. Then $\frac{\partial}{\partial\mathbf{s}_i}=\mathrm{d}\mathfrak{s} \frac{\partial}{\partial s_i}$ and $\frac{\partial}{\partial \mathbf{t}_i}=\mathrm{d}\mathfrak{s} \frac{\partial}{\partial t_i}$ are tangent to $\operatorname{Hit}_3(\Sigma, \mathscr{C})$. Observe that $\omega^\Sigma _K = \omega_G ^\Sigma$ when $\Sigma$ is closed and that $(q^* \widetilde{\omega}_G)|_{\operatorname{Hit}_3(\Sigma, \mathscr{C})}=\omega_G|_{\operatorname{Hit}_3(\Sigma, \mathscr{C})}$. Theorem \ref{gendecomp} yields
\begin{align*}
\omega_G\left(\mathrm{d}\mathfrak{s}\frac{\partial }{\partial s_i}, \mathrm{d}\mathfrak{s}\frac{\partial }{\partial s_j}\right)& =\widetilde{\omega}_K \left(\mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial s_i}, \mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial s_j}\right)\\
& = \omega_K ^{P_i} \left(\iota_{\Gamma_{P_i}}^*\frac{\partial }{\partial s_i}, \iota_{\Gamma_{P_i}}^* \frac{\partial }{\partial s_j}\right)+\omega_K ^{P_j} \left(\iota_{\Gamma_{P_j}}^*\frac{\partial }{\partial s_i}, \iota_{\Gamma_{P_j}}^*\frac{\partial }{\partial s_j}\right).
\end{align*}
If $i\ne j$, we have $\iota_{\Gamma_{P_j}}^* \frac{\partial}{\partial s_i}=\iota_{\Gamma_{P_i}}^* \frac{\partial}{\partial s_j}=0$ and the result follows.
Similarly,
\begin{align*}
\omega_G\left(\mathrm{d}\mathfrak{s}\frac{\partial }{\partial s_i}, \mathrm{d}\mathfrak{s}\frac{\partial }{\partial t_j}\right)& =\widetilde{\omega}_K \left(\mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial s_i}, \mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial t_j}\right)\\
& = \omega_K ^{P_i} \left(\iota_{\Gamma_{P_i}}^*\frac{\partial }{\partial s_i}, \iota_{\Gamma_{P_i}}^* \frac{\partial }{\partial t_j}\right)+\omega_K ^{P_j} \left(\iota_{\Gamma_{P_j}}^*\frac{\partial }{\partial s_i}, \iota_{\Gamma_{P_j}}^*\frac{\partial }{\partial t_j}\right)\\
&=0,
\end{align*}
since $\iota_{\Gamma_{P_j}}^* \frac{\partial}{\partial s_i}=\iota_{\Gamma_{P_i}}^* \frac{\partial}{\partial t_j}=0$.
When $i=j$, we argue in the same fashion:
\[
\omega_G\left(\mathrm{d}\mathfrak{s}\frac{\partial }{\partial s_i}, \mathrm{d}\mathfrak{s}\frac{\partial }{\partial t_i}\right)=\widetilde{\omega}_K \left(\mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial s_i}, \mathrm{d}(q\circ \mathfrak{s})\frac{\partial }{\partial t_i}\right)= \omega_K ^{P_i} \left(\frac{\partial }{\partial s_i}, \frac{\partial }{\partial t_i}\right)=-1.
\]
Here $\omega_K ^{P_i} (\frac{\partial }{\partial s_i}, \frac{\partial }{\partial t_i})=-1$ is due to Theorem 5.8 of H. Kim \cite{kim1999}.
\end{proof}
\begin{lemma}\label{form} For each $i=1,2,\cdots, 2g-2$, the Hamiltonian vector field $\mathbb{X}_{\mathbf{s}_i}$ is of the form
\[
\mathbb{X}_{\mathbf{s}_i} = \frac{\partial}{\partial \mathbf{t}_i } + \sum_{j=1} ^{3g-3} \left(a_j \frac{\partial}{\partial u_j} + b_j \frac{\partial}{\partial v_j}\right)
\]
for some smooth functions $a_j$ and $b_j$.
\end{lemma}
\begin{proof}
The most general form of the Hamiltonian vector field $\mathbb{X}_{\mathbf{s}_i}$ of $\mathbf{s}_i$ is
\begin{multline*}
\mathbb{X}_{\mathbf{s}_i}= \sum_{j=1} ^{2g-2} a_{\mathbf{s}_j} \frac{\partial}{\partial \mathbf{s}_j}+ \sum_{j=1} ^{2g-2} a_{\mathbf{t}_j} \frac{\partial}{\partial \mathbf{t}_j}+ \sum_{j=1} ^{3g-3} a_{u_j} \frac{\partial}{\partial u_j}+\sum_{j=1} ^{3g-3} a_{v_j} \frac{\partial}{\partial v_j}\\ +\sum_{j=1} ^{3g-3} a_{\ell_j} \frac{\partial}{\partial \ell_j}+\sum_{j=1} ^{3g-3} a_{m_j} \frac{\partial}{\partial m_j}.
\end{multline*}
If we compute $\omega_G(\mathbb{X}_{\mathbf{s}_i}, \mathbb{X}_{\ell_k})$, we get
\[
\omega_G(\mathbb{X}_{\mathbf{s}_i}, \mathbb{X}_{\ell_k})= \mathrm{d} \mathbf{s}_i \left(\frac{\partial }{\partial u_k} \right)= \frac{\partial \mathbf{s}_i}{\partial u_k} =0.
\]
On the other hand,
\begin{align*}
-\omega_G(\mathbb{X}_{\mathbf{s}_i}, \mathbb{X}_{\ell_k}) & =\sum_{j=1} ^{2g-2} a_{\mathbf{s}_j} \frac{\partial \ell_k}{\partial \mathbf{s}_j}+ \sum_{j=1} ^{2g-2} a_{\mathbf{t}_j}\frac{\partial \ell_k}{\partial \mathbf{t}_j}+ \sum_{j=1} ^{3g-3} a_{u_j} \frac{\partial \ell_k}{\partial u_j}+\sum_{j=1} ^{3g-3} a_{v_j} \frac{\partial \ell_k}{\partial v_j}\\
& \qquad+\sum_{j=1} ^{3g-3} a_{\ell_j} \frac{\partial \ell_k}{\partial \ell_j}+\sum_{j=1} ^{3g-3} a_{m_j} \frac{\partial \ell_k}{\partial m_j}\\
&=a_{\ell_k}.
\end{align*}
It follows that $\mathbb{X}_{\mathbf{s}_i}$ does not have $\frac{\partial}{\partial \ell_k}$ components, $k=1,2,\cdots, 3g-3$. Similarly, since $-\omega_G(\mathbb{X}_{\mathbf{s}_i} , \mathbb{X}_{m_k}) = 0 = a_{m_k}$, we can conclude that $\mathbb{X}_{\mathbf{s}_i}$ does not contain $\frac{\partial}{\partial m_k}$ components, $k=1,2,\cdots, 3g-3$, either. Thus,
\[
\mathbb{X}_{\mathbf{s}_i} = \sum_{j=1} ^{2g-2} a_{\mathbf{s}_j} \frac{\partial}{\partial \mathbf{s}_j}+ \sum_{j=1} ^{2g-2} a_{\mathbf{t}_j} \frac{\partial}{\partial \mathbf{t}_j}+ \sum_{j=1} ^{3g-3} a_{u_j} \frac{\partial}{\partial u_j}+\sum_{j=1} ^{3g-3} a_{v_j} \frac{\partial}{\partial v_j}.
\]
We showed in Corollary \ref{comm} that
\[
\omega_G \left(\frac{\partial}{\partial \mathbf{s}_j},\frac{\partial}{\partial \mathbf{s}_k}\right)=0,\quad\text{ and }\quad \omega_G \left(\frac{\partial}{\partial \mathbf{s}_j},\frac{\partial}{\partial \mathbf{t}_k}\right)=\begin{cases} -1& \text{if }j=k \\ 0 & \text{if }j\ne k\end{cases}.
\]
Recall also that $\mathbb{X}_{\ell_i}=\frac{\partial}{\partial u_i}$, and $\mathbb{X}_{m_i}=\frac{\partial}{\partial v_i}$. Hence for any $j$ and $k$,
\[
\omega_G\left(\frac{\partial }{\partial u_j}, \frac{\partial }{\partial \mathbf{s}_k}\right) =\omega_G\left(\frac{\partial }{\partial v_j}, \frac{\partial }{\partial \mathbf{s}_k}\right) =\omega_G\left(\frac{\partial }{\partial u_j}, \frac{\partial }{\partial \mathbf{t}_k}\right) =\omega_G\left(\frac{\partial }{\partial v_j}, \frac{\partial }{\partial \mathbf{t}_k}\right) =0.
\]
Combining these two results, we get
\[
1=\omega_G \left(\mathbb{X}_{\mathbf{s}_i}, \frac{\partial}{\partial \mathbf{s}_i}\right)= a_{\mathbf{t}_i} \omega_G\left(\frac{\partial}{\partial \mathbf{t}_i} , \frac{\partial}{\partial \mathbf{s}_i}\right) = a_{\mathbf{t}_i}
\]
and, whenever $k$ is not equal to $i$,
\[
0=\omega_G\left(\mathbb{X}_{\mathbf{s}_i},\frac{\partial}{\partial \mathbf{s}_k}\right)=a_{\mathbf{t}_k} \omega_G\left(\frac{\partial}{\partial \mathbf{t}_k}, \frac{\partial}{\partial \mathbf{s}_k}\right)=a_{\mathbf{t}_k}.
\]
Thus it follows that
\[
a_{\mathbf{t}_k}=\begin{cases} 1 & \text{if } k=i\\ 0 & \text{if }k\ne i\end{cases}.
\]
In particular $\mathbb{X}_{\mathbf{s}_i}$ does not have $\frac{\partial}{\partial \mathbf{t}_k}$ components for all $k$ different from $i$.
Finally by computing
\[
0=\frac{\partial \mathbf{s}_i}{\partial \mathbf{t}_k}=\omega_G\left(\mathbb{X}_{\mathbf{s}_i}, \frac{\partial}{\partial \mathbf{t}_k}\right) = a_{\mathbf{s}_k}
\]
we can show that $\mathbb{X}_{\mathbf{s}_i}$ does not have $\frac{\partial}{\partial \mathbf{s}_k}$ components for all $k$.
\end{proof}
\begin{lemma}\label{complete}
Each vector field $\mathbb{X}_{\mathbf{s}_i}$, $i=1,2,\cdots, 2g-2$, is complete.
\end{lemma}
\begin{proof} From Lemma \ref{form}, $\mathbb{X}_{\mathbf{s}_i}$ is of the form
\[
\mathbb{X}_{\mathbf{s}_i} = \frac{\partial}{\partial \mathbf{t}_i }+ \sum_j \left(a_j \frac{\partial}{\partial u_j} + b_j \frac{\partial}{\partial v_j}\right).
\]
We investigate the coefficient functions $a_j$, $b_j$.
Observe that
\[
\omega_G\left(\mathbb{X}_{\mathbf{s}_i}, \frac{\partial}{\partial \ell_j}\right)=\frac{\partial \mathbf{s}_i}{\partial \ell_j}=0.
\]
On the other hand
\[
\omega_G\left(\mathbb{X}_{\mathbf{s}_i}, \frac{\partial}{\partial \ell_j}\right)=\omega_G\left(\frac{\partial }{\partial \mathbf{t}_i}, \frac{\partial}{\partial \ell_j}\right)+a_{j}.
\]
Therefore
\[
a_{j} = -\omega_G\left(\frac{\partial}{\partial \mathbf{t}_i}, \frac{\partial}{\partial \ell_j}\right).
\]
Similarly,
\[
\omega_G\left(\mathbb{X}_{\mathbf{s}_i}, \frac{\partial}{\partial m_j}\right)=\frac{\partial \mathbf{s}_i}{\partial m_j}=\omega_G \left(\frac{\partial}{\partial \mathbf{t}_i}, \frac{\partial}{\partial m_j}\right)+b_j=0
\]
shows that
\[
b_j =- \omega_G\left(\frac{\partial}{\partial \mathbf{t}_i}, \frac{\partial}{\partial m_j}\right).
\]
Since $\mathbb{X}_{\ell_k}= \frac{\partial}{\partial u_k}$ is a Hamiltonian vector field as well as a coordinate vector field, we have
\[
0=(\mathcal{L}_{\mathbb{X}_{\ell_k}} \omega_G) \left(\frac{\partial}{\partial \mathbf{t}_i}, \frac{\partial}{\partial \ell_j}\right) = \mathbb{X}_{\ell_k}\omega_G \left(\frac{\partial}{\partial \mathbf{t}_i},\frac{\partial}{\partial \ell_j}\right)=-\frac{\partial a_j}{\partial u_k}
\]
which yields that the functions $a_{1},\cdots,a_{3g-3}$ do not depend on $u_1, \cdots, u_{3g-3}$. The same argument using $\mathbb{X}_{m_k}$ instead of $\mathbb{X}_{\ell_k}$ shows that $a_{1},\cdots,a_{3g-3}$ do not depend on the variables $v_1,\cdots, v_{3g-3}$ either. Similarly, $b_{1},\cdots,b_{3g-3}$ are functions depending only on $\mathbf{s}_i,\mathbf{t}_i, \ell_i$ and $m_i$.
The equation $\dot{\mathbf{x}}(t) = \mathbb{X}_{\mathbf{s}_i} (\mathbf{x}(t))$ for an integral curve reads
\begin{gather}
\frac{\mathrm{d} \mathbf{s}_{j}(t)}{\mathrm{d} t} = 0, \quad j=1,2,\cdots, 2g-2 \label{flow1}\\
\frac{\mathrm{d} \mathbf{t}_j (t)}{\mathrm{d} t}= 0, \quad j=1,2,\cdots, 2g-2, \, j\ne i \\
\frac{\mathrm{d} \mathbf{t}_i (t)}{\mathrm{d} t}=1\\
\frac{\mathrm{d} \ell_j (t)}{\mathrm{d} t} = 0, \quad j=1,2,\cdots, 3g-3\\
\frac{\mathrm{d} m_j (t)}{\mathrm{d} t}=0, \quad j=1,2,\cdots, 3g-3 \label{flow2}\\
\frac{\mathrm{d} u_j (t)}{\mathrm{d} t}=a_j, \quad j=1,2,\cdots, 3g-3 \label{flow3}\\
\frac{\mathrm{d} v_j (t)}{\mathrm{d} t}=b_j, \quad j=1,2,\cdots, 3g-3.\label{flow4}
\end{gather}
A solution for equations (\ref{flow1})-(\ref{flow2}) is
\begin{equation}\label{sol}
\begin{cases}
\mathbf{t}_i =t + \text{const.}& \\
\mathbf{t}_j = \text{const.} & j=1,2,\cdots, 2g-2,j \ne i\\
\mathbf{s}_j= \text{const.} & j=1,2,\cdots, 2g-2\\
\ell_j,m_j = \text{const.} & j =1,2,\cdots, 3g-3
\end{cases}.
\end{equation}
Keeping in mind that $a_j$ and $b_j$ are functions of $\mathbf{s}_i, \mathbf{t}_i, \ell_i, m_i$ only, plug the solution (\ref{sol}) into $a_j$ and $b_j$. Then $a_j$ and $b_j$ become smooth functions of the time $t$ alone, so that equations (\ref{flow3}) and (\ref{flow4}) can be solved for all $t$, as the quadrature below makes explicit. Therefore the vector field $\mathbb{X}_{\mathbf{s}_i}$, for each $i=1,2,\cdots, 2g-2$, is complete.
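Explicitly, once (\ref{sol}) is substituted, the remaining equations integrate by quadrature,
\[
u_j(t) = u_j(0) + \int_0 ^t a_j \, \mathrm{d} t', \qquad v_j(t)=v_j(0)+\int_0 ^t b_j \, \mathrm{d} t',
\]
and the integrals exist for every $t$ since the integrands are smooth functions of $t'$.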
\end{proof}
We define a function $F:\operatorname{Hit}_3(\Sigma) \to \mathbb{R}^{8g-8}$ to be
\[
F([\rho])= (\mathbf{s}_1(\rho), \cdots, \mathbf{s}_{2g-2}(\rho), \ell_1(\rho),\cdots,\ell_{3g-3} (\rho), m_1(\rho),\cdots, m_{3g-3}(\rho)).
\]
\begin{lemma}\label{lagrangian}
For each $x\in \operatorname{Image} F\subset \mathbb{R}^{8g-8}$, the fiber $F^{-1}(x)$ is a simply connected Lagrangian submanifold.
\end{lemma}
\begin{proof}
The tangent space at each point of $F^{-1}(x)$ is spanned by the vectors
\[
\mathbb{X}_{\mathbf{s}_1},\cdots, \mathbb{X}_{\mathbf{s}_{2g-2}}, \mathbb{X}_{\ell_1}, \cdots, \mathbb{X}_{\ell_{3g-3}}, \text{ and } \mathbb{X}_{m_1}, \cdots, \mathbb{X}_{m_{3g-3}}.
\]
By Corollary \ref{comm} and Lemma \ref{form}, $F^{-1}(x)$ is a Lagrangian submanifold.
By Lemma \ref{complete}, $\mathbb{X}_{\mathbf{s}_1},\cdots, \mathbb{X}_{\mathbf{s}_{2g-2}}$, $\mathbb{X}_{\ell_1},\mathbb{X}_{m_1},\cdots,\mathbb{X}_{\ell_{3g-3}} , \mathbb{X}_{m_{3g-3}}$ are commuting complete vector fields tangent to each fiber $F^{-1}(x)$. Thus, the Hamiltonian flows of $\mathbf{s}_1, \cdots, \mathbf{s}_{2g-2}, \ell_1,m_1, \cdots, \ell_{3g-3},m_{3g-3}$ induce an $\mathbb{R}^{8g-8}$-action on $F^{-1}(x)$. By Lemma \ref{form}, this action is free and transitive on each fiber $F^{-1}(x)$. Therefore, each fiber is diffeomorphic to $\mathbb{R}^{8g-8}$, which is simply connected.
\end{proof}
Now we can prove Theorem \ref{globaldarbouxintro}. Let $B$ be the image of the function $F: \operatorname{Hit}_3(\Sigma) \to \mathbb{R}^{8g-8}$. According to Section 1.8 of Goldman \cite{goldman1990}, $B$ is diffeomorphic to $\mathbb{R} ^{2g-2}\times \mathfrak{R}$ and
\[
\mathfrak{R}=\{(\ell_1, \cdots, \ell_{3g-3}, m_1, \cdots, m_{3g-3})\in \mathbb{R}_+^{3g-3}\times \mathbb{R}^{3g-3}\,|\,|m_i|< \ell_i \}
\]
where $\mathbb{R}_+=\{x\in \mathbb{R}\,|\, x>0\}$. In particular, $B$ is contractible. In addition, due to Lemmas \ref{complete} and \ref{lagrangian}, the map $F: \operatorname{Hit}_3(\Sigma) \to \mathbb{R}^{8g-8}$ satisfies the conditions of Theorem \ref{existenceofdarboux}, and the result follows.
The Boltzmann transport equation has played a very important role in the
development of non-equilibrium statistical mechanics. This microscopic
equation describes the time evolution of a distribution function in
phase space and has also provided a connection with macroscopic hydrodynamic
equations via a moment expansion in the momentum. Important applications
are, for example,
the well-known Chapman-Enskog calculations of transport coefficients.
In later developments the Markovian Boltzmann equation has been extended
to include memory and correlation effects in the collision integral,
and there are a large number
of publications concerning such improvements. These classical kinetic
equations describe the time evolution of a one-time distribution
function $f({\bf r,p},t)$.
Meanwhile, a quantum two-time theory for the
time evolution of real-time Green's functions $G({\bf r,p},t,t')$ has been
developed using the Schwinger-Keldysh formalism. The quantum image of the
classical Boltzmann equation is usually referred to as the Kadanoff-Baym
(KB)
equations \cite{KB62}. These equations have often been considered too
complicated to solve numerically in the past. However, several numerical applications exist now. The Kadanoff-Baym
equations have also played an important
role in the improvements of the Boltzmann equation especially by using
the \it Generalised Kadanoff Baym Ansatz \rm (GKB) of Lipavsky et al. \cite{LSV86}. This
ansatz allows a reduction of the two-time formalism to a formally
simpler one-time formalism, e.g. the Boltzmann equation.
The time off-diagonal Green's function
elements are related by the GKB ansatz to the time-diagonal ones via the spectral functions.
By various approximations of the spectral functions, various one-time
approximations of the two-time equations can be obtained (see e.g.
\cite{hsk96}).
These kinetic equations describe different relaxation stages.
During the very fast first stage, correlations imposed by the initial
preparation of the system
are decaying \cite{B46,BKSBKK96}. These are contained in
off-shell or dephasing processes described by two-time propagators. During this stage of
relaxation the quasiparticle picture is established \cite{LKKW91,MSL97a}.
After this very fast process the second stage develops,
during which the one-particle distribution relaxes towards the equilibrium
value \cite{RTb90} with a relaxation time $\tau_{\rm rel}$.
First the momentum anisotropy relaxes by small-angle scattering events and
then the energetic degrees of freedom relax. During this
relaxation stage the virial corrections are established and can
be consistently described by a nonlocal Boltzmann kinetic
equation \cite{SLM96,LSM97}.
The time scale of the first stage, $\tau_c$, is mostly shorter than the relaxation
time $\tau_{\rm rel}$ of the one-particle distribution, which is entirely
determined by the collision process. We will focus on the first stage,
which is related to the formation of correlations.
The formation of correlations is connected with an increase of the kinetic energy or, equivalently, the build-up
of correlation energy. This is due to rearrangement processes which cause higher-order correlation
functions to decay until only the one-particle distribution function relaxes. Because the correlation energy is a
two-particle observable, we expect that the relaxation of higher-order correlations can be observed best
within this quantity. Of course, the total energy of the system is conserved
\begin{eqnarray}
\frac{\partial}{\partial t} \left ( \langle \frac{p_1^2}{2 m} \rangle (t) +
E_{\rm corr} (t)\right ) &=& 0\label{cons},
\end{eqnarray}
which means that the kinetic energy increases at the cost of the correlation energy $E_{\rm corr}(t)$.
We will observe a transformation of correlation energy into kinetic energy. This process saturates at the end of the
first stage of relaxation. It is more convenient to calculate the kinetic energy than the correlation energy
because
the kinetic energy is a one-particle observable. Consequently, the time dependence of the kinetic energy will
be investigated within the kinetic theory. This can only be accomplished if we employ a kinetic equation which
leads to the total energy conservation (\ref{cons}). It is immediately obvious that the ordinary Boltzmann
equation cannot be appropriate for this purpose because the kinetic energy is in this case an invariant of the collision
integral and hence constant in time. Imposing conservation of the form (\ref{cons}), we have to consider
non-Markovian kinetic equations \cite{M94}, which account for the formation of two-particle correlations.
Within these kinetic equations the collision integral is an
expression of the two-particle correlations. While the one-particle distribution remains almost unchanged
during the first stage of relaxation, the two-particle correlations relax. Consequently the one-particle
spectral function is changing. The latter is responsible for the dephasing and therefore for the
formation of correlations.
Analytical expressions for the time dependence of the kinetic and correlation energy are obtained in this paper
by explicitly considering this dephasing process.
We start from a kinetic equation appropriate for short time scales in the Born
approximation.
It contains the full
memory effect but no damping, i.e. no explicit width
of the spectral function, because quasiparticles are not
yet formed on these time scales.
In Chapter II we give an overview of the gradient approximation
with emphasis on energy conservation and the correlation energy.
In appendix \ref{append} we discuss the limit of
complete collisions and the weakening of initial correlations.
In appendix \ref{appc} we calculate the equilibrium value of
the correlation energy for high and low temperature
limits analytically using Gaussian and Yukawa type interactions.
On the very first time scale we can neglect retardation effects in the one
particle distribution function, but we have to
keep into account off-shell properties of the collision integral.
Therefore we use the {\it finite duration } approximation in
Chapter III.
It leads to the correct equilibrium value and is intuitively
clear from Fermi's Golden Rule. It is compared numerically
with the KB results.
These calculations also bring to attention the correlation time, i.e.
the time for the build-up of correlations.
From the observation that the time variation of the
distribution functions can be neglected in the first stage of relaxation, we
obtain an
analytic expression for the time-dependent formation of correlations.
In particular, we give analytical results for the formation
time of correlations in the high and the low temperature limits.
Comparisons are made with numerical calculations.
Chapter IV summarizes our results and we discuss some
aspects regarding the correlation time.
The appendices show some important relations necessary for our analytic
calculations.
\section{Correlation Energy in gradient expansion}
The kinetic equation in the Born approximation for spatially
homogeneous media,
including the complete time convolution (memory effect) but no damping,
is called the Levinson equation and reads
\cite{L65,L69,JW84,MWR93}
\begin{eqnarray}\label{kinetic}
&& \frac{\partial}{\partial t} f(p_1)
=\nonumber\\
&& \frac{2 s_1 s_2}{\hbar ^2}
\int \frac{dp_2 dp_1' dp_2'}{(2 \pi \hbar )^6}
V(\mid p_1 - p_1'\mid )^2
\delta (p_1 + p_2 - p_1' - p_2')
\nonumber \\
&\times& \int_{t_0}^{t }d\tau \cos
(\frac{1}{\hbar} (E_1 + E_2 - E_1' - E_2')(t-\tau)
)\nonumber\\
&\times &
(f(p_1', \tau ) f(p_2', \tau )
{\bar f}(p_1, \tau )
{\bar f}(p_2, \tau ) \nonumber\\
&-& f(p_1, \tau ) f(p_2, \tau ) {\bar f}(p_1',
\tau )
{\bar f}(p_2', \tau ) )
\nonumber \\
\end{eqnarray}
with $\bar f = 1-f$, the free particle dispersion
$E=p^2/2m$ and the spin-isospin degeneracy $s_1,s_2$.
The distribution functions are normalized to the density as $s \int {d p\over (2 \pi \hbar)^3} f(p)=n$. For the sake of simplicity we have omitted the Hartree and Fock contributions. Since we discuss the correlation energy, the mean-field contribution is just additive. The dispersion $E(p)$ in the collision integral is modified by the Fock term, but we include this effect only in an approximate way by understanding $m$ as an effective mass.
The Boltzmann collision integral is obtained from
equation (\ref{kinetic})
if: (i) one neglects the time
retardation in
the distribution functions, i.e. the memory effects,
and (ii) the finite initial time $t_0$ is set equal to
$-\infty$, corresponding to what is usually referred to as
the limit of complete
collisions.
The memory effect is condensed in the explicit retardation of the distribution function. This would lead to
gradient
contributions to the kinetic equation which can be shown to be responsible for the formation of high-energy
tails in the distribution function \cite{MR95,SL95}. This effect will be established during the second stage of
relaxation.
The second effect is contained in the energy broadening or off-shell behavior in (\ref{kinetic}). This is
exclusively related to the spectral properties of the one-particle propagator and therefore determined by
the relaxation of two-particle correlations.
Since we are studying the very short time region after the initial disturbance we can separate the one-particle
and two-particle relaxation. On this time scale the memory in the distribution functions can be neglected but
we will keep the spectral relaxation implicit in the off-shell $\cos$-function of (\ref{kinetic}).
This effect is the most relevant one
for obtaining
the time evolution of the interaction (or correlation) energy and
therefore
energy conservation.
In the following discussion we shall only be concerned with the
time integration.
Therefore we introduce the short-hand notation for
equation (\ref{kinetic})
\begin{equation}\label{short}
\frac{\partial}{\partial t} f(p_1)=\frac{ 1}{ \hbar^2}
\int_{t_0}^{t }d\tau
\cos{{\Delta E (t-\tau ) \over \hbar}} \; F(\tau),
\end{equation}
where
\beq
F(\tau)&=&
2 V(\mid p_1 - p_1'\mid )^2
\delta (p_1 + p_2 - p_1' - p_2') \nonumber\\
&& \times
(f(p_1', \tau ) f(p_2', \tau ) {\bar f}(p_1, \tau )
{\bar f}(p_2, \tau ) \nonumber \\
&&- f(p_1, \tau ) f(p_2, \tau ) {\bar f}(p_1',
\tau ) {\bar f}(p_2', \tau ) )\nonumber \\
\label{short2}
\eeq
and the 9-dimensional momentum integration
is suppressed in
Eq. (\ref{short}) and in the following.
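To make the structure of the time convolution in (\ref{short}) concrete, the following minimal numerical sketch integrates the memory kernel for a single mode; all parameter values, as well as the frozen gain-loss factor $F_0$, are hypothetical and serve only for illustration. For a frozen $F$ the integral is exact, $F\sin(\Delta E\, t/\hbar)/(\hbar\,\Delta E)$, which is the finite-duration off-shell factor encountered below.
\begin{verbatim}
# Toy memory integral for one mode (illustrative only):
# df/dt = (1/hbar^2) * int_0^t cos(dE*(t-tau)/hbar) * F0 dtau
import numpy as np

hbar, dE, F0 = 1.0, 2.0, 0.1           # hypothetical units, hbar = 1
t = np.linspace(0.0, 20.0, 2001)
dfdt = np.empty_like(t)
for i, ti in enumerate(t):
    tau = np.linspace(0.0, ti, i + 2)  # history grid on [0, ti]
    dfdt[i] = F0 / hbar**2 * np.trapz(np.cos(dE * (ti - tau) / hbar), tau)

# exact result for frozen F: F0*sin(dE*t/hbar)/(hbar*dE)
exact = F0 * np.sin(dE * t / hbar) / (hbar * dE)
print(abs(dfdt - exact).max())         # small discretization error
\end{verbatim}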
From Eq. (\ref{short}) one derives
balance equations
by integration over momentum $p_{1}$. The first two moments,
i.e. the density and
total linear momentum, are
conserved. For the Markovian Boltzmann equation the kinetic energy is
conserved, while the potential energy is zero. In the present case
including the memory effect one finds \cite{M94}
\begin{eqnarray}\label{energy}
\frac{\partial}{\partial t} ( \langle \frac{p_1^2}{2 m}
\rangle +
E_{\rm corr}) &=& 0,
\end{eqnarray}
where the correlation energy $E_{\rm corr}$
is given by
\begin{eqnarray}\label{energ}
E_{\rm corr}(t)-E_{\rm corr}(t_0) &=& -\frac{1}{4 \hbar }
\langle \int_{0}^{t-t_0 }\!\!d\tau \, \sin{ \left({\Delta E\tau
\over \hbar}\right )} F(t-\tau) \rangle.\nonumber\\
&&
\end{eqnarray}
Here $\langle \cdot \rangle$ indicates the integration over $p_1$. We would like to point out that we have neglected initial correlations in the kinetic equation (\ref{kinetic}), in agreement with the studied sudden switching approximation. Consequently, $E_{\rm corr}(t_0)$ describes only possible constant background correlations not formed by the binary collisions.
Expanding $F(t-\tau)$ around $t$ one obtains a
gradient expansion series for the interaction energy that reads
\beq
E_{\rm corr}(t)-E_{\rm corr}(0)&=&\sum\limits_{n=0}^{\infty} <V_n(t)
F^{(n)}(t)>\nonumber\\
V_n(t)&=&- \frac {1}{ 4 \hbar} {(-1)^n \over n !}
\int\limits_0^{t-t_0} d t' t'^n \sin{{ \Delta E t'
\over \hbar}},\nonumber\\\label{s1}
\eeq
where the $n$-th time
derivative of $F(t)$ is given by
$F^{(n)}(t)={\partial^n \over \partial t^n} F(t)$.
Taking the time-derivative of (\ref{s1}) one finds
\beq
{\pa t} E_{\rm corr}&=&\sum\limits_{n=0}^{\infty}
<V'_n(t)F^{(n)}(t)+V_n(t)F^{(n+1)}(t)>.\nonumber
\\\label{s3}
\eeq
On the other hand one
can express the time derivative of the
interaction energy in terms of a
gradient expansion of the collision integral directly from Eq.
(\ref{kinetic}).
This leads to
\beq
{\pa t} E_{\rm corr}&=&\sum\limits_{n=0}^{\infty} <I_n(t)
F^{(n)}(t)>\nonumber\\
I_n(t)&=&- \frac {1}{ 4 \hbar} {(-1)^n \over n !}
\int\limits_0^{t-t_0} d t' t'^n \Delta E \cos{{ \Delta
E t' \over \hbar}}.\nonumber\\\label{s2}
\eeq
Note the difference between the two expansions
(\ref{s2}) and (\ref{s3}). For example, the zero-order
term in Eq. (\ref{s3}) contains not only the zero-order term but also
part
of the first-order term of Eq. (\ref{s2}).
This is understandable, because the collision integral
determines the \it time derivative \rm (\ref{s2}) of the correlation
energy. One
has to expand
the collision integral
one step further
in order
to obtain the correlation energy (\ref{s1}) up to a specific level
of
gradient expansion. This is a quite general observation
for any order of gradient approximation.
Comparing the two gradient expansions (\ref{s2}) and (\ref{s3})
we establish a relation between
$I_{n}(t)$ and $V_{n}(t)$
\beq
I_n&=&{\pa t}V_n+V_{n-1}\nonumber\\
I_0&=&{\pa t} V_0
=-\frac {1}{ 4 \hbar} \sin{{ \Delta E (t-t_0)\over
\hbar}},
\label{z1}
\eeq
or inversely
\beq
V_n(t)&=&\int\limits_{t_0}^t dt'
(I_n(t')-V_{n-1}(t'))\nonumber\\
V_0(t)&=&\int\limits_{t_0}^t dt' I_0(t')=\frac {1}{ 4 } {\cos{{ \Delta E (t-t_0)\over
\hbar}}-1 \over \Delta E}.
\label{z2}
\eeq
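As a quick consistency check, differentiating $V_0$ in (\ref{z2}) indeed reproduces $I_0$ in (\ref{z1}):
\[
{\pa t} \left( \frac 1 4 \, {\cos{{ \Delta E (t-t_0)\over \hbar}}-1 \over \Delta E} \right) = -\frac{1}{4\hbar}\,\sin{{ \Delta E (t-t_0)\over \hbar}} = I_0 .
\]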
The long-time limits of the different gradient approximations of the kinetic equation are presented in appendix \ref{append} and are found to be unique.
The limit of complete collisions, $t_0\rightarrow -\infty$, and the connected problem of the weakening of initial correlations are discussed there.
\section{The formation of correlations}
To lowest order in
the gradient expansion (\ref{s1}), the correlation energy is
\beq\label{vv}
E_{\rm corr}(t)-E_{\rm corr}(0) &=& \frac 1 4 \langle
\frac{\cos{{\Delta E
(t-t_0)\over \hbar}}-1}{\Delta E}
F^{(0)}(t) \rangle. \nonumber\\
\eeq
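A small numerical toy may illustrate (\ref{vv}); all numbers here are hypothetical, and the smooth weight $w(\Delta E)=\Delta E\,{\rm e}^{-(\Delta E/T)^2}$ merely mimics a thermal distribution of energy mismatches. It is chosen so that the average is known in closed form, $\sqrt{\pi}\,T\,({\rm e}^{-T^2t^2/(4\hbar^2)}-1)$, showing that the averaged build-up kernel $(\cos(\Delta E\, t/\hbar)-1)/\Delta E$ saturates on the time scale $\hbar/T$, the formation of correlations discussed below.
\begin{verbatim}
# toy average of the build-up kernel over w(dE) = dE*exp(-(dE/T)^2)
import numpy as np

hbar, T = 1.0, 1.0
dE = np.linspace(-12.0, 12.0, 200001)
w = dE * np.exp(-(dE / T) ** 2)
for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    kernel = (np.cos(dE * t / hbar) - 1.0) / dE
    kernel[np.abs(dE) < 1e-12] = 0.0    # removable singularity at dE = 0
    print(t, np.trapz(w * kernel, dE))  # -> -sqrt(pi)*T for t >> hbar/T
\end{verbatim}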
Retaining only the first term for this correlation energy is
equivalent to
an approximation of
the non-Markovian collision integral (\ref{kinetic})
where we neglect the time dependence of the
distribution functions while keeping the finite initial time
$t_0$. This approximation gives, instead of (\ref{short}),
\beq\label{short1}
&&\frac{\partial}{\partial t} f(p_1)=\frac 1 \hbar
{\sin{{\Delta E (t-t_0 ) \over \hbar}} \over \Delta E
}\; F(t)\nonumber\\
&=& \frac{2}{\hbar }
\int \frac{dp_2 dp_1' dp_2'}{(2 \pi \hbar )^6}
V(\mid p_1 - p_1'\mid )^2
\delta (p_1 + p_2 - p_1' - p_2')
\nonumber \\
&\times& {\sin{(E_1 + E_2 - E_1' - E_2')(t-t_0)
/\hbar} \over E_1 + E_2 - E_1' - E_2'}\nonumber\\
&\times &
[f(p_1', t ) f(p_2', t )
{\bar f}(p_1, t )
{\bar f}(p_2, t ) \nonumber\\
&-& f(p_1, t ) f(p_2, t ) {\bar f}(p_1',
t )
{\bar f}(p_2', t ) ].
\nonumber \\
\eeq
Using the same steps which were used
to derive the correlation energy
$E_{\rm corr}(t)$ in
(\ref{energ}) from the collision integral
(\ref{kinetic}), one easily finds that the collision integral
(\ref{short1}) gives the lowest-order term
of the time derivative of this correlation energy, i.e. (\ref{vv}).
The off-shell function in this collision integral contains a memory of the initial state at $t_0$. This induces a
memory in the kinetic equation in spite of the fact that the collision integral is formally Markovian.
From (\ref{s1}) one recognizes that the equilibrium or long-time
limit of the correlation energy is exactly given by the lowest-order
gradient approximation (\ref{vv}), since all higher orders
include time derivatives and therefore vanish on the long time scale.
We summarize three important features of this
approximate collision integral: (i) it reproduces the zero-order
gradient term of the time-dependent correlation energy (\ref{s1});
(ii) it leads to the complete expression for the
equilibrium correlation energy (\ref{eq}); and (iii) it
is a direct consequence of Fermi's Golden Rule, in the sense of
time-dependent perturbation theory.
Previously, this approximation has been considered also
for electron plasmas \cite{BKSBKK96,MSL97a,KBBS96,BK96}.
It was shown in appendix \ref{append} that the two operations of taking the
time derivative and taking the limit $t_{0}\rightarrow -\infty$ do not commute.
This is nicely illustrated by Eq. (\ref{short1})
which gives
the correct correlation
energy (\ref{eq}) if the limit $t_0\rightarrow -\infty$ is
performed afterwards.
If, however, the limit of complete
collisions $t_0 \rightarrow -\infty$ (or $t \rightarrow
\infty$) is performed first on the kinetic equation
(\ref{short1}) it reduces to the Markovian Boltzmann equation without
correlation energy.
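This degeneration can be made quantitative in a short numerical experiment (purely illustrative; the Gaussian is an arbitrary smooth test function): the off-shell weight $\sin(\Delta E\,t/\hbar)/\Delta E$ concentrates to $\pi\hbar\,\delta(\Delta E)$ as $t$ grows, which is precisely how the energy-conserving $\delta$-function of the Boltzmann collision integral emerges.
\begin{verbatim}
# off-shell factor sin(dE*t/hbar)/dE -> pi*hbar*delta(dE) for large t
import numpy as np

hbar = 1.0
dE = np.linspace(-40.0, 40.0, 400001)
test = np.exp(-dE**2)   # smooth test function with test(0) = 1
for t in (1.0, 5.0, 25.0):
    off_shell = (t / hbar) * np.sinc(dE * t / (hbar * np.pi))
    print(t, np.trapz(test * off_shell, dE) / (np.pi * hbar))  # -> 1
\end{verbatim}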
We refer to Eq. (\ref{short1}) as the {\it finite duration }
approximation. It carries the most important features of the build-up of
correlations after the interactions are switched on in the initially uncorrelated system.
This will be demonstrated by some numerical examples below.
We shall first present some analytical results in the high and low
temperature limits and compare
them with the numerical solutions later.
To this end we assume a system consisting of two different types
of particles $a,b$ with
different masses $m_a,m_b$. As a first example
we shall consider a Yukawa-type potential of the form
\beq
V_{\rm Y}(r)={g_{ab} \over r} {\rm e}^{-\kappa r}
\label{pot}
\eeq
where in nuclear physics applications $g_{ab}$ is the
coupling constant and $\kappa$ the inverse effective range of the potential,
given by the mass of the exchanged mesons. In
plasma physics applications the potential (\ref{pot})
represents the Debye potential with $g_{ab}=e_a e_b/\epsilon_0$
given by the two charges and the inverse screening
length
\beq
\kappa^2=\sum\limits_a {4 \pi e_a^2\over \epsilon_0} {\partial n_a \over \partial \mu_a}\label{kappa}
\eeq
where $n_a$ is the density and $\mu_a$ the chemical potential of species $a$.
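As a side note, in the classical (nondegenerate) limit one has $\partial n_a/\partial \mu_a = n_a/T$, so that (\ref{kappa}) reduces to the familiar Debye expression. A minimal Python sketch in SI units (the numbers are merely an assumed example, not taken from the text):
\begin{verbatim}
import numpy as np

e, kB, eps0 = 1.602e-19, 1.381e-23, 8.854e-12

def debye_length(n_m3, T_K, z=1):
    # classical limit of Eq. (kappa): dn/dmu = n/(kB*T),
    # one symmetric pair of charge species +-z*e
    kappa2 = 2.0 * n_m3 * (z * e)**2 / (eps0 * kB * T_K)
    return 1.0 / np.sqrt(kappa2)

print(debye_length(1.0e23, 1.0e4))   # ~ 1.5e-8 m
\end{verbatim}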
As a second example we shall use
a Gau\ss{}-type potential
\beq
V_{G}(r)=V_0 {\rm e}^{-(r/\eta)^2}
\label{ggg}
\eeq
which has been used in nuclear physics applications\cite{D84,hsk95}
with $\eta=0.57\:{\rm fm}$ and $V_0=-453$ MeV.
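For orientation, both potentials can be evaluated directly. A minimal Python sketch with the Gau\ss{} parameters quoted above (the Yukawa coupling and range are left as free placeholders):
\begin{verbatim}
import numpy as np

def V_yukawa(r, g_ab, kappa):
    # Eq. (pot): screened potential, r in fm
    return g_ab / r * np.exp(-kappa * r)

def V_gauss(r, V0=-453.0, eta=0.57):
    # Eq. (ggg) with V0 in MeV and eta in fm
    return V0 * np.exp(-(r / eta)**2)

print(V_gauss(1.0))   # ~ -21 MeV at r = 1 fm
\end{verbatim}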
We will proceed and derive analytical expressions which are
compared with the numerical solution of (\ref{vv}).
The numerical values are compared in table
\ref{tab1} with the solution of the KB equations. An overall good agreement is found.
\subsection{Time dependent correlation energy}
We calculate the build up time of correlations, $\tau_c$,
by inspecting the time derivative of the interaction energy.
We define $\tau_c$ as the time at which this
derivative becomes sufficiently small. This
corresponds to using (\ref{s2}) instead of
(\ref{s1}), but only with the first term, in accordance with the finite
duration approximation of the last section. We have from (\ref{z1})
\beq\label{v2}
{\pa t} E_{\rm corr} &=& -\frac {1}{ 4 \hbar} \langle
\sin{{\Delta E
(t-t_0)\over \hbar}}
F(t) \rangle.
\eeq
\subsubsection{High temperature limit}
In the limit
of high temperature we neglect the
degeneracy and the equilibrium distribution takes the
Maxwell-Boltzmann form
\beq
f(p)=\frac n s \lambda^3 {\rm e}^{-{p^2 \over 2 m T}};\qquad
\lambda^2={2 \pi \hbar^2 \over m T}.\label{bol}
\eeq
We get for the Gau\ss{} potential and $b^2=\hbar^2/(2\mu T \eta^2)$ the result
\beq
&&{\pa t} E_{\rm corr}^G=\sum\limits_{ab}{4 n_a n_b \pi \eta^2 V_0^2 \over \sqrt{2 \mu T}
b^4} \left ({4 \mu \over M} \right )^2
\nonumber\\
&&\nonumber\\
&\times&{\pa \beta} \left ({\pi (\sqrt{\beta^2+16 t^2 T^2/\hbar^2}-\beta)\over 8 (\beta^2+16 t^2 T^2/\hbar^2)} \right )^{1/2}_{\beta
=2/b^2+4 t^2 T^2 /\hbar^2}\nonumber\\
&=& -\sum\limits_{ab}{3 n_a n_b \pi^{3/2} \eta^6 V_0^2 (2 \mu)^{3/2} \over 16 T^{5/2}} \left ( {4 \mu \over M} \right )^2
{1 \over t^4} + o(t^{-5}).\nonumber\\
&&\label{highe}
\eeq
We see
a monotonic decrease of the time derivative or
equivalently a monotonic increase of the correlation energy to
its equilibrium value. It is remarkable that the long time limit is
entirely determined by the classical value. Obviously
no quantum effects enter the formation of correlations in the high temperature limit.
We can also calculate the time dependent formation of correlations for Yukawa-like potentials.
The time derivative of the correlation
energy leads to [appendix (\ref{i4})]
\beq
{\pa t} {E_{\rm corr}^Y(t) \over n}&=& -{e^2 \kappa T\over 2 \hbar}{\rm Im}
\left [(1+2 z^2 ) {\rm e}^{z^2} (1- {\rm erf} (z)) -{2 z \over \sqrt{\pi}} \right ] \nonumber\\&&
\label{class1}
\eeq
where we used $z =\omega_p \sqrt{t^2 - i t {\hbar \over T}}$ and
the collective (plasma)
frequency $\omega_p^2=\kappa^2 T/m$, compare with (\ref{kappa}).
This is the analytical quantum result of the time derivative of the formation of correlation. For the classical limit we
are able to integrate expression (\ref{class1}) with respect to time and arrive at \cite{MSL97a}
\beq
&&E_{\rm corr}^Y(t) -E_{\rm corr}^Y(0)= -{n \kappa e^2\over 4 } \nonumber\\
&\times&
\left [1+ {2 \omega_p t \over \sqrt{\pi}}-(1+2 (\omega_p t)^2 ) {\rm e}^{(\omega_p t)^2} (1- {\rm erf} (\omega_p t)) \right
].
\label{class2}
\eeq
It shows that the formation of correlations proceeds basically on the
time scale given by the inverse collective frequency $1/\omega_p$. Therefore during the first
stage of relaxation the fluctuating and collective effects are more important
than collisions.
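Expression (\ref{class2}) is easily evaluated numerically. A minimal Python sketch using the scaled function ${\rm erfcx}(z)={\rm e}^{z^2}{\rm erfc}(z)$ shows that the square bracket rises from zero and saturates at unity on the scale $1/\omega_p$:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx     # erfcx(z) = exp(z**2)*erfc(z)

def bracket(wt):
    # square bracket of Eq. (class2) as a function of w_p*t
    return 1.0 + 2.0*wt/np.sqrt(np.pi) - (1.0 + 2.0*wt**2)*erfcx(wt)

for wt in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(wt, bracket(wt))          # 0 at t=0, -> 1 for w_p*t >> 1
\end{verbatim}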
\subsubsection{Low temperature limit}
The low temperature value is of special interest,
because it leads to a natural definition of the build up of
correlations.
Using the same steps as in appendix \ref{low} one
obtains from (\ref{v2})
\beq\label{v3}
{\pa t} E_{\rm corr} &=&
-\frac 1 2 m^4 p_f \langle {V^2 \over \cos{\frac \theta
2}} \rangle \tilde I_f
\label{to1}
\eeq
with abbreviation as in (\ref{vcos}).
It is now easy to perform the time integral in (\ref{to1})
to obtain the time dependence of the correlation
energy
\beq
&&E^{low}_{\rm corr}(t)-E_{\rm corr}^{low}(0)= E_{\rm corr}^{\rm low}
(1 + \frac 1 3 ({ \epsilon_f+\epsilon_c \over \pi T})^2)^{-1}
\nonumber\\
&\times&
\left \{ 1-\frac 1 x \sin(x)
+\left ({\epsilon_f +\epsilon_c \over \pi T}\right )^2 \left (\frac 1 3 + \left [\frac 1 x \sin(x)\right ]''\right )\right \}
\label{corrt}
\eeq
with $x=2{\epsilon_f+\epsilon_c \over \hbar} t$ and the equilibrium correlation energy
$E_{\rm corr}^{\rm low}$ from
(\ref{equil}) respectively.
This shows that the correlation energy
is built up and oscillates around the equilibrium value.
These oscillations are damped
as $t^{-1}$ in time.
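The damped oscillation can be made explicit with a minimal Python sketch of (\ref{corrt}), normalized to $E_{\rm corr}^{\rm low}$ and with an assumed, purely illustrative value of the ratio $(\epsilon_f+\epsilon_c)/(\pi T)$:
\begin{verbatim}
import numpy as np

def sinc_dd(x):
    # second derivative of sin(x)/x, the term [sin(x)/x]'' in Eq. (corrt)
    return -np.sin(x)/x - 2.0*np.cos(x)/x**2 + 2.0*np.sin(x)/x**3

def e_corr(x, r):
    # Eq. (corrt) over E_corr^low;
    # x = 2(eps_f+eps_c)t/hbar, r = (eps_f+eps_c)/(pi T)
    return (1.0 - np.sin(x)/x + r**2*(1.0/3.0 + sinc_dd(x))) \
           / (1.0 + r**2/3.0)

x = np.linspace(0.1, 50.0, 500)
print(e_corr(x, r=5.0)[-1])   # oscillates around 1, damped as 1/x
\end{verbatim}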
We would like to point out here that the result for the
formation of correlations at low temperatures (\ref{corrt}) is
independent of the interaction used. Because higher
order interactions described by higher order diagrams
can be cast into a Boltzmann-like collision integral
with more involved transition matrix elements, we always obtain the
same time dependence (\ref{corrt}) for binary interactions, however with a different
equilibrium correlation energy $E_{\rm corr}$.
\subsection{Numerical results}
\vspace{2ex}
\begin{figure}
\epsfxsize=8cm
\epsffile{sig2.epsi}
\caption{\label{1}The time dependent kinetic and
correlation energy vs. time for two
counter-flowing streams of nuclear matter from
(\protect\ref{short1}). The
temperatures and densities of the
colliding beams are $T_1=10$ MeV, $n_1=n_o/60$ and
$T_2=5$ MeV, $n_2=n_o/10$, and the relative momentum is $1.5\:\hbar/$fm,
which
corresponds to a colliding energy of $45$ MeV/nucleon. The beams
start to interact at the time point
$t_0$. One sees the build up of correlation energy
during a correlation time $\tau_c=3$ fm/c. The parameters are chosen in
such a way that the system is nondegenerate, which is
described by $n \lambda^3=0.416 <1$.
The total initial kinetic
energy corresponds (neglecting correlation energy)
to an equilibrated system with a temperature
of $T=32$ MeV.}
\end{figure}
Figure \ref{1} shows the time development of
the
kinetic energy as well as the correlation energy. The equation
(\ref{short1}) has been solved numerically
for two initially
counter-flowing streams of nuclear matter, where the
nucleons interact via the Gau\ss{} type of potential
(\ref{ggg}). The initial temperatures and densities of the
colliding beams are $T_1=10$ MeV, $n_1=n_o/60$ and
$T_2=5$ MeV, $n_2=n_o/10$, respectively. The relative momentum is $1.5\:\hbar/$fm,
which
corresponds to a colliding energy of $45$ MeV/n.
We observe a
build up of correlations during the initial $3-4$ fm/c.
Total energy is conserved and the
kinetic energy is increased by the same amount as the
correlation energy is decreased. Similar results have been found for a different system in \cite{BKSBKK96}.
This is because the system is initially prepared
to be uncorrelated at $t_0=0$.
If the time $t_0$, i.e., the time when the system is uncorrelated,
is shifted to the infinite past, $t_0=-\infty$,
we would not observe any build up of
correlations. The equation
(\protect\ref{short1}) would then in fact reduce to the Boltzmann
equation.
\vspace{8ex}
\begin{figure}
\epsfxsize=8cm
\epsffile{sig3.epsi}
\caption{\label{3}The time dependent kinetic energy from a
solution of the Kadanoff-Baym equation (KB) is shown
together with the results from the finite duration
approximation
(\protect\ref{short1}) and the Boltzmann equation.
For Boltzmann transport the kinetic energy is conserved in each
collision and therefore globally.
The conclusion is that the broadening of the
$\delta$
function of energy conservation in the finite duration approximation
almost accounts for the
time dependent build up of correlations from the exact KB
equation. }
\end{figure}
In figure \ref{3} we compare the results with the exact
solution of Kadanoff-Baym equations \cite{hsk95,hsk96}. We see
that the
finite duration approximation reproduces the exact
result
quite nicely. The small deviation is due to higher
order effects.
In order to investigate a situation with higher degeneracy
we choose a model of two
initially counter-flowing streams of
nuclear matter with densities and temperatures $n_1=n_o/60$,
$T_1=0.5$ MeV
and $n_2=n_o/20$, $T_2=0.1$ MeV, moving with a relative
momentum of $1\:\hbar/$fm corresponding to a collision energy of
$21$ MeV/n. The interaction is again a Gau\ss{}-type of
potential.
In figure \ref{5} the time
evolution of
the kinetic and the correlation energy is plotted.
\vspace{2ex}
\begin{figure}
\epsfxsize=8cm
\epsffile{sig4.epsi}
\caption{\label{5}The time evolution of the correlation
and kinetic energy for two
initially counter-flowing streams of
nuclear matter with densities and temperatures $n_1=n_o/60$,
$T_1=0.5$ MeV
and $n_2=n_o/20$, $T_2=0.1$ MeV moving with a relative
momentum of $1\:\hbar/$fm, which corresponds to a collision energy of
$21$ MeV/n. In this case the correlation time is $\tau_c=9$ fm/c, which is
appreciably
larger than in Fig. \protect\ref{1}. We ascribe this
to the smaller
thermal velocity of the particles in the present case.
The parameters are here, in contrast to
figure \protect\ref{1}, chosen such that the system is
degenerate, $n \lambda^3=1.14 \ge 1$.
The equilibrated temperature is here
(neglecting correlations) $T=11.6$ MeV.}
\end{figure}
This build up of correlations is independent of the
form of the initial distribution. If for example we choose an (equilibrium)
Fermi
distribution as the initial distribution,
a build up of correlations will occur as well.
This is due to the fact that the spatial correlations relate in momentum
space to excitations,
resulting in a distribution looking somewhat like a Fermi distribution
but with a temperature higher than that of the initial uncorrelated
Fermi distribution \cite{hsk95}.
In order to illustrate the temperature dependence of $\tau_{c}$
as well as to demonstrate the quality of limiting analytical
formulae, we plot in figure \ref{ill} (thick lines)
results from the solution of the Kadanoff
and Baym equations for a fixed chemical potential of $37.1$ MeV
and for three different temperatures. The figure shows
the increase of the kinetic energy (equivalent to the decrease of correlation
energy) with time.
The KB results are compared with those from
approximation (\protect\ref{corrt}). One sees that initially while
correlations are built up the agreement is good.
Especially at low temperatures the
oscillations discussed above are obvious in the approximate results
while the KB calculations only show a slight overshoot at the lowest
temperatures.
We believe that the discrepancy is due to the
neglected damping (and perhaps due to the necessary approximations used
in the integrations as discussed above).
The opposite approximate formula for high temperatures (\ref{highe}) is also plotted for the $T=40$ MeV case. The build up of correlations is too fast according to this formula.
\vspace{2ex}
\begin{figure}
\parbox[t]{8cm}{
\psfig{figure=sig1_1.eps,width=8cm,height=7cm,angle=-90}}
\caption{\label{ill}
The formation of correlations plotted as an increase of the kinetic energy with time for temperatures $1,10,40$ MeV. The
chemical potential is fixed to $37.1$ MeV, which corresponds to
densities $0.16, 0.18, 0.35\:{\rm fm}^{-3}$. The thick lines show results
from KB calculations while the thin lines
are approximate values via formula (\protect\ref{corrt}).
The equilibrium correlation energy was chosen to be equal to the
KB result. The oscillations are overestimated by the approximate formula. For $T=40$ MeV we also plotted the high temperature approximate value via (\protect\ref{highe}) as a thin solid line. The build up of correlations is too fast.}
\end{figure}
From the numerical inspection of this section we
summarize three facts:
(i) the build up of correlations is monotonic and
reaches the final value smoothly, (ii) the finite
duration approximation (\ref{short1}) is an excellent
approximation and (iii) also for initial Fermi distributions we see
the same build up of correlations. The latter point shows that this build
up of correlations is not due to an equilibration of an initial nonequilibrium
distribution but mainly due to the decay of higher order
correlation functions which are condensed in the off-shell $\sin$
factor in the collision integral (\ref{kinetic}).
\subsection{Correlation time}
From the numerical observation we conclude that the
correlation energy (\ref{vv}) increases monotonically with time
until it reaches its almost final value (\ref{eq}).
We assume
that the main formation time of correlations
$\tau=t_c-t_0$ is given by reaching this asymptotic limit.
This time can be estimated by the condition
\beq
E_{\rm corr}(\tau) &=& \frac 1 4 \langle
\frac{\cos{{\Delta E
\tau\over \hbar}}-1}{\Delta E}
F(t_c) \rangle \nonumber\\
&\approx& \frac 1 4 \langle \frac{1}{\Delta E}
F(\infty) \rangle
\equiv E_{\rm corr}.
\label{conditio}
\eeq
We solve this equation for $\tau$ approximately
by replacing the $\cos$-function by a linear
approximation within the build up time interval
$t-t_0=(0,\pi \hbar /\Delta E)$
\beq
{1-\cos{{\Delta E (t-t_0) \over \hbar}}\over \Delta E}
\approx {2 (t-t_0) \over \pi \hbar} {\rm Sgn}(\Delta E).
\eeq
This approximation is correct at three points: the
initial time, the final formation time where
$\langle V(\tau)\rangle$ has its maximum, and the time point at half of
this maximal time, $\pi \hbar /(2\Delta E)$. The
latter point coincides as well because $1-\cos{({\Delta E (t-t_0)/\hbar})}$ has
a turning point there. Therefore this linear
approximation overestimates the function in the first
half of the interval $t-t_0=(0,\pi \hbar /\Delta E)$ and
underestimates it in the second part. Furthermore we use
equilibrium distribution functions. With the help of
this approximation we can solve (\ref{conditio})
\beq
\tau &\approx& \frac 1 2 \pi \hbar {\langle \frac{1}{\Delta E}
F(\infty) \rangle \over \langle
{\rm Sgn}(\Delta E) F(\infty) \rangle}.
\label{tau}
\eeq
If we use $<{\rm Sgn} (\Delta E) F>\approx E_{\rm corr} <\Delta E>$, where $<\Delta E>$ is
the mean transition
energy of the collision, we obtain $\tau \approx {\hbar
\over <\Delta E>}$.
This gives the intuitive picture of an {\it uncertainty} principle, i.e. a
smallest time scale determined by $\hbar$ divided by the transition
energy.
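As a rough numerical example (with an assumed, illustrative mean transition energy of $50$ MeV), a minimal Python sketch:
\begin{verbatim}
hbar_c = 197.327            # MeV fm
dE = 50.0                   # MeV, assumed mean transition energy
tau = hbar_c / dE           # in fm/c
print(tau)                  # ~ 4 fm/c
\end{verbatim}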
The high temperature value of (\ref{tau}) can be
calculated analogously to appendix \ref{high}. Instead of
(\ref{v}) and (\ref{vg}) we have now
\beq
{\pi \hbar \over 2 \tau}=4 T {\int\limits_0^{\infty} dx x^2
V^2(x)\int\limits_{-\infty}^{\infty}ds \,{\rm Sgn}(x (s+x)) {\rm
e}^{-s^2}
\over
\int\limits_0^{\infty} dx x
V^2(x)\int\limits_{-\infty}^{\infty}ds {{\rm
e}^{-s^2}\over x+s}},
\label{max}
\eeq
with $V(x)=1/(x^2+b^2)$ for the Yukawa potential and $V(x)={\rm e}^{-(x/b)^2}$
for the Gau\ss{} potential.
For the latter we obtain with the help of appendix \ref{b}
[(\ref{bb})]
\beq
\tau&=& {\pi^2 \hbar \over 4 T} {1\over (2+b^2) (\frac \pi 2+
{\sqrt{2} b \over 2+b^2}- {\rm arctan}(\sqrt{2}/b))}\nonumber\\
&=&{\pi^2 \eta \over 8 v_{th}}(1- { b^2 \over 6}+ o(\hbar^4))
\label{max1}
\eeq
with the thermal velocity $v_{th}^2=T/\mu$ and
$b=\hbar/\eta/\sqrt{2 \mu T}$.
We see that in the low density or quasi-classical limit
$b\rightarrow 0$ the formation time of correlations is
determined entirely by the range of the potential $\eta$ divided by
the thermal velocity.
This result is intuitively appealing.
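A minimal Python sketch of (\ref{max1}) for symmetric nuclear matter (the reduced mass $\mu=m/2$ and the temperature are assumptions for illustration, the Gau\ss{} parameters are those quoted above) compares the full expression with its quasi-classical limit; note that $b$ is not small at these parameters, so the two differ visibly:
\begin{verbatim}
import numpy as np

hbar_c = 197.327                         # MeV fm

def tau_full(T, mu=469.0, eta=0.57):
    # Eq. (max1); T and mu in MeV, eta in fm, result in fm/c
    b = hbar_c / (eta * np.sqrt(2.0 * mu * T))
    den = (2.0 + b**2) * (np.pi/2.0 + np.sqrt(2.0)*b/(2.0 + b**2)
                          - np.arctan(np.sqrt(2.0)/b))
    return np.pi**2 * hbar_c / (4.0 * T * den)

def tau_qc(T, mu=469.0, eta=0.57):
    # b -> 0 limit of Eq. (max1)
    return np.pi**2 * eta / (8.0 * np.sqrt(T / mu))

print(tau_full(10.0), tau_qc(10.0))      # ~2 fm/c vs ~5 fm/c
\end{verbatim}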
The opposite limit of low temperatures can also be
evaluated as in appendix \ref{low}. This limit is
independent of the used interaction because, following
(\ref{to}), the interaction part cancels out in
(\ref{tau}). Then we end up with Fermi integrals
similar to (\ref{c1}), and the resulting formation time of correlations follows from (\ref{tau}) as
\beq
\tau_{\rm low T}&=&{\pi \hbar \over 3 \epsilon_f+\epsilon_c}
\left ( 1 + ({\pi T \over \epsilon_f+\epsilon_c})^2 \right) .
\label{landau}
\eeq
This time agrees with the time where the correlation
energy (\ref{corrt})
has reached its first maximum
\beq
\tau_c\approx {2 \hbar \over
\epsilon_f+\epsilon_c}.
\label{exact}
\eeq
This correlation time limits the validity of the quasiparticle picture, which is established only at times greater than this \cite{MSL97a}.
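For the parameters of figure \ref{ill} ($\epsilon_f \approx 37$ MeV; $\epsilon_c$ is neglected here purely for simplicity), a minimal Python sketch comparing (\ref{landau}) and (\ref{exact}) as printed:
\begin{verbatim}
import numpy as np

hbar_c = 197.327                                   # MeV fm

def tau_low(T, eps_f, eps_c=0.0):
    # Eq. (landau) as printed, result in fm/c
    return (np.pi*hbar_c/(3.0*eps_f + eps_c)
            * (1.0 + (np.pi*T/(eps_f + eps_c))**2))

def tau_first_max(eps_f, eps_c=0.0):
    # Eq. (exact): first maximum of the oscillation in Eq. (corrt)
    return 2.0*hbar_c/(eps_f + eps_c)

print(tau_low(1.0, 37.1), tau_first_max(37.1))     # ~6 and ~11 fm/c
\end{verbatim}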
Incidentally, in the early 1950s the criterion $
\hbar/k_BT<\tau$
was supposed to limit the validity of the Landau
Fermi-liquid
theory for metals \cite{P55}. Later it was
shown by Landau that
this criterion is irrelevant and he proposed the
correct
criterion $\tau > \hbar/\epsilon_F$. Here we have
explicitly calculated the formation time of
correlations which is found to be equivalent to the
memory time.
We remark that this result describes
just the
memory or
collision duration time $\tau_{\rm mem}\equiv\hbar/\Delta E$, see
\cite{MR95}. For nuclear matter at saturation density this time is about $4-5$~fm/c and agrees with the numerical result for the memory time \cite{GWR94}.
\subsection{Range of validity}
The validity of the low and high temperature expressions
can be discussed with the help of two parameters, the
value of the degeneracy $n\lambda^3=\exp{(\mu/T)}$ and the ratio
$\epsilon_f/T$. There are 4 cases
\beq
\matrix{{\rm case}\;1 & n\lambda^3>1\qquad T>\epsilon_f & {\rm only\;for}\;s>1\cr
{\rm case}\;2 & n\lambda^3<1\qquad T>\epsilon_f & \cr
{\rm case}\;3 & n\lambda^3<1\qquad T<\epsilon_f & {\rm only\;for}\;s=1\cr
{\rm case}\;4 & n\lambda^3>1\qquad T<\epsilon_f & \cr}
\label{mat}
\eeq
where $s$ is the degeneracy. In figure \ref{range} we plot these cases for
nuclear matter with $m=938$ MeV.
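The case assignment of (\ref{mat}) can be automated. A minimal Python sketch for nuclear matter (the nonrelativistic Fermi energy and the thermal wavelength are taken from standard textbook expressions; the test points are assumed examples):
\begin{verbatim}
import numpy as np

hbar_c, m = 197.327, 938.0                    # MeV fm, MeV

def case(n, T, s=4):
    # n in fm^-3, T in MeV, s the spin-isospin degeneracy
    lam3 = (2.0*np.pi*hbar_c**2/(m*T))**1.5   # lambda^3 in fm^3
    eps_f = hbar_c**2/(2.0*m)*(6.0*np.pi**2*n/s)**(2.0/3.0)
    if n*lam3 > 1.0:
        return 4 if T < eps_f else 1
    return 3 if T < eps_f else 2

print(case(0.16, 1.0), case(0.1, 40.0))       # case 4 and case 1
\end{verbatim}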
\vspace{2ex}
\begin{figure}
\epsfxsize=8cm
\epsffile{range.epsi}
\caption{\label{range}
The 4 different areas of the density-temperature range according to (\ref{mat}). The high
temperature expansion is applicable in case 2 and the low temperature
expansion in case 4. The right figure represents the situation for
spin-isospin degeneracy $s=4$ and the left figure the situation for $s=1$ for comparison.}
\end{figure}
It is clear that the low temperature expansion can only hold in case 4
and the high temperature expansion in case 2. For nuclear matter with
$s>1$ we see that case 1 is not covered at all by these expansions.
\subsection{Discussion}
We compare in Table \ref{tab1} the different expansions with each other
as well as with exact results from KB-calculations.
The general agreement of the numerical value of the correlation energy (\ref{eq}) with the KB result is
striking. The correlation time (\ref{tau}) calculated numerically agrees reasonably well, too.
For case 2 in row 1 (a high temperature and nondegenerate system)
we can
reproduce the build up time quite well with the approximate
formula (\ref{max1}). This
becomes worse if we approach the degeneracy condition $n \lambda^3=1$ in
the second line of the table. Both of the high temperature cases are too close to
this condition to reproduce the correlation energy correctly. The high
temperature approximate formula (\ref{vtg}) overestimates the exact
value
$E_{\rm corr}$
obtained from the solution of the
time dependent kinetic equation. However, in case 4, for
low temperature and degenerate systems, we can reproduce the simulation
value quite well. The build up time is slightly overestimated by the exact
low temperature value (\ref{exact}). Case 1 is
not covered by the approximate values, which is demonstrated in the fifth
row of the table. There the correlation energy is much overestimated, which is
quite clear from the discussion of Fig. \ref{range}.
The systematic slight increase of the correlation time with temperature (in the low
temperature limit) is explained by the low temperature expansion.
The results related to figure \ref{ill} are represented in rows
6-8. They underline the generally good description of case 4
and the failure of case 1.
In these three cases the chemical potential was kept constant initially
while the temperature
gradually increases, such that the final chemical potential decreases.
According to (\ref{exact}) the correlation time should increase if
the final chemical potential is used instead of the initial one.
This is clearly not the case in the KB solutions, where the
correlation time stays almost constant.
This shows clearly that the formation time cannot be completely
described by equilibrium formulae as done above. The correct formation
time is described by a chemical potential somewhere between the initial and
the final value.
We would like to repeat here that the expression
(\protect\ref{corrt})
for the low temperature limit is universal for
binary
collision approximations, independent of the interaction. The high temperature limit should also be
correct because the Born approximation used here is believed to be a good
approximation for fast particles. The intermediate region remains quite open.
We have calculated only in Born approximation; here especially
higher order correlations beyond the second Born approximation should be
employed. In summary: for low temperatures approximate analytical
expressions can be given,
for higher temperatures the analytical second
Born approximation should be applicable, while the
intermediate region is left for numerical investigations.
\section{Summary}
The gradient approximation of the kinetic equation in second order Born
approximation is investigated.
The interaction energy is derived within different expansions.
The equilibrium value of the correlation energy is obtained from the first
order gradient expansion.
This equilibrium value is calculated for Yukawa and Gau\ss{} type of
potentials and
the results are analytically given in high and low temperature limit.
For contact potentials we rederive the known result for the ground state correlation energy.
A finite duration approximation of the non-Markovian collision integral
is proposed, which follows from time dependent Fermi's
Golden Rule and which is in good agreement with the numerical solution of the
complete
collision integral. Furthermore, it leads to the correct equilibrium value.
Numerical comparisons are made with the solution of the complete
Kadanoff and Baym equation in Born approximation and with this finite
duration approximation.
The build up time of correlations is investigated and it is found that
the low temperature value is universal for any approximation at the
binary collision level. It is shown that the formation time of
correlations is essentially determined by the ratio of $\hbar$ to the transfer
energy, which can be considered as an analogue of the uncertainty principle.
The high temperature limit shows roughly the time scale
that a particle needs to travel through the potential range.
The validity of both the high and the low
temperature estimates are confirmed by numerical comparisons with
KB-results.
The time scales we are describing are just the life time of the fireball assumed
in the early stage of nuclear collisions. This means that the temperatures
extracted from final stage products are usually wrongly extrapolated to early stages
of nuclear reactions. This should affect the conclusions towards the caloric curve, which is much discussed recently. Also the size and lifetime of hot reaction centers, which are extracted by interferometry methods, should be critically revised as demonstrated in \cite{MK98}.
The authors would like to thank P. Lipavsk{\'y} and V. {\v S}pi{\v c}ka for interesting discussions and A. Sedrakian for helpful comments.
\section{Introduction} \label{Sec:intro}
Surface-acoustic waves (SAWs) are sound waves traveling along solid surfaces with a speed of $c_\textrm{SAW} = \lambda_\textrm{SAW} f_\textrm{SAW}$ of the order of $3500\:\textrm{ms}^{-1} < c_\textrm{SAW} < 4000\:\textrm{ms}^{-1}$ at frequencies of $10\:\textrm{kHz} < f_\textrm{SAW} < 100\:\textrm{MHz}$ and wavelengths of the order of $35\:\mu\textrm{m} < \lambda_\textrm{SAW} < 0.4\:\textrm{m}$. Most commonly, they are generated with interdigitated transducer (IDT) electrode combs, which are fabricated lithographically on a piezo-active substrate such as a single crystal lithium niobate (LiNbO$_3$, LN) wafer \cite{Franke:LabChip2009} and actuated by a periodic voltage signal. The comb spacing is chosen such that it approximates $\lambda_\textrm{SAW}$. Applying electric power to the IDT leads to the emission of Rayleigh waves traveling along the substrate surface and away from the IDT. If such a wave reaches an area where the substrate is in contact with a liquid, the SAW leaks into the liquid at a refraction (Rayleigh) angle determined by $\theta_\textrm{R} = \textrm{sin}^{-1}(c_\textrm{liq}/c_\textrm{SAW})$. For common liquids, the speed of sound is given by $c_\textrm{liq} \approx 1450\:\textrm{ms}^{-1}$, so that $\theta_\textrm{R} \approx 22\:^{\circ}$ \cite{Dentry:PRE2014}.

The transition into the liquid to form a bulk acoustic wave (BAW) typically proceeds on three length scales relative to the viscous boundary (Stokes) layer thickness $\delta_\eta = \sqrt{2 \eta/(\rho \omega_\textrm{SAW})}$ \cite{Wiklund:LabChip2012} as well as to $\lambda_\textrm{SAW}$. In the definition of $\delta_\eta$, $\eta$ and $\rho$ denote the dynamic viscosity and the density of the liquid, respectively, while the angular frequency is given by $\omega_\textrm{SAW} = 2 \pi f_\textrm{SAW}$. For water and the frequency range given above, one finds $50\:\textrm{nm} \leq \delta_\eta \leq 5\:\mu\textrm{m}$. For length scales $l \leq \delta_\eta$, the leaking sound wave drives the (inner) viscous boundary layer or Schlichting streaming, which is a steady rotational fluid motion. For $\delta_\eta < l < \lambda_\textrm{SAW}$, the Schlichting vortices are balanced by counter-rotating vortices in the outer boundary layer, commonly referred to as Rayleigh streaming \cite{Wiklund:LabChip2012,Sadhal:LabChip2012,Wu:Fluids2018}. Finally, for $l > \lambda_\textrm{SAW}$ the attenuation of the sound wave leads to a gradient in an acoustic radiation pressure that is the source of Eckart or "quartz wind" streaming \cite{Eckart:PhysRev1947,Riley:AnnRevFluidMech2001}. As a rough classification, Rayleigh-Schlichting or boundary-driven streaming \cite{Nyborg:JAcoustSocAm1958} occurs in the near field while Eckart streaming is a far-field phenomenon. Since the acoustic actuation is sinusoidal in time, for a fully linear system no time-averaged velocity would be observed. In the simplest case, the propagation of the sound pressure wave is described by a linear viscous wave equation whose solution includes a coefficient defined by Stokes' law of sound attenuation in a Newtonian fluid \cite{Stokes:TCPS1845}. This viscous attenuation, caused by the dissipation-induced lag between pressure actuation and flow reaction, gives rise to a non-linear body force in the fluid and to the observation of a finite net flow velocity \cite{Riley:AnnRevFluidMech2001,Wiklund:LabChip2012}.
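As a minimal Python sketch of the numbers quoted above (water properties, with an assumed frequency of $f_\textrm{SAW}=10\:$MHz):
\begin{verbatim}
import numpy as np

c_saw, c_liq = 3965.0, 1450.0                  # m/s
eta, rho, f = 1.0e-3, 1.0e3, 10.0e6            # Pa s, kg/m^3, Hz (assumed)

theta_R = np.degrees(np.arcsin(c_liq/c_saw))   # Rayleigh angle
delta = np.sqrt(2.0*eta/(rho*2.0*np.pi*f))     # Stokes layer thickness
print(theta_R, delta)                          # ~21.5 deg, ~1.8e-7 m
\end{verbatim}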
Actuation by SAWs has received vivid attention for spraying applications \cite{Kurosawa:SensAct1995,Qi:PoF2008} and inducing a fluid motion in small liquid entities such as drops and films \cite{Franke:LabChip2009,Yeo:AnnRevFluidMech2014}. Corresponding flow velocities largely depend on geometric factors but may reach ${\cal O}(10^1)\:\textrm{cm s}^{-1}$ \cite{Beyssen:SensActB2006,Yeo:Biomicrofluidics2009,Dentry:PRE2014}. Such high velocities cannot be induced in strongly confined domains such as microchannels, since for channel widths smaller than $\lambda_\textrm{SAW}$ or even smaller than $\delta_\eta$ the conventional Rayleigh-Schlichting streaming is disturbed. While it has been shown that adding a wall texture whose amplitude is of similar order as $\delta_\eta$ may improve streaming \cite{Lei:MANO2018}, even for relatively wide channels with a free (no-stress) surface on one side, flow velocities do not exceed ${\cal O}(10^1)\:\textrm{mm s}^{-1}$ \cite{Tan:EPL2009}. Since the pressure wave in the liquid propagates at an angle of $\theta_\textrm{R}$ compared to the SAW, it is continuously reflected (and refracted) at the channel walls if the channel is wide enough, with only the reflected and subsequently interfering BAWs leading to a net flow in the direction along the channel. Studies of SAW-driven liquid transport in channels bounded by solid walls focus mostly on cases where the SAW propagation is perpendicular to the channel main axis. For instance, circular flow patterns with velocities of ${\cal O}(10^0-10^1)\:\textrm{mm s}^{-1}$ can be generated \cite{Schmid:MANO2012,Kiebert:LabChip2017}, and Hags\"ater \textit{et al.} observed flow velocities of the order of a few tens of $\mu \textrm{m s}^{-1}$ in a microfluidic chamber \cite{Hagsaeter:LabChip2007}. In addition, SAW-induced acoustic streaming may cause liquid atomization and subsequent drop coalescence in a channel, causing filling velocities of ${\cal O}(10^0)\:\textrm{mm s}^{-1}$ in a direction opposing the SAW propagation \cite{Cecchini:APL2008}. To date, estimates of transport velocities in nanochannels driven by acoustic effects are solely simulation-based. Here, a clear judgment is difficult as the velocities are typically reported in a dimensionless fashion. For instance, Xie and Cao report by means of molecular dynamics (MD) simulation "fast nanofluidics" in nanochannels driven by SAWs \cite{Xie:MANO2017}, but converting their findings into physical units by employing their non-dimensionalization would imply transport velocities of more than $100\:\:\textrm{m s}^{-1}$. Tan and Yeo mention at one point of their numerical study based on the Lattice-Boltzmann approach that the average velocities do not exceed ${\cal O}(10^{-5})\:\textrm{m s}^{-1}$ \cite{Tan:PRF2018}, which appears to be a more realistic estimate.
In a piezo-active material, the displacement of the substrate atoms due to the propagating SAW wave is proportional to the (complex) electrostatic surface potential given by $U_\textrm{SAW} = \hat{U}_\textrm{SAW}\exp[i(\omega_\textrm{SAW}t-\boldsymbol{k}_\textrm{SAW} \cdot \boldsymbol{x})]$ \cite{Jakubik:SensActB2014}, where $\hat{U}_\textrm{SAW}$, $\boldsymbol{k}_\textrm{SAW}$, and $\boldsymbol{x} = (x,y,z)^\textrm{T}$ denote the amplitude of the surface potential, the SAW wave vector, and the position vector, respectively. In this work, it is assumed that the SAW substrate is in contact with a dilute aqueous electrolyte, i.e., with a (moderately) conducting liquid, so that the flow actuation principle discussed herein would be weakened by a charge transfer between both. Yet, it has recently been shown that acoustic streaming induced by SAWs can still be present in strong electrolytes \cite{Huang:AdvMat2020}, while LN acoustic plate sensors may detect liquid electrolytes based on the conductivity-dependent frequency shift and attenuation of the SAW \cite{Josse:SensActB1992}. The latter finding was mainly attributed to the polarization of the electrolyte by the electric field via migration of dissolved ions rather than to the reduction of the surface potential by conduction through the electrolyte between oppositely charged areas on the LN substrate. In addition, the performance of LN as an anode material in contact with aqueous electrolytes has been studied using galvanostatic charge-discharge measurements \cite{Son:ElectrochemComm2004} and cyclic voltammetry \cite{Lui:MatLett2014}. In these experiments, a pronounced capacitive behavior of LN was proven. In turn, this indicates that an electric double layer (EDL) builds up on the surface of the material. This claim is supported by an experimental study in which a liquid micro-lens array is generated on top of a LN substrate by electrowetting effects and driven by the pyroelectric behavior of LN \cite{Grilli:OpticsExpr2008}. Together, these studies suggest that typical setups for (small-scale) acoustofluidics can be used to address SAW-driven electroosmotic flow (EOF) as well. Considering the high SAW frequencies, the ion cloud in the vicinity of the wall cannot reach a state of mechanical equilibrium, or in other terms, the Gibbs-Duhem equation, expressing that the sum of the gradients in chemical potential (at constant temperature) of each electrolyte component equals zero, is not fulfilled \cite{Fitts:McGrawHill1962}. This leads to an electroosmotic propulsion (EOP), which originates from the SAW along the surface and adds to the flow induced by acoustic streaming. Typical SAW flow domains are large in comparison to the EDL thickness so that electric fields and affiliated currents in between oppositely charged regions, ultimately driving the EOP, are small. In the case that narrow channels are considered, where both channel walls are subjected to a SAW, these currents and the corresponding EOP can be large, leading to flow that might exceed the one induced by acoustic effects. This is the focus of the present paper.
The underlying physical principle of SAW-induced EOP is identical to the one of induced-charge (IC) EOF \cite{Squires:JFM2004} and alternating current (AC) EOF. In the simplest form of EOF, an externally applied (i.e., source-free) electric field acts on the charge density of an EDL formed at a charged wall. Over the past two decades, a plethora of technological applications for such electrokinetic flows has emerged. For instance, one can take advantage of the large surface-to-volume ratio to perform sensing applications in micro-total-analysis systems ($\mu$-TAS) \cite{Suh:Micromachines2010,Luka:Sensors2015,Olanrewaju:LabChip2018}, enhance the cooling of micro-processors \cite{Murshed:RenewSustainErgRev2017}, or to embody small-scale energy conversion systems \cite{Yang:JMicroMechEng2003,vdHeyden:PRL2005}, in which mechanical energy is partially converted into electric energy. For moderate external fields, the ion cloud is in a state of mechanical equilibrium, implying that the osmotic and the electrostatic forces caused by the local electric field within the EDL exactly cancel each other. Most prominently, this is the case for solutions that invoke the classical Poisson-Boltzmann (PB) theory. Nevertheless, the mechanical balance can be disturbed, for instance, by applying gradients in bulk ion concentration or temperature, giving rise to diffusoosmosis \cite{Liu:Langmuir2013,Jing:PCCP2018} or thermoosmosis \cite{Bregulla:PRL2016,Dietzel:JFM2017}, respectively. Furthermore, using permselective membranes as wall material permits electric currents passing through them, which, beyond certain thresholds of the driving electric field, may lead to concentration polarization and a mechanical imbalance of the ion cloud in the vicinity of the wall. As a consequence, an electrohydrodynamic (EHD) instability develops, which is the reason for the experimentally observed overlimiting current through nanopores \cite{Rubinstein:PRE2000}. By the same principle, the application of large or time-varying electric fields at the walls as used in ICEOF and ACEOF, respectively, may disturb the mechanical equilibrium of the ion cloud near them as well.
For the latter, the equations governing the ion distribution and the electric potential remain non-linear even in the limit of the Reynolds number $Re \rightarrow 0$, so that the time-averaged ACEOF does not vanish \cite{Green:PRE2000,Gonzalez:PRE2000}. In the thin EDL limit and disregarding the Stern layer, the flow velocity at the surface can be approximated by $\langle u \rangle= \epsilon/(4 \eta) \boldsymbol{\nabla}\:_\textrm{s}[(\Delta U)^2]$ \cite{Gonzalez:PRE2000}. The dielectric permittivity of the electrolyte solution is denoted by $\epsilon$, while the spatial gradient along the electrode surface is given by $\boldsymbol{\nabla}\:_\textrm{s}$. The voltage drop across the diffuse part of the EDL is denoted by $\Delta U$. Classically, two electrodes separated by a small gap are arranged along the flow boundary to apply peak-to-peak AC potentials of ${\cal O}(10^0)\textrm{V}$, leading to time-averaged vortex patterns with peak velocities of ${\cal O}(10^{-1} - 10^{0})\:\textrm{mm s}^{-1}$ at ${\cal O}(10^0)\:\textrm{kHz}$ \cite{Green:PRE2000}. The characteristic time scale is given by the RC time of the EDL defined by $t_\textrm{RC} = (\lambda_\textrm{D} h)/D$ \cite{Squires:JFM2004}, where $h$ and $D$ denote the channel width and the ion diffusivity, respectively. The nominal EDL thickness is given by $\lambda_\textrm{D} = \sqrt{\epsilon k_\textrm{B} T/(2 e^2 \nu^2 n_0)}$, with $k_\textrm{B}$, $T$, $e$, $\nu$, and $n_0$, denoting the Boltzmann constant, the absolute reference temperature, the elementary charge, the valence of the symmetric $\nu:\nu$ electrolyte, and the reference ion concentration in the bulk, respectively. Using much lower frequencies than $t^{-1}_\textrm{RC}$ provides sufficient time for the ions to attain a mechanical equilibrium, while significantly exceeding $t^{-1}_\textrm{RC}$ leads to a practically uncharged EDL, so that both limits go along with vanishing ACEOF. Using an array of electrodes placed along the flow boundaries together with appropriate phase differences of the driving voltages according to a traveling AC wave may lead to unidirectional flow \cite{Ramos:JAP2005,Ramos:JCollIntScie2007,Gonzalez:MANO2008,Yang:IEEETransDielectricsElectricIns2009,Yeh:PRE2011,Hrdlicka:Electrophoresis2014}. The magnitude of the corresponding flow largely depends on the ability to pattern the flow boundary with narrow-spaced and interconnected electrodes.
The work reported in this paper can be viewed from two different perspectives. First, it is a fundamental study of the electrokinetic effects induced by SAWs in narrow channels. In the second perspective, the idea of traveling-wave (TW) ACEOF is picked up, but surface acoustic waves moving or standing along a (piezo-active) flow boundary are considered to induce the alternating electric wall potential instead of micro-fabricated electrode arrays. In the following, the mathematical description used to compute the flow field is detailed, followed by the numerical implementation and its verification by means of known results from ACEOF. Subsequently, parameter variations are performed and discussed.
\section{Mathematical description} \label{Sec:math}
The momentum equation for an incompressible, (Newtonian) liquid electrolyte is given by
\begin{equation}
\label{Eq:NSE} \rho[\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot\boldsymbol{\nabla}\:)\boldsymbol{u}] = -\boldsymbol{\nabla}\: p +\eta \boldsymbol{\nabla}\:^2 \boldsymbol{u} - \rho_\textrm{f} \boldsymbol{\nabla}\:{\phi},
\end{equation}
where the partial time derivative, the velocity vector, and the fluid pressure are denoted by $\partial_t \equiv \partial/\partial t$, $\boldsymbol{u} = (u,v,w)^\textrm{T}$, and $p$, respectively. The last term on the right-hand side (RHS) of Eq. (\ref{Eq:NSE}) describes the Maxwell stress, where the charge density is given by $\rho_\textrm{f}$, while $\phi$ denotes the electric potential. The latter two are connected via the Poisson equation according to
\begin{equation}
\label{Eq:Poisson} \boldsymbol{\nabla}\:^2 \phi = -\frac{\rho_\textrm{f}}{\epsilon}.
\end{equation}
In this work, the frequency dependence of $\epsilon$ is neglected since for water this becomes significant only for frequencies above $1\:\textrm{GHz}$ \cite{Kaatze:JChemEngData1989}. The charge density can be expressed by the number concentrations of the ion species. For a symmetric electrolyte, one finds
\begin{equation}
\label{Eq:charge_dens} \rho_\textrm{f} = e \nu (n_+ - n_-).
\end{equation}
In turn, the ion number concentrations $n_\pm$ are determined by the Nernst-Planck equations (NPEs) given by
\begin{equation}
\label{Eq:NPE} \partial_t n_\pm + \boldsymbol{u} \cdot \boldsymbol{\nabla}\: n_\pm = \boldsymbol{\nabla}\: \cdot [D_\pm \boldsymbol{\nabla}\: n_\pm + e \nu_\pm \mu_\pm n_\pm \boldsymbol{\nabla}\: \phi],
\end{equation}
where the electrophoretic mobilities of the cation (subscript "+") and the anion (subscript "-") are given by the Stokes-Einstein relation according to $\mu_\pm = D_\pm/(k_\textrm{B} T)$, while the valences are given by $\nu_+=-\nu_-=\nu$. To focus on the essential physics, constant and equal diffusion coefficients for both ion species are assumed, i.e., $D_\pm = D$.
This work focuses on flows through planar, parallel-plate channels so that a description in two spatial dimensions is sufficient. For such flows, the stream function-vorticity formulation can be employed. Following the standard procedure, the pressure can be removed from Eq. (\ref{Eq:NSE}) to read
\begin{equation}
\label{Eq:NSE_stream} \rho[\partial_t \omega + \partial_y \psi \partial_x \omega - \partial_x \psi \partial_y \omega]\!=\!\eta\!\boldsymbol{\nabla}\:^2 \omega + \partial_x \rho_\textrm{f} \partial_y \phi - \partial_y \rho_\textrm{f} \partial_x \phi,
\end{equation}
where $\psi$ denotes the stream function with $\partial_y \psi \equiv \partial \psi/\partial y = u$ and $\partial_x \psi \equiv \partial \psi/\partial x = -v$, while $\omega = \boldsymbol{\nabla}\:^2 \psi$. For cases that can be described by the PB theory, the ions in the vicinity of the wall follow the Boltzmann distribution. Consequently, for a symmetric electrolyte with $n_0$ denoting a reference ion concentration at $\phi=0$, the corresponding charge density is given by $\rho_\textrm{f,PB} = -2 e \nu n_0 \textrm{sinh}[e \nu \phi/(k_\textrm{B} T)]$. Thus, in this case at constant temperature and if in addition no external electric field is applied, the electroosmotic propulsion force defined by
\begin{equation}
\label{Eq:EOP} F_\textrm{EOP}=\partial_x \rho_\textrm{f} \partial_y \phi\!-\!\partial_y \rho_\textrm{f} \partial_x \phi
\end{equation}
is identical to zero. This remains valid for any ion distribution for which $\rho_\textrm{f} = f(\phi)$ and with $\phi$ denoting the only variable. For instance, this also includes all systems that can be described by the Debye-H\"uckel (DH) approximation in the limit of low $\phi$. Hence, within the PB theory, for stationary and isothermal electrokinetic systems with vanishing inertial effects and no external field, the stream function is determined by the biharmonic equation $\boldsymbol{\nabla}\:^4 \psi_\textrm{PB} = 0$ \cite{Levich:Prentice1962,Pascall:JFM2011,Dietzel:JFM2017}. In general, the charge density cannot be expressed by $\rho_\textrm{f,PB}$ so that the EOP may be non-vanishing even without the application of an external electric field, indicating that the ion cloud in the vicinity of the wall is not in a state of mechanical equilibrium.
Expression (\ref{Eq:NSE_stream}) is made dimensionless by scaling the $x$- and $y$-directions by the SAW wavelength $\lambda_\textrm{SAW}$ and the channel height $h$, respectively, i.e., $\overline{x} = x/\lambda_\textrm{SAW}$ and $\overline{y} = y/h$. The aspect ratio is given by $A = h/\lambda_\textrm{SAW} = h f_\textrm{SAW}/c_\textrm{SAW}$. Furthermore, employing a scaling velocity denoted by $u_0$, the dimensionless stream function is defined by $\overline{\psi} = \psi/(h u_0)$, while $\overline{\omega} = \omega h/u_0$. Time is scaled using $f_\textrm{SAW}$ according to $\overline{t} = t f_\textrm{SAW}$. This leads to
\begin{eqnarray}
\label{Eq:NSE_stream_nondim} Ro \partial_{\overline{t}} \overline{\omega} + Re (\partial_{\overline{y}} \overline{\psi} \partial_{\overline{x}} \overline{\omega} - \partial_{\overline{x}} \overline{\psi} \partial_{\overline{y}} \overline{\omega}) = \\ \nonumber
(A^2 \partial^2_{\overline{x}} + \partial^2_{\overline{y}}) \overline{\omega} + Ha (\partial_{\overline{x}} \overline{\rho}_\textrm{f} \partial_{\overline{y}} \overline{\phi} - \partial_{\overline{y}} \overline{\rho}_\textrm{f} \partial_{\overline{x}} \overline{\phi}).
\end{eqnarray}
The dimensionless charge density is given by $\overline{\rho}_\textrm{f} = \rho_\textrm{f}/(e \nu n_0)$, while $\overline{\phi} = \phi e\nu/(k_\textrm{B} T)$. The (viscous) Roshko number, the Reynolds number, and the Hartmann number are defined by $Ro = h^2 f_\textrm{SAW} \rho/\eta$, $Re = A \rho u_0 h/\eta$, and $Ha = A h n_0 k_\textrm{B} T/(\eta u_0)$, respectively.
The non-dimensional Poisson equation is given by
\begin{equation}
\label{Eq:Poisson_nondim} (A^2\partial^2_{\overline{x}} + \partial^2_{\overline{y}}) \overline{\phi} = -\frac{\overline{\kappa}^2_0}{2}(\overline{n}_+ - \overline{n}_-),
\end{equation}
where the non-dimensional Debye parameter is defined by $\overline{\kappa}_0 = h/\lambda_\textrm{D}$. The non-dimensional ion number concentrations $\overline{n}_\pm = n_\pm/n_0$ are governed by the non-dimensional NPEs, given by
\begin{align}
\label{Eq:NPE_nondim} Ro_{\textrm{i},\pm} \partial_t \overline{n}_\pm +& Pe_{\textrm{i},\pm}(\partial_{\overline{y}} \overline{\psi} \partial_{\overline{x}} \overline{n}_\pm - \partial_{\overline{x}} \overline{\psi} \partial_{\overline{y}} \overline{n}_\pm) \\ \nonumber
=&(A^2 \partial^2_{\overline{x}} + \partial^2_{\overline{y}}) \overline{n}_\pm \pm \nu \overline{n}_\pm (A^2 \partial^2_{\overline{x}} + \partial^2_{\overline{y}}) \overline{\phi} \\ \nonumber
\pm& \nu (A^2 \partial_{\overline{x}} \overline{n}_\pm \partial_{\overline{x}}\overline{\phi} + \partial_{\overline{y}} \overline{n}_\pm \partial_{\overline{y}} \overline{\phi}),
\end{align}
where the ionic Roshko number and the ionic P\'eclet number are given by $Ro_{\textrm{i},\pm} = h^2 f_\textrm{SAW}/D$ and $Pe_{\textrm{i},\pm} = A u_0 h/D$, respectively.
This work focuses on flows driven by a non-vanishing EOP alone. In such systems, a non-uniform osmotic pressure is the main source for fluid motion and balanced by viscous stresses of the same order of magnitude. Hence, according to its definition, $Ha=1$ is assumed and used to define the scaling velocity according to $u_0 = A h n_0 k_\textrm{B} T/\eta = h^2 n_0 k_\textrm{B} T f_\textrm{SAW}/(c_\textrm{SAW} \eta)$. In this study, the thermophysical properties of an aqueous electrolyte solution and the operation parameters as summarized in Table \ref{Tbl:thermoprops_operation} are used.
\begin{table}
\begin{center}
\begin{tabular}{l|c||l|c}
\multicolumn{2}{c||}{Fluid properties} & \multicolumn{2}{c}{Other parameters} \\ \hline
Parameter & Range of variation & Parameter & Range of variation \\ \hline
& & & \\
$\rho_0\:(\textrm{kg m}^{-3})$ & $1000$ & $c_\textrm{SAW}\:(\textrm{m s}^{-1})$ & $3965$$^{\textrm{a}}$ \\
$\eta_0\:(\textrm{Pa s})$ & $1 \times 10^{-3}$ & $f_\textrm{SAW}\:(\textrm{MHz})$ & $10^{-1}-10^{2}$ \\
$n_0\:(\textrm{M})$ & $(0.1-10) \times 10^{-3}$ & $\lambda_\textrm{SAW}\:(\textrm{m})$ & $10^{-5}-10^{-2} $ \\
$D\:(\textrm{m}^2\:\textrm{s}^{-1})$ & $(1-5)\times 10^{-9}$ & $\hat{U}\:(\textrm{V})$ & $0.25-1.25$ \\
$\epsilon/\varepsilon_0$ & $78.14$ $^{\textrm{b}}$ & $h\:(\textrm{m})$ & $(3-480) \times 10^{-9}$ \\
$\nu$ & $1$ & $\lambda_\textrm{D}\:(\textrm{m})$ & $(3-30) \times 10^{-9}$ \\
\end{tabular}
\end{center}
\caption{Thermophysical properties of aqueous electrolyte solution and other parameters employed in this study. a - \citet{Yeo:Biomicrofluidics2009}, b - \citet{Buchner:PhysChemA1999}.}
\label{Tbl:thermoprops_operation}
\end{table}
With these values, the characteristic velocity and numbers pertinent to the present study can be computed as listed in Table \ref{Tbl:scaling}.
\begin{table}
\begin{center}
\begin{tabular}{l|c||l|c}
\multicolumn{4}{c}{Scaling parameters} \\ \hline
Parameter & Range of variation & Parameter & Range of variation \\ \hline
& & & \\
$u_0\:(\textrm{m s}^{-1})$ & $10^{-11}-10^{-1}$ & $Re$ & $10^{-20}-10^{-4}$ \\
$A$ & $10^{-8}-10^{-2}$ & $Ro$ & $10^{-5}-10^{1}$ \\
$\overline{\kappa}_0$ & $10^{0}-10^{2}$ & $Pe_{i,\pm}$ & $10^{-18}-10^{-1}$ \\
$Ha$ & $1$ & $Ro_{i,\pm}$ & $10^{-1}-10^{3}$ \\
\end{tabular}
\end{center}
\caption{Typical values of the characteristic velocity and dimensionless numbers pertinent to the present study.}
\label{Tbl:scaling}
\end{table}
Hence, flow velocities corresponding to $Ha=1$ are far below $1\:\textrm{m s}^{-1}$. For narrow channels as considered in this work, $Re \ll 1$, i.e., advective momentum transport is negligibly small, while the unsteady term in Eq. (\ref{Eq:NSE_stream_nondim}) proportional to $Ro$ is not. With respect to the NPEs, in this work $Pe_{\textrm{i},\pm} = {\cal O}(10^{-7}-10^0)$, while $Ro_{\textrm{i},\pm}$ is typically several orders of magnitude larger. Hence, the ion transport governed by the NPEs is decoupled from the momentum transport. As a side note, for systems with an ion cloud pushed away from mechanical equilibrium by a strong externally driven flow characterized by a large $u_0$, counterintuitively, the corresponding EOP is independent of $u_0$. As can be seen from Eq. (\ref{Eq:NSE_stream_nondim}), the EOP is proportional to $Ha$, while $\overline{\rho}_\textrm{f}$ is governed by Eq. (\ref{Eq:NPE_nondim}) and proportional to $Pe_{\textrm{i},\pm}$. Hence, the EOP scales as $Ha Pe_{\textrm{i},\pm} = h n_0 k_\textrm{B} T/(\eta D)$, which is independent of $u_0$. Furthermore, since the base flow is proportional to $Re$ and scales as the EOP linearly with respect to $h$, enlarging $u_0$ or $h$ will not enhance the magnitude of the secondary flow driven by the EOP in comparison to the base flow velocity that causes the non-equilibrium EDL in the first place. Thus, it can be concluded that for an externally imposed base flow the secondary EOP-driven flow can practically always be disregarded, except for the special case of unusually small ion diffusivities.
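For concreteness, a minimal Python sketch evaluating $u_0$ and the dimensionless groups at one assumed operating point within the ranges of Table \ref{Tbl:thermoprops_operation} ($h=100\:$nm, $f_\textrm{SAW}=10\:$MHz, $n_0=1\:$mM):
\begin{verbatim}
import numpy as np

kB, e, eps0, NA = 1.381e-23, 1.602e-19, 8.854e-12, 6.022e23
T, eps_r = 298.0, 78.14
rho, eta, D, c_saw = 1.0e3, 1.0e-3, 2.0e-9, 3965.0

h, f, n0 = 100e-9, 10e6, 1.0e-3*1.0e3*NA      # assumed operating point

u0 = h**2*n0*kB*T*f/(c_saw*eta)               # from Ha = 1
A = h*f/c_saw
lamD = np.sqrt(eps_r*eps0*kB*T/(2.0*e**2*n0))
print(u0, A, h/lamD)                          # ~6e-5 m/s, ~3e-4, ~10
print(h**2*f*rho/eta, A*rho*u0*h/eta)         # Ro ~ 0.1, Re ~ 2e-9
print(A*u0*h/D, h**2*f/D)                     # Pe_i ~ 8e-7, Ro_i ~ 50
\end{verbatim}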
Standing (SD) or traveling (TV) SAWs are considered in this work. The corresponding electrostatic wall potentials of amplitude $\hat{U}$ are assumed to be given by
\begin{equation}
\label{Eq:Wallpot_SW} U_\textrm{SD} = \hat{U} \textrm{sin}(2 \pi \overline{t})\textrm{cos}(2\pi \overline{x} + \Delta \varphi)
\end{equation}
and
\begin{equation}
\label{Eq:Wallpot_TW} U_\textrm{TV} = \hat{U} \textrm{sin}[2 \pi(\overline{x} - \overline{t}) + \Delta \varphi],
\end{equation}
respectively. The phase shift between two SAWs is denoted by $\Delta \varphi$, which is relevant for the cases where SAWs of identical frequency are applied on both channel walls. As will be discussed later on, the application of SAWs of different frequencies was found not to be beneficial for maximizing the induced EOF transport.
The wall potential is screened not only by the diffuse part of the EDL but also by the immobile ions in the Stern layer. To account for this effect, the Stern layer model described by \citet{Olesen:PRE2010} is commonly implemented. It considers the Stern layer as a parallel-plate capacitor of capacitance $C_\textrm{St}$, while the continuity of the dielectric displacement field vector at the interface between the Stern and the diffusive layer leads to the following mixed boundary condition \cite{Olesen:PRE2010}
\begin{equation}
\label{Eq:Stern} C_\textrm{St}(U-\phi)+\epsilon \boldsymbol{n}\cdot\boldsymbol{\nabla}\: \phi=0.
\end{equation}
The surface normal pointing into the electrolyte is denoted by $\boldsymbol{n}$. In non-dimensional form and with $\overline{U} = U e \nu/(k_\textrm{B} T)$, Eq. (\ref{Eq:Stern}) reads
\begin{equation}
\label{Eq:Stern_nondim} \overline{U}-\overline{\phi} + \frac{\delta_\textrm{St}}{\overline{\kappa}} \overline{\boldsymbol{n}} \cdot \boldsymbol{\nabla}\: \overline{\phi}=0.
\end{equation}
For the parallel-plate channel $\overline{\boldsymbol{n}} \cdot \boldsymbol{\nabla}\: \overline{\phi} = \mp \partial_{\overline{y}} \overline{\phi}$, with the minus sign valid for the lower wall and the plus sign for the upper wall. The ratio between the (nominal) capacitance of the diffuse part of the EDL relative to the one of the Stern layer is denoted by $\delta_\textrm{St} = C_\textrm{D}/C_\textrm{St}$ with $C_\textrm{D} = \epsilon/\lambda_\textrm{D}$. While the model of the Stern layer has been incorporated in the mathematical framework of this work, the results discussed in the following have been obtained without it. Since it is a simple capacitor model, it just lowers the effective $\zeta$ potential acting on the fluid. This was verified by several numerical tests (not shown). Commonly used as a parameter to fit model results to those obtained from experiments, values of $\delta_\textrm{St}$ are difficult to estimate and may be in the range of $\delta_\textrm{St} = 0.01 - 10$ \cite{Olesen:PRE2010}. Furthermore, recent work suggests that due to extraordinarily slow ad- and desorption processes it may take a few hundred seconds to fully charge the Stern layer \cite{Werkhoven:PRL2018}. This would be orders of magnitude too long to follow the electric signal of the SAW. Thus, it is likely that for the high frequencies considered in this work the Stern layer remains practically uncharged. To avoid building our results on a boundary condition that is speculative in the present context, we neglect the Stern layer in most of our simulations. Instead, to obtain results that are conservative with respect to the flow velocities achieved, we assume a voltage amplitude not exceeding $1.25\:\textrm{V}$, which is approximately $30-40$ times smaller than typical values used for IDTs in practice \cite{Lei:LabChip2014}.
The voltage amplitude is attenuated with increasing distance from the IDT. If the voltage attenuation is proportional to the attenuation of the acoustic wave, one may write \cite{Dentry:PRE2014}
\begin{equation}
\label{Eq:attenuation_volt} \hat{\overline{U}} = \hat{\overline{U}}_{|\overline{x}=0} \textrm{e}^{-\overline{\alpha}\; \overline{x}},
\end{equation}
with the (non-dimensional) attenuation coefficient given by
\begin{equation}
\label{Eq:attenuation_coeff} \overline{\alpha} = \frac{\rho c_\textrm{liq}}{\rho_\textrm{s} c_\textrm{SAW}},
\end{equation}
with $\rho_\textrm{s}$ denoting the density of the piezo-active channel walls. Equation (\ref{Eq:attenuation_coeff}) is an accurate estimate only for an acoustic wave propagating in a solid that is in contact with a semi-infinite liquid body. For narrow channels with their increased viscous dissipation, it may serve only as a rough orientation. Herein, $\overline{\alpha}^{\:-1} \approx 12$, i.e., the voltage signal attenuates to $1/e$ of its initial value on a length that is about $12$ times larger than $\lambda_\textrm{SAW}$. This suggests that for channels that measure only a few multiples of $\lambda_\textrm{SAW}$ in length, attenuation may be neglected.
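A minimal Python check of this estimate, assuming water on lithium niobate (the LN density of $\approx 4650\:\textrm{kg m}^{-3}$ is an assumed literature value):
\begin{verbatim}
rho_liq, c_liq = 1.0e3, 1450.0        # water
rho_s, c_saw = 4650.0, 3965.0         # LiNbO3 (density assumed)

alpha = rho_liq*c_liq/(rho_s*c_saw)   # Eq. (attenuation_coeff)
print(1.0/alpha)                      # ~ 12.7 wavelengths
\end{verbatim}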
Large wall potentials can induce large ion densities, eventually leading to ion crowding \cite{Bazant:AdvCollIntScie2009,Bazant:PRL2011}. With respect to the latter, steric effects can be accounted for by employing the modified NPEs of the Bikerman model \cite{Kilic:PRE2007b}, which can be obtained by adding
\begin{align}
\label{Eq:NPE_ioncrowd_nondim} \overline{j}^{\textrm{B}}_\pm = A^2 &\partial_{\overline{x}} \left[\overline{n}_\pm\frac{\partial_{\overline{x}}(\overline{n}_+ +\overline{n}_-)}{\overline{n}_\textrm{max} - \overline{n}_+ -\overline{n}_-}\right] \\ \nonumber
+ &\partial_{\overline{y}} \left[\overline{n}_\pm\frac{\partial_{\overline{y}}(\overline{n}_+ +\overline{n}_-)}{\overline{n}_\textrm{max} - \overline{n}_+ -\overline{n}_-}\right]
\end{align}
to the RHS of Eq. (\ref{Eq:NPE_nondim}). The maximal non-dimensional number concentration is given by $\overline{n}_\textrm{max} = 1/(a^3_\textrm{ion} n_0)$, where $a_\textrm{ion}$ denotes the distance between densely packed ions. Herein, $a_\textrm{ion} = {\cal O}(10^0-10^1)\:$\AA$\:$ \cite{Greberg:JChemPhys1998,Kilic:PRE2007a,Kilic:PRE2007b}, while $n_0 = {\cal O}(10^{-4}-10^{-2})\textrm{M}$, so that $\overline{n}_\textrm{max} = {\cal O}(10^3-10^6)$. Hence, in the light of the relatively low bulk ion concentrations assumed in this work, corrections obtained from Eq. (\ref{Eq:NPE_ioncrowd_nondim}) are negligibly small for most cases addressed in this study. This was verified numerically (not shown).
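A minimal Python sketch of this estimate (the ion spacing is an assumed value within the quoted range):
\begin{verbatim}
NA = 6.022e23
a_ion = 3.0e-10                     # m, assumed packed-ion spacing
n0 = 1.0e-3*1.0e3*NA                # 1 mM in ions per m^3
print(1.0/(a_ion**3*n0))            # n_max ~ 6e4 >> typical n/n0
\end{verbatim}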
For sufficiently small potentials, the model outlined above can be linearized to provide an analytical expression for the time-averaged EOF velocity just outside of the EDL, which was derived by \citet{Gonzalez:PRE2000} for ACEOF. This (effective slip) velocity is given by
\begin{equation}
\label{Eq:ACICEOF_vel} \langle u\rangle = -\frac{\epsilon}{4 \eta} \Lambda \partial_x |\phi-U|^2,
\end{equation}
where $\Lambda = 1/(1+\delta_\textrm{St})$. Using the linearized and time-averaged expressions for the charge density and the electric potential as derived in that work, one finds $|\langle \partial_y \rho_f \partial_x \phi \rangle| = |\langle \partial_x \rho_f \partial_y \phi \rangle| + \Lambda \partial_x |\phi-U|^2/4$. Thus, $F_\textrm{EOP}$ as expressed by Eq. (\ref{Eq:EOP}) is indeed non-vanishing for ACEOF. In a notation using primitive variables, this implies that $||\langle \boldsymbol{\nabla}\: p \rangle|| \neq || \langle \rho_f \boldsymbol{\nabla}\: \phi \rangle ||$. As an illustration of this imbalance, one may assume that in the normal direction to the wall the pressure gradient and the gradient of the electrostatic stress exactly cancel each other since no flow through the wall is permitted. Hence, $\langle \partial_y p \rangle \approx -\langle \varepsilon \partial_y (\partial_y \phi)^2/2\rangle$ so that $p$ is fixed except for an integration constant. Since the pressure is a scalar, it acts with the same strength in the direction parallel to the channel wall. In that direction, there is no obstacle or boundary to enforce the balance between the pressure gradient and electrostatic stress so that in general $\langle -\partial_x p - \rho_f \partial_x \phi \rangle \neq 0$. This imbalance causes the net flow. In the lower part of Fig. \ref{Fig:sketch_flowdom}, this basic mechanism is illustrated for a wall subjected to a sinusoidal wave of the electrostatic surface potential. Despite the fact that the electric potential $\phi$ (indicated by a color scale) at the non-dimensional time $\overline{t} = 0.75$ is reversed in comparison to the instant at $\overline{t} = 0.25$, the corresponding fluid stress $-\boldsymbol{\nabla}\: p - \rho_f \boldsymbol{\nabla}\: \phi$ (indicated by white arrows) is identical and induces a fluid motion in the same direction (indicated by black arrows and black streamlines). Thus, despite the alternating polarity of the electrostatic potential one obtains (on time-average) a uni-directional flow. Based on Eq. (\ref{Eq:ACICEOF_vel}), the maximal velocity of such a fluid motion occurs at a frequency of
\begin{equation}
\label{Eq:ACICEOF_freqmax} f_{\textrm{max}} = \Lambda^{-1} \frac{0.199\sqrt{\sigma}}{\Delta x},
\end{equation}
with $\sigma = \epsilon D/\lambda^2_\textrm{D}$ denoting the electric conductivity of the electrolyte, while $\Delta x$ denotes the size of the gap between two electrodes. In the course of the derivation of Eqns. (\ref{Eq:ACICEOF_vel}) and (\ref{Eq:ACICEOF_freqmax}), it was assumed that $f_{\textrm{max}} = {\cal O}(t^{-1}_\textrm{RC})$ \cite{Gonzalez:PRE2000}, an assumption suggested to be valid by experiments preceding the theoretical work \cite{Green:PRE2000}.
Herein, we do not consider stationary electrodes, but the surface potential is induced by SAWs standing or traveling along the wall. Equivalent systems have been considered in the context of ACEOF by placing several electrodes with a spacing of $\lambda_0 = 2 \pi/k_0$ along the wall and energizing them according to a traveling wave, where $k_0$ denotes the wave number of the wall electrode. In that case, the (time-averaged) electro-osmotic slip velocity just outside of the EDL is found to read \cite{Ramos:JAP2005}
\begin{equation}
\label{Eq:TWACEOF_velmax} \langle u\rangle = \Lambda \frac{\epsilon k_0 \hat{U}^2}{2 \eta} \frac{\Omega_0}{1+\Omega^2_0},
\end{equation}
which is maximal at $\Omega_0 = 1$ with $\Omega_0 = 2 \pi f_0 C_\textrm{D}/(\sigma k_0)$. For SAW actuation, $k_0$ is not a free parameter but given by the dispersion relation of the SAW. Hence, by contrast to TW-ACEOF, one finds two opposing effects: Higher frequencies imply reduced charging of the EDL but larger wave numbers, while smaller frequencies imply stronger EDL charging but smaller wave numbers. More specifically, invoking the SAW dispersion relation to estimate the magnitude of the SAW-induced EOF implies that $\Omega_0 = c_\textrm{SAW} C_\textrm{D}/\sigma$ is a constant and that $\langle u\rangle$ as expressed by Eq. (\ref{Eq:TWACEOF_velmax}) is linearly increasing with the SAW frequency. Hence, no finite frequency with maximal EOF should be expected. With these considerations and for $c_\textrm{SAW} = 3965\:\textrm{m s}^{-1}$, $\hat{U} = 1\:\textrm{V}$ as well as the thermophysical properties listed in Table \ref{Tbl:thermoprops_operation}, one finds sizable SAW-EOF velocities above $1\:\mu\textrm{m s}^{-1}$ only for frequencies above $100\:\textrm{MHz}$. Ignoring the functional dependence of (\ref{Eq:TWACEOF_velmax}) on $\Omega_0$ and assuming $f_\textrm{SAW} = 1\:\textrm{MHz}$ leads to an estimate of $\langle u\rangle_\textrm{SAW,max} = {\cal O}(10^{-1})\:\textrm{mm s}^{-1}$. Such an estimate corresponds to the Helmholtz-Smoluchowski (HS) velocity expressed by
\begin{equation}
\label{Eq:Helmholtz_Smoluchowski} u_\textrm{HS} = \frac{\epsilon \zeta}{\eta} E,
\end{equation}
for which the electric field $E$ equals $2 \zeta/\lambda_\textrm{SAW}$, while $\zeta \equiv \hat{U}$. On the one hand, since ACEOF and SAW-induced EOF share the same physical origin, expression (\ref{Eq:TWACEOF_velmax}) is physically justified but provides unrealistic estimates. On the other hand, expression (\ref{Eq:Helmholtz_Smoluchowski}) provides more realistic estimates, but the simplification leading to it cannot be fully justified. These contradictory estimates obtained from the linearized model call for a closer examination of the interplay between the RC time of the EDL, high electrostatic surface potentials, and the dispersion relation of the SAW. As described in the next section, this can be addressed by a full nonlinear numerical simulation.
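For illustration, the order of magnitude of both estimates can be reproduced with the short Python sketch below. It is purely illustrative and not part of the simulation code; the electrolyte properties are assumed textbook values for water at room temperature and a bulk salinity of roughly $1\:\textrm{mM}$, not necessarily those of Table \ref{Tbl:thermoprops_operation}.
\begin{verbatim}
# Illustrative sketch of the two linearized estimates; material
# properties below are assumed values, not the paper's table.
import numpy as np

eps = 80 * 8.854e-12     # permittivity of water [F/m] (assumed)
eta = 1.0e-3             # dynamic viscosity [Pa s] (assumed)
D = 5.0e-9               # ion diffusivity [m^2/s] (assumed)
lam_D = 9.61e-9          # nominal Debye length [m] (assumed)
c_saw = 3965.0           # SAW phase velocity [m/s]
U_hat = 1.0              # potential amplitude [V]
Lam = 1.0                # 1/(1 + delta_St), no Stern layer

sigma = eps * D / lam_D**2    # electrolyte conductivity [S/m]
C_D = eps / lam_D             # Debye-layer capacitance [F/m^2]
Omega0 = c_saw * C_D / sigma  # constant once k0 = 2*pi*f/c_saw is used

for f in (1e6, 1e8):          # SAW frequency [Hz]
    k0 = 2 * np.pi * f / c_saw
    u_tw = Lam * eps * k0 * U_hat**2 / (2 * eta) * Omega0 / (1 + Omega0**2)
    print(f"TW estimate at {f:.0e} Hz: {u_tw * 1e6:.2f} um/s")

# HS estimate with E = 2*zeta/lambda_SAW and zeta = U_hat at 1 MHz
lam_saw = c_saw / 1e6
u_hs = (eps * U_hat / eta) * 2 * U_hat / lam_saw
print(f"HS estimate at 1 MHz: {u_hs * 1e3:.2f} mm/s")
\end{verbatim}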
\section{Details of the computational model and test cases} \label{Sec:CompModel}
A sketch of the (non-dimensional) computational domain is shown in the upper part of Fig. \ref{Fig:sketch_flowdom}. All three segments $1-3$ have a non-dimensional height of one, where segments $1$ and $3$ are passive segments of length three that are not subjected to a SAW. Along the side walls of these segments (marked by ``$c$''), no-slip,
\begin{equation}
\label{Eq:No-Slip} \partial_{\overline{x}} \overline{\psi} = \partial_{\overline{y}} \overline{\psi} = 0,
\end{equation}
and no-flux,
\begin{equation}
\label{Eq:No-Flux} \overline{\boldsymbol{j}}_\pm \cdot \boldsymbol{n} = 0,
\end{equation}
are applied as boundary conditions, with $\boldsymbol{n}$ denoting the surface normal and
\begin{equation}
\label{Eq:Non-Dim-Flux} -\overline{\boldsymbol{j}}_\pm = \overline{\boldsymbol{\nabla}\:} \overline{n}_\pm \mp \nu \overline{n}_\pm \overline{\boldsymbol{\nabla}\:}\:\overline{\phi}
\end{equation}
denoting the diffusive-electromigrative ion flux vector. Herein, the non-dimensional nabla operator is defined by $\overline{\boldsymbol{\nabla}\:} = (A \partial_{\overline{x}},\partial_{\overline{y}})^\textrm{T}$. At the channel ends (marked by ``$d$''), while still enforcing the no-slip condition, the no-flux condition is replaced by the condition of periodicity of the ion concentrations between both channel ends. However, it was verified that enforcing the no-flux condition instead leads to the same results. Considering that the investigation of SAW-induced flow through a channel is the focus of the present work, it might be surprising that the no-slip boundary condition is used at the channel ends. As will be detailed further below, this is a consequence of the numerical solution approach. Segment $2$ incorporates no-slip and no-flux but piezo-active side walls (marked by ``$a$'' and ``$b$''), where the electric boundary conditions expressed by Eqns. (\ref{Eq:Wallpot_SW}) or (\ref{Eq:Wallpot_TW}) are imposed. While $\lambda_\textrm{SAW}$ itself varies with $f_\textrm{SAW}$ according to the dispersion relation, for every simulation and independent of the frequency the non-dimensional length of segment $2$ equals five, i.e., five SAW wavelengths are included within the computational domain. For all cases, $\overline{\psi}=0$ was set along boundary $a$.
The governing, time-dependent equations (\ref{Eq:NSE_stream_nondim})-(\ref{Eq:NPE_nondim}) along with their boundary conditions were implemented using the partial differential equations (PDE) mode in COMSOL MULTIPHYSICS 5.4 \cite{Comsol2019} and solved on a dense structured but anisotropic grid, which was highly refined along the charged walls to improve the resolution of the local electric field. Quadratic (Lagrangian) shape functions were employed for the spatial discretization, while the constant Newton-Raphson method was used to linearize the non-linear parts of the governing equations. All simulations were carried out on a Dell Precision T7500 workstation with Ubuntu 12.04 LTS as operating system. Appropriate convergence studies were carried out with respect to mesh density and time step size. Since the COMSOL solver employs a variable time-stepping scheme according to a specified relative tolerance, the temporal convergence study was conducted by successively tightening the relative residual during each time step iteration, which forces the iterative solver to reduce the time integration increments.
\begin{figure}
\centerline{\includegraphics[width=12cm]{fig_sketch_flowdom.pdf}}
\caption{In the upper part of this figure, a sketch of the parallel-plate channel, the computational domain, and the employed boundary conditions are shown. Segments $1$ and $3$ are passive segments not subjected to a SAW, employing no-slip boundaries of unspecified electric potential, supplemented by either a no-flux boundary condition (boundaries c) or periodic ion concentrations (boundaries d). Segment $2$ consists of no-slip and no-flux but piezo-active side walls (boundaries a and b). In the lower part, the electric potential $\phi$ (color scale online, gray scale in print), the (polarity-independent) fluid stress $-\boldsymbol{\nabla}\: p - \rho_\textrm{f} \boldsymbol{\nabla}\: \phi$ (white arrows), as well as the corresponding fluid motion (black arrows and black streamlines) are shown for the two (non-dimensional) time instants $\overline{t} = 0.25$ and $\overline{t} = 0.75$.}
\label{Fig:sketch_flowdom}
\end{figure}
With the described stream function-vorticity formulation, it was not possible to simulate SAW-driven EOF in channels with open ends, for which the no-slip condition at the channel ends (boundaries ``$d$'') needs to be replaced by the condition of periodicity with respect to the (axial) flow velocities. In such cases, it was found that an artificial pressure-driven flow is superimposed, which strongly exceeds the EOF. The reason for this is that the advective term in Eq. (\ref{Eq:NSE_stream_nondim}) is typically vanishingly small so that the effective momentum equation is linear. In this case, an artificial pressure-driven flow described by $\overline{\psi}_\textrm{art}$ with $\boldsymbol{\nabla}\:^4 \overline{\psi}_\textrm{art}=0$ can be added to the EOF described by $\overline{\psi}=\overline{\psi}_\textrm{EOF}$, so that $\overline{\psi}=\overline{\psi}_\textrm{EOF}+\overline{\psi}_\textrm{art}$ is still a solution of the governing momentum equation. This problem could only be circumvented by simulating channels with closed ends, for which the SAW-driven EOF leads to a (now physically justified) pressure gradient along the channel and a corresponding (time-averaged) backflow with a Hagen-Poiseuille (HP) profile. For a parallel-plate channel as depicted in Fig. \ref{Fig:sketch_flowdom} and in a non-dimensional form, this is described by
\begin{equation}
\label{Eq:Hagen-Poiseuille_nondim} \overline{\psi}_\textrm{HP} = \frac{1}{2} A \partial_{\overline{x}} \overline{p} \left[\frac{1}{3}(\overline{y}^3-1)-\frac{1}{2}(\overline{y}^2-1) \right],
\end{equation}
where the non-dimensional pressure gradient along the channel is given by $A \partial_{\overline{x}} \overline{p}_\textrm{HP} = \partial_{\overline{y}} \overline{\omega}_{|\overline{y}=0.5}$. The SAW-driven EOF was computed by subsequently subtracting the analytical solution (\ref{Eq:Hagen-Poiseuille_nondim}) from the numerically obtained complete solution, where $\partial_{\overline{y}} \overline{\omega}_{|\overline{y}=0.5}$ was taken as the line-average along the channel center-plane of segment $2$ from the time-averaged numerical simulation. Note that for a pure EOF, this value is identical to zero. Attempts to prevent the artificial pressure-driven flow by imposing $\partial_{\overline{y}} \overline{\omega}_{|\overline{y}=0.5}=0$ along the center plane or along the domain boundaries were not successful.
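As a minimal illustration of this correction procedure (not part of the COMSOL model; array shapes and helper names are assumptions of this sketch), Eq. (\ref{Eq:Hagen-Poiseuille_nondim}) can be evaluated from the center-plane vorticity gradient and subtracted from a time-averaged stream-function field as follows:
\begin{verbatim}
# Sketch of the backflow correction, with assumed array shapes:
# psi_avg, omega_avg are time-averaged fields of shape (len(y), len(x)).
import numpy as np

def psi_hp(y, dpdx_A):
    # HP stream function; dpdx_A stands for A * d(p)/dx
    return 0.5 * dpdx_A * ((y**3 - 1.0) / 3.0 - (y**2 - 1.0) / 2.0)

def axial_pressure_gradient(omega_avg, y, ix_seg2):
    # line average of d(omega)/dy at y = 0.5 over segment 2
    j = np.argmin(np.abs(y - 0.5))
    return np.gradient(omega_avg, y, axis=0)[j, ix_seg2].mean()

def subtract_backflow(psi_avg, omega_avg, x, y, ix_seg2):
    dpdx_A = axial_pressure_gradient(omega_avg, y, ix_seg2)
    return psi_avg - psi_hp(y, dpdx_A)[:, None]
\end{verbatim}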
To allow for a parametric study of $f_\textrm{SAW}$, it is essential for any modeling framework of SAW-induced flow that the domain length can be scaled by a multiple of $\lambda_\textrm{SAW}$, since $f_\textrm{SAW}$ affects $\lambda_\textrm{SAW}$ via the dispersion relation. Otherwise, every value of $f_\textrm{SAW}$ would require a different computational domain if the same number of multiples of $\lambda_\textrm{SAW}$ is always to fit into the domain. In the present study, employing a commercial fluid solver, this restricted the choice of the problem formulation and numerical implementation to the one described above, at the expense of introducing the problem of an artificial pressure-driven flow.
To check the numerical implementation, the simulation code was used to reproduce well-known solutions to relevant EOF problems. First, in the thin EDL limit and for constant, low-valued wall $\zeta$ potentials, it was ensured that at steady-state one obtains the HS velocity (\ref{Eq:Helmholtz_Smoluchowski}) and the corresponding (quasi) plug flow. Second, to check the correct implementation of the time-dependent terms, the numerical work of \citet{Suh:IJNumMethFluids2011} and \citet{Suh:CollSurfA2011} was used. In these studies, the transient flow induced by a sudden increase of the $\zeta$ potential from zero to a constant value is addressed. Starting from uniform ion concentrations, the flow vanishes as soon as the ion concentrations in the wall vicinity follow the Boltzmann distribution such that the ion cloud in the EDL is in a state of mechanical equilibrium. Comparing the domain-averaged absolute flow velocity as a function of time for a wall potential of either $0.2\:\textrm{V}$ or $1\:\textrm{V}$ (as shown in Fig. 2 of Ref. \cite{Suh:CollSurfA2011}), full quantitative agreement was found.
The comparison with early numerical studies on ACEOF is hampered by the circumstance that most of them address systems with larger geometric length scales to allow for an easier experimental validation. For instance, in \citet{Green:PRE2002}, ACEOF is induced by two wall electrodes placed next to each other and actuated by AC voltages in the $\textrm{kHz}$ range, while the transport phenomena due to the non-equilibrium EDL are not resolved. Instead, a thin-double-layer model is used which imposes Eq. (\ref{Eq:ACICEOF_vel}) as an analytical expression for the time-averaged EOF velocity at the boundary between the diffuse layer and the bulk. The effective Debye parameters characteristic for such studies are of the order of $\overline{\kappa}_0 = {\cal O}(5 \times 10^3)$. The simulation code underlying our work fully resolves the non-equilibrium processes in the EDL both in space and time. To be applicable to larger systems, a high mesh density with strong grid anisotropy needs to be employed, in addition to high-order (at least cubic) interpolation functions for the discretization. Since \citet{Green:PRE2002} solely plot the simulated time-averaged streamlines without providing a quantitative flow characterization, only a qualitative comparison was possible, which was conducted at a frequency of $1\:\textrm{kHz}$ (Fig. 11 (c) in Ref. \cite{Green:PRE2002}) and suggested good agreement. For quantitative tests, a comparison with the system simulated by \citet{Pribyl:InTech2008} was undertaken. In that work, ACEOF is induced by AC voltages with frequencies ranging from $10^1$ to $10^5\:\textrm{Hz}$, while the fully resolved EDL is characterized by an effective Debye parameter of $100$. For example, for a selected AC frequency of $100\:\textrm{Hz}$, the evolution of the cross-averaged axial velocity in time as shown in Fig. 2 of Ref. \cite{Pribyl:ISEHD2006} was fully reproduced with the present code.
All simulations discussed in the following start at $\overline{t}=0$ with a uniform bulk ion concentration of $\overline{n} = (\overline{n}_+ + \overline{n}_-)/2 = 1$. As such an initial bulk ion distribution at non-vanishing $\zeta$ potentials implies mechanical non-equilibrium, a transient osmotic flow is observed even if a time-independent wall potential is applied. This transient flow vanishes with time, and in the case of applying an AC voltage a stable periodic flow remains \cite{Pribyl:InTech2008}. To obtain reproducible time-averaged results, all simulations consisted of two parts: The first was carried out until the flow field at $\overline{t}=\overline{t}_2$ was indistinguishable from the one at $\overline{t}=\overline{t}_1=\overline{t}_2 - 1$, i.e., one time period before. In this context, the existence of a flow field that is synchronized with the SAW indicates that subharmonic modes are not present, permitting averaging over one SAW time period. Subsequently, a second simulation of the same case for a time period of $\Delta \overline{t}=1$ was conducted, with the results obtained from the first one at $\overline{t}=\overline{t}_2$ as initial values. Intermediate solutions within $\Delta \overline{t}$ were saved at narrow increments and subsequently used to compute the time-averaged flow field by numerical integration with high accuracy.
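The periodicity criterion terminating the first simulation stage can be summarized by the following sketch (illustrative only; the tolerance value is an assumption):
\begin{verbatim}
# Sketch of the stopping criterion: stream-function snapshots taken one
# SAW period apart must be indistinguishable.
import numpy as np

def reached_periodic_state(psi_t2, psi_t1, tol=1e-6):
    # psi_t1, psi_t2: fields at t1 = t2 - 1 and t2
    return np.max(np.abs(psi_t2 - psi_t1)) <= tol * np.max(np.abs(psi_t2))
\end{verbatim}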
\section{Results and discussion} \label{Sec:param}
\subsection{Standing waves} \label{Sec:param_sd}
First, cases are considered in which the $\zeta$ potential along the walls of the channel mid segment $2$ (see Fig. \ref{Fig:sketch_flowdom}) follows a standing SAW expressed by Eq. (\ref{Eq:Wallpot_SW}) with the voltage amplitude amounting to $\hat{U} = 1.25\:\textrm{V}$. Either a single wave (SW) on one side or two waves (DW) of the same frequency but with a phase shift $\Delta \varphi$ on both sides of that channel segment are employed. In Fig. \ref{Fig:STDSAW_vel_rms}, the space- and time averaged root-mean-square (rms) velocity defined by
\begin{equation}
\label{Eq:vel_rms} \langle v \rangle_\textrm{rms} = \frac{u_0}{5} \int^1_0 \int^5_0 \sqrt{\big(\partial_{\overline{x}} \langle\overline{\psi} \rangle\big)^2 + \big(\partial_{\overline{y}} \langle \overline{\psi} \rangle\big)^2}\: d \overline{x}\: d \overline{y},
\end{equation}
is shown as a function of the applied SAW frequency, with $\Delta \varphi$ as a parameter and where
\begin{equation}
\label{Eq:time_avg} \langle\overline{\psi} \rangle = \int^1_0 \overline{\psi}d\overline{t}
\end{equation}
denotes the time average of the stream function over one period. While $\langle v \rangle_\textrm{rms}$ is non-vanishing, there is no net axial velocity along the channel, i.e., at every instant in time, the axial flow velocity integrated across the channel width is identical to zero. For a qualitative discussion, the streamlines at $\overline{t} = 0.25$ and $10\:\textrm{MHz}$ are shown on the RHS panel of Fig. \ref{Fig:STDSAW_vel_rms}, where the plotted domain height equals $h$, while its length equals $2 \lambda_\textrm{SAW}$. Qualitatively, the time-averaged streamlines for each case look identical to the instantaneous ones (not shown), except for a very brief moment in every quarter cycle when the instantaneous vortices change their sense of rotation. As a representative example, $\langle\overline{\psi} \rangle$ is shown for $f_\textrm{SAW} = 10\:\textrm{MHz}$ and $\Delta \varphi = \pi/2$ on the top panel of Fig. \ref{Fig:STDSAW_vel_rms}. From the stream function plots, it can be seen that the SAW-induced EOF may cause two fundamentally different flow patterns: For a SW or a DW with $\Delta \varphi = \pi/2$, the vortices stretch over the complete channel width, while for a DW with either $\Delta \varphi = 0$ or $\pi$ the vortices are symmetric with respect to the channel center plane. This implies that in these cases twice as many vortices are present. In comparison to the SW case, for which the vortices are aligned vertically to the channel center plane, the vortices of the DW case with $\Delta \varphi = \pi/2$ are tilted, with the tilt angle being proportional to $A$. For the latter case, $\langle v \rangle_\textrm{rms}$ is up to three times larger than for the SW case. The values of $\langle v \rangle_\textrm{rms}$ for the DW case with $\Delta \varphi = 0$ are negligibly small. As implied by expression (\ref{Eq:Stern}), ICEOF is mainly driven by electric currents into the EDL. Since for $\Delta \varphi = 0$ the opposing channel walls are kept at a potential distribution that is symmetric to the center plane, such cross currents are minimized and no measurable ICEOF is achieved. The largest value of $\langle v \rangle_\textrm{rms}$ is obtained at $10\:\textrm{MHz}$ and $\Delta \varphi=\pi/2$, amounting to $65\:\mu \textrm{m s}^{-1}$. By contrast to the expectation discussed before in the context of Eq. (\ref{Eq:TWACEOF_velmax}), there seems to exist an optimal actuation frequency, which lies between $1$ and $10\:\textrm{MHz}$ in all cases studied. In earlier work on ICEOF and ACEOF, larger systems were considered for which $t^{-1}_\textrm{RC}$ is in the kHz-range. Since narrow channels are considered herein, $t_\textrm{RC}$ is significantly smaller than in these previous cases. For $h = 480\:\textrm{nm}$, $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, and $D = 5 \times 10^{-9}\:\textrm{m}^2\:\textrm{s}^{-1}$, one finds $t^{-1}_\textrm{RC} = 1.1\:\textrm{MHz}$. Hence, the maximum in the average velocity can be explained by the characteristic time scale to charge the electric double layer. Figure \ref{Fig:STDSAW_vel_rms} underpins not only the importance of utilizing two SAWs on the opposing channel walls instead of just a single SAW, but also the sensitivity of the system with respect to $\Delta \varphi$: Given that no electric current enters or leaves the computational domain, the electric charge needed to rebuild the EDL on one side of the channel originates from the EDL on the opposite side.
Consequently, the SAW-induced EOF goes along with large periodic currents connecting the EDLs on the opposite sides of the channel.
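For illustration, the post-processing behind Eqns. (\ref{Eq:vel_rms}) and (\ref{Eq:time_avg}) can be sketched as follows (a minimal sketch with assumed array shapes, not the actual post-processing code):
\begin{verbatim}
# Post-processing sketch: snapshots psi[k] of shape (len(y), len(x))
# saved over one period (t in [0, 1]) are averaged in time and
# integrated over segment 2 (x in [0, 5], y in [0, 1]).
import numpy as np

def v_rms(psi_snapshots, t, x, y, u0):
    psi_avg = np.trapz(psi_snapshots, t, axis=0)   # <psi> over one period
    dpsi_dy, dpsi_dx = np.gradient(psi_avg, y, x)
    speed = np.sqrt(dpsi_dx**2 + dpsi_dy**2)
    return u0 / 5.0 * np.trapz(np.trapz(speed, x, axis=1), y)
\end{verbatim}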
\begin{figure}
\centerline{\includegraphics[width=12cm]{fig_stdsaw_vel_rms.pdf}}
\caption{Space- and time-averaged rms velocity $\langle v \rangle_\textrm{rms}$ within the channel mid segment $2$ defined in Fig. \ref{Fig:sketch_flowdom} are shown as a function of the SAW frequency. Either a single (SW) SAW is applied to the lower wall or two (DW) SAWs of the same frequency but with a phase shift $\Delta \varphi$ are applied on both sides of that channel segment. A $\zeta$ potential distribution was imposed as given by Eq. (\ref{Eq:Wallpot_SW}), where the voltage amplitude amounts to $\hat{U} = 1.25\:\textrm{V}$. Representative streamlines for the different SW and DW configurations are shown in the panels on the right hand side of the $x$-$y$ diagram. The time-averaged stream function $\int_0^1 \overline{\psi} d\overline{t}$ is exemplarily shown for $f_\textrm{SAW} = 10\:\textrm{MHz}$ and $\Delta \varphi = \pi/2$ on the top panel. The domain height, the nominal EDL thickness, and the bulk ion concentration equal $h = 480\;\textrm{nm}$, $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, and $n_0 = 1\:\textrm{mM}$, respectively. The plotted domain length equals $2 \lambda_\textrm{SAW}$, where $\lambda_\textrm{SAW} = c_\textrm{SAW}/f_\textrm{SAW}$ with $c_\textrm{SAW} = 3965\:\textrm{m s}^{-1}$. For reference, $t^{-1}_\textrm{RC} = 1.1\:\textrm{MHz}$. The lines connecting the data points in the $x$-$y$ diagram are guides to the eye.}
\label{Fig:STDSAW_vel_rms}
\end{figure}
For comparison, a number of simulations were conducted where the Stern layer was implemented via relation (\ref{Eq:Stern_nondim}) (not shown). The Stern parameter $\delta_\textrm{St}$ was varied from $0.1$ to $10$. It was found that with increasing $\delta_\textrm{St}$ the Stern layer reduces the effective $\zeta$ potential, weakening the SAW-induced EOF. For $\delta_\textrm{St} = 0.1$ and below, $\langle v \rangle_\textrm{rms}$ was practically unaffected by the presence of the Stern layer. Furthermore, given the large electric potentials involved, it was found that invoking the Debye-H\"uckel (DH) approximation to calculate the voltage drop across the Stern layer from (\ref{Eq:Stern}) by the simplified expression
\begin{equation}
\label{Eq:Stern_volt_drop} \phi_{|y=0} \approx \frac{\hat{U}}{1+\delta_\textrm{St}}
\end{equation}
is inaccurate and of little practical relevance.
For the following discussion of traveling waves, it is emphasized that for a standing SAW the direction of rotation of the vortices depends on the sign of the temporal change of the local electric potential. For an increasing absolute local value, flow is directed towards such areas, while for a decreasing absolute local value, flow is directed away from them. This implies that in each cycle of the SAW, the vortices change their direction of rotation four times. During each vortex reversal, additional vortices emerge from areas close to the walls, which subsequently replace the previous counter-rotating vortices. This process is illustrated in Fig. \ref{Fig:vortex_reversal}, where the height of the plotted domain equals $h$, while its length equals $\lambda_\textrm{SAW}$. From these plots, it is apparent that there are always four vortex pairs (each pair consisting of two counter-rotating vortices) within a single $\lambda_\textrm{SAW}$, which corresponds to the four sections of a sinusoidal wave in which the electric surface potential either increases or decreases.
\begin{figure}
\centerline{\includegraphics[width=12cm]{fig_stdsaw_vortex_reversal.pdf}}
\caption{Time evolution of the vortex pattern upon sign change of $d_t U_\textrm{SD}$ for the DW case with $f_\textrm{SAW} = 10\:\textrm{MHz}$, $\Delta \varphi=\pi$, and $\hat{U}=1.25\:\textrm{V}$. The nominal EDL thickness, the bulk ion concentration, as well as the plotted domain height and length equal $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, $n_0 = 1\:\textrm{mM}$, $h = 480\:\textrm{nm}$, and $\lambda_\textrm{SAW} = 0.396\:\textrm{mm}$, respectively.}
\label{Fig:vortex_reversal}
\end{figure}
A number of simulations were conducted where the SAW wavelength along the upper channel wall differs from the one at the bottom wall, potentially causing (frequency) beating effects (not shown). In all cases studied, the time-averaged velocities were always lower than for the corresponding case with identical wavelengths on both walls. Hence, if increasing the effective flow velocities is the key target, the SAW frequencies and wavelengths should be identical on both walls.
\subsection{Traveling waves} \label{Sec:param_tv}
The flow profiles induced by traveling SAWs appear qualitatively identical to those shown in Fig. \ref{Fig:STDSAW_vel_rms} as insets, except that the vortex pairs are not stationary but travel along the channel. Hence, by contrast to a standing SAW, a traveling SAW induces a net axial flow. Such flow profiles differ from those observed in TW-ACEOF, where the streamlines meander along the channel due to the presence of the electrodes but are not closed to form vortices \cite{Ramos:JAP2005,Yeh:PRE2011}. The reason for this is that in the latter case the electrode spacing is such that $A={\cal O}(1)$, while in the present case $A \ll 1$.
Figure \ref{Fig:TVLSAW_vel_rms} (a) shows the net axial velocity defined by
\begin{equation}
\label{Eq:vel_ax} \langle u \rangle_\textrm{ax} = \frac{u_0}{5} \int^1_0 \int^5_0 |\partial_{\overline{y}} \langle\overline{\psi} \rangle - \partial_{\overline{y}} \overline{\psi}_\textrm{HP}| d \overline{x} d \overline{y}
\end{equation}
as a function of $f_\textrm{SAW}$ when two traveling waves move along both walls of channel segment $2$ with a phase shift of either $\Delta \varphi =\pi/2$ or $\pi$. As discussed before, due to the emergence of an artificial pressure-driven flow in simulations of a channel with open ends exposed to a traveling SAW, all simulations were conducted for a channel with closed ends. Since in that case the traveling SAW leads to a (physically justified) pressure-driven backflow, to arrive at the net axial flow velocity, in Eq. (\ref{Eq:vel_ax}) the corresponding HP flow profile is subtracted from the axial flow velocity computed numerically. The time-averaged stream function $\langle \overline{\psi} \rangle$ is calculated by the numerical simulations for a closed channel, while $\overline{\psi}_\textrm{HP}$ is given by Eq. (\ref{Eq:Hagen-Poiseuille_nondim}), with $A \partial_{\overline{x}} \overline{p}_\textrm{HP} = \partial_{\overline{y}} \overline{\omega}_{|\overline{y}=0.5}$ obtained from the numerical simulations as described before. For the simulations shown, $h=480\:\textrm{nm}$, while $\lambda_\textrm{D}=9.61\:\textrm{nm}$, and $\hat{U} = 1.25\:\textrm{V}$. The largest axial mean velocity is obtained for $f_\textrm{SAW} = 10\:\textrm{MHz}$ and $\Delta \varphi = \pi$, amounting to more than $0.2\:\textrm{mm s}^{-1}$. This is almost three times more than the largest value of $\langle v \rangle_\textrm{rms}$ obtained for standing waves under the same conditions. The reason for this discrepancy is the absence of vortex reversals in the TV wave cases. While for the SD wave cases the vortex pairs change their rotation direction four times in each cycle, the vortex pairs in the TV wave cases keep their direction of rotation within a reference frame co-moving with $\langle u \rangle_\textrm{ax}$.
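A companion sketch for Eq. (\ref{Eq:vel_ax}), reusing the illustrative helper functions \texttt{psi\_hp} and \texttt{axial\_pressure\_gradient} from the sketch in Section \ref{Sec:CompModel} (again with assumed array shapes), reads:
\begin{verbatim}
# Sketch: subtract the HP backflow, then average the axial velocity
# magnitude over segment 2.
import numpy as np

def u_ax(psi_avg, omega_avg, x, y, u0, ix_seg2):
    dpdx_A = axial_pressure_gradient(omega_avg, y, ix_seg2)
    u = (np.gradient(psi_avg, y, axis=0)
         - np.gradient(psi_hp(y, dpdx_A), y)[:, None])
    return u0 / 5.0 * np.trapz(np.trapz(np.abs(u), x, axis=1), y)
\end{verbatim}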
In Fig. \ref{Fig:TVLSAW_vel_rms} (b), $\langle u \rangle_\textrm{ax}$ is shown as a function of $\overline{\kappa}_0 = h/\lambda_\textrm{D}$, with the nominal Debye length as a parameter. The latter is adjusted by employing different bulk ion concentrations in the range of $0.1\:\textrm{mM} \leq n_0 \leq 10\:\textrm{mM}$. For all simulations shown in this plot, $f_\textrm{SAW} = 10\:\textrm{MHz}$ and $\Delta \varphi = \pi$. In general, strong confinement as expressed by $\overline{\kappa}_0 \rightarrow 1$ leads to vanishing SAW-induced EOF, while those $\overline{\kappa}_0$ values where the peak velocity for a specific $n_0$ occurs are found to differ substantially from case to case and strongly depend on $\lambda_\textrm{D}$. Smaller $\lambda_\textrm{D}$ lead to larger $\langle u \rangle_\textrm{ax}$, reaching more than $1.3\:\textrm{mm s}^{-1}$ at $\lambda_\textrm{D} = 3.04\:\textrm{nm}$. This behavior can be explained by the enhanced gradients of the electric potential and charge density, causing an enhanced EOP. Conducting simulations with $\lambda_\textrm{D}$ reduced even further by increasing $n_0$ was not considered useful, as one would enter the regime of ion crowding. Since the Poisson-Nernst-Planck model considers the ions as point charges and is strictly valid only in the dilute limit, the validity of corresponding results would be questionable \cite{Bazant:AdvCollIntScie2009}. Nevertheless, using more accurate models incorporating such effects of ion crowding is an interesting path to be pursued further. Note in this context that any model describing the thin EDL limit with a framework based on the Poisson-Boltzmann theory is unsuitable, as the EOP vanishes in this case.
\begin{figure}
\centerline{\includegraphics[width=12cm]{fig_tvlsaw_vel_rms.pdf}}
\caption{EOF induced by traveling SAWs moving along the walls of channel segment $2$. In panel (a), the time- and space-averaged axial velocity $\langle u \rangle_\textrm{ax}$ as a function of $f_\textrm{SAW}$ is shown, with the phase shift $\Delta \varphi$ as a parameter, while $h=480\:\textrm{nm}$, $\lambda_\textrm{D}=9.61\:\textrm{nm}$, and $n_0 = 1\:\textrm{mM}$. For reference, $t^{-1}_\textrm{RC} = 1.1\:\textrm{MHz}$. In panel (b), $\langle u \rangle_\textrm{ax}$ as a function of $h/\lambda_\textrm{D}$ is shown for $\Delta \varphi = \pi$, with the nominal Debye length $\lambda_\textrm{D}$ as a parameter, which corresponds to the bulk salinity noted next to it. For both plots $\hat{U} = 1.25\:\textrm{V}$. The lines connecting the data points are guides to the eye.}
\label{Fig:TVLSAW_vel_rms}
\end{figure}
Figure \ref{Fig:TVLSAW_uax} shows the time-averaged axial velocity profiles $\langle \overline{u}(\overline{y})\rangle = \langle u(y/h) \rangle/u_0$, corrected for the HP backflow, with the frequencies and phase shifts as used in Fig. \ref{Fig:TVLSAW_vel_rms} (a) being the varied parameters. All other parameters are identical to those employed in the latter figure. In Fig. \ref{Fig:TVLSAW_uax} (a), $\Delta \varphi = \pi$ is used. All cases resemble the classical EOF plug-like flow profile, except for the transition between the flow inside the EDL and the bulk. Here, distinct peak velocities within the EDL can be observed, implying a lagging of the bulk flow that depends on the SAW frequency. This effect becomes stronger with higher frequency, so that for $100\:\textrm{MHz}$ the peak velocity exceeds the flow velocity at the channel center plane by up to $22\:\%$, while for frequencies lower than $100\:\textrm{kHz}$ it disappears. Hence, it is an inertial effect, where for higher frequencies the flow cannot attain a quasi-steady state before the sign of the surface potential changes again. Figure \ref{Fig:TVLSAW_uax} (b) shows the results for $\Delta \varphi = \pi/2$. One observes that for increasing $f_\textrm{SAW}$ the axial net velocity profile becomes increasingly asymmetric with respect to the channel center plane. The instantaneous flow pattern for this case looks qualitatively similar to the corresponding SD case as shown in Fig. \ref{Fig:STDSAW_vel_rms}. The vortex pairs occupy not only the complete channel cross section but are also tilted; i.e., they are not mirror-symmetric with respect to the channel center plane. In addition, the clockwise and counter-clockwise rotating vortices differ in shape and magnitude. Time-averaging over such an instantaneous flow profile leads to the asymmetric net flow profile shown in Fig. \ref{Fig:TVLSAW_uax} (b).
\begin{figure}
\centerline{\includegraphics[width=12cm]{fig_tvlsaw_uax.pdf}}
\caption{Time-averaged axial velocity profiles $\langle \overline{u}(\overline{y})\rangle = \langle u (y/h) \rangle/u_0$ with $f_\textrm{SAW}$ as parameter. In panel (a) $\Delta \varphi = \pi$ while in panel (b) $\Delta \varphi = \pi/2$. In all simulations, $h = 480\:\textrm{nm}$, $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, $n_0 = 1\:\textrm{mM}$, $\hat{U} = 1.25\:\textrm{V}$, and $u_0 = 11.9\:\mu\textrm{m s}^{-1}$. The lines connecting the data points are guides to the eye.}
\label{Fig:TVLSAW_uax}
\end{figure}
Some simulations were conducted (not shown) where the SAW on the top wall travels in the opposite direction to the one on the lower wall. For such a configuration, no axial net flow can be observed. Instead, a flow pattern emerges which resembles those found for two standing SAWs with a phase shift of $\Delta \varphi = \pi/2$. While for the latter case the vortices change their sense of rotation four times in each cycle, the vortices caused by two counter-traveling SAWs span the complete channel cross section and keep their rotation direction. Consequently, large $\langle v \rangle_\textrm{rms}$ values of the order of $0.2\:\textrm{mm s}^{-1}$ ($10\:\textrm{MHz}$, $h=480\:\textrm{nm}$, $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, $\hat{U} = 1.25\:\textrm{V}$) can be obtained. Hence, if quasi-stationary vortex flow is desired, two counter-propagating waves instead of two standing waves should be used.
It is known that the standard theory of ICEOF often overpredicts flow velocities obtained experimentally \cite{Sugioka:PRE2016}, while ACEOF experiments conducted at high frequencies and voltages comparable to those used in this study indicate that one may observe flow reversals that are not captured by the standard Poisson-Nernst-Planck (PNP) framework underlying this study \cite{Bazant:AdvCollIntScie2009}. The reasons for these discrepancies are still the focus of active research. Possible explanations include phase-delay \cite{Sugioka:PRE2016} and excluded-volume effects \cite{Bazant:AdvCollIntScie2009}. While the former effect is included in our work, for the latter, the ion distributions under excluded-volume effects in non-equilibrium ICEOF scenarios are frequently postulated to follow Fermi-Dirac statistics. Since the charge density would still be a function of the electric potential as the only variable parameter, describing the ion density in a non-equilibrium situation with an equilibrium distribution function appears to be questionable. According to expression (\ref{Eq:EOP}), the EOP is zero for such a case, i.e., no flow, including flow reversals, can develop. Hence, the explanation of flow reversal by excluded-volume effects still requires further clarification. In addition, the electrode and counter-electrode used in ACEOF experiments exhibiting flow reversals typically differ in size \cite{Bazant:AdvCollIntScie2009}. This is not an issue in this work. Finally, other authors have suggested that the flow reversal is due to the combined effects of non-identical mobilities of the involved ion species and Faradaic currents at the electrodes \cite{Gonzalez:PRE2010}. While these effects are important and relevant, we think that further detailing the model in an attempt to capture these issues would go beyond the basic scope of this study, namely to address electrokinetic effects and the affiliated fundamental flow physics induced by SAWs that cause non-negligible EOF particularly in nanochannels. For this reason, we leave the problem of flow reversals to future studies.
Figure \ref{Fig:TVLSAW_uax_zeta_sweep} shows the time-averaged axial velocity $\langle u \rangle_\textrm{ax}$, averaged over the channel cross section, as a function of the surface potential amplitude $\hat{U}$ and with $f_\textrm{SAW}$ as a parameter. All cases follow the quadratic dependence one may expect from Eq. (\ref{Eq:TWACEOF_velmax}), with the largest velocities developing at $f_\textrm{SAW} = 10\:\textrm{MHz}$. This plot is another demonstration that the physical mechanism underlying SAW-EOF is similar to that of ACEOF. It also suggests that for even higher potential amplitudes, which are not uncommon in practical realizations of SAWs, significantly higher flow velocities than those reported in this work may be induced.
\begin{figure}
\centerline{\includegraphics[width=6.5cm]{fig_tvlsaw_vel_rms_zeta.pdf}}
\caption{Time-averaged axial velocity $\langle u \rangle_\textrm{ax}$ as a function of the surface potential amplitude $\hat{U}$ and with $f_\textrm{SAW}$ as parameter, while $\Delta \varphi = \pi$, $h = 480\:\textrm{nm}$, $\lambda_\textrm{D} = 9.61\:\textrm{nm}$, and $n_0 = 1\:\textrm{mM}$. The lines connecting the data points are guides to the eye.}
\label{Fig:TVLSAW_uax_zeta_sweep}
\end{figure}
\section{Conclusions}
In this work, the electroosmotic flow induced in an aqueous solution by surface acoustic waves standing or traveling along (piezo-active) walls of a narrow parallel-plate channel has been investigated numerically. The physical mechanism is similar to traveling wave ACEOF, but with no electrodes at the channel walls. For standing waves, it was seen that vortex pairs develop that change their sense of rotation four times per cycle. The frequent flow reversal implies a noticeable but small time-averaged rms velocity of ${\cal O}(10^1)\:\mu\textrm{m s}^{-1}$ that is maximized when the waves at the opposing channel walls have an identical frequency but a phase shift of $90^\circ$. For traveling waves, it was found that similar vortex pairs develop that, however, do not change their sense of rotation. Instead, they move along the channel, leading to a net flow that scales quadratically with the voltage amplitude. Due to the absence of periodic reversals of the vortex pairs, higher net velocities can be generated that can be of ${\cal O}(10^{-1})\:\textrm{mm s}^{-1}$. For frequencies of the same order of magnitude as the inverse of the nominal RC-time of the electric double layer, transport is maximized if acoustic waves of identical frequency but phase-shifted by $180^\circ$ are imposed on both channel walls facing each other. This maximizes the electric current between the oppositely charged Debye layers on these walls. Conventionally, for mm-sized flow domains, surface acoustic waves may generate flow velocities of ${\cal O}(10^{1})\:\textrm{cm s}^{-1}$ by means of acoustic streaming. However, this actuation method is inefficient for narrow nanometer-wide channels, so that for such cases the EOF described in this work may be the dominant mode of transport. Given that it does not require any elaborate wiring inside of the channel, it may be an interesting approach to drive liquids through narrow confinements.
\section{Acknowledgment}
Financial support by the German Research Foundation (DFG) through Grant No. HA 2696/42-1 is gratefully acknowledged.
\section{Introduction}
Deep neural networks (DNNs) have achieved state-of-the-art accuracy on various computer vision tasks such as image classification \cite{krizhevsky2012imagenet,he2016deep}, but at the expense of extremely high computational and storage complexity, e.g., ResNet-18 \cite{he2016deep} needs $\sim 10^{12}$ 1-b full adders (FAs) and $3.74\times 10^{8}$ bits of activation and weight storage to achieve an accuracy of 70\% on the ImageNet dataset. These high computational and storage costs inhibit the deployment of such DNNs on resource-constrained Edge devices. As a result, there is much interest in designing low-complexity DNNs without compromising their accuracy.
There are two distinct approaches for reducing DNN complexity: 1) model compression \cite{han2015learning} and quantization \cite{hubara2016binarized,rastegari2016xnor} of complex networks, and 2) the design of lightweight networks from scratch, e.g., MobileNet \cite{howard2017mobilenets,sandler2018mobilenetv2}.
Model compression and quantization methods rely on the intrinsic over-parameterization in complex networks to reduce their complexity. Such methods have proved to be very effective in reducing network complexity with negligible impact on its accuracy, e.g., ternary quantization of ResNet-18 weights \cite{yang2019quantization} reduces its computational and storage complexity by $88\%$ and $74\%$, respectively, at the expense of a drop in accuracy from $70.3\%$ to $69.1\%$.
In the second approach, the design of lightweight networks such as MobileNet \cite{howard2017mobilenets,sandler2018mobilenetv2}, SqueezeNet \cite{iandola2016squeezenet}, ShuffleNet \cite{zhang2018shufflenet}, ConDenseNet \cite{huang2018condensenet} have also shown tremendous success. Such networks exploit algorithmic properties such as factorizability of convolutions and utilize either $1\times 1$ convolutions (SqueezeNet), grouped convolutions (ShuffleNet, ConDenseNet), or both (MobileNet). For example, MobileNetV1 \cite{howard2017mobilenets} achieves comparable (or even higher) accuracy than its ResNet-18 floating-point (FP) counterpart but at a computational and storage complexity that are $3\times$ and $7\times$ lower, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\columnwidth]{figures/comparison.pdf}%
\end{center}
\caption{The Top-1 accuracy on ImageNet vs. computational cost for MobileNetV1 achieved by state-of-the-art quantization methods (RQ \cite{louizos2018relaxed}, UNIQ \cite{baskin2018uniq}, and IAO \cite{jacob2018quantization}). Our proposed method DBQ simultaneously achieves the highest accuracy and the lowest complexity.}%
\label{fig:motivation}%
\end{figure}
In contrast, not much work has been done on model compression or quantization of lightweight networks, and for a good reason -- such networks already have little redundancy, leaving much less room for complexity reduction. Existing works \cite{louizos2018relaxed,wang2019haq,baskin2018uniq,jacob2018quantization,sheng2018quantization} that quantize lightweight networks use fixed-point quantization with relatively high bitwidths (see Fig.~\ref{fig:motivation}), which offer limited reductions in complexity. In contrast, aggressive quantization schemes such as binarization \cite{hubara2016binarized,rastegari2016xnor} or ternarization \cite{zhu2016trained,li2016ternary} have been benchmarked on over-parameterized networks. In fact, ternarizing MobileNetV1 leads to a catastrophic drop in accuracy from $72.12\%$ to $66.45\%$ on ImageNet, as we show in Section~\ref{sec:imagenet}. In order to improve the performance of ternarized models while leveraging the simplicity of ternary-based arithmetic, one can construct a non-uniform quantizer as linear combinations of ternary values. Such a formulation has already been proposed in the context of binarized neural networks \cite{zhang2018lq,lin2017towards}; however, the training algorithms involved are: 1) extremely inefficient to implement; 2) prone to sub-optimal results due to gradient mismatch issues; and 3) benchmarked only on over-parameterized networks.
To this end, our work is the \textit{first} to tackle the daunting task of aggressively quantizing lightweight networks, such as MobileNetV1 \cite{howard2017mobilenets}, MobileNetV2 \cite{sandler2018mobilenetv2}, and ShuffleNetV2 \cite{ma2018shufflenet}, using multiple ternary branches. We propose an efficient and fully differentiable multiple ternary branch quantization algorithm (DBQ). For MobileNetV1 on ImageNet, DBQ achieves an accuracy $2\%$ higher than state-of-the-art quantization methods with a complexity that is $3.5\times$ lower, as shown in Fig.~\ref{fig:motivation}. This represents an overall reduction of $24.5\times$ compared to FP with a $1.2\%$ drop in accuracy.
Specifically, our contributions are:
\begin{enumerate}
\item We are the \textit{first} to successfully ternarize lightweight networks (MobileNetV1, MobileNetV2, ShuffleNetV2) on ImageNet. This result is achieved by using DBQ with two ternary branches.
\item We present the \textit{first fully differentiable} branched quantization algorithm (DBQ) for DNNs requiring minimal training overhead.
\item We show that DBQ outperforms state-of-the-art methods in both accuracy and computational cost. Compared to the state-of-the-art quantization method RQ \cite{louizos2018relaxed}, DBQ drastically improves the Top-1 accuracy of MobileNetV1 on ImageNet from $61.50\%$ to $70.92\%$ at iso-model size, accompanied by a $19\%$ reduction in computational complexity.
\item For lightweight networks tackling real world applications, we show that DBQ with two ternary branches offers the best (pareto-optimal) accuracy-complexity trade-off compared to using one ternary branch with higher number of channels, at iso-model size.
\end{enumerate}
\begin{wraptable}{R}{0.45\columnwidth}
\begin{center}
\resizebox{0.45\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c c c c }
\clineB{1-4}{2.5}
\textbf{Layer Type} & \textbf{Mults} [\%]& \textbf{Adds} [\%]& \textbf{Params} [\%] \\\clineB{1-4}{2.5}
FL & $1.89$ & $1.83$ & $0.02$ \\\hline
DW & $3.03$ & $2.72$ & $1.05$ \\\hline
PW & $\mathbf{94.02}$ & $\mathbf{94.37}$ & $\mathbf{74.19}$ \\\hline
FC & $0.18$ & $0.18$ & $24.22$ \\\hline
PL & $0$ & $0.01$ & $0$ \\\hline
BN & $0.88$ & $0.89$ & $0.52$ \\\hline
\end{tabular}
}
\end{center}
\caption{The number of multiplications, additions, and parameters required by each layer type: first layer (FL), depthwise (DW), pointwise (PW), fully connected (FC), pooling layer (PL), and batch normalization (BN), for a single inference using MobileNetV1.}
\label{tab:mobilenet-stats}
\end{wraptable}
\section{Related Work}
Reducing DNN complexity via quantization has been an active area of research over the past few years. A majority of such works either train the quantized network from scratch \cite{zhu2016trained,zhang2018lq,li2016ternary,hubara2016binarized,rastegari2016xnor,sakr2018true} or fine-tune a pre-trained model with quantization-in-the-loop \cite{jacob2018quantization,louizos2018relaxed,wang2019haq,yang2019quantization,baskin2018uniq,zhou2018explicit}. Where retraining is not an option, \cite{sakr2017analytical} provides analytical guarantees on the minimum precision requirements of a pre-trained FP network given a budget on the accuracy drop from FP. Training-based quantization works fall into two classes of methods: 1) estimation-based methods \cite{zhang2018lq,lin2017towards,li2016ternary,wang2019haq,jacob2018quantization}, where the full-precision weights and activations are quantized in the forward path, and gradients are back-propagated through a non-differentiable quantizer function via a gradient estimator such as the Straight Through Estimator (STE) \cite{bengio2013estimating}; and 2) optimization-based methods, where gradients flow directly from the full-precision weights to the cost function via an approximate differentiable quantizer \cite{yang2019quantization,louizos2018relaxed,sakr2018true}, or by including an explicit quantization error term in the loss function \cite{hou2018loss,zhou2018explicit}. Applications of these methods can be categorized into three clusters:
\textbf{Aggressive Quantization}: Methods such as binarization and ternarization have been highly successful for reducing DNN complexity. BinaryNets \cite{hubara2016binarized} quantize both weights and activations of DNNs to $\pm 1$, while XNORNets \cite{rastegari2016xnor} use a full-precision scalar to represent binarized weights in order to improve accuracy. Ternary Weight Networks (TWN) \cite{li2016ternary} quantize weights to $\{-1,0,1\}$ and leverage the resulting weight sparsity due to the '$0$' state to skip operations. Trained Ternary Quantization (TTQ) \cite{zhu2016trained} proposes learning the ternary scales via back-prop. However, a major drawback of such methods is the resulting accuracy loss especially when applied to lightweight DNNs such as MobileNet.
In Section \ref{sec:imagenet}, we show that ternarizing only the pointwise layers in MobileNetV1 on ImageNet, which correspond to $\sim 94\%$ of the total multiplications/additions (Table~\ref{tab:mobilenet-stats}), incurs a massive accuracy loss ($\sim 5.67\%$) compared to the full-precision baseline.
Hence, such methods are typically benchmarked on simple datasets such as CIFAR-10, or use over-parameterized models such as AlexNet \cite{krizhevsky2012imagenet} or ResNet-18 \cite{he2016deep} on ImageNet. In contrast, our proposed DBQ method is able to aggressively quantize the lightweight MobileNetV1 architecture with minimal loss in Top-1 accuracy (Fig.~\ref{fig:motivation}).
\textbf{Non-uniform Quantization}: These methods seek to improve the performance of binarized/ternarized models while leveraging their arithmetic simplicity, e.g., LQNets \cite{zhang2018lq} and ABCNets \cite{lin2017towards}, by quantizing weights and activations as linear combinations of binary values. The resulting non-uniform multi-bit quantization allows the computation of dot products to be carried out using binary arithmetic with appropriate scaling and addition. However, these methods suffer from two major drawbacks: 1) the design of their quantization functions is computationally expensive, as it requires an iterative solution of a non-convex optimization problem per-layer per-forward pass during training, which results in a significant training time overhead in the range $1.4\times$ $-$ $3.7\times$ \cite{zhang2018lq}; and 2) they suffer from gradient mismatch problems as they depend on the STE \cite{bengio2013estimating} method to compute the gradients during training. This renders the quantizer constructed by these methods sub-optimal, since they \textit{estimate} the quantizer parameters by minimizing a \emph{local} cost function, e.g., MSE. Moreover, these methods have been benchmarked only on over-parameterized networks on ImageNet. In contrast, our proposed DBQ method \emph{learns} the multiple \emph{ternary} branches by minimizing a \emph{global} loss function, since the proposed quantizer is fully differentiable; this enables the efficient training of similar non-uniform quantizers while also eliminating the need for any gradient estimator.
\textbf{Quantization of Lightweight DNNs}: Recent works that quantize MobileNets apply fixed-point quantization with either uniform \cite{louizos2018relaxed,sheng2018quantization,jacob2018quantization} or mixed \cite{wang2019haq,Uhlich2020Mixed} precision across layers. Hardware-Aware Quantization (HAQ) \cite{wang2019haq} proposes using reinforcement learning to learn the per-layer bit-precision for both weights and activations, whereas \cite{Uhlich2020Mixed} learns the bit-precision via a reformulation of the quantizer function, relying on the STE for gradient computation. Integer-Arithmetic-Only (IAO) \cite{jacob2018quantization} proposes using $8$-b quantization for accelerating the inference of MobileNets on hardware platforms such as Qualcomm Hexagon and ARM NEON. Relaxed Quantization (RQ) \cite{louizos2018relaxed} approximates the quantization function with a smooth differentiable approximate function, but the quantized values are still in fixed-point. Uniform Noise Injection Quantization (UNIQ) \cite{baskin2018uniq} proposes training a non-uniform quantizer using a special noise injection method that allows natural computation of gradients for quantized parameters. UNIQ uses a non-uniform quantizer requiring inefficient lookup tables and full-precision multipliers/adders. Furthermore, all of these approaches use relatively high bitwidths ($\sim$6b$-$8b), and most even fail to bridge the accuracy gap between the quantized models and their full-precision baseline. In contrast, the proposed DBQ method is able to aggressively reduce the precision of the dominant ($94\%$) PW layers of MobileNetV1 to two ternary parameters with negligible degradation in the Top-1 accuracy.
\section{Differentiable Branched Quantizer (DBQ)}
A ternary $B$-branch quantizer $Q(\mathbf{w})$ of a full precision weight vector $\mathbf{w} \in {\mathbb{R}}^{D}$ (Fig~\ref{fig:2t}(a)) is given by:
\begin{equation}
\mathbf{w}_q = Q(\mathbf{w})=\sum_{j=1}^{B}\alpha_j \mathbf{w}_j
\label{eq:quant-eq}
\end{equation}
where $\mathbf{w}_j\in \{-1,0,1\}^D$ are the ternary branch weight vectors, and $\forall j \in [B]$: $\alpha_j >0$ are per-branch scalars.
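As a small illustration of \eqref{eq:quant-eq} for $B=2$ (variable names and values here are our own, purely for illustration), the following Python snippet verifies that a dot product with the quantized weights decomposes into two ternary dot products, as in Fig.~\ref{fig:2t-branches}, each requiring only additions/subtractions of the activations:
\begin{verbatim}
# Sketch: branched ternary quantization and ternary dot products.
import numpy as np

rng = np.random.default_rng(0)
D = 8
w1 = rng.integers(-1, 2, size=D)   # ternary branch 1
w2 = rng.integers(-1, 2, size=D)   # ternary branch 2
a1, a2 = 0.7, 0.4                  # per-branch scales alpha_1, alpha_2
x = rng.standard_normal(D)         # activations

wq = a1 * w1 + a2 * w2             # up to 3^2 = 9 distinct levels
branched = (a1 * (x[w1 == 1].sum() - x[w1 == -1].sum())
            + a2 * (x[w2 == 1].sum() - x[w2 == -1].sum()))
assert np.isclose(wq @ x, branched)
\end{verbatim}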
In DBQ, we wish to learn all the network parameters, which requires the quantizer function $Q(\mathbf{w})$ to be made differentiable. To do so, we first formulate a parametric form of $Q(\mathbf{w})$ in Section~\ref{subsec:formulation} and then employ a smooth ``temperature-controlled'' approximation of the quantizer step function to establish its differentiability in Section~\ref{subsec:differential}.
\begin{figure}[t]
\begin{center}
\subfloat[]{\includegraphics[width=0.45\columnwidth]{figures/2T-explained.pdf}\label{fig:2t-explained}}%
\qquad%
\subfloat[]{\includegraphics[width=0.45\columnwidth]{figures/2T-branches.pdf}\label{fig:2t-branches}}%
\end{center}
\caption{Branched quantization of full precision weights: (a) as a linear combination of ternary weights, and (b) implemented as multiple parallel ternary branch operations to leverage the properties of ternary arithmetic for dot product computations.}%
\label{fig:2t}%
\end{figure}
\subsection{Formulation of DBQ}\label{subsec:formulation}
We formulate the ternary $B$-branch quantizer in Fig.~\ref{fig:2t} as an $N=3^B$-level non-uniform quantizer $Q(\mathbf{w}): {\mathbb{R}}^{D} \rightarrow \mathcal{V}^{D}$ with quantization levels $\mathcal{V}=\{v_i\}^{N}_{i=1}$. Assuming that the quantization levels $v_i$ are sorted in ascending order, $Q(\mathbf{w})$ can be written as a linear combination of $N-1$ step functions as shown below:
\begin{equation}\label{eq:proposed}
Q(\mathbf{w}) = \sum_{i=1}^{N-1}\Big[\big(v_{i+1}-v_i\big) f(\mathbf{w}-t_i)\Big]-\frac{v_N-v_1}{2}
\end{equation}
where $f(\mathbf{u})=[\identityf{u_1>0},..., \identityf{u_{D}>0}]^{\text{T}}$ is an element-wise ideal step function, and $\{t_i\}_{i=1}^{N-1}$ are the quantizer thresholds. The $(v_N-v_1)/2$ term is the quantizer offset. We impose the ternary quantizer structure in \eqref{eq:quant-eq} via the constraint:
\begin{equation}\label{eq:constraint}
v_i =\sum_{j=1}^Be_{i,j}\alpha_j
\end{equation}
where $e_{i,j} \in \{-1,0,1\}$, and thereby obtain the
final quantizer expression:
\begin{equation}\label{eq:proposed2}
Q(\mathbf{w}) = \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[f(\gamma_1 \mathbf{w}-t_i)\sum_{j=1}^Bb_{i,j}\alpha_j\Big]-\sum_{j=1}^B\alpha_j\Bigg]
\end{equation}
where $b_{i,j}=e_{i+1,j}-e_{i,j} \in \{-2,-1,0,1,2\}$ $\forall j \in [B]$ are \textit{fixed} coefficients, and $\gamma_1\ \&\ \gamma_2$ are pre/post-quantization scales to ensure that the quantizer operates on normalized inputs. Thus, the branched quantizer is parametrized by $\mathcal{P}_Q=\{\alpha_1, ..., \alpha_B, \gamma_1, \gamma_2, t_1, ..., t_{N-1}\}$ and these all need to be learned.
In this paper, we focus on the $B=2$ case, i.e., two ternary branches, as visualized in Fig.~\ref{fig:2t-quant}, with $N=3^2=9$ different quantization levels $v_i$. In this case, \eqref{eq:proposed2} can be expanded as:
\begin{align}
\begin{split}
Q(\mathbf{w}) &= \gamma_2 \Big[\alpha_2 f(\gamma_1 \mathbf{w} -t_1) + (\alpha_1 - \alpha_2) f(\gamma_1 \mathbf{w} -t_2) + (2\alpha_2 - \alpha_1) f(\gamma_1 \mathbf{w} -t_3) \\ &+ (\alpha_1 - \alpha_2) f(\gamma_1 \mathbf{w} -t_4) + (\alpha_1 - \alpha_2) f(\gamma_1 \mathbf{w} -t_5) + (2\alpha_2 - \alpha_1) f(\gamma_1 \mathbf{w} -t_6) \\
&+ (\alpha_1 - \alpha_2) f(\gamma_1 \mathbf{w} -t_7) + \alpha_2 f(\gamma_1 \mathbf{w} -t_8) -(\alpha_1 + \alpha_2)\Big]
\end{split}
\end{align}
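The following snippet numerically verifies (for illustrative scale values satisfying $\alpha_1\geq \alpha_2 \geq \alpha_1/2 > 0$, as in Fig.~\ref{fig:2t-quant}) that the quantizer has $N=9$ distinct, correctly ordered levels whose consecutive gaps match the step-function coefficients in the expansion above:
\begin{verbatim}
# Sanity check of the 9-level structure and its level gaps.
import itertools
import numpy as np

a1, a2 = 1.0, 0.6   # satisfies a1 >= a2 >= a1/2 (illustrative values)
levels = sorted(e1 * a1 + e2 * a2
                for e1, e2 in itertools.product((-1, 0, 1), repeat=2))
gaps = np.diff(levels)
expected = [a2, a1 - a2, 2*a2 - a1, a1 - a2,
            a1 - a2, 2*a2 - a1, a1 - a2, a2]
assert len(levels) == 9 and np.allclose(gaps, expected)
\end{verbatim}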
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\linewidth]{figures/2T-quantizer.pdf}%
\end{center}
\caption{Visualization of a two ternary (2T) branch quantizer with branch scales $\alpha_1$ and $\alpha_2$ assuming $\alpha_1\geq \alpha_2 \geq \frac{\alpha_1}{2} \geq 0$.}%
\label{fig:2t-quant}
\end{figure}
\subsection{Differentiability}\label{subsec:differential}
Inspired by \cite{yang2019quantization,xie2018snas}, we replace the non-differentiable $f$ in \eqref{eq:proposed2} with a smooth sigmoid approximation $\hat{f}_T$ as follows:
\begin{equation}
\hat{f}_T(u) = \frac{1}{1+\text{exp}(-Tu)}
\end{equation}
where the \emph{temperature} parameter $T$ controls the approximation error:
\begin{equation}\label{eq:error}
e_T(u) = \hat{f}_T(u) - f(u) \xrightarrow[T \to \infty]{} 0
\end{equation}
When learning the quantizer parameters $\mathcal{P}_Q$, the temperature $T$ is increased gradually as the training converges so that $\hat{f}_T(u)\rightarrow f(u)$. The resultant differentiable quantizer $Q_T(\mathbf{w})=\mathbf{w}_q=\mathbf{z}$ therefore enables a straightforward calculation of the gradients of the loss function $\mathcal{L}$ w.r.t. all quantizer and model parameters as follows:
\begin{align}
\frac{\partial {\cal L}}{\partial \gamma_2} &= \frac{1}{\gamma_2} \sum_{k=1}^D\frac{\partial {\cal L}}{\partial z_k}z_k \label{eq:deriv1}\\
\frac{\partial {\cal L}}{\partial \alpha_j} &= \gamma_2 \sum_{k=1}^D \frac{\partial {\cal L}}{\partial z_k} \Bigg[\sum_{i=1}^{N-1}\Big[b_{i,j}g_{k,i}\Big]-1\Bigg] \label{eq:deriv2}\\
\frac{\partial {\cal L}}{\partial t_i} &= -\gamma_2 T \sum_{k=1}^D \frac{\partial {\cal L}}{\partial z_k} \Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j \Big] \label{eq:deriv3}\\
\frac{\partial {\cal L}}{\partial w_k} &= \gamma_1 \gamma_2 T \frac{\partial {\cal L}}{\partial z_k} \sum_{i=1}^{N-1} \Big[ h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j \Big] \label{eq:deriv4}\\
\frac{\partial {\cal L}}{\partial \gamma_1} &= \gamma_2 T \sum_{k=1}^D \frac{\partial {\cal L}}{\partial z_k}w_k \Bigg[ \sum_{i=1}^{N-1} \Big[ h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j \Big] \Bigg] \label{eq:deriv5}
\end{align}
where $h_{k,i} = g_{k,i}(1-g_{k,i})$ and $g_{k,i} = \hat{f}_T(\gamma_1w_k-t_i)$ for brevity. By doing so, we eliminate the need for the STE and the expensive computational overhead introduced by estimation-based methods such as LQNet \cite{zhang2018lq} or ABCNet \cite{lin2017towards}. Note that software frameworks such as PyTorch \cite{paszke2017automatic} automatically compute these gradients, so they do not need to be explicitly coded.
\subsection{Implementation Details}
\textbf{Parameter Initialization}: Initializing the quantizer parameters $\mathcal{P}_Q$ is performed once before training and requires an initial vector $\mathbf{w}\in {\mathbb{R}}^D$, which can come from a pre-trained network or from random initialization (training from scratch). The initialization procedure is as follows: 1) the post-quantization scale $\gamma_2$ is set to the maximum absolute value in $\mathbf{w}$, and the pre-quantization scale $\gamma_1$ is set to $1/\gamma_2$. This ensures that the quantizer operates on normalized parameters, which facilitates the optimization of its parameters, and that the quantized values are of the same scale as the inputs; 2) to find the optimal thresholds $\{t_i\}_{i=1}^{N-1}$, we first compute the optimal $N$ centroids $\{c_i\}_{i=1}^{N}$ of the normalized vector $\gamma_1\mathbf{w}$ via $k$-means, and then $\forall i\in[N-1]$ we set $t_i$ to be the midpoint of the interval $[c_i, c_{i+1}]$; and 3) a good initialization for $\{\alpha_j\}_{j=1}^{B}$ is found by solving for the values that minimize the $L_2$ norm between the normalized vector $\gamma_1\mathbf{w}$ and its quantized counterpart, as sketched below.
\textbf{Training and Inference}: During training, the proposed DBQ quantizer is used with the approximate smooth step function $\hat{f}_T$ for both forward and backward calculations (\eqref{eq:proposed2} \& \eqref{eq:deriv1}$-$\eqref{eq:deriv5}). For a given layer in the network that performs the function $\mathbf{y}=F(\mathbf{w}, \mathbf{x})$, applying DBQ simply boils down to composing the quantizer described in \eqref{eq:proposed2} with the function $F$: $\mathbf{y}=F(Q_T(\mathbf{w}), \mathbf{x})$, as sketched below. For quantizing convolutional layers, we apply kernel-wise quantizers; the overhead of the full precision scales is amortized across the large filter lengths. The choice of the temperature parameter $T$ is important: a large value of $T$ reduces the approximation error in \eqref{eq:error}; however, the gradients then saturate quickly, creating a bottleneck for learning the quantizer parameters. Therefore, a small initial value of $T$ is used for the first training epoch, and its value is increased in successive epochs based on a pre-determined temperature update schedule. A simple yet effective schedule is to linearly increment the temperature with the number of epochs: $T = T_{\text{init}} + e\times T_{\text{inc}}$. During inference, the approximate step function is replaced with the ideal function $f$ such that the quantizer output satisfies \eqref{eq:quant-eq}.
\textbf{Activation Quantization}:
The challenge in quantizing input activations with a fixed-point quantizer during training is determining a suitable clipping value (Fig.~\ref{fig:act-quant}). Traditionally, the use of ReLU$6$ (which clips at $6$) has been a popular choice due to its simplicity \cite{sheng2018quantization,jacob2018quantization}. However, the choice of $6$ provides no guarantees on the clipping probability, and can therefore yield sub-optimal results. Similar to \cite{dbouk2020low}, we propose clipping the post-BN activations $y_{\text{BN}}$ (Fig.~\ref{fig:act-quant}) using:
\begin{equation}
c = \max_{i\in[C]}(\beta^{(i)}+k\gamma^{(i)})
\label{eq:c}
\end{equation}
where $C$ is the number of channels in the activation tensor $y_{\text{BN}}$, $(\beta^{(i)},\gamma^{(i)})$ are the learnable per-channel shift and scale parameters of BN, and $k$ is a network hyperparameter that controls the clipping probability. Assuming that $y_{\text{BN}}^{(i)} \sim \mathcal N\big(\beta^{(i)},(\gamma^{(i)})^2\big)$ \cite{ioffe2015batch} and using the $6\sigma$ rule ($k=6$), one can show that the choice of $c$ in \eqref{eq:c} guarantees:
\begin{equation}
\text{Pr}\{y_{\text{BN}}\leq c\} \geq 0.999
\end{equation}
Note that having a fixed clipping value $c$ for all channels is crucial in order to ensure that the dot product operations can be implemented in fixed-point.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\linewidth]{figures/act-quant.pdf}
\end{center}
\caption{Quantizing activations post-ReLU requires a pre-determined clipping parameter $c$.}%
\label{fig:act-quant}
\end{figure}
\section{Experimental Results}
To demonstrate the effectiveness of the DBQ method for quantizing lightweight networks, we evaluate it on three different image classification datasets: 1) CIFAR-10 \cite{krizhevsky2009learning} using ResNet-20 \cite{he2016deep}; 2) ImageNet (ILSVRC 2012) \cite{russakovsky2015imagenet} using MobileNetV1 \cite{howard2017mobilenets}, MobileNetV2 \cite{sandler2018mobilenetv2}, and ShuffleNetV2 \cite{ma2018shufflenet}; and 3) the recently proposed Visual Wake Words \cite{chowdhery2019visual} using MobileNetV1. In all of our experiments, we train full precision models from scratch, and fine-tune these models to obtain their quantized counterparts. We use stochastic gradient descent for training all the models. For further details on the training setup of each experiment, see the supplementary material.
\subsection{Complexity Metrics}
We propose a set of metrics, inspired by those used in \cite{sakr2017analytical,sakr2018per}, in order to quantify the complexity reduction achieved by our proposed method.
\textbf{Computational Cost} {($\bm{\mathcal{C}_C}$)} for an $L$-layer network:
\begin{align}
\begin{split}\label{eq:cc}
{\cal C}_C &= \sum_{l=1}^LN_l\Big[D_lB_{W,l}B_{A,l} + (D_l-1)(B_{A,l}+B_{W,l}+\lceil\log_2D_l \rceil-1)\Big]
\end{split}
\end{align}
where $N_l$ is the number of $D_l$-dimensional dot products in layer $l$, with $B_{W,l}$ and $B_{A,l}$ being the weight and activation precisions, respectively. This cost essentially measures the number of 1b full adders (FAs) needed to implement the dot products required for a given network. For full precision (32b) parameters, we make the simplifying assumption of treating them as 23b (mantissa precision) fixed-point parameters.
\textbf{Sparsity-Aware Computational Cost} {($\bm{\mathcal{C}_S}$)} is also defined in order to capture the reduction in model complexity made possible by weight sparsity:
\begin{align}
\begin{split}\label{eq:sc}
{\cal C}_S &= \sum_{l=1}^LN_l\Big[D'_lB_{W,l}B_{A,l} + (D'_l-1)(B_{A,l}+B_{W,l}+\lceil\log_2D_l \rceil-1)\Big]
\end{split}
\end{align}
where $D'_l$ is the number of non-zero weights in the corresponding $D_l$-dimensional dot product.
\textbf{Representational Cost} {($\bm{\mathcal{C}_R}$)} for an $L$-layer network:
\begin{align}
\begin{split} \label{eq:rc}
{\cal C}_R &= \sum_{l=1}^L\Big[|W_l|B_{W,l} + |A_l|B_{A,l}\Big]
\end{split}
\end{align}
where $|W_l|$ and $|A_l|$ are the number of elements in the weight and activation tensors in layer $l$, respectively.
\textbf{Model Storage Cost} {($\bm{\mathcal{C}_M}$)} for an $L$-layer network:
\begin{align}
\begin{split} \label{eq:mc}
{\cal C}_M &= \sum_{l=1}^L|W_l|B_{W,l}
\end{split}
\end{align}
which only accounts for the weight storage, and can be useful for studying model compression.
\begin{table}[t]
\begin{center}
\resizebox{0.6\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l l c c}
\clineB{1-4}{2.5}
\textbf{Method} & \textbf{Acc}. ($\bm{\Delta}$) [\%] & $\bm{\mathcal{C}_C}\ (\bm{\mathcal{C}_S})$ [$10^{9}$FA] & $\bm{\mathcal{C}_R}\ (\bm{\mathcal{C}_M})$ [$10^{6}$b]\\\clineB{1-4}{2.5}
FP \cite{zhang2018lq} & $92.10\ (/)$ & $23.73\ (23.73)$ & $14.63\ (8.63)$\\\hline
LQNet-1B \cite{zhang2018lq} & $90.10\ (-2.171)$& $1.60\ (1.60)$ & $6.34\ (0.35)$
\\\hline
LQNet-2B \cite{zhang2018lq} & $91.80\ (-0.325)$& $2.83\ (2.83)$ & $6.61\ (0.61)$
\\\hline
LQNet-3B \cite{zhang2018lq} & $92.00\ (-0.108)$& $4.07\ (4.07)$ & $6.88\ (0.88)$
\\\hline\hline
FP (Ours) & $92.00\ (/)$ & $23.73\ (23.73)$ & $14.63\ (8.63)$\\\hline
DBQ-1T (Ours) & $\mathbf{91.06\ (-1.021)}$ & $\mathbf{1.60\ (0.92)}$ & $6.61\ (0.61)$\\\hline
DBQ-2T (Ours) & $\mathbf{91.93\ (-0.076)}$ & $\mathbf{2.83\ (1.79)}$ & $7.15\ (1.15)$\\\hline
\end{tabular}
}
\end{center}
\caption{The accuracy on CIFAR-10 and complexity metrics ($\mathcal{C}_C$, $\mathcal{C}_S$, $\mathcal{C}_R$, $\mathcal{C}_M$) for ResNet-20 using our method DBQ compared to LQNet. $\Delta$ represents the normalized accuracy drop of each quantized model with respect to its full precision baseline. The first and last layers, as well as the input activations, are kept in full precision for the quantized models, in accordance with \cite{zhang2018lq}.}
\label{tab:resnet20}
\end{table}
\subsection{CIFAR-10 Results}
We first demonstrate the effectiveness of DBQ on the CIFAR-10 dataset using the popular ResNet-20 network \cite{he2016deep}. To ensure a fair comparison with the LQNet \cite{zhang2018lq} models, we do not quantize the first and last layers, and we keep all activations in full precision. Table~\ref{tab:resnet20} summarizes the accuracy (and normalized drop) as well as the four complexity metrics ($\mathcal{C}_C$, $\mathcal{C}_S$, $\mathcal{C}_R$, $\mathcal{C}_M$) for different numbers of branches. At iso-number of branches, the DBQ models achieve higher accuracies for the same $\mathcal{C}_{C}$ and a lower $\mathcal{C}_{S}$, thanks to the high number of zero-valued weights, as opposed to binary branches where the weights are either $\pm 1$. Comparing the DBQ-2T and LQNet-3B models, which achieve comparable accuracies, DBQ-2T requires $\sim 32\%$ less ${\cal C}_{C}$ and $\sim 56 \%$ less ${\cal C}_{S}$, at the expense of an extra bit per parameter, which is reflected in the marginal $\sim 4 \%$ increase in ${\cal C}_R$.
\subsection{ImageNet Results}
\label{sec:imagenet}
In this section, we report results for MobileNetV1 \cite{howard2017mobilenets}, MobileNetV2 \cite{sandler2018mobilenetv2}, and ShuffleNetV2 \cite{ma2018shufflenet} on ImageNet. We first focus on MobileNetV1 by performing an ablation study, and leverage these results for quantizing the more recent MobileNetV2 and ShuffleNetV2.
\begin{table}[t]
\begin{center}
\resizebox{\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|c c c c c|c c c}
\clineB{1-9}{2.5}
\textbf{Model Name} & \textbf{Activations}&\textbf{FL} & \textbf{DW} & \textbf{PW} & \textbf{FC} & \textbf{Top-1/5 Acc.} [$\%$] & $\bm{\mathcal{C}_C}\ (\bm{\mathcal{C}_S})$ [$10^{10}$FA] & $\bm{\mathcal{C}_R}\ (\bm{\mathcal{C}_M})$ [$10^{7}$b] \\ \clineB{1-9}{2.5}
FP & ReLU - 32b & 32b & 32b & 32b & 32b & $\mathbf{72.12/90.43}$ &$33.37\ (33.37)$ & $30.00\ (13.54)$\\ \hline
FX8-1 & ReLU6 - 8b & 32b & 8b & 8b & 32b & $71.65/90.17$ & $5.78\ (5.39)$ & $10.38\ (5.90)$\\ \hline
FX8-2 & ReLU6 - 8b & 8b & 8b & 8b & 8b & $71.60/90.19$ & $5.24\ (4.85)$ & $7.56\ (3.44)$\\ \hline
FX8-3 & ReLU$x$ - 8b & 8b & 8b & 8b & 8b & $\mathbf{71.86/90.26}$ & $5.24\ (4.85)$& $7.56\ (3.44)$ \\ \hline
DBQ-1T & ReLU - 32b & 32b & 32b & 1T & 32b & $66.45/86.72$ & $3.60\ (2.61)$& $20.58\ (4.12)$\\ \hline
DBQ-2T-1 & ReLU - 32b & 32b & 32b & 2T & 32b & $71.09/89.71$ & $5.23\ (3.77)$ & $21.21\ (4.75)$\\ \hline
DBQ-2T-2 & ReLU6 - 8b & 32b & 8b & 2T & 32b & $70.25/89.42$ &$2.73\ (1.97)$ & $9.12\ (4.64)$\\ \hline
DBQ-2T-3 & ReLU$x$ - 8b & 32b & 8b & 2T & 32b & $70.80/89.75$ &$2.73\ (1.97)$ & $9.12\ (4.64)$\\ \hline
DBQ-2T-4 & ReLU$x$ - 8b & 8b &8b & 2T & 8b & $\mathbf{70.92/89.61}$ & $\mathbf{2.18\ (1.42)}$ & $\mathbf{6.30\ (2.18)}$\\\hline
\end{tabular}
}
\end{center}
\caption{The Top-1/5 accuracy on ImageNet and complexity metrics ($\mathcal{C}_C$, $\mathcal{C}_S$, $\mathcal{C}_R$, $\mathcal{C}_M$) for MobileNetV1 under different precision assignments. Models denoted by DBQ-$z$T are trained using our differentiable branch quantizer with $B=z$ ternary branches. ReLU$x$ denotes a clipped ReLU using our proposed clipping method in Eq.~\eqref{eq:c}.}
\label{tab:imagenet-results}
\end{table}
\textbf{Ablation Study:} Table~\ref{tab:imagenet-results} summarizes the Top-1/5 accuracies of all the MobileNetV1 models trained with different layer precision assignments in order to evaluate the impact of our design choices. To see the impact of using two ternary branches instead of one, we begin with the DBQ-1T model, which is obtained by quantizing only the PW layers of MobileNetV1 to one ternary branch (1T) while keeping all other activations and weights in full precision. Table~\ref{tab:imagenet-results} shows that DBQ-1T achieves a massive $89\%$ reduction in ${\cal C}_C$ compared to the FP model, but at a catastrophic loss of $5.67\%$ in the Top-1 accuracy. In contrast, DBQ-2T-1, which is DBQ-1T with a second ternary branch, is able to recover accuracy to within $1.03\%$ of the full-precision baseline while still achieving massive savings of $84\%$ in $\mathcal{C}_C$. Quantizing the activations and the remaining layers' weights of DBQ-2T-1 to 8b fixed-point, i.e., DBQ-2T-4, incurs a minimal loss in accuracy of $1.2\%$ compared to the FP model while achieving even greater reductions in both $\mathcal{C}_C$ ($93\%$) and $\mathcal{C}_R$ ($70\%$). The reduction in $\mathcal{C}_S$ increases to $96\%$ when branch sparsity is exploited to skip computations.
Note that the reason that only PW layers are quantized using ternary branches is three-fold: 1) PW layers consume $\sim 94\%$ of the amount of multiply-adds required for inference (Table~\ref{tab:mobilenet-stats}); 2) we have observed that quantizing the PW layers has the most severe impact on classification accuracy compared to quantizing other layers; and 3) DW layers suffer from extremely small dot-product lengths (9), rendering them unsuitable for multiple branch quantization (the overhead of branch-merge and scaling operations will dominate).
The benefits of our proposed BN-based clipping described in \eqref{eq:c} can be seen by comparing the accuracy of the 8b fixed-point model FX8-3, which uses BN-based clipping with $k=6$, against its ReLU6-based clipping counterpart FX8-2. The Top-1 accuracy of FX8-3 is better than that of FX8-2 without any overhead in training or inference. The same holds for DBQ-2T-3 relative to DBQ-2T-2.
\begin{table}[t]
\begin{center}
\resizebox{\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|c c c c c|c c c}
\clineB{1-9}{2.5}
\textbf{Method} & \textbf{Act.} & \textbf{FL} & \textbf{DW} & \textbf{PW} & \textbf{FC} & \textbf{Top-1 Acc.} [$\%$] & $\bm{\mathcal{C}_C}\ (\bm{\mathcal{C}_S})$ [$10^{10}$FA] & $\bm{\mathcal{C}_R}\ (\bm{\mathcal{C}_M})$ [$10^7$b]\\ \clineB{1-9}{2.5}
IAO$^\star$ \cite{jacob2018quantization} & 8b & 8b & 8b & 8b & 8b & $\mathbf{69.00}^*$ & $4.97\ (/)$ & $7.49\ (3.37)$
\\ \hline
UNIQ \cite{baskin2018uniq} & 8b & 5b & 5b & 5b & 5b & $67.50$ & $3.70\ (/)$ & $6.29\ (2.18)$
\\ \hline
UNIQ \cite{baskin2018uniq} & 8b & 4b & 4b & 4b & 4b & $66.00$ & $3.19\ (/)$ & $5.87\ (\mathbf{1.76})$
\\ \hline
UNIQ \cite{baskin2018uniq} & 8b & 8b & 8b & 8b & 8b & $68.25$ & $5.24\ (/)$ & $7.56\ (3.44)$
\\ \hline
QSM$^\star$ \cite{sheng2018quantization} & 8b & 8b & 8b & 8b & 8b & $68.03$ & $4.97\ (/)$ & $7.49\ (3.37)$
\\ \hline
RQ \cite{louizos2018relaxed} & 5b & 5b & 5b & 5b & 5b & $61.50$ & $\mathbf{2.68}\ (/)$ & $\mathbf{4.75}\ (2.18)$
\\ \hline
RQ \cite{louizos2018relaxed} & 6b & 6b & 6b & 6b & 6b & $67.50$ & $3.42\ (/)$ & $5.69\ (2.60)$
\\ \hline
HAQ cloud \cite{wang2019haq} & mixed & 8b & mixed & mixed & 8b & $65.33 - 71.20^{\dagger}$ & $2.73\ (/)$ & $5.09\ (3.12)$
\\ \hline
HAQ edge \cite{wang2019haq} & mixed & 8b & mixed & mixed & 8b & $67.40 - 71.20^\dagger$ & $4.06\ (/)$ & $5.87\ (2.49)$
\\ \hline \hline
FX8 (Ours)& 8b & 8b & 8b & 8b & 8b & $\mathbf{71.86}$& $5.24\ (4.85)$& $7.56\ (3.44)$\\ \hline
DBQ-2T (Ours)& 8b & 8b & 8b & 2T & 8b & $\mathbf{70.92}$ & $\mathbf{2.18\ (1.42)}$ & $6.30\ (2.18)$\\\hline
\multicolumn{9}{l}{$^{\star}$models with BN folding \qquad $^{*}$results extracted from a plot \qquad $^{\dagger}$exact accuracy not reported}
\end{tabular}
}
\end{center}
\caption{The Top-1 accuracy on ImageNet and complexity metrics ($\mathcal{C}_C$, $\mathcal{C}_S$, $\mathcal{C}_R$, $\mathcal{C}_M$) for MobileNetV1 using our method (DBQ-2T) compared to state-of-the-art training-based quantization methods.}
\label{tab:imagenet-comparison}
\end{table}
\begin{wrapfigure}{R}{0.5\textwidth}
\begin{center}
\includegraphics[width=0.48\textwidth]{figures/branch_scales.pdf}
\end{center}
\caption{The distribution of the ratio of the ternary branch scales $\alpha_1$ and $\alpha_2$ for DBQ-2T-4 from Table~\ref{tab:imagenet-results}.}
\label{fig:hist-scales}
\end{wrapfigure}
\textbf{Branching Utility:} A 2T quantizer should result in $9$ distinct quantization levels as shown in Fig.~\ref{fig:2t-quant}. However, in a 2T branched quantizer such as ours, it is possible for the number of quantization levels to be smaller than 9, e.g., if $\alpha_1=\alpha_2$ then the number of quantization levels is 5. In this case, the full representational power of the 2T branched quantizer is not utilized. To see if the 2T branched quantizer generates all 9 levels, we plot the distribution of the ratio $R_{\alpha}=\frac{\alpha_1}{\alpha_2}$ across all the PW layers in the DBQ-2T-4 model (Table~\ref{tab:imagenet-results}). The distribution is centered around $R_{\alpha}=1.48$ with more than $99\%$ of the values lying in the range $[1.2,1.7]$. This demonstrates that the quantizer learned by DBQ employs the full representational power offered by the 2T structure.
\textbf{Comparison with State-of-the-Art:} Table~\ref{tab:imagenet-comparison} compares the performance of our proposed DBQ method against state-of-the-art results on ImageNet for MobileNetV1. Our model DBQ-2T, which corresponds to DBQ-2T-4 in Table~\ref{tab:imagenet-results}, achieves the lowest computational cost $\mathcal{C}_C$ ($2.18 \times 10^{10}$ FAs) among previously published networks, while achieving the highest Top-1 accuracy of $70.92\%$. Compared to the lowest complexity model RQ \cite{louizos2018relaxed}, DBQ-2T achieves a $19\%$ reduction in $\mathcal{C}_C$ with a $9.42\%$ improvement in Top-1 accuracy at iso-storage complexity $\mathcal{C}_M$. Furthermore, DBQ-2T improves upon the accuracy of the IAO model \cite{jacob2018quantization}, which achieves the highest Top-1 accuracy among prior methods, by $1.92\%$, but with a massive reduction in complexity: $\mathcal{C}_C$ ($56\%$), $\mathcal{C}_R$ ($16\%$), and $\mathcal{C}_M$ ($35\%$).
\textbf{More Lightweight Networks:} Table \ref{tab:extra-results} demonstrates the performance of DBQ when applied to the more recent lightweight networks: MobileNetV2 and ShuffleNetV2. Similar to MobileNetV1, we find that the PW layers \textit{dominate} the number of operations required for a single inference for both MobileNetV2 ($87\%$) and ShuffleNetV2 ($90\%$). Thus, and in line with our experiments on MobileNetV1, we quantize all PW layers using 2T, with the remaining layers and activations quantized to 8b fixed-point. We observe a minimal $1.3\%$ (MobileNetV2) and $2.6\%$ (ShuffleNetV2) drop in accuracy compared to FP, while achieving \textit{massive} ($77\% - 95\%$) reductions in all the complexity metrics. A comparison between DBQ and \cite{Uhlich2020Mixed} for MobileNetV2 is presented in the supplementary material.
\begin{table}[t]
\begin{center}
\resizebox{\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|c c c c c|c c c}
\clineB{1-9}{2.5}
\textbf{Model} & \textbf{Act.} & \textbf{FL} & \textbf{DW} & \textbf{PW} & \textbf{FC} & \textbf{Top-1 Acc.} [$\%$] & $\bm{\mathcal{C}_C}\ (\bm{\mathcal{C}_S})$ [$10^{10}$FA] & $\bm{\mathcal{C}_R}\ (\bm{\mathcal{C}_M})$ [$10^7$b]\\ \clineB{1-9}{2.5}
MobileNetV2-FP & 32b& 32b & 32b & 32b & 32b & $71.88$ & $17.83\ (17.83)$ & $32.87\ (11.22)$ \\ \hline
MobileNetV2-2T & 8b & 8b & 8b & 2T & 8b& $\mathbf{70.54}$ & $\mathbf{1.42}\ (\mathbf{1.11})$ & $\mathbf{7.45}\ (\mathbf{2.04})$ \\ \hline \hline
ShuffleNetV2-FP & 32b& 32b & 32b & 32b & 32b & $69.36$ & $8.52\ (8.52)$ & $13.81\ (7.29)$ \\ \hline
ShuffleNetV2-2T & 8b & 8b & 8b & 2T & 8b& $\mathbf{ 66.74}$ & $\mathbf{0.64}\ (\mathbf{0.46})$ & $\mathbf{3.21}\ (\mathbf{1.38})$ \\ \hline
\end{tabular}
}
\end{center}
\caption{The Top-1 accuracy on ImageNet and complexity metrics ($\mathcal{C}_C$, $\mathcal{C}_S$, $\mathcal{C}_R$, $\mathcal{C}_M$) for MobileNetV2 and ShuffleNetV2 using our method (DBQ-2T).}
\label{tab:extra-results}
\end{table}
\subsection{Visual Wake Words Results}
We study the accuracy-precision-complexity trade-off in quantized DNNs using the Visual Wake Words (VWW) dataset that was recently proposed by Google \cite{chowdhery2019visual} in order to facilitate the development of lightweight vision models for deployment on resource-constrained Edge devices.
This dataset reflects a typical real-world scenario involving the detection of specific events by observing incoming data, e.g., monitoring a camera video feed in order to detect the presence of a person \cite{chowdhery2019visual}, similar to the use of audio wake words in speech recognition. The VWW dataset is derived from the COCO dataset \cite{lin2014microsoft} via a simple re-labeling of the available images, and has a training set of 115k images and a test set of 8k images.
\begin{figure}[t]
\begin{center}
\subfloat[]{\includegraphics[height=4.2cm]{figures/vww-sc.pdf}\label{fig:vww-cc}}%
\qquad%
\subfloat[]{\includegraphics[height=4.2cm]{figures/vww-rc.pdf}\label{fig:vww-rc}}%
\end{center}
\caption{The test accuracy of MobileNetV1 on the Visual Wake Words dataset with varying precision assignment and width multiplier $m$ vs. (a) sparsity-aware computational cost, and (b) representational cost. Only the precision of the pointwise layers' weights is varied, whereas all the remaining activations and weights are quantized using 8b fixed-point.}%
\label{fig:vww}%
\end{figure}
As in \cite{chowdhery2019visual}, we employ the modified MobileNetV1 architecture, which has an FC layer with 2 output classes instead of 1000. The complexity of the network is tuned by varying the network width multiplier \cite{howard2017mobilenets} $m\in \{0.125,0.25,0.375,0.5\}$. Similar to our ImageNet experiments, we quantize all layers to 8b fixed-point and vary the precision of the PW layers from 8b to 2b fixed-point as well as DBQ-1T and DBQ-2T.
As shown in Fig.~\ref{fig:vww-cc}, for over-parameterized models, e.g., $m=0.5$, DBQ-1T (red square) shows a massive reduction in $\mathcal{C}_S$ ($\sim 69\%$) at iso-accuracy compared to the fixed-point models (red circle). In contrast, for lightweight models, e.g., $m=0.125$, DBQ-1T (blue square) achieves an impressive $45\%$ reduction in $\mathcal{C}_S$ but at the expense of a $3\%$ loss in test accuracy compared to the fixed-point model (blue circle). The DBQ models (diamonds and squares) can be seen to form a Pareto-optimal accuracy vs. ${\cal C}_S$ trade-off curve in Fig.~\ref{fig:vww-cc}, demonstrating the effectiveness of the method.
Fig.~\ref{fig:vww-rc} shows that the choice of the width multiplier $m$ has a much more significant impact on the representational cost ${\cal C}_R$ than varying the bit-precision, which implies that ${\cal C}_R$ is dominated by the storage requirements of activations rather than weights. This implies that the choice of the model parameter $m$ is governed by the amount of on-chip storage available on an Edge device. In contrast, the choice of the bit precision of the PW layers is dictated by the latency/energy requirements, which upper bound $\mathcal{C}_S$ as seen in Fig.~\ref{fig:vww-cc}. As a result, when comparing the lightweight $m=0.25$ DBQ-2T model (orange diamond) with the over-parameterized $m=0.375$ DBQ-1T model (green square), we observe that DBQ-2T achieves a reduction in both ${\cal C}_S$ ($26\%$) and ${\cal C}_R$ ($30\%$), at iso-accuracy and iso-${\cal C}_M$ ($\sim 10^6$b).
\section{Conclusion}
We presented DBQ, an efficient fully differentiable method for training multiple ternary branch quantizers for deep neural networks and validated its effectiveness for lightweight networks on the CIFAR-10 (ResNet-20), ImageNet (MobileNetV1, MobileNetV2, and ShuffleNetV2) and Visual Wake Words (MobileNetV1) datasets. Our method outperforms the state-of-the-art quantization schemes in both accuracy and complexity metrics.

\textbf{Acknowledgment:} The authors would like to thank Avishek Biswas, Manu Mathew and Arthur Redfern for helpful discussions and support.
\bibliographystyle{splncs04}
\section{Experimental Setup}
In this section, we describe the experimental setup used for generating all our results.
\subsection{CIFAR-10}
\subsubsection{Data Augmentation}
The CIFAR-10 dataset consists of $32\times 32$ RGB images.
For generating the training samples, we adopt the standard data augmentation used in \cite{huang2018condensenet} where each image is: 1) zero-padded with $4$ pixels on each side; 2) horizontally flipped with probability $0.5$; and 3) randomly cropped using a $32\times 32$ window. During testing, we use the $32\times 32$ images as is from the testing set.
We also normalize the images, for both training and testing, using a per-channel mean and standard deviation calculated across the training set.
\subsubsection{Training Hyperparameters}
For training the full precision (FP) ResNet-20 baseline on CIFAR-10, we use SGD with momentum $\beta=0.9$, batch size of $100$, and weight decay of $\lambda=10^{-4}$. The FP model is trained for a total of $E_\text{T}=200$ epochs, with an initial learning rate $\eta_0=0.1$ and a cosine update rule \cite{loshchilov2016sgdr}:
\begin{equation}\label{eq:cosine}
\eta_e = \frac{\eta_0}{2}\Big(1+\cos{\Big(\frac{ e}{E_{\text{T}}}\pi\Big)}\Big)
\end{equation}
During the fine-tuning process, i.e., training the model with weights initialized from the FP baseline, we train using the same setup as before, but for fewer epochs ($E_\text{T}=50$) and with a smaller initial learning rate ($\eta_0=0.01$). The DBQ models are trained using a linear temperature increment schedule:
\begin{equation}
T_e = T_{\text{init}} + e\cdot T_{\text{inc}}
\end{equation}
with an initial temperature $T_{\text{init}}=5$ and increments $T_{\text{inc}}=2.5$.
\subsection{ImageNet}
\subsubsection{Data Augmentation}
For our ImageNet experiments, we follow the standard data augmentation used in \cite{he2016deep}, where during training, images are: 1) resized; 2) horizontally flipped; and 3) randomly cropped to $224\times 224$. During testing, all images are resized to $256\times 256$ and then cropped to $224\times 224$. We also normalize the input images on a per-channel basis.
\subsubsection{Training Hyperparameters}
For training the full precision MobileNetV1 baseline on ImageNet, we use a similar setup to our CIFAR-10 experiments, with a slightly different learning rate schedule. Similar to \cite{goyal2017accurate}, the first $E_{\text{W}}$ epochs are used for learning rate ``warm-up'':
\begin{equation}
\eta_e = \frac{(e+1)\eta_0}{E_{W}}
\end{equation}
after which the remaining epochs utilize a cosine learning rate as described in \eqref{eq:cosine}. The hyperparameters used for both FP and quantization fine-tuning are specified in Table~\ref{tab:imagenet-setup-mnv1}.
The full precision MobileNetV2 and ShuffleNetV2 baselines on ImageNet are pre-trained models obtained from PyTorch \cite{paszke2017automatic}. Their 2T quantized counterparts, MobileNetV2-2T and ShuffleNetV2-2T, are fine-tuned using the training hyperparameters described in Table~\ref{tab:imagenet-setup-rest}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|}
\hline
& Batch Size & $\beta$& $\lambda$ & $\eta_0$ & $E_{\text{W}}$ & $E_{\text{T}}$ & $T_{\text{init}}$ & $T_{\text{inc}}$\\ \hline \hline
FP & $512$ & $0.9$ & $4\times10^{-5}$& $0.1$ & $5$ & $150$ & NA & NA \\ \hline
Quant. & $512$ & $0.9$ & $4\times10^{-5}$& $0.001$ & $0$ & $50$ & $50$ & $20$ \\ \hline
\end{tabular}
\end{center}
\caption{Training hyperparameters used for MobileNetV1 experiments on the ImageNet dataset.}
\label{tab:imagenet-setup-mnv1}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|}
\hline
& Batch Size & $\beta$& $\lambda$ & $\eta_0$ & $E_{\text{W}}$ & $E_{\text{T}}$ & $T_{\text{init}}$ & $T_{\text{inc}}$\\ \hline \hline
MobileNetV2-2T & $256$ & $0.9$ & $4\times10^{-5}$& $5\times10^{-4}$ & $0$ & $50$ & $25$ & $10$ \\ \hline
ShuffleNetV2-2T & $512$ & $0.9$ & $4\times10^{-5}$& $0.001$ & $0$ & $30$ & $25$ & $10$ \\ \hline
\end{tabular}
\end{center}
\caption{Training hyperparameters used for quantized MobileNetV2 and ShuffleNetV2 experiments on the ImageNet dataset.}
\label{tab:imagenet-setup-rest}
\end{table}
\subsection{Visual Wake Words}
\subsubsection{Data Augmentation}
For data augmentation during training, we follow the exact setup as our ImageNet experiments with input normalization and random horizontal flips and crops. During testing, images are normalized, resized to $256\times 256$, and then cropped to $224\times 224$.
\subsubsection{Training Hyperparameters}
The training setup used is identical to our ImageNet experiments as well, and Table~\ref{tab:vww-setup} specifies the values of the hyperparameters used for both full precision and quantization training.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c||c|c|}
\hline
& Batch Size & $\beta$& $\lambda$ & $\eta_0$ & $E_{\text{W}}$ & $E_{\text{T}}$ & $T_{\text{init}}$ & $T_{\text{inc}}$\\ \hline \hline
FP & $512$ & $0.9$ & $4\times10^{-5}$& $0.1$ & $5$ & $200$ & NA & NA \\ \hline
Quant. & $512$ & $0.9$ & $4\times10^{-5}$& $0.01$ & $0$ & $50$ & $20$ & $5$ \\ \hline
\end{tabular}
\end{center}
\caption{Training hyperparameters used for experiments on the Visual Wake Words dataset.}
\label{tab:vww-setup}
\end{table}
\section{Gradient Derivations}
In this section, we provide derivations for the gradient expressions of the loss function $\mathcal{L}$ with respect to the full precision weights $\mathbf{w} \in {\mathbb{R}}^D$ and the quantizer parameters $\mathcal{P}_Q=\{\alpha_1, ..., \alpha_B, \gamma_1, \gamma_2, t_1, ..., t_{N-1}\}$. Recall that during training, the quantizer expression is:
\begin{equation}\label{eq:quantizer}
\mathbf{z} = Q_T(\mathbf{w}) = \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[\hat{f}_T(\gamma_1 \mathbf{w}-t_i)\sum_{j=1}^Bb_{i,j}\alpha_j\Big]-\sum_{j=1}^B\alpha_j\Bigg]
\end{equation}
where $\hat{f}_T$ is the smooth approximation based on the sigmoid function:
\begin{equation}
\hat{f}_T(u) = \frac{1}{1+\text{exp}(-Tu)}
\end{equation}
whose derivative can be easily written as:
\begin{equation} \label{eq:sig-grad}
\grad{\hat{f}_T(u)}{u} = T \hat{f}_T(u)\Big[1-\hat{f}_T(u)\Big]
\end{equation}
\subsection{Notation}
The derivation of these gradients involves computing derivatives with respect to vectors. Thus, in this section we establish the appropriate notation.
The derivative of a scalar $y$ with respect to a $D$-dimensional vector $\mathbf{x}$ is:
\begin{equation}\label{eq:d1}
\frac{\partial y}{\partial\mathbf{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_1} & \frac{\partial y}{\partial x_2} & \dots & \frac{\partial y}{\partial x_D} \end{bmatrix}
\end{equation}
whereas the derivative of a vector $\mathbf{y}$ with respect to a scalar $x$ is:
\begin{equation}\label{eq:d2}
\frac{\partial \mathbf{y}}{\partial x} = \begin{bmatrix} \frac{\partial y_1}{\partial x} \\ \frac{\partial y_2}{\partial x} \\ \vdots \\ \frac{\partial y_D}{\partial x} \end{bmatrix}
\end{equation}
The derivative of a scalar $y$ with respect to another scalar $x$, assuming $y=g(\mathbf{z})$ and $\mathbf{z} = f(x)$, can therefore be computed using the chain rule:
\begin{equation}
\grad{y}{x}=\grad{y}{\mathbf{z}}\cdot \grad{\mathbf{z}}{x} = \begin{bmatrix} \frac{\partial y}{\partial z_1} & \frac{\partial y}{\partial z_2} & \dots & \frac{\partial y}{\partial z_D} \end{bmatrix} \begin{bmatrix} \frac{\partial z_1}{\partial x} \\ \frac{\partial z_2}{\partial x} \\ \vdots \\ \frac{\partial z_D}{\partial x} \end{bmatrix} = \sum_{k=1}^D\grad{y}{z_k}\cdot \grad{z_k}{x}
\end{equation}
\subsection{Derivations}
\subsubsection{Post-quantization Scale}
We notice that:
\begin{equation}
\grad{z_k}{\gamma_2} = \frac{z_k}{\gamma_2}
\end{equation}
which can be plugged in to get the gradient using the chain rule:
\begin{equation}
\frac{\partial {\cal L}}{\partial \gamma_2} = \frac{\partial {\cal L}}{\partial \mathbf{z}} \cdot \frac{\partial \mathbf{z}}{\partial \gamma_2} = \sum_{k=1}^D \grad{{\cal L}}{z_k}\cdot \grad{z_k}{\gamma_2} = \frac{1}{\gamma_2} \sum_{k=1}^D\frac{\partial {\cal L}}{\partial z_k}z_k
\end{equation}
\subsubsection{Ternary Branch Scales}
We first compute $\forall j \in [B]$:
\begin{equation}
\grad{z_k}{\alpha_j} = \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[\hat{f}_T(\gamma_1 w_k-t_i)b_{i,j}\Big]-1\Bigg] = \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[g_{k,i}b_{i,j}\Big]-1\Bigg]
\end{equation}
where $g_{k,i}=\hat{f}_T(\gamma_1 w_k-t_i)$ for brevity. Therefore, using the chain rule we obtain:
\begin{equation}
\grad{{\cal L}}{\alpha_j} = \grad{{\cal L}}{\mathbf{z}}\cdot \grad{\mathbf{z}}{\alpha_j} =\sum_{k=1}^D \grad{{\cal L}}{z_k}\cdot \grad{z_k}{\alpha_j} = \gamma_2 \sum_{k=1}^D \frac{\partial {\cal L}}{\partial z_k} \Bigg[\sum_{i=1}^{N-1}\Big[b_{i,j}g_{k,i}\Big]-1\Bigg]
\end{equation}
\subsubsection{Quantizer Thresholds}
We first utilize \eqref{eq:sig-grad} in order to compute $\forall i \in [N-1]$:
\begin{align}
\begin{split}
\grad{z_k}{t_i} &= \gamma_2 \Big[\grad{\hat{f}_T(\gamma_1 w_k-t_i)}{t_i}\sum_{j=1}^Bb_{i,j}\alpha_j\Big] \\
&= -\gamma_2 T \Big[g_{k,i}(1-g_{k,i})\sum_{j=1}^Bb_{i,j}\alpha_j\Big] = -\gamma_2 T \Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]
\end{split}
\end{align}
where $h_{k,i} = g_{k,i}(1-g_{k,i})$ for brevity. Therefore using the chain rule we obtain:
\begin{equation}
\grad{{\cal L}}{t_i} = \grad{{\cal L}}{\mathbf{z}}\cdot \grad{\mathbf{z}}{t_i} =\sum_{k=1}^D \grad{{\cal L}}{z_k}\cdot \grad{z_k}{t_i} = -\gamma_2 T \sum_{k=1}^D \frac{\partial {\cal L}}{\partial z_k} \Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j \Big]
\end{equation}
\subsubsection{Pre-quantization Scale}
Similarly, we utilize \eqref{eq:sig-grad} in order to compute:
\begin{align}
\begin{split}
\grad{z_k}{\gamma_1} &= \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[\grad{\hat{f}_T(\gamma_1 w_k-t_i)}{\gamma_1}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]\Bigg]= \gamma_2 T w_k \Bigg[\sum_{i=1}^{N-1}\Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]\Bigg]
\end{split}
\end{align}
and therefore applying the chain rule yields:
\begin{equation}
\grad{{\cal L}}{\gamma_1} = \grad{{\cal L}}{\mathbf{z}}\cdot \grad{\mathbf{z}}{\gamma_1} = \sum_{k=1}^D \grad{{\cal L}}{z_k}\cdot \grad{z_k}{\gamma_1} = \gamma_2T \sum_{k=1}^D \grad{{\cal L}}{z_k} w_k \Bigg[\sum_{i=1}^{N-1}\Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]\Bigg]
\end{equation}
\subsubsection{Full Precision Weights}
Finally, in order to compute the gradient of $\mathcal{L}$ with respect to the full precision weights $\mathbf{w}=[w_1, ..., w_D]^{\text{T}}$, we first compute $\forall k \in [D]$:
\begin{align}
\begin{split}
\grad{z_m}{w_k} &= \gamma_2 \Bigg[\sum_{i=1}^{N-1}\Big[\grad{\hat{f}_T(\gamma_1 w_m-t_i)}{w_k}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]\Bigg] \\
&= \begin{cases} \gamma_1 \gamma_2 T \Bigg[\sum_{i=1}^{N-1}\Big[h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j\Big]\Bigg], & \text{if } m=k \\ 0, & \text{otherwise} \end{cases}
\end{split}
\end{align}
and using the chain rule, we obtain:
\begin{equation}
\grad{{\cal L}}{w_k}= \grad{{\cal L}}{\mathbf{z}}\cdot \grad{\mathbf{z}}{w_k} = \sum_{m=1}^D\grad{{\cal L}}{z_m}\cdot \grad{z_m}{w_k} = \gamma_1 \gamma_2 T \frac{\partial {\cal L}}{\partial z_k} \sum_{i=1}^{N-1} \Big[ h_{k,i}\sum_{j=1}^Bb_{i,j}\alpha_j \Big]
\end{equation}
\section{MobileNetV2 on ImageNet Comparisons}
We compare DBQ and \cite{Uhlich2020Mixed} on MobileNetV2 in Table~\ref{tab:mn2-extra-results}. \cite{Uhlich2020Mixed} provides two trained models, M1 and M2, where M1 is trained with a memory constraint and M2 is not. We find that DBQ-2T is smaller than M2 \cite{Uhlich2020Mixed} at iso-accuracy on ImageNet, and more accurate than M1 \cite{Uhlich2020Mixed} but at a larger storage cost. We are unable to compare the computational complexities since \cite{Uhlich2020Mixed} lacks sufficient information; hence we adopt the metrics reported in \cite{Uhlich2020Mixed}, which are weight storage (analogous to ${\cal C}_M$) and activation storage (analogous to ${\cal C}_R - {\cal C}_M$).
\begin{table}[hbb]
\begin{center}
\resizebox{\columnwidth}{!}{%
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|c c c}
\clineB{1-4}{2.5}
\textbf{Model} & \textbf{Top-1 Acc.} [$\%$] & \textbf{Weight Storage} [MB] & \textbf{Activation Storage} [MB]\\ \clineB{1-4}{2.5}
M1 \cite{Uhlich2020Mixed} (w/ constr.) & $69.74$ & $1.55$ & $0.57$ \\ \hline
M2 \cite{Uhlich2020Mixed} (w/o constr.) & $70.59$ & $3.14$ & $1.58$ \\ \hline
DBQ-2T & $\mathbf{70.54}$ & $\mathbf{2.43}$ & $\mathbf{1.15}$ \\ \hline
\end{tabular}
}
\end{center}
\caption{The Top-1 accuracy on ImageNet and Storage costs for MobileNetV2 using our method (DBQ-2T) compared to \cite{Uhlich2020Mixed}.}
\label{tab:mn2-extra-results}
\end{table}
\section{DBQ Branch Sparsity}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|}
\cline{4-6}
\multicolumn{3}{c}{}&\multicolumn{3}{|c|}{Average Branch Sparsity [$\%$]}\\
\hline
PW Layer & $C_{\text{in}}$ & $C_{\text{out}}$ & FX8& DBQ-1T & DBQ-2T\\ \hline \hline
0 & $64$ & $32$ & $35.55$& $58.69$ & $64.82$\\ \hline
1 & $64$ & $128$ & $10.74$ & $41.42$& $51.75$\\ \hline
2 & $128$ & $128$ & $6.86$ & $34.09$ & $46.45$\\ \hline
3 & $128$ & $256$ & $6.73$ & $31.83$& $44.96$\\ \hline
4 & $256$ & $256$ & $4.53$ & $29.10$& $43.05$\\ \hline
5 & $256$ & $512$ & $7.31$ & $30.62$& $44.36$\\ \hline
6 & $512$ & $512$ & $6.41$ & $28.50$& $43.40$\\ \hline
7 & $512$ & $512$ & $6.00$ & $26.48$& $42.94$\\ \hline
8 & $512$ & $512$ & $4.00$ & $24.03$& $41.70$\\ \hline
9 & $512$ & $512$ & $5.57$ & $24.89$& $42.56$\\ \hline
10 & $512$ & $512$ & $5.50$ & $23.65$& $42.30$\\ \hline
11 & $512$ & $1024$ & $7.00$ & $23.17$& $42.41$\\ \hline
12 & $1024$ & $1024$ & $10.69$ & $28.25$&$45.77$ \\ \hline \hline
\multicolumn{3}{|c|}{Network Average} &$7.59$ &$26.50$& $\mathbf{43.78}$ \\\hline
\end{tabular}
\end{center}
\caption{Branch level sparsity for all the pointwise (PW) layers of MobileNetV1 on ImageNet. $C_{\text{in}}$ and $C_{\text{out}}$ denote the number of input and output channels respectively.}
\label{tab:sparsity}
\end{table}
One of the advantages of implementing ternary-based dot products is leveraging weight sparsity, which is reflected in our sparsity-aware computational cost ${\cal C}_S$. In this work, we show that for MobileNetV1 on ImageNet with two ternary branch quantization (DBQ-2T-4), the computational cost can be reduced from $2.18\times 10^{10}$ FAs to $1.42\times 10^{10}$ FAs ($\sim 35\%$ reduction) by simply skipping the operations involving zero weights. Table \ref{tab:sparsity} reports the average branch-level sparsity for every pointwise layer. For the DBQ-2T model, which quantizes PW layers to two ternary branches, we find that on average $43.78\%$ of all PW weights are zero, which explains the massive $35\%$ reduction in ${\cal C}_S$. In contrast, the DBQ-1T model, which quantizes all PW layers to one ternary branch, achieves a $26.5\%$ average branch sparsity. While DBQ-2T has twice the number of branches compared to DBQ-1T, the per-branch sparsity is actually much higher for DBQ-2T. In other words, while the number of pointwise parameters increases by $2\times$ when going from 1T to 2T, the number of non-zero parameters increases by only $1.53\times$ due to the high branch sparsity. On the other hand, using 8b fixed-point for the PW layers yields very little weight sparsity ($7.59\%$).
\section{Introduction}
Galaxies exhibit bimodal distributions in a number of observed properties. The bimodality in galaxy morphologies formed the basis of the original galaxy classification scheme of \citet{hubble26}. The colors and luminosities of galaxies have long been known to correlate with morphology \citep[e.g.][]{devaucouleurs61, chester64}, with ellipticals being predominantly red and spirals and irregulars blue.
More recently, large statistical samples of galaxies have become available, allowing us to investigate the bimodality of galaxies in a much more quantitative way. In particular, the bimodality appears quite strongly in the galaxy $(u-r)$ color distribution which consists of two peaks with a minimum in between them at $(u-r)\approx 2.1-2.2$ \citep{strateva01}. Galaxies in the red peak tend to be predominantly morphologically early-type and high surface brightness galaxies while those in the blue peak are dominated by morphologically late-type galaxies with lower surface brightness \citep{strateva01,driver06,blanton03b,ball06}. Based upon a sample of low-redshift galaxies from the SDSS, \citet{baldry04} investigated the distribution of galaxies in the $(u-r)$ vs. $M_r$ color-magnitude diagram (CMD). The galaxies in their sample separate into blue and red sequences with the distribution in color at each absolute magnitude well-fit by the sum of two Gaussians. The mean color as a function of $M_r$ for each sequence consists of an overall reddening with increasing luminosity with a steeper transition in the average color and width of both sequences at a stellar mass of $\sim2\times10^{10}$ M$_{\sun}$.
In addition to mass, one of the most important additional factors suspected of contributing to the galaxy bimodality is the environment. While it has long been known that the morphologies of galaxies are correlated with the local density \citep{dressler80}, the dependence of galaxy colors and luminosities on local density is complicated. Although the ratio of the number of red to blue galaxies varies strongly with the local density, the mean color of the blue and red sequences varies relatively little with environment \citep{balogh04}. On the other hand, the luminosity of blue sequence galaxies is nearly independent of environment, while both luminous and faint red galaxies are found on average in higher density environments than intermediate luminosity red galaxies \citep{hogg03}.
The galaxy bimodality has also begun to be investigated based upon large samples of galaxy spectra from the Sloan Digital Sky Survey (SDSS). In particular, \citet{kauffmann03a} developed a method that uses the Balmer absorption line index ${\rm H\delta_A}$ and the ${\rm 4000 \AA}$ break strength $D_n(4000)$ measured from the SDSS fiber spectra in the central $3\arcsec$ of each galaxy to constrain the star formation histories, dust attenuation, and stellar masses for their sample. Based upon these derived parameters, \citet{kauffmann03b} showed that galaxies tend to divide into two distinct groups around a stellar mass of $3\times10^{10}$ M$_{\sun}$, similar to the transition mass noted in the optical galaxy CMD \citep{baldry04}. While galaxies below this mass tend to have younger stellar populations, more massive galaxies tend to be older. In related work, \citet{brinchmann04} used the emission lines in the SDSS spectra to determine star formation rates (SFRs) for a large sample of SDSS galaxies. Using the specific star formation rate, i.e., the current SFR divided by the stellar mass $M_*$, \citet{brinchmann04} found that galaxies with $10^8<M_*<10^9$ M$_{\sun}$ have $\log{(SFR/M_*)}=-9.6$ to $-10$, values consistent with an approximately constant SFR with time. Above $10^{10}$ M$_{\sun}$, the specific SFRs decline with mass, implying star formation histories increasingly weighted to much older ages.
The evolution of the galaxy color-magnitude diagram out to $z\sim1$ has begun to be explored \citep{willmer06, faber06, blanton06}. These results show that the galaxy bimodality is already in place at $z\sim1$. However, the color of both sequences tends to become somewhat bluer with increasing redshift while the luminosity function of both red and blue galaxies shifts to higher luminosities \citep{blanton06, willmer06}. Based upon combining the DEEP2 and COMBO17 surveys, \citet{faber06} argued that the number density of blue galaxies is more or less constant from $z\sim1$ to $z\sim0$, while the number density of red galaxies has been increasing. \citet{faber06} proposed a scenario to explain their data in which some blue galaxies migrate to the red sequence as a result of gas-rich mergers that use up the remaining gas in an interaction-induced starburst. These galaxies then migrate up the red sequence by a series of gas-free mergers.
The origin of the galaxy bimodality and corresponding transition mass of a few$\times10^{10}$ M$_{\sun}$ is beginning to be understood theoretically. Based upon a semi-analytic model utilizing some simple prescriptions for gas cooling, star formation, and supernova feedback coupled with merging histories of dark matter haloes, \citet{menci05} modeled the $(u-r)$ vs. $M_r$ CMD of \citet{baldry04}. In their model, feedback from supernovae is ineffective at regulating star formation for galaxies above a certain threshold halo mass. In these massive galaxies all of the gas is consumed relatively quickly, resulting in a red sequence galaxy at zero redshift. Blue sequence galaxies, on the other hand, tend to come from less massive progenitors where supernovae feedback is effective at regulating star formation, thus allowing star formation to continue down to the present. While their model is successful at reproducing most of the optical CMD, it predicts too many blue galaxies at $M_r=-22$ compared to the observations.
A different explanation for the origin of bimodality has been suggested by \citet{dekel06}. According to this model, above a critical halo mass $M_{shock} \sim 10^{12}$ M$_{\sun}$, a shock is generated in the gas accreting onto the dark matter halo which heats most of the gas and prevents it from cooling and forming stars. In these massive haloes, star formation does happen at $z\gtrsim2$ due to cold gas that is able to penetrate the hot gas, leading to a burst of star formation, while for $z\lesssim2$ heating from Active Galactic Nuclei (AGN) prevents gas from forming any more stars. This naturally leads to the most massive galaxies lying on the red sequence at $z\sim0$. For galaxies residing in halos with masses less than $10^{12}$ M$_{\sun}$, the gas is not shock heated, allowing cold flows to fuel star formation that is then regulated by supernova feedback. As a result, lower mass galaxies lie on the blue sequence and the location of the bright tip of the blue sequence is due to the onset of the shock in the accreting gas for more massive halos and the feedback from AGN. In this scenario, galaxies tend to move up the blue sequence with time until their masses go above $M_{shock}$, or they merge into another more massive halo with mass above $M_{shock}$, after which the gas in the galaxy is no longer allowed to cool and star formation ceases. Both \citet{cattaneo06} and \citet{croton06} have coupled semi-analytic models including the transition from shock heating to cold flows and feedback from supernova as well as AGN with the merging histories of dark matter haloes from N-body simulations. While the details of the modeling of the baryonic physics differs somewhat, both groups were able to reproduce the local galaxy CMD by tuning the various parameters affecting star formation and feedback in their models.
In this paper, we investigate the galaxy bimodality as revealed in the UV minus optical colors of a large sample of galaxies observed by both the {\it Galaxy Evolution Explorer} ({\it GALEX}) and the SDSS. While significant contributions to the UV luminosity can come from older evolved stars in red sequence galaxies \citep[e.g.][]{yi05, rich05}, in general the UV light in galaxies is dominated by massive stars with main sequence lifetimes up to $\sim 10^8$ yrs. As a result, the emerging UV luminosity is proportional to the recent star formation rate once corrected for light absorbed by dust \citep{kennicutt98}. The greater sensitivity of the {\it GALEX} bands to the recent star formation rate as compared to the SDSS $u$ band would lead us to expect a greater separation between the red and blue sequences. While the measurements from the SDSS spectra are sensitive diagnostics of the stellar populations in galaxies, they are measured only in the central $3\arcsec$ of each galaxy, making somewhat uncertain aperture corrections necessary to account for the portion of each galaxy not sampled. While the UV data presented here are much more susceptible to dust attenuation than the optical SDSS data, the UV measurements sample the entire galaxy and thus complement the SDSS measurements.
\section{Data and Analysis}
\subsection{{\it GALEX} Data}
The UV data presented here are derived from the {\it GALEX} Medium Imaging Survey (MIS) \citep{martin05}. {\it GALEX} is a 50cm diameter UV telescope that images the sky simultaneously in both a $FUV$ and a $NUV$ band, centered at 1540 \AA~and 2300 \AA, respectively. The field-of-view of {\it GALEX} is approximately circular with a diameter of $1\fdg2$ and resolution of about $5\farcs5$ FWHM in the $NUV$. The MIS pointings are chosen to overlap areas of sky with imaging and spectroscopy from the SDSS and consist of exposures of at least one to a few orbits with the mode of the exposure time distribution being 1700 sec. The dataset used in our analysis is taken from the union of the {\it GALEX} first data release (GR1) with the {\it GALEX} internal release IR1.1, a subset of which has been included in the second data release (GR2) publicly available from the {\it GALEX} archive.\footnote{The {\it GALEX} archive can be accessed from http://galex.stsci.edu/GR2/.} The IR1.1 data was processed with a pipeline very similar to that used in the GR1 data and employed the same calibration as used in that release. Details of the {\it GALEX} detectors, pipeline, calibration and source extraction can be found in \citet{morrissey05,morrissey07}.
The {\it GALEX} pipeline uses the SExtractor program \citep{bertin96} to detect and make measurements of sources in the images. Throughout this paper, we use the "MAG\_AUTO" measurements output by SExtractor. These magnitudes are measured within elliptical apertures with semi-major axis scaled to 2.5 times the first moment of each source's radial profile, as first suggested by \citet{kron80}.
Due to an error in the way in which the SExtractor parameters were set in the GALEX pipeline, the photometric errors for most of the sources, as reported in the GR1 and IR1.1 catalogs, are underestimated. For each source we have calculated a more accurate statistical error in the total magnitude from the size of the MAG\_AUTO aperture and the flat field response, exposure time, and sky background at the source position. In addition to the statistical errors, we have added in quadrature an assumed zero-point plus flat field uncertainty of 2\% in the $NUV$ and 5\% in the $FUV$ \citep{morrissey07}. The errors increase from the zeropoint uncertainty at the bright end up to $\approx 0.2-0.3$ mag at 23rd mag in both bands.
\subsection{{\it GALEX}-SDSS Matched Sample}
The {\it GALEX} MIS catalogs were matched with the SDSS MPA/JHU DR4 value-added catalogs.\footnote{The MPA/JHU value-added catalogs were downloaded from http://www.mps-garching.mpg.de/SDSS/.} These catalogs consist of line and index measurements from the SDSS spectra as well as many derived quantities and are described in more detail in a series of papers on the star formation rates, star formation histories, stellar masses, and metallicities of galaxies in the local universe \citep{kauffmann03a,kauffmann03b,brinchmann04,tremonti04}. For each {\it GALEX} pointing, SDSS sources within $0\fdg6$ of the {\it GALEX} field center were matched with the nearest {\it GALEX} source within a radius of $4\arcsec$. When concatenating together the catalogs for all the fields, we removed duplicate {\it GALEX} detections in the overlap regions between adjacent pointings by using the SDSS identification numbers (Plate ID, MJD, Fiber ID) and selecting the {\it GALEX} match closest to its field center.
After matching the {\it GALEX} and SDSS data, we further restricted the sample with various cuts intended to generate a complete statistical sample which are summarized in Table \ref{selection_limits}. For the SDSS photometry, we selected galaxies targeted for spectroscopy in the SDSS main galaxy sample with $r$-band magnitudes in the range $14.5 < r < 17.6$. While the nominal magnitude limit of the SDSS main galaxy sample is $r=17.77$ \citep{strauss02}, in practice the limit varies as a function of position on the sky. After examining the galaxy number counts, we set the faint limit to $r=17.6$ because the counts begin to turn-over below this level. While the median photometric error for the SDSS galaxies is only 0.03 mag, there are a small fraction with much larger errors. In order to remove these objects, we restricted the sample to galaxies with errors $\sigma_r < 0.2$ mag. In addition to the photometry, we further restricted the sample to those galaxies with redshifts $z$ in the range $0.01 < z < 0.25$ and with redshift confidence $z_{conf} > 0.67$.
In addition to the cuts on the SDSS data, we applied several cuts based upon the UV measurements. Since {\it GALEX} photometry and astrometry degrade near the edge of the detectors, we only included objects in the sample if their distance from the {\it GALEX} field center $fov\_radius$ is less than 0\fdg55. {\it GALEX} detections were also required to have $nuv\_artifact\leq1$. This excludes from our sample galaxies that lie within regions expected to be contaminated by reflections from bright stars within the field. Areas of the images with $nuv\_artifact=1$ are designed to flag regions where scattered light is predicted from bright stars just outside the field-of-view. Currently, this flag is set very conservatively and the vast majority of sources with this flag set are in regions that are free from scattered light issues. We therefore elected to ignore this flag for our sample. We also only included fields with exposure times greater than 1000 sec. The resulting overlap area between {\it GALEX} and the SDSS including all of the above cuts is 485.321 deg$^2$ in the $NUV$ and 411.266 deg$^2$ in the $FUV$. The $FUV$ sample has a somewhat smaller area because some of the fields have $NUV$ data only. The procedure used to calculate the overlap area is similar to that used by \citet{bianchi07}, where a more detailed description can be found.
For both the $FUV$ and $NUV$ samples, we included galaxies with apparent magnitudes in the range $16 < N(F)UV < 23$. Based upon a series of artificial source tests, we have estimated that the data is greater than 80\% complete for $NUV, FUV < 23$ mag. After applying the Galactic extinction correction, the UV magnitude limit varies across the sky. We account for this variation below when computing the volume densities. After applying all of the above cuts to the data, the $NUV$ and $FUV$ samples contain 26,281 and 18,091 galaxies, respectively. The redshift distributions for the $NUV$ and $FUV$ samples are shown in Figures \ref{zdist_nuv} and \ref{zdist_fuv}. In the figures, the solid black lines are the distributions of SDSS galaxies lying within the area observed by {\it GALEX} and satisfying the SDSS selection criteria while the dashed red lines show those galaxies which have a {\it GALEX} match falling within the {\it GALEX} selection criteria as well. The fraction of SDSS main sample galaxies with {\it GALEX} matches tends to decline with redshift, mainly due to increasing numbers of red galaxies falling below the {\it GALEX} detection limit. In Figure \ref{apparent_cmd} the $r$-band magnitude is plotted as a function of color for the $NUV$ and $FUV$ samples. The dashed lines in the figure indicate the $r$-band and UV magnitude limits. For the $NUV$ sample, the lack of galaxies with $(NUV-r) \gtrsim 6.5$ at bright $r$ magnitudes would argue that the {\it GALEX} MIS data is deep enough to fully sample the entire color distribution of galaxies in the local universe. Thus, the edge of the color distribution is likely real and not a selection effect. On the other hand, for the $FUV$ sample in the right hand panel of Figure \ref{apparent_cmd}, the data go right up to the selection limit at the red end. In the $FUV$ sample, the red edge of the color distribution thus reflects the UV magnitude limit.
The fraction of SDSS main sample galaxies that lie within 0\fdg55 of a {\it GALEX} MIS field center and that have a {\it GALEX} match within our $4\arcsec$ search radius is shown in Figure \ref{completeness_r} as a function of $r$ magnitude. In the $NUV$ sample, the completeness is roughly constant at about 90\% down to $r\approx16.5$. Fainter than this magnitude, the completeness begins to drop off. There are very few galaxies with colors redder than $(NUV-r)\approx6.5$. At the {\it GALEX} magnitude limit of 23 mag, a galaxy with so red a color would have $r=16.5$. Thus, the fraction of SDSS galaxies with a {\it GALEX} match begins to drop fainter than this $r$ magnitude due to increasing numbers of red galaxies falling below the {\it GALEX} detection limit. For the $FUV$ sample, the reddest galaxies have $(FUV-r)\approx7.5$, which corresponds to $r=15.5$ at the limiting magnitude of $FUV=23$. As expected, the fraction of SDSS galaxies with an $FUV$ match begins to decline at about this $r$ magnitude.
In both the $FUV$ and $NUV$ samples, the match completeness does not reach 100\% at bright $r$ magnitudes. We have visually inspected all SDSS main sample galaxies with $14.5 < r < 15.5$ that were observed by {\it GALEX}. For almost all of these galaxies, there is a galaxy visible in the {\it GALEX} images. However, the UV center measured by the {\it GALEX} pipeline for these non-matches lies more than $4\arcsec$ from the SDSS position. Sometimes the {\it GALEX} pipeline breaks the galaxy into more than one fragment, none of which are coincident with the SDSS position. In other cases, especially if the galaxy has a low UV surface brightness, the center can be offset from the SDSS position by more than our search radius even if the {\it GALEX} pipeline detects the galaxy as a single object. We assume that the level portion of the match completeness curves in Figure \ref{completeness_r} gives the intrinsic completeness for {\it GALEX} detections of the SDSS main galaxy sample. The values we adopt are 0.91 and 0.80 for the $NUV$ and $FUV$, respectively.
While the completeness of the SDSS photometric sample is nearly 100\%, some fraction of galaxies that meet the SDSS main galaxy sample selection do not have a redshift measured \citep{strauss02}. Although some galaxies do not have a redshift due to low signal-to-noise in their spectra, the majority of targeted galaxies without redshifts are missed due to the constraint that SDSS spectroscopy fibers cannot be placed closer than $55\arcsec$ to one another. Some of these missed galaxies can be observed in neighboring plates if that region of sky is covered by more than one plate. While the exact completeness is determined by the precise geometry of the spectroscopic plates, the result is that the spectroscopic completeness of the SDSS main galaxy sample is 92-94\% for the early release data \citep{strauss02,blanton01}. We have adopted a spectroscopic completeness of 0.9. Multiplying the {\it GALEX}-SDSS match completeness by the SDSS spectroscopic completeness, we estimated the total completeness of our sample to be 0.82 in the $NUV$ and 0.72 in the $FUV$. In calculating the volume densities below, we correct the number counts by these factors (i.e., the factor $f$ in equations (\ref{phi_eqn}) and (\ref{phierr_eqn}) below).
In addition to the spectroscopic incompleteness, there is an additional surface brightness selection that is imposed on the SDSS main galaxy sample. As a part of their study of the luminosity function of low luminosity galaxies, \citet{blanton05} investigated the completeness as a function of surface brightness and found that the SDSS spectroscopic galaxy sample is greater than 90\% complete above a half light surface brightness of $\mu_{50,r}=22.4$ mag arcsec$^{-2}$ with the completeness dropping to 50\% at $\mu_{50,r}=23.4$ mag arcsec$^{-2}$. Since luminosity and surface brightness are correlated, the surface brightness selection preferentially selects against dwarf galaxies. For galaxies brighter than $M_r=-18$, \citet{blanton05} have fit a Gaussian to the surface brightness distribution in a series of absolute magnitude bins and have used this model to extrapolate the number of galaxies likely missed due to the surface brightness selection at fainter luminosities. The fraction of galaxies missing from the sample increases from near zero at $M_r=-18$ to approximately 40\% at $M_r=-16$. However, these low luminosity, low surface brightness galaxies do not make a significant contribution to the total luminosity density. Even after correcting for the surface brightness incompleteness, about 90\% of the $r$-band luminosity density is due to galaxies with $M_r<-17$. Thus, the surface brightness selection is unlikely to affect our results for bright galaxies. However, we may be underestimating the number density of galaxies with $-18<M_r<-16$ by up to 40\%.
\subsection{Absolute magnitudes and volume densities}
We computed absolute magnitudes for our sample galaxies using, for example for the $r$-band,
\begin{equation}
M_{r,0.1} = m_r - 5 \log{D_L} - 25 - K_{0.1,r}(z) + (z - 0.1)Q
\label{absmag_eqn}
\end{equation}
where $M_{r,0.1}$ is the absolute magnitude, $m_r$ is the extinction corrected $r$-band magnitude, $D_L$ is the luminosity distance in Mpc, $K_{0.1,r}(z)$ is the K-correction needed to account for the shifting of the galaxy SEDs with respect to the filter bandpass, and $Q$ is a term to account for luminosity evolution in units of magnitudes per redshift. A positive value for $Q$ means that galaxies get brighter with increasing redshift. Similar equations were used for the other bands. For calculating the luminosity distance we assumed a Hubble constant $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and a flat universe with matter density relative to the critical density of $\Omega_m=0.3$ and dark energy density of $\Omega_{\Lambda} = 0.7$. We calculated the K-corrections using the K\_CORRECT program, version 4.1.4, originally developed by \citet{blanton03a} and now extended to handle {\it GALEX} data \citep{blanton_roweis07}. Given the redshift of a galaxy, the K\_CORRECT program finds the linear combination of a set of template galaxy spectra that best reproduces the observed colors of a galaxy. The templates have already been determined as described by \citet{blanton_roweis07}. The coefficients in the linear combination for each galaxy were used later when determining the maximum volume out to which a given galaxy could be observed and be detected in the sample. As suggested by \citet{blanton03b,blanton03c}, we minimized the errors due to the K-corrections by correcting all of the galaxies to bandpasses shifted by $z=0.1$, a redshift near the median value for our sample. We denote absolute magnitudes and colors in this system with the subscript 0.1.
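As a concrete illustration, the conversion in equation (\ref{absmag_eqn}) can be sketched in Python. This is a minimal example only, assuming the astropy package; in practice the K-correction comes from the K\_CORRECT program, and the function and variable names here are our own:
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Flat cosmology with H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_L = 0.7
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def absolute_magnitude(m_r, z, K, Q=1.6):
    """M_{r,0.1} from the extinction-corrected apparent magnitude m_r,
    redshift z, K-correction K (from K_CORRECT), and evolution term Q."""
    D_L = cosmo.luminosity_distance(z).to_value("Mpc")
    return m_r - 5.0 * np.log10(D_L) - 25.0 - K + (z - 0.1) * Q

# e.g., a galaxy with m_r = 16.0 at z = 0.1 and a K-correction of 0.05 mag
print(absolute_magnitude(16.0, 0.1, 0.05))
\end{verbatim}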
Probably the most uncertain term in equation (\ref{absmag_eqn}) is the evolution term $Q$. When calculating the optical galaxy luminosity functions from SDSS data, \citet{blanton03b} fit for both luminosity and density evolution even within the relatively small redshift range sampled by the SDSS data. While the fitted luminosity and number density evolution are highly correlated, the best-fit luminosity functions are consistent with no number density evolution but significant luminosity evolution. In particular for the $r_{0.1}$ band, \citet{blanton03b} found $Q = 1.62 \pm 0.3$, corresponding to a difference of 0.4 mag over the redshift range in our sample of $0.01<z<0.25$. As shown by \citet{blanton03b}, neglecting evolution can lead to significant distortions in the shape of the luminosity function. We have adopted $Q=1.6$ for all bands considered here. This means that we have implicitly assumed that galaxies evolve in luminosity but not in color or in number density.
A value for $Q$ of 1.6 is roughly consistent with other determinations in the UV as well. \citet{schiminovich05} investigated the evolution of the UV luminosity density $\rho_{FUV}$ with redshift and found for $z<1$ that $\rho_{FUV} \propto (1+z)^{2.5\pm0.7}$, corresponding to $Q=1.9\pm0.5$, assuming that the increase in $\rho_{FUV}$ is due entirely to luminosity evolution. Based upon a sample of {\it GALEX} galaxies matched with redshifts from the Two Degree Field Redshift Survey \citep{colless01}, \citet{treyer05} measured evolution of the luminosity function characteristic magnitude $M^*$ of $\Delta M^*_{NUV} \approx 0.3 \pm 0.1$ mag between $z=0.05$ and $z=0.15$, equivalent to $Q=3\pm1$. These other measurements would tend to favor a somewhat larger evolution in the $UV$ with redshift as compared to the $r$-band. Indeed, galaxies most likely become bluer with increasing redshift as a result of the average star formation rates of galaxies increasing with redshift. For example, comparing a low-z sample from the SDSS and a sample of galaxies at $z\sim1$, \citet{blanton06} found that the blue sequence becomes bluer by 0.3 mag in $(u-r)$ while the red sequence becomes bluer by only 0.1 mag. In another study of the evolution of the blue and red sequences, \citet{faber06} analyzed the evolution of galaxies in the $(U-B)$ vs. $M_B$ CMD out to $z\sim1$. They found that $M^*_B$ becomes brighter by $\sim1.3$ mag out to $z\sim1$ for both blue and red sequence galaxies. On the other hand, the luminosity function normalization $\phi^*$ for blue galaxies was found to be roughly constant while that of red galaxies is increasing.
In order to assess the effect of color evolution on our results, we have recomputed the absolute magnitudes and volume densities described below for the $NUV$ sample assuming evolution in the $r$-band of $Q_r = 1.6$ and in the $NUV$ of $Q_{NUV}=3$. These choices correspond to a decrease in the $(NUV-r)$ color of a galaxy across our redshift range $0.01 < z < 0.25$ of 0.34 mag. We have compared many of the results presented in the following sections with and without allowing for color evolution. The morphology of the color-magnitude diagram remains largely unchanged with the peaks and widths of the red and blue sequences nearly the same. The largest effect of including color evolution is on the volume density of luminous blue galaxies since these galaxies are detectable across the entire redshift range and thus would have the largest color correction. Including color evolution also tends to increase somewhat the volume density of galaxies with very blue colors $(NUV-r)<1$. Specifically, including color evolution would increase the luminosity density of the bluest galaxies with $0<(NUV-r)<1$ by 0.2 dex in the $r$-band and by 0.3 dex in the $NUV$. Due to the large uncertainties remaining in the evolution of galaxy colors with redshift and the relatively minor effect it has on our results, we decided to neglect color evolution in our analysis.
When correcting for Galactic extinction, we assumed the \citet{cardelli89} extinction law with $R_V = A_V/E(B-V) = 3.1$. For the SDSS bands, the ratio of $A(\lambda)/E(B-V)$ is 5.155, 3.793, and 2.751 for $u$, $g$, and $r$, respectively, while for the $FUV$ the ratio is 8.24. Due to the presence of the 2175~\AA~bump in the Galactic extinction law, the extinction in the $NUV$ band is no longer strictly proportional to the reddening $E(B-V)$. In order to quantify the effect this has on our extinction corrections, we used a small set of 42 SEDs from \citet{bruzual03} that span a representative range of galaxy SEDs from quiescent ellipticals to rapidly star-forming galaxies.\footnote{ The 42 SEDs can be found at http://www.lam.oamp.fr/arnouts/LE\_PHARE.html.} For each intrinsic SED, we applied the \citet{cardelli89} extinction law and then computed the resulting $NUV$ AB magnitude as a function of $E(B-V)$. For each SED, we fit a quadratic function of $E(B-V)$:
\begin{equation}
A_{NUV} = a_1 E(B-V) + a_2 E(B-V)^2.
\end{equation}
For galaxies with some recent star formation, $a_1 = 8.24$ and $a_2=-0.67$, while for older galaxies with little or no recent star formation and an SED that falls steeply in the UV with decreasing wavelength, $a_1$ is slightly smaller and lies in the range $7.5-8.0$. For 97\% of our sample $E(B-V) < 0.1$, and thus the quadratic term can be safely neglected. In addition, the maximum difference in the adopted value of $A(NUV)$ over the range of $a_1$ among the 42 SEDs is only 0.07 mag at $E(B-V)=0.1$. Therefore, we assume the value $A(NUV)/E(B-V)=8.2$ for all of our calculations.
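For example, at the reddening $E(B-V)=0.1$ that bounds 97\% of the sample, the linear term contributes $a_1 E(B-V) = 0.824$ mag for the star-forming SEDs while the quadratic term contributes only $a_2 E(B-V)^2 \approx -0.007$ mag, confirming that the latter can be neglected.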
As argued above, the fraction of SDSS main sample galaxies with {\it GALEX} detections is a strong function of the galaxy color. In Figures \ref{completeness_nuv_gr} and \ref{completeness_fuv_gr}, we plot contours of the fraction of SDSS galaxies with a {\it GALEX} match for the $NUV$ and $FUV$ samples, respectively. In both samples, the completeness for blue galaxies is more than 90\% while the completeness begins to drop for galaxies with $(g-r)_{0.1}>0.8$. For the $NUV$ sample the completeness along most of the red sequence is in the range $30-60\%$ while the completeness of the red sequence in the $FUV$ sample is lower and in the range $10-40\%$. It is important to note that this drop-off in the fraction of galaxies with a {\it GALEX} match is due to the {\it GALEX} magnitude limit and was taken into account below when computing the volume densities of galaxies as a function of absolute magnitude and color.
We used the $V_{max}$ method \citep{schmidt68} to determine the volume densities of galaxies in our samples. The value of $V_{max}$ for each galaxy is given by the maximum volume within which the galaxy could have been included in the sample, given our selection limits listed in Table \ref{selection_limits}. We computed a separate $V_{max}$ for the $FUV$ and $NUV$ samples. First, we computed $K_{0.1}(z)$ for $0.01 < z < 0.25$ for each galaxy using the best-fit SED derived from the output of the K\_CORRECT program. Next, we used equation (\ref{absmag_eqn}) to define a maximum and minimum redshift for each galaxy in each band by replacing the apparent magnitude in equation (\ref{absmag_eqn}) by the magnitude limits listed in Table \ref{selection_limits}. Then we computed a combined minimum and maximum redshift using
\begin{mathletters}
\begin{eqnarray}
z_{max} = min( z_{r,max}, z_{UV,max}, 0.25), \\
z_{min} = max(z_{r,min}, z_{UV,min}, 0.01).
\end{eqnarray}
\end{mathletters}
Since we have assumed a cosmology with no overall curvature, we calculated the volume between $z_{min}$ and $z_{max}$ for each galaxy as
\begin{equation}
V_{max} = \frac{A}{3} \left(\frac{\pi}{180}\right)^2 \left( \frac{D_L(z_{max})^3}{(1+z_{max})^3} - \frac{D_L(z_{min})^3}{(1+z_{min})^3}\right),
\end{equation}
where $A$ is the solid angle in deg$^2$ from which the sample was drawn. The $V_{max}$ values can then be used to generate the number densities of galaxies as a function of any variables. For example, in order to generate a volume-corrected galaxy CMD for the $NUV$ sample, we computed the number density of galaxies as a function of absolute magnitude and color using
\begin{equation}
\Phi(M_{r,0.1},(NUV-r)_{0.1}) = \frac{f}{\Delta M \Delta C} \sum \frac{1}{V_{max}},
\label{phi_eqn}
\end{equation}
where $\Phi$ gives the number density of galaxies in units of Mpc$^{-3}$ mag$^{-2}$, $\Delta M$ is the width of each bin in absolute magnitude, $\Delta C$ is the width of each bin in color, $f$ is the inverse of the sample completeness estimated above in \S2.2, and the sum is taken over all galaxies within that particular color and absolute magnitude bin centered on $M_{r,0.1}$ and $(NUV-r)_{0.1}$. The corresponding uncertainty in each bin due to counting statistics is given by
\begin{equation}
\delta \Phi(M_{r,0.1},(NUV-r)_{0.1}) = \frac{f}{\Delta M \Delta C} \left( \sum \frac{1}{V_{max}^2}\right)^{1/2}.
\label{phierr_eqn}
\end{equation}
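For illustration, the $V_{max}$ computation and the weighting of equations (\ref{phi_eqn}) and (\ref{phierr_eqn}) can be sketched in Python. This is a schematic version only; the function names, bin-edge conventions, and the use of numpy histograms are our own choices:
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def v_max(z_min, z_max, area_deg2):
    """Comoving volume in Mpc^3 between z_min and z_max over area_deg2,
    using D_C = D_L / (1 + z) for a flat cosmology."""
    dc3 = lambda z: (cosmo.luminosity_distance(z).to_value("Mpc")
                     / (1.0 + z)) ** 3
    return (area_deg2 / 3.0) * (np.pi / 180.0) ** 2 \
           * (dc3(z_max) - dc3(z_min))

def phi_cmd(M, C, Vmax, M_edges, C_edges, completeness=0.82):
    """Number density Phi(M, C) and its Poisson error; f is the
    inverse of the sample completeness (0.82 for the NUV sample)."""
    f = 1.0 / completeness
    dM, dC = M_edges[1] - M_edges[0], C_edges[1] - C_edges[0]
    w = 1.0 / Vmax
    phi, _, _ = np.histogram2d(M, C, [M_edges, C_edges], weights=w)
    var, _, _ = np.histogram2d(M, C, [M_edges, C_edges], weights=w**2)
    return f / (dM * dC) * phi, f / (dM * dC) * np.sqrt(var)
\end{verbatim}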
\section{Results}
\subsection{The Galaxy Color Magnitude Diagram}
The number of galaxies in our $(NUV-r)_{0.1}$ and $(FUV-r)_{0.1}$ CMDs are plotted in Figures \ref{cmd_nuv_points} and \ref{cmd_fuv_points}, respectively. In both plots, the data are plotted as contours where the density of points is high while the locations of individual galaxies are plotted where the density is low. The uncertainty in the colors in both samples is dominated by the errors in the UV measurements. Thus the errors as a function of position in the CMDs are most strongly correlated with color. The median error as a function of color is plotted in Figures \ref{cmd_nuv_points} and \ref{cmd_fuv_points} along the left-hand side of each figure. For blue galaxies, the uncertainty is dominated by the zero-point uncertainty in the {\it GALEX} data. For red galaxies, the errors span a larger range from $0.1-0.4$ mag with a median of about 0.2 mag.
The corresponding volume densities of galaxies as a function of position in the CMD are plotted in Figures \ref{cmd_nuv_vmax} and \ref{cmd_fuv_vmax}, where the weighting was derived from the $V_{max}$ values as in equation (\ref{phi_eqn}) with $\Delta M = 0.5$ mag and $\Delta C = 0.2$ mag. The peaks of the two sequences from the Gaussian fits described below are over-plotted as the dashed lines in the $(NUV-r)$ diagram. The volume densities and the errors for the $(NUV-r)$ diagram are given in Tables \ref{cmd_nuv_tab} and \ref{cmderr_nuv_tab} while those for the $(FUV-r)$ diagram are given in Tables \ref{cmd_fuv_tab} and \ref{cmderr_fuv_tab}.
As we have argued in \S2.2, the red edge of the color distribution in the $NUV$ sample is real whereas the red edge of the $FUV$ sample is a selection effect due to the $FUV$ flux limit. This is reflected in the morphology of the galaxy distributions in Figures \ref{cmd_nuv_vmax} and \ref{cmd_fuv_vmax}, where the distribution turns over for the reddest colors in the $NUV$ diagram and does not turn over entirely in the $FUV$ diagram. Throughout the remainder of this paper, we focus on the $NUV$ diagram.
In both the $FUV$ and $NUV$ diagrams, the galaxies separate into two well-defined sequences in addition to a population of galaxies that lie in between. As has been noted before in optical CMDs \citep[e.g.][]{baldry04}, the most luminous galaxies are on the red sequence, while both sequences become redder with increasing luminosity. In contrast to the optical $(u-r)$ CMD, the blue sequence does not appear to merge at the bright end with the red sequence.
An alternative view of the $NUV$ sample is shown in Figure \ref{cmd_nuv_vmax2} where the volume density of galaxies is plotted as a function of the $NUV$ luminosity. The sample reaches significantly fainter $NUV$ absolute magnitudes for the red galaxies due to the SDSS $r$-band selection. Thus, the slope for the faintest $NUV$ absolute magnitudes included in our sample as a function of color is a selection effect. As in Figure \ref{cmd_nuv_vmax}, the sample separates into blue and red sequences. However, there is little, if any, trend of color with $M_{NUV,0.1}$ along either sequence. This is consistent with the conclusions from studies in the optical which indicate that one of the most important factors determining the evolution of a galaxy is its mass, which is much more closely related to the $r$-band luminosity than to the $NUV$ luminosity \citep{kauffmann03b,brinchmann04}.
\subsection{Color Distributions as a Function of $M_{r,0.1}$}
The volume-corrected number density of galaxies as a function of $(NUV-r)_{0.1}$ color is plotted in Figures \ref{cmd_nuv_colordist}$-$\ref{cmd_nuv_colordist3} in 0.5 magnitude wide bins of $M_{r,0.1}$. The error bars are the statistical errors only calculated using equation (\ref{phierr_eqn}). Except for the most luminous bin, there is both a red and blue peak visible in each panel. Similar to previous optical CMDs, the red sequence dominates in the brighter bins. The red and blue sequences reach approximately equal strengths around $M_{r,0.1}=-21.75$ with the blue sequence becoming dominant at fainter luminosities. The relative number of red sequence galaxies reaches 50\% at about the same luminosity when dividing galaxies using the $(u-r)$ color \citep{baldry04}.
Following \citet{baldry04}, we attempted to fit Gaussians to the red and blue peaks in the color distributions in each $M_{r,0.1}$ bin although we employed a somewhat different methodology. We fit a single Gaussian of the form $(1/\sqrt{2\pi\sigma^2}) \exp{\{-((NUV-r)_{0.1}-\mu)^2/2\sigma^2\}}$ separately to the red and blue sequences. In order to select points on the red and blue sequences, we have utilized the fit from \citet{yi05} to the $(NUV-r)$ color as a function of $M_r$ for a sample of morphologically selected early-type galaxies: $(NUV-r) = f(M_r) = 1.73 - 0.17 M_r$. For the purposes of fitting a Gaussian to each sequence, we defined the red sequence as the points with $(NUV-r)_{0.1} > f(M_{r,0.1}) - 0.5$ and fit a Gaussian to these points using the IDL routine GAUSSFIT, a program that computes a non-linear least squares fit to the data. Similarly, we fit a Gaussian to the blue sequence for points with $(NUV-r)_{0.1} < f(M_{r,0.1}) - 2.0$. These color limits are plotted as the dashed red and blue lines in each of the color distributions shown in Figures \ref{cmd_nuv_colordist}$-$\ref{cmd_nuv_colordist3} whereas the sum of the best-fitting Gaussians is plotted as the solid line. In contrast to the optical $(u-r)$ CMD, a double-Gaussian does not provide a good fit to the data. Although the Gaussians provide a reasonable fit to the blue edge of the blue sequence and the red edge of the red sequence, their sum falls well below the data in the region between the two sequences.
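A minimal Python analogue of this fitting procedure might look as follows. The function and variable names are ours, and we include an amplitude parameter, as GAUSSFIT does; this is a sketch, not the exact pipeline:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    norm = amp / np.sqrt(2.0 * np.pi * sigma ** 2)
    return norm * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def fit_sequence(color, phi, M_r, which="red"):
    """Fit a Gaussian to one side of the (NUV-r)_{0.1} distribution in
    a single M_r bin, using the Yi et al. (2005) early-type locus
    f(M_r) = 1.73 - 0.17 M_r to select red or blue sequence points."""
    f = 1.73 - 0.17 * M_r
    mask = (color > f - 0.5) if which == "red" else (color < f - 2.0)
    p0 = [phi[mask].max(), color[mask][np.argmax(phi[mask])], 0.4]
    popt, pcov = curve_fit(gaussian, color[mask], phi[mask], p0=p0)
    return popt  # amplitude, peak color mu, width sigma
\end{verbatim}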
For the star-forming galaxies in the blue sequence, it is difficult to generate galaxies with extremely blue colors, i.e. $(NUV-r)_{0.1} \lesssim 1$, except with large starbursts or very young ages \citep[e.g.][]{treyer98}. Clearly, such objects are rare in the local universe. The skew of the blue sequence in the $(NUV-r)_{0.1}$ CMD to redder colors compared to what is observed in the optical would be expected for galaxies with somewhat older average stellar populations or with larger reddening due to dust. Similarly, whereas the red sequence is relatively narrow in the optical, a color distribution skewed towards the blue would be expected for early-type galaxies with some residual star formation \citep{yi05}. The departure of the blue and red sequences from a Gaussian would imply that we are beginning to resolve some of these effects due to the greater sensitivity of the UV light to both variations in the star formation rate and the amount of dust.
Even though a double Gaussian provides a poor fit to the $(NUV-r)_{0.1}$ color distribution, the peak of each Gaussian provides a robust estimate of the peak of each sequence in each absolute magnitude bin. On the other hand, the width $\sigma$ of each Gaussian should be interpreted with caution as it only gives some information about the blue edge of the blue sequence and the red edge of the red sequence and does not provide a good representation of the entire distribution. The resulting parameters of the Gaussian fits are plotted in Figure \ref{cmd_nuv_seq}. In the figure the circles and squares give the peaks of the red and blue sequences, respectively, whereas the error bars denote the $\sigma$ for each Gaussian. The values of $\sigma$ for the red sequence lie in the range $0.3-0.5$ mag while those for the blue sequence lie in the range $0.5-0.6$ mag. Also plotted in Figure \ref{cmd_nuv_seq} are the median photometric errors as a function of color for comparison. While the values for $\sigma$ for the blue sequence are significantly larger than the photometric errors, the values for the red sequence are comparable to the photometric errors in the color, indicating that the fall-off of the color distribution on the red edge of the red sequence is largely consistent with that expected from the errors.
We have fit a line to the peak color of the red sequence as a function of absolute magnitude with the result $(NUV-r)_{0.1} = 1.897 - 0.175 M_{r,0.1}$. This fit is plotted as the dashed red line in Figure \ref{cmd_nuv_vmax} and the solid red line in Figure \ref{cmd_nuv_seq}. This fit has the same slope as found by \citet{yi05} except with a slightly redder intercept. \citet{yi05} analyzed a sample of morphologically selected early type galaxies, some of which appear to harbor some residual star formation, a fact which would tend to pull their fit somewhat towards the blue compared to our color-selected sample.
Following \citet{baldry04}, we fit the peak color of the blue sequence with the sum of a line plus a tanh function with the result
\begin{equation}
(NUV-r)_{0.1} = 2.39 + 0.075 (M_{r,0.1} + 20) - 0.808 \tanh{\left( \frac{M_{r,0.1} + 20.32}{1.81}\right)}.
\end{equation}
This curve is overplotted as the solid blue line in Figure \ref{cmd_nuv_seq}. Overall the peak of the blue sequence increases in color from $(NUV-r)_{0.1}=1.8$ at the faint end to $\sim 3$ for the most luminous blue galaxies. The color of the blue sequence changes most rapidly around $M_{r,0.1}=-20.3$.
\subsection{Luminosity Functions as a Function of Color}
\subsubsection{$M_{r,0.1}$ luminosity functions}
The galaxy luminosity function varies strongly with color even within each sequence. This is illustrated in Figure \ref{cmd_nuv_lfs} where the $M_{r,0.1}$ luminosity functions are plotted separately for one magnitude wide bins in $(NUV-r)_{0.1}$ color ranging from zero to seven. These are essentially horizontal cuts through the CMD in Figure \ref{cmd_nuv_vmax}. We have fit the luminosity function in each color bin with a \citet{schechter76} function given by
\begin{equation}
\Phi(M) = 0.4 \ln{10} \phi^* 10^{-0.4(M-M^*)(\alpha+1)} \exp{\{-10^{-0.4(M-M^*)}\}},
\label{schechter}
\end{equation}
where $\phi^*$, $M^*$, and $\alpha$ are fit within each color bin by minimizing $\chi^2$. We determined the $1\sigma$ errors in each of the parameters from the range of solutions with reduced $\chi^2$ within one of the minimum value. The best-fit parameters are listed in Table \ref{schechter_r} while $M^*_{r,0.1}$ and $\alpha$ are plotted as a function of $(NUV-r)_{0.1}$ in Figure \ref{cmd_nuv_lfs_param}.
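A sketch of this fit in Python, assuming arrays of binned luminosity function values phi with errors dphi (a minimal implementation of our own, not the exact code used):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def schechter(M, phi_star, M_star, alpha):
    """Schechter (1976) function expressed in absolute magnitudes."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def fit_schechter(M, phi, dphi, p0=(1.0e-3, -20.5, -1.0)):
    """Minimize chi^2 over (phi_star, M_star, alpha)."""
    chi2 = lambda p: np.sum(((phi - schechter(M, *p)) / dphi) ** 2)
    return minimize(chi2, x0=p0, method="Nelder-Mead").x
\end{verbatim}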
The luminosity function for the very bluest bin with $0<(NUV-r)_{0.1}<1$ is very steep with a best-fitting faint end slope of nearly $-2$, although the errors on $M^*$ and $\alpha$ for this bin are large and highly correlated. As the color increases through the blue sequence, the faint end slope $\alpha$ gradually increases, reaching values of $\sim -0.5$ in between the two sequences at $(NUV-r)_{0.1}\approx4$. Although the uncertainties become larger, the slope reaches a slightly larger value of $\sim0$ for the reddest bin. The value of $M^*$ similarly varies systematically with color, going from $-20.4$ in the bluest bins to $-20.8$ at intermediate colors and finally reaching $-21.1$ for the reddest galaxies. Qualitatively similar results were found by \citet{blanton01} when determining the $r$-band luminosity function separated by $(g-r)$ color.
We have computed the total luminosity density within each color bin from the best-fit Schechter parameters as $\rho_{r,0.1}=\int^{\infty}_{0}L\Phi(L)dL = \phi^* L^* \Gamma(\alpha+2)$. The statistical errors in $\rho$ were determined from the range of values for those fits with reduced $\chi^2$ within one of the minimum. For values of the faint end slope $\alpha \leq -2$, the integral of the Schechter function diverges when integrating all the way down to zero luminosity. With the exception of the bluest bin, the luminosity function slopes are significantly larger than $-2$. However, the $1\sigma$ confidence interval for $\alpha$ for the bluest bin includes values in the region where $\alpha<-2$. When computing the luminosity density in this bin, we have integrated the luminosity function only down to $M_{r,0.1}=-12$. Since this choice is somewhat arbitrary, the luminosity density of the bluest galaxies is more uncertain than for the other color bins. However, we note that this luminosity is near the limit to which the galaxy luminosity function has been determined from the SDSS \citep{blanton05}.
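The luminosity density integral can be written compactly in Python; a sketch assuming scipy, where the truncated form uses the upper incomplete gamma function and is valid for $\alpha > -2$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc

def lum_density(phi_star, M_star, alpha, M_faint=None):
    """rho = phi* L* Gamma(alpha + 2) for the full integral, or
    phi* L* Gamma(alpha + 2, L_min/L*) when truncated at M_faint."""
    L_star = 10.0 ** (-0.4 * M_star)  # luminosity zero point is arbitrary
    a = alpha + 2.0
    if M_faint is None:
        return phi_star * L_star * gamma(a)
    x = 10.0 ** (-0.4 * (M_faint - M_star))  # L_min / L_star
    # gammaincc is the regularized upper incomplete gamma function
    return phi_star * L_star * gammaincc(a, x) * gamma(a)
\end{verbatim}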
The luminosity densities are listed in Table \ref{schechter_r} and are plotted as a function of color in the top panel of Figure \ref{lumden_r} while the fraction of the total luminosity density within each color bin is plotted in the bottom panel of the figure. The largest contribution to the luminosity density comes from galaxies with $2<(NUV-r)_{0.1}<3$ and accounts for $\approx30\%$ of the total. Galaxies bluer than $(NUV-r)_{0.1}=4$ together contribute 64\% of the luminosity density while redder galaxies account for 36\%. Adding up the contribution to the luminosity density from each color bin, we obtain a total of $\log{\rho_{r,0.1}}=26.903\pm0.030$ ergs s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$. This is only slightly larger than the luminosity density of $\log{\rho_{r,0.1}}=26.845\pm0.012$ calculated from a much larger sample of SDSS galaxies by \citet{blanton03b}, after converting to our value for the Hubble constant.
\subsubsection{$M_{NUV,0.1}$ luminosity functions}
Similar to the analysis of the $M_{r,0.1}$ luminosity functions described in the previous section, we have computed $M_{NUV,0.1}$ luminosity functions in one magnitude wide bins of $(NUV-r)_{0.1}$ color. As for the $r$-band, we fit Schechter functions to the distribution within each color bin. The luminosity functions are plotted in Figure \ref{cmd_nuv_lfs2} while the best-fitting Schechter function parameters are listed in Table \ref{schechter_nuv}. The best-fit values for $M^*_{NUV,0.1}$ and $\alpha$ are plotted as a function of color in Figure \ref{cmd_nuv_lfs_param2}. Qualitatively, the results are similar to that in the $r$-band. The faint end slope $\alpha$ is very steep for the bluest galaxies with a value of $-1.8$. The faint end slope gradually increases with color up to a value of $-0.6$ at $(NUV-r)_{0.1}=3.5$. For redder galaxies, the faint end slope remains at about this value until increasing slightly again in the reddest bin. The value of $M^*_{NUV,0.1}$ increases dramatically from $\sim-20$ for the bluest galaxies to $\sim-15$ for the reddest.
Similar to the $r$-band, we have computed the $NUV$ luminosity density $\rho_{NUV,0.1}$ within each color bin. As before, the luminosity function for the bluest galaxies has a faint end slope near $-2$, where its integral diverges. Therefore, for the luminosity density in this bin, we have only integrated the luminosity function down to $M_{NUV,0.1}=-12$. The value of the luminosity density as a function of color is plotted in the top panel of Figure \ref{lumden_uv} while the bottom panel shows the fraction of the total luminosity density contributed by the galaxies in each color bin. As would be expected, the $NUV$ luminosity density is dominated by the blue sequence galaxies. Specifically, $\approx80\%$ of $\rho_{NUV,0.1}$ is coming from galaxies with colors in the range $1<(NUV-r)_{0.1}<3$. The bluest galaxies, those with $0<(NUV-r)_{0.1}<1$, only contribute $\approx6\%$ to the $NUV$ luminosity density. Galaxies as blue as $(NUV-r)_{0.1}\sim0$ are difficult to produce using models with smoothly declining star formation histories. Such very blue galaxies can be reproduced by models with a star formation burst lasting $10-100$ Myr, with little dust and involving a significant fraction of the mass of the galaxy \citep{treyer98}. Clearly, such dust-free starburst galaxies are relatively rare in the local universe and do not contribute much to the total UV luminosity density. The blue sequence could in principle include galaxies undergoing large starbursts that have relatively large extinctions. However, we argue in \S3.5 below that the bulk of the galaxies in the blue sequence do not have such extreme dust attenuation.
Adding up the total luminosity density for galaxies of all colors, we obtain a value of $\log{\rho_{NUV,0.1}} = 25.791 \pm 0.029$ ergs s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$, a value consistent to within the uncertainties with the value determined by \citet{wyder05} from an UV-selected sample.
\subsection{Comparison of $(NUV-r)_{0.1}$ with $(u-r)_{0.1}$}
Although there are a number of qualitative similarities between the $(NUV-r)_{0.1}$ CMD presented here and the $(u-r)$ diagram from \citet{baldry04}, there are a few notable differences. As already shown in \S3.2, the $(NUV-r)_{0.1}$ color distributions are not well fit by a double Gaussian function, in contrast to the $(u-r)$ distributions. There is an excess of galaxies in between the two sequences above that predicted by the double Gaussian. Moreover, the blue sequence appears to merge with the red sequence for the most luminous galaxies in the $(u-r)$ diagram while there is still a distinct blue peak visible in Figure \ref{cmd_nuv_colordist} up to $M_{r,0.1} =-22.75$. In addition, the separation between the blue and red sequences, compared to their widths, is larger at all luminosities than in the $(u-r)$ distributions.
The reason for these differences is relatively easy to understand and related to the greater sensitivity of the $NUV$ band to changes in the recent star formation rate. To illustrate this point, we compare directly the $(NUV-r)_{0.1}$ and $(u-r)_{0.1}$ colors for our sample in Figure \ref{ur}. When generating this figure, we excluded galaxies with $u$-band photometric errors larger than 0.3 mag. For blue galaxies, the two colors are very well-correlated with a slope of $\Delta (u-r)_{0.1}/\Delta (NUV-r)_{0.1} \sim 0.5$. However for galaxies with colors redder than $(NUV-r)_{0.1}\approx3.5$, there is a change in slope and the $(u-r)_{0.1}$ color begins to increase less quickly with $(NUV-r)_{0.1}$ than for bluer $(u-r)_{0.1}$ colors. As a result, galaxies that are on the red sequence in the $(u-r)_{0.1}$ CMD tend to be more spread out in color in the $(NUV-r)_{0.1}$ diagram.
This behavior is essentially that predicted by simple galaxy models. In Figure \ref{ur}, we overplot as red circles the locations of a few \citet{bruzual03} models with an age of 13 Gyr, no dust and solar metallicity. The models are plotted for exponentially declining star formation histories with five values of the time constant $\gamma$ in the range $0.01-7.5$ Gyr$^{-1}$. These models are capable of reproducing the locus of data points in Figure \ref{ur}, and in particular the change in slope at $(NUV-r)_{0.1}>3.5$. The main resulting difference between the observed CMDs is thus the greater sensitivity of the UV to small changes in the recent star formation rate, especially relevant to galaxies on the red sequence or in between the two sequences. For reference, the solid black line in the figure indicates the reddening vector in this diagram corresponding to a reddening in the ionized gas of $E(B-V)_{gas}=0.5$ mag, assuming the attenuation law from \citet{calzetti00}. The reddening vector lies nearly parallel to the blue sequence, meaning that dust would tend to simply move galaxies along the blue sequence in this diagram.
\subsection{Correcting for Dust}
One of the most important obstacles in interpreting the galaxy CMD is the fact that the UV minus optical color of a galaxy is affected not only by the galaxy's star formation history but also by other physical parameters, primarily the amount of dust and the metallicity. We would like to understand how much of the variation in color with luminosity that we observe is due to each of these physical parameters. The most reliable method for determining the UV attenuation is the far-infrared (FIR) to UV flux ratio because it is almost independent of the age of the stellar population, the dust geometry, or intrinsic dust properties \citep{gordon00}. However, the vast majority of our sample have no FIR data available. Thus, we are forced to use more indirect methods. We have estimated the effects of dust on the colors of galaxies in our sample using two methods, the first using the Balmer lines, and the second using the empirical dust-SFH-color relation derived by \citet{johnson06}.
\subsubsection{Balmer decrement}
We have used the Hydrogen Balmer line fluxes measured in the SDSS fiber spectra by \citet{tremonti04} to estimate the attenuation for our sample. The intrinsic ratio of H$\alpha$ to H$\beta$ flux is relatively independent of the physical conditions within \ion{H}{2} regions and has a value of $R_{\alpha\beta,0}=2.87$ \citep{osterbrock89}. For those galaxies with H$\alpha$ and H$\beta$ emission lines detected, we have computed a reddening in the ionized gas $E_{gas}(B-V)$ using
\begin{equation}
E_{gas}(B-V) = \frac{2.5 \log{(R_{\alpha \beta} / R_{\alpha \beta, 0})}}{k({\rm H}\beta) - k({\rm H}\alpha)},
\end{equation}
where $k({\rm H} \beta) - k({\rm H} \alpha) = 1.163$ for the Galactic extinction law of \citet{cardelli89} with $R_V=3.1$. We further computed the attenuation in the $NUV$ and $r$ bands using the \citet{calzetti00} attenuation law, where $A_{NUV} = 3.63 E_{gas}(B-V)$ and $A_r = 1.57 E_{gas}(B-V)$. We have used the line flux uncertainties to compute a statistical uncertainty $\delta A_{NUV}$ in the resulting attenuation. For those galaxies with no H$\alpha$ or H$\beta$ lines, with $A_{NUV} < 0$, or with $\delta A_{NUV} > 0.5$ mag, we have assumed $E_{gas}(B-V) = 0$. For the remaining galaxies, the median $NUV$ attenuation is $A_{NUV} = 1.2$ mag with 99\% of the galaxies lying in the range $A_{NUV} = 0-3$ mag.
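As a worked example, an observed ratio of $R_{\alpha\beta} = 4.1$ yields $E_{gas}(B-V) = 2.5\log{(4.1/2.87)}/1.163 \approx 0.33$, and hence $A_{NUV} \approx 1.2$ mag and $A_r \approx 0.5$ mag, matching the sample median.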
There are a few caveats to bear in mind when estimating the attenuation using the Balmer lines. The relationship between the $R_{\alpha \beta}$ measured in the $3\arcsec$ diameter fiber spectra and the value for the galaxy as a whole is likely complicated. For many star-forming galaxies, the metallicity becomes lower with radius, which we would expect to lead to a decrease in the dust attenuation with radius as well. In these cases, the UV attenuation within the fiber would be higher than for the galaxy as a whole. On the other hand, some galaxies, for example those with a strong bulge, have most of their star formation occurring in the outer disk of the galaxy. For these cases, where there would be weak or no emission lines in the fiber spectra, the integrated UV attenuation would be underestimated. Finally, the attenuation law of \citet{calzetti00} was calibrated using observations of starburst galaxies. It has been shown that the UV attenuation at a given UV spectral slope for less active, more ``normal'' star-forming galaxies is smaller than predicted by the starburst results \citep{bell02, kong04, seibert05, buat05}. Whether the attenuation determined from the Balmer decrement and the \citet{calzetti00} law leads to a similar overestimate of the attenuation remains unclear.
The dust-corrected CMD is plotted in Figure \ref{cmd_nuv_extcorrbalmer_vmax}. The color of the faint end of the blue sequence changes little since the reddening for these galaxies is small. The reddening increases with luminosity such that the overall trend of color with luminosity disappears while leaving the dispersion of the blue sequence relatively unchanged. The dust-corrected color of the peak of the blue sequence has a value of $(NUV-r)_{0.1} \approx 1.7$.
\subsubsection{The dust-SFH-color relation}
By combining a spectroscopic measure of the star formation history relatively insensitive to dust, such as the $D_n(4000)$ index \citep[e.g.,][]{kauffmann03a}, with UV, optical, and FIR fluxes, it is possible to separate the effects of dust and star formation history on the colors of galaxies. \citet{johnson06} have used a sub-sample of SDSS galaxies with both UV measurements from {\it GALEX} and FIR fluxes from {\it Spitzer} to determine the UV attenuation, as measured by the FIR to UV ratio, as a function of $D_n(4000)$ and color. We have utilized the fits given by \citet{johnson06} to estimate the UV attenuation for our entire sample. Specifically, we calculated the $FUV$ attenuation $A_{FUV}$ as a function of $D_n(4000)$ and $(NUV-r)_{0.1}$ color from the fits in Table 1 of \citet{johnson06} as
\begin{equation}
A_{FUV} = 1.27 - 1.56\{D_n(4000)-1.25\} + 1.35\{(NUV-r)_{0.1}-2\} - 1.24\{D_n(4000)-1.25\}\{(NUV-r)_{0.1}-2\}.
\label{extinction_eqn}
\end{equation}
We calculated the $NUV$ and $r$ attenuation using equation (\ref{extinction_eqn}) and assuming $A_{NUV} = 0.81 A_{FUV}$ and $A_r = 0.35 A_{FUV}$, derived from the \citet{calzetti00} attenuation curve.
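These relations are simple enough to evaluate directly; a minimal Python transcription (the function names are ours, and the scalings are those quoted above) is:
\begin{verbatim}
def a_fuv(d4000, nuv_r):
    """A_FUV from the Johnson et al. (2006) fit (equation 7 above)."""
    x, y = d4000 - 1.25, nuv_r - 2.0
    return 1.27 - 1.56 * x + 1.35 * y - 1.24 * x * y

def a_nuv(d4000, nuv_r):
    return 0.81 * a_fuv(d4000, nuv_r)   # Calzetti et al. (2000) scaling

def a_r(d4000, nuv_r):
    return 0.35 * a_fuv(d4000, nuv_r)

# e.g., a blue sequence galaxy with D_n(4000) = 1.25 and (NUV-r) = 2
# has A_FUV = 1.27 mag and A_NUV ~ 1.0 mag
\end{verbatim}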
In order to illustrate the range of $NUV$ attenuations in our sample, in Figure \ref{d4000} we plot the $(NUV-r)_{0.1}$ color as a function of $D_n(4000)$ from the SDSS spectra. The contours in the figure reflect the number of galaxies in the sample and have not been weighted by $1/V_{max}$. Overall, the $(NUV-r)_{0.1}$ color and the $D_n(4000)$ index are correlated, with redder galaxies having on average larger $D_n(4000)$ values, indicating older stellar populations, although with a relatively large spread. The dashed red lines plotted over the data in Figure \ref{d4000} are lines of constant $A_{NUV}$ from equation (\ref{extinction_eqn}). The bulk of the blue sequence galaxies in our sample, with absolute magnitudes near $M^*$, have $A_{NUV}=1-2$ mag, with the mode of the distribution at $A_{NUV}\approx1.3$ mag. The attenuation decreases on average with decreasing color and reaches values near zero for the bluest galaxies. Since the distribution of galaxy colors along the blue sequence does not lie perpendicular to the lines of constant attenuation in Figure \ref{d4000}, the variation in color along the blue sequence is not due to dust alone $-$ the star formation history and metallicity also play a role.
While the $(NUV-r)_{0.1}$ color correlates relatively tightly with $D_n(4000)$ for blue and red sequence galaxies, sources with intermediate colors tend to exhibit a larger spread in $D_n(4000)$. Indeed, galaxies with these intermediate colors tend to have a large range of attenuation, ranging from one to three magnitudes. This would indicate that some galaxies are located in between the two sequences simply because they are star-forming galaxies with a lot of dust reddening while other galaxies are located there due to their older average stellar populations.
In addition to dust and star formation history, it is important to keep in mind that aperture effects can play an important role, particularly for galaxies in between the two sequences because the $D_n(4000)$ measurements only sample the inner $3\arcsec$ of each galaxy. For galaxies with strong bulges, for example, the SDSS spectra sample mostly the bulge resulting in an old value for $D_n(4000)$ whereas the color would tend to indicate a somewhat younger age since it measures the flux from both the bulge and any recent star formation in the disk. This may explain the population of galaxies in Figure \ref{d4000} that have values of $D_n(4000)$ that would place them on the red sequence but with colors that place them in between the red and blue sequences. However, as the sample used by \citet{johnson06} is drawn from the SDSS, their fits should inherently account for this aperture correction on average.
The relation in equation (\ref{extinction_eqn}) is an empirical description of the correlations observed in the joint SDSS-{\it GALEX}-{\it Spitzer} sample analyzed by \citet{johnson06}. As such, it is difficult to interpret directly in terms of physical variables. Presumably, it is the galaxies' star formation history which varies along each line of constant UV attenuation shown in Figure \ref{d4000}, although the lines also implicitly account for the fact that the $D_n(4000)$ index is measured only within the central regions of each galaxy. In addition, the analysis of \citet{johnson06} assumes a very simple prescription for converting the FIR/UV ratio to an UV attenuation and does not take into account heating due to an older stellar population. In such cases, particularly relevant for red sequence galaxies with little or no recent star formation, the UV and FIR emission may become decoupled. Indeed, the residuals from the fit in equation (\ref{extinction_eqn}) are larger for red galaxies in the \citet{johnson06} sample.
Another important caveat to bear in mind when interpreting the lines of constant $A_{NUV}$ in Figure \ref{d4000} is the cuts imposed to define the sample used in determining the fit in equation (\ref{extinction_eqn}). The sample used by \citet{johnson06} was selected by requiring galaxies to be detected both in the UV by {\it GALEX} and at $24\micron$ by {\it Spitzer}. Thus, their sample will not include quiescent early type galaxies with no recent star formation. As can be seen in Figure \ref{d4000}, the typical attenuation predicted for red sequence galaxies is $A_{NUV} \sim 1.5$ mag. This is most likely an overestimate in many cases since many of these red sequence galaxies likely do not contain any residual star formation and are simply red due to their older stellar populations. Therefore, care must be taken in interpreting the UV attenuation for red sequence galaxies in our sample.
The $(NUV-r)_{0.1}$ vs. $M_{r,0.1}$ diagram corrected for dust using the dust-SFH-color relation is shown in Figure \ref{cmd_nuv_extcorr_vmax}. In this corrected diagram the ridge line of the blue sequence is shifted towards the blue compared to the uncorrected diagram while the width has become significantly narrower, particularly at the faint end. In contrast to the Balmer dust corrected CMD, a trend of color with absolute magnitude remains. Whereas the ridge line of the blue sequence in the uncorrected CMD increases from about $(NUV-r)_{0.1}\approx 1.8$ to 3 with increasing luminosity, the blue sequence in the dust corrected diagram increases from $(NUV-r)_{0.1}\approx 1.3$ to $2.2$. Thus, roughly a quarter of the change in color of the blue sequence with luminosity is due to dust. The red sequence in the corrected diagram also shifts to the blue but has a wider spread in color compared to the uncorrected CMD. As we have already argued, it is likely that the attenuation estimated for part of the red sequence using equation (\ref{extinction_eqn}) has been overestimated. In addition, a significant number of galaxies with intermediate colors remains.
Despite this caveat for non-star-forming galaxies, we prefer the relations computed by \citet{johnson06} as the best measure of the UV attenuation for our sample because this method is based upon the total FIR/UV ratio, which is the most direct measure of the UV light absorbed \citep[e.g.][]{gordon00}. Deriving an UV attenuation from the Balmer lines requires assuming an attenuation law to convert from the extinction in the Balmer lines to that in the UV in addition to the uncertainty associated with only having measurements in the central $3 \arcsec$ of each galaxy. Although \citet{calzetti94} showed that $E_{stars}(B-V) = 0.44 E_{gas}(B-V)$ for starburst galaxies, it remains unclear whether this relation pertains to more ``normal'' galaxies.
Besides attenuation due to dust and the star formation history, the colors of galaxies can also be affected by the metallicity. We have estimated the effect of metallicity on the colors of star-forming galaxies using the models of \citet{bruzual03}. For galaxies at an age of 12.6 Gyr, no extinction, and exponentially declining star formation histories with time constant $\gamma < 4$ Gyr$^{-1}$, the $(NUV-r)$ color varies by 0.2 mag for metallicities between 0.4 and 2.5 times the solar abundance. For such currently star-forming galaxies, the $(NUV-r)$ color becomes bluer with increasing metallicity. This somewhat counterintuitive result is due to the variation in the ratio of blue to red supergiant stars as a function of metallicity \citep{bruzual03}. Since metallicity tends to increase with galaxy luminosity \citep[e.g.,][]{tremonti04}, the increase in color along the blue sequence would be expected to be decreased somewhat due to this effect. Thus, metallicity variation does not account for the remaining change in color along the blue sequence after correcting for dust attenuation.
\subsection{Specific Star Formation Rates as a Function of Stellar Mass}
Although we do not have access to the detailed star formation history of each of the galaxies in our sample, we can try to place some basic constraints on their evolution. One of the most basic parameters that we may hope to use to constrain the star formation history is the specific star formation rate, or the star formation rate per stellar mass $SFR/M^*$. We have made estimates of specific star formation rates for our sample as follows. For the stellar masses, we have made use of the estimates already included as a part of the MPA/JHU values-added catalogs. As described in detail by \citet{kauffmann03a}, the stellar masses were determined from the $z$-band luminosities with the mass to light ratio constrained by their model fits to the $D_n(4000)$ and $H\delta_A$ spectroscopic indices. The masses assume a \citet{kroupa01} stellar initial mass function (IMF). We have estimated a star formation rate SFR in M$_{\sun}$ yr$^{-1}$ for each galaxy using its dust corrected $NUV$ luminosity. We have relied upon the relation from \citet{kennicutt98} converted to the \citet{kroupa01} IMF: $SFR = 1.0\times10^{-28} L_{\nu,NUV}$, where $L_{\nu,NUV}$ is the $NUV$ luminosity in units of ergs s$^{-1}$ Hz$^{-1}$ and SFR is the star formation rate in $M_{\odot}$ yr$^{-1}$.
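Schematically, the specific SFR for each galaxy is then computed as follows (a sketch with names of our own choosing; L_nu_nuv is the rest-frame $NUV$ luminosity in ergs s$^{-1}$ Hz$^{-1}$ and A_nuv the adopted attenuation):
\begin{verbatim}
def specific_sfr(L_nu_nuv, A_nuv, M_stellar):
    """SFR / M_* in yr^-1, with the SFR from the dust-corrected NUV
    luminosity via SFR = 1.0e-28 L_nu (Kennicutt 1998, Kroupa IMF)."""
    L_corr = L_nu_nuv * 10.0 ** (0.4 * A_nuv)   # undo the attenuation
    sfr = 1.0e-28 * L_corr                      # M_sun per yr
    return sfr / M_stellar
\end{verbatim}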
An alternative method for calculating stellar masses was presented by \citet{bell03}. Based upon fits of models to galaxies with optical and near-IR photometry, \citet{bell03} determined relations between the stellar mass-to-light ratio $(M_{*}/L)$ and color for various bands. Using their relation between $(M_{*}/L)_r$ and $(g-r)$, we have compared the resulting stellar masses with those from \citet{kauffmann03a}. The stellar masses agree on average for the most massive galaxies while those for lower mass galaxies are larger using the \citet{bell03} relations. The difference in mass increases steadily below $10^{10}$ M$_{\sun}$, reaching a difference of $\approx 0.4$ dex on average at $10^8$ M$_{\sun}$. Throughout the remainder of this discussion, we assume the stellar masses from \citet{kauffmann03a} but note that the specific SFRs for the lowest mass galaxies are likely more uncertain since the origin of this difference between the two mass calibrations is not clear.
The number density of galaxies determined using the $V_{max}$ weighting is shown in Figures \ref{ssfr_balmer} and \ref{ssfr_johnson} for the two different dust corrections discussed in \S3.5. In Figure \ref{ssfr_balmer}, the SFR was calculated from the $NUV$ luminosity corrected for dust using the Balmer decrement, whereas the SFR used in Figure \ref{ssfr_johnson} was determined using the dust-SFH-color relation of \citet{johnson06}. The overall trends are similar in both diagrams. The specific SFR is correlated strongly with the stellar mass, as previous studies have shown \citep[e.g.][]{brinchmann04}. The specific SFR decreases gradually with increasing stellar mass. Above a stellar mass of $\sim10^{10.5}$ M$_{\sun}$, galaxies with very low specific SFRs begin to dominate, although they are present in smaller numbers well below this mass. This is partially a selection effect since our sample would be biased against low mass galaxies with low specific SFRs. Indeed, the lower limit in specific SFR in Figures \ref{ssfr_balmer} and \ref{ssfr_johnson} increases with decreasing mass due to the UV selection limit of our sample.
There are some notable differences between the specific SFRs in Figures \ref{ssfr_balmer} and \ref{ssfr_johnson}. The width of the blue sequence is substantially broader when using the Balmer decrement dust correction. The peak specific SFR in both diagrams is $\approx 10^{-10.3}$ yr$^{-1}$ at the high mass end of the blue sequence at $10^{11}$ M$_{\sun}$. The specific SFR increases somewhat less with decreasing mass for the Balmer decrement dust correction than for the \citet{johnson06} dust correction. At a stellar mass of $10^{8.5}$ M$_{\sun}$, the peak specific SFR is $\approx 10^{-9.6}$ yr$^{-1}$ for the Balmer decrement dust correction and $\approx 10^{-9.3}$ yr$^{-1}$ with the \citet{johnson06} dust correction.
There are larger differences in the specific SFRs derived from the two dust corrections for red sequence galaxies. The specific SFR is a factor of $\sim 10$ lower for the Balmer decrement dust correction. Determining specific SFRs for red sequence galaxies is problematic in both methods. The Balmer line dust correction underestimates the amount of dust absorption in cases where most of the star formation is taking place in the outer parts of a galaxy that are not sampled by the SDSS fiber spectra. On the other hand, the \citet{johnson06} method overestimates the dust absorption in some red sequence galaxies. In particular, dust heating by older stellar populations or AGN can contribute to the FIR luminosity to a higher degree in red sequence galaxies. In addition, the galaxies used to derive the correlations in \citet{johnson06} were all detected at $24\micron$, which would bias the sample against truly quiescent galaxies with very little dust. Regardless of the dust correction applied, SFRs derived from the $NUV$ luminosity for red sequence galaxies are problematic because the $NUV$ band can include light from older stellar populations \citep{yi05, rich05}. In those cases, the $NUV$ luminosity will overestimate the recent SFR. For these reasons, caution must be used when interpreting the specific SFRs for red sequence galaxies. Figures \ref{ssfr_balmer} and \ref{ssfr_johnson} are most useful for investigating the star formation histories of blue sequence galaxies. As we have argued in the previous section, we prefer the \citet{johnson06} dust correction and thus restrict our discussion to Figure \ref{ssfr_johnson} throughout the remainder of the paper.
With a few additional assumptions, the specific SFR can be related to the ratio of the present to past average SFR as $b = SFR/\langle SFR \rangle = (SFR/M^*) T R$, where $T$ is the age of the galaxy and $R$ is the fraction of the mass formed over the galaxy's lifetime that does not eventually get returned to the ISM or IGM \citep{brinchmann04}. A typical value of $R$ is $\approx 0.5$ \citep{brinchmann04}. Assuming that all galaxies started forming stars shortly after the Big Bang, we set $T=13$ Gyr. For these choices of $T$ and $R$, a galaxy with $b=1$, i.e. a constant SFR, would have a specific SFR of $1/(TR) = 1/(0.5 \times 13\times10^{9}~{\rm yr}) \approx 1.5\times10^{-10}$ yr$^{-1}$, or $10^{-9.8}$ yr$^{-1}$. Thus galaxies with specific SFRs larger than this value would have current SFRs above their average while those with specific SFRs $<10^{-9.8}$ yr$^{-1}$ would have current SFRs below their lifetime average. The right hand y-axes of Figures \ref{ssfr_balmer} and \ref{ssfr_johnson} give the corresponding values of $\log{(b)}$. Under these assumptions, blue sequence galaxies with $M_* < 10^{10}$ M$_{\sun}$ would have star formation histories increasing moderately with time, with more massive blue sequence galaxies having formed their stars at a somewhat declining rate with time. As noted earlier when discussing the CMD itself, galaxies undergoing extreme starbursts, i.e. objects with $\log{(b)} > 1$, are rare in the local universe and do not contribute significantly to the SFR density.
Quantitatively, the specific star formation rates estimated here are somewhat larger than those determined by \citet{brinchmann04} from their analysis of the SDSS emission lines, especially for lower mass galaxies. The origin of this difference is not clear. The SFRs estimated by \citet{brinchmann04} were derived largely from the emission lines coming from \ion{H}{2} regions within the SDSS fiber. Since stars capable of producing an \ion{H}{2} region are very massive stars with lifetimes up to $\sim 10^7$ yr, compared to stars with lifetimes up to $\sim 10^8$ yr that dominate the UV emission, differences between UV and optical emission line based SFRs could be due to short time scale variations in the SFRs of galaxies. This could in principle be more important for lower mass galaxies where stochastic fluctuations in the SFR would be expected as a fraction of the total mass. In addition to this fundamental difference, part of the difference could be due to uncertainties in our method for correcting for dust attenuation or to uncertainties in the aperture corrections applied to the SDSS measurements. A more detailed comparison of the SFRs from \citet{brinchmann04} and {\it GALEX} UV determinations is presented elsewhere \citep{treyer07}.
\section{Discussion}
Due to the greater sensitivity of the UV minus optical colors to very low specific star formation rates as compared to optical colors, our galaxy color-magnitude diagrams have revealed a few new features. Similar to optical results, our CMD exhibits prominent blue and red sequences. However, whereas in the $(u-r)$ CMD \citep[e.g.,][]{baldry04}, the blue sequence appears to merge with the red sequence at the luminous end, in the $(NUV-r)_{0.1}$ CMD, the blue sequence remains as a separate peak in the color distribution up to $M_{r,0.1} \simeq -23$. Also in the optical CMDs \citep{baldry04, balogh04}, the color distribution in each absolute magnitude bin is well fit by the sum of two Gaussians. This color bimodality naturally leads to an interpretation in which there are simply two populations of galaxies with nothing in between. As we have shown in Figure \ref{cmd_nuv_colordist}, the color distributions as a function of $M_{r,0.1}$ are not well fit by a double Gaussian function due to an excess of galaxies in between the red and blue peaks. These galaxies in the ``green valley'' between the red and blue peaks would suggest that the properties of galaxies are not strictly bimodal and that there really exists more of a continuum of properties between star-forming and quiescent galaxies.
Besides the colors, we have also shown that the $r_{0.1}$-band luminosity function shape varies systematically with color. While there is a steady increase in the faint end power law exponent $\alpha$ with color across the blue peak, the slope levels off in the "green valley." In addition, the decrease in the value of the characteristic magnitude $M^{*}_{r,0.1}$ appears to level off at intermediate colors as well. This change in the luminosity function shape at intermediate colors would also suggest that the "green valley" galaxies coincide with a physical difference in the galaxy populations.
Interpreting the change in the peak color of each sequence with luminosity is complicated by the fact that a galaxy's color is affected not only by its star formation history, but also its metallicity and dust content. For morphologically selected early type galaxies, there is little dust and the change in color is due to some combination of metallicity and age of the stellar populations. While the red sequence, as defined by the CMD, contains galaxies with a range of morphologies, the dominant effects are still likely the age and metallicity, although a significant fraction of early-type galaxies appear to have some residual star formation \citep{yi05}.
On the other hand, for blue sequence galaxies, we know that their colors are affected both by dust attenuation and by their SFH, with metallicity playing a minor role. We have attempted to determine the effects of dust in two ways. Using the Balmer decrement as observed within the SDSS fiber spectra in conjunction with the \citet{calzetti00} attenuation law, we found that the trend of the peak color with luminosity disappears, which would suggest that the variation of the average color along the blue sequence is due mostly to dust, with the SFH primarily responsible for the dispersion in color at fixed luminosity.
Our second method to correct for dust reddening and attenuation relies upon the empirical correlations derived by \citet{johnson06}. Applying these fits to our entire sample, we found that only about one quarter of the change in color along the blue sequence is due to dust. In this case the dispersion of the blue sequence is small and there is a significant variation in the average color with luminosity, indicating a decrease in the average galaxy age with decreasing luminosity. Due to the requirement that the galaxies used to calibrate these relations must be detected in both the UV and at $24 \micron$, there is an inherent bias against truly quiescent galaxies with little star formation and dust. Thus, the attenuation for the red galaxies is likely overestimated. Nevertheless, we argued in \S3.5 why we prefer this dust correction.
While the detailed morphology of the CMD depends upon which dust correction is assumed, there remain galaxies in the "green valley" between the two sequences in both versions of the dust-corrected CMD. While there are indeed some galaxies with intermediate colors that are simply very reddened versions of blue sequence galaxies, many do have SFHs weighted to older ages. We would expect that as time progresses, the red sequence is building up as galaxies exhaust their gas and cease forming stars. With the data presented here we may be beginning to separate out the population undergoing this transition. Of course, it is important to note that just based upon the integrated measurements here it is not possible to infer the evolution of different structures within each galaxy. In fact, galaxies with bulges would be expected to lie at intermediate colors, where the SFH of the bulge may be quite different than that of the disk. In these cases, the integrated color would reflect the relative strengths of the bulge and disk. In addition, it is not possible just based upon the simple diagnostics employed in this paper to infer whether galaxies in the green valley are star-forming galaxies transitioning to the red sequence for the first time or whether they were already on the red sequence and underwent a burst of star formation after merging with a gas rich galaxy. The rate at which galaxies are moving from the blue to the red sequence is explored in more detail in \citet{martin07}.
A remarkable feature of the specific SFR as a function of stellar mass in Figure \ref{ssfr_johnson} is the relatively small spread of a factor of $2-3$ in $SFR/M_*$ at a given stellar mass. It is difficult for galaxies to have a large $SFR/M_*$ ratio for an extended period of time due to exhausting the available gas supply as well as feedback from supernovae which tends to heat the gas and prevent it from forming stars. On the other hand, the energy input from supernovae can help maintain lower levels of star formation over an extended period of time as long as the halo is massive enough to retain the heated gas.
The behavior seen in Figure \ref{ssfr_johnson} is broadly consistent with the theoretical models of \citet{cattaneo06} which succeeded in reproducing the distribution of galaxies in the optical $(u-r)$ CMD. In their models, galaxies below a critical halo mass of $10^{12}$ M$_{\sun}$ accrete gas from cold streams. This gas fuels star formation that is self-regulated due to feedback from supernovae and stellar winds. These lower mass galaxies gradually gain in mass, move up in luminosity along the blue sequence, and become redder. The gradual decrease in the specific star formation rate with stellar mass seen in Figure \ref{ssfr_johnson} would be consistent with the predictions of this model. The gradual transition to the dominance of red galaxies above a stellar mass of $\sim 10^{10.5}$ M$_{\sun}$ would correspond in this picture to the critical halo mass of $10^{12}$ M$_{\sun}$ above which star formation is quenched due to shock heating of the gas and feedback from AGN. In their model lower mass galaxies can also have their star formation quenched if they happen to become satellites within a halo above the critical mass. This tends to populate galaxies along the fainter end of the red sequence and serves to broaden somewhat the mass range over which the transition from star-forming to quiescent galaxies occurs. The most luminous blue galaxies help constrain the value of the critical halo mass $M_{shock}$ above which gas is not able to cool and form stars. While \citet{cattaneo06} have used the $(u-r)$ CMD to constrain the value of $M_{shock}$, our observations of blue sequence galaxies at slightly larger luminosities may correspond to somewhat larger values for $M_{shock}$.
In their analysis of the evolution of galaxies in the $(U-B)$ CMD out to $z=1$, \citet{faber06} discussed the origin of red sequence galaxies and how blue sequence galaxies transition from blue to red. In their favored scenario, galaxies first grow moderately in mass along the blue sequence, then undergo a merger which halts further star formation, and finally undergo a series of gas-free mergers that move the galaxies up the red sequence. Such a scenario is broadly consistent with the results presented here. The excess of galaxies in between the two sequences would in this case be consistent with galaxies in the process of turning off their star formation and making the transition to the red sequence. As can be seen in Figures \ref{cmd_nuv_lfs} and \ref{cmd_nuv_lfs_param}, the $M_{r,0.1}$ luminosity functions for the bluest galaxies have a steeper slope and fainter $M^*$ than for galaxies in between the two sequences. If large numbers of lower mass galaxies were stopping star formation and moving towards the red sequence, we would expect to see a steeper faint end slope to the luminosity function for galaxies with intermediate and red colors. Thus, the luminosity functions are more consistent with a scenario in which the ancestors of galaxies on the red sequence are weighted towards the luminous end of the blue sequence. The bright end cut-off of the $r$-band luminosity functions is remarkably similar for galaxies with $2 < (NUV-r)_{0.1} < 5$ while there appears to be a relatively sharp jump in the bright end at $(NUV - r)_{0.1} \sim 5$. Thus, the most massive galaxies are unlikely to be the descendants of the merging of galaxies at the luminous end of the current blue sequence. Either these galaxies are the descendants of much more massive blue sequence galaxies that are not present in the nearby universe, or they are the result of "dry" mergers of smaller mass red sequence galaxies, as \citet{faber06} argued.
\section{Summary}
We have determined the volume density of galaxies in the local universe as a function of absolute magnitude $M_{r,0.1}$ and $(NUV-r)_{0.1}$ and $(FUV-r)_{0.1}$ colors based upon a sample of galaxies observed in the UV by {\it GALEX} and with optical data from the SDSS. The galaxies in these CMDs separate into well-defined blue and red sequences that become redder with increasing luminosity. While the most luminous galaxies are on the red sequence, a separate blue peak is detectable as bright as $M_{r,0.1} \approx -23$. In contrast to CMDs relying solely on an optical color such as $(u-r)$ \citep{baldry04}, the color distribution at fixed absolute magnitude is not well fit by the sum of two Gaussians due to an excess of objects at intermediate colors between the blue and red peaks. The greater separation between the blue and red sequences is a consequence of the greater sensitivity of the UV bands to very low levels of recent star formation. The $r_{0.1}$-band luminosity function shape varies systematically with color, with the faint end slope $\alpha$ gradually increasing across the blue sequence, reaching a value of $\alpha \sim -0.6$ at intermediate colors before increasing even more for the reddest galaxies. We have used these fits to the luminosity functions to derive the fraction of the luminosity density in the local universe as a function of color. Dust-free starburst galaxies with colors $(NUV-r)_{0.1}<1$ are rare in the local universe and account for only about 5\% of the $NUV_{0.1}$ luminosity density. About 80\% of the $NUV_{0.1}$ luminosity density is emitted by blue sequence galaxies with colors $1 < (NUV-r)_{0.1} < 3$.
We have used both the Balmer decrement and the dust-SFH-color relation of \citet{johnson06} to estimate the effect of dust on the galaxy colors and absolute magnitudes. For the Balmer decrement method, the increase in color with luminosity along the blue sequence is due entirely to dust with the dispersion at fixed absolute magnitude relatively unchanged. On the other hand, the blue sequence color in the CMD corrected for dust using the \citet{johnson06} method does still increase with luminosity, indicating that part of this change in color is due to the star formation history and not to dust alone. We argue that we prefer the \citet{johnson06} method as it is ultimately based upon an attenuation derived from the FIR/UV ratio. Regardless of which dust correction we employ, however, a significant number of galaxies remain at colors in between the two sequences, indicating that not all of the galaxies there are simply dusty versions of blue sequence star-forming galaxies.
We have used the $NUV_{0.1}$ luminosities corrected for dust using the \citet{johnson06} method in conjunction with the stellar masses determined by \citet{kauffmann03a} to plot the density of galaxies as a function of specific star formation rate $SFR/M_*$ and stellar mass $M_*$. The dispersion in $SFR/M_*$ is only a factor of $2-3$ at a fixed stellar mass along the blue sequence. The value of $SFR/M_*$ decreases from $\approx10^{-9.3}$ yr$^{-1}$ at $M_*=10^{8.5}$ M$_{\sun}$ to $\approx 10^{-10.3}$ yr$^{-1}$ near the tip of the blue sequence at $M_* = 10^{11}$ M$_{\sun}$. Similar to previous optical results \citep{kauffmann03a, kauffmann03b}, galaxies with low specific star formation rates begin to dominate above a stellar mass of about $10^{10.5}$ M$_{\sun}$.
In addition to the small number of galaxy properties explored here, many other measurements, mainly from the SDSS, are available for the galaxies in our sample. In a companion paper in this volume, \citet{martin07} have estimated the mass flux of galaxies from the blue to the red sequence and have discussed some of the other properties of the galaxies in between the red and blue sequences. In another paper, \citet{schiminovich07} have investigated the correlation of morphology and other characteristics with position in the CMD. While detecting red sequence galaxies out to significant distances in the rest-frame UV is very difficult, it should be possible to use data from {\it GALEX} deep exposures in conjunction with ground-based photometry and spectra to investigate the evolution of the blue sequence with redshift. In addition, the variation of the CMD with local galaxy density should provide interesting constraints on the nature of the galaxies in between the blue and red peaks as well as on models of the physical processes affecting the evolution of galaxies in the CMD.
\acknowledgments
{\it GALEX} (Galaxy Evolution Explorer) is a NASA Small Explorer, launched in April 2003.
We gratefully acknowledge NASA's support for construction, operation,
and science analysis for the {\it GALEX} mission,
developed in cooperation with the Centre National d'Etudes Spatiales
of France and the Korean Ministry of Science and Technology.
{\it Facilities:} \facility{GALEX}
|
1,116,691,500,386 | arxiv | \section{Introduction}
With the emergence of deep neural networks~\cite{alexnet,vgg,resnet}, object detection built on deep networks has achieved significant progress in both detection accuracy~\cite{fastrcnn,RFCN,cornernet} and detection efficiency~\cite{YOLO,YOLOV3}. Benefiting from a favorable trade-off between real-time efficiency and accurate detection, single-shot detectors~\cite{SSD} have gained increasing popularity for various computer vision applications. Despite this success, complex scale variations in practical scenes remain a fundamental challenge and a bottleneck for accurate object detection~\cite{SNIP,SNIPER,SAN}.
\begin{figure}[t]
\centering
\small
\includegraphics[width=8.0cm]{Figure1motivation_single.pdf}
\caption{Two common detection problems for the baseline SSD~\cite{SSD} and the solution using our NETNet. The visualized features are extracted from the first pyramid layer for detecting small objects. (I) False negative problem. The small objects (tennis racket, sports ball) are missed in (a) because the features of small objects are not salient on the corresponding pyramid features (b). Our NETNet can detect small objects with high confidence by erasing the features of large objects and focusing on small objects, as in (c, d). (II) Part false positive problem. The head is detected as another person in the baseline because this part region is highlighted on the features (f) used for detecting small objects. Our NETNet can solve this problem by suppressing the salient part features of large objects, as in (h).}
\label{Problem}
\end{figure}
To tackle complex scale variations, the single-shot detector SSD~\cite{SSD} has been proposed and developed based on pyramid feature representation. SSD implements scale-aware object detection by detecting different-sized objects within different layers of the pyramid, motivated by the fact that deep-layer features with small resolution contain more semantic information for large objects, while the features for small objects are found in the shallow layers with large resolution~\cite{RONet,SSDES}. Specifically, shallow layers are responsible for detecting small objects and deep layers are devoted to detecting large objects. Building on the feature pyramid, some methods further enhance the feature representation by fusing multi-scale features with an extra feature pyramid, which has proven useful~\cite{Hypernet,FPN, RetinaNet, Reconfig} for improving detection performance. Although single-shot detectors have made great progress toward real-time detection and improved accuracy by adopting a feature pyramid, several failure cases, such as missing small objects and poor localization~\cite{AOL,CQSSD}, still exist and limit detection performance.
In most previous single-shot detectors, features are scale-confused rather than scale-aware, even within one specific pyramid layer. For example, some shallow layers of a feature pyramid contain features for both small and large objects. As shown in Fig.~\ref{Problem}, in the shallow features (b) used for detecting small objects, the large-object features dominate the saliency, weakening the small-object features and thus preventing the detection of small objects (\eg, the sports ball in (a) is not detected in the final result). Additionally, some parts of large objects have strong response regions on shallow features. For example, the head region in Fig.~\ref{Problem}(e) is highlighted in (f), which leads to the head region being wrongly detected. The features are thus scale-confused, making it difficult to solve these two problems, \ie, the false negative problem and the part false positive problem.
With this observation, we propose to generate scale-aware features for better single-shot object detection. To achieve this, redundant features are erased to alleviate feature scale-confusion. Thus, we only keep features of small objects in the shallow layers, erasing features of large objects. Then, we use these small-scale-aware features to detect small objects. As shown in Fig.~\ref{Problem}(d), most of the features of large objects are removed. The features of small objects are thus emphasized, enabling the small sports ball to be detected precisely. The salient features of large objects can also be suppressed to alleviate the part false positive problem, as shown in (h). Meanwhile, transferring these erased features to a suitable scale (\ie, large-scale) space could enhance the features of large objects and improve the overall detection accuracy.
The main contributions and characteristics of our method are listed as follows:
\begin{itemize}[leftmargin=*]
\item We propose a new Neighbor Erasing and Transferring (NET) mechanism to generate scale-aware features. NET mechanism efficiently reconfigures features between different pyramid layers to alleviate feature scale-confusion.
\item Two modules, the Neighbor Erasing Module (NEM) and the Neighbor Transferring Module (NTM), are designed to unmix the scale confusion and enhance feature aggregation, respectively. The NEM, which embeds a reversed gate-guided erasing procedure, extracts and erases the large-object features from the shallow layers. The large-object features are then transferred to the deep pyramid layers by the NTM to enhance the deep features.
\item Based on SSD, a modified single-shot network, NETNet, is constructed by simultaneously embedding the scale-aware features and the scale-aware prediction. In NETNet, we enrich the pyramid features by introducing a Nearest Neighbor Fusion Module (NNFM).
\item As a result, our NETNet is capable of achieving fast and accurate object detection with a better trade-off than previous single-shot detectors.
\end{itemize}
\begin{figure}
\centering
\small
\includegraphics[width=7.6cm]{Figure2Relatedwork.pdf}
\caption{Different detectors for object detection.}
\label{pyramid}
\end{figure}
\section{Related Work}
\noindent \textbf{Scale-agnostic detectors.} Most recent object detectors are built upon deep networks. The regions-with-CNN-features (R-CNN) methods~\cite{RCNN,fastrcnn} integrate a CNN into object detection and achieve promising performance. As a two-stage method, Faster R-CNN~\cite{fasterrcnn} proposes a lightweight network for generating proposals and constructs detection as a complete end-to-end network. Methods like YOLO~\cite{YOLO}, Fast R-CNN~\cite{fastrcnn}, R-FCN~\cite{RFCN}, and other variants~\cite{LHRCNN,Deformable,cascadercnn} have made significant progress in improving detection accuracy and efficiency. As shown in Fig.~\ref{pyramid}(a), these methods detect all objects of various scales by utilizing the deepest single-scale high-level features. Thus, they are scale-agnostic detectors.
\noindent \textbf{Scale-aware detectors.} Due to complex scale variations, many researchers have explored exploiting multi-scale features to improve object detection performance, as shown in Fig.~\ref{pyramid}(b). SSD~\cite{SSD} is a single-shot (\ie, single-stage) detector that makes scale-aware predictions based on multi-layer pyramid features: features in shallow layers are used for detecting small objects and features in deep layers for large objects. RFBNet~\cite{RFBNet} embeds multi-scale receptive fields to enhance feature discriminability. DES~\cite{SSDES} enriches the semantics of object features through a semantic segmentation branch and a global activation module. FPN~\cite{FPN}, DSSD~\cite{DSSD}, and RONet~\cite{RONet} involve extra top-down feature pyramids and detect objects on each scale of these pyramids, as shown in Fig.~\ref{pyramid}(c). Most recent methods~\cite{RetinaNet,PonopFPN,PFPN,STDN,M2DET} have explored the advantages of pyramid features and achieved promising results. Kong \etal~\cite{Reconfig} proposed to reconfigure the pyramid features by aggregating multi-layer features and reassigning them into different levels. The recent TridentNet~\cite{tridentnet} attempts to generate scale-specific features through a parallel multi-branch architecture, as shown in Fig.~\ref{pyramid}(b), by embedding different receptive fields, which achieves promising improvements on two-stage detectors.
Different from these methods, we propose to generate scale-aware features for single-shot object detection by introducing an erasing and transferring mechanism. The adversarial erasing strategy has also been investigated in weakly supervised object localization~\cite{eraseORM, eraseACL}, weakly supervised semantic segmentation~\cite{selferasing}, and salient object detection~\cite{reverse}. In these methods, the well-recognized regions are erased to refine the prediction results iteratively. In contrast, we reconfigure the pyramid features into scale-aware features by removing the scale-uncorrelated features with an erasing strategy. The erased features in shallow layers are further transferred to enhance the features in deep layers, instead of being discarded as in previous erasing methods. As shown in Fig.~\ref{pyramid}(d), we aim to remove the features of large objects from the shallow pyramid layers and generate small-scale-aware features for detecting small objects. The features of large objects in the shallow layers are transferred to enhance the features of the deep layers. We then build a single-shot scale-aware detector for more accurate object detection.
\begin{figure*}[t]
\centering
\small
\includegraphics[width=16.5cm]{Figure3NETM_v1.pdf}
\caption{The Neighbor Erasing and Transferring (NET) mechanism (a), with (b) Neighbor Erasing Module (NEM), and (c) Neighbor Transferring Module (NTM). After NETM, $\tilde{p}_s$ highlights small objects, and deep feature $\tilde{p}_{s+1}$ contains more information for larger objects.}
\label{NET_NEM_NTM}
\end{figure*}
\section{NET Mechanism} \label{sec:NET}
To tackle complex scale variations, we propose to generate scale-aware features for object detection. As can be observed from Fig.~\ref{Problem}(b) and (f), features in the shallow pyramid layers contain detailed information for both large objects and small objects. However, features for large objects are more salient than those for small objects, which causes small objects to be missed in Fig.~\ref{Problem}(a) and leads to the part false positive problem in Fig.~\ref{Problem}(e). Instead of promoting feature fusion as in previous top-down feature pyramids~\cite{FPN, DSSD}, we propose a NET mechanism to reconfigure the basic pyramid features into scale-aware features for scale-aware object detection. As shown in Fig.~\ref{NET_NEM_NTM}(a), the NET mechanism contains a feature erasing module (\ie, NEM) and a feature transferring module (\ie, NTM). The NEM is designed to remove large-object features from the shallow layers and emphasize the features of small objects. We then transfer the removed features using the NTM to enhance the deep features.
Because our method aims to reconfigure the scale-confused features of the basic pyramid to scale-aware features, we take the typical single-shot detector SSD~\cite{SSD} as our baseline in which a pyramid from the backbone network is adopted for multi-scale prediction.
We first analyze the feature pyramid in the baseline SSD. Then, we present the details of our NEM and NTM in the NET mechanism.
\subsection{Basic Feature Pyramid} \label{subsec:NET_pyramid}
In SSD, a feature pyramid is explored to detect objects of different scales. We denote the objects of the $s^{th}$ scale as $x_s$. The objects over all $S$ scales are represented as $X=\left\{x_1, x_2, ..., x_S\right\}$, where $x_1$ represents the objects with the smallest scale and $x_S$ refers to the objects with the largest scale.
SSD detects objects in a pyramidal hierarchy by exploiting multiple CNN layers, with each layer responsible for detecting objects of a specific scale~\cite{EFIP}.
In a feature pyramid with $S$ layers, we denote the features from the $s^{th}$ layer as $p_s$ and express all the pyramid features as $P=\left\{p_1, p_2, ..., p_S\right\}$, where $p_1$ represents the features with the largest resolution, from the shallowest pyramid layer, used for detecting small objects $x_1$. With feature pooling in the pyramid, the feature resolution decreases from $p_1$ to $p_S$, so features for small objects are gradually discarded from shallow to deep layers. Because of the small input image size (\eg, $300\times300$) of SSD, the deep layers (\eg, with spatial size $5\times5$) only contain features for large objects. Thus, we can approximately write:
\begin{equation}
p_s = f_s(x_s,x_{s+1},...,x_S),
\label{eqpy}
\end{equation}
where $f_s(x)$ represents the feature extraction of the pyramid. The feature scale-confusion in a shallow layer (\eg, $p_1$ contains features for objects of various scales) makes detecting small objects difficult and leads to many part detections, as shown in Fig.~\ref{Problem}. We propose to reconfigure the pyramid features into scale-aware features to solve these problems.
\subsection{Neighbor Erasing Module} \label{subsec:NET_NEM}
To alleviate feature scale-confusion, we propose a Neighbor Erasing Module (NEM) to filter out the redundant features.
Consider two adjacent pyramid layers, the $s^{th}$ layer and the $(s+1)^{th}$ layer. Obviously, the features in the $s^{th}$ layer $p_s=f_s(x_s,x_{s+1},...,x_S)\in \mathbb{R}^{{h_s}\times {w_s} \times {c_s}}$ carry more information about objects $x_{s}$ than the features in the $(s+1)^{th}$ layer $p_{s+1}=f_{s+1}(x_{s+1},...,x_S)\in \mathbb{R}^{{h_{s+1}}\times {w_{s+1}} \times {c_{s+1}}}$, where ($h_s> h_{s+1}$, $w_s>w_{s+1}$).
Based on this feature distribution, we can generate features $\tilde{p}_s = f_s(x_s)$ for objects with scale $s$ from the pyramid feature $p_s$, by erasing features $p_{es}=f_s(x_{s+1},...,x_S)$ of objects in a scale range of [$s+1$, $S$] as:
\begin{equation}
\tilde{p}_s= p_s\ominus p_{es} =f_s(x_s,...,x_S) \ominus f_s(x_{s+1},...,x_S),
\label{eqerasing}
\end{equation}
with an element-wise subtraction operation $\ominus$.
Noticing that pyramid feature $p_{s+1}$ only contains information for objects with a scale range of [${s+1}, S$], we therefore use $p_{s+1}$ to guide the feature erasing in Eq.~\ref{eqerasing}. Specifically, we extract the feature $p_{es}$ from $p_s$ by:
\begin{equation}
p_{es} = p_s \odot \mathcal{F}_{{s+1}\to s}(p_{s+1}),
\label{eqattentionmal}
\end{equation}
where $\odot$ refers to the Hadamard product. $\mathcal{F}_{{s+1}\to s}(p_{s+1})$ can be represented as a soft spatial gate $g_{s+1}^s\in [0,1]^{h_s\times w_s\times c}$ ($c$ is from $\left\{1,c_s\right\}$). We generate this gate from the features of the $(s+1)^{th}$ pyramid layer and adopt it to guide the suppression of the features of objects $(x_{s+1},...,x_S)$ in $p_s$. In our implementation, we calculate this spatial gate as:
\begin{equation}
g_{s+1}^s = \mathcal{F}_{{s+1}\to s}(p_{s+1}) = \frac{1}{1+e^{-\mathcal{G}(\mathcal{U}(p_{s+1});W_{s+1}^{s})}},
\label{eqattention}
\end{equation}
where $\mathcal{U}(p_{s+1})$ upsamples $p_{s+1}$ to $p_{s+1}^s\in \mathbb{R}^{{h_{s}}\times {w_{s}} \times {c_{s+1}}}$ to keep the consistent spatial resolution between the gate $g_{s+1}^s$ and feature $p_s$. We implement the gate function $\mathcal{G}(.)$ with learnable weights $W_{s+1}^s$.
In fact, since $\mathcal{G}(.)$ can be represented as a self-attention function~\cite{nonlocal} in which attention for objects is extracted from the input features, we can construct it based on the spatial attention mechanisms in \cite{nonlocal} and \cite{DANet}. Alternatively, we can use max pooling or average pooling along the channel direction to generate a spatial attention map ($c=1$) as in~\cite{CBAM}:
\begin{equation}
\mathcal{G}(p_{s+1}^s) = \mathcal{P}_{max}(p_{s+1}^s)~~\text{or}~~\mathcal{P}_{avg}(p_{s+1}^s),
\label{poolingattention}
\end{equation}
or combining max pooling $\mathcal{P}_{max}(.)$ and average pooling $\mathcal{P}_{avg}(.)$ by a convolution layer with ${W_{s+1}^{s}}$. In our implementation, we use a $1\times 1 \times c_s$ convolution layer $\mathcal{C}_{1\times 1}$ as:
\begin{equation}
\mathcal{G}(p_{s+1}^s) = \mathcal{C}_{1\times 1}(p_{s+1}^s;W_{s+1}^{s}),
\label{eqconvattention}
\end{equation}
to generate a channel-wise spatial gate for extracting and suppressing the features of larger objects in $p_s$, since it provides an optimal trade-off between precision and efficiency, as shown in Sec.~\ref{subsec:EXP_abl}.
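For concreteness, the gate generation can be sketched in PyTorch (the framework used in our experiments). This is an illustrative sketch under our own module and variable names, not a released implementation:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGate(nn.Module):
    # Builds the soft gate g_{s+1}^{s} from the deeper feature p_{s+1};
    # `mode` selects among the variants discussed above.
    def __init__(self, c_deep, c_shallow, mode='conv'):
        super().__init__()
        self.mode = mode
        if mode == 'conv':  # channel-wise gate with c = c_s channels
            self.proj = nn.Conv2d(c_deep, c_shallow, kernel_size=1)

    def forward(self, p_deep, shallow_size):
        # U(p_{s+1}): upsample to the spatial size of p_s
        x = F.interpolate(p_deep, size=shallow_size,
                          mode='bilinear', align_corners=False)
        if self.mode == 'max':    # single-channel gate (c = 1)
            x = x.max(dim=1, keepdim=True)[0]
        elif self.mode == 'avg':  # single-channel gate (c = 1)
            x = x.mean(dim=1, keepdim=True)
        else:                     # learned 1x1 convolution (our choice)
            x = self.proj(x)
        return torch.sigmoid(x)   # soft gate with values in [0, 1]
\end{verbatim}
The \texttt{'conv'} branch corresponds to the channel-wise gate of Eq.~\ref{eqconvattention}, while \texttt{'max'} and \texttt{'avg'} correspond to the single-channel alternatives above.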
In summary, we generate the scale-aware features $\tilde{p}_s$ for smaller objects $x_s$ by suppressing the features of larger objects via a reversed gate as:
\begin{equation}
\tilde{p}_s = f_s(x_s) = p_s \ominus p_{es} = p_s \ominus (p_s\odot g_{s+1}^{s}).
\label{eqsummary}
\end{equation}
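Given such a gate, the erasing step of Eq.~\ref{eqsummary} reduces to two element-wise operations. The following sketch (our naming) also returns $p_{es}$, which is kept for later transfer rather than discarded:
\begin{verbatim}
def neighbor_erase(p_s, gate):
    p_es = p_s * gate        # p_s (Hadamard) g_{s+1}^{s}: larger-object features
    p_tilde_s = p_s - p_es   # erase them from p_s -> scale-aware features
    return p_tilde_s, p_es   # p_es is transferred to a deeper layer by the NTM
\end{verbatim}
Here \texttt{gate} is produced from the (skipped) deeper pyramid layer, \eg, from $p_3$ when erasing in $p_1$.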
\begin{figure*}[t]
\centering
\footnotesize
\begin{center}
\includegraphics[width=15cm]{Figure4Netnet.pdf}
\end{center}
\caption{The proposed NETNet architecture. (a) The main pyramid parts of NETNet. To implement fast object detection, we build a single-shot network based on SSD~\cite{SSD}. We illustrate this architecture by taking the input image with a size of $300\times 300$ as an example. Six pyramid layers are used for building detectors, as in SSD. The embedded NNFM (b) is used for feature fusion before NETM.}
\label{net2}
\end{figure*}
\subsection{Neighbor Transferring Module}
\label{subsec:NET_NTM}
As discussed above, the pyramid feature $p_s$ also contains some detailed information (\eg, appearance and edges) about the objects $\left\{x_{s+1},x_{s+2},...,x_S\right\}$. Although this detailed information disturbs the features for detecting smaller objects $x_s$, it is helpful for enhancing the features of larger objects $x_n$ $(n>s)$ for more accurate classification and localization. Therefore, we propose to transfer these features from a shallow layer (\eg, $p_s$) to a deep layer (\eg, $p_{s+1}$).
As formulated in Section~\ref{subsec:NET_NEM}, the soft spatial gate $g_{s+1}^s\in [0,1]^{h_s\times w_s\times c}$ generated by $p_{s+1}$ has larger activation values on the regions for objects $\left\{x_{s+1},...,x_S\right\}$. Thus, $p_{es}$ in Eq.~\ref{eqattentionmal} helps extract the detailed information of these larger objects.
We then transfer this detailed information $p_{es}$ and obtain the new pyramid features $\tilde{p}_{s+1}\in \mathbb{R}^{{h_{s+1}}\times {w_{s+1}} \times {c_{s+1}}}$ as:
\begin{equation}
\begin{aligned}
\tilde{p}_{s+1} &= \mathcal{T}_{s\to {s+1}}(p_{es},p_{s+1})\\
&=\mathcal{C}_{1\times 1}(\mathcal{D}(p_{es});W_s^{s+1})\oplus p_{s+1},
\end{aligned}
\label{eqntm}
\end{equation}
composed of a downsampling operation $\mathcal{D}(.)$ to match the feature resolution and a convolutional layer $\mathcal{C}_{1\times 1}$ with learnable $W_s^{s+1} \in \mathbb{R}^{1\times 1\times {c_{s}} \times {c_{s+1}}}$ to maintain the consistent channel number. We perform an element-wise sum operation $\oplus$ to enhance $p_{s+1}$ by combining the detailed information from $p_{es}$.
We illustrate this Neighbor Transferring Module (NTM) in Fig.~\ref{NET_NEM_NTM}(c). The enhanced feature $\tilde{p}_{s+1}$ is used as the new pyramid feature for the subsequent scale-aware features generation and scale-aware object detection.
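A matching sketch of the NTM is given below (imports as in the earlier sketch). The text leaves $\mathcal{D}(.)$ as a generic downsampling operator, so the adaptive max pooling here is our assumption, as is the module naming:
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class NeighborTransfer(nn.Module):
    # Transfers the erased details p_es into the deeper feature p_{s+1}.
    def __init__(self, c_shallow, c_deep):
        super().__init__()
        # 1x1 convolution W_s^{s+1} to match channel numbers
        self.proj = nn.Conv2d(c_shallow, c_deep, kernel_size=1)

    def forward(self, p_es, p_deep):
        # D(.): downsample p_es to the resolution of p_{s+1}
        # (adaptive max pooling assumed; the text leaves this generic)
        x = F.adaptive_max_pool2d(p_es, p_deep.shape[-2:])
        return p_deep + self.proj(x)  # element-wise sum
\end{verbatim}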
\section{Single-Shot Detector: NETNet} \label{sec:NETNet}
Single-shot object detectors like SSD~\cite{SSD} directly carry out regression and classification based on predefined anchors. This gives SSD a better trade-off for achieving real-time detection with promising performance. However, SSD performs poorly at detecting small objects and also suffers from inaccurate localization (\eg, the part detection problem), as shown in Fig.~\ref{Problem}. To solve these problems, we design a new single-shot object detection network, called NETNet, which embeds the proposed NET mechanism as a scale-aware detector.
In NETNet, we build our backbone network as in SSD. Taking the network with an input image size of 300$\times$300 as an example, we show the main architecture of NETNet in Fig.~\ref{net2}(a). Features of six pyramid levels $\left\{p_1,p_2,p_3,p_4,p_5,p_6\right\}$ with resolutions $\left\{\right.$38$\times$38, 19$\times$19, 10$\times$10, 5$\times$5, 3$\times$3, 1$\times$1$\left.\right\}$ are extracted from the backbone as the basic feature pyramid. Based on the basic pyramid, we construct our NET Module (NETM) to generate scale-aware features and solve the aforementioned scale problems. In the implementation, there are some scale-overlaps~\cite{PANet, couplenet} between the nearest neighboring pyramid levels (\eg, $p_1$ and $p_2$) when configuring the detection anchors and assigning ground truth. Therefore, we build a skipped NETM using our NET mechanism. Additionally, since these scale-overlaps make the features of one object that appear in the nearest neighboring pyramid layers complementary, we introduce a Nearest Neighbor Fusion Module (NNFM), shown in Fig.~\ref{net2}(b), to first enhance the pyramid features by fusing the nearest neighboring pyramid features. Based on the NNFM and NETM, six different detection heads for box regression and classification are built upon the scale-aware features to construct our scale-aware detector NETNet. We present the details of the NETM and NNFM as follows.
\subsection{NETM in a Skip Manner}
\label{subsec:NETNet_NETSkip}
In typical single-shot detectors, features in the shallow layers (\eg, $p_1$ with a larger feature resolution of 38$\times$38) are used for detecting smaller objects, while features in deeper layers (\eg, $p_3$ with a smaller resolution of 10$\times$10) are used for detecting larger objects. Because features with small resolutions (\eg, 3$\times$3) have large receptive fields and little spatial information, we embed only two NETMs in NETNet for feature erasing and transferring, leaving $p_5$ and $p_6$ unused. Due to the anchor configuration in SSD, two anchors in the nearest pyramid layers (\eg, $p_1$ and $p_2$) may share the same ground truth; that is, one small object should be detected in $p_1$ and $p_2$ simultaneously. To avoid disturbing this overlapped supervision, NETNet is elaborately designed with two skipped NETMs.
One NETM is built upon the pyramid features $p_1$ and $p_3$. To erase the features of larger objects from the shallow layer $p_1$, we first upsample $p_3$ and use a $1\times 1$ convolution to generate the soft spatial gate for larger objects as in Eq.~\ref{eqattention}. We evaluate several different spatial attention methods and choose the channel-wise spatial attention of Eq.~\ref{eqconvattention}. Then, the erasing operation in Eq.~\ref{eqsummary} generates features for smaller objects. We also embed a light fusion module into the NETM to make the generated scale-aware features more robust. The fusion module is constructed as a residual block, as in~\cite{resnet}, by stacking a $1\times 1$ convolution, a $3\times 3$ convolution, and a $1\times 1$ convolution with a skip connection. When applying the transferring module NTM, we first acquire the detailed information $p_{es}$ that is helpful for larger objects from $p_1$ as in Eq.~\ref{eqattentionmal}. This detailed information then enhances the features $p_3$ as in Eq.~\ref{eqntm}. The other NETM is built upon the pyramid features $p_2$ and $p_4$ with a similar configuration.
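Reusing the sketches above, the wiring of the two skipped NETMs can be summarized as follows. This is again our own sketch: the light residual fusion block is omitted for brevity, and whether the second NETM consumes the original or the already-updated deeper features is not specified in the text (we use the originals):
\begin{verbatim}
def apply_skipped_netms(p, gate13, ntm13, gate24, ntm24):
    # p = [p1, ..., p6]; gate*/ntm* are SpatialGate/NeighborTransfer modules
    p1, p2, p3, p4, p5, p6 = p
    # NETM over (p1, p3): erase large objects from p1, transfer them to p3
    g = gate13(p3, p1.shape[-2:])
    p1_sa, p_es1 = neighbor_erase(p1, g)
    p3_new = ntm13(p_es1, p3)
    # NETM over (p2, p4), with the same structure
    g = gate24(p4, p2.shape[-2:])
    p2_sa, p_es2 = neighbor_erase(p2, g)
    p4_new = ntm24(p_es2, p4)
    return [p1_sa, p2_sa, p3_new, p4_new, p5, p6]
\end{verbatim}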
\subsection{Nearest Neighbor Fusion Module}
\label{subsec:NETNet_NNFM}
As pointed out in feature pyramid studies~\cite{PFPN, EFIP}, features from neighboring pyramid layers are complementary, so incorporating context information from different layers promotes feature representation. Combining features from top to bottom is the typical way to build a feature pyramid~\cite{DSSD}. However, since our purpose is to remove large-object features from the shallow layers and generate scale-aware features, introducing features of additional scales may aggravate the feature scale-confusion problem. Therefore, we propose a more effective fusion module, the NNFM, to enhance the pyramid features.
As shown in Fig.~\ref{net2}(b), in NNFM, only features from the adjacent pyramid layers are fused as:
\begin{equation}
p_{fs} = \mathcal{H}_{s-1}(p_{s-1})\oplus \mathcal{H}_{s}(p_{s})\oplus \mathcal{H}_{s+1}(p_{s+1}),
\end{equation}
where we denote the fused features of the $s^{th}$ pyramid layer as $p_{fs} \in \mathbb{R}^{{h_{s}}\times {w_{s}} \times {c_{s}}}$. $\mathcal{H}_{s-1}$ is constructed by a pooling layer and a $1\times 1$ convolutional layer, $\mathcal{H}_{s}$ by a $1\times 1$ convolutional layer, and $\mathcal{H}_{s+1}$ by a bilinear upsampling layer and a $1\times 1$ convolutional layer. Finally, these features are fused by an element-wise sum operation. Thus, we enhance the $p_2$ features by aggregating complementary information from $p_1$, $p_2$, and $p_3$, instead of using the features $\left\{p_6,p_5,p_4,p_3,p_2\right\}$ as in a top-down pyramid network. Performing NNFM does not aggravate the feature scale-confusion, since the information of tiny objects from $p_1$ is discarded by the pooling operation and the information of larger objects from $p_3$ is erased by the subsequent NEM. As a result, the features of the objects that should be detected on $p_2$ are enhanced by fusing complementary information with the NNFM.
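A sketch of the NNFM follows (imports as before). The text specifies bilinear upsampling for $\mathcal{H}_{s+1}$ but leaves the pooling type of $\mathcal{H}_{s-1}$ open; the average pooling here is our assumption:
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class NNFM(nn.Module):
    # Fuses p_s with its two nearest pyramid neighbors only.
    def __init__(self, c_prev, c_s, c_next):
        super().__init__()
        self.h_prev = nn.Conv2d(c_prev, c_s, kernel_size=1)  # after pooling
        self.h_s    = nn.Conv2d(c_s,    c_s, kernel_size=1)
        self.h_next = nn.Conv2d(c_next, c_s, kernel_size=1)  # after upsampling

    def forward(self, p_prev, p_s, p_next):
        down = F.adaptive_avg_pool2d(p_prev, p_s.shape[-2:])  # pooling type assumed
        up = F.interpolate(p_next, size=p_s.shape[-2:],
                           mode='bilinear', align_corners=False)
        return self.h_prev(down) + self.h_s(p_s) + self.h_next(up)
\end{verbatim}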
\section{Experiments}
\label{sec:EXP}
\noindent \textbf{Dataset:} We evaluate our method on the benchmark detection dataset MS COCO~\cite{COCO} (\ie, COCO), which has 80 object categories and more than 140k images. Following~\cite{SSD,FPN}, we train our NETNet on the union (\textit{trainval35k}) of the 80k training images and a 35k subset of the validation images, and conduct ablation evaluations on the remaining 5k validation images (\textit{minival}). The final results are obtained by testing on the 20k test images (\textit{test-dev}) and submitting to the official server. The scale variations of objects in COCO are complex; AP$_s$, AP$_m$, and AP$_l$ evaluate the detection precision for small, medium, and large objects, respectively.
\noindent \textbf{Training protocols:} We re-implement SSD~\cite{SSD} as our baseline in a PyTorch framework. All models are trained for 160 epochs with the same training loss as SSD. For the ablation experiments, we set the initial learning rate to 0.002 and decrease it by a factor of 0.1 after the 90$^{th}$, 120$^{th}$, and 140$^{th}$ epochs, respectively. Following~\cite{RFBNet}, we use a warm-up learning rate in the first 5 epochs. We set the weight decay to 0.0005 and the momentum to 0.9. Each model is trained with a batch size of 32 on 2 GPUs. Results are reported using the standard COCO-style metrics.
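For reproducibility, the schedule amounts to the following (function and variable names are ours; the warm-up shape is not stated in the text, so the linear ramp is an assumption):
\begin{verbatim}
def learning_rate(epoch, base_lr=0.002, warmup_epochs=5):
    # Warm-up over the first 5 epochs (assumed linear), then decay
    # by a factor of 0.1 after the 90th, 120th, and 140th epochs.
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    num_decays = sum(epoch >= e for e in (90, 120, 140))
    return base_lr * 0.1 ** num_decays
\end{verbatim}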
\begin{table}[t]
\centering
\small
\setlength{\tabcolsep}{1.9mm}
\begin{center}
\renewcommand\arraystretch{1.0}
\begin{tabular}{r||ccc|ccc}
\Hline
Methods & AP &AP$_{50}$ &AP$_{75}$ &AP$_s$ &AP$_m$ &AP$_l$ \\ \hline \hline
Baseline SSD &25.1 &41.8 &26.1 &6.3 &28.3 &43.3 \\ \hline
NEM &29.4 &48.9 &30.4 &13.2 &32.2 &44.3 \\
NTM &25.8 &42.4 &26.9 &6.5 &28.5 &44.4 \\
NETM &30.4 &49.7 &31.4 &13.4 &33.0 &45.6 \\ \hline
NETM + TDP &30.6 &49.9 &31.9 &12.8 &33.0 &\textbf{46.3} \\\hline
\textbf{NETNet} &\textbf{31.1} &\textbf{50.5} &\textbf{32.4} &\textbf{13.6} &\textbf{35.0} &45.4\\ \hline
\end{tabular}
\end{center}
\caption{Ablation evaluation for NETM and NNFM on the MS COCO \textit{minival} set. NETNet is our model with NETM and NNFM.}
\label{NET}
\end{table}
\begin{table}[t]
\centering
\small
\setlength{\tabcolsep}{1.7mm}
\begin{center}
\renewcommand\arraystretch{1.1}
\begin{tabular}{r||ccc|ccc}
\Hline
Methods & AP &AP$_{50}$ &AP$_{75}$ &AP$_s$ &AP$_m$ &AP$_l$ \\ \hline \hline
Max Attention &28.7 &47.3 &29.9 &11.5 &31.4 &43.4 \\
Mean Attention &28.8 &47.6 &29.6 &12.5 &32.0 &43.9 \\
Global Attention &29.3 &48.6 &\textbf{30.5} &12.5 &32.0 &44.2 \\ \Hline
\textbf{NEM} &\textbf{29.4} &\textbf{48.9} &{30.4} &\textbf{13.2} &\textbf{32.2} &\textbf{44.3} \\ \hline
\end{tabular}
\end{center}
\caption{Ablation evaluation for different attentions of NEM.}
\label{attention}
\end{table}
\subsection{Ablation Study}
\label{subsec:EXP_abl}
\noindent \textbf{Configuration of NETNet.}
For ablation experiments, we construct NETNet with a VGG-16 backbone pretrained on ImageNet~\cite{imagenet}, and train the models with an input size of 300$\times$300. Following SSD, we truncate the final fully connected layers of the backbone and add a series of smaller convolutional layers to construct the feature pyramid.
\noindent \textbf{Evaluation of NETNet:}
\textit{Overall NEM.} As shown in Table~\ref{NET}, compared with SSD, the NEM yields a large absolute improvement of 4.3\% AP. Because our NEM removes the features of larger objects from the shallow layers to resolve feature confusion, the salient regions can be suppressed and the features of smaller objects can be activated, improving the performance for detecting smaller objects. We obtain a 6.9\% AP improvement for small objects and a 3.9\% AP improvement for medium objects, which demonstrates the effectiveness of the NEM for feature erasing.
\textit{NTM and NETM.} We propose to transfer features using the NTM to complement the detailed information of larger objects. As shown in Table~\ref{NET}, using only the NTM brings a 1.1\% improvement for large objects owing to their enhanced features. Combining the NEM and NTM encourages each module to learn better features through an adversarial strategy. Our NETM, using both the NEM and NTM, further improves the overall AP by 1.0\%.
\textit{NNFM.} We compare our NNFM for feature fusion with a typical Top-Down Pyramid (TDP) like FPN~\cite{FPN}, both built upon our NETM. When combining the TDP with our NETM, only a slight overall improvement of 0.2\% AP is achieved. Moreover, the detection performance for small objects degrades with the TDP (from 13.4\% AP to 12.8\% AP), which may be caused by feature confusion that is inconsistent with our NET mechanism. When combining the NETM with the NNFM (\ie, NETNet), a 31.1\% AP performance is obtained, and our NNFM further improves the performance for medium objects by a large margin (2.0\%).
\noindent \textbf{Evaluation of NEM:}
\textit{Attention for NEM.} We train our network with only the two NEMs to evaluate the different spatial gate generation methods discussed in Sec.~\ref{subsec:NET_NEM}. Due to the large computational cost of the spatial attention methods in~\cite{nonlocal, DANet}, we only implement a simplified version, denoted 'Global Attention', by reducing the inner channel number. 'Mix' represents combining 'Max' and 'Avg' attention. As presented in Table~\ref{attention}, using the attention of Eq.~\ref{eqconvattention} in our NEM, which generates a channel-wise spatial gate for each channel of the shallow pyramid features, obtains the best performance of 29.4\% AP. We visualize some examples in the supplementary material.
\begin{table}[t]
\centering
\small
\setlength{\tabcolsep}{2.0mm}
\begin{center}
\renewcommand\arraystretch{1.0}
\begin{tabular}{r||ccc|ccc}
\Hline
Methods & AP &AP$_{50}$ &AP$_{75}$ &AP$_s$ &AP$_m$ &AP$_l$ \\ \hline \hline
Baseline SSD &25.1 &41.8 &26.1 &6.3 &28.3 &43.3\\ \hline
NEM$_{13}$ &28.9 &48.7 &30.2 &12.8 &31.0 &44.4\\
NEM$_{24}$ &28.5 &46.6 &30.0 &10.6 &31.7 &44.5\\ \hline
NNEM &29.1 &48.8 &30.1 &12.7 &31.9 &{44.4}\\ \hline
\textbf{NEM} &\textbf{29.4} &\textbf{48.9} &\textbf{30.4} &\textbf{13.2} &\textbf{32.2} &44.3\\ \hline
\end{tabular}
\end{center}
\caption{Ablation evaluation for different NEMs on \textit{minival} set.}
\label{2NE}
\end{table}
\begin{table*}[t]
\footnotesize
\centering
\begin{center}
\renewcommand\arraystretch{1.0}
\setlength{\tabcolsep}{3.0mm}
\begin{tabular}{r|cc|cc|c|cc|ccc}
\Hline
Methods & Backbone & Image Size & Time (ms) & FPS & AP &AP$_{50}$ &AP$_{75}$ &AP$_s$ &AP$_m$ &AP$_l$ \\ \hline \hline
\textbf{Two-stage detectors:}&&&&&&&&&&\\
Faster~\cite{fasterrcnn} &VGG-16 &1000$\times$600 &147 & 6.8 &24.2 &45.3 &23.5 &7.7 &26.4 &37.1\\
Faster-FPN ~\cite{FPN} &ResNet-101 &1000$\times$600 &190 & 5.3 &36.2 &59.1 &39.0 &18.2 &39.0 &48.2\\
R-FCN~\cite{RFCN} &ResNet-101 &1000$\times$600 &110 & 9.1 &29.9 &51.9 &- &10.8 &32.8 &45.0\\
CoupleNet~\cite{couplenet} &ResNet-101 &1000$\times$600 &120 & 8.0 &34.4 &54.8 &37.2 &13.4 &38.1 &50.8 \\
Mask R-CNN~\cite{maskrcnn} &ResNext-101 &1280$\times$800 &210 & 4.8 &39.8 &62.3 &43.4 &22.1 &43.2 &51.2\\%\hline \hline
Cascade R-CNN~\cite{cascadercnn} &Res101-FPN &1280$\times$800 &141 & 7.1 &42.8 &62.1 &46.3 &23.7 &45.5 &55.2\\ \hline \hline
\textbf{Anchor-free detectors}:& & & & & & & & & &\\
CornerNet~\cite{cornernet} &Hourglass-104 &511$\times$511 &244 &4.1 &40.5 &56.5 &43.1 &19.4 &42.7 &53.9\\
CenterNet~\cite{centernet} &Hourglass-104 &511$\times$511 &340 &2.9 &44.9 &62.4 &48.1 &25.6 &47.4 &57.4 \\
FCOS~\cite{FCOS} &Res101-FPN &1333$\times$800 &- &- &41.5 &60.7 &45.0 &24.4 &44.8 &51.6\\ \hline \hline
\textbf{Single-stage detectors:} &&&&&&&&&&\\
SSD300~\cite{SSD} &VGG-16 &300$\times$300 & 17* & 58.9 &25.1 &43.1 &25.8 &6.6 &25.9 &41.4\\
DFPR~\cite{Reconfig} &VGG-16 &300$\times$300 & - & - &28.4 &48.2 &29.1 &- &- &- \\
PFPNet-S300~\cite{PFPN} &VGG-16 &300$\times$300 & - & - &29.6 &49.6 &31.1 &10.6 &32.0 &44.9\\
RefineDet320~\cite{RefineDet} &VGG-16 &320$\times$320 &26 & 38.7 &29.4 &49.2 &31.3 &10.0 &32.0 &44.4\\
RFBNet~\cite{RFBNet} &VGG-16 &300$\times$300 & 15 (19*) & 66.7 &30.3 &49.3 &31.8 &11.8 &31.9 &45.9\\
EFIP~\cite{EFIP} &VGG-16 &300$\times$300 & 14 & 71.4 &30.0 &48.8 &31.7 &10.9 &32.8 &46.3\\
HSD~\cite{HSD} &VGG-16 &320$\times$320 &25 &40.0 &33.5 &53.2 &36.1 &15.0 &35.0 &47.8\\
\rowcolor{mygray} NETNet (ours) &VGG-16 &300$\times$300 & 18 & 55.6 &{32.0} &{51.5} &{33.6} &{13.9} &{34.5} &46.2\\
\rowcolor{mygray} NETNet+Ref~\cite{HSD} &VGG-16 &320$\times$320 &- &- &{34.9} &{53.8} &{37.8} &{16.3} &{37.7} &48.2\\\hline
DSSD513~\cite{DSSD} &ResNet-101 &513$\times$513 & 182 & 5.5 &33.2 &53.3 &35.2 &13.0 &35.4 &51.1\\
RetinaNet~\cite{RetinaNet} &ResNet-101 &500$\times$500 & 90 & 11.1 &34.4 &53.1 &36.8 &14.7 &38.5 &48.5\\
STDN512~\cite{STDN} &DenseNet-169 &513$\times$513 & - & - &31.8 &51.0 &33.6 &14.4 &36.1 &43.4\\
DFPR~\cite{Reconfig} &ResNet-101 &512$\times$512 & - & - &34.6 &54.3 &37.3 &14.7 &38.1 &51.9\\
RefineDet512~\cite{RefineDet} &ResNet-101 &512$\times$512 & - & - &36.4 &57.5 &39.5 &16.6 &39.9 &51.4\\
SSD512~\cite{SSD} &VGG-16 &512$\times$512 & 28 & 35.7 &28.8 &48.5 &30.3 &10.9 &31.8 &43.5\\
DES512~\cite{SSDES} &VGG-16 &512$\times$512 & - & - &32.8 &53.2 &34.6 &13.9 &36.0 &47.6\\
RFBNet~\cite{RFBNet} &VGG-16 &512$\times$512 & 33 (37*) & 30.3 &34.4 &55.7 &36.4 &17.6 &37.0 &47.6\\
EFIP~\cite{EFIP} &VGG-16 &512$\times$512 &29 &34.5 &34.6 &55.8 &36.8 &18.3 &38.2 &47.1\\
TripleNet~\cite{triplenet} &ResNet-101 &512$\times$512 &- &- &37.4 &59.3 &39.6 &18.5 &39.0 &52.7\\
\rowcolor{mygray} NETNet (ours) &VGG-16 &512$\times$512 & 33 & 30.3 &36.7 &57.4 &39.2 &{20.2} &39.2 &49.0\\
\rowcolor{mygray} NETNet (ours) &ResNet-101 &512$\times$512 & 37 & 27.0 &{38.5} &{58.6} &{41.3} &19.0 &{42.3} &{53.9}\\\hline
\end{tabular}
\end{center}
\caption{Comparison on the MS COCO \textit{test-dev} set. The results are reported for the case of single-scale inference. We test the time on a Titan X Pascal GPU with Pytorch 0.3.1. Times with * are obtained by testing in the same environment with NETNet.}
\label{Test}
\end{table*}
\textit{NEM on different layers.} We evaluate the influence of each NEM and show the results in Table~\ref{2NE}. By adding the NEM only on $p_1$ and $p_3$ (NEM$_{13}$), we obtain a 6.5\% improvement in AP$_s$, which is better than that of NEM$_{24}$ (on $p_2$ and $p_4$) because $p_1$ contains more features of small objects. NEM$_{24}$ yields a better improvement for medium objects. There is some ground-truth and feature overlap between $p_1$ and $p_2$, which yields improvements for both small and medium objects with each NEM. We obtain the best result by combining them. These results demonstrate the effectiveness of our method for erasing redundant features.
\textit{Skipped NEM.} We also construct a model by adding three regular NEMs built upon ($p_1$, $p_2$), ($p_2$, $p_3$), and ($p_3$, $p_4$), respectively. This is a nearest-neighbor erasing module built upon the features of two neighboring layers, denoted NNEM in Table~\ref{2NE}. The NNEM model obtains a lower performance (29.1\%) than our NEM (29.4\%). Because the same ground truth may be assigned to predefined anchors from two neighboring layers, the NNEM disturbs the ground-truth supervision. Using the skipped NEM helps the network achieve better results for detecting small and medium objects.
\noindent \textbf{Evaluation of network configurations:}
We evaluate the performance of NETNet under different configurations. By refining the learning rate (using 0.004 as the initial learning rate), we achieve a best final performance of 31.8\% AP with a 300$\times$300 input size. When we further adopt the refined prediction procedure of~\cite{HSD}, a 34.7\% AP performance is obtained. In addition, a larger image size and a better backbone help improve the performance: with VGG-16 and a 512$\times$512 input, 36.1\% AP is obtained, while using ResNet-101 brings NETNet to a top performance of 38.2\% AP.
\begin{figure}[t]
\begin{center}
\footnotesize
\begin{tikzpicture}[scale=0.9]
\begin{axis}[
axis lines = left,
ymin=28, ymax=41,
xmin=10, xmax=295,
xlabel=Inference time (ms),
ylabel= \footnotesize{COCO mAP}]
\coordinate (legend) at (axis description cs:0.99,0.006);
\addplot[only marks,
mark=otimes*, yellow,
mark size=3.0pt
]
coordinates {
(28,28.8)};\label{plot:ssd}
\addplot[only marks,
mark=otimes*, green,
mark size=3.0pt
]
coordinates {
(90,34.4)};\label{plot:retinanet}
\addplot[only marks,
mark=otimes*, brown,
mark size=3.0pt
]
coordinates {
(37,34.4)};\label{plot:RFBNet}
\addplot[only marks,
mark=otimes*, pink,
mark size=3.0pt
]
coordinates {
(26,29.4)};\label{plot:RefineDet320}
\addplot[only marks,
mark=otimes*, red,
mark size=3.0pt
]
coordinates {
(14,30.0)};\label{plot:EFIP300}
\addplot[only marks,
mark=otimes*, purple,
mark size=3.0pt
]
coordinates {
(29,34.6)};\label{plot:EFIP}
\addplot[only marks,
mark=otimes*, gray,
mark size=3.0pt
]
coordinates {
(19,30.3)};\label{plot:RFBNet300}
\addplot[only marks,
mark=triangle*, yellow,
mark size=4.0pt
]
coordinates {
(18,32.0)};\label{plot:ours300}
\addplot[only marks,
mark=triangle*, black,
mark size=4.0pt
]
coordinates {
(33,36.7)};\label{plot:oursvgg}
\addplot[only marks,
mark=triangle*, blue,
mark size=4.0pt
]
coordinates {
(37,38.5)};\label{plot:oursres}
\addplot[only marks,
mark=otimes*, orange,
mark size=3.0pt
]
coordinates {
(275,40.5)};\label{plot:connernet}
\addplot[only marks,
mark=otimes*, blue,
mark size=3.0pt
]
coordinates {
(285,40.8)};\label{plot:centernet}
\end{axis}
\node[draw=none,fill=none,anchor= south east] at (legend){\resizebox{0.50\linewidth}{!}{
\begin{tabular}{l|c|c}
Detectors & Time &mAP \\ \hline
\ref{plot:RFBNet300} RFBNet300~\cite{RFBNet}& 19$^*$ & 30.3 \\
\ref{plot:EFIP300} EFIP300~\cite{EFIP}& 14 & 30.0 \\
\ref{plot:RefineDet320} RefineDet320~\cite{RefineDet} &26 &29.4 \\
\ref{plot:ssd} SSD512~\cite{SSD}& {28} & 28.8 \\
\ref{plot:EFIP} EFIP512~\cite{EFIP}& 29 & 34.6 \\
\ref{plot:retinanet} RetinaNet512~\cite{RetinaNet}& 90 & 34.4 \\
\ref{plot:RFBNet} RFBNet512~\cite{RFBNet}& 37$^*$ & 34.4 \\\hline
\ref{plot:connernet} CornerNet511~\cite{cornernet}& 244 & 40.5 \\
\ref{plot:centernet} CenterNet511~\cite{centernet}& 340 & 44.9 \\
\hline
\ref{plot:ours300} NETNet300 & 18 &32.0 \\
\ref{plot:oursvgg} NETNet512-VGG &33 &36.7 \\
\ref{plot:oursres} NETNet512-Res101 & 37 &38.5 \\
\hline
\end{tabular}}};
\end{tikzpicture}
\end{center}
\caption{Accuracy (mAP) vs. speed (ms) comparison. Methods in the top-left corner have better overall performance.
} \label{figuretimevs}
\end{figure}
\subsection{Results on COCO Test Set}
We evaluate NETNet on the COCO \textit{test-dev} set and compare it with previous state-of-the-art methods, as shown in Table~\ref{Test}. Our NETNet outperforms the baseline SSD significantly with only a slight extra time cost. With an input size of 300$\times$300 and VGG-16, our NETNet obtains 32.0\% AP at 55.6 FPS, which outperforms other state-of-the-art single-shot detectors with a similar configuration. Employing the refinement in~\cite{HSD} helps NETNet obtain a top performance of 34.9\% AP. When testing with an image size of 512$\times$512, NETNet obtains 36.7\% AP (30.3 FPS) with VGG-16 and 38.5\% AP (27.0 FPS) with ResNet-101. Some anchor-free methods achieve better detection accuracy, but they generally require more than 100 ms to process one image. As shown in Fig.~\ref{figuretimevs}, our method achieves an optimal trade-off between accurate detection and fast speed.
\begin{figure}
\centering
\includegraphics[width=8.0cm]{Figure_resvis1.pdf}
\caption{Visualization of detection results. Our method can alleviate the part false positive problem, as in (b), and the missing of small objects (the false negative problem), as in (d). More detection results can be found in the supplementary material.}
\label{figurevis}
\end{figure}
\section{Discussion}
Different from previous pyramid methods, the NET mechanism reconfigures the basic pyramid into scale-aware features that are more suitable for scale-aware detection. On the other hand, because shallow features are used to generate deep features through progressive convolution operations in a network, imposing a direct hard supervision would force the large-object regions in the shallow layers of the backbone to become background and harm the feature learning of the deep layers. NET works like a soft supervision by introducing a reversed feedback from high-level features for feature erasing, which does not harm feature learning but rather enhances information aggregation in the backbone pyramid. More visualization analysis can be found in the supplementary material.
In addition, we carry out an error analysis to further demonstrate the effectiveness of our method in addressing the false positive (FP) problem and the false negative (FN, \ie, missing detection) problem. For a fair comparison, we use the detection results on the \textit{minival} set from SSD and NETNet (31.8\% AP), both with VGG-16 and a 300$\times$300 image size.
\noindent \textbf{Tackling the FP problem.} By treating a predicted box that has an IoU \textless 0.5 with the ground truth as an FP sample, we conduct a statistical analysis of the FP problem. In total, our method produces about 20k fewer FP samples than SSD, as shown in Fig.~\ref{plterror}(a), which demonstrates that our method can alleviate this problem. We further analyze the part false positive (PFP) problem based on the PFP samples under different thresholds. The part rate $p_\theta$ is calculated as the ratio of the intersection region (between one predicted FP box and the ground truth) over the area of the predicted box. If $p_\theta$ is higher than the threshold, the FP box is regarded as a PFP sample. We present the PFP error in Fig.~\ref{plterror}(b), where the x-axis denotes the threshold and the y-axis represents the ratio of the number of PFP samples over the total number of predicted boxes. Our method reduces the PFP error. We visualize some detection results in Fig.~\ref{figurevis}(a) and (b).
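The part-rate computation is simple enough to state directly; the following sketch uses our naming, with boxes given as (x1, y1, x2, y2) corners:
\begin{verbatim}
def part_rate(fp_box, gt_box):
    # p_theta = area(FP box intersect GT box) / area(FP box); an FP box
    # whose p_theta exceeds the threshold counts as a part false positive.
    ix1 = max(fp_box[0], gt_box[0]); iy1 = max(fp_box[1], gt_box[1])
    ix2 = min(fp_box[2], gt_box[2]); iy2 = min(fp_box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    fp_area = (fp_box[2] - fp_box[0]) * (fp_box[3] - fp_box[1])
    return inter / max(fp_area, 1e-12)
\end{verbatim}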
\noindent \textbf{Tackling the FN problem.} We show the error analysis plots of the baseline SSD and our NETNet for small objects in Fig.~\ref{Error}. Each plot shows a Precision-Recall (PR) curve obtained by eliminating the corresponding detection errors, except for `C75' (\ie, AP$_{75}$) and `C50' (\ie, AP$_{50}$). Thus, the area of each color measures the corresponding error. Overall, the improvement of our method is most significant for small object detection (\ie, a 39.8\% FN error for NETNet \textit{vs.} a 60.8\% error for SSD). As shown in Fig.~\ref{figurevis}(d), our NETNet can detect small objects precisely and alleviate the FN problem well.
\begin{figure}
\centering
\includegraphics[width=8.0cm]{TOTALERROR_color_v1.pdf}
\caption{Error analysis of the total false positive problem (a) and the part false positive problem (b) on the MS COCO \textit{minival} set.}
\label{plterror}
\end{figure}
\begin{figure}
\small
\centering
\setlength{\tabcolsep}{2mm}
\begin{tabular}{cccc}
\includegraphics[width=3.8cm]{overall-all-small.pdf}& \includegraphics[width=3.8cm]{our_overall-all-small.pdf}\\
(a) \scriptsize{SSD} &(b) \scriptsize{NETNet}
\end{tabular}
\caption{Error analysis for (a) the baseline SSD and (b) our NETNet on small objects. `FN' represents the missing detection error (false negative). The overall false negative error, measured by subtracting the AP value of BG from FN, is 60.8\% for SSD and 39.8\% for NETNet. Lower is better.}
\label{Error}
\end{figure}
\section{Conclusion}
In this paper, we have proposed a Neighbor Erasing and Transferring (NET) mechanism with feature reconfiguration for tackling complex scale variations in object detection. Scale-aware features are generated by erasing the features of larger objects from the shallow layers and transferring them into deep pyramid layers.
We have constructed a single-shot network called NETNet by embedding NETM and NNFM to achieve fast and accurate scale-aware object detection. As demonstrated by experiments on the MS COCO dataset, our NETNet is able to solve the missing detection and part false positive problems effectively, leading to an improved trade-off between real-time and accurate detection. In future work, we plan to explore the advantages of NET on other detectors for scale-aware object detection.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The continuous development of the Internet of Things (IoT) and cyberspace has established ubiquitous connections between objects, things, and humans. Meanwhile, it also promotes the seamless convergence among physical, social, thinking, and cyber spaces \cite{A1}. In recent years, human activities such as work, shopping, conferences, and entertainment have increasingly moved online. Particularly in the context of the COVID-19 pandemic, people tend to spend more time in virtual space. New commercial forms drive more industries to seek innovative paths of development, especially in pioneering realms such as electronic games, fashion, and education.
The Metaverse thrives at this stage with the vision of providing more possibilities in daily life and industrial manufacturing \cite{A2}. It has become a buzzword and has attracted much attention from both academia and industry. We are all impressed by the wide spread of the Metaverse, from the science-fiction novel \emph{Snow Crash}, where the term was coined, to Facebook, the IT giant that was even renamed Meta a few months ago. Compared with the Internet world or cyberspace, which refers to a network of networks, the Metaverse depicts a parallel and immersive world where virtuality and reality are fused. It could be regarded as a hypothesized iteration of cyberspace, which humans could enter with techniques such as Virtual Reality (VR), Augmented Reality (AR), etc. \cite{A3}. The vision of the Metaverse is to provide an immersive user experience with low latency and strong intelligence. We can imagine that the scenes in the film \emph{Ready Player One}, where everyone is interconnected through the world of the OASIS, may come true in the Metaverse \cite{A4}. However, the digitalized Metaverse is not only for playing games; it is also portrayed as a persistent and synchronized digital world with its own economy, culture, regulation, ethics, and morality. Hence, to support complicated applications, the Metaverse must be equipped with advanced techniques to keep activities, interactions, and transactions safe, transparent, and sustainable \cite{A5}.
\begin{figure*}[!ht]
\centering
\includegraphics[width=17cm]{2.png}
\caption{The differences between the essence of the universe and the Metaverse.}
\label{2}
\end{figure*}
To create a deeply virtualized Metaverse and fulfill an extremely immersive user experience, it is necessary to carry out innovative technical research. Many scholars and engineers have launched exploratory research, particularly on supporting techniques for the Metaverse. For example, in \cite{A6}, the authors mention that the Metaverse is not a composition of one or more technologies; rather, it depends on six technical pillars abbreviated as BIGANT: Blockchain, Interactivity, Game, Artificial Intelligence (AI), Network, and IoT. Lee also overviews the technological singularity of the Metaverse from the aspects of Extended Reality, User Interactivity, AI, Blockchain, Computer Vision, IoT and Robotics, Edge and Cloud Computing, and Future Mobile Networks \cite{A7}. All these techniques will contribute to Metaverse development, although most of them are proposed for specific application requirements.
Different from existing studies, this article introduces the Metaverse from a new technical perspective, ranging from its essence and technical framework to open challenges. The main contributions of this paper are as follows:
\begin{itemize}
\item Introducing the essence of the Metaverse from its etymology, and illustrating, through the case of Maslow's Hierarchy of Needs, that the Metaverse needs to go beyond the limits of space, time, and content.
\item Proposing four pillars of the Metaverse named ubiquitous connections, space convergence, virtuality and reality interaction, and human-centered communication, and establishing a corresponding technical framework.
\item Summarizing open technical challenges in the Metaverse as guidance for upcoming research.
\end{itemize}
The remainder of this paper is organized as follows: Section 2 introduces the essence of the Metaverse from its etymology. Section 3 proposes the four pillars of the Metaverse and establishes the corresponding technical framework. Section 4 points out some open issues related to technical development in the Metaverse. Section 5 concludes this paper.
\begin{table*}[!ht]
\centering
\caption{Some wonderful experiences related to Maslow's different hierarchies of needs in the Metaverse.}
\label{tab1}
\begin{tabular}{lllll}
\toprule
\toprule
\makecell[l]{Hierarchy \\ of Needs} & Case & Space & Time & Content \\
\midrule
\makecell[l]{Physiological \\ Needs} & Food & \multicolumn{1}{m{4cm}}{In the Metaverse, it would be easy to acquire precious food ingredients from all over the world. Their cultivation will not be severely limited by geographical location, temperature, humidity, etc.} & \multicolumn{1}{m{3cm}}{Food in the Metaverse is rarely subject to strict shelf-life restrictions, and some may be kept fresh forever.} & \multicolumn{1}{m{3cm}}{In the Metaverse, food does not necessarily need to be processed or made from ingredients. Instead, modern technologies are adopted to simulate the smells, tastes, and colors of food.} \\
\midrule
Safety needs & Transportation & \multicolumn{1}{m{4cm}}{It allows humans to travel around the world even while staying at home. Users could break the speed limit and move instantaneously at will. Additionally, it becomes possible to travel to places with harsh environments, such as the Antarctic and Arctic, icebergs, and volcanoes.} & \multicolumn{1}{m{3cm}}{One of the key features of the Metaverse is time travel, which can provide a wonderful user experience. People can go back to the past or fly to the future in the Metaverse.} & \multicolumn{1}{m{3cm}}{Transportation in the Metaverse may depend not only on vehicles, ships, and airplanes, but also on advanced means such as brainwaves, specific actions, etc.} \\
\midrule
\makecell[l]{Love and \\ belongingness \\ needs} & Friendship & \multicolumn{1}{m{4cm}}{Humans could establish friendships and communicate with each other face to face in the Metaverse, even if the two sides are thousands of miles apart in the real world.} & \multicolumn{1}{m{3cm}}{Friendship in the Metaverse will not depend much on time constraints. For example, the impact of time differences on social interaction is so tiny that it can be neglected.} & \multicolumn{1}{m{3cm}}{The types of social friendship are various in the Metaverse. Friendship could be established between humans and virtual identities, and also among virtual identities.} \\
\midrule
Esteem needs & \makecell[l]{Social \\ position} & \multicolumn{1}{m{4cm}}{Humans could hold social positions in the Metaverse similar to those in the real world. In addition, human beings can hold more social positions across different countries, fields, and spaces, given competent capabilities.} & \multicolumn{1}{m{3cm}}{Humans can play different social roles in the Metaverse, whether roles from ancient times or from the future. For example, one can travel back in time to play a Chinese emperor.} & \multicolumn{1}{m{3cm}}{There may be various social organizations in the Metaverse, which may be completely virtual or real. Humans could hold different positions, earn respect, and satisfy self-esteem needs in the Metaverse.} \\
\midrule
\multicolumn{1}{m{2cm}}{Self-actualization needs} & \makecell[l]{Self-\\fulfillment} & \multicolumn{1}{m{4cm}}{The Metaverse provides more opportunities for humans to achieve self-fulfillment. For example, by finding friends with similar talents in the Metaverse, humans could receive recognition for their specific skills and realize self-fulfillment.} & \multicolumn{1}{m{3cm}}{The Metaverse may keep detailed records of any past achievements, keeping the feeling of self-fulfillment from fading over time.} & \multicolumn{1}{m{3cm}}{In the Metaverse, humans have multiple ways to achieve self-fulfillment. They are able to create cultural, psychological, and other virtual contents and values in the Metaverse.} \\
\bottomrule
\bottomrule
\end{tabular}
\end{table*}
\section{The Essence of Metaverse from its Etymology}
So far, there is no precise definition of the essence of the Metaverse. Some regard it as a novel living space for humans, while others consider it a combination of multiple technologies. Although people remain divided over the essence of the Metaverse, they have reached a consensus that the Metaverse offers more possibilities for immersive user experiences. In other words, the Metaverse changes the way we interact with virtual environments, and it promises to greatly improve immersion with advanced techniques.
The word ``Metaverse'' is composed of the prefix ``Meta'' and the suffix ``verse''. ``Meta'' is a Greek term, popularly used as a prefix to mean after or beyond. For example, the word ``metadata'' often means something more than data, especially with a self-referential connotation. The term ``verse'' is an abbreviation of ``universe''. Inspired by this, we argue that the portmanteau word ``Metaverse'' could be regarded as a computer-generated virtual space that is beyond the universe \cite{A8}. If so, the essence of the Metaverse should also go beyond that of the universe.
According to the physical or philosophical definition, the essence of the universe could be understood as all of space, time, and their contents, including various matter and energy \cite{A117}. We can say that user experiences in the universe, or the real world, are restricted by spatial-temporal coordinates. For example, any true feeling in daily life must occur at a given time and place. Additionally, in most cases, the experience in the real world depends on actual contents, such as the astonishing culture shock produced when one watches an exhibition in a museum, or the rumblings of workplaces that let one grasp the process of industrial manufacturing. Moreover, the user experience must abide by the rules of real life, e.g., time past cannot be recalled, and physical speed has an upper limit.
Based on the etymology of the Metaverse, we infer that the essence of the Metaverse goes beyond that of the universe, especially in the aspects of space and time. Figure \ref{2} depicts the differences between the essence of the universe and that of the Metaverse. To provide immersive user experiences, the Metaverse focuses on breaking the limits of space, time, and content. It changes the way humans interact with the outside world and concentrates more on the enhancement of immersion.
In other words, there would be less dependence on spatial and temporal characteristics, and all the contents would be much more abundant and feasible. For example, with the help of holographic projection, the late Teresa Teng could ``stand'' on the stage and sing with other singers at a concert, providing a visual feast with hyper spatiotemporality. Such scenes without physical and spatial restrictions would be very common in the Metaverse. To provide a clear illustration of the changes in the future Metaverse, we cite Maslow's Hierarchy of Needs and describe the possible progress the Metaverse brings at each stage of needs.
The popular five-stage version of Maslow's Hierarchy of Needs includes physiological needs, safety needs, love and belongingness needs, esteem needs, and self-actualization needs \cite{A119}. Physiological needs refer to the biological requirements for human survival, such as food, shelter, and clothing. Once these basic needs are satisfied, safety needs become salient. These needs include the security of the body, employment, resources, and health. The love and belongingness needs are the third level in the hierarchical theory, which concentrates on social feelings, especially the emotional needs of relationships. Esteem needs refer to the need for respect, self-esteem, and self-confidence, including not only respect for oneself, but also the desire and need for respect from others. The highest level in Maslow's Hierarchy is the self-actualization needs, which motivate humans to realize their potential and seek personal growth and self-realization. Maslow's Hierarchy of Needs describes the five most basic and innate needs, which are the motivations guiding individual behavior.
The Metaverse will provide wonderful experiences when humans strive to satisfy those requirements, especially by breaking spatial and temporal limits. According to Maslow's Hierarchy of Needs, Table \ref{tab1} lists some cases of progress in the Metaverse. For example, to meet physiological needs, advanced technology will be used to emulate the tastes and smells of food so that it is possible for people to ``experience'' virtual food in the Metaverse.
Although we introduce some cases in the Metaverse without spatial and temporal constraints, it is worth noting that hyper spatiotemporality describes the features of the user experience in the Metaverse. Since people are still in the traditional physical world, they cannot completely escape time and space constraints. In other words, we argue that the Metaverse attempts to overcome spatial-temporal restrictions for an immersive user experience. The Metaverse comes from the universe and transcends the universe, especially in terms of time and space. The blueprint of the Metaverse depicts a more immersive living space parallel to the real world for humans.
\section{Pillars in the Metaverse and the corresponding technical framework}
As mentioned above, the Metaverse aims at providing an immersive user experience, which requires overcoming the limitations of space and time and expanding contents as much as possible. Hence, advanced technologies play fundamental roles in the Metaverse. As shown in Figure \ref{fig_2}, we identify four pillars of the Metaverse, namely ubiquitous connections, space convergence, virtuality and reality interaction, and human-centered communication. These pillars make it possible to break physical boundaries and temporal limitations and achieve an immersive user experience in the Metaverse. In this section, we introduce the pillars of the Metaverse and the corresponding technical framework.
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{GeneralTechnicalAspects1.png}
\caption{The four pillars in the Metaverse.}
\label{fig_2}
\end{figure}
\subsection{Four pillars in the Metaverse}
In this section, we give an overall introduction to the four pillars, which largely contribute to the great immersion in the Metaverse.
\subsubsection{Ubiquitous connections}
As Chris Wysopal emphasized in Forbes, ubiquitous connection, or ubiquitous connectivity, describes the fact that connections between devices and software are omnipresent and already exist in every corner of our life \cite{A14}. In recent years, the Internet has brought significant progress, allowing almost everything to be connected to it safely and seamlessly \cite{A9}. By establishing ubiquitous connections among things, humans, and objects in both real and virtual spaces, various activities could be carried out. For example, people can communicate without barriers even if they are thousands of miles apart, and trades can be conducted in an orderly manner regardless of different time zones.
Ubiquitous connections make it possible to break spatial-temporal restrictions and lay the foundation for the emergence of the Metaverse. For example, techniques related to sensing and perception provide possibilities for entities in the real world to enter the virtual world. Techniques in networking and communication allow data to be transmitted smoothly between different parties, overcoming limitations in space, time, and content. Moreover, applications and services in daily life such as smart cities, intelligent transportation, and smart healthcare would also be replicated and achieved in the Metaverse. Additionally, there would be more advanced applications in the Metaverse, for instance, immersive commerce, education, traveling, and so forth.
There already exist some explorations of the Metaverse built on ubiquitous connections. For instance, Schaf establishes a prototype named 3D AutoSysLab for distance education scenarios, designing an initial framework of an immersive learning environment with techniques like wearable sensors, mixed reality, a 3D social Metaverse, and simulation modeling \cite{A10}. Han points out that sensors and devices in IoT could help replicate things in the real world into the Metaverse. This virtual replication is named the digital twin, which helps a lot in dynamic resource allocation and risk prediction \cite{A11}. The digital twin also plays a significant role in smart cities and manufacturing, where a smart city digital twin is a simulation model of the physical assets, buildings, roads, and other entities in cities \cite{A12}. By replicating real scenarios in smart cities, it is possible to monitor present conditions and make an appropriate response in case of any unpredictable situation \cite{A13}.
The development of ubiquitous connections, in particular the technical advances in IoT, realizes the interconnection between real and virtual spaces. It overcomes the limitations between spaces and provides possibilities for the Metaverse to come true and develop towards further prosperity.
\subsubsection{Space convergence}
Due to the continuous development of ubiquitous connections, an overwhelming convergence between physical, social, thinking, and cyber spaces emerges, which could be interpreted as General Cyberspace \cite{A1}. It captures the omnipresent convergence between the Cyber-Physical System (CPS), the Cyber-Physical-Social System (CPSS), and the Cyber-Physical-Social-Thinking hyperspace (CPST), which is sure to break spatial boundaries and create possibilities for interactions between virtual and real spaces \cite{A15,A16,A17}. In other words, the techniques of space convergence will contribute a lot to the development of the Metaverse.
Since physical space is the basic premise of all other existence, the convergence between physical and cyber spaces was the first to be achieved. At the initial stage, with simple printers, cameras, and servers, the convergence was merely realized by mutual mapping. For instance, Khanna proposes an IoT-based system for smart parking in which the practical parking resources and traffic volumes are mapped onto a cloud server to achieve real-time computing and provide appropriate suggestions \cite{A18}. In the coming days, technical advances in ubiquitous computing and ambient environments will make the convergence between physical and cyber spaces closer. For example, Tao illustrates the techniques of digital twins towards Industry 4.0, which could achieve high-fidelity cyber-physical integration with accurate models throughout the whole process of smart manufacturing \cite{A19}. These techniques will contribute a lot to building vivid scenes in the Metaverse and providing fundamental support for the immersive and digitalized world.
Another typical aspect of space convergence is that between social and cyber spaces. Although early communication between humans was limited by physical boundaries and time constraints, the development of the Internet has made communication more flexible. A large number of social media platforms such as Twitter, WeChat, LinkedIn, Instagram, Facebook, etc. have emerged, whereby online communication can proceed smoothly \cite{A20}. Users on social networking sites have corresponding identities representing themselves. These virtual identities, the so-called ``avatars'' in the virtual space, integrate social and cyber spaces and establish online social relationships as required. This could be regarded as an initial prototype of the Metaverse, as Mark Zuckerberg once emphasized that online social sites would strive for an extremely interconnected Metaverse in the future. The continuous technical evolution in cyber-social space convergence is sure to promote the development of the Metaverse.
In addition, the convergence between thinking and cyber spaces has also become popular these days, especially with the advances in the disciplines of neuroscience, cognitive computing, and brain informatics. Although there still exist disputes about connecting thoughts and ideas via the Internet, a big step forward has been made in this regard with the help of embedded sensors, electrodes, etc. They can monitor and analyze brain signals and then make appropriate responses with output devices. For example, in 2016, Elon Musk co-founded a company named Neuralink to develop ultra-high-bandwidth Brain-Computer Interfaces (BCI) \cite{A22}. In the following years, the company built an integrated brain-machine interface platform with thousands of channels, which shows high scalability in clinical packages \cite{A21}. It also announced a BCI prototype named LINK V0.9 in 2020, which was implanted into a pig's brain and demonstrated high performance in monitoring the pig's brain activity \cite{A23}. These technologies not only provide the basis for a further understanding of human thoughts but also lay foundations for Metaverse development, where all ideas, thoughts, and brain activities should be understood in the digitalized space.
\subsubsection{Virtuality and reality interaction}
As the Metaverse describes a digitalized space while human bodies are still in the real physical space, the interactions between virtual and real spaces are extremely significant. Humans need to enter the Metaverse and enjoy wonderful experiences, for which the techniques related to virtuality and reality interaction provide appropriate ways.
Firstly, the continuous development of interaction technologies provides the possibility to enter the Metaverse. With the iterative update of such interactive devices as mice, keyboards, touch screens, and modern wearable equipment (helmets and 3D glasses), interactive technologies have made a great step forward. For example, in \cite{A25} the authors design a 3D educational game based on VR Gear and Samsung VR display devices. By establishing online teaching and learning scenarios, it allows users to acquire wonderful learning experiences at home. Epp concludes that common VR headsets, such as Oculus Rift, HTC Vive, and Windows Mixed Reality, are widely adopted in VR games; they make it easier to enter virtual games and improve the quality of VR games to a large extent \cite{A24}. Considering the advances in interactive technologies, it would be much more convenient for users to enter the Metaverse and enjoy fantastic services in the coming days.
Besides, techniques related to virtuality and reality interaction create and optimize various forms of content, which contribute a lot to immersive user experiences in the Metaverse. With the help of VR, AR, and other interactive technologies, richer contents could be created without considering spatial-temporal constraints. For example, techniques of holographic projection contribute a lot to digital art exhibitions and show a high performance in reproducing 3D images of artworks \cite{A26}. They enable users to enjoy visual feasts by replicating and creating different contents, from ancient times to the near future, from polar oceans to the ends of the universe. Especially in recent years, techniques like precise location trackers, intelligent gloves, and motion capture systems have made a much more immersive user experience possible, opening the way to the prosperity of the Metaverse.
\subsubsection{Human-centered communication}
\begin{figure*}[!ht]
\centering
\includegraphics[width=16cm]{TechnologyOverview.png}
\caption{The technical framework corresponding to the four pillars in the Metaverse.}
\label{fig_3}
\end{figure*}
Human-centered communication has developed from initial face-to-face conversations and written letters to wired and wireless communication and today's instant messaging. The techniques of instant messaging have provided great convenience in humans' daily life. They allow communication to break spatial and temporal constraints and receive timely responses. The advances in instant messaging have made human-centered communication no longer limited to real space, shifting gradually to virtual cyberspace. However, most online social communication still relies on media such as electronic screens. The two sides of instant communication keep in touch through text, voice, and video, but they cannot obtain a more immersive experience.
In the Metaverse, human-centered communication also plays a central part since most activities are carried out based on humans' communication and collaboration. To improve the quality of human-centered communication, technical breakthroughs need to be explored to further break the constraints of time and space and enhance the sense of immersion. For instance, the authors in \cite{A27} adopt 3D simulation technologies and establish a system named ``Avatar to Person'', which is designed to repair virtual faces and generate a simulated voice for the disabled. It allows humans to communicate effectively and improves the social participation of people with disabilities. Chang makes a case study of K-Live in Korea and adopts hologram technologies to monitor user experience to achieve sustainable satisfaction and immersion \cite{A28}. A similar case appeared at the Jiangsu Satellite TV New Year's Eve concert in 2022, where the late Teresa Teng appeared on the stage singing and performing with other singers.
The continuous advances in human-centered communication have provided possibilities for the development of the Metaverse, and in turn, the Metaverse also puts forward technical requirements to a certain extent. When communicating with each other in the Metaverse, humans may depend on much more diverse and convenient media, or there may be no media at all. More efficient communication modes, faster transmission speeds, and strong computing abilities are needed in the Metaverse for people to become more immersed in the parallel digitalized world.
\subsection{The technical framework corresponding to the four pillars in the Metaverse}
Figure \ref{fig_3} depicts the proposed technical framework of the Metaverse corresponding to the four pillars. It can be seen that the techniques of ubiquitous connections and space convergence are solid foundations for overcoming spatial-temporal constraints, while the techniques related to virtuality and reality interaction serve as methods for helping humans access the Metaverse and communicate with each other. Techniques of virtuality and reality interaction can also enrich the immersive contents and scenarios in the Metaverse. Moreover, techniques related to human-centered communication act as the driving force of the Metaverse: since humans are still at the center of the Metaverse, they put forward higher requirements for multi-dimensional, full-sensory, and 3D communication experiences. In this section, we give a comprehensive overview of the techniques involved in the framework.
\subsubsection{Potential techniques in aspect of ubiquitous connections}
As discussed above, the Metaverse is a parallel and digitalized world that allows humans to interact with each other. Techniques of ubiquitous connections are significant for things, humans, and entities to establish relationships. Figure \ref{3} summarizes the potential techniques related to ubiquitous connections, which would contribute a lot to the development of the Metaverse.
\ \par
\noindent
\textbf{a) Ubiquitous sensing}
\ \par
Ubiquitous sensing refers to the ability of omnipresent perception via different sensors attached to or embedded in the surroundings \cite{A89}. Similar to the sensing layer in the IoT architecture, ubiquitous sensing in the Metaverse also serves as the foundation, responsible for sensing objects and collecting information from the surroundings.
Sensing techniques in real life mainly include sensor networks, Radio Frequency Identification, GPS positioning, etc. They rely on various sensors to acquire abundant information and further make instant decisions. For example, Li adopts ambient sensors to monitor inhabitants' daily activities, especially those of people with disabilities and the elderly living alone who may encounter potential risks \cite{A90}. In global positioning systems, sensing techniques allow the precise perception of location, velocity, and direction via different sensors, which have been widely used in areas of smart traffic, team sports, and logistics.
Ubiquitous sensing is also significant in the Metaverse. It collects abundant information from the surroundings as required and serves as the foundation for providing immersive user experiences in the Metaverse. In addition to sensing virtual environments, the Metaverse also needs to strengthen the control over ``avatars'' and monitor users' immersion in a timely manner. Hence, techniques of ubiquitous connections in the Metaverse must be stronger in context awareness and seamless perception. Advanced techniques already exist in wearable devices, interactive helmets and glasses, as well as embedded chips and electrodes. For example, the Metaverse prototype of a university campus proposed in \cite{A5} is designed to provide ubiquitous sensing-based services. It adopts location information and eye-tracking as sources of sensing input and owns the ability of ubiquitous sensing to perceive the surroundings efficiently.
\begin{figure}[!ht]
\centering
\includegraphics[width=7.5cm]{4.png}
\caption{Techniques related to ubiquitous connections.}
\label{3}
\end{figure}
Ubiquitous sensing provides access to data and information in the Metaverse and lays the foundation for further applications and services. We could not imagine what would happen if the status of an ``avatar'' were not perceived accurately and in a timely manner; VR games cannot provide the best user experience without immediate and precise sensing.
\ \par
\noindent
\textbf{b) Networking and communication}
\ \par
Techniques of networking and communication in the Metaverse are responsible for safe and error-free data transmission, which is similar to the role of the network layer in the IoT architecture. The most common techniques in the network layer include short-range wireless communication and mobile communication technologies, which are also applicable in the Metaverse for networking and communication.
Techniques of short-range wireless communication refer to the transmission paradigm where the sender and receiver transmit information through radio waves and the transmission distance is limited to a short range. Bluetooth and WiFi are the most popular representatives. As a global specification for wireless data and voice communication, Bluetooth serves as special, short-distance connectivity without wires, aiming to establish a communication environment for fixed and mobile devices. It operates in the 2.45 GHz frequency band used around the world, providing a transmission rate of 1 Mbps and a transmission distance of 10 m \cite{A91}. WiFi is a wireless extension of Ethernet, mainly used in home wireless networks and public hotspots such as airports, hotels, and shopping malls, although its signal is easily blocked by walls \cite{A92}. These short-range wireless communications are equally significant in the Metaverse and will be adopted between devices and ``avatars''. For instance, Bluetooth would be popular in VR devices due to its small size and low cost.
Additionally, techniques of mobile communication also play significant roles in the Metaverse. Due to its fast evolution in recent years, mobile communication technology has developed from the first generation to the fifth (5G) and sixth generation (6G), which have brought huge benefits to human life. For example, the fourth generation (4G) mainly adopts orthogonal frequency division multiplexing and multiple-input multiple-output technologies, and it supports faster communication speeds and a wider network spectrum \cite{A93}. 5G is an extension of previous generations and is currently the latest cellular mobile communication technology in deployment. Its goal is to achieve high-speed data transmission with low latency, low cost, large system capacity, and large-scale device connection \cite{A94}. Empowered by emerging technologies, 6G arises as a new communication technology. It is revolutionizing applications in various domains and providing immense impacts on citizens, consumers, and businesses for a future society of fully intelligent and autonomous systems \cite{A95}. Since the Metaverse aims at providing immersive user experiences and puts forward higher requirements for data transmission and processing, 6G, as well as other future communication techniques, could provide intelligent capabilities of high-speed data transmission, enabling users to achieve an immersive experience to a large extent.
\ \par \noindent \textbf{c) Strong computing power}
\ \par
The Metaverse provides much more complicated applications, such as high-precision VR games, immersive social sites, and complex economic systems, which require stronger computing power to keep responses timely. Computing power, as the name implies, refers to the ability of data processing and computing. It exists in all kinds of intelligent hardware devices, ranging from laptops and mobile phones to supercomputers.
At present, computing science is moving from traditional computation and digital simulation to a paradigm composed of high-performance computing, big data, and deep learning. Under such circumstances, computing power also has different measures, such as computing speed, algorithm performance, data storage, communication capacity, and computing service capacity. To satisfy these strict demands, technical exploration is ongoing. For example, the cloud, fog, and edge computing paradigms provide possibilities for optimizing computing resources, which largely alleviates the computing pressure on central servers. Additionally, leading companies are devoted to innovative research on hardware and software to maximize computing power. Microsoft has announced a new supercomputer that enables the training of extremely large AI models, and it has also published a new version of its open-source deep learning library for PyTorch, which requires less computing power when training large, distributed models.
The Metaverse requires stronger computing power to provide timely responses and immersive user experiences. There is an urgent need to continuously develop computing-power-related techniques for the future Metaverse.
\ \par \noindent \textbf{d) Artificial Intelligence}
\ \par
Artificial intelligence (AI) is a branch of computer science, which simulates the information processes of human consciousness, thinking, and intelligence \cite{A101}. In recent years, with continuous technical advances, AI techniques have been widely developed and used in both daily life and industrial manufacturing. Generally speaking, the key technologies involved in AI include machine learning \cite{A102,A103}, knowledge graphs \cite{A104}, natural language processing \cite{A105,A106}, speech recognition \cite{A107}, computer vision \cite{A108}, etc., which have penetrated every aspect of our lives. AI provides strong support for intelligent applications such as unmanned driving, face recognition, personalized recommendation, and medical image processing \cite{A109}.
In the Metaverse, AI techniques would also contribute a lot to future development. For example, with machine learning techniques, the Metaverse could be equipped with strong abilities in data processing and analysis. Techniques of computer vision simulate realistic images and create vivid scenarios and ``avatars'' in the Metaverse. As we all know, the ``avatars'' in the Metaverse are not simple replications of humans in real life; they also require additional smart functions to act like humans, where AI is indispensable. As the authors in \cite{A110} conclude, AI shows great significance both in the foundation and in the development of the Metaverse, and would aid a lot in providing immersive user experiences.
\ \par \noindent \textbf{e) Blockchain}
\ \par
Blockchain refers to a decentralized and intelligent distributed ledger platform that helps establish a shareable, trustworthy, and durable mechanism. It is composed of blocks that automatically form a chain according to their time of generation and can only be revised according to strict rules and open protocols \cite{A111}. Instead of relying on other institutions to provide credit evaluation, the blockchain adopts technologies from cryptography and computer science to ensure security and build a corresponding secure shared database. Due to these characteristics, it has huge potential in finance, logistics, and other areas with high security demands.
For example, the blockchain could remove the dependence on third parties and realize direct point-to-point connections in financial fields, which not only reduces costs but also helps complete transaction payments quickly \cite{A112}. In the area of smart logistics, the blockchain could trace the production and delivery processes of items and improve the efficiency of supply chain management at lower cost \cite{A113}. In addition, the blockchain can prove the existence of a file or digital content at a specific time through a hash timestamp, and its characteristics of openness, non-tampering, and traceability provide a complete solution for judicial authentication protection and anti-counterfeiting traceability. Moreover, the blockchain can simplify the complex multi-level structure in the energy system and reduce the transaction cost of energy as well \cite{A114}.
Blockchain technology has demonstrated high performance in optimizing big data applications, data circulation, and data sharing. The future Metaverse will generate massive amounts of data, and existing centralized data storage mechanisms fail to tackle such overwhelming challenges. Therefore, storage and processing solutions based on the blockchain are expected to show significance in the future Metaverse. In addition, the non-tampering and traceability features of the blockchain ensure the authenticity and high quality of data and would provide the basis for the development of the Metaverse, which also has its own virtual economic systems \cite{A116}.
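As a toy illustration of the hash-chaining that underlies these non-tampering and traceability properties (a minimal sketch; real platforms add consensus protocols, digital signatures, and much more):
\begin{verbatim}
import hashlib, json, time

def block_hash(block):
    # Hash the canonical JSON serialization of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]
    block = {"index": prev["index"] + 1,
             "timestamp": time.time(),      # ordered by generation time
             "transactions": transactions,
             "prev_hash": block_hash(prev)} # link that exposes tampering
    chain.append(block)
    return block

chain = [{"index": 0, "timestamp": 0.0,
          "transactions": [], "prev_hash": ""}]
append_block(chain, [{"from": "avatar_a", "to": "avatar_b", "amount": 3}])
\end{verbatim}
Because each block stores the hash of its predecessor, altering any historical record changes every subsequent hash and is immediately detectable.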
\subsubsection{Potential techniques in aspect of space convergence}
The convergence among physical, social, thinking, and cyber spaces is promoted by the development of Internet technologies and has gradually formed the CPST hyperspace \cite{A1}. As can be seen in Figure \ref{4}, the techniques of space convergence make it possible to break spatial boundaries and lay the fundamental foundation for the emergence of the Metaverse. In this section, we introduce the enabling techniques of space convergence, which would provide possibilities for the future Metaverse.
\begin{figure}[!ht]
\centering
\includegraphics[width=7.5cm]{3.png}
\caption{Techniques related to space convergence.}
\label{4}
\end{figure}
\ \par \noindent \textbf{a) Identity modeling}
\ \par
As the Metaverse depicts a digitalized world similar to the real space, it includes elements such as ``humans'', ``animals'', and ``plants'', as well as the economy, education, traveling, rules, and laws. Identity modeling plays a significant role in the Metaverse, as it not only helps create an identification for given objects but also achieves accurate mapping between the Metaverse and the real world.
Generally, identity modeling can be divided into two categories: ID-based and nID-based identity modeling. As the name implies, ID-based identity modeling depicts identity by specific codes, like the ID card of humans in the real world. Research on ID-based identity is ongoing. For instance, Masinter et al. propose a Uniform Resource Identifier scheme with a string of characters to identify objects \cite{A48}. Brock et al. put forward the Electronic Product Code (EPC) based on the Universal Product Code (UPC), which adopts simple and extensible codes to track objects throughout their life cycle \cite{A49}. All these explorations guide ID-based identity modeling, which is also applicable in the future Metaverse.
In addition to ID-based identity modeling with tags assigned from outside, objects also have specific attributes of their own, such as natural attributes and social attributes. Ning et al. first proposed the concept of nID, which is used to represent objects when an ID does not exist, is untrusted, or is damaged \cite{A50}. They further presented a tree-code model combining ID and nID information to realize the representation of objects \cite{A51}. Usually, in nID-based identity modeling, techniques such as semantic ontology are commonly used for representing attributes, especially the RDF and OWL ontology languages \cite{A52}. For example, Hasemann et al. model embedded devices in the RDF and OWL languages and extend this identification method to a variety of hardware scenarios \cite{A53}.
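A minimal sketch of how one Metaverse entity might carry both kinds of identity (the field names and attribute vocabulary are our own illustration; the code follows the spirit of EPC-style IDs \cite{A49} and RDF-style attribute triples \cite{A52}):
\begin{verbatim}
# ID-based tag plus an nID-style attribute description for one avatar.
avatar = {
    "id": "urn:epc:id:sgtin:0614141.812345.6789",  # hypothetical code
    "nid": {("avatar_42", "hasOwner", "user_7"),
            ("avatar_42", "locatedIn", "campus_scene_3"),
            ("avatar_42", "hasRole", "lecturer")},
}

def match_by_attributes(entity, required):
    # Fallback identification when the ID tag is missing or untrusted:
    # recognize the entity if its triples cover the required set.
    return required.issubset(entity["nid"])
\end{verbatim}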
With the help of identity modeling, accurate mapping between the Metaverse and the real world would be achieved and would also support higher applications in the Metaverse such as relationship traceability, safety certification, and unified management.
\ \par \noindent \textbf{b) Space-time consistency}
\ \par
The accurate mapping between the real space and the Metaverse also puts forward high requirements for space-time consistency in order to achieve seamless interactions.
There is already some research on spatial-temporal consistency between real and virtual spaces. For example, Li et al. proposed a space-time registration model to realize space and time synchronization between reality and virtuality, which adopted the electronic support measure and an unscented Kalman filter to compute the space-time biases and perform time synchronization, respectively \cite{A54}. Besides, Zhou et al. proposed a metric to quantify the space-time inconsistency in large-scale distributed virtual environments and verified through Ping-Pong game experiments that this metric can effectively evaluate space-time consistency \cite{A55}. Additionally, Zhong et al. realized long-sequence tracking of objects in a moving object detection system based on the characteristics of space-time consistency \cite{A56}. All these technical explorations could provide guidance for achieving space-time consistency in the future Metaverse.
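As a rough illustration of how such inconsistency could be quantified (our own simplification in the spirit of \cite{A55}, not the authors' exact metric):
\begin{verbatim}
import numpy as np

def spacetime_inconsistency(t, pos_a, pos_b):
    # t: common sample timestamps; pos_a/pos_b: (n, 3) positions of the
    # same entity as observed by two sites. Integrates the positional
    # divergence over time; 0 means the sites agree at every instant.
    divergence = np.linalg.norm(pos_a - pos_b, axis=1)
    return np.trapz(divergence, t)  # trapezoidal rule over the samples
\end{verbatim}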
The Metaverse depicts a world full of various activities, and space-time consistency is significant for keeping communication and collaboration smooth and successful.
\ \par \noindent \textbf{c) Session management}
\ \par
In the Metaverse, an increasing number of immersive services would be available, such as virtual dressing, digital transactions, and space travel. There is also a need to track and monitor humans' activities in real time to guarantee successful interactions and communication. In this regard, session management plays an important role. It refers not only to the management of sessions across real life and the Metaverse but also to the applications and services that take place in the Metaverse.
Many researchers have done related work on session management in virtual space, especially on the Internet, which can be adopted in the future Metaverse. Johnston used a password authentication method based on HTML forms to track user authentication and login sessions on mail-order sales websites \cite{A57}. Gutzmann supervised the behavior of users accessing the network via session management in the HTTP environment, realized through a cookie-based ticket mechanism \cite{A58}. Additionally, Poggi et al. used machine learning algorithms and a Markov-chain model to predict the interactive behavior of web users, according to which resources are reasonably allocated to users in advance \cite{A59}.
Session management technology largely overlaps with technologies that are meaningful for security and privacy protection. With the help of identity authentication, access control, encryption algorithms, and other solutions, it ensures the transparency and security of sessions. These ideas offer much inspiration for session management in the Metaverse and provide guidance for further development.
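A minimal sketch of a cookie-style signed ticket in the spirit of the ticket mechanism in \cite{A58} (the key handling and field layout are our own illustrative assumptions):
\begin{verbatim}
import base64, hashlib, hmac, time

SECRET = b"server-side-secret"   # hypothetical key; never sent to clients

def issue_ticket(user_id, ttl=3600):
    # Mint a signed, expiring ticket the client presents on each request.
    payload = ("%s|%d" % (user_id, int(time.time()) + ttl)).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_ticket(ticket):
    # Return the user id if the signature is valid and unexpired.
    try:
        encoded, sig = ticket.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded.encode())
    except Exception:
        return None
    good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None                       # forged or corrupted ticket
    user_id, expires = payload.decode().split("|")
    return user_id if time.time() < int(expires) else None
\end{verbatim}
Because the server alone holds the key, a client cannot forge or extend a session, which is the essence of ticket-based session security.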
\ \par
\noindent
\textbf{d) Resource management}
\ \par
When it comes to resources in the Metaverse, we discuss them from two aspects, namely hardware and software. The supporting hardware resources mainly refer to the core components, including chips, display screens, interactive devices, terminals, etc., while the software may refer to operating systems, rendering tools, computing capabilities, memory storage, file data, etc. Reasonable resource management could promote proper and orderly resource allocation according to dynamic requirements and guarantee a stable and disciplined resource-usage environment for the Metaverse.
It is worth noting that there is already some research on resource management. For example, Czajkowski et al. propose a resource management architecture that solves the problem of resource allocation between different components. This architecture has been verified on a meta-computing system platform including 15 sites, 330 computers, and 3,600 processors \cite{A63}. Kim et al. build an efficient resource management scheme (ERMS), which allocates IoT storage resources based on the XML standard, laying a good foundation for the distributed storage and processing of data \cite{A64}.
Since the Metaverse is still in its infancy, there is currently no mature mechanism for resource management. However, existing works on resource management in IoT or cyberspace could be adopted for reference. In addition, the Metaverse will own some novel resources like ``avatars'' and virtual identities, and additional requirements will need to be satisfied in its resource management, such as the ownership, management, and disposal of multiple virtual identities.
\ \par \noindent \textbf{e) Energy management}
\ \par
As we can imagine, the Metaverse provides immersive applications and services to human beings depending on various types of energy. There may be novel forms of energy, such as thin films for solar cells, in addition to oil, steam, and electricity. This puts forward high requirements for effective energy management.
There already exist some explorations of energy management in real life, which could guide that in the Metaverse. For instance, based on the principle of harvesting energy and saving electricity, Zhang et al. transformed the network energy utility optimization problem into a stochastic problem and proposed a time-scale energy allocation algorithm based on the Lyapunov framework \cite{A65}. Yang et al. modeled the energy optimization problem as a multi-agent reinforcement learning formulation and proposed an energy management method based on collaborative multi-agent deep reinforcement learning, which could be used to implement radio block assignment and transmission power control strategies \cite{A66}. All these techniques would inspire energy management in the Metaverse.
Moreover, since there is a need to support interactions between the Metaverse and the real space, energy management may need to overcome the boundaries between virtuality and reality. Considering these specific requirements, open issues such as energy transmission between virtual and real spaces, consumption calculation, and optimization need to be resolved. There is still a long way to go towards efficient energy management in the Metaverse.
\subsubsection{Potential techniques in aspect of virtuality and reality interaction}
As discussed above, the Metaverse is a paralleled digitalized world while humans are still in the real physical space, hence there is an overwhelming interaction between virtuality and reality. Related technologies are not only a better way to enter the virtual Metaverse, but also the best choice for creating immersive scenes in the Metaverse. At present, there have been some relatively mature explorations in virtuality and reality interaction. In this section, we mainly introduce some typical ones that may contribute a lot to the development of the Metaverse.
\ \par \noindent \textbf{a) Virtual Reality (VR)}
\ \par
The concept of VR was first proposed by Jaron Lanier in 1989, when he pointed out that VR reestablishes the relationship with the physical world on a new plane \cite{A30}. However, it is worth noting that VR does not exert any influence on the subjective world. VR only helps depict a virtual environment generated by computers, which usually depends on external devices such as helmets and glasses. It could help generate a real-time simulation through multi-sensory channels of taste, smell, vision, sound, touch, etc., so that users are completely immersed in a virtual environment \cite{A29}.
With the help of VR, it is easier to make people believe in the environments they feel and achieve an unprecedented immersive experience. In general, VR techniques show high efficiency in the fields of 3D audiovisual modeling, tracking, and interaction. For example, 3D audiovisual technologies can ``puzzle'' the brain, and their high-definition realistic effects make users believe that what they see is what they get \cite{A32}. 3D tracking technologies can track the movements and rotations of observers to realize precise real-time positioning, and they have been adopted in areas such as autonomous driving \cite{A34}. 3D interactive technologies provide possibilities for users to be immersed in the virtual world; common tools including intelligent gloves, smart glasses, and helmets are widely adopted for household and commercial purposes \cite{A35}.
To maximize the user's sense of immersion, VR technologies need to break through their limitations, such as the strong dependence on devices. In recent years, autostereoscopy has become popular since it makes it possible to display stereoscopic images based on the theories of the parallax barrier, lenticular lens, and light field, rather than depending on any specific devices \cite{A33}. This is a great step forward towards the immersive user experience and the future Metaverse, as there is no strict dependence on complicated helmets and glasses, which would largely improve the user experience.
\ \par \noindent \textbf{b) Augmented Reality (AR)}
\ \par
AR is a technology that skillfully blends virtual information with the real world, and it adopts a variety of techniques, including multimedia, 3D modeling, real-time tracking and registration, intelligent interaction, sensing technologies, etc. \cite{A31}. Different from VR technologies, AR emphasizes the extension of the real world and helps enhance the objects residing in the real physical space by adding multiple sensory modalities \cite{A36}.
AR needs the support of a complete set of hardware devices, such as processors, displays, sensors, and input devices. Among these components, modern mobile computing devices such as smartphones and tablets can act as processors to provide powerful processing and computing capabilities. As for displays, the head-mounted display (HMD) is a common display device worn on the forehead, like a helmet. In modern HMDs, sensors are typically used for precise monitoring, enabling the system to align virtual information with the physical world and adjust accordingly based on the user's head movements \cite{A37}. Sensors are significant in AR techniques because capturing the surrounding real environment still needs to be realized via them, for instance, accelerometers, GPS, infrared sensors, and so on. The input devices involved in AR technologies are mainly cameras or webcams. Some even require speech recognition systems that translate what the user says into instructions the computer can understand, and gesture recognition systems that interpret body movements through visual detection or sensors. Giant IT companies are devoted to the research and development of advanced AR equipment, for example, the AR glasses ``Project Iris'' from Google and ``Shield AR'' from Vuzix.
VR and AR are a pair of similar concepts, but there are still some differences between them. In VR, the user's perception of reality depends completely on virtual information, while in AR, users gain additional computer-generated information on top of the data collected from real life, thereby enhancing their perception of reality \cite{A38}. It is worth noting that both are beneficial technologies for improving users' immersion and will be key technologies of the future Metaverse.
\ \par \noindent \textbf{c) Mixed Reality (MR)}
\ \par
MR is another technique that helps us get rid of the restrictions of the screen and improves users' immersion to a large extent. Rather than creating a completely virtual scene, MR focuses more on the instinctive interactions between the real world and the digitalized space, similar to the AR techniques discussed above \cite{A39}. It could be regarded as a hybrid of AR and VR, where a transition between VR and AR can be achieved simultaneously.
\begin{table}[!ht]
\centering
\caption{The differences between VR, AR, and MR.}
\label{tab2}
\begin{tabular}{ccc}
\toprule
\toprule
Type & Description & Characteristic \\
\midrule
VR & \multicolumn{1}{m{3.5cm}}{In the VR world, everything is virtual and totally created by various techniques.} & \multicolumn{1}{m{2cm}}{Virtuality} \\
\midrule
AR & \multicolumn{1}{m{3.5cm}}{The AR technique creates virtual things and then overlays them onto the real world.} & \multicolumn{1}{m{2cm}}{Virtuality and reality, from virtuality to reality} \\
\midrule
MR & \multicolumn{1}{m{3.5cm}}{The MR technique virtualizes real things and then overlays them into the virtual world. It needs to keep instant communication with the real world.} & \multicolumn{1}{m{2cm}}{Virtuality and reality, from reality to virtuality} \\
\bottomrule
\bottomrule
\end{tabular}
\end{table}
MR needs real-time access to effective information about real objects, must build digital models of them, and then realizes the co-existence of and interaction between real and virtual objects. To better distinguish between VR, AR, and MR, we make a comparison in Table \ref{tab2}.
Generally, MR techniques are more complex because real objects need to be virtualized first. During the process of virtualization, a camera scans objects for 3D reconstruction. Since the picture captured by the camera is two-dimensional and the depth information is lost, it is necessary to reconstruct and generate a virtual 3D object, which we call a real virtualized object. This kind of MR technique is popular in our daily life; for example, the AR filters in social media provide MR user experiences. In addition, MR techniques also show huge potential in areas of education, entertainment, remote working, etc. They will become mainstream in the future Metaverse.
\begin{figure*}[!ht]
\centering
\includegraphics[width=16cm]{Timeline.png}
\caption{The development of human-centered communication.}
\label{fig_4}
\end{figure*}
\ \par \noindent \textbf{d) Brain-Computer Interface (BCI)}
\ \par
BCI, sometimes called a ``brain port'', is a direct connection path established between the brain and external equipment \cite{A40}. In general, common BCIs can be divided into unidirectional and bidirectional BCIs, according to the direction of instruction transmission. As the name suggests, a unidirectional BCI either accepts commands from the brain or sends signals to the brain, but cannot do both at once, while a bidirectional BCI allows information to be exchanged between the brain and external devices simultaneously.
Through the connection between the brain and the computer, people can freely acquire information, socialize with each other, and even obtain sensory experiences such as taste and touch in the virtual world \cite{A41}. Compared with ``traditional'' media, which only provide two-dimensional sensory experiences such as audio and video, BCIs can bring a revolutionary experience to the Metaverse.
Since 2019, Neuralink, Musk's BCI company, has repeatedly drawn public attention with its advances in BCIs \cite{A43}. It first announced its BCI system on July 17, 2019. On August 29, 2020, Elon Musk showed a Neuralink brain implant working in a pig and successfully reading its brain activity \cite{A23}. This is a big step forward, as it implies future possibilities for deeper connections between human brains and computers, which may be one of the hot topics in the future Metaverse.
The emergence of BCIs enhances the interaction between reality and virtual space. In the future Metaverse, it may be possible to control objects with brain waves to a certain extent, and users could move every part of the body freely according to their wishes. Interaction will no longer need rigidly preset movements, and users can engage with the virtual world as naturally as possible \cite{A42}.
\ \par \noindent \textbf{e) Game Engine}
\ \par
When it comes to the interaction between virtuality and reality, the gaming industry is one of the first industries to be noticed. Especially in recent years, the popularity of somatosensory games and interactive projection games has brought the gaming industry into a new stage. Under such circumstances, game engines have gradually emerged, providing software platforms for game designers to develop various video games.
A game engine is composed of editable systems or applications that allow game designers to develop games much more efficiently and conveniently without starting from scratch \cite{A44}. To some extent, the game engine can be regarded as a predefined set of code for certain games that the machine can understand. As the authors in \cite{A45} mention, a game engine usually includes a rendering engine, physics engine, collision detection system, sound effects, script engine, computer animation, network engine, scene management, etc. By combining these technologies, it can integrate different game resources, such as images, sounds, and animations, and design according to the requirements.
Nowadays, game engines are not only built for games but can also support the creation of interactive high-fidelity graphics and environments. Since the gaming industry is one of the first hot fields in the Metaverse, the game engine also serves as a significant means of realizing the Metaverse. In 2020, the company MetaVRse announced the MetaVRse engine, a new 3D game engine for non-coders \cite{A46}. It enables users to create immersive experiences in the no-code movement, which is a great step toward the future Metaverse.
\ \par \noindent \textbf{f) 3D modeling}
\ \par
Generally speaking, 3D modeling is the process of constructing a mathematical model of a given object by replicating the parameters of its surface, such as edges, vertices, polygons, etc. The products created by 3D modeling techniques such as 3D rendering and 3D computer graphics are called 3D models.
3D modeling plays a significant role in education, entertainment, healthcare, etc. For example, 3D modeling can build a virtual museum where people can enjoy all kinds of exquisite artwork without leaving home \cite{A60}. In healthcare, Winzenrieth established a 3D model based on dual-energy x-ray absorptiometry, which allowed a more precise assessment of a drug's therapeutic effect \cite{A61}. Additionally, 3D modeling can improve the success rate of surgery; a novel 3D modeling tool for virtual nose surgery has huge potential to help surgeons model potential surgical maneuvers and minimize complications \cite{A62}. The popular holographic projection is one kind of 3D modeling technique, which originally refers to the technology of recording and reproducing the real 3D image of an object based on the principle of interference. For example, with the help of holographic projection, the late Teresa Teng could ``stand'' on stage singing and providing a visual feast.
3D modeling technologies play significant roles in the construction and development of the Metaverse. They help replicate the architectural style of real-life buildings and make scenarios more realistic. 3D modeling technologies can be regarded as the foundation of the Metaverse, serving as an important bridge that keeps an accurate mapping between the Metaverse and real life.
\ \par \noindent \textbf{g) Real-time rendering}
\ \par
Real-time rendering is a branch of computer graphics that studies how to create and analyze images in real time \cite{A73}. It can quickly render a constantly changing 3D environment and create the illusion of movement. Real-time rendering takes advantage of the ``persistence of vision'' of human eyes, whereby an observed image lingers briefly on the retina. Hence, humans cannot notice frame switching as long as the switching speed is fast enough; the result looks like a continuously changing animation. At present, real-time rendering techniques are mainly used in the graphic design of video games and the production of movies.
Compared with the production of traditional animation, real-time rendering focuses on interactivity and real-time responsiveness \cite{A74}. Generally, the scene is optimized to improve calculation speed and reduce time delay. Any user operation, such as a finger swiping across the screen, a mouse click, or keyboard input, causes the screen to be recomputed, and users need to get real-time feedback after the operation. Therefore, real-time calculation and response are significant for a better user experience.
However, the quality of real-time rendering is deeply limited by hardware, which drives the industry to focus more on technical innovation. With the improvement of GPU performance, real-time computing has become faster and image computation much more accurate. For example, NVIDIA launched RTX Real-Time Ray Tracing technology in 2018, which makes it possible to render more realistic images \cite{A75}.
Since the Metaverse is a digitalized world, most of its scenes are virtual and need to be rendered in real time. With the development of real-time rendering technologies, the Metaverse can provide immersive user experiences, coupled with strong interactivity and timely response.
\subsubsection{Potential techniques in the aspect of human-centered communication}
The development of human-centered communication not only drives the emergence and prosperity of the Metaverse but also puts forward higher requirements for Metaverse development. As can be seen in Figure \ref{fig_4}, human-centered communication has gone through several stages, from primitive communication relying on body movements or language to the emergence of wired and wireless communication and instant messaging in the 20th century. Great changes have taken place in human-centered communication; especially in recent years, the development of social media such as Instagram, MSN, Facebook, and WeChat has penetrated every corner of modern life, greatly breaking through physical boundaries and providing much convenience for communication.
However, at this stage, communication still relies on physical media, such as mobile phones, computers, and other electronic devices. Users still face ``screens'' and cannot achieve a more immersive experience. The Metaverse, serving as a higher-level living space, gives birth to new communication with advanced immersion and feeling. By enabling multi-dimensional and multi-sensory communication methods, users can communicate and establish immersive social contact with each other, further breaking spatial-temporal limitations. Usually, the communication methods in the Metaverse include traditional audio-visual communication through text, voice, image, etc., and higher-dimensional communication such as simulated touch, taste, and other senses.
Under such circumstances, it is necessary to perceive and process complex and multi-dimensional information in the Metaverse to maintain successful communication. Hence, techniques related to 3D communication methods, social network analysis, social aware computing, cognitive computing, swarm intelligence, and affective computing are becoming vital and will help greatly in human-centered communication. In this section, we give a further introduction to these techniques.
\ \par \noindent \textbf{a) 3D communication methods}
\ \par
The Metaverse should provide users with freer and more open communication scenarios to improve communication experience and efficiency. In addition to traditional audio-visual communication, more immersive channels such as signals, tastes, and feelings are expected. As the authors in \cite{A67} argue, the Metaverse allows users to communicate seamlessly with digital artifacts via different approaches, for instance, XR systems with stereoscopic displays, spatial or binaural audio with soundscape construction, handheld input devices, etc.
Specifically, handheld input devices play specific roles in human-centered communication in the Metaverse. For example, motion controllers that allow people to touch, grab, and manipulate virtual objects support active interaction and communication \cite{A68}. Additionally, multi-sensory communication overcomes the restrictions of ``screens'': humans can communicate their feelings with each other face to face, even while in different places in real space. It gets rid of the shackles of screens and other intermediate media, and largely improves users' immersive experience during communication.
\ \par \noindent \textbf{b) Social network analysis}
\ \par
The social network is a typical complex network, defined as a social structure composed of nodes and edges. Nodes usually stand for individuals or organizations, while edges usually refer to social relationships. Social networks contain massive amounts of information, reflecting group interaction behaviors and social relationships among users \cite{A69}. Their sociality, represented in the graph structure, gives rise to unique characteristics such as centrality, betweenness, and modularity. Via social network analysis, more hidden information can be obtained \cite{A70}. Two kinds of social network structures are especially common: small-world networks and scale-free networks.
In small-world networks, two properties matter most: the average path length and the average clustering coefficient \cite{A71}. The average path length refers to the average distance between nodes in the network, while the average clustering coefficient refers to the probability that two neighbors of a node are themselves neighbors. By analyzing these parameters, it is possible to predict hidden relationships between different nodes.
The scale-free property comes from the study of complex networks: the degree distribution of the network obeys a power law \cite{A72}. In social networks, this means that most nodes have few edges, while a few nodes have a large number of edges. That is, a few people have complex social relationships, while most people have simple ones. According to research, well-known social networks such as Flickr, YouTube, and LiveJournal all exhibit small-world phenomena and scale-free characteristics \cite{A97}. These two structural notions can be computed directly, as shown in the sketch below.
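The following minimal Python sketch uses the open-source \texttt{networkx} library to compute the small-world statistics and to inspect the hub-dominated degree distribution of a scale-free graph. The generator parameters are illustrative choices only and are not taken from the cited studies.
\begin{minted}[fontsize=\small]{python}
import networkx as nx

# Small-world graph (Watts-Strogatz): 1000 nodes, each joined to its
# 10 nearest neighbors, with a 10% chance of rewiring each edge.
sw = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1)
print("average path length:", nx.average_shortest_path_length(sw))
print("average clustering coefficient:", nx.average_clustering(sw))

# Scale-free graph (Barabasi-Albert preferential attachment):
# a few hubs accumulate many edges, while most nodes keep only a few.
sf = nx.barabasi_albert_graph(n=1000, m=3)
degrees = sorted((d for _, d in sf.degree()), reverse=True)
print("top-5 hub degrees:", degrees[:5])
print("median degree:", degrees[len(degrees) // 2])
\end{minted}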
Analyzing the graph structure and social characteristics of social networks allows us to study the laws existing within them and to reproduce and infer social features. At present, social network analysis is mostly used for social recommendation \cite{A98}, social influence modeling \cite{A99}, user behavior prediction, etc. \cite{A100}. Since the relationships and interactions in the Metaverse are not limited to human-human but also include human-``avatar'' and ``avatar''-``avatar'', social network analysis techniques will contribute a lot to the Metaverse, especially in the prediction, deduction, and management of complicated social relationships and behaviors.
\ \par \noindent \textbf{c) Social aware computing}
\ \par
The continuous development of pervasive computing has largely improved the ability to acquire and process data. Combined with progress in social computing, a new paradigm of social aware computing has emerged. Social aware computing refers to the process of real-time perception and social behavior recognition, which analyzes and mines social characteristics, further supporting interaction, communication, and cooperation in the community \cite{A76}. Since the Metaverse can be regarded as a world parallel to the real world, with its own structure and social system, it is equally important to carry out social aware computing in the Metaverse.
The techniques of social aware computing mainly include large-scale pervasive sensing, activity and interaction analysis, social interaction support, software frameworks and methodology, and applications. Based on these techniques, it is possible to acquire real-time data via sensing devices, analyze social behavior, and provide auxiliary suggestions for social activities. At present, social aware computing has been widely used in network communication and smart traffic \cite{A79}. For example, Liu designs a congestion control scheme based on social aware computing in Delay Tolerant Networks \cite{A77}. Zhang adopts social characteristic computing in urban informatics, and the proposed scheme largely improves the efficiency of content dispatch \cite{A78}.
As the Metaverse aims to depict an immersive digital world with complicated social systems and mechanisms, it puts forward higher requirements in context awareness and social computing. The techniques of social aware computing make it possible to understand complicated environments well and provide much more intelligent services and user experiences.
\ \par \noindent \textbf{d) Cognitive computing}
\ \par
Cognitive computing refers to the ability of intelligent systems to simulate human cognition, especially the way the human brain works. With the help of techniques such as information analysis, natural language processing, and machine learning, cognitive computing can synthesize data from its surroundings and make intelligent decisions as required. It attempts to resolve the inaccuracy and uncertainty found in biological systems and realizes, to different degrees, the processes of perception, memory, learning, language, thinking, and problem-solving \cite{A80}. By training on and learning from large amounts of data, cognitive computing systems can improve their pattern recognition and data processing, providing more possibilities for problem prediction and solution modeling.
Cognitive computing aims to approach human intelligence as closely as possible to better deal with complex problems of learning, reasoning, and deduction. For example, IBM Watson is an expert system that can help doctors treat their patients by providing suggestions based on its knowledge system during the medical process \cite{A81}. However, it is worth noting that specific abilities are needed to achieve cognitive computing, such as adaptability, interactivity, state iteration, and context understanding. Adaptability ensures the ability to adjust to dynamic data and environment changes, and interactivity represents the capacity to interact with external humans or devices. State iteration enables the identification of uncertain problems by asking questions or extracting additional information, and context understanding emphasizes the capability to better understand, identify, and mine contextual data. All these key attributes make cognitive computing successful.
In the Metaverse, there will exist simulated systems that act like humans, for example, ``avatars'' with their own ``ideas'', ``thoughts'', ``intelligence'', etc., so cognitive computing will be both common and expected. Such human-like intelligent systems can largely augment human intelligence and deal with more complicated issues. Cognitive computing will strengthen the Metaverse with efficient data processing, abundant expert knowledge, and the most appropriate decisions as expected.
\ \par \noindent \textbf{e) Swarm Intelligence}
\ \par
Swarm intelligence is a class of methods inspired by the intelligent behaviors of ant colonies or bees in nature, which cooperate and communicate according to specific rules to achieve a common goal \cite{A83}. The concept was first proposed in 1989 by Gerardo Beni and Jing Wang and was adopted in cellular robotic systems, in which a collection of robots works together to achieve goal tasks, even though each robot has only limited processing ability \cite{A82}.
Generally, in cyberspace and IoT, swarm intelligence has shown potential in managing many hardware devices and achieving complex functions. For example, Luo proposes a fault-tolerance algorithm based on particle swarm optimization for IoT to overcome the fault-tolerant routing issue \cite{A85}. Zedadra overviews swarm intelligence-based algorithms and points out their possible applications in IoT systems, including node localization, optimization control, medical care, etc. \cite{A87}. In \cite{A88}, the authors also design a swarm intelligence-based approach for traffic light scheduling, which has brought huge benefits in practical applications.
The framework of swarm intelligence usually includes three parts: perception participants, the network layer, and end users \cite{A86}. The perception participants are intelligent robots and group users; the network layer is responsible for data storage, transmission, and processing; and the end users are those who send requirements and expect feedback from the perception participants and network layer. All parts cooperate and work toward the best results.
Considering its characteristics, swarm intelligence is particularly significant for supporting distributed social structures. As Chen concluded, the Metaverse is a digital world where entities can cooperate and tackle significant difficulties together; it is less limited by space and time and has more decentralized social characteristics \cite{A84}. Therefore, swarm intelligence can play an important role in the Metaverse and efficiently guide further system construction and social management.
\section{Open issues and challenges of technologies in the Metaverse}
Since the Metaverse is still in its infancy, a comprehensive and mature technical framework cannot be established at once. Some open issues and challenges need to be considered for the further development of the future Metaverse.
\subsection{The balance between virtuality and reality}
The Metaverse is a digitalized virtual world with strong interaction between virtuality and reality, and some disputes about this interaction have emerged. Will virtuality completely replace reality, or will it conversely drive the development of reality? How can humans switch freely between virtuality and reality and achieve a balance? How should the conflicts and coordination between virtuality and reality be handled? These are all challenges that need to be considered in the future development of the Metaverse.
\subsection{The fusion between different techniques}
The Metaverse relies on various techniques to support its applications, and hence needs seamless fusion, coordination, and collaboration between different technologies to address various challenges. It is necessary to better manage the superposition, compatibility, and integration of multiple technologies to keep the technical system stable and reliable.
\subsection{The gap between theoretical research and practical applications}
Although the Metaverse has become very popular recently, most of its research is still in the primary stage of theoretical exploration. The huge gap between theoretical research and practical applications is one of the challenges in the Metaverse. It is essential to consider the development of infrastructure, the imbalance of technical resources, the lack of recognized industry standards, the difficulties of large-scale production, etc. In addition, the most appropriate business models and interaction methods for the Metaverse remain to be explored.
\subsection{The resource and energy management}
Problems related to resource and energy management are also open issues: how to maximize the utilization of resources in the Metaverse, how to deal with its energy consumption, and how to resolve the contradiction between technological development on the one hand and energy consumption and resource imbalance on the other. These will become dominant challenges in the technological development of the Metaverse.
\subsection{The security and privacy protection}
Since the Metaverse is a parallel, digitalized world, most personal information of users is at risk of leakage and invasion. It is urgent to establish a safe and comprehensive security and privacy protection mechanism in the Metaverse to guard against such risks, for instance, by strengthening identity authentication and security management.
\section{Conclusions}
The Metaverse has stirred up many hot topics in both academia and industry and has led to novel explorations. Different from other works, we introduce the Metaverse from a new technology perspective, including its essence, corresponding technical framework, and possible technical challenges in the future. In particular, we conclude four pillars of the Metaverse, named ubiquitous connections, space convergence, virtuality and reality interaction, and human-centered communication, and establish the corresponding technical framework. Moreover, we analyze some open issues and challenges of the Metaverse in its technical aspects. The Metaverse depicts a digitalized world with immersive user experience, but most of its research is at the theoretical level, and there is still a long way to go.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Form-like document understanding has become a booming research topic recently thanks to its many real-world applications in industry.
Form-like documents refer to documents with rich typesetting formats, such as invoices and receipts in everyday inventory workflow.
Automatically extracting and organizing structured information from form-like documents is a valuable yet challenging problem.
Recent methods~\citep{xu2020layoutlm, garncarek2021lambert, lee2022formnet} often discuss the problem of form-like document understanding, e.g.~document entity extraction (DEE), in the supervised setting, assuming the training and test sets are of the same document type.
However, in real-world scenarios, there is often the need for generalizing models from seen document types to new unseen document types.
Beyond annotation costs, endlessly training specialized models on new types of documents is not scalable in many practical scenarios.
Moreover, the methods in the supervised setting pre-define the document schema, \textit{i.e.} the set of entities contained in the document, following the sequence-to-sequence tagging framework via the BOISE labeling format~\citep{ratinov2009design}.
Consequently, the models lack the ability to learn from different documents with diverse schemas.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/Intro-v2-bigger.pdf}
\caption{Illustration of the zero-shot transfer learning stages of QueryForm\xspace. In the pre-training stage, we extract millions of schemas and entity-value pairs from publicly available webpages to generate a large amount of query-value pairs to teach the backbone model to make query-conditional prediction.
During fine-tuning, we extract more accurate entity-value pairs from the available annotated document and directly learn schema information from data.
Finally, we evaluate the pre-trained model on a different target document without training data.}
\label{fig:intro}
\vspace{-4mm}
\end{figure}
Thus, it is desirable to have a systematic way to transfer knowledge from existing annotated documents of different types to an un-annotated target document type (e.g., the invoice in Figure~\ref{fig:po_vs_invoice}, right). This learning paradigm is usually defined as zero-shot transfer learning in the literature~\cite{xu2021layoutxlm}. Beyond this, it is even more desirable to leverage highly-structured form-like documents with rich schemas, such as the form-like webpages in Figure~\ref{fig:po_vs_invoice}, left. Although webpages do not have explicit human annotations, we believe the diverse schemas and natural ``entities'' that exist in webpages, such as headers and text paragraphs, can be valuable for document understanding. However, how to effectively utilize these webpages, which differ greatly from documents like invoices and receipts, is an open yet challenging problem.
In this work, we propose a novel query-based framework, QueryForm\xspace, to learn transferable knowledge from different types of documents for zero-shot entity extraction on the target document type. Ideally, we would like to prompt the model: \textit{This document has the following \textsc{[schema]}, please extract its \textsc{[entity]} value}, and the model should be able to accurately predict the word tokens belonging to the queried entity. %
To this end, we encode both schema and entity information in our query, so that the model is no longer limited by a certain document type and a fixed set of entity types (or classes). Moreover, our query-based design can even benefit further from large-scale datasets with diverse schemas and entity types.
In order to feed this kind of composite query, we propose a \textit{dual prompting\xspace} strategy to effectively prompt the backbone model, \textit{e.g.}, a pre-trained Transformer, to make conditional prediction. As its name suggests, the dual prompting\xspace strategy consists of an E(ntity)-Prompt and a S(chema)-Prompt. Depending on the annotations we have, we can either generate the prompts from semantic labels, or learn them directly from data. Although similar concepts to dual prompting\xspace exist in the vision field~\citep{wang2022dualprompt,wang2022learning} to solve different problems, the main design in QueryForm\xspace is original in DEE.
We also propose a query-based pre-training method, QueryWeb\xspace, which leverages a highly-accessible and inexhaustible resource: publicly available webpages.
During the pre-training stage, the model learns to quickly adapt to various queries composed of different S-Prompt\xspace{s} and E-Prompt\xspace{s} generated from the HTML source of webpages. By decoupling entities and schemas from specific document types, the model can learn more transferable knowledge, leveraging the rich layout, scale, and content information in webpages to make query-conditional predictions.
In summary, our work makes the following contributions:
\begin{itemize}
\item We propose QueryForm\xspace, a novel yet simple query-based framework for zero-shot document entity extraction. QueryForm\xspace provides a new dual prompting\xspace mechanism to encode both document schema and entity information to learn transferrable knowledge from source to target document types.
\item %
We demonstrate an effective pre-training approach, QueryWeb\xspace, that collects publicly available webpages with various layouts and HTML sources, and pre-trains QueryForm\xspace via the dual prompting\xspace mechanism.
Although webpages show a high discrepancy from the target documents, we show this approach consistently improves the zero-shot performance.
\item With extensive empirical evaluation, QueryForm\xspace sets new state-of-the-art F1 score on both Inventory-Payment and FUNSD-XFUND zero-shot transfer learning benchmarks.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.999\linewidth]{figures/web_invoice_example.pdf}
\caption{Form-like examples of Webpage and Invoice documents. Webpages appear to have distinct layouts and contents from invoice documents, but both contain rich entity-value pairs, such as ``\textit{page title}-The 61st Annual Meeting of the Association for Computational Linguistics'' in the Webpage and ``\textit{total amount} - \$755'' in the Invoice.}
\label{fig:po_vs_invoice}
\end{figure}
\section{Related Work}
\textbf{Document entity extraction (DEE).} Researchers started to study extracting information from documents using rule-based models \citep{lebourgeois1992fast,o1993document,simon1997fast}, or learning-based approaches with hand-engineered features~\cite{marinai2005artificial,wei2013evaluation,schuster2013intellix}. These methods have limited representation power and generalization ability.
\noindent More recently, neural models have become the mainstream solution for document entity extraction (DEE). Both RNN-based~\citep{palm2017cloudscan,aggarwal2021form2seq} and CNN-based models~\citep{katti2018chargrid,zhao2019cutie,denk2019bertgrid} have been adopted for the DEE task. Motivated by the superior performance of Transformers~\citep{vaswani2017attention} in various NLU tasks~\citep{devlin2018bert,raffel2020exploring}, recent work develops multiple Transformer-based models for DEE. \citet{majumder2020representation} extended BERT~\cite{devlin2018bert} to learn representations for form-like documents;
~\citet{kim2022ocr} propose an encoder-decoder structure that directly extracts document information from image input; ~\citet{xu2020layoutlm,xu2021layoutxlm} leverage both image and text inputs to capture cross-modality information.~\citet{lee2021rope,lee2022formnet} further introduced GCN~\cite{kipf2016semi} to encode spatial relationships in addition to the Transformer backbone. However, these methods only consider the usual supervised learning setting, \textit{i.e.}, training and test sets are from the same document type. On the contrary, our work proposes a novel query-based framework and tackles the challenging yet under-investigated zero-shot transfer learning~\citep{xu2021layoutxlm} setting.
\noindent \textbf{Pre-training for DEE.} Existing large-scale pre-training techniques in NLP~\cite{devlin2018bert,conneau2019unsupervised,liu2019roberta} are readily available for serialized document tokens. Multimodal pre-training~\citep{xu2020layoutlmv2,xu2020layoutlm,xu2021layoutxlm,appalaraju2021docformer} achieves better performance than the text modality alone by incorporating visual information, at the cost of more expensive data collection and computation.
Our work presents a novel pre-training method using the text modality alone, which is complementary to models that rely on the image modality~\cite{kim2022ocr} or multiple modalities~\cite{xu2020layoutlm, xu2021layoutxlm}.
Moreover, we leverage publicly available webpages, which contain rich structured information and are much more accessible than documents. Different from the common Mask Language Model (MLM) objective used in pre-training, QueryForm\xspace has the same query-conditional objective during both pre-training and fine-tuning, which intuitively strengthens the transferability of pre-trained knowledge.
To the best of our knowledge, DQN~\citep{gao2022docquerynet} and Donut~\citep{kim2022ocr} are the closest work to ours in the DEE domain. However, our work is still very different from them from multiple perspectives, including problem setting, query design, and pre-training technique. On the other hand, leveraging webpages to pre-train language models has been explored in prior work.~\citet{liu2019roberta, brown2020language} extract text corpora from webpages and~\cite{aghajanyan2021htlm} use HTML source for pre-training. However, to the best of our knowledge, we are the first to leverage both webpages and the corresponding HTML source in a novel query-based pre-training framework to address the challenging zero-shot DEE task. Our framework fully takes advantage of the rich schema and layout information from webpages and utilizes HTML tags as weak entity annotation to align pre-training with the downstream DEE task.
\begin{figure*}[t]
\centering
\includegraphics[width=0.999\linewidth]{figures/QueryForm-v4-bigger.pdf}
\caption{Overview of QueryForm\xspace. Our dual prompting\xspace design yields a consistent objective in both pre-training and fine-tuning stages. Note that the schema query in pre-training comes from website domains while it is a learnable parameter in fine-tuning. See Section~\ref{sec:method} for more details.}
\label{fig:overview}
\end{figure*}
\section{Preliminaries}
\subsection{Problem Formulation} \label{sec:problem_formulation}
Given serialized words from a form-like document, we formulate the DEE problem as sequence tagging for tokenized words, \textit{i.e.}, for each word, we predict its corresponding entity class. Recent methods~\citep{xu2020layoutlm, garncarek2021lambert, lee2022formnet} use the BOISE labeling format~\citep{ratinov2009design} - classifying the token as \{${\bm{e}}$-Begin, Outside, ${\bm{e}}$-Inside, ${\bm{e}}$-Single, ${\bm{e}}$-End\} of a certain entity ${\bm{e}} \in \mathbf{E}$ to mark the entity span, where $\mathbf{E}$ is the set of entities of interest. Thus the cardinality of the label space will be $(4 \times |\mathbf{E}| + 1)$. In our formulation, we explicitly encode entity in the E-Prompt\xspace. Therefore, we are able to use a more succinct and generalizable BOISE labeling with only $5$ labels, \{Begin, Outside, Inside, Single, End\} to mark the span. Our approach decouples the label space with entity types. Following~\citet{lee2022formnet}, we then apply the Viterbi algorithm to get the final prediction.
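To make the span-marking scheme concrete, the following minimal Python sketch decodes a BOISE tag sequence into token spans. For brevity it decodes greedily and simply drops malformed transitions, whereas our system applies Viterbi decoding as described above.
\begin{minted}[fontsize=\small]{python}
def decode_boise(tags):
    """Map a per-token {B, O, I, S, E} tag sequence to (start, end)
    token spans. A greedy stand-in for Viterbi decoding."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "S":                   # single-token entity
            spans.append((i, i))
            start = None
        elif tag == "B":                 # begin a multi-token span
            start = i
        elif tag == "E" and start is not None:
            spans.append((start, i))     # close the open span
            start = None
        elif tag == "O":                 # outside: drop any open span
            start = None
        # "I" simply continues an open span
    return spans

# e.g. tokens "Total amount due : $755": one 3-token span, one single
print(decode_boise(["B", "I", "E", "O", "S"]))  # [(0, 2), (4, 4)]
\end{minted}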
In our work, we focus on the zero-shot DEE setting proposed by~\citet{xu2021layoutxlm}, where 1) the training source documents have a significant domain gap from the target test documents (e.g., languages or document types), 2) there are no training documents available from the target document type, and 3) the source documents include the entities contained in the target documents.
\subsection{Architecture design} \label{sec:arch_design}
Following the setting in earlier work~\citep{majumder2020representation, lee2022formnet}, our method takes the WordPiece~\cite{wu2016google} tokenized outputs from the Optical Character Recognition (OCR) engine in reading order (left-right and top-bottom). By design, our method is compatible with any sequence encoder model as the backbone. We adopt the long-sequence transformer extension ETC~\citep{ainslie2020etc} as our backbone, following~\citet{lee2022formnet}; it contains Rich Attention, an enhancement of self-attention layers that encodes 2D spatial layout information. We find this method (used as our baseline) performs fairly well in the usual supervised learning setup; however, its performance drops significantly in the zero-shot learning setting.
Note that in practice, one can use QueryForm\xspace with OCR engines that apply different heuristics, or with other model backbones~\cite{zaheer2020big}. This work focuses on how to enrich entity-querying abilities for forms via our proposed QueryForm\xspace.
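For concreteness, the serialization into reading order mentioned above can be sketched as follows; we assume each OCR word comes with its bounding-box coordinates, and the line-bucketing granularity (10 pixels) is an illustrative choice rather than a detail of our pipeline.
\begin{minted}[fontsize=\small]{python}
def reading_order(words):
    """Sort OCR word boxes top-bottom, then left-right. Each item is
    (text, x, y) with y growing downward; quantizing y groups words
    that sit on roughly the same line."""
    return sorted(words, key=lambda w: (round(w[2] / 10), w[1]))

words = [("$755", 400, 52), ("Total", 20, 50), ("Invoice", 20, 10)]
print([w[0] for w in reading_order(words)])  # ['Invoice', 'Total', '$755']
\end{minted}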
\section{Methodology} \label{sec:method}
We propose~QueryForm\xspace as a general query-based framework for solving the zero-shot DEE problem. QueryForm\xspace consists of a novel dual prompting\xspace strategy and a specially-designed pre-training approach called QueryWeb\xspace. As shown in Figure~\ref{fig:overview}, the model is first pre-trained on a large-scale webpage dataset to learn to make conditional predictions under relatively noisy queries generated from combinations of webpage domains (a proxy for schema) and HTML tags (a proxy for entities). Then, the model is fine-tuned on form-like documents with a unified schema to learn more specialized knowledge, learning schema information in the S-Prompt\xspace and encoding more accurate entity-level knowledge in the E-Prompt\xspace. Finally, we directly test the model on the target document type in a zero-shot fashion (Figure \ref{fig:intro}).
\subsection{Dual Prompting}
Given a serialized document represented as a sequence of tokens ${\bm{x}}$ from the set of all documents $\mathbf{X}$, and a set of entities of interest $\mathbf{E} = \{{\bm{e}}_1,\cdots, {\bm{e}}_m\}$,
the goal is to let the model predict the corresponding label sequence ${\bm{y}}$. In our query-based framework, we additionally define $\mathbf{Q} = \{{\bm{q}}_1,\cdots, {\bm{q}}_m\}$ as the set of queries, where there is a bijection between $\mathbf{Q}$ and $\mathbf{E}$. The model takes an input tuple $({\bm{q}}_i, {\bm{x}})$ and predicts the conditional output ${\bm{y}}_{{\bm{q}}_i}$ (see the BOISE prediction in Figure~\ref{fig:overview} for an example). ${\bm{y}}_{{\bm{q}}_i}$ defines the token spans of the given query ${\bm{q}}_i$ with 5 classes (\textit{i.e.}, BOISE).
To encode entity information into the query, we can use the entity name as query, \textit{i.e.}, ${\bm{q}}_i = {\bm{e}}_i$. We denote by $t$ the tokenizer, $f_\theta$ the input embedding layer and $p_\phi$ the rest of the language model.
Then we can get the token-wise BOISE prediction:
\begin{align} \label{eq:entity_only}
\begin{split}
\hat{{\bm{y}}}_{{\bm{q}}_i}= p_{\phi} ( f_\theta([t({\bm{e}}_i); t({\bm{x}})])),
\end{split}
\end{align}
where ``$[\cdot\ ;\ \cdot]$'' is the concatenation operation along the token length dimension. Note that although ${\bm{e}}$ itself is not learnable, we can still learn its embedding $f_\theta(t({\bm{e}}))$ by optimizing $\theta$.
We name the query directly generated from the entity name as E-Prompt\xspace.
However, in QueryForm\xspace, our novel pre-training stage requires learning from a large amount of webpages, which contain diverse categories of schema. Therefore, the model naturally requires more informative queries that also encode the schema information. To this end, we propose the S-Prompt\xspace to capture schema information. In pre-training, we can generate the S-Prompt\xspace in a similar way as the E-Prompt\xspace; please see Section~\ref{sec:pretraining} for more details. During fine-tuning, the schema for form-like documents is often very different from that of webpages. Thus, we let the model learn the schema representation directly from the data, so that it aligns well with the S-Prompt\xspace{s} used in pre-training. We denote the S-Prompt\xspace by ${\bm{s}}$, learnable vectors in the token embedding space that capture schema information implicitly from the data during fine-tuning.
According to the assumption in Section~\ref{sec:problem_formulation}, the documents used in fine-tuning include the target entities of interest. Intuitively, the schema information from the fine-tuning documents should be transferrable to the target test document type, so we directly reuse the learned S-Prompt\xspace when testing on the target documents. In this case, ${\bm{q}}_i = ({\bm{s}}, {\bm{e}}_i)$ and the prediction becomes:
\vspace{-2mm}
\begin{equation} \label{eq:bi_prompt}
\hat{{\bm{y}}}_{{\bm{q}}_i} = p_{\phi} ( [{\bm{s}}; f_\theta([t({\bm{e}}_i); t({\bm{x}})])]).
\end{equation}
Finally, the model is trained with the objective:
\vspace{-.3cm}
\begin{align} \label{eq:ft-objective}
\min_{\theta, \phi, {\bm{s}}}\ \sum_{{\bm{x}}} \sum_{i=1}^{m} \mathcal{L} \left( {\bm{y}}_{{\bm{q}}_i}, \hat{{\bm{y}}}_{{\bm{q}}_i} \right), \vspace{-.3cm}
\end{align}
where $\mathcal{L}$ is the cross-entropy loss.
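For illustration, a minimal PyTorch sketch of the input assembly in \eqref{eq:bi_prompt} is given below. The vocabulary size, hidden size, and S-Prompt\xspace length are placeholder values, the example is unbatched, and the ETC backbone $p_\phi$ is not reproduced.
\begin{minted}[fontsize=\small]{python}
import torch
import torch.nn as nn

class DualPromptInput(nn.Module):
    """Prepend a learnable S-Prompt and an embedded E-Prompt to the
    embedded document tokens, as in Eq. (2)."""

    def __init__(self, vocab_size=30522, d_model=512, s_prompt_len=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # f_theta
        # Learnable schema prompt s in the token-embedding space.
        self.s_prompt = nn.Parameter(torch.randn(s_prompt_len, d_model))

    def forward(self, entity_ids, doc_ids):
        # entity_ids: [L_e], doc_ids: [L_x], both token ids from t(.)
        e_emb = self.embed(entity_ids)                  # [L_e, d]
        x_emb = self.embed(doc_ids)                     # [L_x, d]
        # [s; f_theta([t(e); t(x)])], consumed by the backbone p_phi.
        return torch.cat([self.s_prompt, e_emb, x_emb], dim=0)
\end{minted}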
\subsection{QueryWeb\xspace: Webpage-based Pre-training}
\label{sec:pretraining}
Distinct from recent work that focuses on multimodal pre-training, our proposed pre-training approach provides a new perspective with two core ideas: (1) aligning the pre-training and fine-tuning objectives, and (2) utilizing easily-accessible and informative webpages.
Recall that, in the fine-tuning stage, QueryForm\xspace is trained with a moderately sized set of queries composed of E-Prompt\xspace{s} generated from human-annotated entities and a learnable S-Prompt\xspace that encodes schema information. It is reasonable to believe that if we can pre-train the model with an extremely large set of queries composed of different E-Prompt\xspace{s} and S-Prompt\xspace{s} generated from weakly-annotated documents, the model will perform better than with Masked Language Model (MLM)~\cite{devlin2018bert} pre-training alone. Here, we present our simple webpage-based pre-training technique as well as the data collection recipes that empower it.
\textbf{Dual prompting based pre-training.} We directly extract schema and entity information from the rich HTML structure of various webpages and use them to generate the S-Prompt\xspace and the E-Prompt\xspace, respectively. With a slight abuse of notation, we denote the S-Prompt\xspace by $\Tilde{{\bm{s}}}$, where $\Tilde{\cdot}$ indicates that the S-Prompt\xspace is no longer a learnable parameter. Different from the fine-tuning stage with a single set of entities under a unified schema, we can group the webpages by schema: $\{(\Tilde{{\bm{s}}}_1, \mathbf{E}_1, \mathbf{X}_1), \cdots, (\Tilde{{\bm{s}}}_n, \mathbf{E}_n, \mathbf{X}_n )\}$, where each schema $\Tilde{{\bm{s}}}_j$ corresponds to a set of entities $\mathbf{E}_j$ and a set of webpages $\mathbf{X}_j$. Similarly, the model takes the query-document tuple $({\bm{q}}_{ji}, {\bm{x}})$, where ${\bm{q}}_{ji} = (\Tilde{{\bm{s}}}_j, {\bm{e}}_{ji})$, ${\bm{e}}_{ji} \in \mathbf{E}_j$ and ${\bm{x}} \in \mathbf{X}_j$, and outputs the following conditional prediction $\hat{{\bm{y}}}_{{\bm{q}}_{ji}}$:
\begin{equation} \label{eq:bi_prompt_pre}
\hat{{\bm{y}}}_{{\bm{q}}_{ji}} = p_{\phi} ( f_\theta([t(\Tilde{{\bm{s}}}_j); t({\bm{e}}_{ji}); t({\bm{x}})])).
\end{equation}
Equation~\ref{eq:bi_prompt_pre} is analogous to~\eqref{eq:bi_prompt}; however, $\Tilde{{\bm{s}}}_j$ here is directly sourced from webpage data, which makes it different from the learnable ${\bm{s}}$ in~\eqref{eq:bi_prompt}.
Then we have the following pre-training objective:
\vspace{-3mm}
\begin{align} \label{eq:pre-objective}
\min_{\theta, \phi}\ \sum_{j=1}^{n} \sum_{{\bm{x}} \in \mathbf{X}_j} \sum_{i=1}^{|\mathbf{E}_j|} \mathcal{L} \left( {\bm{y}}_{{\bm{q}}_{ji}}, \hat{{\bm{y}}}_{{\bm{q}}_{ji}} \right).
\end{align}
The pre-training format is highly aligned with the fine-tuning in \eqref{eq:ft-objective}, so the model learns consistently during both stages to make query-conditional predictions.
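A sketch of the objective in \eqref{eq:pre-objective} is shown below; it assumes a \texttt{corpus} of (schema, entities, webpages) groups in which every webpage carries per-entity BOISE labels, and a \texttt{model} that maps a query-document pair to per-token logits over the 5 BOISE classes. These interfaces are hypothetical simplifications of our pipeline.
\begin{minted}[fontsize=\small]{python}
import torch.nn.functional as F

def pretraining_loss(model, corpus):
    """Query-conditional pre-training objective, Eq. (5)."""
    loss = 0.0
    for schema, entities, webpages in corpus:       # sum over j
        for page in webpages:                       # sum over x in X_j
            for entity in entities:                 # sum over i
                logits = model(schema, entity, page.tokens)  # [L, 5]
                loss = loss + F.cross_entropy(logits,
                                              page.labels[entity])
    return loss
\end{minted}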
\begin{figure}[t]
\begin{minted}[fontsize=\small]{html}
http://www.example.com
<div class=”product”>
<span id=”name”>Bath Mat</span>
<span id=”price”>$13.99</span>
</div>
\end{minted}
\vspace{-.2cm}
\caption{An example of HTML snippets, with two entities \textit{product/name} and \textit{product/price}.}
\label{figure:Webpage_example}
\end{figure}
\textbf{Data collection recipe.}
How to extract schema and entity information from any webpage is another contribution of this paper.
Consider the HTML snippets from Figure~\ref{figure:Webpage_example}.
First, it naturally contains two entities, and the combination of HTML tags defines what each entity is about, \textit{i.e.}, its ``entity type''. Therefore, ``Bath Mat'' is of entity type \textit{product/name}, and ``\$13.99'' is of entity type \textit{product/price}. Second, the schema of the webpage is \{\textit{product/name}, \textit{product/price}\}, and this schema is usually shared by a series of similar webpages under the same domain.
Therefore, we can extract the domain name ``\textit{www.example.com}'' as the schema information.
Both the schema information and entity types generated from webpages are then respectively encoded by our dual prompting\xspace mechanism.
In practice, the schema and entity information automatically generated from webpages are often noisy. However, in the experiments, our model is still able to learn structured information from noisy queries and obtain significantly better entity extraction performance on the target form-like documents.
Moreover, in order to represent webpages in a manner that generalizes to form-like documents, the webpage representation consists only of the visible text tokens and corresponding $x/y$ coordinates.\footnote{Visible text and coordinates are generated by rendering each page with Headless Chrome: https://developer.chrome.com/blog/headless-chrome/.}
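To illustrate the recipe, the following Python sketch turns the snippet in Figure~\ref{figure:Webpage_example} into weak (schema, entity, value) annotations. Note that our pipeline renders pages with Headless Chrome; \texttt{BeautifulSoup} is used here purely as an illustrative stand-in, and real pages are far noisier.
\begin{minted}[fontsize=\small]{python}
from urllib.parse import urlparse
from bs4 import BeautifulSoup

HTML = """
<div class="product">
  <span id="name">Bath Mat</span>
  <span id="price">$13.99</span>
</div>
"""

def extract_queries(url, html):
    """Weakly annotate a webpage: the domain acts as the schema and
    nested tag attributes act as the entity type."""
    schema = urlparse(url).netloc                 # "www.example.com"
    soup = BeautifulSoup(html, "html.parser")
    triples = []
    for span in soup.find_all("span"):
        parent_class = (span.parent.get("class") or [""])[0]
        entity = f"{parent_class}/{span.get('id', '')}"
        triples.append((schema, entity, span.get_text(strip=True)))
    return triples

print(extract_queries("http://www.example.com", HTML))
# [('www.example.com', 'product/name', 'Bath Mat'),
#  ('www.example.com', 'product/price', '$13.99')]
\end{minted}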
\begin{table}[t!]
\begin{center}
\scalebox{0.75}{
\begin{tabular}{l|c|c|c}
\toprule
\bf Task & \bf Pre-training & \bf Train & \bf Test \\
\midrule
Zero-shot & \multirow{2}{*}{QueryWeb\xspace} & FUNSD & XFUND \\
Zero-shot & & Inventory & Payment \\
\midrule
Few-shot & QueryWeb\xspace + Inventory & Payment X-shot & Payment \\
\bottomrule
\end{tabular}
}
\vspace{-.2cm}
\caption{Experiment design of two zero-shot transfer learning and one few-shot learning tasks.} \vspace{-.5cm}
\label{table:experiment_design}
\end{center}
\end{table}
\begin{table}[t!]
\begin{center}
\scalebox{0.75}{
\begin{tabular}{l|r|r|r|r}
\toprule
\bf Dataset & \bf Schema & \bf Lang. & \bf Entity & \bf Example \\
\midrule
FUNSD & 1 & 1 & 4 & 199 \\
XFUND & 1 & 7 & 4 & 1393 \\
Payment & 1 & 1 & 7 & 10k \\
\midrule
Inventory & 1 & 1 & 7 - 28 & 24k \\ \midrule
QueryWeb\xspace & 87K & 1 & 2.6M & 1.2M \\
QueryWeb\xspace-ML & 113K & >50 & 11.5M & 13M \\
\bottomrule
\end{tabular}
}
\vspace{-.2cm}
\caption{Detailed statistics of used datasets.} \vspace{-.5cm}
\label{table:dataset_stats}
\end{center}
\end{table}
\begin{table*}[t!]
\centering
\scalebox{0.70}{
\begin{tabular}{l|cc|cc|c|ccccccc|c}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Pre-training}} & \multirow{2}{*}{\textbf{Image}} & \bf \# Layers & \multirow{2}{*}{\textbf{FUNSD}} & \multicolumn{8}{c}{\textbf{XFUND}} \\
& \bf Method & \bf Size & & \bf (Model Size) & & \bf ZH & \bf JA & \bf ES & \bf FR & \bf IT & \bf DE & \bf PT & \bf Avg.\\
\midrule
XLM-RoBERTa & MLM & 2.5TB & & 12L(270M) & 66.70 & 41.44& 30.23 & 30.55 & 37.10 & 27.67 & 28.86 & 39.36 & 38.24 \\
InfoXLM & MLM & 2.5TB & & 12L(270M) & 68.52 & 44.08 & 36.03 & 31.02 & 40.21 & 28.8 & 35.87 & 45.02 & 41.19 \\
LayoutXLM & MLM & 30M & \checkmark & 12L(345M) & 79.40 & 60.19 & 47.15 & 45.65 & 57.57 & 48.46 & 52.52 & 53.90 & 55.61 \\ \midrule
XLM-RoBERTa & MLM & 2.5TB & & 24L(550M)& 70.74 & 52.05 & 39.39 & 36.27 & 46.72 & 33.98 & 41.80 & 49.97 & 46.37 \\
InfoXLM & MLM & 2.5TB & & 24L(550M) & 73.25 & 55.36 & 41.32 & 36.89 & 49.09 & 35.98 & 43.63 & 51.26 & 48.35 \\
LayoutXLM & MLM & 30M & \checkmark & 24L(625M) & \bf 82.25 & \bf 68.96 & 51.90 & 49.76 & 61.35 & 55.17 & 59.05 & 60.77 & 61.15 \\ \midrule
QueryForm\xspace & QW\xspace & 1.2M & & 6L(82M) & 75.92 & 59.30 & 44.43 & 47.75 & 67.63 & 55.23 & 70.75 & 62.79 & 58.27 \\
QueryForm\xspace & QW\xspace-ML & 13M & & 12L(185M) & 80.84 & 67.68 & \bf 53.30 & \bf 55.80 & \bf 72.91 & \bf 67.30 & \bf 74.25 & \bf 69.30 & \bf 65.79 \\
\bottomrule
\end{tabular}
}
\vspace{-.2cm}
\caption{Comparison between QueryForm\xspace and competing methods on FUNSD-XFUND zero-shot benchmark. %
} %
\label{table:xfund-zs}
\end{table*}
\begin{table*}[t!]
\small
\centering
\scalebox{0.85}{
\begin{tabular}{l|cc|c|c|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Pre-training}} & \multirow{2}{*}{\textbf{Model Size}} & \bf Supervised & \multicolumn{2}{c}{\textbf{Source $\rightarrow$ Target}} \\
& \bf Method & \bf Size & & \bf Upper-bound & \bf I7 $\rightarrow$ Payment & \bf I28 $\rightarrow$ Payment \\
\midrule
ETC+RichAtt & MLM & 0.7M & 83M & 94.33 & 81.54 & 78.37 \\
FormNet & MLM & 7M & 82M & \bf 95.70 & 79.77 & 77.88 \\
FormNet & MLM & 7M & 157M & 95.61 & 86.04 & 82.42 \\ \midrule
QueryForm\xspace & QW\xspace & 1.2M & 82M & 94.60 & \bf 88.15 & \bf 89.23 \\
\bottomrule
\end{tabular}
}
\vspace{-.2cm}
\caption{Comparison between QueryForm\xspace and previous state-of-the-arts on Inventory-Payment zero-shot benchmark. I-7 and I-28 are abbreviations of Inventory-7 and Inventory-28, respectively. QueryForm\xspace has much better generalization ability indicated by its stronger zero-shot performance and smaller gap with its supervised upper-bound (trained and tested both on Payment).} \vspace{-.3cm}
\label{table:payment-zs}
\end{table*}
\section{Experiments}
\subsection{Datasets and Experiment Design}
We use 3 publicly available datasets and 2 in-house datasets that we collected to design and conduct extensive experiments validating our method. Table \ref{table:dataset_stats} summarizes the datasets.
\textbf{FUNSD}~\citep{jaume2019funsd} is a form understanding benchmark consisting of 199 annotated forms with %
4-entity types: \texttt{header}, \texttt{question}, \texttt{answer}, and \texttt{other}.
\textbf{XFUND}~\citep{xu2021layoutxlm} is a multilingual form understanding benchmark by extending the FUNSD dataset. The XFUND benchmark has 7 different languages with 1,393 fully annotated forms, where each language includes 199 forms with the same set of 4 entity types as FUNSD.
\textbf{Payment}~\citep{majumder2020representation} consists of around 10K documents and 7 entity types from human annotators. The corpus is collected from different vendors with various layout templates. In the few-shot learning experiments, we create multiple subsets by randomly subsampling documents from its training set.
\textbf{Inventory} is a dataset we collected that contains inventory-related purchase documents (e.g., utility bills), covering a few document types different from the Payment dataset.
The dataset consists of $\sim$~24k documents in two annotated versions. The first version, \textit{Inventory-7}, is annotated at word level with the same 7 entity types as Payment. The second version, \textit{Inventory-28}, is annotated at word level with 21 additional entity types, including common entity types such as \texttt{shipping address} and \texttt{supplier name}.
\textbf{QueryWeb\xspace} is collected by us from publicly available English webpages on the Internet using the acquisition procedure stated in Section~\ref{sec:pretraining}.
\textbf{QueryWeb\xspace-ML} is a multilingual (ML) version of QueryWeb\xspace containing more than $50$ languages (at the $99\%$ percentile). We collected this dataset to validate the effectiveness of multilingual pre-training for zero-shot generalization across languages.
\subsection{Experimental Details} \label{sec:exp_details}
We use the BERT-multilingual vocabulary~\cite{devlin2018bert} to tokenize the serialized OCR words. We have two variants of QueryForm\xspace: a 6-layer ETC with 512 hidden size and 8 attention heads and a 12-layer ETC with 768 hidden size and 12 attention heads. For both S-Prompt\xspace and E-Prompt\xspace generated from dataset annotations, we use a maximum token length of 32 with zero padding. For learnable S-Prompt\xspace used in the fine-tuning stage, we treat its token length as a hyperparameter to search.
\noindent \textbf{Pre-training.} %
Our method uses the proposed QueryWeb\xspace pre-training approach on the 2 large-scale webpage-based datasets. Other compared methods use MLM pre-training with the corresponding datasets mentioned in their papers, including $\sim$0.7k unlabeled form documents for ETC+RichAtt~\citep{ainslie2020etc}, the IIT-CDIP dataset~\citep{lewis2006building} with 7M documents for FormNet~\cite{lee2022formnet}, 30M multilingual documents for LayoutXLM~\cite{xu2021layoutxlm}, and 2.5TB of multilingual CommonCrawl data for XLM-RoBERTa~\cite{conneau2019unsupervised} and InfoXLM~\cite{chi2020infoxlm}.
For QueryWeb\xspace pre-training, we use Adam optimizer with a batch size of 512. We set the learning rate to 0.0002 with a warm-up proportion of 0.01 and a linear learning rate decay of 100k steps.
\noindent \textbf{Fine-tuning.} We fine-tune all models using the Adam optimizer with a batch size of 128 and a learning rate of 0.0004. No warm-up or learning rate decay is used.
\noindent \textbf{Evaluation metrics.} We use Micro-F1 to evaluate the performance on XFUND-related experiments, following~\citet{xu2021layoutxlm}, and Macro-F1 to evaluate Payment-related experiments, following~\citet{majumder2020representation, lee2022formnet}.
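The distinction between the two metrics can be made concrete with \texttt{scikit-learn}; the toy labels below are illustrative only.
\begin{minted}[fontsize=\small]{python}
from sklearn.metrics import f1_score

# Toy per-entity predictions over the 4 FUNSD/XFUND classes.
y_true = ["header", "question", "answer", "answer", "other", "answer"]
y_pred = ["header", "answer",   "answer", "answer", "other", "other"]

# Micro-F1 pools all decisions, so frequent classes dominate (XFUND).
print(f1_score(y_true, y_pred, average="micro"))
# Macro-F1 averages per-class F1, weighting classes equally (Payment).
print(f1_score(y_true, y_pred, average="macro"))
\end{minted}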
\begin{table}[t!]
\small
\centering
\scalebox{0.85}{
\begin{tabular}{c|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Pre-training}} & \multirow{2}{*}{\textbf{E-P}} & \multirow{2}{*}{\textbf{S-P}} & \multicolumn{2}{c}{\textbf{Source $\rightarrow$ Target}} \\
& & & \bf I7 $\rightarrow$ Payment & \bf I28 $\rightarrow$ Payment \\
\midrule
MLM & \checkmark & & 81.95 & 85.71 \\
QueryWeb\xspace & \checkmark & & 85.33 & 87.41 \\
MLM & \checkmark & \checkmark & 84.91 & 86.41 \\
QueryWeb\xspace & \checkmark & \checkmark & \bf 88.15 & \bf 89.23 \\
\bottomrule
\end{tabular}
}
\caption{Ablation study of QueryForm\xspace on Inventory-Payment benchmark. E-P and S-P are abbreviations of E-Prompt\xspace and S-Prompt\xspace, respectively.} \vspace{-.5cm}
\label{table:ablation}
\end{table}
\subsection{Zero-shot Transfer Learning Results}
\begin{figure*}[t]
\centering
\includegraphics[width=.9\textwidth]{figures/output_visualization-v2.pdf}
\vspace{-.2cm}
\caption{Visualization example from XFUND (French). QueryForm\xspace labels entities with ambiguous ``others'' annotation from ground truth as one of the other three entity types with concrete meanings.}%
\label{fig:visualization} \vspace{-.3cm}
\end{figure*}
\begin{figure}[t]
\centering
\centering
\includegraphics[width=.48\textwidth]{figures/train_loss.pdf}
\vspace{-.5cm}
\captionof{figure}{Loss visualization of pre-training on QueryWeb\xspace (Left) and fine-tuning on Inventory (Right).}
\label{fig:train_loss}
\end{figure}
To evaluate QueryForm\xspace, we introduce two zero-shot transfer learning tasks and one few-shot learning task, as shown in Table~\ref{table:experiment_design}. We follow the official train-test split for all publicly available datasets by default, unless specified explicitly.
For zero-shot evaluation on the Payment test set, we pre-train on QueryWeb\xspace and fine-tune on Inventory.
\noindent \textbf{FUNSD-XFUND.} %
In Table~\ref{table:xfund-zs}, we compare QueryForm\xspace with recent zero-shot transfer learning methods, including XLM-RoBERTa~\citep{conneau2019unsupervised}, InfoXLM~\cite{chi2020infoxlm}, and the current state-of-the-art LayoutXLM~\cite{xu2021layoutxlm}, which are all MLM pre-trained on multilingual text or document datasets of different sizes (details in Section~\ref{sec:exp_details}).
QueryForm\xspace outperforms all compared methods even with a much smaller model size and no image modality. In particular, when we pre-train QueryForm\xspace on the multilingual QueryWeb\xspace-ML, it obtains a significant boost on all languages.
Although XLM-RoBERTa and InfoXLM are MLM pre-trained on 2.5TB of multilingual data, and LayoutXLM specifically collected 30M visually rich documents for MLM pre-training, the stronger transferability of QueryForm\xspace on zero-shot DEE indicates that our pre-training method is more effective than MLM for this specific task. Since the 12-layer (185M) QueryForm\xspace already outperforms the previous state of the art by a large margin with a much smaller model size, we leave further scaling up as future work.
\begin{table}[t!]
\small
\centering
\scalebox{0.90}{
\begin{tabular}{l|c|c}
\toprule
{\textbf{Method}} & {\textbf{Payment 1-shot}} & {\textbf{Payment 10-shot}}\\
\midrule
ETC+RichAtt\textsuperscript{2} & 59.62 & 88.21 \\
FormNet (157M)\textsuperscript{2} & 55.67 & 86.25 \\
QueryForm\xspace & \textbf{89.26} & \textbf{90.53} \\
\bottomrule
\end{tabular}
}\\
\caption{Comparison between QueryForm\xspace and competing methods further fine-tuned on few-shot Payment training sets.}
\vspace{-3mm}%
\label{table:payment-fs}
\end{table}
\noindent \textbf{Inventory-Payment.}
Table~\ref{table:payment-zs} shows the zero-shot transfer learning result on the Inventory-Payment benchmark.
We compare QueryForm\xspace against the current state-of-the-art on Payment, FormNet~\cite{lee2022formnet}, and our baseline ETC+RichAtt (see Section~\ref{sec:arch_design}). QueryForm\xspace outperforms competing methods by a significant margin. Although FormNet obtains the best supervised upper-bound result on Payment, the lower zero-shot results indicate that knowledge transfer from different types of documents is still very challenging.
QueryForm\xspace is expected to take advantage of a larger number of queries even when they are less relevant.
To validate this, we compare fine-tuning datasets with 7 and 28 annotated entities. As can be seen, supervised methods like FormNet and ETC+RichAtt suffer a performance drop when seeing additional entities that do not exist in the target dataset, while QueryForm\xspace gains a further performance improvement.
\noindent \textbf{Ablation study.} We conduct an ablation study of QueryForm\xspace. From the results in Table~\ref{table:ablation}, we can see that both our dual prompting\xspace strategy and QueryWeb\xspace pre-training contribute to the zero-shot F1 score individually, and they synergistically improve the performance when working together.
\subsection{Few-shot Learning on Payment}
In practice, %
it is reasonable to expect that a few annotated documents from the target document type exist and can help models adapt quickly.
Therefore, we design a few-shot learning step based on the best-performing model obtained on the Inventory dataset.
Table~\ref{table:payment-fs} shows the 1- and 10-shot results on Payment. To make sure the training is stable in the low-data regime and the comparison is fair, we conduct a hyperparameter search (\textit{e.g.}, learning rates, number of frozen layers) for all methods and select the best-performing configurations to present; a minimal search loop is sketched below. When fine-tuning on the extreme Payment 1-shot set, both FormNet and ETC+RichAtt severely overfit the single Payment document, while QueryForm\xspace maintains high performance\footnote{For the compared methods trained by us, although 10-shot leads to improvement, 1-shot degrades with the same parameter-search method used for QueryForm\xspace. More advanced training strategies are not considered here.}. When extending to 10-shot, all methods improve, and QueryForm\xspace still performs the best. {Although FormNet is the state of the art in the supervised learning setting, it underperforms other methods in the low-data regime. We hypothesize that the GCN requires more data to learn layout features.}
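The search referenced above amounts to a small grid; a minimal sketch follows, where \texttt{finetune\_and\_eval}, \texttt{model}, and \texttt{fewshot\_data} are hypothetical names standing in for the actual training loop and inputs.
\begin{verbatim}
def finetune_and_eval(model, data, lr, n_frozen_layers):
    """Hypothetical helper: fine-tune a copy of `model`, return dev F1."""
    raise NotImplementedError  # stands in for the actual training loop

best = None
for lr in (1e-5, 4e-5, 1e-4, 4e-4):   # candidate learning rates (illustrative)
    for n_freeze in (0, 3, 6):        # number of frozen bottom layers
        score = finetune_and_eval(model, fewshot_data, lr, n_freeze)
        if best is None or score > best[0]:
            best = (score, lr, n_freeze)
\end{verbatim}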
\subsection{Result analysis}
\noindent \textbf{Prediction visualization.} Figure~\ref{fig:visualization} demonstrates an example output of QueryForm\xspace. %
QueryForm\xspace infers entities that are annotated as ``others'' in the ground truth as one of the other three entity types with concrete meanings. For example, ``Type'' is a question in the form, but one without a corresponding answer. Although human annotators might find it ambiguous and mark it as ``others'', QueryForm\xspace successfully recognizes it as a ``question''.
\noindent \textbf{Loss visualization.} %
Figure~\ref{fig:train_loss} shows the loss curves of pre-training on QueryWeb\xspace (Left) and fine-tuning on Inventory (Right).
\section{Conclusion}
This paper presents QueryForm\xspace, a novel framework to address the challenging zero-shot document entity extraction problem. The dual prompting\xspace design in QueryForm\xspace offers a refreshing view to unify the pre-training and fine-tuning objectives, allowing us to leverage large-scale form-like webpages with HTML tags as weak annotations.
QueryForm\xspace sets new state-of-the-art results on multiple zero-shot DEE benchmarks.
We believe QueryForm\xspace serves as a flexible framework for document understanding tasks, and multiple interesting directions could be further explored within the framework, such as prompt design, richer pre-training sources, etc.
\section*{Acknowledgements}
We greatly thank Chun-Liang Li, Harr Chen, Evan Huang, Nan Hua for their valuable feedback.
\section{Introduction} \label{sec:intro}
The orbital parameters of a planetary system are sculpted by its formation processes and evolutionary history in a quest for a stable configuration. Circular, coplanar planetary orbits are a natural consequence of formation in a disk, whether by core accretion or gravitational instability~\citep[e.g.,][]{1993ARAandA..31..129L,2001ASPC..235..195B}. This effect is observed in the low eccentricities and low mutual inclinations of the Solar System planets, and multi-exoplanet systems~\citep[e.g.,][]{2015PNAS..112...20L}. ALMA observations of protoplanetary disks with gaps also suggest coplanarity likely as a result of newborn planets clearing gas and dust from the disk~\citep[e.g., HL Tau][]{2015MNRAS.453L..73D}.
The dynamical evolution of an early planetary system determines the final configuration of its components \citep{2008ApJ...686..580C,2019A&A...629L...7C,2008ApJ...686..621F}. Fly-by events can excite orbital eccentricities and even eject planets on timescales proportional to the impact parameter of the fly-by~\citep[e.g.,][]{2011MNRAS.411..859M}. Fly-bys can also trigger planet-planet scattering, decreasing the semimajor axes of some planets (typically the more massive ones) while leaving others in much wider orbits~\citep{2009ApJ...693L.113S,2009ApJ...696.1600V}.
The orbital architectures of mature systems reflect both their formation conditions and their dynamical evolution.
In this Letter, we present the full orbital architecture of the 14~Her system.
Over 20 years of RV follow-up show a clear signature of a giant planet~\citep[e.g.][]{2003ApJ...582..455B,2004A&A...414..351N,2006ApJ...645..688G}, and a long-term trend for a second one~\citep[e.g.,][]{2021AJ....161..134H,2021ApJS..255....8R,2007ApJ...654..625W}. With absolute astrometry from \emph{Hipparcos} and \emph{Gaia},
we are able to break the $M \sin i$ degeneracy for both planets and calculate their orbital parameters, as well as identify strong evidence for a high mutual inclination between the orbits. The orbits of the 14~Her planets hint at a turbulent past marked by dynamical interactions between the two massive planets that led to their current, peculiar configurations.
We structure the Letter as follows. In Section~\ref{sec:starchar}, we present the existing stellar characteristics from the literature. In Section~\ref{sec:data}, we describe our RV and absolute astrometric data. Section~\ref{sec:mcmc} describes the \texttt{orvara}~tool for orbit fitting and our analysis procedure. In Section~\ref{sec:results}, we report the orbital parameters derived for the two planets of this system. In Section~\ref{sec:discussion}, we explore the implications of our results for the formation and evolution of the 14~Her system. We present our conclusions in Section~\ref{sec:conclusions}.
\section{System Characterization}\label{sec:starchar}
14~Her ($\alpha$ = 16:10:24.315, $\delta$ = +43:49:03.50) is a middle-aged K0 dwarf located $17.9416\pm0.0072$\,pc away with an estimated $T_\mathrm{eff}$ $= 5282$\,K~\citep{2021AandA...649A...1G,2021A&A...649A...2L}.
Due to its brightness and proximity to Earth, 14~Her was one of the first stars monitored with RV to search for exoplanets~\citep[e.g.,][]{2003A&A...410.1039P,2003A&A...410.1051N,2004A&A...414..351N}.
Even though significant RV variations had been found in the ELODIE RV data by 1996, the first formal discovery publication for 14~Her~b was not presented until 2003~\citep{2003ApJ...582..455B}.
14~Her~c was not identified until 2007~\citep{2007ApJ...654..625W} due to its long orbital period.
Similarly to other planet-hosting stars discovered around the same time (e.g., $\rho^1$ Cnc), 14~Her is ``super''-metal-rich with a $\mathrm{[Fe/H]} = 0.50\pm0.05$~\citep{1996ApJS..102..105T,1999ApJ...511L.111G}. Alternative measurements place the metallicity of 14~Her between $\mathrm{[Fe/H]} = 0.30-0.60$~\citep[e.g.,][]{2003AJ....126.2015H,2006AJ....131.3069L}, although certainly metal-rich. Abundance measurements of absorption lines show that 14~Her is a chromospherically quiet star~($log_{10} R'_{HK} = -4.94\pm0.04$;~\citealt{2019AJ....158..101M}).
We infer the fundamental parameters of 14~Her using the Bayesian activity-age dating method of \cite{2014ApJ...786....1B}, the luminosity, effective temperature, and angular diameter relations of \cite{2010A&A...512A..54C}, and the PARSEC isochrones \citep{2012MNRAS.427..127B}. We find an age of $4.6^{+3.8}_{-1.3}$\,Gyr based on the star's low chromospheric activity and slow rotation period. We use the $V_T-J$ color from Tycho-2 \citep{2000A&A...355L..27H} and 2MASS \citep{2003yCat.2246....0C} and adopt a metallicity of $[{\rm Fe/H}] = 0.43 \pm 0.07$ to infer a luminosity of $0.67 \pm 0.02$\,$L_\odot$ using the relations given in \cite{2010A&A...512A..54C}. Our adopted metallicity range spans 2/3 of the measurements in the PASTEL catalog \citep{2010AandA...515A.111S}, all of which are from high-dispersion, high signal-to-noise spectroscopy. We do require a slight extrapolation of the \cite{2010A&A...512A..54C} relations, which are only validated to $[{\rm Fe/H}] = 0.4$. The effective temperature relations of \cite{2010A&A...512A..54C}, based on the $V_T - K_s$ color, give $T_{\rm eff} = 5310 \pm 30$\,K after including both measurement errors and the 18\,K rms scatter about their calibrated relation. These combine with the measured \emph{Gaia} parallax to give a radius of $0.97 \pm 0.02$\,$R_\odot$. This compares well with the $J$-band-based angular radius of $0.99 \pm 0.02$\,$R_\odot$ using the $V_T-K_s$ color \citep[Table 6,][]{2010A&A...512A..54C}. Finally, we use our inferred age distribution, a uniform $[{\rm Fe/H}]$ distribution, and the Salpeter initial mass function as priors, and construct a posterior mass distribution using the PARSEC isochrones together with our inferred metallicity and luminosity. We finally obtain a mass for 14~Her~A of $0.98 \pm 0.04$\,$M_\odot$.
Previous estimations of the rotation period of this star place it at 22.38\,days~\citep{2016AandA...596A..76S} and $48.5\pm1.137$\,days~\citep{2010MNRAS.408.1606W}, both coming from activity-period relations based on $logR'_{HK}$ measurements. While 14~Her has been observed in \emph{TESS} sectors 24 and 25, we are limited by the 27.4-day observing window of a given sector, and we do not attempt to combine \emph{TESS} sectors since this is not trivial. From an archival ASAS-SN light curve spanning just over 5 years (2013-02-13 to 2018-09-06), we have obtained a significant period of 29.5\,days with a Lomb-Scargle periodogram (Figure~\ref{fig:14HerLC}), which is consistent with the age of the star.
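The periodogram is straightforward to reproduce with \texttt{astropy}; a minimal sketch, in which the synthetic arrays merely stand in for the ASAS-SN epochs and magnitudes, is:
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder data standing in for the ASAS-SN epochs (days) and magnitudes.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2000.0, 500))
mag = 6.6 + 0.01 * np.sin(2 * np.pi * t / 29.5) \
      + 0.005 * rng.standard_normal(500)

frequency, power = LombScargle(t, mag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]   # recovers ~29.5 d
\end{verbatim}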
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{ASAS-SN_rawLC.jpg}\\
\includegraphics[width=0.77\textwidth]{14Her_ASAS-SNlc_periodogram.jpg}
\caption{\emph{(Top)} ASAS-SN light curve for 14~Her. \emph{(Bottom)} Lomb-Scargle periodogram showing a significant period at 29.5\,days. Periods shorter than the red line are shorter than the 2-3\,day ASAS-SN cadence.}
\label{fig:14HerLC}
\end{figure}
\begin{deluxetable*}{lcc}
\tablenum{1}
\tablecaption{Stellar properties.}\label{tab:star}
\tablewidth{0pt}
\tablehead{
\colhead{Property} & \colhead{Value} & \colhead{Ref.}}
\startdata
\hline
\multicolumn{3}{c}{\emph{Fundamental Parameters}} \\
Spectral type & K0V & 4 \\
Effective temperature ($T_\mathrm{eff}$) & $5310\pm30$\,K & 1\\
Mass (M) & $0.98\pm0.04$\,$M_\mathrm{\odot}$ & 1 \\
Age & $4.6^{+3.8}_{-1.3}$\,Gyr & 1\\
Radius (R) & $0.97 \pm 0.02\,R_{\odot}$ & 1\\
Luminosity (L) & $0.67\pm0.02\,$$L_\mathrm{\odot}$ & 1 \\
Surface gravity (log $g$) & 4.46 & 3\\
Bulk metallicity ($\mathrm{[Fe/H]}$) & $0.43\pm0.07$ & 5\\
Chromospheric activity ($log R'_{HK}$) & $-4.94\pm0.04$ & 3\\
Equatorial velocity ($V\,\sin i$) & $1.65$\,km/s & 3\\
Rotation period (P$_{rot}$) & 29.5\,days & 1\\
\hline
\multicolumn{3}{c}{\emph{Astrometry}} \\
Parallax & $55.866\pm0.029$\,mas & 2\\
Proper Motion in R.A. ($\mu_{\alpha}cos\delta$) & $131.745\pm0.028$\,mas/yr & 2\\
Proper Motion in Dec. ($\mu_{\delta}$) & $-297.025\pm0.037$\,mas/yr & 2\\
$\chi^2$ & 1008.64 & 1\\
\hline
\multicolumn{3}{c}{\emph{Photometry}} \\
\emph{Gaia} $B_p-R_p$ color & 1.002\,mag & 2\\
\emph{Gaia} $G$ magnitude & 6.395\,mag & 2\\
\emph{Gaia} RUWE & 1.819 & 2
\enddata
\tablerefs{(1) This paper; (2)~\citet{2021AandA...649A...1G}; (3)~\citet{2019AJ....158..101M}; (4)~\citet{2018ApJS..238...29P}; (5)~\citet{2010AandA...515A.111S}.}
\end{deluxetable*}
\section{Data}\label{sec:data}
\subsection{Keck/HIRES radial velocity}
We have collected 283 archival RV data points from Keck/HIRES~\citep{2021ApJS..255....8R,2003ApJ...582..455B,2007ApJ...654..625W,2021AJ....161..134H} spanning over 20 years (1997-04-07 to 2020-02-26;~\citealt{2021ApJS..255....8R}). The High Resolution Echelle Spectrometer (HIRES;~\citealt{1994SPIE.2198..362V}), mounted on the Keck I 10-m Telescope, operates between $0.3-1\,\mu$m providing spectral resolutions between R$\sim25000-85000$ and RV precision down to 1\,m/s. The RV measurements for 14~Her fluctuate between $-$73\,m/s and 191\,m/s, with a semi-amplitude of 100\,m/s and a visible long-term slope caused by the c planet. The mean precision for our data is 1.08\,m/s. The RV curve of 14~Her is shown in Figure~\ref{fig:rv}.
We also found RV data in the literature from the ELODIE spectrograph at the Observatoire de Haute-Provence and the Automated Planet Finder (APF) at Lick Observatory. The ELODIE data span epochs from 1997 to 2002, whereas the APF data also cover 5 years, between 2014 and 2019. The epoch span of the ELODIE data was sufficient to discover the b planet and fully determine its orbit~\citep{2004A&A...414..351N}. However, we require a longer baseline to determine the orbit of the c planet, and HIRES is the longest-running instrument with continuous RV monitoring of planet-hosting stars~\citep{2017AJ....153..208B}. Therefore, we decided to use only HIRES RV data to avoid any offsets or potential systematics across instruments. However, HIRES underwent an upgrade in 2004 which effectively changed its RV zero-point (\citealt{2019MNRAS.484L...8T}; see Section~\ref{sec:rvoff}). We conservatively treat the pre- and post-upgrade data as coming from two different instruments, which introduces two more degrees of freedom to the orbit fit.
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{RV_OC_14_Her_InstAll.pdf}
\caption{\texttt{orvara}~fit to the HIRES RV curve spanning over 20 years. Data from the pre-2004 and post-2004 upgrade are shown as blue and yellow solid circles, respectively. Residuals are shown in the bottom panel. The colorbar shows the range of masses of the b planet from the MCMC chains.}
\label{fig:rv}
\end{figure}
\subsection{\textit{Hipparcos} and \textit{Gaia} absolute stellar astrometry}
We combine the radial acceleration information with tangential acceleration from absolute astrometry. The long baseline between \emph{Hipparcos} and \emph{Gaia} epochs and increasingly better astrometric precision from \emph{Gaia} EDR3 significantly improve the uncertainty of the proper motion accelerations. We use the absolute astrometry of the Hipparcos-Gaia Catalog of Accelerations (HGCA;~\citealt{2021ApJS..254...42B}) which has now been updated with \emph{Gaia} EDR3 astrometry. The HGCA has cross-calibrated the astrometric solutions for all \emph{Hipparcos} stars that are present in \emph{Gaia} to a common reference frame~\citep{2018ApJS..239...31B}.
The proper motion of 14~Her across the \emph{Hipparcos} and \emph{Gaia} EDR3 epochs is inconsistent with a constant value by $31\sigma$. 14~Her has a mean acceleration in its proper motion direction of $ (a_{\alpha},a_{\delta}) = (0.812,-8.918)$\,m/s/yr. Its total acceleration ($a_{tot} = 8.955$\,m/s/yr) is lower than that of 97\% of stars in the HGCA; 14~Her is therefore a highly significant detection of a small-amplitude acceleration.
\section{Three-body orbit fit with \textit{orvara}\label{sec:mcmc}}
In order to fully determine the orbits of the 14~Her planets, we use the RV and absolute astrometry data in concert. The Orbits from Radial Velocity, Absolute, and/or Relative Astrometry Python package (\texttt{orvara};~\citealt{2021arXiv210511671B}) works by fitting Keplerian orbits to a combination of absolute astrometry from \emph{Hipparcos-Gaia}, RV, and relative astrometry if available.
For 3-body orbits, as is the case for 14~Her, the star's motion is approximated as a superposition of one Keplerian orbit due to each companion (see \citealt{2021AJ....161..179B} for a demonstration and validation of the approach).
We use \texttt{orvara}~to infer masses and orbital parameters for both 14~Her planets simultaneously. We use a parallel-tempered MCMC sampler \citep{2013PASP..125..306F,2021ascl.soft01006V} to robustly explore the parameter space with several copies of the system, randomly initialized at 30 temperatures. For each temperature, we use 100 walkers with 600,000 steps per walker, discarding the first 30,000 steps as burn-in and saving every $50^{th}$ step which are then used for inference.
For our final values, we combined four such chains into a long pseudo-chain. We set a Gaussian prior on the primary mass of $0.98\pm0.04$\,$M_\mathrm{\odot}$ and a log-flat prior on the RV jitter to range between $10^{-5}-10$\,m/s, letting the jitter of the pre- and post-2004 HIRES upgrade vary independently. We test the convergence of the MCMC algorithm by running several 600,000 step chains and confirming a stable plateau in log likelihood for both planet solutions.
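The burn-in and thinning bookkeeping is simple to reproduce; the sketch below assumes each saved chain is a NumPy array of shape (steps, walkers, parameters), which is not necessarily \texttt{orvara}'s actual on-disk format.
\begin{verbatim}
import numpy as np

def combine_chains(chain_files, burnin=30_000, thin=50):
    kept = []
    for f in chain_files:
        samples = np.load(f)             # assumed (nsteps, nwalkers, nparams)
        samples = samples[burnin::thin]  # drop burn-in, keep every 50th step
        kept.append(samples.reshape(-1, samples.shape[-1]))
    return np.concatenate(kept, axis=0)  # long pseudo-chain for inference
\end{verbatim}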
\section{Results\label{sec:results}}
\subsection{Orbital parameters of 14~Her~b and c}
Our derived stellar and planetary parameters are shown in Table~\ref{tab:props}.
Since our RV data covers nearly five full orbits of the b planet, we are able to set strong constraints on its orbital parameters (Figure~\ref{fig:bcorner}). The b planet has a mass of M$_b =\,$${9.1}_{-1.1}^{+1.0}$$M_\mathrm{Jup}$~and its orbit has a semimajor axis of $a = $${2.845}_{-0.039}^{+0.038}$\,AU with a moderate eccentricity of $e = $${0.3686}_{-0.0031}^{+0.0032}$ and a ${32.7}_{-3.2}^{+5.3}$\,degree inclination with respect to the plane of the sky. Its orbital period translates to $P = $${4.8277}_{-0.0023}^{+0.0022}$\,years.
Despite only seeing a long-term trend in the RV data covering $\sim$15\% of the period, we can set some constraints on the orbital parameters of the c planet. The mass of the c planet is lower than that of b (M$_c =\,$${6.9}_{-1.0}^{+1.7}$\,$M_\mathrm{Jup}$) and it is located much farther away from the star, at $a = $${27.4}_{-7.9}^{+16}$\,AU. 14~Her~c has a highly eccentric orbit ($e =$${0.64}_{-0.13}^{+0.12}$). With respect to the orbit of 14~Her~b, the c planet has a broad distribution of inclinations ($i =$${101}_{-33}^{+31}$\,degree) but hints at being misaligned (see Figure~\ref{fig:ccorner}). Its orbital period amounts to $P =$${144}_{-58}^{+139}$\,years, with large uncertainties. Our $M\sin i$, semimajor axis, and eccentricity values for the b planet are in line with previous results~\citep[e.g.,][]{2021AJ....161..134H,2021ApJS..255....8R}. However, introducing the constraint from the absolute astrometry tightens these parameters for the c planet with respect to earlier determinations using only RV data. We calculate a minimum mass of $M_c\,\sin i_c$ = $6.79^{+1.85}_{-1.03}$\,$M_\mathrm{Jup}$, which is higher than but consistent within $1\sigma$ with published values ($M_c\,\sin i_c$ = $5.8^{+1.4}_{-1.0}$\,$M_\mathrm{Jup}$,~\citealt{2021ApJS..255....8R}; $M_c\,\sin i_c$ = $6.1^{+1.3}_{-0.9}$,~\citealt{2021AJ....161..134H}).
\begin{deluxetable*}{lcc}
\tablenum{2}
\tablecaption{MCMC-derived properties for the 14 Her planets. \label{tab:props}}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \multicolumn{2}{c}{Posterior Median $\pm1\sigma$}}
\startdata
\multicolumn{3}{c}{\emph{System parameters}} \\
& \multicolumn{2}{c}{\emph{14 Her}} \\
\hline
Stellar mass ($M_{*}$) & \multicolumn{2}{c}{${0.984}_{-0.046}^{+0.047}$\,$M_\mathrm{\odot}$} \\
Parallax ($\varpi$) & \multicolumn{2}{c}{${55.86573}_{-0.00049}^{+0.00046}$\,mas} \\
RV jitter ($\sigma_{jit}$, pre-upgrade) & \multicolumn{2}{c}{${2.85}_{-0.36}^{+0.42}$\,m/s} \\
RV jitter ($\sigma_{jit}$, post-upgrade) & \multicolumn{2}{c}{${2.88}_{-0.15}^{+0.16}$\,m/s} \\
RV zero-point (pre-upgrade) & \multicolumn{2}{c}{${6.9}_{-8.6}^{+9.1}$\,m/s} \\
RV zero-point (post-upgrade) & \multicolumn{2}{c}{${12.1}_{-8.1}^{+8.9}$\,m/s} \\
\texttt{orvara}\,reference epoch ($t_{\mathrm{ref}}$) & \multicolumn{2}{c}{2455197.50\,BJD} \\
\hline
\multicolumn{3}{c}{\emph{Planetary parameters}} \\
& $b$ & $c$ \\
\hline
Planet mass ($M$) & ${9.1}_{-1.1}^{+1.0}$\,$M_\mathrm{Jup}$ & ${6.9}_{-1.0}^{+1.7}$\,$M_\mathrm{Jup}$\\
Mass ratio (q) & ${0.00892}_{-0.0011}^{+0.00090}$ & ${0.00674}_{-0.00095}^{+0.0017}$ \\
Semi-major axis ($a$) & ${2.845}_{-0.039}^{+0.038}$\,AU & ${27.4}_{-7.9}^{+16}$\,AU \\
Semi-major axis ($\alpha$) & ${158.9}_{-2.2}^{+2.1}$\,mas & ${1529}_{-442}^{+869}$\,mas \\
Period (P) & ${4.8277}_{-0.0023}^{+0.0022}$\,yrs & ${144}_{-58}^{+139}$\,yrs \\
Eccentricity ($e$) & ${0.3686}_{-0.0031}^{+0.0032}$ & ${0.64}_{-0.13}^{+0.12}$ \\
Inclination ($i$) & ${32.7}_{-3.2}^{+5.3}$\,deg & ${101}_{-33}^{+31}$\,deg\\
Argument of periastron ($\omega$) & ${22.78}_{-0.55}^{+0.53}$\,deg & ${15.2}_{-6.0}^{+6.0}$\,deg \\
PA of ascending node ($\Omega$) & ${236}_{-15}^{+15}$\,deg & ${313}_{-57}^{+30}$\,deg \\
Mean longitude at reference epoch & ${82.71}_{-0.19}^{+0.19}$ & ${36}_{-10}^{+12}$\,deg \\
Epoch at periastron (T$_0$) & ${2456667.4}_{-2.2}^{+2.3}$\,JD & ${2504873}_{-21163}^{+50765}$\,JD\\
Minimum mass ($M\,\sin i$) & $4.93^{+0.51}_{-0.68}$\,$M_\mathrm{Jup}$ & $6.79^{+1.85}_{-1.03}$\,$M_\mathrm{Jup}$
\enddata
\end{deluxetable*}
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{corner_HD145675b_chain48-51.pdf}
\caption{Corner plot for the orbital parameters of the b planet.}
\label{fig:bcorner}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{corner_HD145675c_chain48-51.pdf}
\caption{Corner plot for the orbital parameters of the c planet.}
\label{fig:ccorner}
\end{figure}
\subsection{Relative orientation of 14~Her~b and c\label{sec:imut}}
A surprising aspect of our MCMC results is the evidence for discrepant inclinations of the orbital planes for the two 14~Her planets. Orbit misalignment of outer giant planets can have an effect on the final configuration of super-Earths within a system, limiting their maximum mass~\citep{2019MNRAS.485..541C}, and increasing their eccentricities even to the point of migration~\citep{2017AJ....153...42L}, which in turn can have serious consequences on their potential for habitability. We evaluated the coplanarity of the orbits by calculating the 3D angle between the angular momentum vectors of the planetary orbits.
The unit vectors are given by:
\begin{eqnarray}
\hat{\bf L}_b =& \sin \Omega_b \sin i_b\,\hat{\bf x} -\cos\Omega_b \sin i_b \,\hat{\bf y} + \cos i_b\,\hat{\bf z}\\
\hat{\bf L}_c =& \sin \Omega_c \sin i_c\,\hat{\bf x} -\cos\Omega_c \sin i_c \,\hat{\bf y} + \cos i_c\,\hat{\bf z}
\end{eqnarray}
where $\Omega$ is the longitude of the ascending node and $i$ is the inclination of the orbit. The angle between the two angular momentum vectors is then
\begin{equation}\label{eq:Lbc}
\Theta_{bc} = \mathrm{cos^{-1}} \left(\hat{\bf L}_b \cdot \hat{\bf L}_c\right)
\end{equation}
Calculating this expression for each sample of our MCMC chains returns an angle of $\Theta_{bc} = $$96.3_{-36.8}^{+29.1}$\,degrees. \texttt{orvara}~places uniform priors on the orientation of each orbital plane, and this translates into a $\sin i$ prior on the relative inclination between the two planes. Comparing the distribution of mutual inclinations to the $\sin i$ prior used in \texttt{orvara}~to give equal probability to all orbital orientations (Figure~\ref{fig:Lbc}), we see that mutual inclinations of roughly $0-60^{\circ}$ and $130-180^{\circ}$ are strongly disfavored, indicating that the orbits are misaligned. 14~Her joins the ranks of only 3 other systems with giant planets in misaligned orbits: $\pi$ Mensae~\citep{2020A&A...640A..73D}, $\nu$ Andromedae~\citep{2010ApJ...715.1203M}, and Kepler-108~\citep{2017AJ....153...45M}. In comparison with $\pi$~Men~c, whose inclination is not determined and for which only a plausible range is estimated, we have obtained both inclinations for the 14~Her planets and arrive at a misalignment constraint that is at least as confident as that for $\pi$~Men.
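Equation~\ref{eq:Lbc} reduces to a one-line computation over the posterior samples; a minimal sketch (angles in radians, array names for illustration only) is:
\begin{verbatim}
import numpy as np

def mutual_inclination(i_b, Om_b, i_c, Om_c):
    # cos(Theta_bc) = sin(i_b) sin(i_c) cos(Om_b - Om_c) + cos(i_b) cos(i_c)
    cos_t = (np.sin(i_b) * np.sin(i_c) * np.cos(Om_b - Om_c)
             + np.cos(i_b) * np.cos(i_c))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
\end{verbatim}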
Another key measurement to constrain the orientation of the system is the spin-axis alignment of the star. Discrepant $V \sin i$ measurements for 14~Her are reported in the literature~\citep{2019AJ....158..101M,2016AandA...596A..76S,2005ApJS..159..141V,2017AJ....153...21L}, giving rise to a full range of possible inclinations. However, the brightness of 14~Her makes it potentially amenable to 20\,s-cadence \emph{TESS} observations for asteroseismology to find the projected obliquity of the star (e.g. $\pi$ Men,~\citealt{2021arXiv210809109H}; $\alpha$ Men,~\citealt{2020arXiv201210797C}). A third constraint on the relative angle between the star and the planets would be needed for a full orientation characterization and a clearer description of the system's past history.
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{Lbc_vs_siniprior_chain048-51.pdf}
\caption{Mutual inclination distribution built from the inclination and ascending node posteriors as shown in Equation~\ref{eq:Lbc}. The median, $16^{\rm th}$ and $84^{\rm th}$ percentiles are shown as the solid and dashed black lines. While still a broad distribution, the discrepancy between the mutual inclination and the $\sin i$ prior (shown in maroon) is strong evidence for the misalignment of the b and c orbits.\label{fig:Lbc}}
\end{figure}
\subsection{Radial velocity offset in HIRES between pre-and post-upgrade}\label{sec:rvoff}
The stability of an RV instrument is crucial to detect small variations that could signal the presence of low-mass and long-period companions. The HIRES spectrograph underwent a major upgrade in 2004 which enabled an RV precision of 1-3\,m/s, a factor of 3 improvement~\citep{2006ApJ...646..505B}. This upgrade changed the RV zero-point by $-1.5\pm0.1\,$m/s, introduced a long-term drift of $<1\,$m/s, and a small nightly drift~\citep{2019MNRAS.484L...8T}.~\citet{2021ApJS..255....8R} modeled the pre- and post-upgrade RV data separately for their planet search without applying the~\citet{2019MNRAS.484L...8T} offset manually. For our results, we conservatively used the~\citet{2021ApJS..255....8R} RV data in the same way, without any modifications. We labeled the pre- and post-upgrade data points as if they were coming from two different instruments to let \texttt{orvara}~find suitable zero-points and jitter values independently (see Table~\ref{tab:props}). Comparing the zero-points before and after the upgrade, we find a difference of $\Delta_{ZP} = $$-5.3_{-2.0}^{+2.0}$\,m/s, which is larger than the one presented by~\citet{2019MNRAS.484L...8T}.
We tested whether manually adding the~\citet{2019MNRAS.484L...8T} offset to the post-upgrade data would make a difference in our results. We ran four 600,000-step chains and combined them into a long pseudo-chain with the same settings as for our science results, but with a modified RV file incorporating the~\citet{2019MNRAS.484L...8T} offset and two instrument IDs to reflect the pre- and post-upgrade data. The results for both the b and c planets are essentially the same as our science results, including similar jitter values and zero-points. We also tried using the~\citet{2021ApJS..255....8R} data as-is with a single instrument ID. In this case, we found that the posterior parameters for the b planet were the same as our science results within uncertainties, given that the orbital period of the b planet is fully covered several times within the post-upgrade epoch range alone. The medians of the posterior parameters of the c planet differ slightly from our science results, especially the semi-major axis ($a_c = {31.1}_{-8.8}^{+16}$\,AU), the eccentricity ($e_c = {0.69}_{-0.11}^{+0.10}$), and the mass ($M_c = {7.11}_{-0.82}^{+1.5}$\,$M_\mathrm{Jup}$), although they stay within the uncertainties of our science results. These differences imply that a small RV offset can have a significant effect when fitting long-period planets. We also found more chains stuck in low-probability regions compared to our science results. The results therefore appear robust as long as the pre- and post-upgrade data are treated separately.
\section{Discussion\label{sec:discussion}}
\subsection{On the formation and evolution of the 14 Her system\label{sec:alignment}}
In today's configuration, 14~Her is a stable system. A planetary system is stable against the mutual gravity of its components if their orbital separations exceed $2\sqrt{3}$ times their mutual Hill radii~\citep{1993Icar..106..247G}, and the large separation between the b and c planets exceeds the stability criterion by a factor of $\sim3$. However, we postulate that today's orbits are likely a far departure from their initial conditions.
Whether resulting from core accretion or gravitational instability, planets are expected to form in disks, causing multi-planet systems to be coplanar as a consequence~\citep{2010fee..book..101H,2010fee..book...71M}. Multi-planet systems tend to have lower eccentricities than single-planet systems~\citep{2009ApJ...693.1084W}, possibly because low eccentricities are energetically favorable for long-term stability. The strongly disfavored coplanarity and high eccentricities in the case of the 14~Her system point to subsequent dynamical evolution following the birth of its planets.
In a statistical analysis of orbital parameters,~\citet{2013ApJ...767L..24D} found that the orbits of giant planets around metal-rich stars were more eccentric than around metal-poor stars. They postulate that metal-rich stars have solid-rich protoplanetary disks that can form more giant planets than metal-poor stars, and these planets could ultimately engage in gravitational interactions. Multiple planets of similar mass, formed close together and originally in coplanar, circular orbits, can gravitationally excite each other's orbits, causing them to become eccentric, misaligned, and occasionally ejecting one planet out of the system in a process called planet-planet scattering~\citep{1996Natur.384..619W,2008ApJ...686..580C,2002aste.book..725M,2010ApJ...711..772R}. Given the high metallicity of 14~Her~A, and the similar masses, large eccentricities, and misaligned orbits of 14~Her~b and c, planet-planet scattering is a likely explanation for the current configuration of the system.
An external possibility is that a stellar fly-by may have triggered gravitational interactions between the planets, causing them to scatter into more eccentric orbits. Stellar fly-bys tend to disrupt a system over time scales of a few million to a few hundred million years and could lead to the ejection of one or more planets within 100\,Myr~\citep{2011MNRAS.411..859M,2015A&A...575A..35B}.
A more intriguing possibility is that the system initially might have had 3 nearly equal mass giant planets in relatively close, circular, coplanar orbits. After their natal disk is depleted of gas, the planets can engage in close encounters that result in excited orbital eccentricities. In the absence of gas drag, the eccentricities are dissipated through collisions, tidal circularization in the proximity of their stars, or dynamical friction by a residual population of planetesimals~\citep{2013ApJ...775...42I}. At moderate impact parameters, a perturber can cause wide scattering rather than a collision, with recoil velocities close to the planets' surface escape speed. At a distance of a few AU away from the star, this kind of perturbation typically leads to one planet escaping the gravitational potential of the star \citep{2008ApJ...686..580C,2008ApJ...686..603J}. In order to reach stability, the planets that survive develop large eccentricities and widely separated semimajor axes~\citep{2008ApJ...686..621F,2019A&A...629L...7C}, like in the case of the 14~Her system.
The mutual gravity of the planets would have caused the scattering of the most massive one to an eccentric, closer-in orbit (i.e., b with ${9.1}_{-1.1}^{+1.0}$\,$M_\mathrm{Jup}$~at ${2.845}_{-0.039}^{+0.038}$\,AU and $e = $${0.3686}_{-0.0031}^{+0.0032}$), the intermediate one to an eccentric, far-out orbit (i.e., c with ${6.9}_{-1.0}^{+1.7}$\,$M_\mathrm{Jup}$~at ${27.4}_{-7.9}^{+16}$\,AU and $e = $${0.64}_{-0.13}^{+0.12}$), and the ejection of the least massive one out of the system. Initially resonant orbits of 3 coplanar planets can become unstable, causing planet-planet scattering and leaving behind a two-planet system with a large semimajor axial ratio ($\alpha = a_b/a_c < 0.3$) and mutual inclinations of $\sim30^{\circ}$ and up to $70^{\circ}$~\citep{2011MNRAS.412.2353L}. With a semimajor axial ratio of 0.12 and a mutual inclination of $\Theta_{bc} = $$96.3_{-36.8}^{+29.1}$\,degrees, the orbital parameters of today's 14~Her system certainly fit these criteria.
The ejected planet would have had a mass lower than or equal to that of the c planet ($M_c = $${6.9}_{-1.0}^{+1.7}$$M_\mathrm{Jup}$), and given the age of the primary star ($4.6^{+3.8}_{-1.3}$\,Gyr), it would have had a temperature of $\lesssim250\,$K. Isolated, planetary-mass objects at these temperatures are routinely identified as Y dwarfs, the coldest class of brown dwarfs of stellar-like origin~\citep[e.g.,][]{2011ApJ...743...50C,2014ApJ...786L..18L,2019ApJ...881...17M,2020ApJ...895..145B}. Depending on the relative occurrence of planet-planet scattering, the temperature-defined ``Y dwarf'' population might be of mixed origin, with both stellar-born objects and ejected planets among their ranks.
\subsection{Potential as a future extreme-AO target}
Past direct imaging campaigns have rejected the presence of stellar or substellar companions to 14~Her with confidence. In an effort to identify companions to planet-hosting FGK stars,~\citet{2002ApJ...566.1132L} and~\citet{2002ApJ...581..654P} imaged 14~Her as part of their target lists with Keck and Lick adaptive optics (AO), respectively. These studies ruled out the presence of companions up to a magnitude difference of $\Delta K \geq 6.4$\,mag beyond $0\farcs7$ or 12.6\,AU, roughly equivalent to $\geq0.08\,$$M_\mathrm{\odot}$.~\citet{2009AJ....137..218C} further rejected $K_s = 18$\,mag companions at a $5\farcs0$ separation with Palomar AO.~\citet{2011ApJ...732...10R} used the MMT AO system for deep imaging of 14~Her in the $L'$-band as part of a larger survey to set direct imaging constraints on radial velocity planets.
However, no previous imaging has been able to resolve either planet due to their intrinsic faintness and large contrast with their host star. Based on our derived masses and the age of the star, we estimated effective temperatures and contrasts for the 14~Her planets with a suite of cloudless, hot-start evolutionary models~\citep{2008ApJ...689.1327S,2003AandA...402..701B}. We estimate a $T_\mathrm{eff}$ $= 290-300$\,K for the b planet and $T_\mathrm{eff}$ $= 260$\,K for the c planet. These cold temperatures in turn imply extreme faintness in NIR bands, leading to contrasts on the order of $10^{-9}-10^{-10}$, far beyond the capabilities of current instrumentation (Table~\ref{tab:contrasts}). Therefore, it is no surprise that these planets have eluded direct detection to date.
We also estimated the reflected light fraction for both planets by calculating the fraction of light emitted by the star that is intersected by the planet at a given distance and then reflected, based on its global atmospheric properties and phase:
\begin{equation}
f_R\,(\alpha) = 0.25 \left(\frac{R_p}{a_p}\right)^2 A_B~ \Phi(\alpha)
\end{equation}
where $R_p = 1\,R_\mathrm{Jup}$ is the radius of the planet, $a_p$ is the semimajor axis of the planet, $A_B$ is the Bond albedo of the planet, which we approximate as Jupiter's value~($A_B = 0.503\pm0.012$;~\citealt{2018NatCo...9.3709L}), and $\Phi(\alpha)$ is the Lambert phase function at a given angle $\alpha$~\citep{2012ApJ...747...25M}. The angle $\alpha$ is defined as the angle between the observer, planet, and star, with its vertex on the planet. For simplicity, we assume an achromatic phase. We evaluated the reflected fractions when the planets were at quadrature ($\alpha = \pi/2$).
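This reflected fraction is a one-line computation; the sketch below adopts $R_p = 1\,R_\mathrm{Jup}$ and the Lambert phase function $\Phi(\alpha) = [\sin\alpha + (\pi-\alpha)\cos\alpha]/\pi$.
\begin{verbatim}
import numpy as np

R_JUP_AU = 7.1492e7 / 1.495978707e11      # Jupiter radius in AU

def reflected_fraction(a_p_au, A_B=0.503, alpha=np.pi / 2, R_p_au=R_JUP_AU):
    phase = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    return 0.25 * (R_p_au / a_p_au) ** 2 * A_B * phase

# reflected_fraction(2.845) ~ 1e-9 (b); reflected_fraction(27.4) ~ 1e-11 (c)
\end{verbatim}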
The $T_\mathrm{eff}$~of these planets rival some of the coldest known brown dwarfs (e.g., WISE J0830+2837,~\citealt{2020ApJ...895..145B}; WISE J0855$-$0714,~\citealt{2014ApJ...786L..18L}), which are brightest in the mid-infrared~\citep{2012ApJ...756..172M,2011ApJ...743...50C}. Upcoming facilities such as the Near Infrared Camera (NIRCam) aboard the \emph{James Webb Space Telescope (JWST)} are ideally suited to detect objects of these temperatures at high sensitivity, although the contrast with the starlight may impact the detection of planets like 14~Her b or c (Table~\ref{tab:contrasts}). While the b planet has a reflected contrast ratio on the order of $10^{-9}$ and could be potentially detectable with the \emph{Nancy Grace Roman Space Telescope}, its angular proximity to the star ($\alpha_b = 158.9^{+2.1}_{-2.2}$\,mas) could prove challenging for the Coronagraph Instrument. The c planet has a reflected contrast ratio on the order of $10^{-11}$, so even despite its larger angular separation from the star ($\alpha_c = 1529_{-442}^{+869}$\,mas), it is too faint to be detected at optical wavelengths.
\begin{deluxetable*}{cccccccccc}
\tablenum{3}
\tablecaption{Estimated contrasts and planetary parameters from cloudless evolutionary models.\label{tab:contrasts}}
\tablewidth{0pt}
\tablehead{
\colhead{Component} &
\colhead{$T_\mathrm{eff}$} &
\colhead{log $g$} &
\colhead{L} &
\colhead{R} &
\colhead{$J$} &
\colhead{$K$} &
\colhead{$f_R$} &
\colhead{$J$ contrast} & \colhead{$K$ contrast} \\
& \colhead{(K)} & & (L$_{\odot})$ & (R$_{\odot})$ & (mag) & (mag) & & & }
\startdata
star & 5282 & 4.46 & $0.67 \pm 0.02$ & $0.99 \pm 0.02$ & 5.158 & 4.714 & \nodata & \nodata & \nodata\\
\hline
\multicolumn{10}{c}{\emph{Sonora-Bobcat}}\\
b & 300 & 4.33 & -7.07 & 0.11 & 27.202 & 24.912 & 1.08E-09 & 2.59E-09 & 9.42E-09 \\
c & 260 & 4.20 & -7.30 & 0.11 & 28.959 & 25.177 & 1.16E-11 & 3.13E-10 & 6.54E-09\\
\hline
\multicolumn{10}{c}{\emph{Saumon \& Marley 2008}}\\
b & 290 & 4.32 & -7.16 & 0.11 & 27.931 & 25.021 & 1.08E-09 & 1.86E-09 & 8.62E-09\\
c & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & 1.16E-11 & \nodata & \nodata\\
\hline
\multicolumn{10}{c}{\emph{Baraffe et al. 2003}}\\
b & 300 & 4.36 & -7.11 & 0.10 & 27.257 & 24.919 & 1.08E-09 & 2.53E-09 & 9.36E-09\\
c & 260 & 4.22 & -7.34 & 0.10 & 29.013 & 25.185 & 1.16E-11 & 2.99E-10 & 6.49E-09
\enddata
\end{deluxetable*}
\section{Conclusions\label{sec:conclusions}}
In this paper we have characterized the orbital parameters and dynamical evolution of the 14~Her planetary system. Using \texttt{orvara}, which combines RV and astrometric accelerations, we have obtained a dynamical mass of ${9.1}_{-1.1}^{+1.0}$\,$M_\mathrm{Jup}$~and an inclination of ${32.7}_{-3.2}^{+5.3}$\,degrees for the b planet, hence breaking the $M\sin i$ degeneracy for this object for the first time. We also set dynamical mass and orbital constraints on the c planet, albeit with larger uncertainties. We have also characterized the fundamental parameters of the star in order to study this system as an ensemble.
Our results describe a middle-aged K0 star with two massive planets in highly eccentric, misaligned orbits. The mutual orientation between the b and c orbits is $\Theta_{bc}$ = $96.3_{-36.8}^{+29.1}$~degrees. Coplanarity is disfavored for this system, a fact that combined with the large eccentricities, suggests a disruptive planet-planet scattering event leading to the current architecture. An N-body dynamical simulation could strengthen the hypothesis that a third $\lesssim7$\,$M_\mathrm{Jup}$~planet was ejected from the system.
Based on the age of the star and the dynamical planetary masses derived, we infer the effective temperature of the planets from hot start evolutionary models to be 300\,K and 260\,K for b and c, respectively. An ejected planet of these temperatures could be observed today as a planetary-mass Y dwarf.
Future imaging facilities mounted on 30-m class telescopes able to reach NIR contrasts of $10^{-9}$ could potentially directly image these planets for the first time. Based on brown dwarf studies of objects at similar temperatures~\citep{2014ApJ...793L..16F,2016ApJ...826L..17S,2018ApJ...858...97M}, the best chance of directly imaging the 14~Her planets will be in the mid-infrared.
\begin{acknowledgements}
We thank the referee and editor for their helpful comments. We thank Sean Raymond for fruitful dynamical discussion about planet-planet scattering. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
\end{acknowledgements}
\vspace{5mm}
\facilities{Keck(HIRES), \textit{Hipparcos}, \textit{Gaia}, Exoplanet Archive}
\software{astropy \citep{2013AandA...558A..33A}, orvara \citep{2021arXiv210511671B,2021ascl.soft05012B}, SPLAT \citep{2017ASInC..14....7B} }
\section{Introduction}
In quantum mechanics one is often interested in knowing the long-time behavior of a given state of a system. It is well-known that there exist states that tend to remain localized in a region of space, called \textit{bound} states, while there are states that tend to drift away from all bounded regions of space, called \textit{scattering} states. The present article is concerned with the study of the latter. In particular, a propagation estimate is derived and serves to rigorously describe the long-time propagation, or behavior, of these states. A classical way of obtaining a propagation estimate is by means of resolvent estimates, or a Limiting Absorption Principle (LAP). The LAP is a powerful weighted estimate of the resolvent of an operator which implies a propagation estimate for scattering states as well as the absence of singular continuous spectrum for the system.
The theory of Mourre was introduced by E. Mourre in \cite{m} and aims at showing a LAP. Among others, we refer to \cite{cgh,FH,GGM,HS1,jmp,S,G,GJ1} and to the book \cite{ABG} for the development of the theory. In a nutshell, Mourre theory studies the properties of a self-adjoint operator $H$, the Hamiltonian of the system, with the help of another self-adjoint operator $A$, referred to as a \textit{conjugate operator} to $H$.
The standard Mourre theory relies on three hypotheses on the commutator of $H$ and $A$ which are, loosely speaking, that
\begin{enumerate}
\item[(M1)] $[H, \i A]$ be positive,
\item[(M2)] $[H, \i A]$ be $H$-bounded,
\item[(M3)] $[[H,\i A], \i A]$ be $H$-bounded.
\end{enumerate}
The main theory goes as follows:
\begin{align*}
&\underbrace{\mbox{(M1)}+\mbox{(M2)}} + \hspace{0.08cm} \mbox{(M3)} \Longrightarrow \mbox{Resolvent estimates (LAP)} \hspace{-0.3cm}&& \Longrightarrow \mbox{Propagation estimates} \\
& \hspace{1cm} \big \Downarrow && \Longrightarrow \mbox{No singular continuous spectrum.} \\
& \hspace{-0.18cm} \mbox{Absence of eigenvalues.} &&
\end{align*}
\noindent The purpose of the paper is to show that $\mbox{(M1)}+\mbox{(M2')} \Longrightarrow$ Weaker propagation estimates, where (M2') is slightly stronger than (M2).
We set up notation and basic notions. For arbitrary Hilbert spaces $\F$ and $\mathcal{G}$, denote the bounded operators from $\F$ to $\mathcal{G}$ by $\mathcal{B}(\F,\mathcal{G})$ and the compact operators from $\F$ to $\mathcal{G}$ by $\mathcal K(\F,\mathcal{G})$. When $\F = \mathcal{G}$, we shall abbreviate $\mathcal{B}(\mathcal{G}) := \mathcal{B}(\mathcal{G},\mathcal{G})$ and $\mathcal K(\mathcal{G}) := \mathcal K(\mathcal{G},\mathcal{G})$. When $\mathcal{G} \subset \H$, denote by $\mathcal{G}^*$ the antidual of $\mathcal{G}$, where we identify $\mathcal{H}$ with its antidual $\mathcal{H}^*$ by the Riesz isomorphism Theorem. Fix self-adjoint operators $H$ and $A$ on a separable complex Hilbert space $\mathcal{H}$, with domains $\mathcal{D}(H)$ and $\mathcal{D}(A)$ respectively. In Mourre theory, regularity classes are defined and serve to describe the level of regularity that $A$ has with respect to $H$. The most important of these classes are defined in Section \ref{RegularityClasses}, but we mention that they are typically distinct in applications and always satisfy the following inclusions
\begin{equation}
\label{Chain}
\mathcal{C}^2(A) \subset \mathcal{C}^{1,1}(A) \subset \mathcal{C}^{1,\text{u}}(A) \subset \mathcal{C}^{1}(A).
\end{equation}
Of these, $\mathcal{C}^1(A)$ is the class with the least regularity, whereas $\mathcal{C}^2(A)$ is the class with the strongest regularity. Indeed if $H \in \mathcal{C}^1(A)$, then the commutator $[H,\i A]$ extends to an operator in $\mathcal{B}(\mathcal{D}(H),\mathcal{D}(H)^*)$ and is denoted $[H,\i A]_{\circ}$; whereas if $H \in \mathcal{C}^2(A)$, then in addition the iterated commutator $[[H,\i A],\i A]$ extends to an operator in $\mathcal{B}(\mathcal{D}(H),\mathcal{D}(H)^*)$ and is denoted by $[[H,\i A]_{\circ},\i A]_{\circ}$ (see Section \ref{RegularityClasses}). As the $\mathcal{C}^{1,\text{u}}(A)$ class plays a key role in this article we recall here its definition. We say that $H$ belongs to the $\mathcal{C}^{1,\text{u}}(A)$ class if the map $t \mapsto e^{-\i t A}(H+\i)^{-1}e^{\i tA}$ is of class $\mathcal{C}^1(\R; \mathcal{B}(\H))$, with $\mathcal{B}(\H)$ endowed with the norm operator topology. The standard example of operators belonging to the aforementioned classes is the following.
\begin{example} [Continuous Schr\"odinger operators]
\label{ex:1}
Let $H_0$ be the self-adjoint realization of the Laplace operator
$-\Delta$ in $L^2(\R^d)$. Let $Q$ be the operator of multiplication by $x=(x_1,...,x_d) \in \R^d$, and let $P:= -\i \nabla$. Set
\[H:=H_0+V_{\rm sr}(Q)+V_{\rm lr}(Q),\]
where $V_{\rm sr}(x)$ and $V_{\rm lr}(x)$ are real-valued functions that belong to $L^{\infty}(\R^d)$. Thus $V_{\rm sr}(Q)$ and $V_{\rm lr}(Q)$ are bounded self-adjoint operators on $L^2(\R^d)$ and they are respectively the short- and long-range perturbations. Suppose that $\lim V_{\rm sr}(x) = \lim V_{\rm lr}(x) = 0$ as $\|x\| \to +\infty$. Then $V_{\rm sr}(Q)$ and $V_{\rm lr}(Q)$ are $H_0$-form relatively compact operators. This notably implies that $\sigma_{\rm ess}(H)=[0,+\infty)$ by the Theorem of Weyl on relative compactness. Let $A:= (Q\cdot P + P \cdot Q)/2$ be the so-called generator of dilations. It is the standard conjugate operator to $H$. For the long-range perturbation, further assume that $x\cdot \nabla V_{\rm lr}(x)$ is a well-defined function. Table \ref{tabl1} displays Hamiltonians belonging to the classes introduced in \eqref{Chain}. The idea is clear: stronger decay bounds on the potential imply stronger regularity. We study this example in Section \ref{Section:Examples} and prove the information reported in Table \ref{tabl1}. Finally, we should point out that many studies of (microlocal) resolvent estimates specifically for Schr\"odinger operators have been done previously; we refer to \cite{IK} and \cite{W} and references therein.
\begin{table}[ht]
\centering
\begin{tabular}{!{\vrule width 1.0pt}c|c!{\vrule width 1.0pt}}
\ChangeRT{1.0pt}
In addition, if $\langle x\rangle V_{\rm sr}(x)$ and $x \cdot \nabla V_{\rm lr}(x)$ are & Then $H$ belongs to \\
\ChangeRT{1.0pt}
$L^{\infty}(\R^d)$ & $\mathcal{C}^1(A)$ \\
$L^{\infty}(\R^d)$ and $o(1)$ & $\mathcal{C}^{1,\rm{u}}(A)$ \\
$L^{\infty}(\R^d)$ and $o(\langle x\rangle^{-\epsilon})$, for some $\epsilon >0$ & $\mathcal{C}^{1,1}(A)$ \\
$L^{\infty}(\R^d)$ and $O(\langle x\rangle^{-1})$ & $\mathcal{C}^{2}(A)$ \\
\ChangeRT{1.0pt}
\end{tabular}
\caption{Regularity of $H$ w.r.t.\ a bound on the decay of the potential at infinity}
\label{tabl1}
\end{table}
\end{example}
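To see, at least formally, why the decay conditions in Table \ref{tabl1} govern the regularity of $H$ with respect to $A$, one may compute the first commutator on Schwartz functions. The following identities are standard and easily checked:
\begin{equation*}
[H_0, \i A] = 2H_0, \qquad [V(Q), \i A] = -Q \cdot (\nabla V)(Q).
\end{equation*}
Thus, whenever the gradients make sense, $[H, \i A] = 2H_0 - Q\cdot(\nabla V_{\rm sr})(Q) - Q\cdot(\nabla V_{\rm lr})(Q)$, so the assumption $x \cdot \nabla V_{\rm lr}(x) \in L^{\infty}(\R^d)$ makes the long-range contribution to the commutator a bounded operator, in the spirit of (M2), while the decay rates listed in Table \ref{tabl1} upgrade the regularity class. For the short-range part, one works instead with the decay of $\langle x\rangle V_{\rm sr}(x)$, since $\nabla V_{\rm sr}$ need not exist.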
Let $E_{\mathcal{I}}(H)$ be the spectral projector of $H$ on a bounded interval $\mathcal{I} \subset \R$. Assuming $H \in \mathcal{C}^1(A)$, we say that the \textit{Mourre estimate} holds for $H$ on $\mathcal{I}$ if there is $c>0$ and $K \in \mathcal K(\H)$ such that
\begin{equation}
\label{Mestimate}
E_{\mathcal{I}}(H)[H,\i A]_{\circ} E_{\mathcal{I}}(H) \geqslant c E_{\mathcal{I}}(H) + K,
\end{equation}
in the form sense on $\mathcal{H} \times \mathcal{H}$. The Mourre estimate \eqref{Mestimate} is the precise formulation of the positivity assumption (M1) alluded to at the very beginning. The Mourre estimate is localized in energy, hence it allows to infer information about the system at specific energies. Let $\mu^{A}(H)$ be the set of points where a Mourre estimate holds for $H$, i.e.\
\begin{equation*}
\mu^A(H) := \{\lambda \in \R : \exists c>0, K \in \mathcal K(\H) \ \text{and} \ \mathcal{I} \ \text{open for which} \ \eqref{Mestimate} \ \text{holds for} \ H \ \text{on} \ \mathcal{I} \ \text{and} \ \lambda \in \mathcal{I} \}.
\end{equation*}
In \cite{M}, Mourre assumes roughly $H \in \mathcal{C}^2(A)$ and the estimate \eqref{Mestimate} with $K=0$ to prove the following LAP on any compact sub-interval $\mathcal{J} \subset \mathcal{I}$:
\begin{equation}
\label{Lapsup}
\sup \limits_{x \in \mathcal{J}, \ y > 0} \| \langle A \rangle ^{-s} (H-x-\i y)^{-1} \langle A \rangle ^{-s} \| < +\infty,
\end{equation}
for all $s > 1/2$. Here $\langle A \rangle := \sqrt{1+A^2}$. We remark that if the Mourre estimate holds on $\mathcal{I}$ with $K=0$, then $\mathcal{I}$ is void of eigenvalues, as a result of the Virial Theorem \cite[Proposition 7.2.10]{ABG}. Estimate \eqref{Lapsup} can be shown to yield the following Kato-type propagation estimate:
\begin{equation}
\label{Propagation}
\sup \limits_{\substack{\psi \in \H \\ \| \psi \| \leqslant 1 }} \int _{-\infty} ^{\infty} \|\langle A \rangle ^{-s} e^{- \i tH} E_{\mathcal{J}} (H) \psi \|^2 dt < +\infty,
\end{equation}
which in turn implies the absence of singular continuous spectrum on $\mathcal{J}$, e.g.\ \cite[Section XIII.7]{RS4}. The main improvement of this result is done in \cite{ABG}. The same LAP is derived assuming only $H \in \mathcal{C}^{1,1}(A)$ and the estimate \eqref{Mestimate}. It is further shown that this class is optimal in the general abstract framework. Precisely in \cite[Appendix 7.B]{ABG}, there is an example of $H\in\mathcal{C}^{1, \text{u}}(A)$ for which no LAP holds. However, other types of propagation estimates were subsequently derived for $H\in\mathcal{C}^{1, \text{u}}(A)$, see \cite{HSS, Ri} for instance. One major motivation for wanting to obtain dynamical estimates for this class was (and still is) to have a better understanding of the nature of the continuous spectrum of $H$. The aim of this article is to provide new propagation estimates for this class of operators. We also provide a simple criterion to check if an operator belongs to the $\mathcal{C}^{1, \text{u}}(A)$ class.
Let $P_{\rm{c}}(H)$ and $P_{\rm{ac}}(H)$ respectively denote the spectral projectors onto the continuous and absolutely continuous subspaces of $H$. Our first result is the following:
\begin{theorem}
\label{Main2}
Let $H$ and $A$ be self-adjoint operators in a separable Hilbert space $\H$ with $H \in \mathcal{C}^{1,\rm{u}}(A)$. Assume that $\mathcal{I} \subset \R$ is a compact interval for which $\lambda \in \mu^A(H)$ for all $\lambda \in \mathcal{I}$. Suppose moreover that $\ker(H-\lambda) \subset \mathcal{D}(A)$ for all $\lambda \in \mathcal{I}$. Then for all $\psi \in \H$ and all $s>0$,
\begin{equation}
\label{NewFormula3}
\lim \limits_{t \to + \infty} \| \langle A \rangle ^{-s} e^{-\i tH} P_{\rm{c}} (H) E_\mathcal{I} (H) \psi \| =0.
\end{equation}
Moreover, if $W$ is $H$-relatively compact, then
\begin{equation}
\label{NewFormula4}
\lim \limits_{t \to + \infty} \| W e^{-\i tH} P_{\rm{c}} (H) E_\mathcal{I} (H) \psi \| =0.
\end{equation}
In particular, if $H$ has no eigenvalues in $\mathcal{I}$ and $\psi \in \H$, then the spectral measure \\
$\Omega \mapsto \langle \psi, E_{\Omega \cap \mathcal{I}}(H) \psi \rangle$ is a Rajchman measure, i.e., its Fourier transform tends to zero at infinity.
\end{theorem}
\begin{remark}
The last part of the Theorem follows by taking $W = \langle \psi, \cdot \rangle \psi$. If $H$ has no eigenvalues in $\mathcal{I}$, then $P_{\rm c} (H) E_{\mathcal{I}}(H) = E_{\mathcal{I}}(H)$ and so by the Spectral Theorem,
\[ W e^{-\i tH} P_{\rm{c}} (H) E_\mathcal{I} (H) \psi = \psi \times \langle \psi, e^{-\i tH} E_\mathcal{I} (H) \psi \rangle = \psi \times \int _{\R} e^{-\i tx} d \mu _{(\psi, E_\mathcal{I} (H) \psi)} (x).\]
Here the spectral measure is given by $\mu _{(\psi, E_\mathcal{I} (H) \psi)}(\Omega) = \langle \psi, E_{\Omega}(H) E_\mathcal{I} (H) \psi \rangle = \langle \psi, E_{\Omega \cap \mathcal{I}}(H) \psi \rangle$ for Borel sets $\Omega \subset \R$.
\end{remark}
\begin{remark}
The separability condition on the Hilbert space is used for the proof of \eqref{NewFormula4}, because the compact operator $W$ is approximated in norm by finite rank operators.
\end{remark}
\begin{remark}
A few words about the condition $\ker(H-\lambda) \subset \mathcal{D}(A)$ are in order. In general, it is satisfied when $H$ has a high regularity with respect to $A$, see \cite{FMS}. Although it is not granted in the present framework, it can hold even when $H$ is only of class $\mathcal{C}^1(A)$, as seen in \cite{JM}.
\end{remark}
To the best of our knowledge, this result is new. However, it is not strong enough to imply the absence of singular continuous spectrum for $H$. Indeed, there exist Rajchman measures whose support is a set of Hausdorff dimension zero, see \cite{B}. We refer to \cite{L} for a review of Rajchman measures. The proof of this result is an application of the minimal escape velocities obtained in \cite{Ri}. The latter is a continuation of \cite{HSS}. We refer to those articles for historical references.
Several comments are in order concerning the various propagation estimates listed above. First, it appears in practice that $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is not always a compact operator, and so \eqref{NewFormula3} is not a particular case of \eqref{NewFormula4}. The compactness issue of $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is discussed in Section \ref{Section:Compactness}, where we study several examples including continuous and discrete Schr\"odinger operators. In all of these examples, it appears that $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is compact in dimension one, but not in higher dimensions. Second, note that \eqref{Propagation} implies \eqref{NewFormula3}. Indeed, the integrand of \eqref{Propagation} is an $L^1(\R)$ function with bounded derivative (and hence uniformly continuous on $\R$), and such functions must go to zero at infinity. Third, we point out that \eqref{NewFormula4} is a consequence of the Riemann-Lebesgue Lemma (see \eqref{RL} below) when $\psi = P_{\rm{ac}}(H)\psi$. This can be seen by writing the state in \eqref{NewFormula4} as $W(H+\i)^{-1} e^{-\i tH} P_{\rm c} (H) E_{\mathcal{I}} (H) (H+\i) \psi$ and noting that $W(H+\i)^{-1} \in \mathcal K(\H)$ and $E_{\mathcal{I}} (H) (H+\i) \in \mathcal{B}(\H)$.
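For completeness, here is the short argument behind this last claim about uniformly continuous $L^1$ functions: if $f \geqslant 0$ is uniformly continuous on $\R$, $f \in L^1(\R)$, and $f$ does not tend to zero at infinity, then there are $\epsilon > 0$ and $t_n \to +\infty$ with $f(t_n) \geqslant \epsilon$; by uniform continuity there is $\delta > 0$, independent of $n$, such that $f \geqslant \epsilon/2$ on $[t_n - \delta, t_n + \delta]$, and extracting a subsequence for which these intervals are pairwise disjoint contradicts $\int_{\R} f < +\infty$. This applies here with $f(t) := \|\langle A \rangle ^{-s} e^{-\i tH} E_{\mathcal{J}}(H)\psi\|^2$.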
Propagation estimates \eqref{NewFormula3} and \eqref{NewFormula4} cannot hold uniformly on the unit sphere of states in $\H$: if they did, they would imply that the norm of a time-independent operator goes to zero as $t$ goes to infinity. Moving forward, we seek a propagation estimate that is uniform on the unit sphere, and go deeper into the hypotheses. Let $\H$ be a Hilbert space. Let $H_0$ be a self-adjoint operator on $\H$, with domain $\mathcal{D}(H_0)$. We use standard notation and set $\H^2 := \mathcal{D}(H_0)$ and $\H^1 := \mathcal{D}(\langle H_0 \rangle ^{1/2})$, the form domain of $H_0$. Also, $\H^{-2} := (\H^2)^*$, and $\H^{-1} := (\H^1)^*$. The following continuous and dense embeddings hold:
\begin{equation}
\label{continuous embedding}
\H^2 \subset \H^{1} \subset \H = \H^* \subset \H^{-1} \subset \H^{-2}.
\end{equation}
These are Hilbert spaces with the appropriate graph norms. We split the assumptions into two categories: the spectral and the regularity assumptions. We start with the former.
\noindent \textbf{Spectral Assumptions:}
\begin{axioms}
\item \label{item:A1} : $H_0$ is a semi-bounded operator with form domain $\H^1$.
\item \label{item:A4} : $V$ defines a symmetric quadratic form on $\H^{1}$.
\item \label{item:A5} : $V \in \mathcal{K}(\H^{1}, \H^{-1})$.
\end{axioms}
Importantly, these assumptions allow us to define the perturbed Hamiltonian $H$. Indeed, \ref{item:A1} - \ref{item:A5} imply, by the KLMN Theorem (\cite[Theorem X.17]{RS2}), that $H := H_0 + V$, defined in the form sense, is a semi-bounded self-adjoint operator with form domain $\mathcal{D}(\langle H \rangle ^{1/2}) = \H^{1}$. Furthermore, we have by Weyl's Theorem that $\sigma_{\text{ess}}(H) = \sigma_{\text{ess}}(H_0)$.
Before proceeding with the other assumptions, let us take a moment to recall two well-known propagation estimates that typically hold under these few assumptions. The first estimate is the RAGE Theorem due to Ruelle \cite{Ru}, Amrein and Georgescu \cite{AG} and Enss \cite{E}. It states that for any self-adjoint operator $H$ and any $W \in \mathcal{B}(\H)$ that is $H$-relatively compact, and any $\psi \in \H$,
\begin{equation}
\label{RAGE}
\lim \limits_{T \to \pm \infty} \frac{1}{T} \int_0 ^T \|W P_{\rm{c}}(H) e^{-\i tH} \psi\| ^2 dt =0.
\end{equation}
We refer to Appendix \ref{RAGEappendix} for an observation on this Theorem. Let us go back to Example \ref{ex:1}, the case of the Schr\"odinger operators. Assuming only that the short- and long-range potentials are bounded and go to zero at infinity, we see that \ref{item:A1} - \ref{item:A5} hold. Thus $H := H_0 + V_{\rm sr}(Q) + V_{\rm lr}(Q)$ is self-adjoint. Moreover, $\mathbf{1}_{\Sigma}(Q)$ is a bounded operator that is $H$-relatively compact whenever $\Sigma \subset \R^d$ is a compact set; indeed, $\mathbf{1}_{\Sigma}(Q)(H+\i)^{-1} = \mathbf{1}_{\Sigma}(Q)(H_0+\i)^{-1} \cdot (H_0+\i)(H+\i)^{-1}$, where the first factor is compact by Proposition \ref{KnownFourierDecay} below, and the second factor is bounded because the potentials are bounded, so that $\mathcal{D}(H) = \mathcal{D}(H_0)$. Hence, in this example, the above spectral assumptions and the RAGE Theorem combine to yield the following very meaningful propagation estimate:
\begin{equation}
\label{escapeCompact}
\lim \limits_{T \to \pm \infty} \frac{1}{T} \int_0 ^T \|\mathbf{1}_{\Sigma}(Q) P_{\rm{c}}(H) e^{-\i tH} \psi\| ^2 dt =0.
\end{equation}
In words, the scattering state $P_{\rm{c}}(H) \psi$ escapes all compact sets on average in time. The second standard estimate we wish to recall is the Riemann-Lebesgue Lemma, see e.g.\ \cite[Lemma 2]{RS3}. It states that for any self-adjoint operator $H$ and any $W \in \mathcal{B}(\H)$ that is $H$-relatively compact, and any $\psi \in \H$,
\begin{equation}
\label{RL}
\lim \limits_{t \to \pm \infty} \|W P_{\rm{ac}}(H) e^{-\i tH} \psi\| =0.
\end{equation}
In particular, this estimate implies that the Fourier transform of the spectral measure
\[\Omega \mapsto \langle \psi, E_{\Omega} (H) P_{\rm ac} (H) \psi \rangle = \mu_{(\psi, P_{\rm ac}(H) \psi)} (\Omega) \]
goes to zero at infinity, i.e.\
\[ \int _{\R} e^{-\i tx} d\mu_{(\psi, P_{\rm ac}(H) \psi)}(x) \to 0 \quad \text{as} \quad t \to \pm \infty.\]
Applying the Riemann-Lebesgue Lemma to Example \ref{ex:1} gives for all compact sets $\Sigma \subset \R^d$,
\begin{equation}
\label{escapeCompact2}
\lim \limits_{t \to \pm \infty} \|\mathbf{1}_{\Sigma}(Q) P_{\rm{ac}}(H) e^{-\i tH} \psi\| =0.
\end{equation}
Thus, the scattering state $P_{\rm{ac}}(H) \psi$ escapes all compact sets in the long run. In contrast, a basic argument such as the one given in Appendix \ref{Heuristic}, as well as estimates like \eqref{Propagation} or \eqref{NewFormula3}, indicates that the scattering states tend to concentrate in regions where the conjugate operator $A$ is prevalent. We continue with the assumptions concerning the operator $H$.
\noindent \textbf{Regularity Assumptions:} There is a self-adjoint operator $A$ on $\H$ such that
\begin{axioms}
\setcounter{enumi}{3}
\item \label{item:A3} : $e^{\i tA} \H^{1} \subset \H^{1}$ for all $t \in \R$.
\item \label{item:A2} : $H_0 \in \mathcal{C}^2(A; \H^{1}, \H^{-1})$.
\item \label{item:A6} : $V \in \mathcal{C}^{1,\text{u}}(A; \H^{1}, \H^{-1})$.
\varitem{'} \label{item:A6prime} : $V \in \mathcal{C}^{1}(A;\H^{1}, \H^{-1})$ and $[V,\i A]_{\circ} \in \mathcal{K}(\H^{1}, \H^{-1})$.
\end{axioms}
First, we note that $\mathcal{C}^{\sharp}(A; \H^1, \H^{-1}) \subset \mathcal{C}^{\sharp}(A)$ for $\sharp \in \{1; 1\text{,u} ; 2\}$. We refer to Section \ref{RegularityClasses} for a complete description of these classes. While \ref{item:A3} and \ref{item:A2} are standard assumptions when applying Mourre theory, \ref{item:A6} is significantly weaker than what is usually assumed for $V$. It entails that $H$ has no more than the $\mathcal{C}^{1,\text{u}}(A; \H^{1}, \H^{-1})$ regularity, in which case the LAP is not always true, as mentioned previously. Proposition \ref{PropC1UU} proves the equivalence between \ref{item:A6} and \ref{item:A6prime}. In many applications, \ref{item:A6prime} is more convenient to check than \ref{item:A6}.
Let $\mu^A(H_0)$ be the set of points where a Mourre estimate holds for $H_0$. The assumptions mentioned above imply that $\mu^A(H) = \mu ^A(H_0)$, by Lemma \ref{Lemma1}. The uniform propagation estimate derived in this paper is the following:
\begin{theorem}\label{Main}
Suppose \ref{item:A1} through \ref{item:A6}. Let $\lambda \in \mu^A(H)$ be such that $\ker(H-\lambda) \subset \mathcal{D}(A)$. Then there exists a bounded open interval $\mathcal{I}$ containing $\lambda$ such that for all $s > 1/2$,
\begin{equation}
\label{NewFormula}
\lim \limits_{T \to \pm \infty} \sup \limits_{\substack{\psi \in \H \\ \| \psi \| \leqslant 1 }} \frac{1}{T} \int_0 ^T \| \langle A \rangle ^{-s} P_{\rm{c}}(H) E_{\mathcal{I}}(H) e^{-\i t H} \psi \|^2 \ dt = 0.
\end{equation}
\end{theorem}
This formula is to be compared with \eqref{Propagation}, \eqref{NewFormula3} and \eqref{RAGE}. First note that \eqref{Propagation} implies \eqref{NewFormula}. Also, on the one hand, \eqref{NewFormula} without the supremum is a trivial consequence of \eqref{NewFormula3}. On the other hand, if \eqref{NewFormula3} held uniformly on the unit sphere, then it would imply \eqref{NewFormula}; but we saw that this is not the case. So the main gain in Theorem \ref{Main} over Theorem \ref{Main2} is the supremum. Its presence is natural, since one can in fact take the supremum in the RAGE formula, as explained in Appendix \ref{RAGEappendix}. The parallel with the RAGE formula (see Theorem \ref{CFKSRAGE}) raises an important concern however: if $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ were a compact operator, \eqref{NewFormula} would follow directly from the RAGE formula. The novelty of the propagation estimate \eqref{NewFormula} therefore depends critically on the non-compactness of this operator. As mentioned previously, it appears that $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is not always compact, and so Theorem \ref{Main} appears to be a new result for multi-dimensional Hamiltonians.
To summarize, the various propagation estimates discussed in the Introduction are listed in Table \ref{tab:2} according to the regularity of the potential $V$. Sufficient regularity for the free operator $H_0$ is implicit. In this table, check marks indicate estimates that hold, question marks indicate open problems, and R.-L.\ stands for Riemann-Lebesgue.
\begin{table}[ht]
\centering
\begin{tabular}{ !{\vrule width 1.0pt} c| c| c| c| c| c| c!{\vrule width 1.0pt}}
\ChangeRT{1.0pt}
$V$ is of & RAGE & R.-L. & Prop. estimates& Prop. & Kato - type & LAP \\
class & formula & formula & \eqref{NewFormula3} and \eqref{NewFormula4} & estimate \eqref{NewFormula} & Prop.\ estimate & \\
\ChangeRT{1.0pt}
$\mathcal{C}^1(A)$ & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & ? & ? & ? & ? \\
$\mathcal{C}^{1,\text{u}}(A)$ & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & ? & ? \\
$\mathcal{C}^{1,1}(A)$ & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; \\
$\mathcal{C}^{2}(A)$ & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; \\
\ChangeRT{1.0pt}
\end{tabular}
\caption{The estimates for $H$ depending on the regularity of the potential $V$}
\label{tab:2}
\end{table}
We underline that the LAP has been derived for several specific systems where the Hamiltonian $H$ belongs to a regularity class as low as $\mathcal{C}^1(A)$, and sometimes even lower (see for example \cite{DMR}, \cite{GJ2}, \cite{JM} and \cite{Ma1}). In all these cases, a strong propagation estimate of type \eqref{Propagation} and the absence of singular continuous spectrum follow. We also note that the derivation of the propagation estimate \eqref{NewFormula} is in fact very similar to the derivation of a weighted Mourre estimate which is used in the proof of a LAP for Hamiltonians with oscillating potentials belonging to the $\mathcal{C}^1(A)$ class, see \cite{G} and \cite{GJ2}.
The article is organized as follows: in Section \ref{RegularityClasses}, we review the classes of regularity in Mourre theory and in particular prove the equivalence between \ref{item:A6} and \ref{item:A6prime}. In Section \ref{MourreEstimateDiscussion}, we discuss the Mourre estimate and justify that under the assumptions of Theorem \ref{Main}, $H$ and $H_0$ share the same set of points where a Mourre estimate holds. In Section \ref{Section:Examples}, we give examples of continuous and discrete Schr\"odinger operators that fit the assumptions of Theorems \ref{Main2} and \ref{Main}. In Section \ref{PROOF2}, we prove Theorem \ref{Main2} and in Section \ref{PROOF}, we prove Theorem \ref{Main}. In Section \ref{Section:Compactness}, we discuss the compactness of the operator $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$. In Appendix \ref{Heuristic}, we provide a simple argument as to why we expect scattering states to evolve in the direction where the conjugate operator prevails. In Appendix \ref{RAGEappendix} we make the observation that one may in fact take a supremum in the RAGE Theorem. Finally, in Appendix \ref{Appendix} we review facts about almost analytic extensions of smooth functions that are used in the proof of the uniform propagation estimate.
\noindent \textbf{Acknowledgments:} We are very thankful to Jean-Fran\c{c}ois Bony, Vladimir Georgescu, Philippe Jaming and Thierry Jecko for precious discussions. We are very grateful to Serge Richard for explaining to us how \cite{Ri} could be used to improve our previous results. Finally, we warmly thank the anonymous referee for a meticulous reading of the manuscript and offering numerous valuable improvements. The authors were partially supported by the ANR project GeRaSic (ANR-13-BS01-0007-01).
\section{The classes of regularity in Mourre theory}
\label{RegularityClasses}
We define the classes of regularity that were introduced in \eqref{Chain}. Let $T \in \mathcal{B}(\H)$ and $A$ be a self-adjoint operator on the Hilbert space $\mathcal{H}$. Consider the map
\begin{align}
\label{DefC1u}
\R \ni t & \mapsto e^{-\i tA} Te^{\i tA} \in \mathcal{B}(\H).
\end{align}
Let $k \in \N$. If the map is of class $\mathcal{C}^k(\R; \mathcal{B}(\H))$, with $\mathcal{B}(\H)$ endowed with the strong operator topology, we say that $T \in \mathcal{C}^k(A)$; whereas if the map is of class $\mathcal{C}^k(\R ; \mathcal{B}(\mathcal{H}))$, with $\mathcal{B}(\mathcal{H})$ endowed with the operator norm topology, we say that $T \in \mathcal{C}^{k,\text{u}}(A)$. Note that $\mathcal{C}^{k,\text{u}}(A) \subset \mathcal{C}^{k}(A)$ is immediate from the definitions. If $T \in \mathcal{C}^1(A)$, then the derivative of the map \eqref{DefC1u} at $t=0$ is denoted $[T,\i A]_{\circ}$ and belongs to $\mathcal{B}(\H)$. Also, if $T_1,T_2 \in \mathcal{B}(\H)$ belong to the $\mathcal{C}^1(A)$ class, then so do $T_1+T_2$ and $T_1T_2$.
We say that $T \in \mathcal{C}^{1,1}(A)$ if
\begin{equation*}
\int_0 ^1 \Big\| [ [T, e^{\i tA}]_{\circ}, e^{ \i tA}]_{\circ} \Big \| t^{-2} dt < + \infty.
\end{equation*}
The proof that $\mathcal{C}^{2}(A) \subset \mathcal{C}^{1,1}(A) \subset \mathcal{C}^{1,\text{u}}(A)$ is given in \cite[Section 5]{ABG}. This yields \eqref{Chain}.
Now let $T$ be a self-adjoint operator (possibly unbounded), with spectrum $\sigma(T)$. Let $z \in \C \setminus \sigma(T)$. We say that $T\in \mathcal{C}^{\sharp}(A)$ if $(z-T)^{-1}\in \mathcal{C}^{\sharp}(A)$, for $\sharp \in \{k; k\text{,u}; 1\text{,1} \}$. This definition does not depend on the choice of $z \in \C \setminus \sigma(T)$, and furthermore if $T$ is bounded and self-adjoint then the two definitions coincide, see \cite[Lemma 6.2.1]{ABG}. If $T \in \mathcal{C}^1(A)$, one shows that $[T, \i A]_{\circ} \in \mathcal{B}(\mathcal{D}(T),\mathcal{D}(T)^*)$ and that the following formula holds:
\begin{equation}
\label{CommutatorResolvent}
[(z-T)^{-1}, \i A]_{\circ} = (z-T)^{-1} [T,\i A]_{\circ} (z-T)^{-1}.
\end{equation}
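Formula \eqref{CommutatorResolvent} admits the following formal derivation, which can be made rigorous for $T \in \mathcal{C}^1(A)$. Setting $T_t := e^{-\i tA} T e^{\i tA}$, one has $e^{-\i tA} (z-T)^{-1} e^{\i tA} = (z-T_t)^{-1}$, and differentiating at $t=0$ yields
\begin{equation*}
[(z-T)^{-1}, \i A]_{\circ} = (z-T)^{-1} \Big( \frac{d}{dt}\Big|_{t=0} T_t \Big) (z-T)^{-1} = (z-T)^{-1} [T, \i A]_{\circ} (z-T)^{-1},
\end{equation*}
using $\frac{d}{dt} X(t)^{-1} = - X(t)^{-1} X'(t) X(t)^{-1}$ with $X(t) := z - T_t$.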
These definitions can be refined. Let $\mathcal{G}$ and $\H$ be Hilbert spaces verifying the following continuous and dense embeddings $\mathcal{G} \subset \H = \H^* \subset \mathcal{G}^*$, where we have identified $\H$ with its antidual $\H^*$ by the Riesz isomorphism Theorem. Let $A$ be a self-adjoint operator on $\H$, and suppose that the unitary group $\{e^{\i tA}\}_{t \in \R}$ stabilizes $\mathcal{G}$. Then by duality it stabilizes $\mathcal{G}^*$. Let $T$ be a self-adjoint operator on $\H$ belonging to $\mathcal{B}(\mathcal{G}, \mathcal{G}^*)$ and consider the map
\begin{equation}
\label{DefC1GG}
\R \ni t \mapsto e^{-\i tA} Te^{\i tA} \in \mathcal{B}(\mathcal{G},\mathcal{G}^*).
\end{equation}
If this map is of class $\mathcal{C}^k(\R; \mathcal{B}(\mathcal{G},\mathcal{G}^*))$, with $\mathcal{B}(\mathcal{G},\mathcal{G}^*)$ endowed with the strong operator topology, we say that $T \in \mathcal{C}^k(A; \mathcal{G}, \mathcal{G}^*)$; whereas if the map is of class $\mathcal{C}^{k}(\R ; \mathcal{B}(\mathcal{G},\mathcal{G}^*))$, with $\mathcal{B}(\mathcal{G},\mathcal{G}^*)$ endowed with the operator norm topology, we say that $T \in \mathcal{C}^{k,\text{u}}(A; \mathcal{G}, \mathcal{G}^*)$. If $T \in \mathcal{C}^1(A; \mathcal{G}, \mathcal{G}^*)$, then the derivative of the map \eqref{DefC1GG} at $t=0$ is denoted by $[T, \i A]_{\circ}$ and belongs to $\mathcal{B}(\mathcal{G},\mathcal{G}^*)$. Moreover, by \cite[Proposition 5.1.6]{ABG}, $T \in \mathcal{C}^{\sharp}(A; \mathcal{G}, \mathcal{G}^*)$ if and only if $(z-T)^{-1} \in\mathcal{C}^{\sharp}(A; \mathcal{G}^*, \mathcal{G})$ for all $z \in \C \setminus \sigma(T)$ and $\sharp \in \{ k; k\text{,u} \}$. This notably implies that $\mathcal{C}^{\sharp}(A; \mathcal{G}, \mathcal{G}^*) \subset \mathcal{C}^{\sharp}(A)$.
In the setting of Theorem \ref{Main}, $\mathcal{G} = \H^{1} := \mathcal{D}(\langle H_0 \rangle^{1/2})$, and $T$ stands for $H_0$, $V$ or $H$. In all cases $T\in \mathcal{B}(\H^1,\H^{-1})$. We also assume that $\{e^{\i tA}\}_{t \in \R}$ stabilizes $\H^1$, see \ref{item:A3}. Consider the map
\begin{equation}
\label{Map22}
\R \ni t \mapsto \langle H_0 \rangle ^{-1/2} e^{-\i tA} T e^{\i tA} \langle H_0 \rangle ^{-1/2} \in \mathcal{B}(\H).
\end{equation}
The latter operator indeed belongs to $\mathcal{B}(\H)$, since the factors compose as follows:
\begin{equation*}
\underbrace{\langle H_0 \rangle ^{-1/2}}_{\in \mathcal{B}(\H^{-1},\H)} \underbrace{e^{-\i tA}}_{\in \mathcal{B}(\H^{-1},\H^{-1})} \underbrace{T}_{\in \mathcal{B}(\H^{1}, \H^{-1})} \underbrace{e^{\i tA}}_{\in \mathcal{B}(\H^1,\H^1)} \underbrace{\langle H_0 \rangle ^{-1/2}}_{\in \mathcal{B}(\H, \H^1)}.
\end{equation*}
We remark that $T \in \mathcal{C}^{k} (A; \H^1,\H^{-1})$ is equivalent to the map \eqref{Map22} being of class $\mathcal{C}^k(\R; \mathcal{B}(\H))$, with $\mathcal{B}(\H)$ endowed with the strong operator topology; whereas $T \in \mathcal{C}^{k,\text{u}} (A; \H^1,\H^{-1})$ is equivalent to the map being of class $\mathcal{C}^k(\R; \mathcal{B}(\H))$, with $\mathcal{B}(\H)$ endowed with the operator norm topology.
In many applications, the free operator $H_0$ has a nice regularity with respect to the conjugate operator $A$, i.e.\ $H_0 \in \mathcal{C}^k(A; \mathcal{G}, \mathcal{G}^*)$ for some $k \geqslant 2$ and for some $\mathcal{G} \subset \H$. However, the perturbation $V$ typically does not have much regularity w.r.t.\ $A$, and showing that $V$ is of class $\mathcal{C}^{1,\text{u}}(A; \mathcal{G},\mathcal{G}^*)$ directly from the definition is usually not very practical. To ease this difficulty, we provide the following criterion. Its proof is inspired by \cite[Lemma 8.5]{Ge}.
\begin{proposition}
\label{PropC1UU}
Suppose that $T \in \mathcal K( \H^1,\H^{-1}) \cap \mathcal{C}^1(A; \H^1,\H^{-1})$. Then $T \in \mathcal{C}^{1,\rm{u}}(A; \H^1,\H^{-1})$ if and only if $[T,\i A]_{\circ} \in \mathcal K( \H^1,\H^{-1})$.
\end{proposition}
\begin{remark}
\label{remqa}
The proof actually shows that if $T \in \mathcal{B}(\H^1,\H^{-1}) \cap \mathcal{C}^1(A; \H^1,\H^{-1})$ and $[T,\i A]_{\circ} \in \mathcal K(\H^1,\H^{-1})$, then $T \in \mathcal{C}^{1,\rm{u}}(A; \H^1,\H^{-1})$. Thus the compactness of $T$ is needed only for the direct implication $T \in \mathcal{C}^{1,\rm{u}}(A; \H^1,\H^{-1}) \Rightarrow [T,\i A]_{\circ} \in \mathcal K(\H^1,\H^{-1})$ in Proposition \ref{PropC1UU}.
\end{remark}
\begin{remark}
\label{Remark2comp}
Adapting the proof of Proposition \ref{PropC1UU}, one can see that the results of Proposition \ref{PropC1UU} and Remark \ref{remqa} are still valid if $\mathcal K(\H^1,\H^{-1})$ (resp.\ $\mathcal{C}^1(A; \H^1,\H^{-1})$, resp.\ $\mathcal{C}^{1,\rm{u}}(A; \H^1,\H^{-1})$, resp.\ $\mathcal{B}(\H^1,\H^{-1})$) is replaced by $\mathcal K(\H)$ (resp.\ $\mathcal{C}^1(A)$, resp.\ $\mathcal{C}^{1,\rm{u}}(A)$, resp.\ $\mathcal{B}(\H)$).
\end{remark}
\begin{proof}
We start with the easier of the two implications, namely $T \in \mathcal{C}^{1,\text{u}}(A; \H^1,\H^{-1})$ implies $[T,\i A]_{\circ} \in \mathcal K(\H^1,\H^{-1})$. Let
\begin{align*}
\R \ni t & \mapsto \Lambda (t) := \langle H_0 \rangle ^{-1/2} e^{-\i tA} T e^{\i tA} \langle H_0 \rangle ^{-1/2} \in \mathcal{B}(\H).
\end{align*}
To say that $T \in \mathcal{C}^{1,\text{u}}(A;\H^1,\H^{-1})$ is equivalent to saying that $\Lambda$ is of class $\mathcal{C}^1(\R,\mathcal{B}(\H))$, with $\mathcal{B}(\H)$ endowed with the operator norm topology. Since
\begin{equation*}
\langle H_0 \rangle ^{-1/2} [T,\i A]_{\circ} \langle H_0 \rangle ^{-1/2} = \lim \limits_{t \to 0} \frac{\Lambda(t)-\Lambda(0)}{t}
\end{equation*}
holds w.r.t.\ the operator norm on $\mathcal{B}(\H)$ and $\Lambda(t) -\Lambda(0)$ is equal to
\begin{equation*}
\underbrace{\langle H_0 \rangle ^{-1/2} e^{-\i tA} \langle H_0 \rangle ^{1/2}}_{\in \ \mathcal{B}(\H)} \underbrace{\langle H_0 \rangle ^{-1/2} T \langle H_0 \rangle ^{-1/2}}_{\in \ \mathcal K(\H)} \underbrace{\langle H_0 \rangle ^{1/2} e^{\i tA} \langle H_0 \rangle ^{-1/2}}_{\in \ \mathcal{B}(\H)} - \underbrace{\langle H_0 \rangle ^{-1/2} T \langle H_0 \rangle ^{-1/2}}_{\in \ \mathcal K(\H)},
\end{equation*}
we see that $\langle H_0 \rangle ^{-1/2} [T,\i A]_{\circ} \langle H_0 \rangle ^{-1/2} \in \mathcal K(\H)$ as a norm limit of compact operators. Hence $ [T,\i A]_{\circ} \in \mathcal K(\H^1,\H^{-1})$.
We now show the reverse implication. We have to show that the map $\Lambda$ is of class $\mathcal{C}^1(\R, \mathcal{B}(\H))$. By the group property of $t \mapsto e^{\i tA}$, this is the case if and only if $\Lambda$ is differentiable at $t=0$ with derivative continuous at $t=0$. Let
\begin{equation*}
\ell (t) := \langle H_0 \rangle ^{-1/2} e^{-\i tA} [T,\i A]_{\circ} e^{\i tA} \langle H_0 \rangle ^{-1/2} \in \mathcal{B}(\H).
\end{equation*}
The following equality holds strongly in $\mathcal{H}$ for all $t>0$ due to the fact that $T \in \mathcal{C}^1(A;\H^1,\H^{-1})$:
\begin{equation}
\label{IntegralCommutator1}
\frac{\Lambda(t) -\Lambda(0)}{t} - \ell(0) = \frac{1}{t} \int_0 ^t \langle H_0 \rangle ^{-1/2} \left(e^{-\i \tau A} [T,\i A]_{\circ} e^{\i \tau A} - [T,\i A]_{\circ}\right) \langle H_0 \rangle ^{-1/2} d\tau.
\end{equation}
Let us estimate the integrand:
\begin{align}
\begin{split}
\label{Triangle1}
& \quad \ \big\|\langle H_0 \rangle ^{-1/2}\left(e^{-\i \tau A} [T,\i A]_{\circ} e^{\i \tau A} - [T,\i A]_{\circ}\right) \langle H_0 \rangle ^{-1/2} \big\| \\
& \leqslant \big\|\langle H_0 \rangle ^{-1/2} \left(e^{-\i \tau A} [T,\i A]_{\circ} e^{\i \tau A} - e^{-\i \tau A} [T,\i A]_{\circ}\right) \langle H_0 \rangle ^{-1/2} \big \| \\
& \quad + \big \|\langle H_0 \rangle ^{-1/2} \left(e^{-\i \tau A} [T,\i A]_{\circ} - [T,\i A]_{\circ}\right) \langle H_0 \rangle ^{-1/2} \big\| \\
& \leqslant \Big\| \underbrace{\langle H_0 \rangle ^{-1/2} e^{-\i \tau A}\langle H_0 \rangle ^{1/2}}_{\| \cdot \| \leqslant 1} \underbrace{\langle H_0 \rangle ^{-1/2} [T,\i A]_{\circ} \langle H_0 \rangle ^{-1/2}}_{\in \ \mathcal K(\H)} \underbrace{\left(\langle H_0 \rangle ^{1/2} e^{\i \tau A}\langle H_0 \rangle ^{-1/2} -I\right)}_{\xrightarrow[]{s} 0} \Big\| \\
& \quad + \Big \|\underbrace{\left(\langle H_0 \rangle ^{-1/2}e^{-\i \tau A}\langle H_0 \rangle ^{1/2} -I \right)}_{\xrightarrow[]{s} 0} \underbrace{\langle H_0 \rangle ^{-1/2}[T,\i A]_{\circ} \langle H_0 \rangle ^{-1/2}}_{\in \ \mathcal K(\H)} \Big \|.
\end{split}
\end{align}
Thus the integrand of \eqref{IntegralCommutator1} converges in norm to zero as $t$ goes to zero. It follows that the l.h.s.\ of \eqref{IntegralCommutator1} converges in norm to zero, showing that $\Lambda'(0) = \ell (0)$. It easily follows that $\Lambda'(t) = \ell (t)$ for all $t \in \R$. Again invoking \eqref{Triangle1} shows that $\Lambda'$ is continuous at $t=0$, completing the proof.
\qed
\end{proof}
\section{A few words about the Mourre estimate}
\label{MourreEstimateDiscussion}
This section is based on the content of \cite[Section 7.2]{ABG}, where the results are presented for a self-adjoint operator $T \in \mathcal{C}^1(A)$, which (we recall) contains the $\mathcal{C}^1(A; \mathcal{G},\mathcal{G}^*)$ class.
Let $T$ be a self-adjoint operator on $\H$ with domain $\mathcal{D}(T) \subset \H$. Let $\mathcal{G}$ be a subspace such that
\[ \mathcal{D}(T) \subset \mathcal{G} \subset \mathcal{D}(\langle T \rangle ^{1/2}) \subset \H = \H^* \subset \mathcal{D}(\langle T \rangle ^{1/2})^* \subset \mathcal{G}^* \subset \mathcal{D}(T)^*.\]
If $T \in \mathcal{C}^1(A; \mathcal{G}, \mathcal{G}^*)$, then in particular $[T, \i A]_{\circ} \in \mathcal{B}(\mathcal{G}, \mathcal{G}^*)$. If $\mathcal{I} \subset \R$ is a bounded interval, then $E_{\mathcal{I}}(T) \in \mathcal{B}(\H, \mathcal{G})$ and by duality $E_{\mathcal{I}}(T) \in \mathcal{B}(\mathcal{G}^*, \H)$. We say that the \textit{Mourre estimate} holds for $T$ w.r.t.\ $A$ on the bounded interval $\mathcal{I}$ if there exist $c>0$ and $K \in \mathcal K(\H)$ such that
\begin{equation}
\label{MourreEst}
E_{\mathcal{I}}(T)[T, \i A]_{\circ} E_{\mathcal{I}}(T) \geqslant c E_{\mathcal{I}}(T) + K
\end{equation}
in the form sense on $\H \times \H$. Note that both the l.h.s.\ and r.h.s.\ of \eqref{MourreEst} are well-defined bounded operators on $\H$. As a reminder, if this estimate holds, then the total multiplicity of eigenvalues of $T$ in $\mathcal{I}$ is finite by \cite[Corollary 7.2.11]{ABG}, whereas if the estimate holds with $K=0$, then $\mathcal{I}$ is void of eigenvalues, as a result of the Virial Theorem \cite[Proposition 7.2.10]{ABG}. We let $\mu^A(T)$ be the collection of points $\lambda$ admitting a neighborhood on which the Mourre estimate holds, i.e.\
\begin{equation*}
\mu^A(T) := \{ \lambda \in \R : \exists c>0, K \in \mathcal K(\H) \ \text{and} \ \mathcal{I} \ \text{open for which} \ \eqref{MourreEst} \ \text{holds for} \ T \ \text{on} \ \mathcal{I} \ \text{and} \ \lambda \in \mathcal{I} \}.
\end{equation*}
This is an open set. It is natural to introduce a function defined on $\mu^A(T)$ which gives the best constant $c>0$ that can be achieved in the Mourre estimate, i.e.\ for $\lambda \in \mu^A(T)$, let
\begin{equation*}
\varrho_T ^A (\lambda) := \sup _{\mathcal{I} \ni \lambda} \big \{ \sup \{ c \in \R : E_{\mathcal{I}}(T) [T, \i A]_{\circ} E_{\mathcal{I}}(T) \geqslant c E_{\mathcal{I}}(T) + K, \ \text{for some} \ K \in \mathcal K(\H) \} \big \}.
\end{equation*}
Equivalent definitions and various properties of the $\varrho_T ^A$ function are given in \cite[Section 7.2]{ABG}. One very useful result, which we shall use below, is the following:
\begin{proposition}
\label{conjugateResolvent}
\cite[Proposition 7.2.7]{ABG}
Suppose that $T$ has a spectral gap and that $T \in \mathcal{C}^1(A)$. Let $R(\varsigma) := (\varsigma -T)^{-1}$, where $\varsigma$ is a real number in the resolvent set of $T$. Then
\begin{equation}
\label{ConvertResolvent}
\varrho_{T}^A(\lambda) = (\varsigma-\lambda)^2 \varrho_{R(\varsigma)}^A((\varsigma-\lambda)^{-1}).
\end{equation}
In particular, $T$ is conjugate to $A$ at $\lambda$ if and only if $R(\varsigma) $ is conjugate to $A$ at $(\varsigma-\lambda)^{-1}$.
\end{proposition}
As a side note, this Proposition is stated without proof in \cite{ABG}, so we indicate to the reader that it may be proven along the same lines as \cite[Proposition 7.2.5]{ABG}, together with the following Lemma, which is the equivalent of \cite[Proposition 7.2.1]{ABG}. Denote by $\mathcal{I}(\lambda;\epsilon)$ the open interval of radius $\epsilon$ centered at $\lambda$.
\begin{Lemma} Suppose that $T \in \mathcal{C}^1(A)$. If $\lambda \notin \sigma_{\rm{ess}}(T)$, then $\varrho^A_T(\lambda) = + \infty$. If $\lambda \in \sigma_{\rm{ess}}(T)$, then $\varrho^A_T(\lambda)$ is finite and given by
\begin{equation*}
\varrho^A_T(\lambda) = \lim \limits_{\epsilon \to 0^+} \inf \big\{ \langle \psi, [T,\i A]_{\circ} \psi \rangle : \psi \in \H, \|\psi \| =1 \ \text{and} \ E_{\mathcal{I}(\lambda;\epsilon)}(T) \psi = \psi \big \}.
\end{equation*}
Furthermore, there is a sequence $(\psi_n)_{n=1}^{\infty}$ of vectors such that $\psi_n \in \H$, $\|\psi_n\| = 1$, $\langle \psi_n, \psi_m \rangle = \delta_{nm}$, $E_{\mathcal{I}(\lambda;1/n)}(T)\psi_n = \psi_n$ and $\lim _{n\to \infty} \langle \psi_n, [T, \i A]_{\circ} \psi_n \rangle = \varrho^A_T(\lambda)$.
\end{Lemma}
We will be employing formula \eqref{ConvertResolvent} in the proof of Theorem \ref{Main}, but for the moment we apply it to show that, under the assumptions of Theorem \ref{Main}, $H$ and $H_0$ share the same points where a Mourre estimate holds. This observation is made after \cite[Theorem 7.2.9]{ABG}. Let $R(z) := (z-T)^{-1}$ and $R_0(z) := (z-T_0)^{-1}$.
\begin{Lemma}
\label{Lemma1}
Let $T_0$, $T$ and $A$ be self-adjoint operators on $\H$. Let $T_0$ have a spectral gap, and suppose that $T,T_0 \in \mathcal{C}^{1,\rm{u}}(A)$. If $R(\i)-R_0(\i) \in \mathcal K(\H)$ then $\mu^A(T)=\mu^A(T_0)$.
\end{Lemma}
\begin{remark}
\textnormal{The assumptions of Theorem \ref{Main} fulfill the requirements of this Lemma, with $(T_0,T)=(H_0,H)$. Indeed, $\mathcal{D}(\langle H \rangle ^{1/2}) = \mathcal{D}(\langle H_0 \rangle ^{1/2})$ implies the compactness of $R(\i) - R_0(\i)$:
\begin{equation*}
R(\i) - R_0(\i) = R(\i) V R_0(\i) = \underbrace{R(\i) \langle H \rangle ^{1/2}}_{\in \mathcal{B}(\H)} \underbrace{\langle H \rangle ^{-1/2} \langle H_0 \rangle ^{1/2}}_{\in \mathcal{B}(\H)} \underbrace{\langle H_0 \rangle ^{-1/2} V \langle H_0 \rangle ^{-1/2}}_{\in \mathcal K(\H) \ \text{by} \ \ref{item:A5}} \underbrace{\langle H_0 \rangle ^{1/2} R_0(\i)}_{\in \mathcal{B}(\H)}.
\end{equation*}
}
\end{remark}
\begin{proof}
Firstly, the assumption that $R(\i) - R_0(\i)$ is compact implies $\sigma_{\text{ess}}(T_0) = \sigma_{\text{ess}}(T)$. Because $T_0$ has a spectral gap, $\sigma_{\text{ess}}(T_0) = \sigma_{\text{ess}}(T) \neq \R$, and therefore there exists $\varsigma \in \R \setminus (\sigma(T) \cup \sigma(T_0))$. For all $z,z' \in \R \setminus (\sigma(T) \cup \sigma(T_0))$, the following identity holds:
\begin{equation*}
R(z) - R_0(z) = [I +(z' -z)R(z)][R(z')-R_0(z')][I+(z'-z)R_0(z)].
\end{equation*}
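One checks this identity by expanding the product: the first resolvent equation gives $[I+(z'-z)R(z)]R(z') = R(z)$ and $R_0(z')[I+(z'-z)R_0(z)] = R_0(z)$, so the r.h.s.\ equals
\begin{equation*}
R(z)[I+(z'-z)R_0(z)] - [I+(z'-z)R(z)]R_0(z) = R(z) - R_0(z),
\end{equation*}
the cross terms $(z'-z)R(z)R_0(z)$ cancelling out.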
Thus $R(\varsigma) - R_0(\varsigma)$ is compact. To simplify notation in what follows, let $R_0 := R_0(\varsigma)$ and $R := R(\varsigma)$.
Secondly, if $\lambda \in \mu^A(T_0)$, then $(\varsigma-\lambda)^{-1} \in \mu^A(R_0)$ by Proposition \ref{conjugateResolvent}, and so there is an open interval $\mathcal{I} \ni (\varsigma-\lambda)^{-1}$, $c>0$ and a compact $K$ such that
\begin{equation*}
E_{\mathcal{I}}(R_0) [R_0, \i A]_{\circ} E_{\mathcal{I}}(R_0) \geqslant c E_{\mathcal{I}}(R_0) + K.
\end{equation*}
Multiplying on the right and left by $\theta(R_0)$, where $\theta \in \mathcal{C}_c^{\infty}(\R)$ is a bump function supported in a neighborhood of $(\varsigma-\lambda)^{-1}$ and equal to one on a smaller neighborhood of this point, we get
\begin{equation*}
\label{MourreEst1}
\theta(R_0) [R_0, \i A]_{\circ}\theta(R_0) \geqslant c \theta^2(R_0) + \text{compact}.
\end{equation*}
By the Helffer-Sj\"otrand formula and the fact that $R(z)-R_0(z)$ is compact for all $z \in \C \setminus \R$, we see that $\theta(R)-\theta(R_0)$ is compact, and likewise for $\theta^2(R)-\theta^2(R_0)$. Note also that $R_0 - R \in \mathcal{C}^{1,\rm u}(A)$ and so by Remark \ref{Remark2comp}, $[R_0 -R, \i A]_{\circ} \in \mathcal K(\H)$. Thus exchanging $R_0$ for $R$, $\theta(R_0)$ for $\theta(R)$, and $\theta^2(R_0)$ for $\theta^2(R)$ in the previous inequality, we have
\begin{equation*}
\theta(R) [R, \i A]_{\circ}\theta(R) \geqslant c \theta^2(R) + \text{compact}.
\end{equation*}
Let $\mathcal{I}'$ be an open interval containing $(\varsigma-\lambda)^{-1}$ and contained in $\theta^{-1}(\{1\})$. Applying $E_{\mathcal{I}'}(R)$ to the left and right of this inequality shows that the Mourre estimate holds for $R$ in a neighborhood of $(\varsigma-\lambda)^{-1}$. Thus $\lambda \in \mu^A(T)$ by Proposition \ref{conjugateResolvent}, and this shows $\mu^A(T_0) \subset \mu^A(T)$. Exchanging the roles of $T$ and $T_0$ shows the reverse inclusion.
\qed
\end{proof}
\section{Examples of Schr\"odinger operators}
\label{Section:Examples}
\subsection{The case of continuous Schr\"odinger operators}
\label{ex:3}
Our first application is to continuous Schr\"odinger operators. The setting has already been described in Example \ref{ex:1} for the most part. For an integer $d \geqslant 1$, we consider the Hilbert space $\H := L^2(\R^d)$.
The free operator is the Laplacian $H_0 := - \Delta = - \sum _{i=1} ^d \partial^2 / \partial x_i^2$ with domain the Sobolev space $\H^2 := \H^2(\R^d)$.
Then $H_0$ is a positive operator with purely absolutely continuous spectrum and $\sigma(H_0) = [0,+\infty)$. Let $Q$ be the operator of multiplication by $x=(x_1,...,x_d) \in \R^d$, and let $P:= -\i \nabla$. Set
\[H:=H_0+V_{\rm sr}(Q)+V_{\rm lr}(Q),\]
where $V_{\rm sr}(x)$ and $V_{\rm lr}(x)$ are real-valued functions belonging to $L^{\infty}(\R^d)$, satisfying $V_{\rm sr}(x)$, $V_{\rm lr}(x) = o(1)$ at infinity. Then $V_{\rm sr}(Q)$ and $V_{\rm lr}(Q)$ are bounded self-adjoint operators on $\H$ which are relatively form-compact with respect to $H_0$, i.e.\ $V_{\rm sr}(Q), V_{\rm lr}(Q) \in \mathcal K(\H^1,\H^{-1})$, where $\H^1$ denotes the form domain of $H_0$ (i.e., the Sobolev space $\H^1(\R^d)$). The latter is a direct consequence of the following standard fact:
\begin{proposition}
\label{KnownFourierDecay}
Let $f,g$ be bounded Borel measurable functions on $\R^d$ which vanish at infinity. Then $g(Q)f(P) \in \mathcal K(L^2(\R^d))$.
\end{proposition}
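Here is a sketch of the standard proof of Proposition \ref{KnownFourierDecay}. If in addition $f, g \in L^2(\R^d)$, then $g(Q)f(P)$ has integral kernel $(2\pi)^{-d/2} g(x) \check{f}(x-y)$, where $\check{f}$ denotes the inverse Fourier transform of $f$; this kernel is square-integrable on $\R^{2d}$, so $g(Q)f(P)$ is Hilbert-Schmidt, hence compact. In the general case, set $f_n := f \cdot \mathbf{1}_{[-n,n]^d}$ and $g_n := g \cdot \mathbf{1}_{[-n,n]^d}$; since $f$ and $g$ vanish at infinity, $\|f-f_n\|_{\infty} \to 0$ and $\|g-g_n\|_{\infty} \to 0$, whence
\begin{equation*}
\|g(Q)f(P) - g_n(Q)f_n(P)\| \leqslant \|g-g_n\|_{\infty} \|f\|_{\infty} + \|g_n\|_{\infty} \|f-f_n\|_{\infty} \to 0,
\end{equation*}
and $g(Q)f(P)$ is compact as a norm limit of compact operators.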
Assumptions \ref{item:A1} - \ref{item:A5} are verified. We add that $\sigma_{\rm ess}(H)=[0,+\infty)$ by Weyl's Theorem on relatively compact perturbations.
Moving forward, we use the following results:
\begin{proposition}\cite[p.\ 258]{ABG}
\label{c1invariant}
Let $T$ and $A$ be self-adjoint operators in a Hilbert space $\mathscr{H}$ and denote $\mathscr{H}^1 := \mathcal{D}(\langle T \rangle^{1/2})$, the form domain of $T$, and $\mathscr{H}^{-1} := (\mathscr{H}^1)^*$. Suppose that $e^{\i tA} \mathscr{H}^1 \subset \mathscr{H}^1$ for all $t \in \R$. Then the following are equivalent:
\begin{enumerate}
\item $T \in \mathcal{C}^1(A; \mathscr{H}^1,\mathscr{H}^{-1})$
\item The form $[T, \i A]$ defined on $\mathcal{D}(T) \cap \mathcal{D}(A)$ extends to an operator in $\mathcal{B}(\mathscr{H}^1, \mathscr{H}^{-1})$.
\end{enumerate}
In this case, the derivative of the map $t \mapsto e^{-\i t A} T e^{\i t A}$ at $t=0$, which is denoted $[T, \i A]_{\circ}$ (see Section \ref{RegularityClasses}), coincides precisely with the bounded extension of the form $[T, \i A]$.
\end{proposition}
\begin{remark}
The form $[T, \i A]$ is defined for $\psi, \phi \in \mathcal{D}(T) \cap \mathcal{D}(A)$ as follows:
\[ \langle \psi, [T, \i A] \phi \rangle := \langle T^* \psi, \i A \phi \rangle - \langle A^* \psi, \i T \phi \rangle = \langle T \psi, \i A \phi \rangle - \langle A \psi, \i T \phi \rangle.\]
The last equality holds because $T$ and $A$ are assumed to be self-adjoint.
\end{remark}
\begin{proposition}\cite[p.\ 258]{ABG}
\label{c2invariant}
Under the same assumptions as that of Proposition \ref{c1invariant}, the following are equivalent:
\begin{enumerate}
\item $T \in \mathcal{C}^2(A; \mathscr{H}^1,\mathscr{H}^{-1})$
\item The forms $[T,\i A]$ and $[[T, \i A]_{\circ}, \i A]$ defined on $\mathcal{D}(T) \cap \mathcal{D}(A)$ extend to operators in $\mathcal{B}(\mathscr{H}^1, \mathscr{H}^{-1})$.
\end{enumerate}
\end{proposition}
Let $A:= (Q\cdot P + P \cdot Q)/2$ be the generator of dilations, which is essentially self-adjoint on the Schwartz space $\mathcal{S}(\R^d)$. The relation
\begin{equation*}
(e^{\i tA} \psi) (x) = e^{td/2} \psi(e^t x), \quad \text{for all} \ \psi \in L^2(\R^d), x \in \R^d
\end{equation*}
implies that $\{e^{\i tA}\}_{t \in \R}$ stabilizes $\H^2(\R^d)$, and thus $\H^{\theta}(\R^d)$ for all $\theta \in [-2,2]$ by duality and interpolation. Thus \ref{item:A3} holds.
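As a consistency check, differentiating this relation at $t=0$ for $\psi \in \mathcal{S}(\R^d)$ recovers the generator:
\begin{equation*}
\frac{d}{dt}\Big|_{t=0} e^{td/2} \psi(e^t x) = \Big( \frac{d}{2} + x \cdot \nabla \Big) \psi(x) = (\i A \psi)(x),
\end{equation*}
since $\i A = \i (Q \cdot P + P \cdot Q)/2 = x \cdot \nabla + d/2$ on $\mathcal{S}(\R^d)$.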
A straightforward computation gives
\begin{equation}
\label{commutatorH0}
\langle \psi, [H_0, \i A] \phi \rangle = \langle \psi, 2H_0 \phi \rangle
\end{equation}
for all $\psi,\phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. We see that $[H_0,\i A]$ extends to an operator in $\mathcal{B}(\H^1,\H^{-1})$, thereby implying that $H_0 \in \mathcal{C}^1(A;\H^1,\H^{-1})$ by Proposition \ref{c1invariant}. The strict Mourre estimate holds for $H_0$ with respect to $A$ on all intervals $\mathcal{I}$ verifying $\overline{\mathcal{I}} \subset (0,+\infty)$. In particular, $\mu^A(H_0) = (0, +\infty)$.
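Indeed, since $[H_0, \i A]_{\circ} = 2H_0$, the spectral theorem gives, for any such interval,
\begin{equation*}
E_{\mathcal{I}}(H_0) [H_0, \i A]_{\circ} E_{\mathcal{I}}(H_0) = 2 H_0 E_{\mathcal{I}}(H_0) \geqslant 2 \inf(\overline{\mathcal{I}}) \, E_{\mathcal{I}}(H_0),
\end{equation*}
which is \eqref{MourreEst} with $K=0$ and $c = 2\inf(\overline{\mathcal{I}}) > 0$; shrinking $\mathcal{I}$ around $\lambda > 0$ shows moreover that $\varrho^A_{H_0}(\lambda) \geqslant 2\lambda$.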
From \eqref{commutatorH0} it is immediate that
\begin{equation*}
\langle \psi, [[H_0, \i A]_{\circ}, \i A] \phi \rangle = \langle \psi, 4 H_0 \phi \rangle
\end{equation*}
holds for all $\psi,\phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. In particular, $[[H_0, \i A]_{\circ}, \i A]$ extends to an operator in $\mathcal{B}(\H^1,\H^{-1})$. Applying Proposition \ref{c2invariant} gives $H_0 \in \mathcal{C}^2(A;\H^1,\H^{-1})$ and \ref{item:A2} is fulfilled.
We now examine the commutator between the full Hamiltonian $H := H_0 + V_{\rm lr}(Q)+V_{\rm sr}(Q)$ and $A$. Since $V_{\rm lr}(x)$ and $V_{\rm sr}(x)$ are assumed to be real-valued bounded functions, $\mathcal{D}(H) = \mathcal{D}(H_0)$. So we consider the form
\begin{equation}
\label{commutatorHH}
\langle \psi, [H, \i A] \phi \rangle = \langle \psi, [H_0, \i A] \phi \rangle + \langle \psi, [V_{\rm lr}(Q), \i A] \phi \rangle + \langle \psi, [V_{\rm sr}(Q), \i A] \phi \rangle
\end{equation}
defined for all $\psi,\phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. By linearity, we may treat each commutator form separately and the one with $[H_0, \i A]$ has already been dealt with.
For the long-range potential, we now additionally assume that $x \cdot \nabla V_{\rm lr}(x)$ exists as a function and belongs to $L^{\infty}(\R^d)$. A computation gives
\[ \langle \psi, [V_{\rm lr}(Q), \i A] \phi \rangle = - \langle \psi, Q \cdot \nabla V_{\rm lr}(Q) \phi \rangle, \]
for all $\psi, \phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. This shows that $[V_{\rm lr}(Q), \i A]$ extends to an operator in $\mathcal{B}(\H^1, \H^{-1})$.
For the short-range potential, we now additionally assume that $\langle x \rangle V_{\rm sr}(x)$ belongs to $L^{\infty}(\R^d)$. For $\psi, \phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$, we have
\begin{align*}
\langle \psi, [ V_{\rm sr}(Q), \i A ] \phi \rangle &= \langle V_{\rm sr}(Q) \psi, \i A \phi \rangle + \langle \i A \psi, V_{\rm sr}(Q) \phi \rangle \\
&= \langle \langle Q \rangle V_{\rm sr}(Q) \psi, \langle Q \rangle ^{-1} (\i Q \cdot P + d/2) \phi \rangle + \langle \langle Q \rangle ^{-1} (\i Q \cdot P + d/2) \psi, \langle Q \rangle V_{\rm sr}(Q) \phi \rangle.
\end{align*}
We handle the operator in the first inner product on the r.h.s.\ of the previous equation. Note that $\langle Q \rangle ^{-1} (\i Q \cdot P + d/2) \in \mathcal{B}(\H^1, \H)$ and $\langle Q \rangle V_{\rm sr}(Q) \in \mathcal{B}(\H, \H^{-1})$. Thus
\begin{equation}
\label{firstoperator}
\langle Q \rangle V_{\rm sr}(Q) \times \langle Q \rangle ^{-1} (\i Q \cdot P + d/2)
\end{equation}
belongs to $\mathcal{B}(\H^1, \H^{-1})$. The operator in the second inner product on the r.h.s.\ of the previous equation also belongs to $\mathcal{B}(\H^1, \H^{-1})$, because it is the adjoint of \eqref{firstoperator}. We conclude that $[V_{\rm sr}(Q), \i A]$ extends to an operator in $\mathcal{B}(\H^1, \H^{-1})$.
Putting everything together, we see that the form $[H, \i A]$ given in \eqref{commutatorHH} extends to an operator in $\mathcal{B}(\H^1, \H^{-1})$. Applying Proposition \ref{c1invariant} gives $H \in \mathcal{C}^1(A; \H^1, \H^{-1})$. Writing $V_{\rm lr}(Q) + V_{\rm sr}(Q) = H - H_0$, we see that both $V_{\rm lr}(Q)$ and $V_{\rm sr}(Q)$ belong to $\mathcal{C}^1(A; \H^1, \H^{-1})$. Of course we note that the bounded extensions of $[V_{\rm lr}(Q), \i A]$ and $[V_{\rm sr}(Q), \i A]$ are precisely $[V_{\rm lr}(Q), \i A]_{\circ}$ and $[V_{\rm sr}(Q), \i A]_{\circ}$ respectively.
If $x \cdot \nabla V_{\rm lr}(x) = o(1)$ at infinity is further assumed, then $\langle H_0 \rangle ^{-1/2} Q \cdot \nabla V_{\rm lr}(Q) \langle H_0 \rangle ^{-1/2}$ belongs to $\mathcal K(L^2(\R^d))$ by Proposition \ref{KnownFourierDecay}. In other words, $[V_{\rm lr}(Q), \i A]_{\circ} = -Q \cdot \nabla V_{\rm lr}(Q) \in \mathcal K(\H^1, \H^{-1})$. So under this additional assumption, $V_{\rm lr}(Q) \in \mathcal{C}^{1,\rm{u}}(A; \H^1, \H^{-1})$ by Proposition \ref{PropC1UU}.
If $\langle x \rangle V_{\rm sr}(x) = o(1)$ at infinity is further assumed, then $\langle H_0 \rangle ^{-1/2} \langle Q \rangle V_{\rm sr}(Q)$ belongs to $\mathcal K(L^2(\R^d))$ by Proposition \ref{KnownFourierDecay}. In other words, $\langle Q \rangle V_{\rm sr}(Q) \in \mathcal K(\H, \H^{-1})$. So \eqref{firstoperator} and its adjoint belong to $\mathcal K(\H^1,\H^{-1})$. Thus $[V_{\rm sr}(Q), \i A]_{\circ} \in \mathcal K(\H^1, \H^{-1})$. Applying Proposition \ref{PropC1UU} gives $V_{\rm sr}(Q) \in \mathcal{C}^{1,\rm{u}}(A; \H^1, \H^{-1})$. Assumption \ref{item:A6} is fulfilled.
The above calculations along with Theorems \ref{Main2} and \ref{Main} yield the following specific result for continuous Schr\"odinger operators:
\begin{theorem}
\label{ContSchrodingerThm}
Let $\H = L^2(\R^d)$, $H:=H_0+V_{\rm sr}(Q)+V_{\rm lr}(Q)$ and $A$ be as above, namely
\begin{enumerate}
\item $H_0 =-\Delta$ and $A = (Q \cdot P + P \cdot Q)/2$,
\item $V_{\rm sr}(x)$ and $V_{\rm lr}(x)$ are real-valued functions in $L^{\infty}(\R^d)$,
\item $\lim V_{\rm sr}(x) = \lim V_{\rm lr}(x) =0$ as $\| x\| \to +\infty$,
\item \label{Assumption4} $\lim \langle x \rangle V_{\rm sr}(x) = 0$ as $\| x\| \to +\infty$, and
\item \label{Assumption5} $x\cdot \nabla V_{\rm lr}(x)$ exists as a function, belongs to $L^{\infty}(\R^d)$, and $\lim x\cdot \nabla V_{\rm lr}(x) = 0$ as $\| x\| \to +\infty$.
\end{enumerate}
Then for all $\lambda \in (0,+\infty)$ there is a bounded open interval $\mathcal{I}$ containing $\lambda$ such that for all $s>0$ and $\psi \in \H$, propagation estimates \eqref{NewFormula3} and \eqref{NewFormula4} hold, and for all $s>1/2$, estimate \eqref{NewFormula} holds.
\end{theorem}
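For illustration, a concrete admissible pair of potentials (our choice, not taken from the references) is $V_{\rm sr}(x) := \langle x \rangle^{-1} (\log(2+|x|))^{-1}$ and $V_{\rm lr}(x) := (\log(2+|x|^2))^{-1}$. Both are bounded, real-valued and $o(1)$ at infinity; $\langle x \rangle V_{\rm sr}(x) = (\log(2+|x|))^{-1} \to 0$, so condition (\ref{Assumption4}) holds; and
\begin{equation*}
x \cdot \nabla V_{\rm lr}(x) = - \frac{2|x|^2}{(2+|x|^2) \log^2(2+|x|^2)} \longrightarrow 0 \quad \text{as} \ \|x\| \to +\infty,
\end{equation*}
so condition (\ref{Assumption5}) holds. Note that $V_{\rm lr}$ is genuinely long-range: it is not $O(\langle x \rangle^{-\epsilon})$ for any $\epsilon > 0$.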
\begin{remark}
As discussed above, Assumptions (1) - (5) of Theorem \ref{ContSchrodingerThm} imply that $V_{\rm sr}(Q)$ and $V_{\rm lr}(Q)$ belong to $\mathcal{C}^{1,\rm{u}}(A; \H^1,\H^{-1})$. In particular $H \in \mathcal{C}^{1,\rm{u}}(A)$. Moreover, $\mu^A(H) = \mu^A(H_0) = (0,+\infty)$, by Lemma \ref{Lemma1}.
\end{remark}
\begin{remark}
Notice that the condition $\ker(H-\lambda) \subset \mathcal{D}(A)$ that appears in the formulation of Theorems \ref{Main2} and \ref{Main} is totally absent here. This is because under the assumptions $\lim \langle x \rangle V_{\rm sr}(x) = \lim x\cdot \nabla V_{\rm lr}(x) = 0$ as $\| x\| \to +\infty$, it is well known from results going back to the 1960s that the continuous Schr\"odinger operator $H$ does not have any eigenvalues in $[0,+\infty)$; see the articles by Kato \cite{K2}, Simon \cite{Si} and Agmon \cite{A}.
\end{remark}
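As a concrete illustration of Theorem \ref{ContSchrodingerThm}, we record a simple pair of potentials satisfying conditions (2) - (5); this choice is ours and is easily checked:
\begin{equation*}
V_{\rm sr}(x) := \langle x \rangle ^{-1-\epsilon} \quad \text{and} \quad V_{\rm lr}(x) := \langle x \rangle ^{-\epsilon}, \qquad \epsilon > 0.
\end{equation*}
Indeed, $\langle x \rangle V_{\rm sr}(x) = \langle x \rangle ^{-\epsilon} \to 0$ as $\|x\| \to +\infty$, while $x \cdot \nabla V_{\rm lr}(x) = -\epsilon |x|^2 \langle x \rangle ^{-\epsilon-2}$ belongs to $L^{\infty}(\R^d)$ and tends to $0$ as $\|x\| \to +\infty$.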
To end this section, we justify the statements made in the last two rows of Table \ref{tabl1}. We refer the reader to \cite[Theorem 7.6.8]{ABG} for the proof that $H := H_0 + V_{\rm sr}(Q) + V_{\rm lr}(Q)$ belongs to $\mathcal{C}^{1,1}(A)$ whenever $\langle x \rangle V_{\rm sr}(x)$ and $x \cdot \nabla V_{\rm lr}(x)$ belong to $L^{\infty}(\R^d)$ and are $o(\langle x \rangle ^{-\epsilon})$ at infinity for some $\epsilon > 0$. As for the statement concerning the $\mathcal{C}^{2}(A)$ regularity, it remains to prove that if a potential $V(x)$ is a bounded real-valued function with $\langle x \rangle ^2 V(x) \in L^{\infty}(\R^d)$, then $H := H_0 + V(Q)$ belongs to $\mathcal{C}^{2}(A)$. Specifically, under these assumptions on $V$, we will show that $H$ belongs to $\mathcal{C}^2(A; \H^1,\H^{-1}) \subset \mathcal{C}^2(A)$.
\begin{comment}
\begin{proposition}\cite[p.\ 258]{ABG}
\label{c3invariant}
Let $T$ and $A$ be self-adjoint operators in a Hilbert space $\mathscr{H}$ and denote $\mathscr{H}^2 := \mathcal{D}(T)$, the domain of $T$, and $\mathscr{H}^{-2} := (\mathscr{H}^2)^*$. Suppose that $e^{\i tA} \mathscr{H}^2 \subset \mathscr{H}^{-2}$. Then the following are equivalent:
\begin{enumerate}
\item $T \in \mathcal{C}^2(A; \mathscr{H}^2,\mathscr{H}^{-2})$
\item The forms $[T,\i A]$ and $[[T, \i A]_{\circ}, \i A]$ defined on $\mathcal{D}(T) \cap \mathcal{D}(A)$ extend to operators \newline
in $\mathcal{B}(\mathscr{H}^2, \mathscr{H}^{-2})$.
\end{enumerate}
\end{proposition}
\end{comment}
Since $V(x)$ is bounded and real-valued, $\mathcal{D}(H) = \mathcal{D}(H_0)$, and so we first consider the form
\begin{equation*}
\label{commutatorHHH}
\langle \psi, [H, \i A] \phi \rangle = \langle \psi, [H_0, \i A] \phi \rangle + \langle \psi, [V(Q), \i A] \phi \rangle
\end{equation*}
defined for $\psi,\phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. The calculations from above imply that $[H, \i A]$ extends to an operator in $\mathcal{B}(\H^1, \H^{-1})$. Second we consider the form
\begin{equation*}
\label{secondcommutatorHHH}
\langle \psi, [[H, \i A]_{\circ}, \i A] \phi \rangle = \langle \psi, [[H_0, \i A]_{\circ}, \i A] \phi \rangle + \langle \psi, [[V(Q), \i A]_{\circ}, \i A] \phi \rangle
\end{equation*}
defined for $\psi, \phi \in \mathcal{D}(H_0) \cap \mathcal{D}(A)$. The first commutator form on the r.h.s.\ of this equation simplifies to $\langle \psi, 4 H_0 \phi \rangle$, while for the second commutator form we have:
\begin{align}
\langle \psi, [[V(Q), \i A]_{\circ}, \i A] \phi \rangle &= \big \langle [V(Q), \i A]_{\circ} ^* \psi, (\i A) \phi \big \rangle + \big \langle (\i A) \psi, [V(Q), \i A]_{\circ} \phi \big\rangle \nonumber \\
&= \big \langle [V(Q), \i A]_{\circ} \psi, (\i A) \phi \big \rangle + \big \langle (\i A) \psi, [V(Q), \i A]_{\circ} \phi \big\rangle.
\label{commut2}
\end{align}
Let us first have a look at the second term $\big \langle (\i A) \psi, [V(Q), \i A]_{\circ} \phi \big\rangle$. Since $\langle x \rangle V(x) \in L^{\infty}(\R^d)$, this term is equal to
$$\big \langle (\i A) \psi, \underbrace{\langle Q \rangle V(Q) \langle Q \rangle ^{-1} (\i Q \cdot P +d/2)}_{ := \ \Delta \ \in \ \mathcal{B}(\H^1,\H^{-1})} \phi \big\rangle + \big\langle (\i A) \psi, \underbrace{(-\i P \cdot Q +d/2) \langle Q \rangle ^{-1} \langle Q \rangle V(Q)}_{ = \ \Delta^* \ \in \ \mathcal{B}(\H^1,\H^{-1})} \phi \big\rangle.$$
Since $\langle x \rangle ^{2} V(x) \in L^{\infty}(\R^d)$, this is also equal to
$$\big \langle \langle Q \rangle ^{-1} (\i A) \psi, \underbrace{\langle Q \rangle ^{2} V(Q) \langle Q \rangle ^{-1} (\i Q \cdot P +d/2)}_{ := \ \tilde{\Delta} \ \in \ \mathcal{B}(\H^1,\H^{-1})} \phi \big\rangle +\big \langle (\i A) \psi, \underbrace{(-\i P \cdot Q +d/2) \langle Q \rangle ^{-2} \langle Q \rangle ^2 V(Q)}_{ = \ \Delta^* \ \in \ \mathcal{B}(\H^1,\H^{-1})} \phi \big\rangle. $$
Let $\mathcal{A}_Q := \langle Q \rangle ^{-1} (\i A) \in \mathcal{B}(\H^1,\H^{-1})$. Using the fact that $[(- \i P\cdot Q +d/2), \langle Q \rangle ^{-1} ] = [-\i P, \langle Q \rangle ^{-1} ] \cdot Q = \i |Q|^2 \langle Q \rangle ^{-3}$, we get that $\big \langle (\i A) \psi, [V(Q), \i A]_{\circ} \phi \big\rangle$ is equal to
$$\big \langle \mathcal{A}_Q \psi, \tilde{\Delta} \phi \big\rangle + \big \langle \mathcal{A}_Q \psi, \underbrace{(-\i P \cdot Q +d/2) \langle Q \rangle ^{-1} \langle Q \rangle ^2 V(Q)}_{ \in \ \mathcal{B}(\H^1,\H^{-1})} \phi \big\rangle + \big \langle \mathcal{A}_Q \psi, \underbrace{\i |Q|^2 \langle Q \rangle ^{-1} V(Q)}_{\in \ \mathcal{B}(\H)}\phi \big\rangle. $$
For the first term of \eqref{commut2}, we note that it is equal to $\overline{\big \langle (\i A) \phi,[V(Q), \i A]_{\circ} \psi \big \rangle}$. Performing the same calculations as we just did shows that the commutator $[[V(Q), \i A]_{\circ}, \i A]$ extends to a bounded operator in $\mathcal{B}(\H^1,\H^{-1})$.
Hence $[[H, \i A]_{\circ}, \i A]$ extends to a bounded operator in $\mathcal{B}(\H^1,\H^{-1})$ and by Proposition \ref{c2invariant} this implies that $H \in \mathcal{C}^2(A ; \H^1, \H^{-1})$.
\subsection{The case of discrete Schr\"odinger operators}
\label{ex:2}
Our second application is to discrete Schr\"odinger operators. For an integer $d \geqslant 1$, let $\H := \ell^2(\mathbb{Z}^d)$.
The free operator is the discrete Laplacian, i.e.\ $H_0 := \Delta \in \mathcal{B}(\H)$, where
\begin{equation}
\label{DeltaDef}
(\Delta \psi)(n) := \sum_{m : \|m-n\|=1} \big( \psi(n) - \psi(m) \big).
\end{equation}
Here we have equipped $\mathbb{Z}^d$ with the following norm: for $n=(n_1,...,n_d)$, $\|n\| := \sum_{i =1}^d |n_i|$. It is well-known that $\Delta$ is a bounded positive operator on $\H$ with purely absolutely continuous spectrum, and $\sigma(\Delta) = \sigma_{\rm{ac}}(\Delta) = [0,4d]$. Let $V$ be a bounded real-valued function on $\mathbb{Z}^d$ such that $V(n) \to 0$ as $\|n\| \to +\infty$. Then $V$ induces a compact self-adjoint operator on $\H$ by $(V\psi)(n) := V(n) \psi(n)$; recall that a multiplication operator $V$ on $\ell^2(\mathbb{Z}^d)$ is compact if and only if $V(n) \to 0$ as $\|n\| \to +\infty$. Assumptions \ref{item:A1} - \ref{item:A5} are verified. Set $H := H_0 + V$. Then $H$ is a bounded self-adjoint operator and $\sigma_{\text{ess}}(H) = [0,4d]$.
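These spectral facts can be read off from the Fourier picture, a standard identification which we use only for sketches: the map $\mathcal{F} : \ell^2(\mathbb{Z}^d) \to L^2([-\pi,\pi)^d)$, $(\mathcal{F}\psi)(\theta) := \sum_{n \in \mathbb{Z}^d} \psi(n) e^{\i n \cdot \theta}$, is unitary up to normalization, and it turns $\Delta$ into a multiplication operator:
\begin{equation*}
\mathcal{F} \Delta \mathcal{F}^{-1} = \sum_{i=1}^d \left( 2 - 2\cos \theta_i \right),
\end{equation*}
a real-valued bounded function with range $[0,4d]$, which makes the boundedness, the positivity and the identity $\sigma(\Delta) = [0,4d]$ transparent; the absolute continuity can also be derived from this representation.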
To write the conjugate operator, we need more notation. Let $S=(S_1,...,S_d)$, where, for $1 \leqslant i \leqslant d$, $S_i$ is the shift operator given by
\begin{equation*}
(S_i \psi)(n) := \psi(n_1,...,n_i -1,...,n_d), \quad \text{for all} \ n \in \mathbb{Z}^d \ \text{and} \ \psi \in \H.
\end{equation*}
Let $N=(N_1,...,N_d)$, where, for $1 \leqslant i \leqslant d$, $N_i$ is the position operator given by
\begin{equation*}
(N_i \psi)(n) := n_i \psi(n), \quad \text{with domain} \quad \mathcal{D}(N_i) := \bigg \{ \psi \in \H : \sum_{n \in \mathbb{Z}^d} |n_i \psi(n)|^2 < +\infty \bigg \}.
\end{equation*}
The conjugate operator, denoted by $A$, is the closure of the following operator
\begin{equation}
\label{Adef}
A_0 := \frac{\i}{2} \sum _{i=1}^d (S_i -S_i^*)N_i + N_i (S_i -S_i^*), \quad \text{with domain} \quad \mathcal{D}(A_0) := \ell_0(\mathbb{Z}^d),
\end{equation}
where $\ell_0(\mathbb{Z}^d)$ denotes the space of sequences with compact support. The operator $A$ is self-adjoint, see \cite{BS} and \cite{GGo}. That $\{e^{\i tA}\}_{t \in \R}$ stabilizes the form domain of $H_0$ is trivial, because $\mathcal{D}(H_0) = \H$. So Assumption \ref{item:A3} is true.
Next, we study the commutator between $H_0$ and $A$. A calculation shows that
\begin{equation}
\label{CommutatorLaplacian}
\langle \psi, [H_0, \i A] \psi \rangle = \langle \psi, \sum_{i=1}^d \Delta_i (4- \Delta_i) \psi \rangle,
\end{equation}
for all $\psi \in \ell_0(\mathbb{Z}^d)$. Here $\Delta_i := 2-S_i-S_i^*$. Since $H_0$ is a bounded self-adjoint operator, \eqref{CommutatorLaplacian} implies that $H_0 \in \mathcal{C}^1(A)$, thanks to a simple criterion for such operators, see \cite[Lemma 6.2.9]{ABG} and \cite[Theorem 6.2.10]{ABG}. We could also have invoked Proposition \ref{c1invariant}, but that would be overkill. An easy induction shows that $H_0 \in \mathcal{C}^k(A)$ for all $k \in \N$. In particular, Assumption \ref{item:A2} holds.
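For the reader's convenience, here is a sketch of the calculation behind \eqref{CommutatorLaplacian}, carried out in the Fourier picture described above for $d=1$; the general case follows by summing over the coordinates. Writing $S := S_1$ and $N := N_1$, and recalling that $(S\psi)(n) = \psi(n-1)$, one finds
\begin{equation*}
\mathcal{F} S \mathcal{F}^{-1} = e^{\i \theta}, \qquad \mathcal{F} N \mathcal{F}^{-1} = -\i \frac{d}{d\theta}, \qquad \mathcal{F} \Delta_1 \mathcal{F}^{-1} = 2 - 2\cos\theta =: h(\theta),
\end{equation*}
so that $\mathcal{F} A \mathcal{F}^{-1} = \i \left( \sin\theta \, \frac{d}{d\theta} + \frac{d}{d\theta} \, \sin\theta \right)$. A short computation with multiplication operators then yields
\begin{equation*}
\mathcal{F} [\Delta_1, \i A] \mathcal{F}^{-1} = 2 \sin\theta \cdot h'(\theta) = 4\sin^2\theta = h(\theta) \big( 4 - h(\theta) \big),
\end{equation*}
which is \eqref{CommutatorLaplacian} for $d=1$.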
By \eqref{CommutatorLaplacian} and \cite[Theorem 8.3.6]{ABG}, we have that
\begin{equation}
\label{SetDelta}
\mu^A(H_0) = [0,4d] \setminus \{4k : k=0,...,d\}.
\end{equation}
Let us now study the commutator between $V$ and $A$. Let $\tau_i V$ be the shifted potential acting as follows:
\begin{equation*}
[(\tau_i V)\psi](n) := V(n_1,...,n_i -1,...,n_d) \psi(n), \quad \text{for all} \ \psi \in \H.
\end{equation*}
Define $\tau_i^*V$ correspondingly. A straightforward computation gives
\begin{equation*}
\langle \psi, [V, \i A ] \psi \rangle = \sum_{i=1}^d \Big \langle \psi, \Big( (2^{-1}+N_i)(V-\tau_i ^*V)S_i^* + (2^{-1}-N_i)(V-\tau_i V)S_i \Big) \psi \Big \rangle,
\end{equation*}
for all $\psi \in \ell_0(\mathbb{Z}^d)$. If $\sup_{n \in \mathbb{Z}^d} | n_i(V-\tau_i V)(n)| <+\infty$ is assumed for all $1 \leqslant i \leqslant d$, we see that $V \in \mathcal{C}^1(A)$. The bounded extension of the form $[V, \i A]$ is precisely $[V, \i A]_{\circ}$. If $\lim | n_i(V-\tau_iV)(n)| = 0$ as $\|n\| \to +\infty$ for all $1\leqslant i \leqslant d$ is further assumed, then $[V,\i A]_{\circ} \in \mathcal K(\H)$. This is equivalent to $V \in \mathcal{C}^{1,\text{u}}(A)$, by Remark \ref{Remark2comp}. Thus Assumption \ref{item:A6} is fulfilled.
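For instance, both conditions of the previous paragraph hold for the illustrative potential (our own choice, not needed in the sequel)
\begin{equation*}
V(n) := (1+\|n\|)^{-\epsilon}, \qquad \epsilon > 0.
\end{equation*}
Indeed, writing $e_i$ for the $i$-th canonical basis vector of $\mathbb{Z}^d$, we have $\|n - e_i\| = \|n\| \pm 1$, so the mean value theorem gives $|(V - \tau_i V)(n)| \leqslant \epsilon \, \|n\|^{-\epsilon-1}$ for $\|n\|$ large, and therefore $|n_i (V - \tau_i V)(n)| \leqslant \epsilon \, \|n\|^{-\epsilon} \to 0$ as $\|n\| \to +\infty$.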
The above calculations along with Theorems \ref{Main2} and \ref{Main} yield the following specific result for discrete Schr\"odinger operators:
\begin{theorem}
Let $\H = \ell^2(\mathbb{Z}^d)$, $H:=H_0+V$ and $A$ be as above, namely
\begin{enumerate}
\item $H_0$ is given by \eqref{DeltaDef} and $A$ is the closure of \eqref{Adef},
\item $V(n)$ is a bounded real-valued function defined on $\mathbb{Z}^d$,
\item \label{Assumption33} $\lim V(n) = 0$ as $\| n\| \to +\infty$, and
\item \label{Assumption44} $\lim |n_i(V-\tau_iV)(n)| = 0$ as $\|n\| \to +\infty$ for all $1\leqslant i \leqslant d$.
\end{enumerate}
Then for all $\lambda \in [0,4d] \setminus \{4k : k=0,...,d\}$ there is a bounded open interval $\mathcal{I}$ containing $\lambda$ such that for all $s>0$ and $\psi \in \H$, propagation estimates \eqref{NewFormula3} and \eqref{NewFormula4} hold, and for all $s>1/2$, estimate \eqref{NewFormula} holds.
\end{theorem}
\begin{remark}
As seen above, Assumptions (1) - (4) imply that $V$ belongs to $\mathcal{C}^{1,\rm{u}}(A)$. In particular $H \in \mathcal{C}^{1,\rm{u}}(A)$. Moreover, $\mu^A(H) = \mu^A(H_0) = [0,4d] \setminus \{4k : k=0,...,d\}$, by Lemma \ref{Lemma1} and \eqref{SetDelta}.
\end{remark}
\begin{remark}
As in the continuous operator case, the condition $\ker(H-\lambda) \subset \mathcal{D}(A)$ holds here for all $\lambda \in \mu^{A}(H)$. Indeed, if $\psi \in \ker(H-\lambda)$ and $\lambda \in \mu^A(H)$, then $\psi$ decays sub-exponentially, see \cite[Theorem 1.5]{Ma2}. Under Assumptions \eqref{Assumption33} and \eqref{Assumption44}, the absence of positive eigenvalues holds for one-dimensional discrete Schr\"odinger operators, by \cite[Theorem 1.3]{Ma2}. To our knowledge, the absence of positive eigenvalues under Assumptions \eqref{Assumption33} and \eqref{Assumption44} is an open problem for multi-dimensional discrete Schr\"odinger operators on $\mathbb{Z}^d$.
\end{remark}
\section{Proof of Theorem \ref{Main2}}
\label{PROOF2}
We start with an improvement of \cite[Proposition 2.1]{GJ1}.
\begin{Lemma}
\label{Lemma:2}
For $\phi, \varphi \in \mathcal{D}(A)$, the rank one operator $|\phi \rangle \langle \varphi | : \psi \mapsto \langle \varphi, \psi \rangle \phi$ is of class $\mathcal{C}^{1,\rm{u}}(A)$.
\end{Lemma}
\begin{proof}
First, by \cite[Lemma 6.2.9]{ABG}, $|\phi \rangle \langle \varphi | \in \mathcal{C}^1(A)$ if and only if the sesquilinear form
\[ \mathcal{D}(A) \ni \psi \mapsto \langle \psi, [ | \phi \rangle \langle \varphi |, A ] \psi \rangle := \langle \langle \phi, \psi \rangle \varphi, A \psi \rangle - \langle A \psi, \langle \varphi, \psi \rangle \phi \rangle \]
is continuous for the topology induced by $\H$. Since
\[ \langle \psi, [ | \phi \rangle \langle \varphi |, A ] \psi \rangle = \langle \psi, \phi \rangle \langle A \varphi, \psi \rangle - \langle \psi, A \phi \rangle \langle \varphi, \psi \rangle = \langle \psi, \left( |\phi \rangle \langle A \varphi | - | A \phi \rangle \langle \varphi | \right) \psi \rangle, \]
we see that $|\phi \rangle \langle \varphi | \in \mathcal{C}^1(A)$ and $[ | \phi \rangle \langle \varphi |, A ]_{\circ} = |\phi \rangle \langle A \varphi | - | A \phi \rangle \langle \varphi |$, which is a bounded operator of rank at most two. Apply Proposition \ref{PropC1UU}, more specifically Remark \ref{Remark2comp}, to obtain the result. \qed
\end{proof}
\vspace{0.5cm}
Next, we quote for convenience the result of \cite{Ri} that we use in the proof of Theorem \ref{Main2}.
\begin{theorem}\cite[Theorem 1]{Ri}
Let $H$ and $A$ be self-adjoint operators in $\H$ with $H \in \mathcal{C}^{1,\rm u}(A)$. Assume that there exist an open interval $J \subset \R$ and $c >0$ such that $\eta (H) [H, \i A]_{\circ} \eta(H) \geqslant c \cdot \eta^2(H)$ for all real $\eta \in \mathcal{C}_{\rm c} ^{\infty}(J)$. Let $a$ and $t$ be real numbers. Then for each real $\eta \in \mathcal{C}_{\rm c}^{\infty}(J)$ and for each $v < c$ one has uniformly in $a$,
\[ \| E_{(-\infty, a+vt]}(A) e^{-\i tH} \eta(H) E_{[a,+\infty)}(A) \| \to 0 \quad \text{as} \quad t \to +\infty.\]
\end{theorem}
We are now ready to prove Theorem \ref{Main2}. \\
\vspace{0.2cm}
\noindent \textit{Proof of Theorem \ref{Main2}.}
Let $\mathcal{I} \subset \R$ be a compact interval as in the statement of Theorem \ref{Main2}, that is, for all $\lambda \in \mathcal{I}$, $\lambda \in \mu^A(H)$ and $\ker(H-\lambda) \subset \mathcal{D}} \newcommand{\F}{\mathcal{E}(A)$. Let $\lambda \in \mathcal{I}$ be given, and assume that a Mourre estimate holds with $K \in \mathcal K(\H)$ in a neighborhood $J$ of $\lambda$.
\\ {\bf Step 1:} This step is a remark due to Serge Richard. In this step, we assume that $\lambda$ is not an eigenvalue of $H$. In this case, from the Mourre estimate, we may derive a strict Mourre estimate on a possibly smaller neighborhood of $\lambda$, because $E_{J}(H)KE_J(H)$ converges in norm to zero as the interval $J$ shrinks around $\lambda$. So, without loss of generality, there is an open interval $J$ containing $\lambda$ and $c >0$ such that a strict Mourre estimate holds for $H$ on $J$, i.e.
\[ E_J(H) [H, \i A]_{\circ} E_J(H) \geqslant c E_J (H). \]
In particular, $J$ does not contain any eigenvalue of $H$. We aim to apply \cite[Theorem 1]{Ri}. Let $\psi \in \H$, and assume without loss of generality that $\| \psi \| =1$. Fix $v \in (0,c)$ and let $a \in \R$. Let $\eta \in \mathcal{C}^{\infty}_c (J)$ be such that $\max_{x \in J} | \eta (x) | \leqslant 1$, so that $\| \eta(H) \| \leqslant 1$. Note also that $\| \langle A \rangle ^{-s} \| \leqslant 1$ for all $s>0$. Then
\begin{align*}
\| \langle A \rangle ^{-s} e^{-\i tH} \eta (H) \psi \| & \leqslant \| \langle A \rangle ^{-s} e^{-\i tH} \eta (H) E_{(-\infty, a)}(A) \psi \| + \| \langle A \rangle ^{-s} e^{-\i tH} \eta (H) E_{[a,+\infty)}(A) \psi \| \\
& \leqslant \| E_{(-\infty, a)}(A) \psi \| + \| \langle A \rangle ^{-s} E_{(-\infty, a+vt]}(A) e^{-\i tH} \eta (H) E_{[a,+\infty)}(A) \psi \| \\
& \quad + \| \langle A \rangle ^{-s} E_{(a+vt, +\infty)}(A) e^{-\i tH} \eta (H) E_{[a,+\infty)}(A) \psi \| \\
& \leqslant \| E_{(-\infty, a)}(A) \psi \| + \| E_{(-\infty, a+vt]}(A) e^{-\i tH} \eta (H) E_{[a,+\infty)}(A) \| \\
& \quad + \| \langle A \rangle ^{-s} E_{(a+vt, +\infty)}(A) \|
\end{align*}
Let $\epsilon > 0$ be given. Choose $a$ so that $\|E_{(-\infty, a)}(A) \psi \| \leqslant \epsilon/3$. Then take $t$ large enough so that the other two terms on the r.h.s.\ of the previous inequality are each less than $\epsilon/3$. The second one is controlled by \cite[Theorem 1]{Ri} and the third one by functional calculus. Then $\| \langle A \rangle ^{-s} e^{-\i tH} \eta (H) \psi \| \leqslant \epsilon$. Thus
\[ \lim \limits_{t \to +\infty} \| \langle A \rangle ^{-s} e^{-\i tH} \eta (H) \psi \| = 0. \]
By taking a sequence $\eta_k \in \mathcal{C}^{\infty} _c(J)$ that converges pointwise to the characteristic function of $J$, we infer from the previous limit that
\[ \lim \limits_{t \to +\infty} \| \langle A \rangle ^{-s} e^{-\i tH} E_J (H) \psi \| = 0. \]
Finally, as there are no eigenvalues of $H$ in $J$, $E_{J}(H)=E_{J}(H) P_{\rm c}(H)$ and we have
\begin{equation}
\label{eqn1st}
\lim \limits_{t \to +\infty} \| \langle A \rangle ^{-s} e^{-\i tH} E_J (H) P_{\rm c} (H) \psi \| = 0.
\end{equation}
\noindent {\bf Step 2:} In this step, $\lambda \in \mathcal{I}$ is assumed to be an eigenvalue of $H$. By adding a constant to $H$, we may assume that $\lambda \neq 0$. By assumption, there is an interval $J$ containing $\lambda$, $c >0$ and $K \in \mathcal K(\H)$ such that
\[ E_J(H) [H, \i A]_{\circ} E_J(H) \geqslant c E_J(H) + K. \]
As the point spectrum of $H$ is finite in $J$, we further choose $J$ so that it contains only one eigenvalue of $H$, namely $\lambda$. Furthermore, the interval $J$ is chosen so that $0 \not \in J$. Denote $P = P_{\{ \lambda \} }(H)$ and $P^{\perp} := 1 - P_{\{ \lambda \} }(H)$. Also let $H' := H P^{\perp}$. Then
\[ P^{\perp}E_J(H) [H, \i A]_{\circ} E_J(H) P^{\perp} \geqslant c E_J(H) P^{\perp} + P^{\perp} E_J (H) K E_J(H) P^{\perp}. \]
Functional calculus yields $P^{\perp} E_J (H) = E_J (H P ^{\perp})$ -- this is where the technical point $0 \not \in J$ is required. Moreover, $P^{\perp} E_J (H) K E_J (H) P^{\perp}$ converges in norm to zero as the size of the interval $J$ shrinks to zero around $\lambda$. Therefore there is $c' >0$ and an open interval $J'$ containing $\lambda$, with $J' \subset J$, such that
\[ E_{J'}(H') [H', \i A]_{\circ} E_{J'}(H') \geqslant c' E_{J'}(H'). \]
In other words, a strict Mourre estimate holds for $H'$ on $J'$. Now $H' = H P^{\perp} = H - H P$. Note that $P$ is a finite sum of rank one projectors because $\lambda \in \mu^A(H)$. Thanks to the assumption $\ker(H-\lambda) \subset \mathcal{D}(A)$, we have by Lemma \ref{Lemma:2} that $P \in \mathcal{C}^{1,\rm{u}}(A)$. Thus $H' \in \mathcal{C}^{1,\rm{u}}(A)$. Performing the same calculation as in Step 1 with $(H',J')$ instead of $(H,J)$ gives
\[ \lim \limits_{t \to + \infty} \| \langle A \rangle ^{-s} e^{-\i tH'} E_{J'} (H') \psi \| = 0. \]
Since $e^{-\i tH'} E_{J'} (H') = e^{-\i tH} E_{J'} (H) P^{\perp}$, we have
\[\lim \limits_{t \to + \infty} \| \langle A \rangle ^{-s} e^{-\i tH} E_{J'} (H) P^{\perp} \psi \| = 0. \]
The only eigenvalue of $H$ belonging to $J'$ is $\lambda$, so $E_{J'}(H) P^{\perp}=E_{J'}(H) P_{\rm c}(H)$. Thus
\begin{equation}
\label{eqn2nd}
\lim \limits_{t \to + \infty} \| \langle A \rangle ^{-s} e^{-\i tH} E_{J'} (H) P_{\rm c}(H) \psi \| = 0.
\end{equation}
\noindent{\bf Step 3:} In this way, for each $\lambda \in \mathcal{I}$, we obtain an open interval $J_{\lambda}$ or $J'_{\lambda}$ containing $\lambda$ such that \eqref{eqn1st} or \eqref{eqn2nd} holds true, depending on whether $\lambda$ is an eigenvalue of $H$ or not. To conclude, as
\[ \left( \bigcup \limits_{\substack{\lambda \in \mathcal{I}, \\ \lambda \in \sigma_{\rm{c}}(H)}} J_{\lambda} \right) \bigcup \left( \bigcup \limits_{\substack{\lambda \in \mathcal{I}, \\ \lambda \in \sigma_{\rm{pp}}(H)}} J'_{\lambda} \right) \]
is an open cover of $\mathcal{I}$, we may choose a finite sub-cover. If $\{J_i\}_{i=1}^n$ denotes this sub-cover, we may further shrink these intervals so that $J_i \cap J_j = \emptyset$ for $i \neq j$, $\overline{\cup J_i} = \mathcal{I}$, and $\overline{J_i} \cap \overline{J_j} \subset \sigma_{\rm{c}}(H)$ for $i \neq j$. Thus $E_{\mathcal{I}}(H) P_{\rm{c}}(H) = \sum_{i=1} ^n E_{J_i}(H) P_{\rm{c}}(H)$. Then, by applying \eqref{eqn1st} and \eqref{eqn2nd} we get
\[ \lim \limits_{t \to + \infty} \| \langle A \rangle ^{-s} e^{-\i tH} E_{\mathcal{I}}(H) P_{\rm c}(H) \psi \| \leqslant \lim \limits_{t \to + \infty} \sum_{i=1}^n \| \langle A \rangle ^{-s} e^{-\i tH} E_{J_i}(H) P_{\rm c}(H) \psi \| = 0. \]
This proves the estimate \eqref{NewFormula3}.
\\
\noindent{\bf Step 4:} We turn to the proof of \eqref{NewFormula4}. Since $A$ is self-adjoint, $\mathcal{D}(A)$ is dense in $\H$. Let $\{ \phi_n \}_{n=1}^{\infty} \subset \mathcal{D}(A)$ be an orthonormal basis of $\H$; such a basis exists because $\mathcal{D}(A)$ is dense. Let $W \in \mathcal K(\H)$ and denote $F_N := \sum _{n=1} ^N \langle \phi_n, \cdot \rangle W\phi_n$. The proof of \cite[Theorem VI.13]{RS1} shows that $\| W-F_N \| \to 0$ as $N \to + \infty$. Then
\begin{align*}
\| W P_{\rm{c}} (H) E_\mathcal{I} (H) e^{-\i tH} \psi \| &\leqslant \| (W-F_N) P_{\rm{c}} (H) E_\mathcal{I} (H) e^{-\i tH} \psi \| + \| F_N P_{\rm{c}} (H) E_\mathcal{I} (H) e^{-\i tH} \psi \| \\
& \leqslant \underbrace{\| W-F_N \|}_{\to \ 0 \ \text{as} \ N \to +\infty} + \ \| F_N \langle A \rangle \| \underbrace{\| \langle A \rangle ^{-1} P_{\rm{c}} (H) E_\mathcal{I} (H) e^{-\i tH} \psi \|}_{\to \ 0 \ \text{as} \ t \to + \infty}.
\end{align*}
The result follows by noting that $F_N \langle A \rangle$ is a bounded operator for each $N$. If $W$ is $H$-relatively compact, use the fact that $E_\mathcal{I}(H) (H+\i)$ is a bounded operator.
\qed
\section{Proof of Theorem \ref{Main}}
\label{PROOF}
To prove the result, we will need the following fact:
\begin{Lemma}
\label{Lem1}
Let $T$ be a self-adjoint operator with $T \in \mathcal{C}^1(A)$. Let $\lambda \in \mu^A(T)$ and suppose that $\ker(T-\lambda) \subset \mathcal{D}(A)$. Then there is an interval $\mathcal{I} \subset \mu^A(T)$ containing $\lambda$ such that $P_{\rm{c}}^{\perp}(T) E_{\mathcal{I}}(T)$ and $P_{\rm{c}} (T) \eta(T)$ are of class $\mathcal{C}^1(A)$ for all $\eta \in C_c^{\infty}(\R)$ with $\text{supp}(\eta) \subset \mathcal{I}$.
\end{Lemma}
\begin{proof}
Since there are finitely many eigenvalues of $T$ in a neighborhood of $\lambda$, there is a bounded interval $\mathcal{I}$ containing $\lambda$ such that $\ker(T-\lambda') \subset \mathcal{D}(A)$ for all $\lambda' \in \mathcal{I}$. Then $P_{\rm{c}}^{\perp}(T) E_{\mathcal{I}}(T)$ is a finite rank operator and belongs to the class $\mathcal{C}^1(A)$ by Lemma \ref{Lemma:2}. Moreover $T \in \mathcal{C}^1(A)$ implies $\eta(T) \in \mathcal{C}^1(A)$, by the Helffer-Sj\"ostrand formula. So $P_{\rm{c}}^{\perp} (T) E_{\mathcal{I}}(T) \eta(T) \in \mathcal{C}^1(A)$ as the product of two bounded operators in this class. Finally, $P_{\rm{c}} (T) \eta(T) = \eta(T) - P_{\rm{c}}^{\perp} (T) E_{\mathcal{I}}(T) \eta(T)$ is a difference of two bounded operators in $\mathcal{C}^1(A)$, so $P_{\rm{c}} (T) \eta(T) \in \mathcal{C}^1(A)$.
\qed
\end{proof}
\vspace{0.5cm}
\noindent \textit{Proof of Theorem \ref{Main}.}
Since $H_0$ is semi-bounded and $\sigma_{\text{ess}}(H) = \sigma_{\text{ess}}(H_0)$, there is $\varsigma \in \R \setminus (\sigma(H) \cup \sigma(H_0))$. Denote the resolvents of $H$ and $H_0$ respectively by $R(z) := (z-H)^{-1}$ and $R_0(z) := (z-H_0)^{-1}$. Also denote the spectral projector of $R(z)$ onto the continuous spectrum by $P_{\rm{c}}(R(z))$. We split the proof into four parts. First we translate the problem into one for the resolvent $R(\varsigma)$. Second we show the following formula:
\begin{equation}
\label{Formula1}
\begin{split}
& P_{\rm{c}}(R(\varsigma)) \theta(R(\varsigma)) [R(\varsigma), \i \varphi(A/L)]_{\circ} \theta(R(\varsigma)) P_{\rm{c}}(R(\varsigma)) \geqslant \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ L^{-1} P_{\rm{c}}(R(\varsigma)) \theta(R(\varsigma)) \Big( C \langle A/L \rangle ^{-2s} + K \Big) \theta(R(\varsigma)) P_{\rm{c}}(R(\varsigma)),
\end{split}
\end{equation}
where $\theta$ is a smooth function compactly supported about $(\varsigma - \lambda)^{-1}$, $\varphi$ is an appropriately chosen smooth bounded function, $L \in \R^+$ is sufficiently large, $K$ is a compact operator uniformly bounded in $L$, $C>0$, and $s \in (1/2,1)$. The objects $\theta, \varphi, C$ and $s$ are independent of $L$. This formula is expressed in terms of the resolvent $R(\varsigma)$. Third, we convert it into a formula for $H$: we show that \eqref{Formula1} implies the existence of an open interval $J$ containing $\lambda$ such that
\begin{equation}
\begin{split}
\label{Formula2}
& P_{\rm{c}}(H) E_{J}(H) [R(\varsigma), \i \varphi(A/L)]_{\circ} E_{J}(H) P_{\rm{c}}(H) \geqslant \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ L^{-1} P_{\rm{c}}(H) E_{J}(H) \Big( C \langle A/L \rangle ^{-2s} + K \Big) E_{J}(H) P_{\rm{c}}(H).
\end{split}
\end{equation}
We note that the operator $K$ is the same in \eqref{Formula1} and \eqref{Formula2}. Fourth, we insert the dynamics into the previous formula and average over time. We notably use the RAGE Theorem \eqref{RAGEsup} to derive the desired formula, i.e.
\begin{equation}
\label{Formula3}
\lim \limits_{T \to \pm \infty} \sup \limits_{\| \psi\| \leqslant 1} \frac{1}{T} \int_0 ^T \| \langle A \rangle ^{-s} e^{-\i t H} P_{\rm{c}}(H) E_{J}(H)\psi \|^2 dt = 0.
\end{equation}
\noindent \textbf{Part 1:} Let $\lambda \in \mu^A(H)$ be such that $\ker(H-\lambda) \subset \mathcal{D}(A)$. Then there are finitely many eigenvalues in a neighborhood of $\lambda$, counting multiplicity. We may find an interval $\mathcal{I} = (\lambda_0,\lambda_1)$ containing $\lambda$ such that $\mathcal{I} \subset \mu^A(H)$ and for all $\lambda' \in \mathcal{I}$, $\ker(H-\lambda') \subset \mathcal{D}(A)$. Define
\begin{equation}
\label{Functionf}
f:\R \setminus \{\varsigma\} \to \R, \quad f:x \mapsto 1/(\varsigma-x).
\end{equation}
Since eigenvalues of $H$ located in $\mathcal{I}$ are in one-to-one correspondence with the eigenvalues of $R(\varsigma)$ located in $f(\mathcal{I}) = (f(\lambda_0),f(\lambda_1))$, it follows that $f(\mathcal{I})$ is an interval containing $f(\lambda)$ such that $f(\mathcal{I}) \subset \mu^A(R(\varsigma))$ and $\ker(R(\varsigma)-\lambda') \subset \mathcal{D}(A)$ for all $\lambda' \in f(\mathcal{I})$. Note the use of Proposition \ref{conjugateResolvent}.
To simplify the notation in what follows, we let $R := R(\varsigma)$, $R_0 := R_0(\varsigma)$ and $P_{\rm{c}} := P_{\rm{c}}(R(\varsigma))$, as $\varsigma$ is fixed. Also let $R_A(z) := (z-A/L)^{-1}$, where $L \in \R^+$.
\noindent \textbf{Part 2:} Let $\theta,\eta,\chi \in C^{\infty}_c (\R)$ be bump functions such that $f(\lambda) \in$ supp$(\theta) \subset$ supp$(\eta) \subset$ supp$(\chi) \subset f(\mathcal{I})$, $\eta \theta = \theta$ and $\chi \eta =\eta$. Let $s \in (1/2,1)$ be given. Define
\begin{equation*}
\varphi : \R \to \R, \qquad \varphi : t \mapsto \int _{-\infty} ^t \langle x \rangle ^{-2s} dx.
\end{equation*}
Note that $\varphi$ is bounded, since $2s>1$, and $\varphi'(t) = \langle t \rangle ^{-2s}$; in particular $\varphi \in \mathcal{S}^{0}(\R)$. The definition of $\mathcal{S}^0(\R)$ is given in \eqref{decay1}. Consider the bounded operator
\begin{equation*}
F := P_{\rm{c}} \theta(R) [R, \i \varphi(A/L)]_{\circ} \theta(R) P_{\rm{c}} = \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z) [R, \i A]_{\circ} R_A(z) \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z}.
\end{equation*}
By Lemma \ref{Lem1} with $T = R$, $P_{\rm{c}} \eta(R) \in \mathcal{C}^1(A)$, so
\begin{equation*}
[P_{\rm{c}} \eta(R),R_A(z)]_{\circ} = L^{-1} R_A(z) [P_{\rm{c}} \eta(R), A]_{\circ} R_A(z).
\end{equation*}
In the formula defining $F$, we introduce $P_{\rm{c}}\eta(R)$ next to $P_{\rm{c}} \theta(R)$ and commute it with $R_A(z)$:
\begin{align*}
F &= \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) \Big(R_A(z) P_{\rm{c}} \eta(R) + [P_{\rm{c}} \eta(R),R_A(z)]_{\circ} \Big) [R, \i A]_{\circ} \times \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Big(\eta(R) P_{\rm{c}} R_A(z)+ [R_A(z),P_{\rm{c}} \eta(R)]_{\circ} \Big) \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&= \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z) P_{\rm{c}} \eta(R) [R, \i A]_{\circ} \eta(R) P_{\rm{c}} R_A(z) \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&\quad + L^{-1} P_{\rm{c}} \theta(R) \left( I_1 + I_2 + I_3 \right) \theta(R) P_{\rm{c}},
\end{align*}
where
\begin{align*}
I_1 &= \frac{\i}{2\pi} \int _{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) [P_{\rm{c}}\eta(R), R_A(z)]_{\circ} [R, \i A]_{\circ} \eta(R) P_{\rm{c}} R_A(z) \ dz\wedge d\overline{z}, \\
I_2 &= \frac{\i}{2\pi} \int _{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) [P_{\rm{c}}\eta(R), R_A(z)]_{\circ} [R, \i A]_{\circ} \ dz\wedge d\overline{z}, \\
I_3 &= \frac{\i}{2\pi} \int _{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) R_A(z) P_{\rm{c}} \eta(R) [R, \i A]_{\circ} [R_A(z), P_{\rm{c}}\eta(R)]_{\circ} \ dz\wedge d\overline{z}.
\end{align*}
Applying \eqref{dei} and Lemma \ref{didid2}, and recalling that $s<1$, we have for some operators $B_i$ uniformly bounded with respect to $L$ that
\begin{equation*}
I_i = \Big \langle \frac{A}{L} \Big \rangle ^{-s} \frac{B_i}{L} \Big \langle \frac{A}{L} \Big \rangle ^{-s}, \quad \text{for} \ i=1,2,3.
\end{equation*}
Using $\chi \eta = \eta$, we insert $\chi(R)$ next to $\eta(R)$. So far we get the following expression for $F$:
\begin{align*}
F &= \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z) P_{\rm{c}} \chi(R) \underbrace{\eta(R) [R, \i A]_{\circ} \eta(R)}_{\text{to be developed}} \chi(R) P_{\rm{c}} R_A(z) \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&\quad + P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
Now write
\begin{equation*}
\eta(R) [R, \i A]_{\circ} \eta(R) = \eta(R) R [H, \i A]_{\circ} R \eta(R) = \eta(R) R [H_0, \i A]_{\circ} R \eta(R) + \eta(R) R [V, \i A]_{\circ} R \eta(R).
\end{equation*}
Let us start with the second term on the r.h.s.\ of this equation. It decomposes into
\begin{align*}
\eta(R) R [V , \i A]_{\circ} R \eta(R) &= \eta(R) \underbrace{R \langle H \rangle}_{\in \ \mathcal{B}(\H)} \langle H \rangle ^{-1/2} \underbrace{\langle H \rangle ^{-1/2} \langle H_0 \rangle^{1/2}}_{\in \ \mathcal{B}(\H)} \underbrace{\langle H_0 \rangle ^{-1/2} [V , \i A]_{\circ} \langle H_0 \rangle ^{-1/2} }_{\in \ \mathcal K(\H) \ \text{by \ref{item:A6prime}}} \times \\
& \quad \times \underbrace{ \langle H_0 \rangle ^{1/2}\langle H \rangle ^{-1/2}}_{\in \ \mathcal{B}(\H)} \langle H \rangle ^{-1/2} \underbrace{\langle H \rangle R}_{\in \ \mathcal{B}(\H)} \eta(R).
\end{align*}
It is therefore compact. As for the first term on the r.h.s., it decomposes as follows
\begin{equation*}
\eta(R) R [H_0, \i A]_{\circ} R \eta(R) = \eta(R_0) R_0 [H_0, \i A]_{\circ} R_0 \eta(R_0) + \Xi_1 + \Xi_2,
\end{equation*}
where
\begin{equation*}
\Xi_1 := (\eta(R) R -\eta(R_0) R_0) [H_0, \i A]_{\circ} R \eta(R) \quad \text{and} \quad \Xi_2 := \eta(R_0) R_0 [H_0, \i A]_{\circ} (R \eta(R)- R_0\eta(R_0)).
\end{equation*}
We show that $\Xi_1$ is compact, and similarly one shows that $\Xi_2$ is compact. We have
\begin{equation*}
\Xi_1= \underbrace{(\eta(R) R -\eta(R_0) R_0) \langle H_0 \rangle ^{1/2}}_{\in \ \mathcal K(\H)} \underbrace{\langle H_0 \rangle ^{-1/2} [H_0 , \i A]_{\circ} \langle H_0 \rangle ^{-1/2} }_{\in \ \mathcal{B}(\H) \ \text{by \ref{item:A2}}} \underbrace{ \langle H_0 \rangle ^{1/2}\langle H \rangle ^{-1/2}}_{\in \ \mathcal{B}(\H)} \langle H \rangle ^{-1/2} \underbrace{\langle H \rangle R}_{\in \ \mathcal{B}(\H)} \eta(R).
\end{equation*}
Let us justify that $(\eta(R) R -\eta(R_0) R_0) \langle H_0 \rangle ^{1/2}$ is compact. Let $ \kappa : x \mapsto x \eta(x)$. By the Helffer-Sj\"ostrand formula,
\begin{align*}
(\eta(R) R -\eta(R_0) R_0) \langle H_0 \rangle ^{1/2} &= \frac{\i}{2\pi} \int_{\C} \frac{\partial \tilde{\kappa}}{\partial \overline{z}} (z) \left( (z-R)^{-1} - (z-R_0)^{-1} \right) \langle H_0 \rangle ^{1/2} \ d z \wedge d\overline{z} \\
&= \frac{\i}{2\pi} \int_{\C} \frac{\partial \tilde{\kappa}}{\partial \overline{z}} (z) (z-R)^{-1} RVR_0 (z-R_0)^{-1} \langle H_0 \rangle ^{1/2} \ d z \wedge d\overline{z} \\
&= \frac{\i}{2\pi} \int_{\C} \frac{\partial \tilde{\kappa}}{\partial \overline{z}} (z) (z-R)^{-1} \underbrace{R \langle H \rangle^{1/2}}_{\in \ \mathcal{B}(\H)} \underbrace{\langle H \rangle ^{-1/2} \langle H_0 \rangle^{1/2}}_{\in \ \mathcal{B}(\H)} \times \\
& \quad \times \underbrace{\langle H_0 \rangle^{-1/2} V\langle H_0 \rangle^{-1/2}}_{\in \ \mathcal K(\H) \ \text{by \ref{item:A5}}} \underbrace{\langle H_0 \rangle^{1/2} R_0 \langle H_0 \rangle ^{1/2}}_{\in \ \mathcal{B}(\H)} (z-R_0)^{-1} \ d z \wedge d\overline{z}.
\end{align*}
The integrand of this integral is compact for all $z \in \C \setminus \R$, and moreover the integral converges in norm since $\kappa$ has compact support. It follows that $(\eta(R) R -\eta(R_0) R_0) \langle H_0 \rangle ^{1/2}$, and thus $\Xi_1$, is compact. Thus we have shown that
\begin{equation}
\label{KeyFormula1}
\eta(R) [R, \i A]_{\circ} \eta(R) = \eta(R_0) [R_0, \i A]_{\circ} \eta(R_0) + \text{compact}.
\end{equation}
Therefore there is a compact operator $K_1$ uniformly bounded in $L$ such that
\begin{align*}
F &= \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z) M R_A(z) \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&\quad +P_{\rm{c}} \theta(R) \frac{K_1}{L} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
Here $M := P_{\rm{c}} \chi(R) \eta(R_0) [R_0, \i A]_{\circ} \eta(R_0) \chi(R) P_{\rm{c}}$. Since $P_{\rm{c}} \chi(R), \eta(R_0)$ and $[R_0,\i A]_{\circ}$ belong to $\mathcal{C}^1(A)$, it follows by taking products that $M \in \mathcal{C}^1(A)$, and we may commute $R_A(z)$ with $M$:
\begin{align*}
F &= \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z)^{2} M \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&\quad+ \frac{\i}{2\pi L} \int_{\C} \frac{\partial \tilde{\varphi}}{\partial \overline{z}}(z) P_{\rm{c}} \theta(R) R_A(z) [M, R_A(z)]_{\circ} \theta(R) P_{\rm{c}} \ dz\wedge d\overline{z} \\
&\quad+P_{\rm{c}} \theta(R) \frac{K_1}{L} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
We apply \eqref{derivative} to the first integral (which converges in norm), while for the second integral we use the fact that $M \in \mathcal{C}^1(A)$ to conclude that there exists an operator $B_4$ uniformly bounded in $L$ such that
\begin{align*}
F &= L^{-1} P_{\rm{c}} \theta(R) \varphi'(A/L) M \theta(R) P_{\rm{c}} \\
&\quad+P_{\rm{c}} \theta(R) \frac{K_1}{L} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3+B_4}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
Now $\varphi'(A/L) = \langle A/L \rangle ^{-2s}$. As a result of the Helffer-Sj\"{o}strand formula, \eqref{dei} and \eqref{use2},
\begin{equation*}
[\langle A/L \rangle ^{-s}, M ]_{\circ} \langle A/L \rangle ^{s} = L^{-1}B_5
\end{equation*}
for some operator $B_5$ uniformly bounded in $L$. Thus commuting $\langle A/L \rangle ^{-s}$ and $M$ gives
\begin{align*}
F &= L^{-1} P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} M \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}} \\
&\quad +P_{\rm{c}} \theta(R) \frac{K_1}{L} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3+B_4+B_5}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}} \\
& \geqslant c L^{-1} P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} P_{\rm{c}} \chi(R) \eta(R_0)^2 \chi(R) P_{\rm{c}} \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}} \\
&\quad +P_{\rm{c}} \theta(R) \frac{K_1 + K_2}{L} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3+B_4+B_5}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}},
\end{align*}
where $c>0$ and $K_2$ come from applying the Mourre estimate \eqref{MourreEst} to $R_0$ on $f(\mathcal{I})$. Exchanging $\eta(R_0)^2$ for $\eta(R)^2$, we have a compact operator $K_3$ uniformly bounded in $L$ such that
\begin{align*}
F & \geqslant c L^{-1} P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} P_{\rm{c}} \chi(R) \eta(R)^2 \chi(R) P_{\rm{c}} \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \frac{K_1 +K_2+K_3}{L} \theta(R) P_{\rm{c}} \\
&\quad+ P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3+B_4+B_5}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
We commute $P_{\rm{c}} \chi(R) \eta(R)^2 \chi(R) P_{\rm{c}} = P_{\rm{c}} \eta(R)^2 P_{\rm{c}}$ with $\langle A/L \rangle ^{-s}$, and see that
\begin{equation*}
[P_{\rm{c}} \eta(R)^2 P_{\rm{c}},\langle A/L \rangle ^{-s}]_{\circ} \langle A/L \rangle ^{s} = L^{-1}B_6
\end{equation*}
for some operator $B_6$ uniformly bounded in $L$. Thus
\begin{align*}
F & \geqslant c L^{-1} P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-2s} \theta(R) P_{\rm{c}} \ + \ P_{\rm{c}} \theta(R) \frac{K_1 +K_2+K_3}{L} \theta(R) P_{\rm{c}} \\
& \quad + P_{\rm{c}} \theta(R) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \left(\frac{B_1+B_2+B_3+B_4+B_5+B_6}{L^2} \right) \Big \langle \frac{A}{L} \Big \rangle ^{-s} \theta(R) P_{\rm{c}}.
\end{align*}
Taking $L$ large enough gives $C>0$ such that $c + (B_1+B_2+B_3+B_4+B_5+B_6)/L \geqslant C$. Denoting $K := K_1+K_2+K_3$ yields formula \eqref{Formula1}.
\noindent \textbf{Part 3:} For all open intervals $(e_1,e_2)$ located strictly above or below $\varsigma$ we have the identity
\begin{equation}
\label{ResolventProjector}
E_{(e_1,e_2)}(H) = E_{(f(e_1),f(e_2))}(R(\varsigma)),
\end{equation}
where $f$ is the function defined in \eqref{Functionf}. Now let $\mathcal{J}$ be the interior of $\theta^{-1}(\{1\})$. This is an open interval and we have $E_{\mathcal{J}}(R) \theta(R) = E_\mathcal{J}(R)$. Thus applying $E_\mathcal{J}(R)$ to \eqref{Formula1} gives
\begin{equation*}
P_{\rm{c}} E_\mathcal{J}(R) [R, \i \varphi(A/L)]_{\circ} E_\mathcal{J}(R) P_{\rm{c}} \geqslant L^{-1} P_{\rm{c}} E_\mathcal{J}(R) \left(C \langle A/L \rangle ^{-2s} +K\right) E_\mathcal{J}(R) P_{\rm{c}}.
\end{equation*}
We have that $P_{\rm{c}} E_\mathcal{J}(R) := P_{\rm{c}}(R) E_\mathcal{J}(R)$ is a spectral projector of $R$ onto a finite disjoint union of open intervals. Let $\{\lambda_i\}$ be the (finite) collection of eigenvalues of $R$ located in $\mathcal{J}$. Then $\{f^{-1}(\lambda_i)\}$ are the eigenvalues of $H$ located in $f^{-1}(\mathcal{J})$, and by \eqref{ResolventProjector},
\begin{equation*}
P_{\rm{c}}(R) E_\mathcal{J}(R) = \sum_i E_{\mathcal{J}_i}(R) = \sum_i E_{f^{-1}(\mathcal{J}_i)}(H) = P_{\rm{c}}(H) E_{f^{-1}(\mathcal{J})}(H),
\end{equation*}
where the $\mathcal{J}_i$ are the open intervals such that $\cup_i \mathcal{J}_i \cup \{ \lambda_i\} = \mathcal{J}$. Setting $J := f^{-1}(\mathcal{J})$, an open interval, proves formula \eqref{Formula2}. Note that $\lambda \in J$.
\noindent \textbf{Part 4:} Let $F'$ be the l.h.s.\ of \eqref{Formula2}, i.e.\
\begin{equation*}
F' := P_{\rm{c}}(H) E_{J}(H) [R(\varsigma), \i \varphi(A/L)]_{\circ} E_{J}(H) P_{\rm{c}}(H).
\end{equation*}
\noindent Formula \eqref{Formula2} implies that for all $\psi \in \mathcal{H}$ and all $T >0$:
\begin{align*}
\frac{L}{T} \int_0 ^T \langle e^{-\i tH} \psi, F' e^{-\i t H} \psi \rangle dt & \geqslant \frac{C}{T} \int_0 ^T \Big \| \langle A/ L \rangle ^{-s} E_{J}(H) P_{\rm{c}}(H) e^{-\i t H} \psi \Big \|^2 \ dt \\
&\quad+ \frac{1}{T} \int_0 ^T \langle E_{J}(H) P_{\rm{c}}(H) e^{-\i tH} \psi, K E_{J}(H) P_{\rm{c}}(H) e^{-\i t H} \psi \rangle \ dt.
\end{align*}
First, for all $L \geqslant 1$,
\begin{equation*}
\frac{L}{T} \int_0 ^T e^{\i tH} F' e^{-\i t H} \ dt = \frac{L}{T} \big [ e^{\i tH}P_{\rm{c}}(H) E_{J}(H) R(\varsigma) \varphi(A/L) R(\varsigma) E_{J}(H) P_{\rm{c}}(H) e^{-\i t H} \big]_0 ^T \xrightarrow[T \to \pm \infty]{} 0.
\end{equation*}
Second, by the RAGE Theorem \eqref{RAGEsup},
\begin{equation*}
\begin{split}
\sup \limits_{\| \psi\| \leqslant 1} \frac{1}{T} \int_0 ^T \langle & E_{J}(H) P_{\rm{c}}(H) e^{-\i tH} \psi, K E_{J}(H) P_{\rm{c}}(H) e^{-\i t H} \psi \rangle \ dt \\
& \leqslant \sup \limits_{\| \psi\| \leqslant 1} \frac{1}{T} \int_0 ^T \| K E_{J}(H) e^{-\i t H} P_{\rm{c}}(H) \psi \| \ dt \\
& \leqslant \sup \limits_{\| \psi\| \leqslant 1} \left(\frac{1}{T} \int_0 ^T \| K E_{J}(H) e^{-\i t H} P_{\rm{c}}(H) \psi \|^2 \ dt \right)^{1/2} \xrightarrow[T \to \pm \infty]{} 0.
\end{split}
\end{equation*}
It follows that for $L$ sufficiently large (but finite),
\begin{equation*}
\lim \limits_{T \to \pm \infty} \sup \limits_{\| \psi\| \leqslant 1} \frac{1}{T} \int_0 ^T \bigg \| \Big \langle \frac{A}{L} \Big \rangle ^{-s} e^{-\i t H} P_{\rm{c}}(H) E_{J}(H) \psi \bigg \|^2 \ dt = 0.
\end{equation*}
Finally \eqref{Formula3} follows by noting that $\langle A \rangle^{-s} \langle A / L \rangle ^{s}$ is a bounded operator. \qed
\begin{comment}
Before closing out the section, we would like to point out another estimate that emerges from this proof. Although weaker than \eqref{NewFormula3}, it is worth mentioning.
\begin{remark} Under the same assumptions as that of Theorem \ref{Main}, for all $\psi \in \mathcal{H}$,
\begin{equation}
\label{NewFormula2}
\lim \limits_{t \to \pm \infty} \| \langle A \rangle ^{-s} P_{\rm{ac}}(H) E_{\mathcal{I}}(H) e^{-\i t H} \psi \|= 0.
\end{equation}
\noindent To see this, take the inner product with $e^{-\i tH}P_{\rm{ac}}(H) \psi$ in \eqref{Formula2} to get
\begin{align*}
q(t) := \langle e^{-\i tH} P_{\rm{ac}}(H) \psi, F' e^{-\i t H} P_{\rm{ac}}(H) \psi \rangle & \geqslant C \| \langle A/ L \rangle ^{-s} E_{J}(H) e^{-\i t H} P_{\rm{ac}} (H) \psi \rangle \|^2 \\
&\quad+ \langle E_{J}(H) e^{-\i tH} P_{\rm{ac}} (H) \psi, K E_{J}(H) e^{-\i t H}P_{\rm{ac}} (H) \psi \rangle.
\end{align*}
Here $F'$ is the r.h.s.\ of \eqref{Formula2}. By the Riemann-Lebesgue Lemma, the second term on the r.h.s.\ of the previous inequality goes to zero as $t$ goes to infinity. As for the l.h.s.\ of this inequality, we note that
\begin{align*}
q'(t) &= \langle e^{-\i tH} P_{\rm{ac}}(H) \psi, \i H E_J(H) [R(\varsigma), \i \varphi(A / L) ]_{\circ} E_J(H) e^{-\i t H} P_{\rm{ac}}(H) \psi \rangle \\
&\quad - \langle e^{-\i tH} P_{\rm{ac}}(H) \psi, E_J(H) [R(\varsigma), \i \varphi(A / L) ]_{\circ} E_J(H) \i H e^{-\i t H} P_{\rm{ac}}(H) \psi \rangle.
\end{align*}
As $J$ is a bounded interval, $H E_J(H)$ is a bounded operator. So $q'$ is a uniformly bounded function and thereby $q$ is a uniformly continuous function. Moreover, for all $s,t \in \R$,
\[\int_s ^t |q(\tau)| d\tau \leqslant 2\| R(\varsigma) \varphi(A/L) R(\varsigma) \| \| \psi \|^2. \]
Hence, $\int_{\R} |q(\tau)| d\tau < \infty$. In particular $\lim _{t \to \pm \infty} q(t) =0$. Denoting $\mathcal{I} = J$, \eqref{NewFormula2} follows.
\end{remark}
\end{comment}
\section{A discussion about the compactness of operators of the form $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$}
\label{Section:Compactness}
As pointed out in the Introduction, the novelty of formula \eqref{NewFormula} hinges on the operator $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ not being relatively compact. The non-compactness of $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is also what sets \eqref{NewFormula3} apart from \eqref{NewFormula4}. We start by noting that $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is $H$-relatively compact if and only if it is compact, since $\mathcal{I} \subset \R$ is a bounded interval.
We will allow ourselves to consider operators of the form $\langle A \rangle ^{-s} \chi(H)$, where $\chi$ is a smooth function, rather than $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$. On the one hand, if $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$ is compact, then so is $\langle A \rangle ^{-s} \chi(H)$, where $\chi$ is any smooth function with support contained in $\mathcal{I}$. On the other hand, if $\langle A \rangle ^{-s} \chi(H)$ is compact, where $\chi$ is a smooth bump function that approximates the characteristic function of $\mathcal{I}$ and equals one on $\mathcal{I}$, then so is $\langle A \rangle ^{-s} E_{\mathcal{I}}(H)$.
We will also suppose that $H = H_0 + V$, where $V$ is some $H_0$-form compact operator, and $H_0$ is viewed as the ``free'' operator. In other words we will work under the assumption \ref{item:A5}.
The reason for doing so is that $H_0$ is much easier to work with than $H$ in practice. In this case we note that $\langle A \rangle ^{-s} \chi(H)$ is compact if and only if $\langle A \rangle ^{-s} \chi(H_0)$ is. We therefore have the question: Is $\langle A \rangle ^{-s} \chi(H_0)$ a compact operator? A first result is:
\begin{proposition}
\label{propNoEigenvalues}
Let $H_0,A$ be self-adjoint operators in $\H$. Suppose that $H_0$ has a spectral gap. Suppose that $H_0 \in \mathcal{C}^1(A)$ and that for some $\lambda \in \R \setminus \sigma(H_0)$, the operator $C := [(H_0-\lambda)^{-1},\i A]_{\circ} \geqslant 0$ is injective. Then $A$ does not have any eigenvalues. In particular, $\langle A \rangle ^{-s} \not \in \mathcal K(\H)$ for any $s>0$.
\end{proposition}
\begin{remark}
The examples of Section \ref{Section:Examples} satisfy the hypotheses of Proposition \ref{propNoEigenvalues}. The positivity of $C$ holds because $\sigma(H_0) \subset [0,+\infty)$. The injectivity holds because $0$ is not an eigenvalue of $H_0$.
\end{remark}
\begin{proof}
Let $\psi$ be an eigenvector of $A$. Since $A \in \mathcal{C}^1((H_0-\lambda)^{-1})$, the Virial Theorem (\cite[Proposition 7.2.10]{ABG}) says that $0= \langle \psi, [(H_0-\lambda)^{-1},\i A]_{\circ} \psi \rangle = \langle \psi, C \psi \rangle = \|\sqrt{C} \psi \|^2$. The injectivity of $\sqrt{C}$ forces $\psi = 0$, i.e.\ $\sigma_{\rm{p}}(A) = \emptyset$. Now, it is known that the spectrum of a self-adjoint operator with compact resolvent consists solely of isolated eigenvalues of finite multiplicity, see e.g.\ \cite[Theorem 6.29]{K}. So if $A$ had compact resolvent, then we would have $\sigma(A) = \sigma_{\rm{p}}(A) = \emptyset$. However this is not possible because the spectrum of a self-adjoint operator is non-empty. We conclude that $A$ does not have compact resolvent. Writing $(z-A)^{-1} = (z-A)^{-1} \langle A \rangle \langle A \rangle ^{-1}$, we infer that $\langle A \rangle ^{-1} \not \in \mathcal K(\H)$. Finally, consider the bounded self-adjoint operator $\langle A \rangle ^{-s}$ for some $s>0$. If this operator were compact, then by the spectral theorem for such operators we would have $\langle A \rangle ^{-s} = \sum_i \lambda_i \langle \phi_i, \cdot \rangle \phi_i$ for some eigenvalues $\{\lambda_i\}$ and eigenvectors $\{\phi_i\}$ which form an orthonormal basis of $\H$. But then $\langle A \rangle ^{-1} = \sum_i \lambda_i^{1/s} \langle \phi_i, \cdot \rangle \phi_i$, implying that the latter operator is compact. This contradiction proves $\langle A \rangle ^{-s} \not \in \mathcal K(\H)$ for all $s>0$. \qed
\end{proof}
Unfortunately, this result does not settle the question, because it does not guarantee the non-compactness of $\langle A \rangle ^{-s} \chi(H_0)$. In fact, we have examples where this operator is compact. For lack of a more robust result, we shall spend the rest of this section examining several examples. Our conclusion is that $\langle A \rangle ^{-s} \chi(H_0)$ is sometimes compact, sometimes not. Specifically, in each of our examples, the compactness holds in dimension one but not in higher dimensions. To start off, we cook up a simple example that reinforces the viewpoint that non-compactness is possible, especially in higher dimensions.
\begin{example}
Let $\H := L^2(\R^2)$, $H_0:= -\partial^2/ \partial x_1^2$ and $A := -\i(x_1 \partial / \partial x_1 + \partial / \partial x_1 \, x_1)/2$ be a conjugate operator to $H_0$. The spectrum of $H_0$ is purely absolutely continuous and $\sigma(H_0) = [0,+\infty)$. In particular, $\ker(H_0-\lambda) = \{0\}$ for all $\lambda \in \R$. Also $[H_0, \i A]_{\circ} = 2H_0$ exists as a bounded operator from $\mathcal{D}(\langle H_0 \rangle ^{1/2})$ to $\mathcal{D}(\langle H_0 \rangle ^{1/2})^*$, implying that $H_0 \in \mathcal{C}^{\infty}(A)$ and that the Mourre estimate holds on every compact subinterval of $(0,+\infty)$. In addition, $\{e^{\i tA}\}_{t \in \R}$ stabilizes $\mathcal{D}(H_0)$. The assumptions of Theorems \ref{Main2} and \ref{Main} are therefore all verified. Moreover $\langle A \rangle ^{-s} \chi (H_0)$ is clearly not compact in $L^2(\R^2)$. This can be seen by applying $\langle A \rangle ^{-s} \chi (H_0)$ to a sequence of functions $(f(x_1) g_n (x_2))_{n=1} ^{+\infty}$ with $g_n$ chosen so that $\int_{\R} |g_n(x_2)|^2 dx_2$ is constant; we spell this out after the example.
\end{example}
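Here is one way to spell out the last claim (our own elaboration): with a slight abuse of notation, let $\chi(H_0)$ also denote the corresponding operator $\chi(-d^2/dx_1^2)$ on $L^2(\R)$. Fix $f, g \in L^2(\R) \setminus \{0\}$ with $\chi(H_0) f \neq 0$, which is possible as soon as $\chi \not\equiv 0$, and set $g_n(x_2) := g(x_2 - n)$. Since $H_0$ and $A$ act in the variable $x_1$ only,
\begin{equation*}
\| \langle A \rangle ^{-s} \chi(H_0) (f \otimes g_n) \|_{L^2(\R^2)} = \| \langle A \rangle ^{-s} \chi(H_0) f \|_{L^2(\R)} \, \| g \|_{L^2(\R)}
\end{equation*}
does not depend on $n$ and is non-zero (recall that $\langle A \rangle ^{-s}$ is injective), whereas $f \otimes g_n$ converges weakly to zero. A compact operator would map this weakly null sequence to a norm null sequence.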
To continue with other examples, we set up notation. Let $\mathcal{C}_0(\R)$ be the continuous functions vanishing at infinity and $\mathcal{C}_c^{\infty}(\R)$ the compactly supported smooth functions.
\begin{example}
\label{ex:Derivative}
Let $\mathcal{H} := L^2(\R^d)$, $H_0 := x_1+...+x_d$ and $A:= \i(\partial / \partial x_1 +...+ \partial / \partial x_d)$. This system verifies the Mourre estimate at all energies thanks to the commutator relation $[H_0,\i A]_{\circ} = d I$, and $H_0 \in \mathcal{C}^{\infty}(A)$ holds. Although this system does not quite fall within the framework of this article because $H_0$ is not semi-bounded ($\sigma(H_0) =\R$), it conveys the idea that compactness holds only in dimension one:
\end{example}
\begin{proposition}
\label{TwoDimensionKnownResult}
Let $H_0$ and $A$ be those from Example \ref{ex:Derivative}. Let $\chi \in \mathcal{C}_0(\R)$ and $s > 0$ be given. If $d=1$, then $\langle A \rangle ^{-s} \chi(H_0) \in \mathcal K(L^2(\R))$. If $d=2$, then $\langle A \rangle ^{-s} \chi(H_0) \not\in \mathcal K(L^2(\R^2))$.
\end{proposition}
\begin{proof} The one-dimensional result is a classic, see Proposition \ref{KnownFourierDecay}. We prove the two-dimensional result. Let $\mathcal{I}(\lambda,r)$ denote the open interval centered at $\lambda \in \R$ and of radius $r >0$. Fix $\lambda$ and $r$ such that $\mathcal{I}(\lambda,r) \subset$ supp$(\chi)$. Then the function of two variables $\chi(x_1+x_2)$ has support containing the oblique strip $\cup _{t \in \mathcal{I}(\lambda,r)}\{ (u,t-u) : u \in \R \} \subset \R^2$. Let $\psi \in \mathcal{C}^{\infty}_c(\R)$ be a bump function that equals one on $\mathcal{I}(\lambda,r)$ and zero on $\R \setminus \mathcal{I}(\lambda,2r)$. Let $\theta \in \mathcal{C}^{\infty}_c(\R)$ be a bump function that equals one on $[-1,1]$ and zero on $\R \setminus [-2,2]$. Let $\Psi_n(x,y) := n^{-1/2} \psi(x+y) \theta(y/n)$. Then $\| \Psi_n\| \equiv \|\psi\| \| \theta \|$. Here $\|\cdot \|$ denotes the norm on $L^2(\R^2)$. Fix $\nu \in \N$, and let $\varphi ^{\nu}_n := (A+\i)^{\nu} \Psi_n$. For $\nu = 0$, clearly $\| \varphi^{\nu}_n\| = \|\Psi_n\|$ is uniformly bounded in $n$, and an easy induction proves this for all fixed values of $\nu \in \N$. Consider now $\phi_n := \chi(H_0) (A+\i)^{-\nu} \varphi^{\nu}_n = \chi(H_0) \Psi_n$. Since $\chi \in \mathcal{C}_0(\R)$ and $\Psi_n \xrightarrow[]{w} 0$, $\phi_n \xrightarrow[]{w} 0$.
If $\chi(H_0) (A+\i)^{-\nu} \in \mathcal K(L^2(\R^2))$ for some $\nu \in \N$, then the image of the ball $B(0,\sup_{n\geqslant 1} \|\varphi^{\nu}_n\|)$ by this operator is pre-compact in $L^2(\R^2)$, and so there exists $\phi \in L^2(\R^2)$ and a subsequence $(n_k)^{\infty}_{k=1}$ such that $\lim_{k \to +\infty} \|\phi_{n_k} -\phi\| =0$. Since $\phi_{n_k} \xrightarrow[]{w} 0$, it must be that $\phi=0$ since the strong and weak limits coincide and are unique. But this contradicts the fact that $\|\phi_{n_k}\| \geqslant \|\chi \mathbf{1}_{\mathcal{I}(\lambda,r)}(\cdot)\| \| \theta \|$ for all $k \geqslant 1$. So $\chi(H_0) (A+\i)^{-\nu} \not \in \mathcal K(L^2(\R^2))$, and this implies that $\chi(H_0) \langle A \rangle ^{-s} \not \in \mathcal K(L^2(\R^2))$ for all $s \leqslant \nu$. The result follows by taking adjoints.
\qed
\end{proof}
For what it is worth, we tweak Example \ref{ex:Derivative} to create a system that fits all the assumptions of this article. We state a variation of it and leave the details of the proof to the reader.
\begin{example}
Let $\H := L^2(\R^d)$. Let $H_0$ be the operator of multiplication by $x_1 h(x_1)+...+x_d h(x_d)$, where $h \in \mathcal{C}^{\infty}(\R)$ is a smooth version of the Heaviside function (zero below the origin, positive above the origin, and such that $x \mapsto xh(x)$ is strictly increasing on $(0,+\infty)$). Then $\sigma(H_0) = [0,+\infty)$. In particular, $H_0$ is a positive operator. The conjugate operator is still $A:= \i(\partial / \partial x_1 +...+ \partial / \partial x_d)$. We have $H_0 \in \mathcal{C}^{\infty}(A)$ and the Mourre estimate holds on all positive bounded intervals. One also verifies that $\{e^{\i tA}\}_{t \in \R}$ stabilizes $\mathcal{D}(H_0)$ (note that $\{e^{\i tA}\}_{t \in \R}$ is the group of translations on $L^2(\R^d)$). Assumptions \ref{item:A1} - \ref{item:A2} are verified. With regard to the compactness issue, Proposition \ref{TwoDimensionKnownResult} holds, but for the two-dimensional result, one must also assume that $\chi$ has nonempty support in $(0,+\infty)$.
\end{example}
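A concrete admissible choice of $h$ (our own, easily checked) is
\begin{equation*}
h(x) := e^{-1/x} \ \ \text{for} \ x > 0, \qquad h(x) := 0 \ \ \text{for} \ x \leqslant 0,
\end{equation*}
for which $h \in \mathcal{C}^{\infty}(\R)$, while $\frac{d}{dx} \big( x h(x) \big) = e^{-1/x} (1 + 1/x) > 0$ on $(0,+\infty)$, so that $x h(x)$ is zero below the origin, positive above it, and strictly increasing there.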
\begin{comment}
\begin{example} (The Stark Effect)
\normalfont
Let $\H := L^2(\R_+^d)$. The Hamiltonian describing a quantum mechanical particle in a constant electric field in the --$x_1$ direction is given by $-\Delta + Ex_1$, where $E>0$ is the strength of the electric field. Let $H_0$ be the closure of this operator on $\mathcal{S}(\R_+^d)$, the Schwartz class. Then $H_0$ is self-adjoint (\cite[Theorem 7.1]{CFKS}). The spectrum of $H_0$ is purely absolutely continuous and bounded below. Let $A := \i \partial / \partial x_1$. We have that $H_0 \in \mathcal{C}^{\infty}(A)$, and $[H_0,\i A]_{\circ} = I$ implies the Mourre estimate on all bounded intervals contained in the spectrum of $H_0$. Consider the decomposition $L^2(\R_+^d) = L^2(\R_+) \otimes L^2(\R_+^{d-1})$ according to the coordinate decomposition $x=(x_1,x_{\perp})$ in position space and $p= (p_1,p_{\perp})$ in momentum space. Then on $\S(\R_+^d)$ the following formula holds:
\begin{equation*}
H_0 = e^{-\i p_1 ^3/3}(p_{\perp}^2 +x_1)e^{\i p_1^3/3}.
\end{equation*}
It follows that $\chi(H_0) = e^{-\i p_1 ^3/3}\chi(p_{\perp}^2 +x_1)e^{\i p_1^3/3}$ for all $\chi \in \mathcal{C}^{\infty}_c(\R)$. We also have $\tau(A) = e^{-\i p_1 ^3/3} \tau(p_1) e^{\i p_1^3/3}$ for all $\tau \in \mathcal{C}^{\infty}_c(\R)$. Thus $\tau(A) \chi(H_0)$ is unitarily equivalent to $\tau(p_1) \chi(p_{\perp}^2 +x_1)$. This operator is compact in dimension one, as in Proposition \ref{KnownCompactResult}, but isn't in higher dimensions.
\end{example}
\end{comment}
Our next model is the continuous Laplacian. We refer to Section \ref{ex:3} for a description of the model. The situation is the same as with the preceding example: compactness in dimension one, non-compactness in higher dimensions.
\begin{example} [Continuous Laplacian with generator of dilations]
\label{ex:contLaplacian}
Let $\H := L^2(\R^d)$, $H_0 := -\Delta$ be the Laplacian and $A := -\i (x\cdot \nabla +\nabla\cdot x)/2 = -\i (2x\cdot \nabla +d)/2$ be the generator of dilations. We will be making use of the Fourier transform on $L^2(\R^d)$ given by
\begin{equation}
\label{FourierTransform}
(\mathcal{F}\psi)(\xi) = (2\pi)^{-d/2} \int _{\R^d} \psi(x) e^{-\i \xi \cdot x} dx.
\end{equation}
Note that $\F A \F^{-1} = -A = \sum_{i=1}^d \i (\xi_i \partial / \partial \xi_i + \partial / \partial \xi_i \xi_i)/2$ and $\F H_0 \F^{-1} = |\xi|^2 := \sum_{i=1}^d \xi_i^2$.
\end{example}
\begin{proposition}
\label{Prop:DeltaA}
Let $H_0$ and $A$ be those from Example \ref{ex:contLaplacian}. We have $\tau(A)\chi(H_0) \in \mathcal K(L^2(\R))$ for all $\tau,\chi \in \mathcal{C}_0(\R)$, with $\chi$ supported away from zero.
\end{proposition}
\noindent \textit{First proof:}
Let $Q$ be the operator of multiplication by the variable $x$ and $P := -\i d/d x$. Let $\chi \in \mathcal{C}_c^{\infty}(\R)$ be supported away from zero. Write $(A+\i)^{-1} \chi(H_0) = (A+\i)^{-1}\chi_1(P)$, where $\chi_1 := \chi \circ \sigma$ and $\sigma(\xi)=\xi^2$. We introduce the equivalence relation $\approx$ on $\mathcal{B}(L^2(\R))$ whereby two operators are equivalent if their difference is a compact operator. We have:
\begin{align*}
(A+\i)^{-1} \chi(H_0) &= (A+\i)^{-1} \chi_1(P) (Q+\i)(Q+\i)^{-1} \\
& \approx (A+\i)^{-1} \chi_1(P) Q(Q+\i)^{-1} \\
&\approx (A+\i)^{-1} Q \chi_1(P) (Q+\i)^{-1} \\
&= (A+\i)^{-1} (QP) \chi_2(P) (Q+\i)^{-1} \\
&\approx (A+\i)^{-1} (A+\i) \chi_2 (P) (Q+\i)^{-1} \approx 0.
\end{align*}
Note the use of Proposition \ref{TwoDimensionKnownResult} each time a compact operator was removed. In the third step we used that $[\chi_1(P),Q]_{\circ} (Q+\i)^{-1} = \chi_1'(P) (Q+\i)^{-1} \approx 0$. In the fourth step we took advantage of the fact that $\chi_1$ is supported away from zero to set $\chi_2(P) := P^{-1}\chi_1(P)$, thereby recovering $A = (QP + PQ)/2 = QP -\i/2$.
Thus we have shown that $(A+\i)^{-1} \chi(H_0) \in \mathcal K(L^2(\R))$. It follows that $(A-z)^{-1} \chi(H_0) \in \mathcal K(L^2(\R))$ for all $z \in \C \setminus \R$. Note that the linear span of the functions $\{(x-z)^{-1} : z \in \C \setminus \R\}$ and the subspace $\mathcal{C}_{c}^{\infty}(\R)$ are dense in $\mathcal{C}_0(\R)$ with respect to the uniform norm. Since $H_0$ and $A$ are self-adjoint operators, each is unitarily equivalent to the operator of multiplication by a real-valued function on some appropriate $L^2(M)$ space, and the norm of such a multiplication operator equals the uniform norm of the multiplication function on the spectrum. Two limiting arguments, one for $H_0$ and one for $A$ (in either order), extend the compactness to $\tau(A)\chi(H_0)$ as in the statement of the Proposition.
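Concretely, if $(\tau_n)_{n \geqslant 1}$ are finite linear combinations of resolvent functions with $\|\tau - \tau_n\|_{\infty} \to 0$ (such combinations are dense by the Stone-Weierstrass theorem) and $\chi_n \in \mathcal{C}^{\infty}_c(\R)$ are supported away from zero with $\|\chi - \chi_n\|_{\infty} \to 0$, then
\begin{equation*}
\|\tau(A)\chi(H_0) - \tau_n(A)\chi_n(H_0)\| \leqslant \|\tau - \tau_n\|_{\infty} \|\chi\|_{\infty} + \sup_{n \geqslant 1}\|\tau_n\|_{\infty} \|\chi - \chi_n\|_{\infty} \ ,
\end{equation*}
so that $\tau(A)\chi(H_0)$ is a norm limit of compact operators, and $\mathcal K(L^2(\R))$ is closed in the operator norm.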
\qed
\noindent \textit{Second proof:}
We see that $\mathcal{F}(A - \i/2)^{-1}\chi(H_0)\mathcal{F}^{-1}$ is an integral transform acting in the momentum space as follows:
\begin{equation*}
L^2(\R) \ni \varphi \mapsto (\mathcal{F}(A - \i/2)^{-1}\chi(H_0)\mathcal{F}^{-1} \varphi)(\xi) = \frac{\i}{\xi} \int _{0} ^{\xi} \chi(t^2) \varphi(t) dt \in L^2(\R).
\end{equation*}
The fact that $\chi$ is supported away from zero is crucial here. Moreover, if $\chi \in \mathcal{C}^{\infty}_c(\R)$, then this integral transform is Hilbert-Schmidt and there is $c>0$ such that
\begin{equation*}
\|(A-\i /2)^{-1} \chi(H_0)\|_{HS}^2 = \int_{\R} \int_{\R} \mathbf{1}_{(0,\xi)}(t) \xi^{-2} |\chi(t^2)|^2 dt d\xi \leqslant c \|\chi\|^2_2.
\end{equation*}
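For instance, one can take $c = \delta^{-1}$, where $\delta>0$ is such that $\text{supp}(\chi) \subset \{|x| \geqslant \delta\}$. Indeed, integrating first in $\xi$ gives $\int_{\R} \mathbf{1}_{(0,\xi)}(t)\, \xi^{-2}\, d\xi = |t|^{-1}$, and since $\chi(t^2) \neq 0$ forces $|t| \geqslant \sqrt{\delta}$, the double integral is bounded by
\begin{equation*}
\delta^{-1/2} \int_{\R} |\chi(t^2)|^2 \, dt = \delta^{-1/2} \int_0^{+\infty} |\chi(u)|^2 \, u^{-1/2} \, du \leqslant \delta^{-1} \|\chi\|_2^2 \ .
\end{equation*}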
In particular, $(A-\i /2)^{-1} \chi(H_0)$ is compact. One extends the compactness to operators of the form $\tau(A)\chi(H_0)$ as in the statement of the Proposition using the same limiting argument explained in the first proof.
\qed
To complete the one-dimensional picture, we mention that it is possible to show that $(A+\i)^{-1}\chi(H_0) \not \in \mathcal K(L^2(\R))$ for any $\chi \in \mathcal{C}^{\infty}_c(\R)$ with $\chi(0) \neq 0$. We now turn to the multi-dimensional case.
\begin{proposition}
\label{dnotcompact}
Let $H_0$ and $A$ be those from Example \ref{ex:contLaplacian}. If $d \geqslant 2$, then $\langle A \rangle ^{-s} \chi(H_0) \not \in \mathcal K(L^2(\R^d))$ for any $\chi \in \mathcal{C}^{\infty}_c(\R)$ with non-empty support in $(0,+\infty)$ and for any $s \in \R$.
\end{proposition}
\begin{proof}
Let $\mathcal{I}(\lambda, r)$ denote the interval of radius $r>0$ centered at $\lambda$. There are $\lambda \in (0,+\infty)$ and $r>0$ such that $\mathcal{I}(\lambda,r) \subset (0,+ \infty)$ and $m := \inf_{x \in \mathcal{I}(\lambda,r)} |\chi(x)| >0$. Consider the constant energy curves
\[ \{(\xi_1,...,\xi_d) \in \R^d : E = \xi_1^2+...+\xi_d^2 \}. \]
For $d=2$, these are just circles centered at the origin. Henceforth we work in dimension two to keep the notation clean; the necessary adjustments in higher dimensions are straightforward. The support of the function of two variables $\chi(\xi_1^2+\xi_2^2)$ contains the annulus obtained by rotating $\mathcal{I} (\lambda',r')$ about the origin, where
\begin{equation*}
\lambda' := (\sqrt{\lambda+r}+\sqrt{\lambda-r})/2, \quad r' := (\sqrt{\lambda+r}-\sqrt{\lambda-r})/2.
\end{equation*}
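Indeed, one checks directly that $\lambda' - r' = \sqrt{\lambda-r}$ and $\lambda' + r' = \sqrt{\lambda+r}$, so that $\xi_1^2+\xi_2^2 \in \mathcal{I}(\lambda,r)$ precisely when $|\xi| \in \mathcal{I}(\lambda',r')$.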
Let $\psi_1,\psi_2 \in \mathcal{C}_c^{\infty}(\R)$ be any bump functions verifying: a) $\psi_1(0) \neq 0$, b) supp($\psi_1$) $= [-1,1]$, c) supp($\psi_2$) $\subset \mathcal{I} (\lambda', r'/2)$, and d) $\|\psi_i \|= 1$, where $\| \cdot \|$ denotes the $L^2$ norm. Now let $\Psi_n(\xi_1,\xi_2) := \sqrt{n} \psi_1(\xi_1 n) \psi_2(\xi_2)$. Then $\| \Psi_n\|=1$ for all $n \geqslant 1$, and $\Psi_n \xrightarrow[]{w} 0$. Also, for $n$ sufficiently large, $\Psi_n$ is supported in the aforementioned annulus. Now fix $\nu \in \N$ and let $\varphi_n^{\nu} := \F(A+\i)^{\nu}\F^{-1} \Psi_n$. Then for $\nu =0$, $\|\varphi_n ^{\nu}\| = \| \Psi_n \| \equiv 1$, while for $\nu = 1$,
\begin{equation*}
\varphi_n^{\nu}(\xi_1,\xi_2) = -2n^{3/2}\i \xi_1\psi_1'(\xi_1 n)\psi_2(\xi_2) - 2n^{1/2} \i \psi_1(\xi_1 n) \xi_2 \psi_2'(\xi_2) - \i \Psi_n(\xi_1,\xi_2),
\end{equation*}
and we see that $\|\varphi_n^{\nu}\|$ is uniformly bounded in $n$. A simple induction on $\nu$ shows that for every fixed value of $\nu \in \N$, $\|\varphi_n ^{\nu}\|$ is uniformly bounded in $n$. Consider $\phi_n := \F \chi(H_0)(A+\i)^{-\nu} \F^{-1} \varphi_n ^{\nu} = \F \chi(H_0) \F^{-1} \Psi_n$.
If $\F \chi(H_0)(A+\i)^{-\nu}\F^{-1} \in \mathcal K(L^2(\R^2))$ for some value of $\nu$, the image of the ball $B(0, \sup_{n\geqslant 1} \|\varphi_n ^{\nu}\|)$ under this operator is pre-compact in $L^2(\R^2)$, and so there exist $\phi \in L^2(\R^2)$ and a subsequence $(n_k)_{k=1}^{\infty}$ such that $\lim_{k \to +\infty} \|\phi_{n_k} - \phi\| =0$. Since $\phi_{n_k} \xrightarrow[]{w} 0$, and the weak limit of a strongly convergent sequence coincides with its strong limit, it must be that $\phi = 0$. But this contradicts the fact that $\|\phi_{n_k} \| \geqslant m \|\Psi_{n_k} \| = m > 0$ for all $k\geqslant 1$. So $\chi(H_0)(A+\i)^{-\nu} \not \in \mathcal K(L^2(\R^2))$ and this implies that $\chi(H_0)\langle A \rangle ^{-s} \not \in \mathcal K(L^2(\R^2))$ for all $s \leqslant \nu$. The result follows by taking adjoints.
\qed
\end{proof}
A nice corollary of Proposition \ref{dnotcompact} that deserves a mention is the following. It uses Proposition \ref{KnownFourierDecay}. The result can also be proven to hold in dimension one.
\begin{corollary}
Let $A$ be that from Example \ref{ex:contLaplacian}. Let $d \geqslant 2$. Then for all $(s,\epsilon) \in \R \times (0,+\infty)$, $\langle A \rangle ^{-s} \langle Q \rangle ^{\epsilon} \not \in \mathcal{B}(L^2(\R^d))$.
\end{corollary}
\begin{example} [Continuous Laplacian with Nakamura's conjugate operator]
In \cite{N}, Nakamura presents an alternate conjugate operator to the continuous Laplacian $H_0$. Let $\beta > 0$. In momentum space it reads
\[ \F A \F^{-1} := \frac{\i}{2\beta} \sum _{i=1}^d \left( \sin(\beta \xi_i) \frac{\partial}{\partial \xi_i} + \frac{\partial}{\partial \xi_i} \sin(\beta \xi_i) \right).\]
Under some conditions on the potential $V$, it is shown that the Mourre theory holds for $H := H_0 + V$ with respect to $A$ on the interval $(0, (\pi/\beta)^2/2)$. We refer also to \cite{Ma} for a generalization of this conjugate operator and a more in-depth discussion. An argument as in Propositions \ref{dnotcompact} and \ref{discretenotcompact} shows that, for $d \geqslant 2$, $\langle A \rangle ^{-s} \chi(H_0) \not \in \mathcal K(L^2(\R^d))$ for all $\chi \in \mathcal{C}_0(\R)$ and $s \in \R$.
\end{example}
Our last example is the discrete Laplacian on $\mathbb{Z}^d$. We refer to Section \ref{ex:2} for the details on the model.
\begin{example} [Discrete Schr\"odinger operators]
\label{ex:discSchro}
Let $\H := \ell^2(\mathbb{Z}^d)$, $H_0 := \Delta$ be the discrete Laplacian and $A$ be its conjugate operator as in Example \ref{ex:2}. Let
\begin{equation*}
\ell^2(\mathbb{Z}^d) \ni u \mapsto (\F u)(\theta) = (2\pi)^{-d/2} \sum_{n \in \mathbb{Z}^d} u(n) e^{\i \theta \cdot n} \in L^2([-\pi,\pi]^d)
\end{equation*}
be the Fourier transform. We recall that $H_0$ is unitarily equivalent to the operator of multiplication by $\sum_{i=1}^d (2-2\cos(\theta_i))$ and that $A$ is unitarily equivalent to the self-adjoint realization of the operator $\i \sum _{i=1} ^d (\sin(\theta_i) \partial / \partial \theta_i + \partial / \partial \theta_i \sin(\theta_i))$, which we denote by $A_{\F}$.
\end{example}
\begin{proposition}
Let $H_0$ and $A$ be those from Example \ref{ex:discSchro}. If $d=1$, then $\tau (A) \chi (H_0) \in \mathcal K(\ell^2(\mathbb{Z}))$ for all $\tau \in \mathcal{C}_0(\R)$ and $\chi \in \mathcal{C}([0,4])$ supported away from zero and four.
\end{proposition}
\begin{proof}
Using simple techniques from the theory of first order differential equations, we see that $(A+\i)^{-1}\chi(H_0)$ is a Hilbert-Schmidt integral transform acting as follows:
\begin{equation*}
L^2([-\pi,\pi]) \ni \psi \mapsto (\F (A+\i)^{-1} \chi(H_0) \F^{-1}\psi)(\theta) = \frac{1}{2\i \sin(\theta/2)} \int _0 ^{\theta} \frac{\sin(t/2)}{\sin(t)} \chi \left(2-2\cos(t)\right) \psi(t) \, dt.
\end{equation*}
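To obtain this formula, observe that in the Fourier picture the equation $(A_{\F} + \i)u = \chi(2-2\cos(\cdot))\,\psi$ is the first order linear differential equation
\begin{equation*}
2\i \sin(\theta)\, u'(\theta) + \i(1+\cos(\theta))\, u(\theta) = \chi(2-2\cos(\theta))\,\psi(\theta) \ ,
\end{equation*}
which has integrating factor $\sin(\theta/2)$, since $(1+\cos(\theta))/2\sin(\theta) = \cot(\theta/2)/2$; integrating from $0$ to $\theta$ yields the kernel above.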
Note that it is crucial that $\chi(2-2\cos(t))$ be supported away from $t=0$ and $t=\pm \pi$: this makes the kernel square-integrable, so that the operator is Hilbert-Schmidt, hence compact. The general statement follows from the limiting argument explained in the first proof of Proposition \ref{Prop:DeltaA}.
\qed
\end{proof}
\begin{figure}[ht]
\begin{tikzpicture}[domain=-5:5]
\begin{axis}
[grid = major,
clip = true,
clip mode=individual,
axis x line = middle,
axis y line = middle,
xlabel={$\theta_1$},
xlabel style={at=(current axis.right of origin), anchor=west},
ylabel={$\theta_2$},
ylabel style={at=(current axis.above origin), anchor=south},
domain = -3.14152965:3.14152965,
xmin = -3.3,
xmax = 3.3,
ymin = -3.3,
ymax = 3.3,
y=1cm,
x=1cm,
xtick={-3.14159, -1.5708, 1.5708, 3.14159},
xticklabels={$-\pi$, $-\frac{\pi}{2}$, $\frac{\pi}{2}$, $\pi$},
ytick={-3.14159, -1.5708, 1.5708, 3.14159},
yticklabels={$-\pi$, $-\frac{\pi}{2}$, $\frac{\pi}{2}$, $\pi$},
after end axis/.code={\path (axis cs:0,0) node [anchor=north east,yshift=-0.075cm] {0} ;}]
\addplot[samples=200, domain = -1.04719755:1.04719755, color = red] {acos(2-cos(deg(x))-0.5)*3.14159265/180};
\addplot[samples=200, domain = -1.04719755:1.04719755, color = red] {-acos(2-cos(deg(x))-0.5)*3.14159265/180};
\addplot[samples=200, domain = -1.570796:1.570796, color = red] {acos(2-cos(deg(x))-1)*3.14159265/180};
\addplot[samples=200, domain = -1.570796:1.570796, color = red] {-acos(2-cos(deg(x))-1)*3.14159265/180};
\addplot[samples=200, domain = -2.0943951:2.0943951, color = red] {acos(2-cos(deg(x))-1.5)*3.14159265/180};
\addplot[samples=200, domain = -2.0943951:2.0943951, color = red] {-acos(2-cos(deg(x))-1.5)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:3.14159265, color = red] {acos(2-cos(deg(x))-2)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:3.14159265, color = red] {-acos(2-cos(deg(x))-2)*3.14159265/180};
\addplot[samples=200, domain = 1.04719755:3.14159265, color = red] {acos(2-cos(deg(x))-2.5)*3.14159265/180};
\addplot[samples=200, domain = 1.04719755:3.14159265, color = red] {-acos(2-cos(deg(x))-2.5)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-1.04719755, color = red] {acos(2-cos(deg(x))-2.5)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-1.04719755, color = red] {-acos(2-cos(deg(x))-2.5)*3.14159265/180};
\addplot[samples=200, domain = 1.570796:3.14159265, color = red] {acos(2-cos(deg(x))-3)*3.14159265/180};
\addplot[samples=200, domain = 1.570796:3.14159265, color = red] {-acos(2-cos(deg(x))-3)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-1.570796, color = red] {acos(2-cos(deg(x))-3)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-1.570796, color = red] {-acos(2-cos(deg(x))-3)*3.14159265/180};
\addplot[samples=200, domain = 2.0943951:3.14159265, color = red] {acos(2-cos(deg(x))-3.5)*3.14159265/180};
\addplot[samples=200, domain = 2.0943951:3.14159265, color = red] {-acos(2-cos(deg(x))-3.5)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-2.0943951, color = red] {acos(2-cos(deg(x))-3.5)*3.14159265/180};
\addplot[samples=200, domain = -3.14159265:-2.0943951, color = red] {-acos(2-cos(deg(x))-3.5)*3.14159265/180};
\end{axis}
\end{tikzpicture}
\caption{Level curves $\{(\theta_1,\theta_2) \in [-\pi,\pi]^2 : E = 2-2\cos(\theta_1) + 2 - 2\cos(\theta_2) \}$ of constant energy for $d=2$}
\label{fig:levelcurve}
\end{figure}
\begin{proposition}
\label{discretenotcompact}
Let $H_0$ and $A$ be those from Example \ref{ex:discSchro}. If $d \geqslant 2$, then $\langle A \rangle ^{-s} \chi (H_0) \not \in \mathcal K(\ell^2(\mathbb{Z}^d))$ for all $\chi \in \mathcal{C}([0,4d])$ with non-empty support in $(0,4d)$, and for all $s \in \R$.
\end{proposition}
\begin{proof}
Let $\lambda \in (0, 4d)$ and $r>0$ be such that $\mathcal{I}(\lambda,r) \subset (0,4d)$ and $m:= \inf_{x \in \mathcal{I}(\lambda,r)} | \chi(x)| > 0$. Fix an energy $E \in \mathcal{I}(\lambda,r)$. Consider the constant energy curves
\begin{equation*}
\{(\theta_1,...,\theta_d) \in [-\pi,\pi]^d : E = 2-2\cos(\theta_1) + ... + 2-2\cos(\theta_d) \}.
\end{equation*}
For $d=2$, these level curves are drawn in Figure \ref{fig:levelcurve} for various energies in $[0,8]$. Let us proceed in dimension two to keep things simple. The aim is to show that $\F \chi (H_0) \F^{-1} (A_{\F} +\i)^{-\nu}$ is not compact for every fixed value of $\nu \in \N$. Now $\F \chi (H_0) \F^{-1}$ is equal to the operator of multiplication by $\chi(2-2\cos(\theta_1)+2-2\cos(\theta_2))$. The support of this function of two variables contains a neighborhood of a portion of one of the vertical lines $\theta_1=-\pi$, $0$ or $\pi$. Let $\mathcal{N}$ be such a neighborhood, and let $T$ be the corresponding value of $\theta_1$. We can then create a sequence $\Psi_n(\theta_1,\theta_2) = \sqrt{n} \psi_1((\theta_1-T) n )\psi_2(\theta_2)$ that is supported in $\mathcal{N}$, converges weakly to zero and satisfies $\|\Psi_n\| \equiv 1$. Now let $\varphi_n^{\nu} := (A_{\F} + \i)^{\nu} \Psi_n$. Then for every fixed value of $\nu$, $\|\varphi_n ^{\nu}\|$ is uniformly bounded in $n$. The rest of the proof follows the same lines as that of Proposition \ref{dnotcompact}.
\qed
\end{proof}
Finally, as in the continuous case, we have:
\begin{corollary}
Let $A$ be that from Example \ref{ex:discSchro}. Let $d \geqslant 2$. Then for all $(s,\epsilon) \in \R \times (0,+\infty)$, $\langle A \rangle ^{-s} \langle N \rangle ^{\epsilon} \not \in \mathcal{B}(\ell^2(\mathbb{Z}^d))$.
\end{corollary}
\begin{comment}
\begin{proposition}
\label{Prop:1dnotcompact}
Let $-\Delta$ be the Laplacian on $L^2(\R)$, and let $A := \i \left(x \frac{d}{dx} +\frac{d}{dx} x\right) = \i \left(2x\frac{d}{dx} +1\right)$ be the generator of dilations. Then $\chi(-\Delta) (A+\i)^{-1} \not \in \mathcal K(L^2(\R))$ for any $\chi \in \mathcal{C}^{\infty}_c(\R)$ with $\chi(0) \neq 0$.
\end{proposition}
\begin{proof}
Let $\psi \in \mathcal{C}^{\infty}_c(\R)$ be any compactly supported function with $\|\psi\|= 1$, where $\| \cdot \|$ denotes the $L^2(\R)$ norm. Let $\psi_n(x) := \psi(x/n)n^{-1/2}$. Then $\|\psi_n\| = \|\psi\| = 1$ for all $n \geqslant 1$ and $\psi_n \xrightarrow[]{w} 0$ as $n\to \infty$. Let $\varphi_n := (A+\i)\psi_n$. Then $\varphi_n(x) = 2\i (x/n) \psi'(x/n)n^{-1/2} + 2\i \psi_n(x)$ and we see that $\|\varphi_n\|$ is uniformly bounded in $n$. Now let $\R \ni t \mapsto F(t) := \chi(-t\Delta) \in \mathcal{B}(\H)$. By the Helffer-Sj\"ostrand and resolvent formulae, we see that
\begin{equation*}
DF(t) := \lim \limits_{h \to 0} \frac{F(t+h) -F(t)}{h} = -\chi'(-t\Delta) \Delta.
\end{equation*}
By the Fundamental Theorem of Operator Calculus, we have
\begin{equation*}
\chi(-\Delta) - \chi(0) = F(1) - F(0) = \int_0 ^1 DF(t) dt = -\int_0 ^1 \chi'(-t\Delta)\Delta dt = b(\Delta) \Delta,
\end{equation*}
where $b(\Delta) := -\int_0 ^1 \chi'(-t\Delta) dt \in \mathcal{B}(\H)$. Then
\begin{equation}
\label{phi plus O}
\phi_n := \chi(-\Delta) (A+\i)^{-1} \varphi_n = \chi(-\Delta) \psi_n = \chi(0) \psi_n + b(\Delta) \Delta \psi_n.
\end{equation}
Note that $b(\Delta) \Delta \psi_n = O(n^{-2})$. If $\chi(-\Delta)(A+\i)^{-1} \in \mathcal K(L^2(\R))$, then the image of the ball $B(0, \sup_{n\geqslant 1} \|\varphi_n\|)$ by this operator is pre-compact in $L^2(\R)$, and so there exists $\phi \in L^2(\R)$ and a subsequence $(n_k)_{k=1}^{\infty}$ such that $\lim_{k \to \infty} \|\phi_{n_k} - \phi\| =0$. By \eqref{phi plus O}, it follows that $\lim_{k \to \infty} \| \chi(0) \psi_{n_k} - \phi\| = 0$. Since $\chi(0) \psi_{n_k} \xrightarrow[]{w} 0$, it must be that $\phi = 0$ since the strong and weak limits coincide and are unique. But this contradicts the fact that $\| \chi(0) \psi_{n_k} \| = |\chi(0)| >0$ for all $k\geqslant 1$.
\qed
\end{proof}
\end{comment}
\section{Introduction}
Based on the methods used by Floer \cite{floer} in Symplectic Topology to study the intersection properties of Lagrangian submanifolds, in \cite{OS2,OS4} Ozsv\' ath and Szab\' o introduced a package of three-manifold invariants called Heegaard Floer homology. This eventually led to the definition of Knot Floer homology \cite{OS7,Ras1}, a related package of knot invariants.
In the last two decades Knot Floer homology has proved to be an extremely powerful tool for the study of knots in $S^3$ \cite{tau,surgery,YN}. In particular, in the realm of knot concordance, invariants like the upsilon invariant $\Upsilon_K(t)$ introduced by Ozsv\'ath, Stipsicz, and Szab\'o \cite{OSS4} have turned out to be extremely effective in deciding certain questions about independence in the knot concordance group $\mathcal{C}$.
In \cite{upsiloncovers}, extending the definition of the upsilon invariant to knots inside rational homology spheres, new invariants for knots in $S^3$ were introduced. Indeed, given a knot $K\subset S^3$ one can consider its branched double cover $\Sigma(K)$. This is a rational homology sphere carrying a unique spin structure $\mathfrak{s}_0$. By considering the pull-back $\widetilde{K} \subset \Sigma(K)$ of $K$ to $\Sigma(K)$ we get a null-homologous knot $\widetilde{K} \subset \Sigma(K)$. One can see that the upsilon invariant $\Upsilon_{K, \mathfrak{s}_0}(t)$ of $(\Sigma(K), \widetilde{K})$ with respect to $\mathfrak{s}_0$ yields a knot concordance invariant of $K \subset S^3$.
Further invariants carrying information about the concordance type of $K$ can be obtained by considering the invariants $\Upsilon_{K, \mathfrak{s}}(t)$ of $\widetilde{K}$ associated to the other $\text{Spin}^c$ structures of the double branched cover $\Sigma(K)$. More specifically in \cite{upsiloncovers} the following theorem, reminiscent of the results of Grigsby, Ruberman, and Strle \cite{grigsby2008knot}, was proved.
\begin{thm}[Alfieri, Celoria \& Stipsicz]
If $K$ is a slice knot then there exists a subgroup $G < H^2(\Sigma(K); \mathbb{Z})$ of cardinality
$\sqrt{|H^2(\Sigma(K); \mathbb{Z})|}$ such that $\Upsilon_{K,\mathfrak{s}_0+\xi}(t)=0$ for all $\xi\in G$.
\end{thm}
In \cite{upsiloncovers} we performed computations of the invariants $\Upsilon_{K,\mathfrak{s}_0+\xi}(t)$, $\xi \in H^2(\Sigma^m(K), \mathbb{Z})$ for some families of knots having genus one doubly-pointed Heegaard diagrams. These are known as $(1,1)$ knots, and were first studied by Rasmussen \cite{ras11}.
In this paper we address the issue of computations in the case of graph knots. In what follows, a \textit{graph knot} is a knot that can be described by means of a plumbing tree $\Gamma$ with one unframed vertex $v_0$, see Figure \ref{trefoil} below. Note that the knot of an irreducible plane curve singularity $f(x,y)=0$ is a graph knot. Indeed, so is its lift to the branched double cover $\Sigma(K)$. (See Proposition \ref{procedure} below.) The study of graph knots was initiated by Ozsv\' ath, Stipsicz, and Szab\' o in \cite{Lspaceknots}.
\begin{thm}\label{maingoal} Let $K$ be a null-homologous graph knot associated to a plumbing $\Gamma$ with unframed vertex $v_0$. Suppose that $G=\Gamma-v_0$ is a negative-definite plumbing tree with at most two bad vertices. Let $\mathfrak{s}$ be a $\text{Spin}^c$ structure of $Y(G)$ and $k$ a characteristic vector of the intersection form of the associated plumbing of spheres $X(G)$ representing the $\text{spin}^c$ structure $\mathfrak{s}$ on the boundary. Then
\[\Upsilon_{K, \mathfrak{s}}(t)= -2 \min_{x\in \mathbb{Z}^s} \chi_t(x)+\left( \frac{k^2+ |G|}{4} - t \ \frac{k\cdot F-F^2}{2} \right) \ , \]
where $\chi_t$ denotes the twisted Riemann-Roch function
\[\chi_t(x)=-\frac{1}{2}\left( (k+tv_0^*) \cdot x + x^2 \right)\ ,\]
and $F\in H_2(X(G), \mathbb{Q})$ a homology class representing $-v_0^*\in H^2(X(G), \mathbb{Z})$.
\end{thm}
This leads to related formulae for the $\tau$-invariants introduced by Grigsby, Ruberman, and Strle \cite{grigsby2008knot}. These are related to the $\Upsilon$-invariant via the identity $\tau_{\mathfrak{s}}(K)= -\lim_{t\to 0^+}\left(\Upsilon_{K, \mathfrak{s}}(t)-\Upsilon_{K, \mathfrak{s}}(0)\right)/t$.
The proof of Theorem \ref{maingoal} outlined below is an adaptation of the argument presented in \cite{OSGraphManifolds}. (A similar type of work was carried out in \cite{instantons}, where the same technique was employed to perform computations in the setting of Instanton Floer homology.) There are two main ingredients. The first is a surgery exact triangle involving the ``$t$-modified'' knot homologies introduced by Ozsv\'ath, Stipsicz, and Szab\'o in \cite{OSS4}. Compare with the knot Floer exact triangle of Ozsv\' ath and Szab\' o \cite[Theorem 8.2]{OS7}.
\begin{thm}\label{exactanalyticthm}
Let $K \subset Y$ be a knot in a rational homology sphere, and $C \subset Y$ a framed knot in its complement. Let $\lambda$ denote the framing of $C$, and $\mu$ its meridian. Suppose that the surgery three-manifolds $Y_\lambda(C)$ and $Y_{\lambda+\mu}(C)$ are rational homology spheres. Then there is an exact triangle
\begin{equation}\label{exactanalytic}
\begin{tikzcd}[column sep=small]
\text{tHFK}^-(Y,K) \arrow{rr} & & \text{tHFK}^-(Y_\lambda(C), K) \arrow{ld} \\
& \text{tHFK}^-(Y_{\lambda+\mu}(C), K) \arrow{lu} &
\end{tikzcd} \ .
\end{equation}
\end{thm}
Secondly, in the spirit of \cite{Nemethi1,OSS1}, we introduce a combinatorial invariant associated to algebraic knots. This is a one-parameter family of knot homologies $t\mathbb{HFK}_*(\Gamma)$ with the same formal structure as $t\text{HFK}(Y,K)$. In Section \ref{combinatorialexactsequence} we establish a long exact sequence playing the role of the surgery exact triangle in the combinatorial theory.
\begin{thm}\label{latticeexact}There is a long exact sequence of modules
\[ \xymatrix{
\ar[r] & t\mathbb{HFK}_p(\Gamma-v) \ar[r]^{\ \ \ \phi_* } & t\mathbb{HFK}_p(\Gamma) \ar[r]^{\psi_* \ \ \ } & t\mathbb{HFK}_p(\Gamma_{+1}(v)) \ar[r]^{\ \ \delta} & t\mathbb{HFK}_{p-1}(\Gamma) \ar[r] &} \]
\end{thm}
Our main result is then obtained by comparing the two exact triangles as in \cite{OSGraphManifolds}. Note that the combinatorial theory we develop here builds on the work carried out in \cite{Lspaceknots} by Ozsv\' ath, Stipsicz, and Szab\' o.
\vspace{0.3cm}
\begin{footnotesize}
\textbf{Acknowledgements}
\textit{I would like to thank Peter Ozsv\' ath for showing interest in this project while it was in its early stages, and for some useful advice. I would also like to thank Andr\' as Stipsicz, Andr\' as Juh\' asz, Ian Zemke, Daniele Celoria, Andr\' as N\' emethi and Liam Watson for useful conversations. Most of this work was carried out in the summer of 2018, when I was partially supported by the NKFIH Grant \' Elvonal (Frontier) KKP 126683 and K112735.}
\end{footnotesize}
\section{Embedded resolutions of curves and branched double coverings}
In what follows we will deal with knots in rational homology spheres. As a consequence we will adopt the following terminology.
\begin{defi} A knot is a pair $(Y, K)$ where $Y$ is a smooth closed three-manifold and $K$ is the image of a $C^\infty$ embedding $S^1 \hookrightarrow Y$.
\end{defi}
A knot $(Y, K)$ is called \textit{null-homologous} if $[K]=0$ in $H_1(Y; \mathbb{Z})$. This is the same as asking that $K$ has a Seifert surface, \textit{i.e.}\ that there exists a surface with boundary $\Sigma\subset Y$ such that $\partial \Sigma=K$.
Most of the time in what follows we will deal with null-homologous knots $(Y,K)$ where the three-manifold $Y$ is a rational homology sphere, that is $H_*(Y; \mathbb{Q})\simeq H_*(S^3;\mathbb{Q})$. Furthermore, knots and three-manifolds are always assumed to be \textit{oriented}. Note that in a rational homology sphere knots are automatically \textit{rationally null-homologous}, that is $[K]=0$ in $H_1(Y; \mathbb{Q})$.
Two knots $(Y_0,K_0)$ and $(Y_1,K_1)$ are called \textit{rationally homology concordant} if there is a rational homology cobordism $X:Y_0 \to Y_1$ containing a smoothly embedded cylinder $C \subset X$ such that $\partial C= C \cap \partial X = K_0 \cup -K_1$. If $Y_0$ and $Y_1$ are equipped with $\text{spin}^c$ structures, say $\mathfrak{s}_0$ and $\mathfrak{s}_1$, and these extend over $X$, we say that $(Y_0,K_0, \mathfrak{s}_0)$ and $(Y_1,K_1, \mathfrak{s}_1)$ are $\text{Spin}^c$ rationally homology concordant.
There are essentially two ways to represent a knot $(Y,K)$:
\begin{enumerate}
\item via a \textit{mixed diagram}, that is a pair $(L,K)$ where $L$ is a diagram of a framed link in the three-sphere $S^3$, and $K$ a knot lying in the link complement $S^3-L$,
\item or via a \textit{doubly-pointed Heegaard diagram}, that is a Heegaard diagram
\[(\Sigma, \{\alpha_1, \dots, \alpha_g\},\{\beta_1, \dots, \beta_g\})\]
together with a pair of base points $z, w\in \Sigma-\alpha_1- \dots- \alpha_g-\beta_1- \dots- \beta_g$ lying in the complement of the $\alpha$- and the $\beta$-curves \cite{OS7}.
\end{enumerate}
\subsection{Algebraic knots} Let $(C, 0) \subset (\mathbb{C}^2, 0)$ be the germ of an irreducible plane curve singularity. By looking at the intersection of $C \subset \mathbb{C}^2$ with a small sphere $S_\epsilon(0)\subset \mathbb{C}^2$ centred at the origin we get a knot $(S^3, K)$. Knots of this kind are usually called \textit{algebraic knots}.
The topology of an algebraic knot $(S^3, K)=(S_\epsilon(0), S_\epsilon(0)\cap C)$ can be understood by means of an embedded resolution of the curve singularity $(C,0)\subset (\mathbb{C}^2, 0)$. By this we mean a complex map $\rho: \mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ such that:
\begin{itemize}
\item $\rho$ defines an isomorphism $\mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \setminus \rho^{-1}(0) \to \mathbb{C}^2\setminus \{0\} $ away from the origin,
\item the \textit{exceptional divisor},
\[E:=\rho^{-1}(C)=E_1 \cup \dots \cup E_n \cup \widetilde{C} \ ,\]
is an algebraic curve with smooth components $\{E_1 , \dots , E_n,\widetilde{C} \}$. Furthermore, $E_i \simeq \mathbb{C} P^1$, $i\in \{1, \dots, n\}$, and $\widetilde{C}=\overline{\rho^{-1}(C\setminus\{0\})}$
\item no three components of the exceptional divisor pass through the same point,
\item the exceptional divisor $E$ has only normal crossing singularities, that is: the intersection of two components of $E$ is locally modelled on \[\{(x,y)\in \mathbb{C}^2 \ | \ x^{m}y^{l}=0\}\] for some $m,l\geq 1$ (the \textit{multiplicities}).
\end{itemize}
The exceptional divisor $E$ of an embedded resolution $\mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ is encoded in the so called \textit{resolution graph}. This is the graph $\Gamma$ having as vertices the irreducible components of $E$ and an edge connecting each pair of intersecting components. Note that the resolution graph comes with a distinguished vertex (the one corresponding to the proper transform $\widetilde{C}$ of the curve $C$) and integers $e_1,\dots, e_n$ labelling the other vertices. These are defined as the self intersections $e_i=E_i \cdot E_i$ of the curves $E_1 , \dots , E_n$ in the blow-up $\mathbb{C}^2 \# n\overline{\mathbb{C} P^2}$.
\begin{exa}The curve $C=\{(x,y) \in \mathbb{C}^2 \ : \ x^2+y^3=0\}$ has an isolated singularity at the origin $0\in \mathbb{C}^2$. After three consecutive blow-ups \cite[Example 7.2.3 (a)]{thebook} we get an embedded resolution $\mathbb{C}^2 \# 3\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ with graph:
\[ \ \ \ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(1.5,0) }*+{\bullet}="c1"
!{(0,-1.5) }*+{\bullet}="b1"
!{(0,0.5) }*+{E_3}
!{(-1.8,0.5) }*+{E_1}
!{(1.8,0.5) }*+{E_2}
!{(0.5,-1.3) }*+{\widetilde{C}}
"x"-"c1"
"x"-"a1"
"x"-"b1"
} \ .
\]
Here $E_1^2=-3, E_2^2=-2$, and $E_3^2=-1$. Furthermore the curves $E_1, E_2$, and $E_3$ have multiplicity $m_1=2, m_2=3$ and $m_3=6$ respectively.
\end{exa}
Note that once we erase the unframed vertex from the embedded resolution graph of a curve singularity we get a negative-definite plumbing tree representing $S^3$. (This is because an embedded resolution $\mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ is in particular a resolution of the trivial surface singularity $(\mathbb{C}^2,0)$.)
The resolution graph $\Gamma$ gives rise to a surgery diagram representing the algebraic knot $(S^3,K)$ associated to the curve singularity $(C,0)\subset (\mathbb{C}^2, 0)$.
Indeed, given any rooted tree $(\Gamma, v_0)$ and a weight assignment $m: \Gamma \setminus \{v_0\} \to\mathbb{Z}$, we can look at the plumbed three-manifold $Y(G)$ associated to $G=\Gamma \setminus \{v_0\}$ and consider the knot $(Y(G), K)$ represented by the unframed vertex as in Figure \ref{trefoil}. This gives rise to an interesting class of knots.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.8]{A}
\caption{\label{trefoil} A surgery diagram of the trefoil knot.}
\end{center}
\end{figure}
\begin{defi}
A knot $(Y, K)$ that can be presented by means of a plumbing tree $\Gamma$ with one unframed vertex $v_0$ is called a graph knot.
\end{defi}
\begin{exa} Let $(X, 0) \subset \mathbb{C}^N$ be a complex surface singularity and $(C,0) \subset (X,0)$ be a complex curve singularity. Then $(Y,K)=(S_\epsilon(0) \cap X, S_\epsilon(0) \cap C)$ is an algebraic knot in the sense of this new definition.
\end{exa}
\begin{exa}Let $(Y_1, K_1)$ and $(Y_2, K_2)$ be algebraic knots with plumbing diagrams $\Gamma_1$ and $\Gamma_2$. Then the connected sum $(Y_1\# Y_2, K_1\#K_2)$ is an algebraic knot. Indeed $\Gamma=\Gamma_1 * \Gamma_2$, the graph obtained by joining $\Gamma_1$ and $\Gamma_2$ along their unframed vertices, gives rise to a diagram for $(Y_1\# Y_2, K_1\#K_2)$.
\end{exa}
\subsection{Branched coverings} A knot $(S^3, K)$ gives rise to a knot $(\Sigma(K), \widetilde{K})$ in a rational homology sphere: the ground three-manifold is the branched double cover $\Sigma(K)$ of $S^3$ along $K$, and $\widetilde{K}=\text{Fix}(\tau)$ is the fixed point set of the covering involution $\tau: \Sigma(K) \to \Sigma(K)$.
\begin{prop} \label{procedure}The double branched cover $(\Sigma(K), \widetilde{K})$ of an algebraic knot $(S^3, K)$ associated to a plane curve singularity $(C, 0) \subset (\mathbb{C}^2, 0)$ is algebraic.
\end{prop}
\begin{proof} Let $ f(x,y)=0$ be an equation for $C\subset \mathbb{C}^2$, and $\rho: \mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ denote an embedded resolution. Furthermore, let $\rho^{-1}(C)=E_1 \cup \dots \cup E_n \cup \widetilde{C}=E$ denote the exceptional divisor of the embedded resolution.
First we note that $\phi=f\circ \rho$ defines an equation for $E$. This assigns multiplicities $m_1, \dots, m_n \geq 1$ to the rational components of the exceptional divisor. By looking at the branched double cover of $\mathbb{C}^2 \# n\overline{\mathbb{C} P^2}$ along the Weil divisor $D=\widetilde{C} + \sum_{i=1}^n m_i E_i$ (see \cite[pp. 239-241]{thebook} ) we obtain a smooth surface $\widetilde{S}$ representing a resolution of the surface singularity $(S,0)$ with equation $z^2=f(x,y)$.
The various components of the exceptional divisor $E \subset \mathbb{C}^2 \# n\overline{\mathbb{C} P^2}$ lift to $\widetilde{S}$ as explained in \cite[p. 252]{thebook} and form a configuration of curves inside $\widetilde{S}$. (Note that for the purposes of singularity theory one usually ignores the pull-back of the strict transform $\widetilde{C}$ to $\widetilde{S}$, while locating the latter here plays a crucial role.) The adjacency graph of this configuration gives rise to a tree $\Gamma$ describing $(\Sigma(K), \widetilde{K})$.
\end{proof}
\begin{exa}Following the procedure outlined in the proof of Proposition \ref{procedure}, starting from the resolution graph of the trefoil knot we get a diagram for its branched double cover:
\[ \ \ \ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,1.5) }*+{\bullet}="d1"
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(1.5,0) }*+{\bullet}="c1"
!{(0,-1.5) }*+{\bullet}="b1"
!{(0.5,0.5) }*+{-2}
!{(0.5,1.5) }*+{-1}
!{(-1.5,0.5) }*+{-3}
!{(1.8,0.5) }*+{-3}
!{(0.5,-1.3) }*+{K}
"x"-"c1"
"x"-"a1"
"x"-"b1"
"x"-"d1"
} \ .
\]
This represents a knot in the lens space $L(3,2)$:
\[
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,1.5) }*+{\bullet}="d1"
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(1.5,0) }*+{\bullet}="c1"
!{(0.5,1.5) }*+{-1}
!{(0.5,0.5) }*+{-2}
!{(-1.5,0.5) }*+{-3}
!{(1.5,0.5) }*+{-3}
"x"-"c1"
"x"-"a1"
"x"-"d1"
} =
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(1.5,0) }*+{\bullet}="c1"
!{(0,0.5) }*+{-1}
!{(-1.5,0.5) }*+{-3}
!{(1.5,0.5) }*+{-3}
"x"-"c1"
"x"-"a1"
} =
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(0,0.5) }*+{-2}
!{(-1.5,0.5) }*+{-2}
"x"-"a1"
} = L(3,2) \ .
\]
Similar computations can be run for all torus knots starting from the defining equation $x^p+y^q=0$.
\end{exa}
\section{$t$-Modified knot Floer homology, and the upsilon invariant of knots in rational homology spheres}\label{analytictheory}
In \cite{OSS4} Ozsv\'ath, Stipsicz and Szab\' o introduced a one-parameter family of knot homologies $\text{tHFK}^-(K)$ giving rise to knot invariants of knots in $S^3$. In what follows we go through the straightforward generalisation of their construction taking into account knots in rational homology spheres.
\subsection{Notation}In what follows we will work over the ring $\mathcal{R}$ of long power series with coefficients in $\mathbb{F}$, the field with two elements. This is the commutative ring of infinite formal sums $\sum_{\alpha \in A} q^\alpha$, with $A \subset \mathbb{R}_{\geq0}$ well-ordered. One defines:
\[\left( \sum_{\alpha \in A} q^\alpha \right) + \left( \sum_{\beta \in B} q^\beta \right)= \sum_{\gamma \in A\cup B} q^\gamma \]
and
\[\left( \sum_{\alpha \in A} q^\alpha \right) \cdot \left( \sum_{\beta \in B} q^\beta \right)= \sum_{\gamma \in A+ B} c_\gamma \cdot q^\gamma \ , \]
where $A+B=\{\alpha+ \beta \ | \ \alpha \in A, \ \beta \in B\} \subset \mathbb{R}_{\geq 0}$ and
\[c_\gamma=\#\left\{(\alpha, \beta)\in A \times B \ | \ \alpha+\beta=\gamma \right\} \ \mod 2 . \]
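For instance, $(1+q^{1/2}) \cdot (1+q^{1/2}) = 1+q$ in $\mathcal{R}$, the two cross terms cancelling modulo $2$, and infinite sums such as $\sum_{n \geq 1} q^{1-1/n}$ are legitimate elements of $\mathcal{R}$, the set of exponents $\{1-1/n \ | \ n\geq 1\}$ being well-ordered.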
The ring $\mathcal{R}$ has the fundamental property that every finitely generated $\mathcal{R}$-module $M$ is a sum of cyclic modules \cite[Section 11]{Brandal}, \textit{i.e.}
\[ M \simeq \mathcal{R}^k \oplus \mathcal{R}/f_1 \oplus \dots \oplus \mathcal{R}/f_m \]
for some $f_1 , \dots , f_m \in \mathcal{R}$, and $k \geq 0$ (the \textit{rank} of $M$). Notice that the field of fractions of the ring $\mathcal{R}$ is given by
\[ \mathcal{R}^*= \left\{\left.\sum_{\alpha \in A} q^\alpha \ \right| \ A \subset \mathbb{R} \text{ well-ordered} \right\} \ ,\]
and that the rank of a finitely generated $\mathcal{R}$-module $M$ equals the dimension of $M_{\mathcal{R}^*}= M \otimes_\mathcal{R} \mathcal{R}^*$ as an $\mathcal{R}^*$-vector space.
We will think of $\mathcal{R}$ as a graded ring, with $\text{deg }q=-1$. Note that $\mathbb{F}[U]\hookrightarrow \mathcal{R}$ via the identification $U=q^2$.
\subsection{$t$-Modified Knot homologies}
Let $Y$ be a rational homology sphere. Recall \cite{OS7} that a knot $(Y,K)$ can be represented by a doubly-pointed Heegaard diagram $(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\beta}, z, w)$. In the symmetric product $\text{Sym}^g(\Sigma)$, the space of degree $g$ divisors over the genus $g$ Riemann surface $\Sigma$, this specifies two half-dimensional, totally-real submanifolds $\mathbb{T}_{\alpha}= \alpha_1 \times \dots \times \alpha_g$ and $\mathbb{T}_\beta= \beta_1 \times \dots \times \beta_g$, and two analytic submanifolds $V_z=\{ z\} \times \text{Sym}^{g-1}(\Sigma)$ and $V_w=\{ w\} \times \text{Sym}^{g-1}(\Sigma)$ of complex codimension one. We define
\[ CF(\mathbb{T}_\alpha, \mathbb{T}_\beta)= \bigoplus_{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \mathcal{R} \cdot \mathbf{x} \ .\]
Given two intersection points $\mathbf{x}$ and $\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ consider the set $\pi_2(\mathbf{x},\mathbf{y})$ of homotopy classes of topological disks $u: D^2\simeq [0,1] \times\mathbb{R} \to \text{Sym}^g(\Sigma)$ such that
\begin{itemize}
\item $u \left( 0\times \mathbb{R} \right) \subseteq \mathbb{T}_{\boldsymbol{\alpha}}$ and $u \left( 1 \times \mathbb{R} \right) \subseteq \mathbb{T}_{\boldsymbol{\beta}}$,
\item $\lim_{t\to - \infty } u(s+it)= \mathbf{x}$ and $\lim_{t \to + \infty } u(s+it)= \mathbf{y}$.
\end{itemize}
For a generic choice of a path of almost-complex structures $J_s$ we can look at the moduli space $\mathcal{M}(\phi)$ of solutions of the (perturbed) Cauchy-Riemann equation
\begin{equation} \label{cauchyriemann}
\frac{\partial u}{\partial s} (s,t) + J_s \left( \frac{\partial u }{ \partial t} (s,t) \right) = 0
\end{equation}
within a given homotopy class $\phi \in \pi_2(\mathbf{x},\mathbf{y})$ as it was done in \cite{OS2}. It turns out that if we restrict our attention to those classes with Maslov index $\mu(\phi)=1$ then $\mathcal{M}(\phi)$ is a \textit{finite} collection of lines (Gromov's Compactness Theorem). In \cite[Section 4]{OS2} this led to the definition of a differential
\[\partial \mathbf{x}=\sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{ \substack{ \phi \in \pi_2(\mathbf{x} , \mathbf{y}) \\ \mu( \phi ) =1 }}\#\left|\frac{\mathcal{M}(\phi)}{\mathbb{R}}\right| \ q^{2n_z(\phi)} \cdot \mathbf{y} \]
turning $CF(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ into a chain complex. Here $n_z(\phi)=\#|\phi(D^2)\cap V_z|$ denotes the intersection number with the divisor $V_z$. Notice that the differential of $CF(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ completely ignores the base point $w$. In fact, the chain homotopy type of $CF(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ only provides an invariant of the background three-manifold \cite[Theorem 1.1]{OS2}.
In order to take into account the base point $w$ and hence the knot $K \subset Y$, we can use the following differential
\[\partial_t \mathbf{x}=\sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{ \substack{ \phi \in \pi_2(\mathbf{x} , \mathbf{y}) \\ \mu( \phi ) =1 }}\#\left|\frac{\mathcal{M}(\phi)}{\mathbb{R}}\right| \ q^{tn_z(\phi)+(2-t)n_w(\phi)} \cdot \mathbf{y} \ \ \ \ \ \text{for } t \in [0,2] \ , \]
also recording the intersection with the divisor $V_w$. We will denote by $\text{tHFK}^-(Y,K)$ the homology of the resulting chain group $CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)=(CF(\mathbb{T}_\alpha, \mathbb{T}_\beta), \partial_t)$.
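Note that for $t=2$ the differential $\partial_2$ agrees with the differential $\partial$ above, while for $t=0$ the roles of the two base points are exchanged and, under the identification $U=q^2$, the complex $(CF(\mathbb{T}_\alpha, \mathbb{T}_\beta), \partial_0)$ computes $\text{HF}^-$ of the background three-manifold (with coefficients extended to $\mathcal{R}$). For intermediate values of $t$ the differential records both base points.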
\subsection{$\text{Spin}^c$ refinement}
In \cite[Section 2.6]{OS2} Ozsv\' ath and Szab\' o built a map $\mathfrak{s}_z: \mathbb{T}_\alpha \cap \mathbb{T}_\beta \to \text{Spin}^c(Y)$ associating to an intersection point a $\text{Spin}^c$ structure of $Y$. Define
$CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta, \mathfrak{s})=\bigoplus_{\substack{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta \\ \mathfrak{s}_z(\mathbf{x})=\mathfrak{s}}} \mathcal{R} \cdot \mathbf{x} \ .$
Since, by \cite[Lemma 2.19]{OS2}, for any pair of intersection points $\mathbf{x}, \mathbf{y}$ the set $\pi_2(\mathbf{x}, \mathbf{y})$ is non-empty if and only if $\mathfrak{s}_z(\mathbf{x})=\mathfrak{s}_z(\mathbf{y})$, we conclude that $CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta, \mathfrak{s})$ is a sub-complex of $CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)$. We set $\text{tHFK}^-(Y, K, \mathfrak{s})=H_*(CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta,\mathfrak{s}))$.
\subsection{Gradings} Attached to an intersection point $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ of a doubly pointed Heegaard diagram there are two rational numbers: the Alexander grading $A(\mathbf{x})$ and the Maslov grading $M(\mathbf{x})$. These have the property that
\[A(\mathbf{x})-A(\mathbf{y})=n_w(\phi) - n_z(\phi) \ \ \ \text{ and } \ \ \ M(\mathbf{x})-M(\mathbf{y})= \mu(\phi) - 2n_z(\phi) \]
where $\phi \in \pi_2(\mathbf{x}, \mathbf{y})$ is a disk connecting $\mathbf{x}$ to $\mathbf{y}$. We define a real-valued grading $\text{gr}_t$ on $CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ via the formula $\text{gr}_t(\mathbf{x})=M(\mathbf{x})-tA(\mathbf{x})$. Note that $\partial_t$ drops the grading by one \cite[Lemma 3.3]{OSS4}. We set
\[\Upsilon_{K ,\mathfrak{s}}(t)= \max \{ \text{gr}_t(\xi) \ | \ \xi \in \text{tHFK}^-(Y,K, \mathfrak{s}) \text{ with } q^\alpha \cdot \xi \not=0 \text{ for all } \alpha>0 \} \ . \]
This is the upsilon invariant of the knot $(Y,K)$ in the $\text{spin}^c$ structure $\mathfrak{s}$. In the sequel we list some basic properties of the upsilon invariant.
\begin{prop}Let $(Y,K)$ be a knot in a rational homology sphere and $\mathfrak{s}$ a $\text{spin}^c$ structure of $Y$, then
\begin{itemize}
\item $\Upsilon_{K,\mathfrak{s}}(0)=d(Y, \mathfrak{s})$ where $d(Y, \mathfrak{s})$ denotes the Heegaard Floer correction term of the pair $(Y,\mathfrak{s})$ as defined by Ozsv\' ath and Szab\' o in \cite{OS24},
\item $\Upsilon_{K,\mathfrak{s}}(t)=-\tau_{\mathfrak{s}}(K) \cdot t +d(Y, \mathfrak{s})$ for all $t>0$ close enough to zero, where $\tau_{\mathfrak{s}}(K)$ denotes the $\tau$-invariant defined by Grigsby, Ruberman, and Strle in \cite{grigsby2008knot},
\item the invariant $\Upsilon_{K, \mathfrak{s}}(t)$ defined presently agrees with the one of \cite{upsiloncovers}, that is
\[\Upsilon_{K, \mathfrak{s}}(t)= -2 \cdot \min_\xi \left\{ \frac{t}{2} A(\xi) + \left( 1- \frac{t}{2}\right) j(\xi) \right\}+d(Y(G), \mathfrak{s}) \ ,\]
where $\xi$ ranges over all cycles with Maslov grading $d= d(Y(G), \mathfrak{s})$.
\end{itemize}
Furthermore, $\Upsilon_{K,\mathfrak{s}}(t)$ is an invariant of $\text{Spin}^c$ rational homology concordance.
\end{prop}
\begin{proof}
The first assertion follows from the fact that $\partial_t$ agrees with the differential of $CF^-(Y,\mathfrak{s})$ when $t=0$. The fact that $\Upsilon_{K,\mathfrak{s}}(t)=-\tau_{\mathfrak{s}}(K) \cdot t +d(Y, \mathfrak{s})$ for small values of $t$ follows from the same argument as \cite[Proposition 1.6]{OSS4}, while the last assertion is proved with the same argument presented in \cite[Section 14.1]{Livingston1}.
The fact that $\Upsilon_{K,\mathfrak{s}}(t)$ is an invariant of $\text{Spin}^c$ rational homology concordance was proved in \cite[Proposition 4.1]{upsiloncovers}.
\end{proof}
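\begin{exa}
By way of illustration, the unknot $O \subset S^3$ admits a doubly-pointed Heegaard diagram of genus one in which $\mathbb{T}_\alpha$ and $\mathbb{T}_\beta$ meet in a single point. The differential $\partial_t$ then vanishes for every $t \in [0,2]$, the only generator has $M=A=0$, and consequently $\text{tHFK}^-(S^3, O) \simeq \mathcal{R}$, generated in grading zero, so that $\Upsilon_{O}(t) \equiv 0$.
\end{exa}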
\subsection{Zemke's inequality} A key feature of the Heegaard Floer correction terms is the Ozsv\' ath and Szab\' o inequality \cite{OS24} relating the correction terms of two three-manifolds connected by a negative-definite $\text{spin}^c$ cobordism. This asserts that given a $\text{spin}^c$ cobordism $(W, \mathfrak{t}): (Y_0, \mathfrak{s}_0) \to (Y_1, \mathfrak{s}_1)$ between two $\text{spin}^c$ rational homology spheres $(Y_0, \mathfrak{s}_0)$ and $(Y_1, \mathfrak{s}_1)$ with $b_1(W)=b_2^+(W)=0$ one has that:
\[d(Y_1, \mathfrak{s}_1) \geq d(Y_0, \mathfrak{s}_0) + \frac{c_1(\mathfrak{t})^2+ b_2(W)}{4} \ .\]
In \cite{zemkegradings} Zemke proved that a similar inequality holds for the upsilon invariant.
\begin{thm}[Zemke \cite{zemkegradings}]\label{Zemkeinequality} Let $(Y_0, K_0)$ and $(Y_1, K_1)$ be two knots, and $\mathfrak{s}_i\in \text{Spin}^c(Y_i)$ $\text{spin}^c$ structures. Suppose that there is a $\text{spin}^c$ cobordism $(W, \mathfrak{t}): (Y_0, \mathfrak{s}_0) \to (Y_1, \mathfrak{s}_1)$ containing a properly embedded surface $\Sigma\hookrightarrow W$ with $\partial \Sigma= K_1\cup-K_0$. If $b_1(W)=b_2^+(W)=0$ then
\[\Upsilon_{K_1, \mathfrak{s}_1}(t)\geq \Upsilon_{K_0, \mathfrak{s}_0}(t)+ \frac{c_1(\mathfrak{t})^2+ b_2(W)-2t\langle c_1(\mathfrak{t}), [\Sigma]\rangle +2t [\Sigma]^2 }{4} + g(\Sigma) \cdot (|t-1|-1) ,\]
where $g(\Sigma)$ denotes the genus of the surface $\Sigma$.
\end{thm}
Suppose that $(Y,K)$ is an algebraic knot represented by a negative definite plumbing tree $\Gamma$. Let $v_0 \in \Gamma$ be the unframed vertex and $G=\Gamma\setminus \{v_0\}$. Then $K \subset Y=Y(G)$ bounds a smooth disk $\Delta \subset X(G)$ properly embedded in the plumbing of spheres associated to $G$. (If $(Y,K) = (S^3, K)$ is the link of a plane curve singularity $ (C, 0) \subset (\mathbb{C}^2, 0)$ then $X(G)$ is identified with the total space of a resolution $\rho: \mathbb{C}^2 \# n\overline{\mathbb{C} P^2} \to \mathbb{C}^2$ of $\mathbb{C}^2$ at the origin and $\Delta = \widetilde{C}$ is just the proper transform of $C$.)
Since $X(G)$ is simply connected, given a $\text{spin}^c$ structure $\mathfrak{s}$ of $Y(G)$ we can chose an extension $\mathfrak{t}$ to $X(G)$. Then according to Theorem \ref{Zemkeinequality} one has that
\begin{equation}\label{charineq} \Upsilon_{K, \mathfrak{s}}(t)\geq \frac{k^2+ |G|}{4}-t \cdot \frac{ k \cdot F- F^2 }{2} \ ,
\end{equation}
where $k=c_1(\mathfrak{t})$ denotes the first Chern class of $\mathfrak{t}$, and $F \in H_2(X(G), \mathbb{Q})$ is a homology class representing the Poincar\' e dual of the class $\phi$ given by $\phi(c)=-\#(\Delta \cap c)$ for $c \in H_2(X(G), \mathbb{Z})$.
See Section \ref{alexanderfiltration} below.
In what follows we will show (Theorem \ref{maingoal}) that the inequality displayed in Equation~\eqref{charineq} is sharp, that is $\Upsilon_{K, \mathfrak{s}}(t)= (k^2+ |G|)/4-t \cdot (k \cdot F- F^2)/2$ for some characteristic vector $k$, provided the graph $G$ satisfies suitable combinatorial hypotheses.
\section{Deformations of lattice cohomology}
\subsection{A quick review of lattice cohomology} Let $G$ be a negative-definite plumbing of spheres. Denote by $X(G)$ the plumbing of spheres associated to $G$. Let
\begin{align*}
\text{Char}(G)&=\{c_1(\mathfrak{s}) : \mathfrak{s} \in \text{Spin}^c(X(G))\} \\
&=\{ k \in H^2(X(G), \mathbb{Z}) : k(x) \equiv x^2 \ (\text{mod 2}) \text{ for every }x \in H_2(X(G), \mathbb{Z})\}
\end{align*}
be the set of characteristic vectors of the intersection form of $X(G)$. Notice that, since $X(G)$ is simply connected, a $\text{Spin}^c$ structure $\mathfrak{s}$ of $X(G)$ is uniquely determined by its first Chern class $c_1(\mathfrak{s})$. Thus, $\text{Char}(G)\simeq \text{Spin}^c(X(G))$.
In what follows we will be interested in the $\text{Spin}^c$ structures of $Y(G)=\partial X(G)$. These always extend over $X(G)$, and two $\text{Spin}^c$ structures represented by characteristic vectors $k$ and $k'$ induce the same $\text{Spin}^c$ structure on the boundary $\partial X(G)=Y(G)$ if and only if $k-k' \in 2 \cdot H^2(X(G), Y(G))\simeq 2 \cdot H_2(X(G))$.
In \cite{OS20} a computational scheme for the Heegaard Floer homologies of graph manifolds was described. This eventually led to the definition of the \textit{lattice homology groups} \cite{Nemethi1}, whose construction we presently review. Let $s$ denote the number of vertices of $G$, and $\mathfrak{s}$ a $\text{Spin}^c$ structure of $Y$. Think of $H_2(X(G), \mathbb{Z})= \mathbb{Z}^s$ as a lattice in $H_2(X(G) , \mathbb{R})= \mathbb{R}^s$. The points of $\mathbb{Z}^s \subset \mathbb{R}^s$ specify the vertices of a subdivision into hypercubes of $H_2(X(G) , \mathbb{R})=\mathbb{R}^s$, and hence a CW-complex decomposition of $\mathbb{R}^s$. A $p$-cell of this CW-complex decomposition is specified by a pair $(\ell, I)$ where $\ell \in H_2(X(G), \mathbb{Z})=\mathbb{Z}^s$, and $I \subset G$ with $|I|=p$. More specifically, we associate to such a pair $(\ell, I)$ the $|I|$-cell corresponding to the convex hull of $\{ \ell + \sum_{v \in J} v \ |\ J \subset I \}$. Fix a reference characteristic vector $k\in \text{Char}(G)$ representing $\mathfrak{s}$, and set
$ \chi_k(\ell) =- \frac{1}{2} (k(\ell)+ \ell^2)$. This is the Riemann-Roch quadratic form associated to the characteristic vector.
For a $p$-cell $\Box$ of the latter CW decomposition of $\mathbb{R}^s$ we set
\[w_k(\Box)= \max_{\text{vertices of } \Box} \chi_k(v) \ . \]
For $l \in \mathbb{Z}$ consider the sub-level set $M_l= \bigcup_{w_k(\Box)\leq l}\Box$ and form the chain complex
\[ \mathbb{C} \mathbb{F}_*(G,\mathfrak{s})=\bigoplus_{l \in \mathbb{Z}} C_{*}(M_l, \mathbb{F}) \ ,\]
where $C_*(-, \mathbb{F})$ denotes CW homology with $\mathbb{F}$-coefficients. (Note that sub-level sets are sub-complexes of $\mathbb{R}^s$ since $w_k(\Box_i)\leq w_k(\Box)$ for every $(p-1)$-dimensional face $\Box_i$ of a given $p$-cell $\Box$.) This is the lattice homology chain group associated to $(G, \mathfrak{s})$. Notice that the inclusions $\dots M_{l-1} \hookrightarrow M_l \hookrightarrow M_{l+1} \dots$ induce a chain map $U: \mathbb{C} \mathbb{F}_*(G,\mathfrak{s}) \to \mathbb{C} \mathbb{F}_*(G,\mathfrak{s})$ turning $\mathbb{C} \mathbb{F}_*(G,\mathfrak{s})$ into a chain complex over the ring $\mathbb{F}[U]$. Notice that $\mathbb{C} \mathbb{F}_*(G,\mathfrak{s})$ has a natural $\mathbb{F}[U]$-basis: $\mathcal{B}=\{ \Box \in C_*(M_{w_k(\Box)},\mathbb{F}) : \Box \ p\text{-face of } \mathbb{R}^s , \ 0 \leq p \leq s\}$. With respect to $\mathcal{B}$ the differential of $\mathbb{C} \mathbb{F}_*(G,\mathfrak{s})$ reads as:
\begin{equation} \label{latticedifferential}
\partial \Box = \sum_{ i} U^{w_k(\Box)- w_k (\Box_i)} \cdot \Box_i \ ,
\end{equation}
where the sum is extended to all $(p-1)$-dimensional faces $\Box_i$ of $\Box$.
\subsection{Gradings}
In addition to the grading induced by the homological grading of the $C_{*}(M_t, \mathbb{Z})$ summands, $\mathbb{C} \mathbb{F}_*(G,\mathfrak{s})$ has another grading corresponding to the Maslov grading of the analytic theory:
\[ \ \ \ \ \ \ \ \ \ \ \ \ \text{gr}(\Box)=\text{dim}(\Box)-2w_k(\Box)+ \frac{k^2+ |G|}{4} \ ,\]
for a basis element $\Box \in \mathcal{B}$. This is then extended to $\mathbb{F}$-generators via the identity $\text{gr}(U^j\cdot\Box)=\text{gr}(\Box)-2j$.
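To illustrate the definitions, we record the simplest non-trivial computation.
\begin{exa}
Let $G$ be the graph with a single vertex $v$ of weight $-2$, so that $Y(G)=L(2,1)$. For the characteristic vector $k$ with $k(v)=0$ one has $\chi_k(\ell v)=\ell^2$; for instance, the $1$-cell $\Box$ spanned by $0$ and $v$ has $w_k(\Box)=1$, and \eqref{latticedifferential} gives $\partial \Box = U \cdot 0 + v$. The generator of the non-torsion part of the homology sits at the minimum of $\chi_k$, in grading $-2 \cdot 0 + (k^2+1)/4=1/4$. For the characteristic vector with $k(v)=-2$ one finds instead $\chi_k(\ell v)=\ell(\ell+1)$ and grading $(k^2+1)/4=-1/4$. These are the two correction terms of $L(2,1)$.
\end{exa}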
\subsection{The filtration of an algebraic knot}\label{alexanderfiltration}
An algebraic knot $K \subset Y(G)$ is a knot that can be described by a plumbing tree $\Gamma$ with one unframed vertex $v_0$ such that $G=\Gamma -v_0$. As in the analytic theory, the choice of such a knot $K$ induces a filtration on $\mathbb{C} \mathbb{F}_*(G, \mathfrak{s})$. We define the Alexander grading of a generator $\Box \in\mathcal{B}$ by
\[A(\Box)= w_{k+2v_0^*}(\Box)- w_k(\Box) + \frac{k \cdot F -F^2}{2} \ .\]
where $F \in H_2(X(G), \mathbb{Q})$ is a rational homology class representing the Poincar\' e dual\footnote{The class of $v_0$ makes no sense in $H_2(X(G); \mathbb{Z})$ since $v_0$ does not represent a closed surface in $X(G)$. On the other hand, $v_0$ represents a properly embedded disk $\Delta \subset X(G)$ and hence a class in $H_2(X(G), \partial X(G); \mathbb{Z})$. Thus we can consider its Poincar\' e dual $v_0^*\in \text{Hom}(H_2(X(G); \mathbb{Z}), \mathbb{Z})$. This is characterized by the property that $v_0^* \cdot v=1$ if $v_0v$ is an edge of $\Gamma$, and zero otherwise.} of $-v_0^*$, \textit{i.e.} such that $F\cdot v = -v_0^* \cdot v$ for each $v \in G$. We define the Alexander grading of a chain $\xi = \sum_{j=1}^m U^{m_j}\Box_j$ as the maximum of the Alexander gradings of its components: $A (\xi)= \max_j \left( A(\Box_j)-m_j \right)$. Note that multiplication by $U$ drops the Alexander grading by one.
\begin{prop} $A(\partial \Box)\leq A(\Box)$.
\end{prop}
\begin{proof}We have to prove that for a $p$-cell $\Box$ we have
\[A( U^{w_k(\Box)- w_k(\Box_i)} \cdot \Box_i ) \leq A(\Box)\]
for every $(p-1)$-face $\Box_i$ of $\Box$. Since multiplication by $U$ drops the Alexander grading by one everything boils down to prove that
\[ A( \Box_i ) - (w_k(\Box)- w_k (\Box_i)) \leq A(\Box) \ .\]
Substituting the value of the Alexander filtration, and cancelling the constant $(k \cdot F -F^2)/2$ on both sides, we get
\[ w_{k+2v_0^*}(\Box_i)-\cancel{ w_k(\Box_i) } - (\cancel{w_k(\Box)}- \cancel{ w_k (\Box_i)}) \leq w_{k+2v_0^*}(\Box)- \cancel{w_k(\Box )} \ . \]
On the other hand the inequality $w_{k+2v_0^*}(\Box_i) \leq w_{k+2v_0^*}(\Box)$ follows immediately from the definitions.
\end{proof}
\begin{rmk}
Exactly as in the analytic theory the group $\mathbb{C} \mathbb{F}(G, \mathfrak{s})$ has an algebraic filtration $j$. For a $q$-cell $\Box \subset M_l$ this is given by $j(\Box)= w_k(\Box)- l$.
\end{rmk}
\subsection{Proof of Theorem \ref{maingoal} for rational graphs} Recall that a vertex $v$ of a negative definite plumbing tree $G$ is said to be \textit{bad} if $\deg (v) >- v^2$. Using the number of bad vertices we can organise algebraic knots into complexity classes: we say that a knot $K\subset Y(G)$ is of \textit{type-$k$} if it can be represented by a plumbing diagram $\Gamma$ with underlying plumbing tree $G$ having no more than $k$ bad vertices.
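For instance, in the plumbing tree describing the double branched cover of the trefoil knot obtained above, the central $(-2)$-framed vertex has degree three in $G$, hence it is bad, while the remaining vertices are not: the lift of the trefoil is therefore a knot of type-$1$.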
We now prove that \eqref{charineq} is sharp in the special case of algebraic knots that can be represented by means of a plumbing diagram with no bad vertices.
\begin{prop} Suppose that $K$ is an algebraic knot of type-0 (no bad vertices) then
\begin{equation}\label{inspiring}
\Upsilon_{K, \mathfrak{s}}(t)= -2 \min_{x\in \mathbb{Z}^s} \chi_t(x)+\left( \frac{k^2+ |G|}{4} - t \ \frac{k\cdot F-F^2}{2} \right) \ , \end{equation}
where $\chi_t$ denotes the twisted Riemann-Roch function
\[\chi_t(x)=-\frac{1}{2}\left( (k+tv_0^*) \cdot x + x^2 \right)\ .\]
\end{prop}
\begin{proof} According to \cite{Lspaceknots}, $CFK^\infty(K, Y(G), \mathfrak{s})$ has the same filtered chain homotopy type as $(\mathbb{C} \mathbb{F}^-(G, \mathfrak{s}) \otimes_{\mathbb{F}[U]} \mathbb{F}[U, U^{-1}], \partial, A)$. Thus
\begin{equation}\label{pippo}
\Upsilon_{K, \mathfrak{s}}(t)= -2 \cdot \min_\xi \left\{ \frac{t}{2} A(\xi) + \left( 1- \frac{t}{2}\right) j(\xi) \right\}+d(Y(G), \mathfrak{s})
\end{equation}
where the minimum is taken over all cycles $\xi$ with Maslov grading $d= d(Y(G), \mathfrak{s})$ representing the generator of $H_d(\mathbb{C} \mathbb{F}^-(G, \mathfrak{s}) \otimes_{\mathbb{F}[U]} \mathbb{F}[U, U^{-1}])=\mathbb{F}$.
Since $H_p(\mathbb{R}^s, \mathbb{Z})=0$ for $p > 0$ any $p$-cycle $\xi \subset \mathbb{R}^s$ eventually bounds a $(p+1)$-chain in $M_{l}$. Hence the minimum in Equation~\eqref{pippo} can be taken over all cycles of the form $\xi=U^{-j} \cdot x$, with $x$ representing a vertex of the CW-decomposition of $\mathbb{R}^s$, and $j \in \mathbb{Z}$. Imposing $\text{gr}(U^{-j} \cdot x)=d$, and substituting into Equation \eqref{pippo}, we get
\begin{align*}
\Upsilon_{K, \mathfrak{s}}(t) &= -2 \cdot \min_{x \in \mathbb{Z}^s}\left\{
\frac{t}{2}\left(A(x)+ \frac{d-\text{gr}(x)}{2}\right) + \left(1-\frac{t}{2} \right) \left( \frac{d-\text{gr}(x)}{2}\right)\right\}+ d\\
&= -2 \cdot \min_{x \in \mathbb{Z}^s}\left\{ \frac{t}{2}\big(\chi_{k+2v_0^*}(x) -\chi_k(x) \big) + \chi_k(x) + t \ \frac{k\cdot F- F^2}{4} -\frac{k^2+ |G|}{8}\right\} \\
&= -2 \cdot \min_{x \in \mathbb{Z}^s}\left\{ -\frac{t}{2}v_0^* \cdot x + \chi_k(x)\right\} +\frac{k^2+ |G|}{4} - t \ \frac{k\cdot F- F^2}{2} \ ,
\end{align*}
whence the claimed identity follows.
\end{proof}
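\begin{exa}
As a sanity check, consider the unknot $O \subset S^3$ presented by the tree with a single $(-1)$-framed vertex $v$ and the unframed vertex $v_0$ attached to it; this presentation has no bad vertices. Here $F=v$, since $F \cdot v = -v_0^* \cdot v = -1 = v \cdot v$. Choosing the characteristic vector $k$ with $k(v)=-1$ we get $k^2=-1$ and $k \cdot F - F^2 = -1-(-1)=0$, while
\begin{equation*}
\chi_t(\ell v)= \frac{1}{2}\left( \ell^2 - (t-1) \ell \right) \ ,
\end{equation*}
whose minimum over $\mathbb{Z}$ equals $0$ for every $t \in [0,2]$. Equation \eqref{inspiring} then gives $\Upsilon_{O}(t)= 0 + (-1+1)/4 - 0 = 0$, as expected.
\end{exa}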
\subsection{Constructions of the groups}\label{construction}
Formula \eqref{inspiring} suggests that the upsilon invariant of an algebraic knot can be expressed as the correction term of suitable lattice groups.
Let $K \subset Y(G)$ be an algebraic knot ($G$ negative-definite) presented by a tree $\Gamma$ with one unframed vertex $v_0$. For $t \in [0,2]$ we twist the Riemann-Roch function by means of the real cohomology class $t v_0^* \in H^2(X(G), \mathbb{R})$ \[\chi_t(x)=-\frac{1}{2}\left( (k+tv_0^*) \cdot x + x^2 \right) \ . \]
Again we stress the fact that the homology class of the vertex $v_0$ does not exist while its Poincar\' e dual is always defined. With this said, we extend $\chi_t$ to $p$-cells ($0<p\leq s$) via the identity
\begin{equation}\label{levelfunction}
w_t(\Box)=(2-t) \cdot w_{k}(\Box)+t \cdot w_{k+2v_0^*}(\Box) \ .
\end{equation}
Note that $w_t$ has the key property that $w_t(\Box_i)\leq w_t(\Box)$ for every face $\Box_i$ of $\Box$. For each real parameter $l \in \mathbb{R}$ we can now consider the sub-level set $M_l= \bigcup_{w_t(\Box)\leq l}\Box$ and form the persistent homology module
\[\mathbb{C} \mathbb{F}_t(\Gamma,\mathfrak{s})=\prod_{l \in \mathbb{R}} C_{*}(M_l, \mathbb{F}) \ .\]
This has an $\mathcal{R}$-module structure: for $\alpha \geq 0$ we set $q^\alpha \cdot \Box= \iota_{ \#}^\alpha(\Box)$, where
$\iota^\alpha : M_l \hookrightarrow M_{l+\alpha}$ denotes the inclusion, and we extend it to an $\mathcal{R}$-action by linearity. (Note that the use of direct products in the definition of $\mathbb{C} \mathbb{F}_t$ is crucial.)
\subsection{Gradings in the deformations} As in the analytic setting we define a grading on $\mathbb{C} \mathbb{F}_t(\Gamma, \mathfrak{s})$ setting $\text{gr}_t(\Box)= \text{gr}(\Box)- t A(\Box)$.
\begin{prop} The differential of $\mathbb{C} \mathbb{F}_t(\Gamma, \mathfrak{s})$ drops the grading by one.
\end{prop}
\begin{proof}Using $\mathcal{B}=\{ \Box \in C_*(M_{w_k(\Box)},\mathbb{Z}) : \Box \ p\text{-face of } \mathbb{R}^s , \ 0 \leq p \leq s\}$ as $\mathbb{F}[U]$-basis, the differential $\partial_t$ of $\mathbb{C} \mathbb{F}_t(\Gamma,\mathfrak{s})$ can be written as
\begin{equation}
\partial_t \Box = \sum_{ i} q^{w_t(\Box)- w_t (\Box_i)} \cdot \Box_i \ ,
\end{equation}
where the sum is extended to all faces $\Box_i$ of $\Box$. For a $p$-cell $\Box$ one computes
\[\text{gr}_t(\Box)=\text{dim}(\Box) -w_t(\Box) + \frac{k^2+|G|}{4}-t \ \frac{k \cdot F- F^2}{2} \ .\]
Thus, for a component $c_i=q^{w_t(\Box)- w_t (\Box_i)} \cdot \Box_i$ of the differential $\partial_t \Box$ of a $p$-cell we have
\begin{align*}
\text{gr}_t(\Box)-\text{gr}_t(c_i)&= \text{gr}_t(\Box)- \text{gr}_t(\Box_i)- (w_t(\Box)- w_t(\Box_i))\\
&= (\text{dim}(\Box)- \cancel{w_t(\Box)})- (\text{dim}(\Box_i)- \cancel{w_t(\Box_i)})- \cancel{w_t(\Box)}+ \cancel{ w_t(\Box_i)}\\
&= \text{dim}(\Box)- \text{dim}(\Box_i)=1 \ .
\end{align*}
\vspace{-0.6cm}
\end{proof}
\subsection{Correction terms}\label{combcorterms}Here is a structure theorem for the module $t\mathbb{HFK}_*$.
\begin{lem}\label{shape}
The group $t\mathbb{HFK}_*(\Gamma, \mathfrak{s})$ is an $\mathcal{R}$-module of rank one. Furthermore, non-torsion elements are concentrated in lattice grading zero.
\end{lem}
\begin{proof}
Let $\xi \subset M_l$ be a $p$-cycle, for some $p>0$. Since $\mathbb{R}^s$ is contractible there exists $l'\geq l$ such that $\xi=\partial \beta$ in $M_{l'}$. Thus $q^{l'-l} \cdot [\xi]=0$, showing that all cycles of this type are torsion.
If $p=0$ on the other hand, write $\xi=v_1+ \dots+v_m$ as a sum of vertices. If $m$ is even then we can find $l'\geq l$ and a $1$-chain $\gamma \subset M_{l'}$ such that $\xi=\partial \gamma$. Again this proves that $q^{l'-l}[\xi]=0$, hence
that $[\xi]$ is an element of $\mathcal{R}$-torsion. If $m$ is odd on the other hand, given any other $0$-cycle $\xi'=v_1'+\dots +v_{2n+1}' \subset M_{l'}$ we can find a $1$-chain $\gamma \subset M_{l''}$ with $l''>\max\{l, l'\}$ such that $\xi-\xi'=\partial\gamma$. In this case $q^{l''-l} [\xi] + q^{l''-l'} [\xi']=0$, and we are done showing that the rank is one.
\end{proof}
As in the analytic theory we define
\[\Upsilon_{\Gamma, \mathfrak{s}}(t)=\max \{ \text{gr}_t(\xi) \ | \ \xi \in t\mathbb{HFK}(\Gamma, \mathfrak{s}) \text{ with } q^\alpha \cdot \xi \not=0 \text{ for } \alpha>0 \} \ .\]
The following proves that $\Upsilon_{\Gamma, \mathfrak{s}}(t)$ can be computed combinatorially as in Equation \eqref{inspiring}.
\begin{lem}\label{goal}
Let $k$ be a characteristic vector representing the $\text{Spin}^c$ structure $\mathfrak{s}$. Then
\[\Upsilon_{\Gamma, \mathfrak{s}}(t)=-2 \min_{x\in \mathbb{Z}^s} \chi_t(x)+\left( \frac{k^2+ |G|}{4}- t \ \frac{k\cdot F- F^2}{2} \right) \ . \]
\end{lem}
\begin{proof}As a consequence of Lemma \ref{shape} we have that
\begin{align*}\Upsilon_{\Gamma, \mathfrak{s}}(t)&= \max_{x\in \mathbb{Z}^s} \text{gr}_t(x)\\
&=-\min_{x\in \mathbb{Z}^s}\Big\{(2-t)\chi_k(x)+t\chi_{k+2v_0^*}(x)\Big\} + \left( \frac{k^2+ |G|}{4}- t \ \frac{k\cdot F- F^2}{2} \right)\\
&=-2\min_{x\in \mathbb{Z}^s}\chi_t(x) + \left( \frac{k^2+ |G|}{4}- t \ \frac{k\cdot F- F^2}{2} \right)\ ,
\end{align*}
and we are done.
\end{proof}
\section{The surgery exact triangle of the $t$-modified knot homologies}\label{sectionanalytictriangle}
Let $\Sigma$ be a genus $g$ Riemann surface, and let $\boldsymbol{\eta}^1, \dots, \boldsymbol{\eta}^k$ be collections of compressing circles for some genus $g$ solid handlebodies $U_{\boldsymbol{\eta}_1}, \dots , U_{\boldsymbol{\eta}_k}$ with boundary $\Sigma$. For $i=1, \dots, k$ let $\mathbb{T}_i=\eta_i^1\times \dots \times \eta_i^g \subset \text{Sym}^g(\Sigma)$ denote the Lagrangian torus associated to $\boldsymbol{\eta}^i=\{\eta_i^1, \dots , \eta_i^g\}$. Without loss of generality we can assume that the various $\eta$-curves intersect transversely, hence $\mathbb{T}_i \pitchfork \mathbb{T}_j$ for $i\not=j$. Given intersection points $\mathbf{x}_i \in \mathbb{T}_{i} \cap \mathbb{T}_{i+1} $ for $i=1, \dots, k-1$ and $\mathbf{y} \in \mathbb{T}_1 \cap \mathbb{T}_k$, denote by $\pi_2(\mathbf{x}_1, \dots, \mathbf{x}_{k-1}, \mathbf{y})$ the set of homotopy classes of continuous maps $u: D^2 \to \text{Sym}^g(\Sigma)$ with domain the complex unit disk $D^2$ with $k$ marked points $z_0, z_1, \dots, z_{k-1} \in \partial D^2$ (lying in this order on the unit circle) such that
\begin{itemize}
\item $u(z_i)=\mathbf{x}_i$ for $i=1, \dots , k-1$ and $u(z_0)=\mathbf{y}$,
\item $u(a_i) \subset \mathbb{T}_{i+1}$ where $a_i \subset \partial D^2$ denotes the boundary arc between $z_i$ and $z_{i+1}$ for $i=0, \dots ,k-1$ (indices mod $k$).
\end{itemize}
We will be interested in the moduli spaces $\mathcal{M}(P)$ of pseudo-holomorphic representatives of a given homotopy class $P \in \pi_2(\mathbf{x}_1, \dots, \mathbf{x}_{k-1}, \mathbf{y})$, \text{i.e.} maps $u: D^2 \to \text{Sym}^g(\Sigma)$ in $P$ solving the Cauchy-Riemann equation on the interior of the unit disk. Note that for $k \geq 4$ the sources of these maps themselves have moduli: if $\mathcal{M}_{0,k}$ denotes the moduli space of disks with $k$ punctures on the boundary then $\text{dim} \mathcal{M}_{0,k}=k-3$. As in the case of pseudo-holomorphic strips discussed in Section \ref{analytictheory}, associated to a homotopy class $P \in \pi_2(\mathbf{x}_1, \dots, \mathbf{x}_{k-1}, \mathbf{y})$ there is a Maslov index $\mu(P) \in \mathbb{Z}$. For a generic choice of perturbations of the Cauchy-Riemann equation, the moduli space $\mathcal{M}(P)$ forms a smooth finite-dimensional manifold of dimension
\[\text{dim} \mathcal{M}(P)= \mu(P)+ \text{dim} \mathcal{M}_{0,k}=\mu(P) + k-3 \ .\]
We define maps $ f_{\boldsymbol{\eta}_1, \dots ,\boldsymbol{\eta}_k}: CF_t(\mathbb{T}_{k-1}, \mathbb{T}_k) \otimes_\mathcal{R} \dots \otimes_\mathcal{R} CF_t(\mathbb{T}_{1}, \mathbb{T}_2) \to CF_t(\mathbb{T}_1, \mathbb{T}_k)$ by counting pseudo-holomorphic $k$-gons
\[f_{\boldsymbol{\eta}_1, \dots ,\boldsymbol{\eta}_k}(\mathbf{x}_{k-1} \otimes \dots \otimes \mathbf{x}_1)= \sum_{\mathbf{y} \in \mathbb{T}_1 \cap \mathbb{T}_k} \sum_{\mu(P)=3-k} \# \mathcal{M}(P) \ q^{tn_z(P)+ (2-t)n_w(P)} \cdot \mathbf{y} \ .\]
An inspection of the ends of the one-dimensional moduli spaces of pseudo-holomorphic $k$-gons, those with Maslov index $\mu=4-k$, shows that the maps $f_{\boldsymbol{\eta}_1, \dots ,\boldsymbol{\eta}_k}$ satisfy the so-called $A_\infty$-relations:
\[\sum_{1\leq i < j \leq k}f_{\boldsymbol{\eta}_1, \dots, \boldsymbol{\eta}_i, \boldsymbol{\eta}_j, \dots ,\boldsymbol{\eta}_k}(\mathbf{x}_1\otimes \dots \otimes\mathbf{x}_{i-1}\otimes f_{\boldsymbol{\eta}_i, \dots ,\boldsymbol{\eta}_j}(\mathbf{x}_{i}\otimes \dots \otimes \mathbf{x}_{j-1})\otimes \mathbf{x}_{j}\otimes \dots \otimes \mathbf{x}_{k-1} )=0 \ . \]
We will be interested in these relations for low values of $k$. If we set $\mathbf{x} \cdot \mathbf{y}=f_{\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \boldsymbol{\eta}_3}(\mathbf{x}\otimes \mathbf{y} )$ then the $A_\infty$-relations for $k=3$ read as
\begin{equation}
\partial_t (\mathbf{x}\cdot \mathbf{y})=\partial_t \mathbf{x}\cdot \mathbf{y} +\mathbf{x}\cdot \partial_t\mathbf{y} \ ,
\end{equation}
proving that $\mathbf{x} \cdot \mathbf{y}$ satisfies the Leibniz rule.
Note that this product operation is not associative. On the other hand, for $k=4$ the $A_\infty$-relations say that associativity holds up to homotopy. More precisely we have that:
\begin{equation}
(\mathbf{x} \cdot \mathbf{y}) \cdot\boldsymbol{z}+ \mathbf{x} \cdot (\mathbf{y} \cdot \boldsymbol{z})= \partial_t f_{\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \boldsymbol{\eta}_3, \boldsymbol{\eta}_4}(\mathbf{x} \otimes \mathbf{y} \otimes \boldsymbol{z})+f_{\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \boldsymbol{\eta}_3, \boldsymbol{\eta}_4}(\partial_t(\mathbf{x} \otimes \mathbf{y} \otimes \boldsymbol{z})) \ .
\end{equation}
Suppose now that $K \subset Y$ is a knot in a rational homology sphere and that $C \subset Y\setminus K$ is a framed loop in its complement. We wish to establish an exact triangle of the form
\begin{equation}\label{exactanalytic}
\begin{tikzcd}[column sep=small]
\text{tHFK}^-(Y,K) \arrow{rr} & & \text{tHFK}^-(Y_\lambda(C), K) \arrow{ld} \\
& \text{tHFK}^-(Y_{\lambda+\mu}(C), K) \arrow{lu} &
\end{tikzcd}
\end{equation}
where $\lambda$ denotes the chosen longitude of $C$, and $\mu$ a meridian. To this end we model the triple $(Y, Y_\lambda(C), Y_{\lambda+\mu}(C))$ by means of four collections of compressing circles $\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}$ and $\boldsymbol{\delta}$ on a genus $g$ Riemann surface $\Sigma$. More precisely we choose:
\begin{itemize}
\item the $\alpha$- and the $\beta$-curves so that $(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\beta})$ forms a Heegaard diagram of $Y$,
\item the first $\beta$-curve $\beta_1$ to be a meridian $\mu$ of $C$, the first $\gamma$-curve $\gamma_1$ to be the longitude $\lambda$, and the first of the $\delta$-curves to be a curve of type $\lambda+\mu$,
\item the last $g-1$ $\gamma$- and $\delta$-curves to be small Hamiltonian translates of the corresponding last $g-1$ $\beta$-curves.
\end{itemize}
Note that the base points $z$ and $w$ can be chosen so that $\mathcal{H}_{\boldsymbol{\alpha}, \boldsymbol{\beta}}=(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\beta}, z, w)$ represents $(Y, K)$, $\mathcal{H}_{\boldsymbol{\alpha}, \boldsymbol{\gamma}}=(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\gamma}, z, w)$ represents $(Y_\lambda(C), K)$, and $\mathcal{H}_{\boldsymbol{\alpha}, \boldsymbol{\delta}}=(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\delta}, z, w)$ represents $(Y_{\lambda+\mu}(C), K)$. Notice that we can assume that $\beta_2$ is a meridian of $K$, that the two base points $z$ and $w$ lie near the two sides of $\beta_2$, and that the Hamiltonian isotopies sending $\{\beta_2, \dots, \beta_g\}$ onto $\{\gamma_2, \dots, \gamma_g\}$ and $\{\delta_2, \dots, \delta_g\}$ do not cross the two basepoints.
We now introduce a triangle of maps
\begin{equation}
\begin{tikzcd}[column sep=small]
CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta) \arrow{rr}{F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}} & & CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma) \arrow{ld}{F_{\boldsymbol{\gamma} , \boldsymbol{\delta}}} \\
& CF_t(\mathbb{T}_\alpha, \mathbb{T}_\delta) \arrow{lu}{F_{\boldsymbol{\delta} , \boldsymbol{\beta}}} &
\end{tikzcd}
\end{equation}
inducing in homology the maps appearing in \eqref{exactanalytic}. First we observe that, since the two basepoints $z$ and $w$ lie on the same connected component of $\Sigma \setminus \boldsymbol{\beta} \cup \boldsymbol{\gamma}$, we have an identification $H_*(CF_t(\mathbb{T}_\beta, \mathbb{T}_\gamma))=\Lambda^*H_1(T^{g-1}) \otimes \mathcal{R}$. In fact, the same equality holds for $H_*(CF_t(\mathbb{T}_\gamma, \mathbb{T}_\delta))$ and $H_*(CF_t(\mathbb{T}_\delta, \mathbb{T}_\beta))$. Denote by $\Theta_{\beta, \gamma}$, $\Theta_{\gamma, \delta}$ and $\Theta_{\delta,\beta}$ the cycles descending to the top-dimensional generator of $\Lambda^*H_1(T^{g-1}) \otimes \mathcal{R}$ in $CF_t(\mathbb{T}_\beta, \mathbb{T}_\gamma)$, $CF_t(\mathbb{T}_\gamma, \mathbb{T}_\delta)$, and $CF_t(\mathbb{T}_\delta, \mathbb{T}_\beta)$ respectively. We define $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma)$ by $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}(x)=x \cdot \Theta_{\beta, \gamma}$. Note that $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}$ is a chain map:
\[\partial_t F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}(x)=\partial_t(x \cdot \Theta_{\beta, \gamma})=
\partial_t x \cdot \Theta_{\beta, \gamma} + \cancel{x \cdot \partial_t \Theta_{\beta, \gamma}}= \partial_t x \cdot \Theta_{\beta, \gamma}= F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}(\partial_t x) \ . \]
Analogously we define chain maps $F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\delta)$ and $F_{\boldsymbol{\delta}, \boldsymbol{\beta}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\delta) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ using the top-dimensional generators $\Theta_{\gamma, \delta}$ and $\Theta_{\delta,\beta}$.
We now consider the triangle of maps induced in homology by $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}},F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}$ and $F_{\boldsymbol{\delta}, \boldsymbol{\beta}}$. We would like to show that the composition of two consecutive maps is zero. To this end define $H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\delta)$ by counting pseudo-holomorphic rectangles: $H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(x)=f_{ \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}( x \otimes \Theta_{\beta, \gamma}\otimes \Theta_{ \gamma, \delta})$. In this case the $A_\infty$-relations prescribe the identity
\begin{align*}
F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}\circ F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}(x)&= (x \cdot \Theta_{\beta, \gamma}) \cdot \Theta_{\gamma, \delta} \\
&=x \cdot (\Theta_{\beta, \gamma} \cdot \Theta_{\gamma, \delta}) \\
&+\partial_t( f_{ \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(x \otimes \Theta_{\beta, \gamma} \otimes \Theta_{\gamma, \delta})) + f_{ \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(\partial_t x \otimes \Theta_{\beta, \gamma} \otimes \Theta_{\gamma, \delta}) \\
&+\cancel{f_{ \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(x \otimes \partial_t \Theta_{\beta, \gamma}\otimes \Theta_{\gamma, \delta})}+\cancel{f_{ \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(x \otimes \Theta_{\beta, \gamma} \otimes \partial_t \Theta_{\gamma, \delta})} \\
&=x \cdot (\Theta_{\beta, \gamma} \cdot \Theta_{\gamma, \delta}) +\partial_tH_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(x) + H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}(\partial_t x) \ .
\end{align*}
On the other hand, $\Theta_{\beta, \gamma} \cdot \Theta_{\gamma, \delta}=0$ by the very same neck-stretching argument of \cite{}. Hence,
\[F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}\circ F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}= \partial_t \circ H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}} + H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}\circ \partial_t \ , \]
showing that the composition $F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}\circ F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}$ is null-homotopic via the map $H_{ \boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}}$.
Similarly one defines chain homotopies $H_{\boldsymbol{\gamma},\boldsymbol{\delta}, \boldsymbol{\beta}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)$ and $H_{\boldsymbol{\delta},\boldsymbol{\beta}, \boldsymbol{\gamma}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\delta) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma)$ for $F_{\boldsymbol{\delta}, \boldsymbol{\beta}} \circ F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}$, and $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}} \circ F_{\boldsymbol{\delta}, \boldsymbol{\beta}}$.
Finally one would like to show that the triangle of maps induced by $F_{\boldsymbol{\beta}, \boldsymbol{\gamma}}, F_{\boldsymbol{\gamma}, \boldsymbol{\delta}}$ and $F_{\boldsymbol{\delta}, \boldsymbol{\beta}}$
has trivial homology (exactness). This will be based on the following algebraic lemma.
\begin{lem}\label{algebra}Let $\{A_i\}_{i\in\mathbb{Z}}$ be a collection of chain complexes, and let $\{f_i:A_i\to A_{i+1}\}_{i\in \mathbb{Z}}$ be a collection of chain maps satisfying the following two properties:
\begin{enumerate}
\item\label{condition} $f_{i+1}\circ f_i$ is chain homotopically trivial via a chain homotopy $H_i:A_i \to A_{i+1}$,
\item the map $\psi_i=f_{i+2}\circ H_i+H_{i+1}\circ f_i$ is a quasi-isomorphism.
\end{enumerate}
Then $H_*(\text{Cone}(f_i))\simeq H_*(A_{i+2})$, where $\text{Cone}(f_i)$ denotes the mapping cone of $f_i$.
\end{lem}
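Although we omit the proof of Lemma \ref{algebra}, we recall for the reader's convenience where the identification comes from: a direct check, using condition \eqref{condition} and the fact that we work with $\mathbb{F}$-coefficients, shows that the assignment
\[\text{Cone}(f_i) \ni (a_i, a_{i+1}) \longmapsto H_i(a_i)+f_{i+1}(a_{i+1}) \in A_{i+2}\]
defines a chain map, and the second condition guarantees that it induces an isomorphism in homology.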
\begin{proof}[Proof of Theorem \ref{exactanalyticthm}]
We proceed in analogy with the proof of the exact triangle
\begin{equation} \label{exacthat}
\begin{tikzcd}[column sep=small]
\widehat{\text{HFK}}(Y, K) \arrow{rr} & & \widehat{\text{HFK}}(Y_\lambda(C), K) \arrow{ld} \\
& \widehat{\text{HFK}}( Y_{\lambda+\mu}(C), K) \arrow{lu} &
\end{tikzcd}
\end{equation}
established by Ozsv\' ath and Szab\' o \cite[Theorem 8.2]{OS7}. See also \cite[Section 2]{OS6}.
To meet precisely the hypotheses of Lemma \ref{algebra} we choose a sequence of Hamiltonian translates $\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i)}$ and $\boldsymbol{\delta}^{(i)}$ of the $\beta$-curves, the $\gamma$-curves, and the $\delta$-curves. Set $A_{3i}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\beta}^{(i)}})$, $A_{3i+1}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\gamma}^{(i)}})$, and $A_{3i+2}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\delta}^{(i)}})$, and note that there are obvious identifications $A_{3i}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\beta}})$, $A_{3i+1}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\gamma}})$, and $A_{3i+2}=CF_t(\mathbb{T}_{\boldsymbol{\alpha}},\mathbb{T}_{\boldsymbol{\delta}})$. Furthermore, we define $f_{3i}=F_{\boldsymbol{\beta}^{(i)},\boldsymbol{\gamma}^{(i)}}$, $f_{3i+1}=F_{\boldsymbol{\gamma}^{(i)},\boldsymbol{\delta}^{(i)}}$, and $f_{3i+2}=F_{\boldsymbol{\delta}^{(i)},\boldsymbol{\beta}^{(i)}}$. Similarly, we take $H_{3i}=H_{ \boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}}$, $H_{3i+1}=H_{\boldsymbol{\gamma}^{(i)},\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}}$ and $H_{3i+2}=H_{\boldsymbol{\delta}^{(i)},\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i)}}$ so that condition \eqref{condition} of Lemma \ref{algebra} is met.
Writing down the $A_\infty$-relations for $k=5$ (the one coming from the count of pseudo-holomorphic pentagons) we get that
\begin{align*}
0&= f_{i+2}(H_i(x))+H_{i+1}( f_i(x))\\
&+ f_{\boldsymbol{\alpha},\boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}(x \otimes \Theta_{\boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}} \otimes \cancel{\Theta_{\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}} \cdot \Theta_{\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}} )\\
&+ f_{\boldsymbol{\alpha},\boldsymbol{\gamma}^{(i)}, \boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}(x \otimes \cancel{\Theta_{\boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}} \cdot \Theta_{\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}}} \otimes \Theta_{\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}})\\
&+x \cdot H_{\boldsymbol{\gamma}^{(i)},\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}
(\Theta_{\boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}} \otimes \Theta_{\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}} \otimes \Theta_{\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}) \ .
\end{align*}
Hence, in order to show that the exact triangle holds, we must show that the map
\[\psi_i: x \mapsto x \cdot H_{\boldsymbol{\gamma}^{(i)},\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}
(\Theta_{\boldsymbol{\gamma}^{(i)}, \boldsymbol{\delta}^{(i)}} \otimes \Theta_{\boldsymbol{\delta}^{(i)}, \boldsymbol{\beta}^{(i)}} \otimes \Theta_{\boldsymbol{\beta}^{(i)}, \boldsymbol{\gamma}^{(i+1)}}) \]
induces an isomorphism in homology. To this end we observe that specialising $q=0$ in the chain complex $CF_t$ we get the hat version of knot Floer homology $\widehat{CFK}$ (with the real-valued $\text{gr}_t$-grading instead of the usual bi-grading), and that $\psi_i$ is a quasi-isomorphism provided that $\widehat{\psi}_i$ (its restriction to $\widehat{CFK}$) is a quasi-isomorphism. On the other hand, the map $\widehat{\psi}_i$ was shown to be a quasi-isomorphism in Ozsv\' ath and Szab\' o's proof of the exact triangle \cite[Theorem 8.2]{OS7}.
\end{proof}
\section{A surgery exact sequence for deformations of lattice cohomology}\label{combinatorialexactsequence}
Let $\Gamma$ be a plumbing graph with one unframed vertex $v_0$. Let $G=\Gamma-v_0$ and let $v\in G$ be a vertex that is \textit{not} directly connected to $v_0$. We denote by $G_{+1}(v)$ the graph obtained from $G$ by increasing the weight of $v$ by one, and by $G'(v)$ the graph obtained from $G$ by adding a $(-1)$-framed vertex $e$ connected to $v$. Let $\Gamma_{+1}(v)$ and $\Gamma'(v)$ be the graphs obtained similarly from $\Gamma$. These, together with $\Gamma-v$, represent knots in $Y(G_{+1}(v))$, $Y(G'(v))$, and $Y(G-v)$ respectively.
\begin{thm}\label{latticeexact}There is a long exact sequence of $\mathcal{R}$-modules
\[ \xymatrix{
\ar[r] & t\mathbb{HFK}_p(\Gamma-v) \ar[r]^{\ \ \ \phi_* } & t\mathbb{HFK}_p(\Gamma) \ar[r]^{\psi_* \ \ \ } & t\mathbb{HFK}_p(\Gamma_{+1}(v)) \ar[r]^{\ \ \delta} & t\mathbb{HFK}_{p-1}(\Gamma) \ar[r] &} \ .\]
\end{thm}
The exact sequence of Theorem \ref{latticeexact} is obtained by applying the Snake Lemma to a short exact sequence
\begin{equation}\label{shortexact}
\xymatrix{
0 \ar[r] &\mathbb{CF}_t(\Gamma-v) \ar[r]^{ \ \ \ A_t } & \mathbb{CF}_t(\Gamma) \ar[r]^{B_t \ \ \ } & \mathbb{CF}_t(\Gamma_{+1}(v)) \ar[r] & 0}
\end{equation}
preserving the lattice grading. To describe the maps $A_t$ and $B_t$ appearing in \eqref{shortexact} we resort to some convenient notation introduced by Ozsv\' ath, Stipsicz, and Szab\' o \cite{OSS1}. This has the advantage that no choice of ground characteristic vector is needed in the definition of the differential. Passing from one notation to the other consists in choosing an origin for the affine space of characteristic vectors associated to the plumbing.
Let $G$ be a negative-definite plumbing tree. Let $v_1, \dots, v_s$ denote the vertices of $G$. For every subset $I\subseteq \{v_1, \dots, v_s\}$, and $K\in H^2(X(G); \mathbb{Z})$ characteristic, define
\[2 f_G(K,I)= \sum_{v\in I} K(v) +\left( \sum_{v\in I} v \right) \cdot \left( \sum_{v\in I} v \right) = K\cdot I + I^2 \ ,\]
and, for a pair $[K, E]$ with $E\subseteq \{v_1, \dots, v_s\}$, take
\[g_G[K, E]= \min_{I \subseteq E} f_G(K,I) \ .\]
Then $\mathbb{C} \mathbb{F}^-(G)$ can be identified with the free $\mathbb{F}[U]$-module formally generated by all such pairs $[K, E]$ with differential
\[\partial[K,E]= \sum_{v \in E } U^{a_v[K,E]}\otimes [K, E- v] + \sum_{v \in E } U^{b_v[K,E]} \otimes [K+2v^*, E- v] \ , \]
where:
\begin{align*}
a_v[K,E]&= g[K,E-v]-g[K,E] \ , \\
b_v[K,E]&= g[K+2v^*,E-v]-g[K,E]+ \frac{K\cdot v+v^2}{2} \ .
\end{align*}
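As a toy example, suppose that $G$ consists of a single vertex $v$, let $K$ be a characteristic vector, and set $2n=K(v)+v^2$. Then $g_G[K,\emptyset]=0$ and $g_G[K,\{v\}]=\min(0,n)$, so that
\[a_v[K,\{v\}]=\max(0,-n) \ , \ \ \ b_v[K,\{v\}]=n-\min(0,n)=\max(0,n) \ ;\]
in particular, at most one of the two exponents is non-zero.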
If $\Gamma$ is a negative-definite plumbing diagram with one unframed vertex $v_0$ representing a knot in $Y(G)$, $G=\Gamma\setminus\{v_0\}$, then the differential of $\mathbb{C} \mathbb{F}_t(\Gamma)=\mathbb{C} \mathbb{F}(G) \otimes \mathcal{R}$ is given by
\[\partial_t[K,E]= \sum_{v \in E } q^{a_v(t)}\otimes [K, E- v] + \sum_{v \in E } q^{b_v(t)} \otimes [K+2v^*, E- v] \ , \]
where $a_v(t)= (2-t)a_v[K,E]+t a_v[K+2v_0^*, E]$, and similarly $b_v(t)= (2-t)b_v[K,E]+t b_v[K+2v_0^*, E]$. In this context the $\text{gr}_t$-grading is given by
\[\text{gr}_t[K,E]={}^tg_G[K,E]+|E|+\frac{K^2+|G|}{4}-t \ \frac{K\cdot F-F^2}{2} \ ,\]
where we set ${}^tg_G[K,E]= (2-t)g_G[K,E]+tg_G[K+2v_0^*,E]$.
Let now $v$ be a vertex of $G$. We define $\psi_v: \mathbb{CF}_t(\Gamma-v) \to \mathbb{CF}_t(\Gamma)$ by
\[\psi_v[K,E] =\sum_{p \equiv v^2 (mod \ 2)}[K, p, E]\]
where $[K, p, E]$ stands for the generator of $\mathbb{CF}_t(\Gamma)$ associated to the subset $E$, and the characteristic vector $(K,p) \in \text{Char}(G)$ extending $K \in \text{Char}(G-v)$ by $K(v)=p$. (The fact $p \equiv v^2 \ \text{mod } 2$ ensures that $(K,p)$ is actually characteristic.)
\begin{prop} $\partial_t \circ \psi_v= \psi_v \circ \partial_t$.
\end{prop}
\begin{proof} One computes
\begin{align*}
\psi_v\partial_t[K,E]&= \sum_{ w \in E} q^{a_w(t)}\otimes \psi_v[K, E-w] + \sum_{ w \in E} q^{b_w(t)}\otimes\psi_v[K+2w^*, E-w]\\
&=\sum_{p \equiv v^2 (mod \ 2),\ w \in E} q^{a_w(t)}\otimes[K, p, E-w] + \sum_{p \equiv v^2 (mod \ 2),\ w \in E} q^{b_w(t)}\otimes[K+2w^*, p, E-w] \ ,
\end{align*}
where $a_w(t)= (2-t)a_w[K,E]+t a_w[K+2v_0^*, E]$, and $b_w(t)= (2-t)b_w[K,E]+t b_w[K+2v_0^*, E]$. On the other hand,
\begin{align*}
\partial_t \psi_v[K,E]&= \sum_{p \equiv v^2 (mod \ 2)} \partial_t[K, p, E] \\
&=\sum_{p \equiv v^2 (mod \ 2),\ w \in E} q^{a'_w(t)}\otimes[K, p, E-w] + \sum_{p \equiv v^2 (mod \ 2),\ w \in E} q^{b'_w(t)}\otimes[(K, p)+2w^*, E-w] \ ,
\end{align*}
where $a'_w(t)= (2-t)a_w[K,p,E]+t a_w[K+2v_0^*,p, E]$, and $b'_w(t)= (2-t)b_w[K,p,E]+t b_w[K+2v_0^*,p, E]$. Since $(K,p)+2w^*=(K+2w^*,\, p+2\,w\cdot v)$ and both sums range over all $p \equiv v^2 \ (\text{mod } 2)$, after re-indexing the second sum the claim boils down to the following identities
\[
\begin{cases}
a_w[K,E]&= a_w[K,p,E]\\
b_w[K,E]&= b_w[K,p,E]\\
\end{cases} \ \ \
\text{ and } \ \ \
\begin{cases}
a_w[K+2v_0^*,E]&= a_w[K+2v_0^*,p,E]\\
b_w[K+2v_0^*,E]&= b_w[K+2v_0^*,p,E]\\
\end{cases}
\]
These follow from the fact that
\[f_{G-v}[K,E]=f_G[K,p,E]\ \ \
\text{ and } \ \ \ f_{G-v}[K+2v_0^*,E]=f_G[K+2v_0^*,p,E]\]
when the vertex $v$ does not belong to the vertex set $E$ (which is automatic here, since $E$ consists of vertices of $G-v$).
\end{proof}
Thus, in \eqref{shortexact} we can take $A_t=\psi_v$. Recall that the graph $\Gamma'(v)$ is obtained from $\Gamma$ by adding a $(-1)$-framed vertex $e$. We define $B_t:\mathbb{CF}_t(\Gamma) \to \mathbb{CF}_t(\Gamma_{+1}(v))$ as the composition of the map $\psi_e:\mathbb{CF}_t(\Gamma) \to \mathbb{CF}_t(\Gamma'(v))$ with the map $P_t:\mathbb{CF}_t(\Gamma'(v)) \to \mathbb{CF}_t(\Gamma_{+1}(v))$ defined as follows.
Let $[K, p, 2m-1, E]$ be the generator of $\mathbb{CF}_t(\Gamma'(v))$ associated to the vertex set $E$, and the characteristic vector $(K, p, 2m-1)\in \text{Char}(G'(v))$ extending $K \in \text{Char}(G-v)$ to $G'(v)$ so that $K(v)=p$, and $K(e)=2m-1$. Then we define:
\[P_t[K,p,2m-1,E]=
\begin{cases}
q^{s_m(t)} \otimes [K, p+2m-1,E] & \text{if } e \not\in E\\
0 & \text{otherwise}
\end{cases} \ ,\]
where
\[s_m(t)={}^tg_{G_{+1}(v)}[K, p+2m-1, E]-{}^tg_{G'(v)}[K, p, 2m-1, E]+m(m-1) \ .\]
\begin{prop}$P_t$ is well-defined.
\end{prop}
\begin{proof}
We must show that $s_m(t)\geq0$. Indeed, $s_m(t)\geq0$ iff both
\begin{equation}\label{A}
g_{G_{+1}(v)}[K, p+2m-1, E]-g_{G'(v)}[K, p, 2m-1, E]\geq -\frac{m(m-1)}{2} \ ,
\end{equation}
and
\begin{equation}\label{B}
g_{G_{+1}(v)}[K+2v_0^*, p+2m-1, E]-g_{G'(v)}[K+2v_0^*, p, 2m-1, E]\geq -\frac{m(m-1)}{2} \ .
\end{equation}
There are two cases. If $v\not\in E$ then for every $I \subseteq E$ we have identities:
\begin{align*}
&f_{G_{+1}(v)}[K, p+2m-1, I]=f_{G'(v)}[K, p, 2m-1, I] \ , \\
&f_{G_{+1}(v)}[K+2v_0^*, p+2m-1, I]=f_{G'(v)}[K+2v_0^*, p, 2m-1, I] \ .
\end{align*}
Hence $s_m(t)=m(m-1)\geq 0$. If on the other hand $v\in E$, then for every $I \subseteq E$ containing $v$ we have that
\begin{align*}
&f_{G_{+1}(v)}[K, p+2m-1, I]-f_{G'(v)}[K, p, 2m-1, I] = m \ , \\
&f_{G_{+1}(v)}[K+2v_0^*, p+2m-1, I]-f_{G'(v)}[K+2v_0^*, p, 2m-1, I]=m \ ,
\end{align*}
while for $I$ not containing $v$ the two sides agree. Taking minima, both differences of $g$'s in \eqref{A} and \eqref{B} are bounded below by $\min(0,m)$, so that $s_m(t)\geq 2\min(0,m)+m(m-1)\geq0$ also in this case.
\end{proof}
\begin{lem} \label{shortexactlemma} The sequence
\begin{equation}\label{shortexact}
\xymatrix{
0 \ar[r] &\mathbb{CF}_t(\Gamma-v) \ar[r]^{ \ \ \ A_t } & \mathbb{CF}_t(\Gamma) \ar[r]^{B_t \ \ \ } & \mathbb{CF}_t(\Gamma_{+1}(v)) \ar[r] & 0}
\end{equation}
defined as above is short exact.
\end{lem}
\begin{proof}
One computes
\begin{equation}\label{pluto}
B_t\circ A_t \ [K,E]=\sum_{m= -\infty}^{+\infty} \ \sum_{p \equiv v^2 (mod \ 2)} q^{m(m-1)} \otimes [K,p+2m-1,E]
\end{equation}
where $s_m(t)\equiv m(m-1)$ since ${}^tg_{G_{+1}(v)}[K, p+2m-1, E]={}^tg_{G'(v)}[K, p, 2m-1, E]$ in the eventuality that $v$ is not in $E$. Moreover the sum on the right hand side of \eqref{pluto} vanishes since the term corresponding to the parameters $(p,m)$ cancels in pair with the one with parameters $(p+4m-2,-m+1)$. (Here we use the fact that we are working with $\mathbb{F}$-coefficients.) Thus, $B_t\circ A_t=0$, proving that the sequence in \eqref{shortexact} forms a five-term chain complex. Denote by $H^-_\Delta(G,v)$ its homology.
If we prove that $H^-_\Delta(G,v)=0$ we are done. A direct proof of this vanishing result can be lengthy and somewhat confusing, so we will argue along another line. First we note that plugging in $q=0$ in \eqref{shortexact} we get a ``time independent'' chain complex over the base field $\mathbb{F}$
\begin{equation}\label{paperino}
\xymatrix{
0 \ar[r] &\widehat{\mathbb{CF}}_t(\Gamma-v) \ar[r]^{ \ \ \ \widehat{A}_t } & \widehat{\mathbb{CF}}_t(\Gamma) \ar[r]^{\widehat{B}_t \ \ \ } & \widehat{\mathbb{CF}}_t(\Gamma_{+1}(v)) \ar[r] & 0}
\end{equation}
This can be verified to be a short exact sequence directly, adapting the computations of \cite[Proposition 6.6]{OSS1}.
Thus the five-term chain complex in \eqref{paperino} is acyclic. If $\widehat{H}_\Delta(G,v)$ denotes its homology, then we have that $H^-_\Delta(G,v)\otimes_\mathcal{R} \mathbb{F}= \widehat{H}_\Delta(G,v)=0$ by the Universal Coefficients Theorem. (A long power series $f=\sum_{\alpha \in A}q^\alpha \in \mathcal{R}$ acts on $x\in \mathbb{F}$ by multiplication by its constant term $f(0)$.) Based on this we can argue for the vanishing of $H^-_\Delta(G,v)$ as follows: suppose by contradiction that $H^-_\Delta(G,v)\not=0$; then a non-trivial element $x \in H^-_\Delta(G,v)$ generates a graded $\mathcal{R}$-module $M \subset H^-_\Delta(G,v)$. Because of the special nature of the ring $\mathcal{R}$, the module $M$ can be decomposed as a sum of cyclic modules of the form $\mathcal{R}/q^\alpha$ \cite[Lemma 5.3]{OSS4}, whence the contradiction, since $\mathcal{R}/q^\alpha \otimes_\mathcal{R} \mathbb{F}=\mathbb{F}$.
To see this note that $\mathcal{R}/q^\alpha$ is the vector space of long power series $f=\sum_{\gamma \in \Omega} q^\gamma$ with $\gamma\leq \alpha$. These can be divided into two equivalence classes: those such that $f(0)=0$ and those such that $f(0)=1$. If $f=\sum_{\gamma \in \Omega} q^\gamma$ is of the first kind then
\[f\otimes 1 = 1\otimes f(0) \cdot 1= 1\otimes 0=0 \ , \]
otherwise $f\otimes 1 = 1\otimes f(0) \cdot 1= 1\otimes 1$.
\end{proof}
\begin{rem}
Note the perfect analogy between the argument presented in the proof of Lemma \ref{shortexactlemma} and the one for the exact triangle in the analytic theory. A similar argument appeared in \cite[Theorem 6.4]{OSS1}.
\end{rem}
\section{First consequences of the exact triangle}
\subsection{Floer simple knots} We briefly explore some immediate consequences of the exact triangle. First recall that given a knot $(Y,K)$ there is a spectral sequence starting with $\widehat{HFK}(Y,K)$ and converging to $\widehat{HF}(Y)$, the Heegaard Floer homology of the ambient three-manifold. This leads to a rank inequality:
\[\text{rk} \widehat{HFK}(Y,K) \geq \text{rk}\widehat{HF}(Y) \ . \]
If the equality holds then we say that $(Y,K)$ is \textit{Floer simple}. If $Y$ is assumed to be an $L$-space this is the same as saying that $tHFK^-(Y,K) = \mathcal{R}^{|H_1(Y; \mathbb{Z})|}$.
\begin{exa} The plumbing
\vspace{-0.3cm}
\[\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(0.5,0) }*+{K}
!{(-1.5,0.5) }*+{-p}
"x"-"a1"
} \ .\]
represents a Floer simple knot in the lens space $L(p,1)$.
\end{exa}
Floer simple knots played an important role in the work of Baker, Grigsby and Hedden \cite{}. We use the exact triangle to produce some new examples of knots belonging to this class.
\begin{thm} \label{simpleknots}Suppose that a knot $(Y,K)$ can be represented by means of a negative-definite plumbing tree $\Gamma$ with one unframed vertex $v_0$ and no bad points. Then
\[tHFK^-(Y(G),K)\simeq t\mathbb{HFK}_*(\Gamma)=\mathcal{R}^{|H_1(Y(G);\mathbb{Z})|} \ .\]
In particular, the knot $(Y,K)$ is Floer simple.
\end{thm}
\begin{proof} First suppose that the strict inequality $\deg(w)<-w^2$ holds at every vertex $w$ of $G=\Gamma\setminus \{v_0\}$. Consider the long exact sequence of Theorem \ref{latticeexact}
\begin{equation*} \xymatrix{
\ar[r] & t\mathbb{HFK}_p(\Gamma-v) \ar[r] & t\mathbb{HFK}_p(\Gamma) \ar[r] & t\mathbb{HFK}_p(\Gamma_{+1}(v)) \ar[r] & t\mathbb{HFK}_{p-1}(\Gamma) \ar[r] &} \ ,
\end{equation*}
and choose $v$ to be a leaf. Then going by induction as in \cite[Lemma 2.11]{OS20} we can conclude that $t\mathbb{HFK}_p(\Gamma)=0$ for $p>0$, and $t\mathbb{HFK}_0(\Gamma)=\mathcal{R}^{|\det(G)|}$.
If $\deg(w)=-w^2$ at some vertices $\{v_{i_1}, \dots, v_{i_k}\}$ of $G$, we can proceed by induction on $k$ choosing $v$ to be one of these vertices. This proves that
\[t\mathbb{HFK}_*(\Gamma)=\mathcal{R}^{|\det(G)|} = \mathcal{R}^{|H_1(Y(G);\mathbb{Z})|}\]
when there are no bad points. On the other hand, in \cite{Lspaceknots} Ozsv\' ath, Stipsicz, and Szab\' o showed that when there are no bad points there is a chain homotopy equivalence $\mathbb{CFK}_*^\infty(\Gamma)\simeq CFK^\infty(Y(G),K)$. Since $tHFK^-(Y(G),K)$ and $t\mathbb{HFK}_*(\Gamma)$ are obtained by $t$-modification \cite[Section 4]{OSS4} from $CFK^\infty(Y(G),K)$ and $\mathbb{CFK}_*^\infty(\Gamma)$ respectively, we can conclude that there is an isomorphism $tHFK^-(Y(G),K)\simeq t\mathbb{HFK}_*(\Gamma)$.
\end{proof}
\begin{exa}The trefoil knot $(S^3, K)$ is not Floer simple. Indeed, for the trefoil knot $\text{rk} \widehat{HFK}(S^3,K)=3$, while $\text{rk}\widehat{HF}(S^3)=1$. Note that the trefoil knot can be represented by means of a plumbing tree with one bad vertex.
\end{exa}
\subsection{Reduced lattice groups} Given a $CW$-complex $X$ one can consider the augmentation homomorphism $\epsilon: C_0(X;\mathbb{F}) \to \mathbb{F}$. This gives rise to the reduced homology $\widetilde{H}_*(X; \mathbb{F})$ of $X$ \cite{AH}. Similarly given a plumbing diagram $\Gamma$ of an algebraic knot $(Y,K)$ we can consider the augmentation homomorphism
\[\mathbb{CF}_0(G, \mathfrak{s}) =\prod_{l\in \mathbb{R}}C_0(M_l; \mathbb{F}) \longrightarrow \prod_{l\geq \gamma(t)} \mathbb{F}=\mathcal{R}_{(\gamma(t))} \ ,\]
where, in the notation of Section \ref{construction}, $\gamma(t)=\min_{x\in \mathbb{Z}^s} \chi_t(x)$. This gives rise to the reduced lattice group $t\mathbb{HFK}_{\text{red},*}(\Gamma,\mathfrak{s})= \prod_{l\in \mathbb{R}}\widetilde{H}_*(M_l; \mathbb{F})$
with the property that:
\[t\mathbb{HFK}_*(\Gamma,\mathfrak{s})=\mathcal{R}_{(\Upsilon_{\Gamma, \mathfrak{s}}(t))} \oplus t\mathbb{HFK}_{\text{red},*}(\Gamma,\mathfrak{s}) \ .\]
In particular, we have that $t\mathbb{HFK}_{\text{red},p}(\Gamma,\mathfrak{s})= t\mathbb{HFK}_{p}(\Gamma,\mathfrak{s})$ for $p\geq 1$.
\begin{cor}\label{badpoints}If $\Gamma$ represents a type-n knot then $t\mathbb{HFK}_{\text{red},p}(\Gamma)=0$ for $p\geq n$.
\end{cor}
\begin{proof}By induction on the number of bad points as in \cite[Theorem 5.1]{OSS1}. The base step here is provided by Theorem \ref{simpleknots}.
\end{proof}
\section{A map $t\mathbb{HFK}^-_0(\Gamma)\to t \text{HFK}^-(Y(G), K)$}
We start with a handy description of the group $t\mathbb{HFK}^-_0(\Gamma)$.
\begin{lem}\label{relations}
The group $t\mathbb{HFK}^-_0(\Gamma)$ can be identified with the quotient
\[t\mathbb{HFK}^-_0(\Gamma)= \left. \left( \bigoplus_{k \in \text{Char}(G)} \mathcal{R} \otimes k \right) \right/ \sim \]
where $\sim$ denotes the equivalence relation generated by the following elementary relations. Let $v$ be a vertex of $G=\Gamma-v_0$, let $k\in \text{Char}(G)$ be a characteristic vector, and let $2n=k(v)+v\cdot v$. If $v$ is not connected to $v_0$ in $\Gamma$ then
\begin{itemize}
\item if $n\geq 0$ then $q^{2n+m}\otimes (k+2v^*)\sim q^{m}\otimes k$,
\item while if $n\leq 0$ then $q^{m}\otimes (k+2v^*)\sim q^{m-2n}\otimes k$.
\end{itemize}
If $v$ is connected to $v_0$ in $\Gamma$ then
\begin{itemize}
\item if $n\geq 0$ then $q^{m+2n+t}\otimes (k+2v^*)\sim q^{m}\otimes k$,
\item while if $n\leq -1$ then $q^{m}\otimes (k+2v^*)\sim q^{m-2n-t}\otimes k$.
\end{itemize}
\end{lem}
\begin{proof}
This is just a direct computation of the differential $\partial_t: \mathbb{C} \mathbb{F}_1(G)\to \mathbb{C} \mathbb{F}_0(G)$.
\end{proof}
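For concreteness, let us carry out one instance of this computation, in the notation of Section \ref{combinatorialexactsequence}. Consider the $1$-cell $[k,\{v\}]$ for a vertex $v$ connected to $v_0$ in $\Gamma$, and suppose that $n\geq 0$. Since $g[k,\emptyset]=0$ and $g[k,\{v\}]=\min(0,n)=0$, we get $a_v[k,\{v\}]=0$ and $b_v[k,\{v\}]=n$. Moreover, as $v_0^*(v)=v_0\cdot v=1$, the vector $k+2v_0^*$ satisfies $(k+2v_0^*)(v)+v\cdot v=2(n+1)$, so that $a_v[k+2v_0^*,\{v\}]=0$ and $b_v[k+2v_0^*,\{v\}]=n+1$. Hence
\[a_v(t)=0 \ , \ \ \ b_v(t)=(2-t)\,n+t\,(n+1)=2n+t \ ,\]
and $\partial_t[k,\{v\}]=1\otimes k+q^{2n+t}\otimes (k+2v^*)$, recovering the relation $q^{m+2n+t}\otimes (k+2v^*)\sim q^{m}\otimes k$. The remaining relations are obtained analogously.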
Based on this explicit description of the lattice group $t\mathbb{HFK}^-_0(\Gamma)$ we build a module homomorphism $t\mathbb{HFK}^-_0(\Gamma)\to t \text{HFK}^-(Y(G), K)$.
First we observe that the algebraic knot associated to a plumbing diagram $\Gamma$ comes with a natural doubly pointed Heegaard diagram. This is obtained as follows. Fix a planar drawing $\Gamma \hookrightarrow \mathbb{R}^2$ of the graph $\Gamma$ and build the associated surgery diagram $\mathcal{D}_\Gamma$ suggested by Figure \ref{trefoil}. Forgetting about the framings and the under-over conventions, this gives a connected, $4$-valent planar graph. Thickening $\mathcal{D}_\Gamma \subset \mathbb{R}^2\times 0 \subset \mathbb{R}^3$ we get a genus $g=\#(\text{vertices})+ \# (\text{edges})$ solid handlebody $V \subset S^3$ providing, together with its complement $V^c=\overline{S^3-V}$, a Heegaard splitting of $S^3$. We choose as $\alpha$-curves of the corresponding Heegaard diagram (supported on $\Sigma= \partial V$) the boundaries of the compact complementary regions of $\mathcal{D}_\Gamma \subset \mathbb{R}^2$ (this provides a set of compressing circles for $U_\alpha=V^c$). As compressing circles of $U_\beta=V$ ($\beta$-curves) we take a meridian for each link component of $\mathcal{D}_\Gamma$, and one further curve for each edge of $\Gamma$ as suggested by Figure \ref{beta}. By taking base points on the two sides of the $\beta$-curve corresponding to the meridian of the unframed vertex $v_0\in \Gamma$ we get a doubly pointed Heegaard diagram $\mathcal{H}_{\alpha\beta}=(\Sigma, \alpha, \beta, z, w)$ representing the unknot $U \subset S^3$.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{B}
\caption{\label{beta}}
\end{center}
\end{figure}
Notice that in the choice of the $\beta$-curves above we can substitute the meridians of the framed components with the framings; replacing the remaining $\beta$-curves with small Hamiltonian translates we get another $g$-tuple of curves $\gamma$. This gives rise to two further doubly-pointed Heegaard diagrams: $\mathcal{H}_{\alpha\gamma}=(\Sigma, \alpha, \gamma, z, w)$ representing $K \subset Y(G)$, and $\mathcal{H}_{\beta\gamma}=(\Sigma, \beta, \gamma, z, w)$ representing the unknot in $\#^{g-\ell} S^1 \times S^2$.
Let $\Theta_{\beta\gamma} \in \mathbb{T}_\beta \cap \mathbb{T}_\gamma$ denote the intersection point representing the top-dimensional generator of $H_*(CF_t(\mathbb{T}_\beta, \mathbb{T}_\gamma))= \Lambda^*H_1(T^{g-\ell}) \otimes \mathcal{R}$. Given intersection points $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ and $\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$ Ozsv\' ath and Szab\' o \cite[Proposition 8.4]{OS2} built a map $\mathfrak{s}_z : \pi_2(\mathbf{x}, \Theta_{\beta \gamma} , \mathbf{y}) \to \text{Spin}^c(X(G))$. Given a $\text{Spin}^c$ structure $\mathfrak{t}$ of the plumbing of spheres $X(G)$ we define a map
$f_{\Gamma, \mathfrak{t}}: CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta) \to CF_t(\mathbb{T}_\alpha, \mathbb{T}_\gamma)$
setting
\[f_{\Gamma, \mathfrak{t}}(\mathbf{x})=\sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma} \sum_{ \substack{ \Delta \in \pi_2(\mathbf{x} ,\Theta_{\beta\gamma}, \mathbf{y}) \\ \mathfrak{s}_z(\Delta)=\mathfrak{t}, \ \mu( \Delta ) =0 }}\#\mathcal{M}(\Delta) \ q^{tn_z(\Delta)+ (2-t) n_w(\Delta)} \cdot \mathbf{y} \ .\]
Of course, this is just a $\text{Spin}^c$ refinement of the holomorphic triangle count of Section \ref{sectionanalytictriangle}. Inspecting the ends of moduli spaces of holomorphic triangles with Maslov index $\mu=1$ one concludes that $f_{\Gamma, \mathfrak{t}}$ is a chain map. Thus given a characteristic vector $k \in \text{Char}_\mathfrak{s}(G)\subset \text{Spin}^c(X(G))$ we get a map
$F_{\Gamma, k}: \mathcal{R}=H_*(CF_t(\mathbb{T}_\alpha, \mathbb{T}_\beta)) \to \text{tHFK}^-(K, Y(G), \mathfrak{s})$, $F_{\Gamma, k}= (f_{\Gamma, k})_*$.
\begin{exa}[Lens space cobordism]\label{lensspaces}
Let $D_{-p}$ denote the total space of the disk bundle $D_{-p}\to S^2$ with Euler number $e=-p$, and let $\Delta \subset D_{-p}$ be a fibre disk. Then $\partial(D_{-p}, \Delta)= (L(p,1), K)$ is a Floer simple knot with plumbing
\vspace{-0.3cm}
\[\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!~-{@{-}@[|(2.5)]}
!{(0,0) }*+{\bullet}="x"
!{(-1.5,0) }*+{\bullet}="a1"
!{(0.5,0) }*+{K}
!{(-1.5,0.5) }*+{-p}
"x"-"a1"
} \ .\]
A doubly pointed Heegaard triple representing $(D_{-p}, \Delta)$ is given by $(T^2, \alpha, \beta, \gamma, z, w)$, where $\alpha$ and $\gamma$ represent respectively a longitude and a meridian of the two-torus $T^2$, and $\beta$ a curve of type $(1,p)$. The two base points $z$ and $w$ are chosen to lie on the two opposite sides of the $\beta$-curve.
Let $v$ represent the generator of $H_2(D_{-p}, \mathbb{Z})$. If $k$ is a characteristic vector such that $2n=k(v)+v\cdot v$ then
\begin{itemize}
\item $q^{2n+t}F_{\Gamma, \ k+2v^*}=F_{\Gamma,k}$ if $n\geq 0$, and
\item $F_{\Gamma, \ k+2v^*}= q^{-2n-t}F_{\Gamma,k}$ if $n\leq -1$.
\end{itemize}
This is a direct computation along the lines of \cite[Section 4.1]{correctiontermslens}.
\end{exa}
We define $\varphi_{\Gamma}: \text{Char}_\mathfrak{s}(G) \to \text{tHFK}^-(Y(G),K, \mathfrak{s})$ setting $\varphi_{\Gamma}(k)=F_{\Gamma, k}(1)$.
\begin{lem} $\varphi_{\Gamma}$ descends to an $\mathcal{R}$-module homomorphism
\[\Phi_{\Gamma}:t\mathbb{HFK}^-_0(\Gamma, \mathfrak{s})\to t\text{HFK}^-(Y(G),K, \mathfrak{s}) \ .\]
\end{lem}
\begin{proof}A vertex $v$ of $G$ corresponds to a $(-p)$-framed sphere embedded in $X(G)$. This has a tubular neighbourhood $D_{-p}$ diffeomorphic to a disk bundle with base the two-sphere $S^2$, and Euler number $e=-p$. Let $\Delta \subset D_{-p}$ be a fibre disk.
There are two cases: the case when $v$ is linked to the unframed vertex $v_0$ and the case when it is not. In the case when $v$ is not linked to $v_0$ in $\Gamma$ one concludes that the map $F_{\Gamma, k}:\mathcal{R} \to tHFK^-(K, Y(G), \mathfrak{s})$ factors through the cobordism map associated to $D_{-p}: S^3 \to L(p,1)$. This implies the first set of relations of Lemma \ref{relations}. If $v$ is linked to $v_0$ on the other hand, the map $F_{\Gamma, k}:\mathcal{R} \to tHFK^-(K, Y(G), \mathfrak{s})$ factors through the cobordism map associated to the pair $(D_{-p}, \Delta)$ we computed in Example~\ref{lensspaces}. In this case we get the second set of relations.
\end{proof}
\begin{prop}If $\Gamma$ has at most one bad point then there is an isomorphism of $\mathcal{R}$-modules $tHFK^-(Y(G), K) \simeq t\mathbb{HFK}^-_*(\Gamma)$.
\end{prop}
\begin{proof}
Putting the pieces together we get a commutative diagram with exact rows:
\[\xymatrix{
& tHFK^-(Y(G-v), K_0) \ar[r] &tHFK^-(Y(G), K) \ar[r] & tHFK^-(Y(G_{+1}(v)), K') \\
&t\mathbb{HFK}^-_0(\Gamma-v) \ar[r] \ar[u]^{\Phi_{\Gamma-v}}&t\mathbb{HFK}^-_0(\Gamma) \ar[r] \ar[u]^{\Phi_{\Gamma}} & t\mathbb{HFK}^-_0(\Gamma_{+1}(v)) \ar[u]^{\Phi_{\Gamma_{+1}(v)}} \\
}\]
as in \cite[Lemma 2.10]{OS20}. Note that the bottom row fits in a short exact sequence
\begin{equation*} \xymatrix{
0\ar[r] & t\mathbb{HFK}_0(\Gamma-v) \ar[r] & t\mathbb{HFK}_0(\Gamma) \ar[r] & t\mathbb{HFK}_0(\Gamma_{+1}(v)) \ar[r] &0} \ ,
\end{equation*}
provided that $t\mathbb{HFK}_1(\Gamma_{+1}(v))=0$. Similarly, one can show that the top row fits in a short exact sequence
\[0 \to tHFK^-(Y(G-v), K_0) \to tHFK^-(Y(G), K) \to tHFK^-(Y(G_{+1}(v)), K')\to 0 \ ,\]
provided that $G_{+1}(v)$ is negative definite and $G-v$ has no bad points. To see this we note that $tHFK^\infty(Y,K)$, the homology of the localisation $CF_t(Y,K)\otimes \mathcal{R}_*$, agrees with $HF^\infty(Y)\otimes \mathcal{R}_*\simeq \mathcal{R}_*$, and that the map
\[
\xymatrix{
HF^\infty(Y(G_{+1}(v))) \otimes \mathcal{R}_* \ar[r] & HF^\infty(Y(G-v)) \otimes \mathcal{R}_* \\
tHFK^\infty(Y(G_{+1}(v)), K') \ar[r] \ar[u]^{\cong}& tHFK^\infty(Y(G-v), K_0) \ar[u]^{\cong} \\
}
\]
is trivial, the underlying cobordism being indefinite.
With the above said, one can run the very same argument of \cite[Theorem 2.1]{OS20} to show that there is an identification $tHFK^-(Y(G), K) \simeq t\mathbb{HFK}^-_0(\Gamma)$. On the other hand, according to Corollary \ref{badpoints}, for a diagram with at most one bad point we have that $t\mathbb{HFK}^-_p(\Gamma)=0$ for $p>0$. Hence $t\mathbb{HFK}^-_*(\Gamma)=t\mathbb{HFK}^-_0(\Gamma)$, proving the claim.
\end{proof}
\subsection{$\mathbb{Z}/2\mathbb{Z}$-grading and extension to the case of two bad points} We now want to extend our main result to the case of two bad points. To this end we introduce the relative $\mathbb{Z}/2\mathbb{Z}$-grading on \textit{some} of the $t$-modified knot homologies.
First, note that $tHFK^-(Y,K)$ has a natural real-valued relative $\text{gr}_t$-grading
\[\text{gr}_t(x,y)=\text{gr}_t(x)-\text{gr}_t(y) \ ,\]
characterised by the property that for every pair of intersection points $\mathbf{x}$ and $\mathbf{y}$
\[\text{gr}_t(\mathbf{x},\mathbf{y})= \mu(\phi)-t\cdot n_z(\phi)-(2-t) \cdot n_w(\phi) \ , \]
where $\phi\in \pi_2(\mathbf{x},\mathbf{y})$ denotes a Whitney disk connecting $\mathbf{x}$ to $\mathbf{y}$. Analogously, in the combinatorial theory $t\mathbb{HFK}_*$ we have a relative $\text{gr}_t$-grading
\begin{align*}
\text{gr}_t(\Box,\Box')&= \text{dim}(\Box)- \text{dim}(\Box') \\
&-(2-t) \cdot (w_k(\Box)-w_k(\Box'))- t \cdot (w_{k+2v_0^*}(\Box)-w_{k+2v_0^*}(\Box')) \ .
\end{align*}
Secondly, recall \cite[Section 3]{OSS4} that if $0\leq t=\frac{m}{n}\leq 2$ is a rational number then in the definition of $tHFK^-(Y,K)$ the ring $\mathcal{R}$ can be replaced with the ring of polynomials with fractional exponents $\mathbb{F}[q^{1/n}] \subset \mathcal{R}_*$. In this case the relative $\text{gr}_t$-grading is rational valued. More specifically, it takes values in $\mathbb{Z}[\frac{1}{n}] \subset \mathbb{Q}$, the ring of fractions associated to the multiplicative set $S=\{n^k: k\geq 0\}\subset \mathbb{Z}$. Note that the ideals of $\mathbb{Z}[\frac{1}{n}]$ are in one-to-one correspondence with the ideals of $\mathbb{Z}$ that do not meet $S$. In particular if $n$ is \emph{odd} then $2$ generates a maximal ideal $\mathfrak{m} \subset \mathbb{Z}[\frac{1}{n}]$ with quotient field $\mathbb{Z}[\frac{1}{n}]/\mathfrak{m} \simeq \mathbb{Z}/2\mathbb{Z}$. Thus, if we choose $0\leq t\leq 2$ to be of the form $t=m/(2b+1)$ then there is a well-defined $(\text{mod }2)$ reduction of the relative $\text{gr}_t$-grading. The equivalence relation:
\[\mathbf{x} \sim \mathbf{y} \ \ \Longleftrightarrow \ \ \text{gr}_t(\mathbf{x},\mathbf{y})=0 \ (\text{mod }2)\]
partitions the generators of $CF_t(Y,K)$ into two equivalence classes, and determines a splitting of the chain
group: $CF_t(Y,K)= CF^\text{even}_t(Y,K) \oplus CF^\text{odd}_t(Y,K)$. Since the differential $\partial_t$ flips the two summands, we have a decomposition of the groups
\[tHFK^-(Y,K)= tHFK^-_\text{even}(Y,K) \oplus tHFK^-_\text{odd}(Y,K) \ . \]
Similarly one has a splitting in the combinatorial theory
\[t\mathbb{HFK}_*(\Gamma)= t\mathbb{HFK}_\text{even}(\Gamma) \oplus t\mathbb{HFK}_\text{odd}(\Gamma) \ . \]
Thirdly, we observe that if we choose $t=m/(2b+1)$ with $m=2a$ \textit{even} then the situation greatly simplifies. In the analytic theory the relative $\mathbb{Z}/2\mathbb{Z}$-grading $\text{gr}_t(\mathbf{x},\mathbf{y})= \mu(\phi) \ (\text{mod }2)$ collapses onto the ``classical'' Maslov grading, while in the combinatorial theory there are identifications:
\[t\mathbb{HFK}_\text{even}(\Gamma)= \bigoplus_{p \text{ even}} t\mathbb{HFK}_{p}(\Gamma) \ , \text{ and } \
t\mathbb{HFK}_\text{odd}(\Gamma)= \bigoplus_{p \text{ odd}} t\mathbb{HFK}_{p}(\Gamma) \ ,\]
since $\text{gr}_t(\Box, \Box')=\text{dim}(\Box)- \text{dim}(\Box') \ (\text{mod }2)$. Note that in the case when $\Gamma$ has at most one bad point we have a vanishing result for the odd homologies: $tHFK^-_\text{odd}(Y(G), K)\simeq t\mathbb{HFK}^-_\text{odd}(\Gamma)=0$.
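Indeed, for $t=2a/(2b+1)$ both $t=2\cdot\frac{a}{2b+1}$ and $2-t$ belong to $\mathfrak{m}$, so that for every Whitney disk $\phi$
\[t\cdot n_z(\phi)+(2-t)\cdot n_w(\phi)\equiv 0 \ \ (\text{mod } \mathfrak{m}) \ ,\]
and the relative grading reduces to $\mu(\phi) \ (\text{mod }2)$, as claimed.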
\begin{prop}\label{close}Suppose that $0\leq t \leq 2$ is a rational number of the form $t=2a/(2b+1)$. If $\Gamma$ has at most two bad points then there is an isomorphism of $\mathcal{R}$-modules $tHFK^-_\text{even}(Y(G), K) \simeq t\mathbb{HFK}^-_0(\Gamma)$.
\end{prop}
\begin{proof}
If we choose $v$ to be one of the two bad points we get a commutative diagram with exact rows:
\[\xymatrix{
tHFK^-_\text{even}(Y(G-v), K_0) \ar[r] &tHFK^-_\text{even}(Y(G), K) \ar[r] & tHFK^-_\text{even}(Y(G_{+1}(v)), K') \\ t\mathbb{HFK}^-_0(\Gamma-v) \ar[r] \ar[u]^{\Phi_{\Gamma-v}}&t\mathbb{HFK}^-_0(\Gamma) \ar[r] \ar[u]^{\Phi_{\Gamma}} & t\mathbb{HFK}^-_0(\Gamma_{+1}(v)) \ar[u]^{\Phi_{\Gamma_{+1}(v)}} \\
}\]
Again, we have that the bottom row fits in a short exact sequence
\begin{equation*} \xymatrix{
0\ar[r] & t\mathbb{HFK}_0(\Gamma-v) \ar[r] & t\mathbb{HFK}_0(\Gamma) \ar[r] & t\mathbb{HFK}_0(\Gamma_{+1}(v)) \ar[r] &0} \ .
\end{equation*}
Furthermore the last map on the top row is surjective since the triangle map
\[tHFK^-(Y(G_{+1}(v)), K')\to tHFK^-(Y(G-v), K_0)\]
flips the relative Maslov grading and $tHFK^-_\text{odd}(Y(G-v), K_0)=0$. As in \cite[Theorem 2.2]{OS20} this information is sufficient to conclude that the right-most map is an isomorphism provided that the other two are.
\end{proof}
\begin{proof}[Proof of Theorem \ref{maingoal}]
First suppose that $0\leq t \leq 2$ is a rational number of the form $t=2a/(2b+1)$. According to the computations of the degree shifts of the triangle counting maps performed by Zemke in \cite{zemkegradings}, the isomorphism $tHFK^-_\text{even}(Y(G),K)\simeq t\mathbb{HFK}_0(\Gamma)$ of Proposition \ref{close} preserves the $\text{gr}_t$-grading. Since non-torsion elements are concentrated in even gradings we have that $\Upsilon_{K,\mathfrak{s}}(t)=\Upsilon_{\Gamma, \mathfrak{s}}(t)$ for all values of the parameter $t$ in
\[C=\left\{\frac{2a}{2b+1} \ : \ a, b\in \mathbb{Z}_{\geq 0} \right\} \cap [0,2]\ . \]
On the other hand $C$ is dense in $[0,2]$, the upsilon function is continuous \cite[Proposition 1.4]{OSS4}, and two continuous functions that agree on a dense set agree everywhere.
\end{proof}
The high-energy gamma-ray sky (0.1-10 GeV), as observed by the Fermi Large Area Telescope \citep{2009ApJ...697.1071A}, is dominated by two classes of sources: blazars and pulsars. Gamma-ray pulsars are concentrated along the Galactic plane, while extragalactic blazars are distributed uniformly. Both classes of sources are strongly variable, and their gamma-ray emission is non-thermal, indicating efficient mechanisms of particle acceleration.
Gamma-ray variability has been observed in certain blazars and radio galaxies on time scales of several minutes, much shorter than the light crossing time of their supermassive black holes (typically a few hours), e.g.: PKS~2155-304 \citep{2007ApJ...664L..71A}, Mrk~501 \citep{2007ApJ...669..862A}, PKS~1222+216 \citep{2011ApJ...730L...8A}, IC~310 \citep{2014Sci...346.1080A}, 3C~279 \citep{2016ApJ...824L..20A}. It has been argued that such short variability time scales require a highly localized dissipation mechanism, not directly related to variations in the jet structure induced at the central black hole. In addition, such rapid variations observed at very high gamma-ray luminosity pose a potential problem of intrinsic absorption of the gamma-ray radiation in a photon-photon pair creation process. Such absorption can be avoided by postulating very high Doppler or Lorentz factors $\mathcal{D} \sim \Gamma \sim 100$ \citep{2008MNRAS.384L..19B}. In the case of luminous quasars, like PKS~1222+216 and 3C~279, additional absorption of gamma rays can be expected at subparsec scales due to the external radiation field that includes broad emission lines and direct accretion disk radiation.
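For orientation, the following minimal sketch (in which the black-hole mass $M=10^9\,M_\odot$ is our own illustrative assumption) reproduces the light-crossing time scale quoted above:
\begin{verbatim}
# Light-crossing time of the gravitational radius r_g = GM/c^2,
# for an assumed black-hole mass of 10^9 solar masses.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]
M_sun = 1.989e30   # solar mass [kg]

M = 1e9 * M_sun            # assumed supermassive black-hole mass
r_g = G * M / c**2         # gravitational radius, ~1.5e12 m
t_lc = r_g / c             # light-crossing time [s]
print(f"t_lc = {t_lc:.0f} s = {t_lc / 3600:.1f} h")   # ~1.4 h
\end{verbatim}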
Relativistic magnetic reconnection has been proposed as a solution to these challenges in the form of the minijets model, in which reconnection produces additional relativistic bulk outflows in the jet co-moving frame, increasing the effective Doppler factor \citep{2009MNRAS.395L..29G}. A semi-analytical model of minijets has been applied directly to the case of PKS~2155-304 \citep{2011MNRAS.413..333N}. However, that model was highly simplified, and over the last several years numerical simulations showed that relativistic magnetic reconnection is a much more complex phenomenon.
Understanding of magnetic reconnection has been developing slowly since the first ideas were formulated in the 1950s in the context of solar physics. Analytical models have difficulty in describing the reconnection process in detail, as it necessarily involves plasma physics beyond the standard MHD or force-free regimes. Kinetic numerical simulations, in particular the particle-in-cell (PIC) algorithm, are our best tools for studying magnetic reconnection in both non-relativistic and relativistic regimes.
Particle acceleration in relativistic reconnection has been studied with PIC simulations since the work of \cite{2001ApJ...562L..63Z}. At that time, it was not even clear whether relativistic reconnection is an efficient dissipation mechanism, as solutions based on smooth Sweet-Parker current layers predicted very low reconnection rates (slow inflow velocities, and hence weak electric fields). It has been known that long current layers are prone to the tearing-mode instability, which produces chains of plasmoids, but PIC simulations were necessary to demonstrate that formation of plasmoids accelerates the reconnection rate \citep{2004PhPl...11.1151J}.
With increasing computational power, PIC simulations showed that relativistic reconnection is very efficient in accelerating particles, producing power-law particle energy distributions $N(\gamma) \propto \gamma^{-p}$ with indices approaching $p \to 1$ in the limit of relativistic background magnetization $\sigma_0 = B_0^2/(4\pi w_0) \gg 1$, where $w_0$ is the relativistic enthalpy density including the rest-mass energy \citep{2014ApJ...783L..21S,2014PhRvL.113o5005G,2016ApJ...816L...8W}. Recent simulations of Harris layers with open boundaries showed how reconnection can operate as a steady-state mechanism producing plasmoids with a specific size distribution accelerated to relativistic bulk motions \citep{2016MNRAS.462...48S}.
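As a concrete reading of this definition, for a cold electron-positron background the enthalpy density is dominated by the rest-mass term, $w_0 \simeq n_0 m_{\rm e} c^2$ with $n_0$ the total pair density; a minimal sketch (in Gaussian-cgs units, with parameter values that are our own illustrative choices) then reads:
\begin{verbatim}
# Magnetization sigma_0 = B0^2 / (4*pi*w0) of a cold pair plasma,
# with w0 ~ n0 * m_e * c^2; parameter values are illustrative.
import math

m_e = 9.109e-28    # electron mass [g]
c = 2.998e10       # speed of light [cm/s]
B0 = 1.0           # magnetic field [G], assumed
n0 = 1.0e2         # total pair density [cm^-3], assumed

w0 = n0 * m_e * c**2
sigma0 = B0**2 / (4.0 * math.pi * w0)
print(f"sigma_0 = {sigma0:.2e}")   # ~1e3 for these values
\end{verbatim}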
Introduction of synchrotron radiative losses to the PIC algorithm \citep{2013ApJ...770..147C} made it possible to study particle acceleration under severe radiative losses, with direct application to the gamma-ray flares from the Crab Nebula \citep{2011Sci...331..736T}, which are interpreted in terms of synchrotron emission exceeding the radiation reaction photon energy limit of $\sim 100\;{\rm MeV}$.
Finally, alternative initial conditions are being explored, e.g., ``ABC fields'', which in addition allow one to study the formation and dynamics of current layers \citep{2016ApJ...826..115N,2016ApJ...828...92Y,2017JPlPh..83f6302L}.
One of the most interesting properties of radiation produced by relativistic magnetic reconnection is its rapid variability. This is illustrated with the example of a 2-dimensional simulation of a Harris layer described in detail in \cite{2015ApJ...815..101N}.
These results have been obtained by us from numerical simulations performed with the PIC code Zeltron \citep{2013ApJ...770..147C} created by Beno{\^i}t Cerutti\footnote{http://benoit.cerutti.free.fr/Zeltron/}.
The space-time diagrams of synchrotron emissivity and the corresponding lightcurves are presented here for the first time.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{xymap_ne.png}
\caption{Snapshots (x,y-maps) from the evolution of a Harris layer undergoing tearing instability and hierarchical mergers of plasmoids. The color scale indicates the particle (electron-positron) density $n_{\rm e}$, and cyan lines indicate the magnetic field lines. The simulation time for each panel is given on the left side. The dashed white lines mark the regions from which the x-profiles used to build the (x,t) space-time diagrams were extracted.}
\label{fig_xy}
\end{figure}
\section{Rapid variability of emission produced during relativistic reconnection}
A Harris layer is defined as \citep{1962NCim...23..115H,2003ApJ...591..366K}:
\begin{eqnarray}
B_{\rm x}(y) &=& B_0\tanh(y/\delta)\,,
\\
n_{\rm e}(y) &=& n_{e,0}\cosh^{-2}(y/\delta)\,,
\end{eqnarray}
with equilibrium provided by tuning the temperature (pressure) and drift velocity (current density) of the particles in order to match the magnetic field gradient.
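To make this setup concrete, the following minimal sketch (an illustration of ours, not part of the Zeltron code; the symbol names, the unit normalizations, and the assumption that $n_{e,0}$ counts both species of the pair plasma are our own choices) evaluates the Harris profiles on a grid together with the temperature implied by pressure balance, $n_{e,0} k_{\rm B} T = B_0^2/8\pi$:
\begin{verbatim}
import numpy as np

# Harris-layer initial condition; Gaussian-like units with
# arbitrary normalization (B0, delta, ne0 set to unity).
B0, delta, ne0 = 1.0, 1.0, 1.0

y  = np.linspace(-10 * delta, 10 * delta, 1001)
Bx = B0 * np.tanh(y / delta)         # reversing field component
ne = ne0 / np.cosh(y / delta)**2     # peaked particle density

# Pressure balance across the layer, n kT = B0^2/(8 pi),
# assuming ne0 is the total (electron + positron) density;
# the drift speed is then tuned so that the current density
# matches the curl of B, as described in the text.
kT = B0**2 / (8 * np.pi * ne0)
print("kT =", kT)
\end{verbatim}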
Fig.\,\ref{fig_xy} shows the spatial distribution of plasma and magnetic fields along a region centered on the initially uniform Harris layer for several simulation times. We can see the formation of multiple plasmoids due to the tearing-mode instability and their subsequent evolution. In order to reveal the temporal evolution of the plasmoids in greater detail, we extracted 1-dimensional profiles of various plasma parameters that were combined into 2-dimensional space-time diagrams.
Fig.\,\ref{fig_xt} shows two examples of such space-time diagrams: for the particle density $n_{\rm e}$ and for the mean particle energy $\left<\gamma\right>$ (see \citealt{2015ApJ...815..101N} for more examples). Plasmoids can be seen to consist of two main parts: dense cool cores and hot dilute layers.
The synchrotron radiation power scales with particle density, particle energy, and magnetic field strength. It turns out that the total synchrotron radiation power is concentrated along the hot layers, rather than in the dense cores.
\begin{figure}[ht]
\includegraphics[width=0.49\textwidth]{xtmap_ne.pdf}
\includegraphics[width=0.49\textwidth]{xtmap_g.pdf}
\caption{Space-time diagrams of particle density $n_e$ (left) and particle mean energy $\left<\gamma\right>$ (right) for the same simulation as shown in Fig.\,\ref{fig_xy}.}
\label{fig_xt}
\end{figure}
Fig.\,\ref{fig_lc} shows the synchrotron emissivity calculated for two opposite observers: Observer 1 at $+x$ and Observer 2 at $-x$. It can be seen that the emissivity, like the total power, is concentrated along the hot plasmoid layers. However, the difference between the emissivity distributions directed towards opposite observers demonstrates significant anisotropy of the radiation. Observer 1 detects more radiation from the plasmoids propagating to the right, while the plasmoids propagating to the left contribute very little to the observed emission.
Each space-time diagram of emissivity can be converted into the observed lightcurve by collecting radiation along the light cones corresponding to fixed observation times $t_{\rm obs}$.
In the bottom row of Fig.\,\ref{fig_lc}, we show lightcurves that correspond exactly to the space-time diagrams of synchrotron emissivity presented in the upper row.
Several characteristic observation times are indicated, namely the moments of major observed flares/spikes.
Each dashed vertical line in the lightcurves has a corresponding light cone in the space-time diagram. These light cones are essential for locating the emission event responsible for each observed flare. We can see that most such light cones pass through a plasmoid merger event. We can thus associate sharp radiation flares with plasmoid mergers.
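To make the procedure explicit, the following sketch (our own minimal implementation; the emissivity array and grids are placeholders, not output of Zeltron) bins a tabulated $(x,t)$ emissivity map along light cones $t_{\rm obs} = t - x/c$, appropriate for Observer 1 at $+x$; Observer 2 at $-x$ would use $t_{\rm obs} = t + x/c$:
\begin{verbatim}
import numpy as np

def lightcurve(j_xt, x, t, c=1.0, sign=-1):
    """Sum emissivity along light cones t_obs = t + sign*x/c.

    j_xt : emissivity map of shape (len(t), len(x))
    sign : -1 for an observer at +x, +1 for an observer at -x
    """
    dt = t[1] - t[0]
    t_obs = t[:, None] + sign * x[None, :] / c
    bins = np.arange(t_obs.min(), t_obs.max() + dt, dt)
    flux, edges = np.histogram(t_obs.ravel(), bins=bins,
                               weights=j_xt.ravel())
    return 0.5 * (edges[:-1] + edges[1:]), flux

# Toy example: a single localized emission event ("merger")
# at (x0, t0) maps onto a single sharp spike in the lightcurve.
x = np.linspace(0.0, 800.0, 801)     # in units of rho_c
t = np.linspace(0.0, 400.0, 401)
j = np.zeros((t.size, x.size))
j[200, 400] = 1.0
t_obs, lc = lightcurve(j, x, t)
\end{verbatim}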
\begin{figure}[ht]
\includegraphics[width=0.49\textwidth]{xtmap_jsyn_obs2.pdf}
\includegraphics[width=0.49\textwidth]{xtmap_jsyn_obs1.pdf}
\includegraphics[width=0.49\textwidth]{lc_obs2.pdf}
\includegraphics[width=0.49\textwidth]{lc_obs1.pdf}
\caption{Top row: space-time diagrams of synchrotron emissivity directed towards two observers: Observer 1 (right panels) is located at $+x$, Observer 2 (left panels) is located at $-x$. Bottom row: lightcurves expected for Observers 1 and 2 for two frequency bands. Vertical dashed lines marking several flares seen by each observer correspond to the light cones (cyan lines) on the space-time diagrams above.}
\label{fig_lc}
\end{figure}
The lightcurves shown in Fig.\,\ref{fig_lc} indicate that synchrotron radiation produced during relativistic magnetic reconnection can be variable on time scales an order of magnitude shorter than the global light-crossing time scale. Indeed, while the light-crossing time of the simulation domain is $c\, \Delta t_{\rm obs} = 800\, \rho_{\rm c}$, the observed flares are clearly shorter than $100\,\rho_{\rm c}$. Two reasons for such rapid variability have been suggested \citep{2012ApJ...754L..33C}: spatial bunching and sweeping beams. Spatial bunching corresponds to highly localized emitting regions; our space-time diagrams of synchrotron emissivity indeed suggest emitting events localized both in space and in time. Sweeping beams result from a highly anisotropic distribution of energetic particles that are forced to gyrate collectively in the magnetic field. Relativistic reconnection results in particle anisotropy that is strongly energy-dependent; this effect is known as kinetic beaming \citep{2012ApJ...754L..33C}, and it is distinct from the bulk Doppler beaming of the relativistic jet. Analysis of the time evolution of the angular distribution of radiation revealed that these kinetic beams sweep across certain observers. It has been difficult to resolve this dilemma; it seems that both spatial bunching and sweeping beams are important in modulating the observed radiation signals.
Qualitatively similar results were obtained from analysis of radiation signatures of 2-dimensional magnetostatic structures called ``ABC fields'' \citep{2016ApJ...826..115N,2016ApJ...828...92Y}. These structures are defined as:
\begin{eqnarray}
B_x(x,y,z) &=& B_0\left[\sin(\alpha z)+\cos(\alpha y)\right]\,,
\\
B_y(x,y,z) &=& B_0\left[\sin(\alpha x)+\cos(\alpha z)\right]\,,
\\
B_z(x,y,z) &=& B_0\left[\sin(\alpha y)+\cos(\alpha x)\right]\,,
\end{eqnarray}
where $\alpha = 2\pi k/L$. Initial equilibrium is provided by a smoothly distributed current density $\bm{j} = (kc/L)\bm{B}$ obtained by shaping the local momentum distributions of uniformly spaced particles. For $k > 1$, this configuration is unstable to coalescence modes that result in the formation of dynamical current layers where magnetic reconnection and particle acceleration take place. Most of the high-frequency synchrotron radiation is produced when energetic particles leave the current layers and begin to interact with strong perpendicular magnetic fields. Once again, we find evidence for both spatial bunching and beam sweeping taking place simultaneously; the observed lightcurves show spikes on time scales an order of magnitude shorter than the light-crossing time of the simulation domain.
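As a sanity check on this configuration, the following sketch (our own illustration; the grid size and normalization are arbitrary) initializes the ABC field defined above on a periodic grid and verifies numerically that it is a Beltrami field, $\nabla\times\bm{B} = \alpha\bm{B}$, which is why a current density proportional to $\bm{B}$ can maintain the initial equilibrium:
\begin{verbatim}
import numpy as np

L, N, k, B0 = 1.0, 64, 2, 1.0
alpha = 2 * np.pi * k / L
x = np.linspace(0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

Bx = B0 * (np.sin(alpha * Z) + np.cos(alpha * Y))
By = B0 * (np.sin(alpha * X) + np.cos(alpha * Z))
Bz = B0 * (np.sin(alpha * Y) + np.cos(alpha * X))

def curl(Fx, Fy, Fz, d):
    # centered differences on a periodic grid
    g = lambda F, ax: (np.roll(F, -1, ax) - np.roll(F, 1, ax)) / (2 * d)
    return (g(Fz, 1) - g(Fy, 2), g(Fx, 2) - g(Fz, 0), g(Fy, 0) - g(Fx, 1))

Cx, Cy, Cz = curl(Bx, By, Bz, L / N)
# residual -> 0 with increasing resolution, confirming curl(B) = alpha*B
print(np.max(np.abs(Cx - alpha * Bx)))
\end{verbatim}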
The effective reduction factor for the variability time scale of radiation produced during relativistic magnetic reconnection remains unknown. It can be defined formally as $f_\gamma = R_{\rm diss}/(c t_{\rm obs})$,
where $R_{\rm diss}$ is the characteristic radius of the dissipation region.
In \cite{2016ApJ...824L..20A}, we suggested that it can be of the order of $f_\gamma \sim 10$--$100$; however, further investigation is necessary.
\acknowledgements{The author is grateful to Beno{\^i}t Cerutti, Dmitri Uzdensky, Gregory Werner, Mitchell Begelman, Jonathan Zrake, Yajie Yuan, Roger Blandford, and Martyna Chru{\'s}li{\'n}ska for collaboration on PIC simulations of relativistic reconnection. This work was supported by the Polish National Science Centre grant No. 2015/18/E/ST9/00580.}
\bibliographystyle{ptapap}
\section{Introduction}
Since the early 1980s there has been a vision that gaze-based interfaces could make our interaction with computers easier and more efficient \cite{Bolt:1981}.
Gaze-based interfaces have many promises: they work over distances, they are hygienic as there is nothing to touch, they keep the hands free for other tasks, they are silent, and they are maintenance-free as eye trackers have no moving parts.
At the same time, gaze-based interfaces usually need a time-consuming calibration, lack high accuracy, and are prone to the so-called Midas touch problem \cite{Jacob:1990}.
In 2013, Vidal et al.~introduced a novel concept for gaze interaction based on smooth pursuit eye movements \cite{Vidal:2013}.
In interfaces with moving targets, they compare the user's gaze and the movement of the target, hence allowing a matching pursuit movement to be detected by calculating Pearson's correlation coefficient.
The strength of this approach is its independence from offset and scaling and, therefore, the eye tracker does not need to be calibrated but can be instantly used.
Another advantage is that due to being independent from scale, interfaces on small areas, such as a smartwatch display \cite{Esteves:2015}, can be built.
A typical interface based on smooth pursuits offers several targets to give the user a choice.
Esteves et al.~\cite{Esteves:2015} showed that it is possible to distinguish eight targets moving on a circle.
However, they reported false positive rates of 12\% for pursuits-based interaction.
We argue that to make gaze-based interaction usable in everyday life, this rate needs to be significantly reduced.
Similarly, Vidal et al.~showed that detection accuracy drops significantly when showing more than 8 targets moving at the same speed and trajectory \cite{Vidal:2013}.
\begin{figure} [t]
\centering
\includegraphics[width=0.45\textwidth]{screenshotPNI}
\caption{We present an approach to enhance the detection of smooth pursuit eye movements. In particular, by using the slope of a regression line, our approach allows for (a) increasing the number of distinguishable targets and (b) decreasing the number of false positives.}
\label{figure:screenshotPNI}
\vspace{-3mm}
\end{figure}
This underpins an inherent challenge in Pursuits-based interfaces -- the number of targets and reliability present a trade-off: reducing the number of targets increases the detection reliability and vice versa.
At the same time, today's interfaces provide many different elements, such as the number of application icons on a smartphone or the keys on a soft keyboard.
To address this, we introduce a new Pursuits detection method that increases the accuracy of selections even with high numbers of on-screen targets.
Rather than the widely used Pearson correlation \cite{Esteves:2015,Esteves:2017:SSP:3126594.3126616,Khamis:2015:FSS:2800835.2804335,Khamis:2017:EAE:3126594.3126630,khamis2018avi,Khamis:2016:TUT:2971648.2971679,Khamis:2016:EWU:3012709.3012743,Velloso:2017:MCS:3086563.3064937,Velloso:2016:ADC:2901790.2901867}, our novel method uses the slope of a regression line.
In a study (N=16), we compared the performance of our approach\xspace to the state-of-the-art for pursuits-based interfaces.
In particular, we compared the influence of the number of targets on input speed and error rate.
Results show that our approach\xspace allows up to 24 targets to be distinguished.
For eight or more targets it reduces the error rate by a factor of 5 to 10 compared to the state-of-the-art approach.
We built a sample application and discuss how our approach\xspace supports designers in building highly reliable calibration-free gaze-based interfaces.
The contribution of this work is twofold: First, we describe a novel detection method for smooth pursuit eye movements. Second, we report on a comparison of the approach with the state of the art, revealing a significant increase in the number of detectable targets as well as in accuracy.
\vspace{-1mm}
\section{Background and Related Work}
While early works in gaze-based interaction relied mostly on fixations, the research community started to move towards detecting gaze behavior, such as gaze gestures \cite{Drewes2007}, and more recently smooth pursuit \cite{Vidal:2013}.
Smooth pursuit eye movements are naturally performed when gazing at a moving target.
Interaction using smooth pursuit (aka Pursuits) is promising since it does not require calibration because it relies on relative eye movements rather than precise fixation points.
\vspace{-1mm}
\subsection{Applications of Pursuits}
Pursuits has been utilized in several applications and domains.
Being a calibration-free and contactless gaze-only modality, a large body of work investigated its use on public displays, where immediate usability is essential.
For example, Vidal et al.~used Pursuits on public displays for gaming and entertainment applications \cite{Vidal:2013}.
In EyeVote, Pursuits was used for voting on public displays~\cite{Khamis:2016:EWU:3012709.3012743}.
Pursuits was also successfully deployed in active eye tracking settings, where the tracker moved on a rail system to follow users as they pass by large public displays \cite{Khamis:2017:EAE:3126594.3126630}.
Lutz et al.~used Pursuits for entering text on public displays \cite{JEMR2394}.
They worked around Pursuits' limitations by performing each letter's selection in two stages: the user first selects one of 5 groups of letters; the group then expands to allow the user to finally select the desired letter.
Other ubiquitous technologies leveraged Pursuits as well.
Esteves et al.~\cite{Esteves:2015} used Pursuits for gaze interaction with smart watches.
Velloso et al.~\cite{Velloso:2016} utilized Pursuits in smart homes.
Pursuits were also used in mixed reality. VR benefits from using Pursuits during interaction, especially when moving in VR \cite{khamis2018avi}, and when interacting with occluded targets \cite{7893315}. Pursuits was also employed in augmented reality glasses \cite{Esteves:2017:SSP:3126594.3126616}.
Eye movements are subtle and hard to observe. Hence Pursuits was used for authentication~\cite{Cymek:2014,Rajanna:2018,Rajanna:2017:GGU:3027063.3053070}.
As for desktop settings, Kangas et al.~\cite{Kangas:2016} and \v{S}pakov et al.~\cite{Spakov:2016} employed Pursuits in the form of a continuous signal to control on-screen widgets to, for example, adjust volume.
In addition to using it as a calibration-free gaze interaction technique, Pursuits can also be used for calibration. Pfeuffer et al.~\cite{Pfeuffer:2013:PCM:2501988.2501998} introduced a method to calibrate the eye tracker as users follow on-screen moving targets. Similarly, Celebi et al.~\cite{Celebi:2014:SPC:2578153.2583042} used Pursuits for eye tracker calibration.
Khamis et al.~\cite{Khamis:2016:TUT:2971648.2971679} used gradually revealing text to calibrate the eye tracker while users read-and-pursue. A major drawback of previous works is that the interfaces often have a limited number of targets shown at once.
Previous implementations could distinguish up to 8 targets reliably \cite{Esteves:2015,Velloso:2017:MCS:3086563.3064937,Vidal:2013}. We show that it is possible to distinguish 24 targets with significantly higher accuracy compared to state of the art.
\subsection{Implementations of Pursuits}
There are two predominant implementations of Pursuits detection for interaction, one of which uses the Euclidean distance between the gaze estimates and target positions \cite{Kangas:2016,Rajanna:2018,Rajanna:2017:GGU:3027063.3053070,Spakov:2016}, while the other one employs Pearson's product moment correlation \cite{Esteves:2015,Esteves:2017:SSP:3126594.3126616,Khamis:2015:FSS:2800835.2804335,Khamis:2017:EAE:3126594.3126630,khamis2018avi,Khamis:2016:TUT:2971648.2971679,Khamis:2016:EWU:3012709.3012743,Velloso:2017:MCS:3086563.3064937,Velloso:2016:ADC:2901790.2901867}.
The Euclidean distance method is susceptible to inaccurate detection in the presence of an offset between the real gaze point and the estimated one. This means that it is not reliable when the eye tracker is not calibrated, or when the gaze estimation is not accurate. In contrast, the correlation method is independent of offsets and scaling. For this reason, it works reliably without calibration \cite{Khamis:2016:EWU:3012709.3012743,Velloso:2017:MCS:3086563.3064937,Vidal:2013}, and even on small interfaces such that of smart watches \cite{Esteves:2015}. On the downside, the accuracy of the correlation-based detection drops significantly in the presence of more than 8 targets \cite{Esteves:2015,Velloso:2017:MCS:3086563.3064937,Vidal:2013}.
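A small numerical illustration of this offset-invariance (our own synthetic example with an artificial affine mis-calibration; it is not data from any of the cited studies) shows that an affine error in the gaze estimate leaves the correlation untouched while inflating any distance-based metric:
\begin{verbatim}
import numpy as np

t = np.linspace(0, 2.5, 150)
target = np.sin(2 * np.pi * t / 2.5)
gaze = 1.08 * target + 0.4    # mis-calibration: scaling plus offset

print(np.corrcoef(gaze, target)[0, 1])  # ~1.0: correlation unaffected
print(np.mean(np.abs(gaze - target)))   # large: distance matching fails
\end{verbatim}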
\section{Regression Slope-based Detection of Pursuits}
We introduce a novel approach of detecting Pursuits and start with theoretical foundations before describing our enhancements and implementation.
\subsection{Theoretical Background} \label{Theoretical Considerations}
A smooth pursuit detection algorithm receives the gaze coordinates and the coordinates of the on-screen targets as input.
It collects a certain number of data samples, calculates a metric function for each target, and then compares the metric values of each target with a threshold or threshold interval. Targets whose metric values match the threshold condition are reported as detected. Typical metric functions for pursuit detection are Euclidean distance \cite{Kangas:2016,Rajanna:2018,Rajanna:2017:GGU:3027063.3053070,Spakov:2016} or correlation \cite{Esteves:2015,Esteves:2017:SSP:3126594.3126616,Khamis:2015:FSS:2800835.2804335,Khamis:2017:EAE:3126594.3126630,khamis2018avi,Khamis:2016:TUT:2971648.2971679,Khamis:2016:EWU:3012709.3012743,Velloso:2017:MCS:3086563.3064937,Velloso:2016:ADC:2901790.2901867}. Detection algorithms using Euclidean distance need a calibrated eye tracker \cite{Rajanna:2018,Rajanna:2017:GGU:3027063.3053070,Spakov:2016}, while detection methods using correlation are independent from offset and scaling \cite{Velloso:2017:MCS:3086563.3064937,Vidal:2013}. The implicit assumption behind this statement is that the calibration error can be described by an affine transformation.
\subsection{From Correlation-based to Slope-based detection}
The algorithm described here works with linear regression which is, in terms of mathematics, closely related to correlation.
Linear regression and correlation need a list of value pairs which in our case are the x-coordinates of the gaze g\textsubscript{x} and the target position t\textsubscript{x} or the y-coordinates, respectively. Every value pair can be plotted in a plane. The linear regression analysis finds the straight line that best fits the plotted data.
The regression coefficient is the \emph{slope} of the line, the \emph{intercept} is the value where the line crosses the ordinate, and the \emph{correlation} is a measure for the quality of the fit. If the gaze follows the target perfectly and the eye tracker provides accurate positions, then g\textsubscript{x} = t\textsubscript{x} and the plot is the bisecting line of ordinate and abscissa with intercept=0.0, slope=1.0, and correlation=1.0. If the gaze \emph{does not follow the target}, the values for intercept, slope, and correlation differ considerably from these perfect values.
The correlation detection method typically requires a correlation value higher than 0.8 \cite{Velloso:2017:MCS:3086563.3064937}. A \emph{calibration error} results in an intercept (=offset) different from zero and an only slightly changed value for the slope (=scaling factor) while the correlation does not change. Our pilot studies showed that calibration errors for the scaling factor are in a range from 0.9 to 1.1.
\vspace{-2mm}
\subsection{Advantages of Slope-based Pursuits detection}
Our method presented here requires the slope to be close to 1.0 -- hence, we refer to this method as slope detection.
For the study, we used a threshold interval from 0.77 to 1.3.
Similar to the correlation method, the slope is independent from offsets. Consequently, the slope method detects Pursuits without calibration.
The slope detection has a further advantage: It distinguishes between synchronously moving targets of different trajectory sizes, while the correlation method does not. The reason is that the correlation is insensitive to offsets and scaling, while the regression line's slope is only insensitive to offsets.
\vspace{-2mm}
\subsection{Implementation}
We implemented both detection methods, the correlation and the slope method. We used the following formulas:
Regression analysis:
\vspace{-3mm}
\[ s = \frac{n \displaystyle\sum_{i=1}^n x y - \displaystyle\sum_{i=1}^n x \displaystyle\sum_{i=1}^n y} { n \displaystyle\sum_{i=1}^n x^2 - (\displaystyle\sum_{i=1}^n x)^2} \]
Correlation:
\vspace{-3mm}
\[ r = \frac{n \displaystyle\sum_{i=1}^n x y - \displaystyle\sum_{i=1}^n x \displaystyle\sum_{i=1}^n y} { \sqrt{n \displaystyle\sum_{i=1}^n x^2 - (\displaystyle\sum_{i=1}^n x)^2 } \sqrt{n \displaystyle\sum_{i=1}^n y^2 - (\displaystyle\sum_{i=1}^n y)^2 } } \]
\\
where x is a gaze coordinate, y the corresponding target coordinate, and n the size of the data window.
In contrast to formulas which require mean values and consequently need to sum over all data in the window for every new sample, these formulas allow a sliding window to be maintained by only subtracting the oldest value from and adding the newest value to the running sums.
As a result, the algorithm's run time is independent of the data window size.
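A minimal sketch of this incremental scheme (our own reconstruction from the formulas above, not the study software; the class and variable names are ours) maintains the five running sums needed for both metrics, so each new sample costs $O(1)$ regardless of the window size:
\begin{verbatim}
from collections import deque

class SlidingFit:
    """Running sums for slope and correlation over the last n pairs
    (x = gaze coordinate, y = target coordinate, as in the formulas)."""
    def __init__(self, n):
        self.n, self.buf = n, deque()
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def add(self, x, y):
        self.buf.append((x, y))
        self.sx += x; self.sy += y
        self.sxx += x*x; self.syy += y*y; self.sxy += x*y
        if len(self.buf) > self.n:          # slide: drop oldest sample
            ox, oy = self.buf.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox*ox; self.syy -= oy*oy; self.sxy -= ox*oy

    def slope(self):
        n = len(self.buf)
        den = n * self.sxx - self.sx**2
        return 0.0 if den == 0 else (n*self.sxy - self.sx*self.sy) / den

    def corr(self):
        n = len(self.buf)
        den = ((n*self.sxx - self.sx**2)
               * (n*self.syy - self.sy**2)) ** 0.5
        return 0.0 if den == 0 else (n*self.sxy - self.sx*self.sy) / den
\end{verbatim}
Returning 0.0 for a degenerate window (e.g., during a fixation, when the gaze coordinate is constant) makes such samples simply fail the threshold condition.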
We further enhanced the algorithm. For a positive detection, rather than relying on a single sample as in previous work \cite{Velloso:2017:MCS:3086563.3064937,Vidal:2013}, our threshold condition needs to be met for a number of consecutive samples, hence introducing a \emph{minimum signal duration}. The minimum signal duration reduces false positives.
Reducing false positives is also possible by increasing the data window size. However, pilot studies showed that a small data window combined with a minimum signal duration excludes more false positives than a larger data window.
As a further enhancement, we added some \emph{smoothing} to the gaze signal by calculating the average over the last k samples. Smoothing the gaze signal improves the detection with the slope method but increases the false positive rate for the correlation detection. For a fair comparison in the user study, we used the smoothed signal only for the slope method. We also adjusted the minimum signal duration for best results.
While pilot testing, we observed that a false positive detection of the same target often followed successful detections (despite clearing all buffers after a successful detection). We found the reason to be the reaction time of the user who usually continues gazing at the target after successful detection. To address this, we dropped some samples after a positive detection.
Table \ref{table:Parameters} shows the parameters used for both detection methods. We used an eye tracker which delivers 60 samples per second.
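Putting these pieces together, the per-target detection loop can be sketched as follows (again our own reconstruction of the logic described above, not the study software; it uses the \texttt{SlidingFit} helper and the slope-method parameters from Table \ref{table:Parameters}):
\begin{verbatim}
from collections import deque

WINDOW, SMOOTH, MIN_DUR, SKIP = 30, 20, 15, 30   # samples at 60 Hz
SLOPE_LO, SLOPE_HI = 0.77, 1.3

fit_x, fit_y = SlidingFit(WINDOW), SlidingFit(WINDOW)
gaze_hist = deque(maxlen=SMOOTH)
streak, cooldown = 0, 0

def feed(gaze, target):
    """One sample; returns True once per selection of this target."""
    global streak, cooldown
    if cooldown > 0:                  # drop samples after a detection
        cooldown -= 1
        return False
    gaze_hist.append(gaze)            # smooth the gaze signal only
    gx = sum(g[0] for g in gaze_hist) / len(gaze_hist)
    gy = sum(g[1] for g in gaze_hist) / len(gaze_hist)
    fit_x.add(gx, target[0])
    fit_y.add(gy, target[1])
    if len(fit_x.buf) < WINDOW:       # wait for a full data window
        return False
    ok = (SLOPE_LO <= fit_x.slope() <= SLOPE_HI and
          SLOPE_LO <= fit_y.slope() <= SLOPE_HI)
    streak = streak + 1 if ok else 0  # minimum signal duration
    if streak >= MIN_DUR:
        streak, cooldown = 0, SKIP
        fit_x.__init__(WINDOW); fit_y.__init__(WINDOW)  # clear buffers
        gaze_hist.clear()
        return True
    return False
\end{verbatim}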
\begin{table} [t]
\centering
\begin{tabular}{ l r r }
\toprule
Parameter & Correlation Method & Slope Method\\ [0.5ex]
\midrule
Window size & 30 samples & 30 samples\\
Smoothing & 0 samples & 20 samples \\
Minimum duration & 20 samples & 15 samples \\
Threshold & 0.8 & 0.77 -- 1.3 \\
Skipped samples & 30 samples & 30 samples \\
\bottomrule
\end{tabular}
\caption{Parameters used for the correlation and slope detection methods.}
\vspace{-3mm}
\label{table:Parameters}
\end{table}
\section{Evaluation}
We conducted a user study to compare our approach to the state of the art method for detecting Pursuits.
\subsection{Apparatus}
To evaluate our Pursuits detection approach, we developed a sample application (see Figure \ref{figure:screenshotPNI}) in which users can enter digits (0 to 9) and letters (A to N) via Pursuits.
The application runs on an Acer Aspire V17 Nitro laptop with integrated Tobii IS4 Base AC eye tracker (60 Hz).
The display has a resolution of $1920 \times 1080$ pixels on $38.4\,\text{cm} \times 21.7\,\text{cm}$, which results in 0.2\,mm per pixel or 50\,px per centimeter. The average distance between the participants' eyes and the display is around 50\,cm\,$\pm$\,5\,cm, which corresponds to 0.02$^{\circ}$ per pixel or around 50\,px per degree. The targets move clockwise on a circle with a radius of 130\,px (2.6$^{\circ}$), except for the `cancel'-target, which moves counter-clockwise on a circle with a radius of 80\,px (1.6$^{\circ}$). The radius of each target is 20\,px (0.4$^{\circ}$) and they move at 6.5$^{\circ}$/s (2.5 seconds per rotation).
The interface provides visual and acoustic feedback for the detection. Every target that matches the threshold condition is filled with color, whose intensity increases the longer the threshold condition stays true, and reaches its maximum once the minimum signal duration is reached. Different beeps are used for correct and wrong entries.
\subsection{Study Design}
The study was designed as a repeated-measures experiment with two independent variables. The first was the Pursuits detection method, with two conditions: correlation-based detection (baseline) and slope-based detection (our approach\xspace). The second was the number of targets; participants went through 10 blocks, starting with 6 targets in the first block and incrementing the number of targets by 2 in each subsequent block, up to 24 simultaneously moving targets. The order of methods was randomized. The task of the user was to enter 4 symbols in each block.
\vspace{-1mm}
\subsection{Procedure}
We invited 16 participants (3 females) with normal or corrected-to-normal vision, aged between 24 and 58. After arriving at the lab, participants filled out a form with demographic data and received a short introduction to the system (Figure \ref{figure:screenshotPNI}). To test how well the methods work for spontaneous gaze interaction, we did not calibrate the eye tracker for each participant. Instead, it was calibrated only once by one of the authors. The participants' task was to enter a four-digit number by following the clockwise rotating number targets with their gaze. In case of entering a wrong digit, the participants had to delete it by selecting the counter-clockwise rotating `cancel'-target.
Participants first completed a training task with six targets in which they entered four symbols (digits and letters), and tried to cancel an entry. These entries were excluded from the analysis. Participants then went through the 10 blocks, each covering a number of targets (6,8,10,12,14,16,18,20,22,24) and consisting of two selection tasks (one per detection method).
Every selection task had a timeout of 90 seconds. If a participant was not able to fulfill the task in time or wished to abort, the study continued with the other method until the participant failed. We concluded with a semi-structured interview.
\vspace{-1mm}
\subsection{Results}
Apart from the qualitative feedback and observations, we logged the \textit{maximum number of targets} shown simultaneously from which participants could still perform successful selections. We further logged the \textit{errors}, which correspond to the number of times users canceled their input. We also logged the average \textit{task completion time}, which denotes the time taken to enter all 4 symbols correctly. Finally we logged the average \textit{entry time} for entering each symbol.
\vspace{-1mm}
\subsubsection{Interviews and Observations}
All participants understood immediately how to operate the system and how to enter the digits, but it seemed that they were at the beginning of a steep learning curve. Many treated the user study like a computer game and were highly ambitious to reach a high score. All participants reported that the task required a lot of focus. All participants reported that they found the slope-based method more accurate and easier; some of them even mentioned this preference before being asked.
\subsubsection{Maximum Selectable Targets}
We counted the maximum number of displayed targets from which participants were able to enter the four demanded symbols (see the bars in Figure \ref{figure:graph_times}). The slope detection approach outperformed the correlation detection method.
Only one participant was able to select more targets with the latter.
A Wilcoxon signed-rank test revealed that the slope detection method results in a significant increase of the number of displayed targets from which participants successfully made selections \wilcoxon{3.168}{0.01}. Using the correlation method, the maximum target number with which the participants were able to accomplish the task was between 10 and 24 \meansd{15.0}{3.7}. Using the slope method, the maximum target number was between 8 and 24 \meansd{21.6}{4.6}.
\subsubsection{Errors}
Whenever the participant entered a wrong digit, she or he had to cancel the entry with the `cancel'-target. Every entry of the `cancel'-target was counted as an error. As seen in Figure \ref{figure:graph_errors}, the average number of errors increases in the presence of more targets; however, the increase in errors is sharper in case of the correlation method. For example, while both methods yielded almost no errors at 6 targets across all participants, the mean number of errors at 8 targets was 1.25 and 0.13 for the correlation and slope methods, respectively. Similarly, at 24 targets, participants made 22 errors on average in case of the correlation method, but only 3 errors on average in case of the slope method. Note that Figure \ref{figure:graph_errors} displays an average over the participants who were successful in the respective conditions.
\subsubsection{Task Completion Time}
We measured the completion time for successfully entering 4 symbols, starting from the moment the symbols were displayed until the moment the fourth symbol was entered. This also includes cancellations. As illustrated by Figure \ref{figure:graph_times}, the average completion time is nearly identical across both methods for up to 8 targets, but then increases sharply for the correlation method compared to the slope method. As with the errors, successful completion times exclude cases where participants failed to enter the 4 symbols, and hence the average is calculated over a varying number of participants. Completion times are longer for the correlation method, mainly due to the many cancellations that participants had to perform.
\begin{figure} [t]
\centering
\includegraphics[trim={3.5cm 18cm 1cm 3cm},clip,width=0.5\textwidth]{graph_times2}
\caption{Completion time over number of targets. The slope method was consistently faster than the correlation method. The bars in the background indicate the number of participants who successfully completed the task.}
\vspace{-4mm}
\label{figure:graph_times}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[trim={2cm 16cm 3cm 4cm},clip,width=0.45\textwidth]{graph_errors}
\caption{Errors over number of targets. Users made consistently fewer errors with the slope method.}
\label{figure:graph_errors}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[trim={2cm 16cm 3cm 4cm},clip,width=0.45\textwidth]{graph_timeperentry}
\caption{Time per Entry. Participants performed slightly faster on a single entry with the correlation method.}
\vspace{-3mm}
\label{figure:graph_timeperentry}
\end{figure}
\subsubsection{Symbol Entry Time}
Unlike the overall completion time which accounts for entering 4 symbols, including the cancellations, this metric reflects the average time it took to select a single entry from the moment the target was gazed at until the moment the target was deemed selected. Figure \ref{figure:graph_timeperentry} shows the times for entering a digit or a cancel operation.
The slight decrease in selection times could be the result of a learning effect, or of the fact that the entry times for the higher target numbers are calculated based on successful participants only, who may also be the better-performing ones. Entry times did not vary much across the detection methods. A Wilcoxon signed-rank test showed no evidence of significant effects of detection method on entry time.
One interesting observation is that the time per entry does not increase with the number of targets. The other interesting observation is that the times for the slope detection method are higher than the times for the correlation detection method (see discussion). This is remarkable as the slope detection uses a smaller minimum signal duration.
\section{Discussion}
\subsection{Comparing both Methods}
The detection methods studied here depend on different parameters -- the threshold, the data window size, the minimum signal duration, and the smoothing window size.
A systematic approach with five different values for each parameter would have led to 625 combinations for each detection method. Hence, we decided to compare the methods using optimal parameters for each of them. In particular, we used the same correlation value of 0.8 and a data window size of 30 samples as Vidal et al.~\cite{Vidal:2013}. We showed that using a different approach it is possible to almost triple the number of targets. Note, that in our implementation, the correlation method performs even better than in previous work \cite{Vidal:2013}, hence supporting our endeavour to provide a fair comparison.
\subsection{Understanding the Results}\label{Understanding the Results}
The evaluation yielded significant differences in both methods. We explain and discuss the reasons for these findings.
If we assume a perfectly calibrated and accurate eye tracker, and a user whose gaze follows exactly a target on a circle, the x and the y coordinate of the gaze over time would have the shape of a sine and would be shifted $\pi$/2 against each other.
If there are n targets on the circle, the coordinates of the previous and next target are phase shifted $\pm$2$\pi$/n against the gaze coordinates. The situation for n=20 is depicted in Figure \ref{figure:phase20_part}. The gray area in the figure indicates the current data window. Figure \ref{figure:linreg} shows the regression analysis for the data window in Figure \ref{figure:phase20_part}.
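This idealized picture is easy to reproduce numerically; the following sketch (our own noise-free simulation, not participant data) generates the ideal gaze signal and a neighboring target shifted by $2\pi/n$, then evaluates both windowed metrics, analogous to Figures \ref{figure:phase20_cor_part} and \ref{figure:phase20_slope_part}:
\begin{verbatim}
import numpy as np

n, fps, period, w = 20, 60, 2.5, 30   # targets, Hz, s/rotation, window
t = np.arange(0, 3 * period, 1 / fps)
gaze = np.sin(2 * np.pi * t / period)                  # ideal gaze x
nbr  = np.sin(2 * np.pi * t / period + 2 * np.pi / n)  # next target

def windowed(metric):
    return np.array([metric(gaze[i:i+w], nbr[i:i+w])
                     for i in range(len(t) - w)])

slope = windowed(lambda a, b: np.polyfit(a, b, 1)[0])
corr  = windowed(lambda a, b: np.corrcoef(a, b)[0, 1])

# fraction of windows in which the *neighbor* passes each threshold
print((corr > 0.8).mean())
print(((slope >= 0.77) & (slope <= 1.3)).mean())
\end{verbatim}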
\begin{figure} [t]
\centering
\includegraphics[width=0.30\textwidth]{phase20_part}
\caption{The previous (red) and the next (green) targets are phase shifted by $\pm$2$\pi$/20 against the gaze (black).}
\label{figure:phase20_part}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[width=0.22\textwidth]{linregx}
\includegraphics[width=0.22\textwidth]{linregy}
\caption{Regression analysis plot for the x-coordinate (left) and the y-coordinate (right).}
\label{figure:linreg}
\end{figure}
The points all lie on a Lissajous curve which has the shape of an ellipse. The phase shift affects the eccentricity; the smaller the phase shift, the closer the shape is to a diagonal line. The data window size determines the fraction of the ellipse on which the points lie.
This allows the influence of the data window size on the detection to be understood. If the data window covers a full cycle, meaning the time for the data window is the time for a target to complete a full circle, the data points form a complete ellipse.
In this case, the slope of the regression line and the correlation will be constant over time. If the phase shift is small, both values are close to 1.0. In the depicted case with 20 targets, these values are around 0.95.
With a smaller data window (Figure \ref{figure:phase20_part}), the data points fill only a part of the ellipse (Figure \ref{figure:linreg}), which moves over time. At the time shown here for the x values, the data points are on an almost straight line, and the correlation and the slope are close to 1.0. At the same time, the y values fill the tip of the ellipse, and the slope of the regression line and the correlation are different from 1.0. As the threshold condition has to be true for both the x and y coordinates, this means that there is no positive detection at that moment. However, there are moments in between where the detection algorithm reports a positive detection.
To get smooth pursuit eye movements, the target speed has to be in a certain range, typically 5--20$^{\circ}$/s. If we reduce the circle radius and keep the target speed, the cycles per second increase. If we also keep the data window size this means that the data window covers more of the ellipse shown in Figure \ref {figure:linreg}.
\begin{figure} [t]
\centering
\includegraphics[width=0.40\textwidth]{phase20_cor_part}
\caption{Correlation values for the example in Figure \ref{figure:phase20_part}. The bars indicate a true threshold condition for x (up), y (middle) and both (down). The light color in the bars indicate the minimum signal duration.}
\vspace{-5mm}
\label{figure:phase20_cor_part}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[width=0.40\textwidth]{phase20_slope_part}
\caption{Slope values for the example given in Figure \ref{figure:phase20_part}. The bars have the same meaning as explained in Figure \ref{figure:phase20_cor_part}.}
\vspace{-4mm}
\label{figure:phase20_slope_part}
\end{figure}
Having understood the relations between target speed, data window size, and phase difference, it remains to explain why the slope method performed better. Figure \ref{figure:phase20_cor_part} shows the correlation values for the given example, and Figure \ref{figure:phase20_slope_part} shows the values for the slope from the linear regression. The dash-dotted lines indicate the thresholds, and the bars indicate whether the threshold condition is true. The bars are lightly colored before the minimum signal duration is reached. The lowest bars indicate whether both threshold conditions are true.
The correlation is close to 1.0 most of the time and satisfies the threshold condition (Figure \ref{figure:phase20_cor_part}). The correlation value drops when the data window covers the tip of the ellipse. As the threshold condition has to be true for both the x and the y coordinate, the correlation method signals detection between both drops.
The slope values pass through the threshold interval quite quickly and satisfy the threshold condition for a shorter time (see Figure \ref{figure:phase20_slope_part}). The overlap of both signals for the x and y coordinates is minimal. Combined with the minimum signal duration, this means that the slope method does not report false positives for 20 targets on a circle (under optimal conditions), while the correlation method does. This is the reason why the slope method can distinguish more targets on a circle.
On the other hand, this also means that the correlation method detects more easily and more quickly (but at the expense of more false positives). This could explain why the entry time for the correlation method is slightly shorter (Figure \ref{figure:graph_timeperentry}).
\section{Conclusions and Future Work}
The introduction of a minimum signal duration improves both detection methods, correlation and slope, as it filters out false positives.
The new detection algorithm based on the slope of the regression line performs better in separating targets on a circle.
This does not mean that the slope detection is better in general.
It seems that the slope detection does not detect true positives as well as the correlation method but creates fewer false positives.
In the situation of selecting a target from a circle, it is not necessary to have a continuous signal for true positives.
The first occurrence of a positive signal triggers the entry and possible gaps in the detection signal later do not matter. The property of fewer false positives seems to be more important in our scenario.
The improvement of smooth pursuit detection with the slope method can be either used for increasing the number of targets which offers the user more options or to provide a more robust interface with fewer false positives. We discussed the capabilities of two detection algorithms under idealistic conditions. Future work could try to explain the influence of noise in the gaze data on a theoretical level.
Directions for future work also include testing the new detection method in specific application scenarios and with other eye trackers. Researchers could investigate, how quickly users adapt to such interfaces and whether the need to strongly focus on the target decreases over time. Furthermore, researchers and practitioners could apply and evaluate the slope-based method in domains other than gaze, such as motion matching for body movements \cite{Clarke:2017:RCB:3139486.3130910,Clarke:2016:TCV:2971648.2971714,Clarke:2017:MSS:3126594.3126626}, and mid-air gestures \cite{Carter:2016:PMG:2858036.2858284}.
\balance{}
\bibliographystyle{SIGCHI-Reference-Format}
\section{Introduction}
In this paper, we fully describe the intersection theory of the moduli stack $\mathcal{B}_{r,d}$ of vector bundles on $\mathbb{P}^1$ bundles. Precisely, an object of $\mathcal{B}_{r,d}$ over a scheme $T$ is the data of a rank $2$ vector bundle $W$ on $T$ and a rank $r$, relative degree $d$ vector bundle $E$ on $\mathbb{P} W$.
To describe generators of the Chow or cohomology ring,
let $\pi: \mathbb{P} \mathcal{W} \rightarrow \mathcal{B}_{r,d}$ be the universal $\mathbb{P}^1$ bundle and let $w_1 = c_1(\mathcal{W})$ and $w_2 = c_2(\mathcal{W})$ be Chern classes of the universal rank $2$ bundle $\mathcal{W}$.
Let $\mathcal{E}$ be the universal rank $r$ bundle on $\mathbb{P} \mathcal{W}$. If $z = c_1(\O_{\mathbb{P} \mathcal{W}}(1))$, the Chern classes of $\mathcal{E}$ are uniquely expressible as $c_i(\mathcal{E}) = \pi^*(a_i) + \pi^*(a_i') z$ for $a_i \in A^i(\mathcal{B}_{r,d})$ and $a_i' \in A^{i-1}(\mathcal{B}_{r,d})$. We show that the rational Chow ring of $\mathcal{B}_{r,d}$ is freely generated by these classes. Then, we show that the integral Chow ring is torsion-free, and describe it as a subring of the rational Chow ring. This also determines the cohomology ring of $\mathcal{B}_{r,d}$, as it agrees with the Chow ring.
\begin{theorem} \label{main}
We have $A_{\mathbb{Q}}^*(\mathcal{B}_{r,d}) = \mathbb{Q}[w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r']$. The integral Chow ring $A^*(\mathcal{B}_{r,d}) \subset A_{\mathbb{Q}}^*(\mathcal{B}_{r,d})$ is the subring generated by $w_1, w_2$ and the Chern classes of $\pi_*\mathcal{E}(i)$ for $i = 0, 1, 2, \ldots$. Moreover, the cycle class map $A^*(\mathcal{B}_{r,d}) \to H^{2*}(\mathcal{B}_{r,d})$ is an isomorphism.
\end{theorem}
In the special case when the $\mathbb{P}^1$ bundle is trivial ($W$ is trivial), the integral Chow ring has a somewhat simpler description. This describes the Chow ring of the moduli space $\mathcal{B}_{r,d}^\dagger$ of vector bundles on a fixed $\mathbb{P}^1$. (Precisely, an object of $\mathcal{B}_{r,d}^\dagger$ over a scheme $T$ is a rank $r$, relative degree $d$ vector bundle $E$ on the trivial $\mathbb{P}^1$ bundle $\mathbb{P}^1 \times T$.) The map $\mathcal{B}_{r,d}^\dagger \to \mathcal{B}_{r,d}$ is the $\mathrm{GL}_2$ bundle associated to $\mathcal{W}$ on $\mathcal{B}_{r,d}$. By \cite[Theorem 2]{V} of Vistoli, the pullback $A^*(\mathcal{B}_{r,d}) \to A^*(\mathcal{B}_{r,d}^\dagger)$ is surjective with kernel generated by $w_1, w_2$.
\begin{theorem} \label{m2}
We have $A_{\mathbb{Q}}^*(\mathcal{B}_{r,d}^\dagger) = \mathbb{Q}[a_1, \ldots, a_r, a_2', \ldots, a_r']$.
Integrally, $A^*(\mathcal{B}_{r,d}^\dagger) \subset A_{\mathbb{Q}}^*(\mathcal{B}_{r,d}^\dagger)$ is the subring generated by
$a_1, \ldots, a_r$ and the coefficients $f_i$ of the power series
\begin{equation*}
\sum_{i=0}^\infty f_i t^i = \exp\left(\int \frac{d \cdot (a_1 +a_2t +\ldots + a_r t^{r-1}) - (a_2' + a_3't + \ldots + a_r' t^{r-2})}{1 + a_1t + \ldots + a_rt^r}dt \right).
\end{equation*}
Moreover, the cycle class map $A^*(\mathcal{B}_{r,d}^\dagger) \to H^{2*}(\mathcal{B}_{r,d}^\dagger)$ is an isomorphism.
\end{theorem}
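To illustrate the shape of these generators, consider two small cases (these expansions are ours, obtained by directly evaluating the power series in Theorem \ref{m2}, and are included only for illustration). For $r = 1$ there are no classes $a_i'$, the integrand reduces to $d \cdot a_1/(1 + a_1 t)$, and
\[\sum_{i=0}^\infty f_i t^i = \exp\big(d \log(1 + a_1 t)\big) = (1 + a_1 t)^d, \qquad f_i = \binom{d}{i} a_1^i,\]
so the theorem gives $A^*(\mathcal{B}_{1,d}^\dagger) = \mathbb{Z}[a_1]$. For $r = 2$, expanding to first order gives $f_1 = d \cdot a_1 - a_2'$: the class $a_2'$ alone need not be integral, but the combination $d \cdot a_1 - a_2'$ is.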
We point out several interesting features of our results:
\begin{enumerate}
\item Although $A^*_{\mathbb{Q}}(\mathcal{B}_{r,d})$ is obviously finitely generated as a $\mathbb{Q}$ algebra, $A^*(\mathcal{B}_{r,d})$ is \emph{not} finitely generated as a $\mathbb{Z}$ algebra (see Corollary \ref{nfg}).
\item The rational Chow ring $A^*_{\mathbb{Q}}(\mathcal{B}_{r,d})$ is independent of $d$.
This may lead one to wonder if the isomorphism class of $\mathcal{B}_{r,d}$ could be independent of $d$. However, considering integral Chow rings, one can show that $\mathcal{B}_{2,1}$ and $\mathcal{B}_{2,0}$ (resp. $\mathcal{B}_{2,1}^\dagger$ and $\mathcal{B}_{2,0}^\dagger$) are \emph{not} isomorphic (see Corollary \ref{b2}).
\item To show $A^*(\mathcal{B}_{r,d})$ is torsion-free, we stratify by splitting loci, which in turn are modeled by spaces admitting affine stratifications. This stratification is also what allows us to see that the Chow and cohomology rings of $\mathcal{B}_{r,d}$ agree (see Lemma \ref{coh}).
\item Using the theory of higher Chow groups, we show that the push forward maps associated to the inclusions of strata are all injective on Chow. This relies on a vanishing result for higher Chow groups of a point with torsion coefficients. Although this vanishing result only holds for an algebraically closed field of characteristic zero, we deduce from it a rank equality which allows us to establish our theorem in \emph{all} characteristics.
\end{enumerate}
\begin{remark}
Here, we are considering $\mathbb{P}^1$ bundles equipped with a relative degree $1$ line bundle. However, not all families of genus $0$ curves admit a relative degree $1$ line bundle. Nevertheless, our work does determine the rational Chow ring $A_{\mathbb{Q}}^*([\mathcal{B}_{r,d}^\dagger/\mathrm{PGL}_2])$, because it is equal to $A_{\mathbb{Q}}^*([\mathcal{B}_{r,d}^\dagger/\mathrm{SL}_2])$ (the $\mathrm{SL}_2$ quotient is a $\mu_2$ gerbe over the $\mathrm{PGL}_2$ quotient). To find the latter, note that $[\mathcal{B}_{r,d}^\dagger/\mathrm{SL}_2] \to \mathcal{B}_{r,d}$ is the $\gg_m$ bundle associated to $\det \mathcal{W}$, so applying Vistoli's theorem \cite[Theorem 2]{V}, we have
\[A_{\mathbb{Q}}^*([\mathcal{B}_{r,d}^\dagger/\mathrm{SL}_2]) = A_{\mathbb{Q}}^*(\mathcal{B}_{r,d})/\langle w_1 \rangle = \mathbb{Q}[w_2, a_1, \ldots, a_r, a_2', \ldots, a_r'].\]
This result will be used to determine the intersection theory of low-degree Hurwitz spaces by S. Canning and the author in \cite{CL}.
\end{remark}
This paper is organized as follows. In Section \ref{2}, we briefly state necessary results concerning equivariant Chow rings and higher Chow groups. In Section \ref{con}, we describe a sequence of opens $\mathcal{U}_m$ that exhaust $\mathcal{B}_{r,d}$. Following a construction of Bolognesi--Vistoli, each $\mathcal{U}_m$ can be realized as a global quotient.
In Section \ref{chow}, we use this description to calculate the rational Chow ring of $\mathcal{B}_{r,d}$. Finally, in Section 5, we prove that the integral Chow ring is torsion-free and provide a description of the generators.
\subsection*{Acknowledgements} I would like to thank Ravi Vakil for many helpful conversations and in particular pointing me towards ideas of Akhil Mathew and Eric Larson on higher Chow groups. I am grateful to the latter two for explaining higher Chow groups and how to use them. Thanks also to Samir Canning for thoughtful conversations about this work and comments on an earlier version of this paper.
\section{Preliminaries on equivariant Chow and higher Chow} \label{2}
Most of the stacks we encounter in this paper are quotients of open subsets $X$ of affine space by linear algebraic groups $G$. The Chow ring of a quotient stack $[X/G]$ is defined as the $G$-equivariant Chow ring of $X$, which in turn is defined in \cite{EG} using models based on Borel's mixing construction. More precisely, given a representation $V$ of $G$ and an open subset $U \subset V$ on which $G$ acts freely, if $\codim U^c > i$, we define $A^i([X/G]) = A^i(X \times_G U)$ where $X \times_G U$ is the quotient of $X \times U$ by the diagonal $G$ action.
This is well-defined because $A^*(V) \cong A^*(X)$ whenever $V$ is a vector bundle over $X$ (``homotopy") and $A^i(X - Z) = A^i(X)$ whenever $\codim Z > i$ (``excision").
\begin{Example} \label{bglr}
Let $V_N = \mathrm{Mat}_{r \times N}(k) = k^{\oplus rN}$ with $\mathrm{GL}_r$ acting on $V_N$ by left multiplication.
The group $\mathrm{GL}_r$ acts freely on the open subset $U_N \subset V_N$ of full rank matrices. This determines a model $pt \times_{\mathrm{GL}_r} U_N = U_N/\mathrm{GL}_r = G(r, N)$. Since $\codim U_N^c = N - r + 1$, we have $A^i(\mathrm{BGL}_r) = A^i(G(r,N))$ for $i < N - r + 1$. It is a classical result that $A^*(G(r,N))$ is generated by the Chern classes $c_1, \ldots, c_r$ of the tautological rank $r$ bundle with no relations in degrees less than $N - r +1$. Taking larger and larger $N$, it follows that $A^*(\mathrm{BGL}_r) = \mathbb{Z}[c_1, \ldots, c_r]$.
\end{Example}
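For $r = 1$, this construction specializes to $U_N/\gg_m = \mathbb{P}^{N-1}$, recovering the familiar computation $A^*(\mathrm{B}\gg_m) = \mathbb{Z}[c_1]$; we record this special case only as a sanity check on the conventions above.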
Variants of the above construction allow one to approximate all quotient stacks in this paper with concrete models which are fiber bundles over Grassmannians.
The higher Chow groups of these quotient stacks will also be an important tool.
In \cite{B}, Bloch defines the higher Chow groups of a quasi-projective variety $X$ as the homology of a complex $z^*(X,-)$ of free abelian groups, i.e. $\mathrm{CH}^*(X, n) = H_n(z^*(X,-))$.
Higher Chow groups with coefficients in a ring $R = \mathbb{Z}/m$ or $\mathbb{Z}$ are defined similarly by $\mathrm{CH}^*(X, n, R) = H_n(z^*(X,-) \otimes R)$.
Some properties of higher Chow groups are the following.
\begin{enumerate}
\item Weight zero: we have $\mathrm{CH}^*(X, 0, R) = A^*(X) \otimes R$.
\item \label{func} Functoriality: there are proper push forwards and flat pull backs.
\item \label{loc} Localization long exact sequence: If $Y \subset X$ is a closed subscheme of pure codimension $d$, then there is a long exact sequence
\begin{align*}
\ldots &\rightarrow \mathrm{CH}^{*-d}(Y, 1, R) \rightarrow \mathrm{CH}^*(X,1,R) \rightarrow \mathrm{CH}^*(X-Y,1,R) \\
&\rightarrow \mathrm{CH}^{*-d}(Y,0,R) \rightarrow \mathrm{CH}^*(X,0,R) \rightarrow \mathrm{CH}^*(X-Y,0,R).
\end{align*}
\item Homotopy: $\mathrm{CH}^*(X \times \mathbb{A}^m, n, R) \cong \mathrm{CH}^*(X, n, R)$. By \eqref{func} and \eqref{loc} it follows that if $\widetilde{X} \rightarrow X$ is any affine bundle, then $\mathrm{CH}^*(X, n, R) \cong \mathrm{CH}^*(\widetilde{X}, n, R)$. \label{htopy}
\end{enumerate}
Edidin and Graham \cite{EG} extend the notion of higher Chow groups to quotients $[X/G]$ by defining them to be higher Chow groups of suitable models: $\mathrm{CH}^*([X/G], n, R) := \mathrm{CH}^*(X \times_G U, n, R)$ where $U$ is an open subset of a representation of $G$ whose complement has sufficiently high codimension and $G$ acts freely on $U$. This is well-defined by the homotopy property, and Edidin-Graham obtain a localization long exact sequence for the corresponding quotients in \eqref{loc} when $Y$ is $G$-equivariant.
Over an algebraically closed field of characteristic zero, the higher Chow groups of a point with torsion coefficients are known:
\begin{equation}
\mathrm{CH}^i(pt, n, \mathbb{Z}/\ell) = \begin{cases} \mathbb{Z}/\ell &\text{if $n = 2i$} \\ 0 & \text{otherwise.} \end{cases}
\end{equation}
This follows from \cite[Corollary 4.3]{Suslin}, which relates higher Chow groups to certain \'etale cohomology groups, though this special case was likely known earlier.
Using the long exact sequence and the homotopy property, it follows that over such a field, $\mathrm{CH}^*(X, 1, \mathbb{Z}/\ell) = 0$ for any $X$ admitting an affine stratification.
In particular, since $\mathrm{BGL}_{r}$ is modeled by Grassmannians $G(r, N)$ (see Example \ref{bglr}), we have
\begin{equation} \label{hch}
\mathrm{CH}^*(\mathrm{BGL}_{r_1} \times \cdots \times \mathrm{BGL}_{r_s}, 1, \mathbb{Z}/\ell) = 0
\end{equation}
over an algebraically closed field of characteristic zero.
\section{Construction of the moduli stack} \label{con}
Given some rank $r \geq 0$ and degree $d \in \mathbb{Z}$, we define the moduli stack $\mathcal{B}_{r,d}$ of vector bundles of rank $r$ and degree $d$ on $\mathbb{P}^1$ bundles by
\[\mathcal{B}_{r,d}(T) = \left\{(W,E): \parbox{22em}{$W$ a rank $2$ vector bundle on $T$ \\
$E$ a rank $r$, relative degree $d$ vector bundle on $\mathbb{P} W$}\right\}.\]
An arrow $(W, E) \to (W', E')$ is the data of an isomorphism $\phi: W \to W'$ --- which induces an isomorphism $\mathbb{P} \phi: \mathbb{P} W \to \mathbb{P} W'$ --- and an isomorphism $\psi: E \to (\mathbb{P} \phi)^* E'$.
Given a vector bundle $E$ on a $\mathbb{P}^1$ bundle $\mathbb{P} W \rightarrow T$, we write $E(m) := E \otimes \O_{\mathbb{P} W}(1)^{\otimes m}$.
There are equivalences $\mathcal{B}_{r, d} \cong \mathcal{B}_{r,d+mr}$ sending $(W, E) \mapsto (W, E(m))$. Thus, it would suffice to study $\mathcal{B}_{r,\ell}$ for $0 \leq \ell < r$. Throughout, $d = \ell + mr$ will be an arbitrary degree in the residue class of $\ell$ modulo $r$.
The stack $\mathcal{B}_{r,\ell}$ is a union of open substacks
\[\mathcal{B}_{r,\ell} = \bigcup_{m=0}^\infty \mathcal{U}_{m} \qquad \text{with} \qquad \mathcal{U}_0 \subset \mathcal{U}_1 \subset \mathcal{U}_2 \subset \cdots.\]
where
$\mathcal{U}_m := \mathcal{U}_{m, r, \ell}$ is defined by
\begin{align*}
\mathcal{U}_{m, r, \ell}(T) &= \{(W, E) \in \mathcal{B}_{r,\ell}(T) : \text{$E(m)$ is globally generated on each fiber over $T$}\} \\
&=\{(W, E) \in \mathcal{B}_{r,\ell}(T) : \text{$R^1\pi_*E(m-1) = 0$ for $\pi: \mathbb{P} W \rightarrow T$ projection}\}.
\end{align*}
The stack $\mathcal{U}_{0, r, \ell}$ is equal to the stack of globally generated rank $r$, degree $\ell$ vector bundles on $\mathbb{P}^1$ bundles, which we shall denote $\mathcal{V}_{r,\ell}$. Via the twist by $\O_{\mathbb{P} W}(m)$, there are isomorphisms $\mathcal{U}_{m, r, \ell} \cong \mathcal{V}_{r, \ell + mr}$ for each $m$.
\subsection{Splitting loci} \label{sps}
Every vector bundle on $\mathbb{P}^1$ splits as a direct sum of line bundles, say $E = \O(e_1) \oplus \cdots \oplus \O(e_r)$ for $e_1 \leq \cdots \leq e_r$. We call $\vec{e} = (e_1, \ldots, e_r)$ the \emph{splitting type} of $E$, and abbreviate the corresponding sum of line bundles by
\[\O(\vec{e}) := \O(e_1) \oplus \cdots \oplus \O(e_r).\]
The moduli space $\mathcal{B}_{r,\ell}$ admits a stratification by the \emph{splitting loci} of the universal vector bundle.
These are the locally closed substacks $\Sigma_{\vec{e}} \subset \mathcal{B}_{r,\ell}$ which parametrize families of vector bundles with splitting type $\vec{e}$ on each fiber of a $\mathbb{P}^1$ bundle.
Suppose that $\O(\vec{e}) = \bigoplus_{i=1}^s \O(d_i)^{\oplus r_i}$ with $d_1 < \ldots < d_s$ (so the $d_i$ are the \emph{distinct} degrees in $\vec{e}$ and the $r_i$ their multiplicities).
Let us identify $\Aut(\O(\vec{e}))$ with block upper triangular matrices whose entries in the $i, j$ block are homogeneous polynomials on $\mathbb{P}^1$ of degree $d_j - d_i$.
The block diagonal matrices correspond to the subgroup $\prod_{i=1}^s \mathrm{GL}_{r_i} \hookrightarrow \Aut(\O(\vec{e}))$.
The group $\mathrm{GL}_2$ acts on the off-diagonal blocks via change of coordinates on $\mathbb{P}^1$.
The data of a vector bundle $E$ on a $\mathbb{P}^1$ bundle $\mathbb{P} W \rightarrow T$ whose restriction to each fiber has splitting type $\vec{e}$
is the same as a principal bundle for the semi-direct product $H_{\vec{e}} := \Aut(\O(\vec{e})) \ltimes \mathrm{GL}_2$.
In other words, $\Sigma_{\vec{e}}$ is equivalent to the classifying stack $B H_{\vec{e}}$. From this, we see that
\begin{equation} \label{codimeq}
\codim \Sigma_{\vec{e}} = h^1(\mathbb{P}^1, End(\O(\vec{e}))) = \sum_{i, j} \max\{0, e_i - e_j-1\} =: u(\vec{e}).
\end{equation}
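For instance (direct evaluations of \eqref{codimeq}, included only for illustration): any balanced splitting type, i.e., one with $e_r - e_1 \leq 1$, has $u(\vec{e}) = 0$, so the balanced bundles form the open dense stratum; meanwhile, in $\mathcal{B}_{2,0}$ the strata $\Sigma_{(0,0)}$, $\Sigma_{(-1,1)}$, and $\Sigma_{(-2,2)}$ have codimensions $u = 0$, $1$, and $3$, respectively.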
The complement of $\mathcal{U}_{m,r,\ell} \subset \mathcal{B}_{r,\ell}$
is the union of the splitting loci $\Sigma_{\vec{e}}$ with $e_1 < -m$. Using \eqref{codimeq}, one sees that this union of splitting loci has codimension $\ell + mr + 1$ (see also \cite[Section 5]{L}). In particular, as $m$ increases, the codimension of the complement of $\mathcal{U}_m$ goes to infinity.
The key to understanding the intersection theory of $\mathcal{B}_{r, \ell}$ is therefore to understand each $\mathcal{U}_{m,r,\ell}$, which is equivalent to the stack of globally generated vector bundles $\mathcal{V}_{r, d}$ for $d = \ell + mr$.
\subsection{Globally generated vector bundles}
The stack $\mathcal{V}_{r,d}$ of rank $r$, degree $d$ globally generated vector bundles on $\mathbb{P}^1$ was constructed by Bolognesi--Vistoli in \cite{BV}. We briefly motivate and review their construction.
If $E$ is a globally generated vector bundle on $\mathbb{P}^1$, then there is a surjection $H^0(E) \otimes \O_{\mathbb{P}^1} \to E$. Since $h^0(E) = r + d$, the kernel of this map has rank $d$ and degree $-d$. Furthermore, the kernel has no global sections, so each of its $d$ line bundle summands has degree at most $-1$; since these degrees sum to $-d$, the kernel must be $\O_{\mathbb{P}^1}(-1)^{\oplus d}$. That is, given a globally generated $E$, it sits naturally in a sequence
\begin{equation} \label{ptss}
\begin{tikzcd}
0 \arrow{r} & \O_{\mathbb{P}^1}(-1)^{\oplus d} \arrow{r}{\psi} & \O_{\mathbb{P}^1}^{\oplus (r + d)} \arrow{r} & E \arrow{r} &0.
\end{tikzcd}
\end{equation}
Let $M_{r,d} := \mathrm{Hom}( \O_{\mathbb{P}^1}(-1)^{\oplus d}, \O_{\mathbb{P}^1}^{\oplus (r + d)}) $ be the space of $d \times (r+d)$ matrices of linear forms on $\mathbb{P}^1$.
The sequence \eqref{ptss} determines an element $\psi \in M_{r,d}$ which is well-defined up to the choice of framings of the source and target. Moreover, $\psi$ lies in the open subvariety $\Omega_{r,d} \subset M_{r,d}$ of matrices of linear forms having full rank $d$ at each point on $\mathbb{P}^1$.
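For example, when $r = d = 1$, the space $M_{1,1}$ consists of $1 \times 2$ matrices $(\psi_1 \ \ \psi_2)$ of linear forms, and $\psi \in \Omega_{1,1}$ precisely when $\psi_1$ and $\psi_2$ share no zero on $\mathbb{P}^1$; in this case \eqref{ptss} becomes the Euler sequence twisted by $\O_{\mathbb{P}^1}(-1)$,
\[0 \rightarrow \O_{\mathbb{P}^1}(-1) \rightarrow \O_{\mathbb{P}^1}^{\oplus 2} \rightarrow \O_{\mathbb{P}^1}(1) \rightarrow 0.\]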
Let $\mathrm{GL}_d$ act on $M_{r,d}$ by left multiplication and $\mathrm{GL}_{r+d}$ act by right multiplication. In addition, let $\mathrm{GL}_2$ act on $M_{r,d}$ by change of coordinates on the entries, which live in the two-dimensional vector space $H^0(\mathbb{P}^1, \O_{\mathbb{P}^1}(1))$. These three actions commute, so we obtain an action of $\mathrm{GL}_d \times \mathrm{GL}_{d + r} \times \mathrm{GL}_2$ on $M_{r, d}$. The locus $\Omega_{r,d} \subset M_{r,d}$ is preserved by this action and thus inherits an action of $\mathrm{GL}_d \times \mathrm{GL}_{d + r} \times \mathrm{GL}_2$.
\begin{theorem}[Bolognesi-Vistoli \cite{BV}, Theorem 4.4] \label{bvthm}
There is an isomorphism of fibered categories
$\mathcal{V}_{r,d}\cong[\Omega_{r,d}/\mathrm{GL}_d \times \mathrm{GL}_{r+d} \times \mathrm{GL}_2].$
\end{theorem}
In other words, Theorem \ref{bvthm} says $\mathcal{V}_{r,d}$ is an open substack of a vector bundle over $\mathrm{BGL}_d \times \mathrm{BGL}_{r+d} \times \mathrm{BGL}_2$.
In particular, by the homotopy and excision properties, we have a surjection
\begin{equation} \label{surj}
A^*(\mathrm{BGL}_d \times \mathrm{BGL}_{r+d} \times \mathrm{BGL}_2) \twoheadrightarrow A^*(\mathcal{V}_{r,d}).
\end{equation}
Let $\mathcal{W}$ denote the universal rank $2$ vector bundle on $\mathcal{V}_{r,d}$ (pulled back from the $\mathrm{BGL}_2$ factor), and let $\mathcal{E}^{gg} := \mathcal{E}_{r,d}^{gg}$ be the universal globally generated rank $r$, degree $d$ vector bundle on $\pi: \mathbb{P} \mathcal{W} \to \mathcal{V}_{r,d}$.
Let $T_d$ and $T_{r+d}$ denote the universal vector bundles on $\mathrm{BGL}_d$ and $\mathrm{BGL}_{r+d}$. We wish to identify their pullbacks to $\mathcal{V}_{r,d}$ in terms of $\mathcal{E}^{gg}$.
\begin{Lemma} \label{genslem}
Let $\gamma: \mathcal{V}_{r,d} \rightarrow \mathrm{BGL}_d \times \mathrm{BGL}_{r+d}$ be the natural map.
We have
\begin{equation} \label{id}
\gamma^* T_d = (\pi_* \mathcal{E}^{gg}(-1)) \otimes \det \mathcal{W}^\vee \qquad \text{and} \qquad \gamma^*T_{r+d} = \pi_* \mathcal{E}^{gg}.
\end{equation}
In particular, $A^*(\mathcal{V}_{r,d})$ is generated by the Chern classes of the three vector bundles $\mathcal{W}$, $\pi_*\mathcal{E}^{gg}(-1)$ and $\pi_* \mathcal{E}^{gg}$.
\end{Lemma}
\begin{proof}
By the construction of $\mathcal{V}_{r,d}$ as a quotient of $\Omega_{r,d} \subset M_{r,d}$, the universal $\mathbb{P}^1$-bundle $\pi: \mathbb{P} \mathcal{W} \to \mathcal{V}_{r,d}$ is equipped with an exact sequence of vector bundles
\begin{equation} \label{eseq}
0 \rightarrow (\pi^*\gamma^*T_d)(-1) \rightarrow \pi^*\gamma^*T_{r+d} \rightarrow \mathcal{E}^{gg} \rightarrow 0.
\end{equation}
Pushing forward \eqref{eseq} by $\pi$ induces an isomorphism
\[\gamma^*T_{r+d} \cong \pi_*\pi^* \gamma^* T_{r+d} \xrightarrow{\sim} \pi_* \mathcal{E}^{gg}.\]
On the other hand, tensoring \eqref{eseq} with $\O_{\mathbb{P} \mathcal{W}}(-1)$ and pushing forward by $\pi$ induces an isomorphism
\begin{equation} \label{next}
\pi_*\mathcal{E}^{gg}(-1) \xrightarrow{\sim} R^1\pi_*( (\pi^* \gamma^* T_d)(-2)) \cong \gamma^* T_d \otimes R^1 \pi_* \O_{\mathbb{P} \mathcal{W}}(-2).
\end{equation}
Noting that the relative dualizing sheaf of $\pi$ is $\omega_{\pi} = \O_{\mathbb{P} \mathcal{W}}(-2) \otimes \det \mathcal{W}^\vee$, Serre duality provides an isomorphism of the right-hand term in \eqref{next} with $\gamma^*T_d \otimes \det \mathcal{W}$. Having identified the tautological vector bundles, \eqref{surj} now establishes the claim about generators of $A^*(\mathcal{V}_{r,d})$.
\end{proof}
\begin{remark}
Lemma \ref{genslem} provides a quick proof of the existence half of \cite[Theorem 1.2]{L}: Pulling back the classes of closures of the universal splitting loci $\overline{\Sigma}_{\vec{e}}$ on $\mathcal{B}_{r,d}$, it follows that
when the splitting loci of a vector bundle $E$ on a $\mathbb{P}^1$ bundle $\mathbb{P} W \rightarrow B$ have the expected codimension, their classes in the Chow ring of $B$ are given by a universal formula in terms of the Chern classes of the rank $2$ bundle $\pi_*\O_{\mathbb{P} W}(1)$ and the bundles $\pi_*E(i)$ for suitable $i$.
This observation does not, however, give an indication of how to find these formulas, as done in \cite[Section 6]{L}.
\end{remark}
Now, let $\mathcal{E} := \mathcal{E}_{r,\ell}$ denote the universal rank $r$, degree $\ell$ vector bundle on $\mathbb{P} \mathcal{W} \to \mathcal{B}_{r,\ell}$. The restriction of $\mathcal{E}$ to $\mathcal{U}_{0, r,\ell} \subset \mathcal{B}_{r,\ell}$ is just $\mathcal{E}|_{\mathcal{U}_{0,r,\ell}} = \mathcal{E}^{gg}_{r,\ell}$. More generally, we have each $\mathcal{U}_{m, r,\ell} \cong \mathcal{V}_{r,\ell+mr}$, and via this identification, $\mathcal{E}|_{\mathcal{U}_{m, r, \ell}} = \mathcal{E}_{r, \ell+mr}^{gg}(-m)$, equivalently $\mathcal{E}(m)|_{\mathcal{U}_{m, r, \ell}} = \mathcal{E}^{gg}_{r,\ell+mr}$. This establishes the following.
\begin{Lemma} \label{int-gen}
The Chow ring $A^*(\mathcal{U}_m)$ is generated over $\mathbb{Z}$ by the Chern classes of $\mathcal{W}$ and the vector bundles $\pi_*\mathcal{E}(m-1)$ and $\pi_*\mathcal{E}(m)$ on $\mathcal{U}_m$. Thus, $A^*(\mathcal{B}_{r,\ell})$ is generated by the Chern classes of $\mathcal{W}$ and $\pi_*\mathcal{E}(i)$ for $i = 0, 1, 2, \ldots$
\end{Lemma}
We shall later describe the Chow ring as a subring of a finitely generated $\mathbb{Q}$-algebra, which gives rise to an implicit description of the relations among these generators.
\section{The rational Chow ring} \label{chow}
The rational Chow ring of $\mathcal{B}_{r,\ell}$ can be described with fewer generators than the integral generators of Lemma \ref{int-gen}.
Let $w_1 = c_1(\mathcal{W})$, $w_2 = c_2(\mathcal{W})$ and $z = c_1(\O_{\mathbb{P} \mathcal{W}}(1))$.
The Chern classes of $\mathcal{E}$ can be written as $c_i(\mathcal{E}) = \pi^*(a_i) + \pi^*(a_i')z$ for unique $a_i \in A^i(\mathcal{B}_{r,\ell})$ and $a_i' \in A^{i-1}(\mathcal{B}_{r,\ell})$. (Note that $a_1' = \ell$.) By Grothendieck-Riemann-Roch, the Chern classes of the vector bundle $\pi_*\mathcal{E}(m)$ on $\mathcal{U}_m$ are expressible in terms of the $a_i$ and $a_i'$ for any $m$, so these classes are generators for the rational Chow ring of each $\mathcal{U}_m$ and therefore of $\mathcal{B}_{r,\ell}$. The main result of this section will be that there are no relations among these generators on $\mathcal{B}_{r,\ell}$. We first consider relations on an open $\mathcal{U}_m \cong \mathcal{V}_{r,d}$.
\begin{theorem} \label{ratchow}
The ring $A_{\mathbb{Q}}^*(\mathcal{V}_{r,d})$ is a quotient of $\mathbb{Q}[w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r']$ with all relations in degrees $d+1$ and higher.
\end{theorem}
\begin{proof}
Set $G := \mathrm{GL}_d \times \mathrm{GL}_{r+d} \times \mathrm{GL}_2$.
We let $T_d, T_{r+d}$ and $\mathcal{W}$ denote the corresponding tautological bundles on $B := BG$. Let $t_i = c_i(T_{d})$, $u_i = c_i(T_{r+d})$ and $w_i = c_i(\mathcal{W})$ be their Chern classes, which freely generate $A^*(B)$.
Next, define $M:=[M_{r,d}/G]$, which is the total space of the vector bundle $\mathcal{H}om(T_d,T_{r+d}) \otimes \mathcal{W}^\vee$ over $B$. Theorem \ref{bvthm} says that $\mathcal{V}_{r,d}$ is an open substack of $M$. Let us write $X:=[\Omega_{r,d}^c/G]$ for its closed complement. Here, $\Omega_{r,d}^c \subset M_{r,d}$ is the space of matrices of linear forms that drop rank along some point on $\mathbb{P}^1$.
By excision, we have a right exact sequence
\begin{equation} \label{res}
A^{*-\mathrm{codim}X}(X) \rightarrow A^*(M) \rightarrow A^*(\mathcal{V}_{r,d}) \rightarrow 0.
\end{equation}
We shall see soon that $X$ is irreducible of codimension $r$, which immediately implies there are no relations among generators of $A^*(M)$ restricted to $\mathcal{V}_{r,d}$ in degrees less than $r$. In what follows, we describe relations among the restrictions of these Chern classes in degrees $r$ up to $d$.
First we construct a space $\widetilde{X}$ whose total space maps properly to $M$ with image $X$.
Let $\mathbb{P}(T_d)$ be the projectivization of the tautological rank $d$ bundle and let $\sigma: \mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d) \rightarrow B$ be the map to the base. On $\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d)$ we have a surjection of vector bundles
\[\sigma^*M = \sigma^*(\mathcal{H}om(T_d, T_{r+d}) \otimes \mathcal{W}^\vee) \rightarrow \O_{\mathbb{P}(T_d)}(1) \otimes \sigma^*T_{r+d} \otimes \O_{\mathbb{P} \mathcal{W}}(1),\]
corresponding to evaluation of the map along a one-dimensional subspace of the fiber of $T_d$. Let $\widetilde{X} \subset\sigma^*M$ denote the total space of the kernel vector bundle.
Informally,
\[\widetilde{X} = \{(p, \Lambda, \psi) : p \in \mathbb{P} \mathcal{W}, \Lambda \subset (T_d)_{\pi(p)}, \psi \in M_{\pi(p)}, \Lambda \subset \ker \psi(p) \subset (T_d)_{\pi(p)}\} .\]
We have a commutative diagram:
\begin{center}
\begin{tikzcd}
&\widetilde{X} \arrow{dr}[swap]{\rho''} \arrow{r}{\iota} &\sigma^*M\arrow{d}{\rho'} \arrow{r}{\sigma'} &M \arrow{d}{\rho} \\
&&\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d) \arrow{r}{\sigma} & B,
\end{tikzcd}
\end{center}
where $\rho, \rho',$ and $\rho''$ are all vector bundle maps.
By construction, $\sigma'(\iota(\widetilde{X})) = X$ and $\sigma' \circ \iota$ is projective, as desired. The map $\widetilde{X} \to X$ is also generically $1$-to-$1$. This establishes that
\[\dim X = \dim \widetilde{X} = \dim M + \dim (\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d)) - \rank T_{r+d} = \dim M - r.\]
Since $\sigma' \circ \iota$ is projective, the pushforward of rational Chow groups $A_\mathbb{Q}^*(\widetilde{X}) \to A_\mathbb{Q}^*(X)$ is surjective. (Take the preimage of any cycle; if the fibers are generically finite, the pushforward is some multiple of the original cycle. Otherwise, slice with enough copies of the hyperplane class to get a cycle mapping with generically finite fibers to the same image.) It follows that the image of $A_\mathbb{Q}^{*-r}(X) \rightarrow A_\mathbb{Q}^*(M)$ via pushforward of cycles is the same as the image of $(\sigma' \circ \iota)_*: A_{\mathbb{Q}}^{*-r}(\widetilde{X}) \rightarrow A_{\mathbb{Q}}^*(M)$.
The pullback maps $(\rho'')^*$, $(\rho')^*$ and $\rho^*$ all induce isomorphisms on Chow rings.
The fundamental class of $\widetilde{X}$ in the Chow ring of $\sigma^*M$ is $(\rho')^*\beta$, where
\[\beta := c_{r+d}(\O_{\mathbb{P}(T_d)}(1) \otimes \sigma^*T_{r+d} \otimes \O_{\mathbb{P} \mathcal{W}}(1)).\]
We can write every class in $A^{*-r}(\widetilde{X})$ as $(\rho'')^* \alpha$ for a unique class $\alpha \in A^{*-r}(\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d))$.
Then the effect of $(\sigma' \circ \iota)_*$ is
\begin{equation} \label{effect}
\sigma'_* \iota_* (\rho'')^* \alpha = \sigma'_* \iota_* \iota^* (\rho')^* \alpha = \sigma'_*([\widetilde{X}] \cdot (\rho')^*\alpha) = \sigma'_*(\rho')^*(\beta \cdot \alpha) = \rho^*\sigma_*(\beta \cdot \alpha).
\end{equation}
Now, let $z = c_1(\O_{\mathbb{P} \mathcal{W}}(1))$ and $\zeta = c_1(\O_{\mathbb{P}(T_d)}(1))$.
By the projective bundle theorem,
\begin{equation} \label{ch}
A^*(\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d)) = A^*(B)[z,\zeta]/(z^2 +w_1z+w_2, \zeta^{d} + \zeta^{d-1}t_1 + \ldots + \zeta t_{d-1} + t_{d}).
\end{equation}
Thus, each class $\alpha \in A^*(\mathbb{P} \mathcal{W} \times_B \mathbb{P}(T_d))$ is uniquely expressible as
\begin{equation} \label{expressible}
\alpha = \sum_{i=0}^{d-1} (\sigma^*\gamma_i) \zeta^i + \sum_{i=1}^d (\sigma^*\gamma_i') z\zeta^{i-1},
\end{equation}
where $\gamma_i \in A^*(B)$ with $\deg \gamma_i = \deg \gamma_i' = \deg \alpha - i$.
By \eqref{effect} and \eqref{expressible}, the image of $(\sigma' \circ \iota)_*: A_{\mathbb{Q}}^{*-r}(\widetilde{X}) \rightarrow A_{\mathbb{Q}}^*(X) \cong A_{\mathbb{Q}}^*(B)$ is the ideal generated by
the classes
\[f_{i,j} := \sigma_*(\beta \cdot z^i \zeta^j) \qquad \text{for }0 \leq i \leq 1, \ 0 \leq j \leq d - 1.\]
As $\rho^*$ is an isomorphism on Chow, we omit it above and in what follows for ease of notation.
By the excision sequence \eqref{res}, we have
\[A_{\mathbb{Q}}^*(\mathcal{V}_{r,d}) = \frac{A_{\mathbb{Q}}^*(B)}{\langle f_{i,j} : 0 \leq i \leq 1, 0 \leq j \leq d - 1\rangle} \cong \frac{\mathbb{Q}[w_1, w_2, t_1, \ldots, t_d, u_1, \ldots, u_{r+d}]}{\langle f_{i,j} : 0 \leq i \leq 1, 0 \leq j \leq d - 1\rangle}.\]
Since $\sigma$ has relative dimension $d$, the codimension of $f_{i,j}$ is $(r+d)+i+j - d = r+i+j$.
Recall that there are no relations among the generators of $A^*(M) \cong A^*(B)$, so $f_{i,j}$ is a unique polynomial of codimension $i+j+r$ in the $t$'s, $u$'s and $w$'s.
We are interested in particular in the coefficients of $t_{i+j+r}$ and $u_{i+j+r}$ in this expression for $f_{i,j}$.
By the splitting principle and \eqref{ch}, we have
\begin{align*}
\beta &= c_{r+d}(\O_{\mathbb{P}(T_d)}(1) \otimes \sigma^*T_{r+d} \otimes \O_{\mathbb{P} \mathcal{W}}(1)) = \sum_{i=0}^{r+d} (\zeta + z)^{r+d-i} \sigma^*u_i \\
&=(\zeta^{r+d} + (r+d)z\zeta^{r+d-1}) + (\zeta^{r+d-1}+(r+d-1)z\zeta^{r+d-2})\sigma^*u_1 \\
& \qquad + \ldots + (\zeta^d+dz\zeta^{d-1})\sigma^*u_r + \ldots + \sigma^*u_{r+d} + \langle \sigma^*w_1, \sigma^*w_2\rangle.
\end{align*}
The push forward of any term involving $\sigma^* w_1$ or $\sigma^*w_2$ cannot contribute to the coefficient of $t_{i+j+r}$ or $u_{i+j+r}$. Since $z^2 \in \langle \sigma^*w_1, \sigma^*w_2\rangle$, after we multiply $z^i\zeta^j$ with $\beta$, we only care about the resulting terms where the power of $z$ is $1$ (if the power of $z$ is zero, then the push forward vanishes).
To compute the push forward of such terms, iterated use of \eqref{ch} tells us (cf. \cite[Corollary 2.6]{HT})
\[\sigma_*(z\zeta^{d-1+i}) = \begin{cases} 0& \text{if $i < 0$} \\ 1 &\text{if $i = 0$} \\ \sum_{m_1 \cdot 1 + \ldots + m_d \cdot d = i} (-1)^{m_1 + \ldots + m_d} \cdot \frac{(m_1 + \ldots + m_d)!}{m_1! \cdots m_d!} \cdot t_1^{m_1} \cdots t_d^{m_d} &\text{if $i \geq 1$.}
\end{cases}\]
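For instance, for $i = 1$ and $i = 2$ this formula gives
\[\sigma_*(z\zeta^{d}) = -t_1 \qquad \text{and} \qquad \sigma_*(z\zeta^{d+1}) = t_1^2 - t_2,\]
the latter coming from the two ordered partitions $2 = 1 + 1$ and $2 = 2$.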
The coefficient in front of the monomial $t_1^{m_1} \cdots t_d^{m_d}$ above is, up to sign, the number of ordered partitions of $i$ in which $j$ appears with multiplicity $m_j$. The terms we are interested in will come from that monomial being $1$ or $t_{i}$. In particular, we compute
\begin{align*}
f_{1, j-1} &= \sigma_*(\beta z \zeta^{j-1}) = - t_{j + r} + u_{j + r} + \ldots\\
f_{0, j} &= \sigma_*(\beta \zeta^j) = -(r+d)t_{j+r} + (d - j)u_{j+r} + \ldots.
\end{align*}
Hence, in $A_{\mathbb{Q}}^*(\mathcal{V}_{r,d})$,
the classes $t_{n}$ for $r \leq n \leq d$ and $u_{m}$ for $r+1 \leq m \leq d$ are expressible as polynomials in $w_1, w_2, t_1, \ldots, t_{r-1}, u_1, \ldots, u_r$. Moreover, after eliminating these higher degree generators, the $f_{i,j}$ produce no additional relations in degrees less than or equal to $d$ among the restrictions to $\mathcal{V}_{r,d}$ of $w_1, w_2, t_1, \ldots, t_{r-1}, u_1, \ldots, u_r$. Hence, the map
\begin{equation} \label{gen}
\mathbb{Q}[w_1, w_2, t_1,t_2,\ldots, t_{r-1}, u_1, \ldots, u_{r}] \to A_{\mathbb{Q}}^*(\mathcal{V}_{r,d})
\end{equation}
is an isomorphism in degrees $* \leq d$.
For dimension reasons, there can be no relations among the generators $w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r'$ in degrees at most $d$ because
they are a list of the same number of generators in the same degrees as \eqref{gen}.
\end{proof}
\begin{Corollary} \label{ratch}
We have $A_{\mathbb{Q}}^*(\mathcal{B}_{r,\ell}) = \mathbb{Q}[w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r']$.
\end{Corollary}
\begin{proof}
Recall that the codimension of the complement of $\mathcal{U}_{m, r, \ell}$ is $\ell + mr + 1$. Thus, choosing $m$ such that $\ell + mr + 1 > i$, we see
\begin{align*}
\dim A_{\mathbb{Q}}^i(\mathcal{B}_{r,\ell}) &= \dim A_{\mathbb{Q}}^i(\mathcal{U}_{m, r, \ell}) = \dim A_{\mathbb{Q}}^i(\mathcal{V}_{r,mr+\ell}) \\
&= \dim \mathbb{Q}[w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r']_i.
\end{align*}
We already know $ \mathbb{Q}[w_1, w_2, a_1, \ldots, a_r, a_2', \ldots, a_r'] \twoheadrightarrow A_{\mathbb{Q}}^*(\mathcal{B}_{r,\ell})$, so we conclude that this map has no kernel for dimension reasons.
\end{proof}
\section{The integral Chow ring} \label{int-sec}
Our plan is to describe the integral Chow ring of $\mathcal{B}_{r,\ell}$ by building it up as a union of the splitting loci $\Sigma_{\vec{e}}$ using excision.
The reader may be interested to compare the ideas and use of higher Chow groups here with work of Bae and Schmitt in their study of the moduli stack of pointed genus $0$ curves \cite{BS}.
The Chow rings of our strata $\Sigma_{\vec{e}}$ are particularly nice.
Fix some $\vec{e}$ and write $\O(\vec{e}) = \bigoplus_{i=1}^s \O(d_i)^{\oplus r_i}$ as in Section \ref{sps}.
There is an inclusion of groups $\prod \mathrm{GL}_{r_i} \times \mathrm{GL}_2 \hookrightarrow H_{\vec{e}}$, which induces a map on classifying stacks
\begin{equation} \label{vb}
\prod \mathrm{BGL}_{r_i} \times \mathrm{BGL}_2 \rightarrow BH_{\vec{e}}.
\end{equation}
\begin{Lemma} \label{affbun}
The map \eqref{vb} factors as a sequence of affine bundles.
In particular $A^*(\Sigma_{\vec{e}}) =A^*(\prod \mathrm{BGL}_{r_i} \times \mathrm{BGL}_2)$, which is a free $\mathbb{Z}$ algebra. Furthermore, over $\mathbb{C}$, the first higher Chow groups satisfy
\[\mathrm{CH}^{*}(\Sigma_{\vec{e}},1, \mathbb{Z}/p\mathbb{Z}) = \mathrm{CH}^{*}\left(\prod \mathrm{BGL}_{r_i} \times \mathrm{BGL}_2, 1,\mathbb{Z}/p\mathbb{Z}\right) = 0\]
for all primes $p$.
\end{Lemma}
\begin{proof}
We induct on $s$. The $s=1$ case is immediate. Let $G' = \Aut(\oplus_{i=1}^{s-1} \O(d_i)^{\oplus r_i})$ and $H = G' \times \mathrm{GL}_{r_s} \subset \Aut(\O(\vec{e}))$. There is a quotient map $\Aut(\O(\vec{e})) \rightarrow H$ defined by forgetting blocks not in $H$, which expresses $\Aut(\O(\vec{e}))$ as a semidirect product $N \rtimes H$ where $N \cong H^0(\O_{\mathbb{P}^1}(d_s - d_1))^{\oplus (r_1r_s)} \oplus \cdots \oplus H^0(\O_{\mathbb{P}^1}(d_s - d_{s-1}))^{\oplus (r_{s-1}r_s)}$ is affine. Moreover,
\[H_{\vec{e}} = (N \rtimes H) \rtimes \mathrm{GL}_2 = N \rtimes ((G' \rtimes \mathrm{GL}_2) \times \mathrm{GL}_{r_s}).\]
Now let $H_{\vec{e}}$ act on $N$ where elements of $N$ act by left multiplication and elements of $(G' \rtimes \mathrm{GL}_2) \times \mathrm{GL}_{r_s}$ act by conjugation.
This action is affine linear, so the quotient, which is $B((G' \rtimes \mathrm{GL}_2) \times \mathrm{GL}_{r_s})$, is an affine bundle over $BH_{\vec{e}}$.
By the homotopy property, the Chow ring and higher Chow groups of $BH_{\vec{e}}$ agree with that of $B(G' \rtimes \mathrm{GL}_2) \times \mathrm{BGL}_{r_s}$, which, by induction, are isomorphic to those of $(\mathrm{BGL}_{r_1} \times \cdots \times \mathrm{BGL}_{r_{s-1}} \times \mathrm{BGL}_2) \times \mathrm{BGL}_{r_s}$. The vanishing of the first higher Chow group over $\mathbb{C}$ is equation \eqref{hch}.
\end{proof}
Using the above lemma, we find that inclusion of cycle classes from strata is injective and deduce that the Chow ring of $\mathcal{B}_{r,\ell}$ is torsion-free.
\begin{Lemma} \label{5}
The Chow group $A^i(\mathcal{B}_{r,\ell})$ is a finitely-generated free $\mathbb{Z}$-module for all $i$.
\end{Lemma}
\begin{proof}
Suppose $U$ is a finite union of strata and $\Sigma_{\vec{e}}$ is a disjoint stratum which is closed in $X = U \cup \Sigma_{\vec{e}}$.
By Lemma \ref{affbun}, we know $A^{i}(\Sigma_{\vec{e}})$ is a finitely-generated free $\mathbb{Z}$-module for all $i$. By induction, we may also assume $A^i(U)$ is a finitely-generated free $\mathbb{Z}$-module.
It suffices to show that $A^i(X)$ is also a finitely-generated free $\mathbb{Z}$-module.
Let us first deduce the result over $\mathbb{C}$.
We have a localization long exact sequence
\begin{align*}
\ldots &\rightarrow \mathrm{CH}^{*-u(\vec{e})}(\Sigma_{\vec{e}}, 1, \mathbb{Z}/p) \rightarrow \mathrm{CH}^*(X,1,\mathbb{Z}/p) \rightarrow \mathrm{CH}^*(U,1,\mathbb{Z}/p) \\
&\rightarrow A^{*-u(\vec{e})}(\Sigma_{\vec{e}}) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow A^*(X) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow A^*(U) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow 0.
\end{align*}
By Lemma \ref{affbun}, we have $\mathrm{CH}^{*-u(\vec{e})}(\Sigma_{\vec{e}}, 1, \mathbb{Z}/p)=0$, so if
$\mathrm{CH}^{*}(U, 1, \mathbb{Z}/p\mathbb{Z}) = 0$ then we have $\mathrm{CH}^*(X,1,\mathbb{Z}/p) = 0$ too. It follows by induction that $\mathrm{CH}^*(U,1,\mathbb{Z}/p) = 0$ for any finite union of strata.
Hence, we have an exact sequence
\begin{equation} \label{forallp}
0 \rightarrow A^{*-u(\vec{e})}(\Sigma_{\vec{e}}) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow A^*(X) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow A^*(U) \otimes \mathbb{Z}/p\mathbb{Z} \rightarrow 0
\end{equation}
for each prime $p$.
Because the $\mathbb{Z}$-modules involved are finitely-generated, the exactness of \eqref{forallp} for all $p$ implies that
\begin{equation} \label{aseq}
0 \rightarrow A^{i-u(\vec{e})}(\Sigma_{\vec{e}}) \rightarrow A^i(X) \rightarrow A^i(U) \rightarrow 0
\end{equation}
is exact.
Since $A^{i-u(\vec{e})}(\Sigma_{\vec{e}})$ and $A^i(U)$ are finitely-generated free $\mathbb{Z}$-modules, so is $A^i(X)$. This concludes the proof over $\mathbb{C}$.
The exactness of \eqref{aseq} over $\mathbb{C}$ also tells us that
\begin{equation} \label{rkeq}
\rank A^i(\mathcal{B}_{r,\ell}) = \sum_{\vec{e}} \rank A^{i-u(\vec{e})}(\Sigma_{\vec{e}}).
\end{equation}
We claim that \eqref{rkeq} in fact holds over any ground field. Indeed, Theorem \ref{ratchow} holds over any field, so the
left-hand side is independent of the ground field. Similarly, using Lemma \ref{affbun}, we have $A^*(\Sigma_{\vec{e}}) = A^*(\prod \mathrm{BGL}_{r_i} \times \mathrm{BGL}_2)$ and the latter is independent of the ground field, so the right-hand side of \eqref{rkeq} is independent of the ground field.
Now, working over any ground field, we claim that the map $A^{i-u(\vec{e})}(\Sigma_{\vec{e}}) \to A^i(X)$ for attaching each stratum is injective. If not, then the image of $A^{i-u(\vec{e})}(\Sigma_{\vec{e}}) \to A^i(X)$ would have rank strictly less than $\rank A^{i - u(\vec{e})}(\Sigma_{\vec{e}})$. Then, $\rank A^i(\mathcal{B}_{r,\ell})$ would be less than the sum $\sum_{\vec{e}} \rank A^{i-u(\vec{e})}(\Sigma_{\vec{e}})$, violating \eqref{rkeq}. Hence, we must have an exact sequence as in \eqref{aseq} for each $\vec{e}$. Arguing as before, we know $A^{i-u(\vec{e})}(\Sigma_{\vec{e}})$ and $A^i(U)$ are finitely-generated free $\mathbb{Z}$-modules, so because \eqref{aseq} is exact, $A^i(X)$ is also a finitely-generated free $\mathbb{Z}$ module.
\end{proof}
An analogous argument in cohomology can be used to show that Chow and cohomology rings of $\mathcal{B}_{r,\ell}$ agree.
\begin{Lemma} \label{coh}
The cycle class map $A^*(\mathcal{B}_{r,\ell}) \to H^{2*}(\mathcal{B}_{r,\ell})$ is an isomorphism.
\end{Lemma}
\begin{proof}
As before, suppose $U$ is a finite union of strata and $\Sigma_{\vec{e}}$ is a disjoint stratum which is closed in $X = U \cup \Sigma_{\vec{e}}$.
Because $\Sigma_{\vec{e}}$ and $X$ are smooth, the cohomology of the pair $H^*(X, U)$ is the reduced cohomology of the Thom space of the normal bundle of $\Sigma_{\vec{e}} \subset X$. By the Thom isomorphism, the cohomology of this pair is then $H^*(X, U) \cong H^{* - 2u(\vec{e})}(\Sigma_{\vec{e}})$.
By Lemma \ref{affbun}, we have $H^*(\Sigma_{\vec{e}}) = H^*(\prod \mathrm{BGL}_{r_i} \times \mathrm{BGL}_2)$, which vanishes in odd degrees.
By induction, we may assume that the odd cohomology of $U$ vanishes. Thus, the long exact sequence for the pair $(X, U)$ gives an exact sequence
\begin{equation} \label{cohseq}
0 \rightarrow H^{2(i - u(\vec{e}))}(\Sigma_{\vec{e}}) \rightarrow H^{2i}(X) \rightarrow H^{2i}(U) \rightarrow 0
\end{equation}
for each $i$.
The cycle class map sends the short exact sequence \eqref{aseq} to \eqref{cohseq}. Because $A^*(\mathrm{BGL}_{r_i}) \to H^{2*}(\mathrm{BGL}_{r_i})$ is an isomorphism,
using Lemma \ref{affbun}, we see that
$A^{i - u(\vec{e})}(\Sigma_{\vec{e}}) \to H^{2(i - u(\vec{e}))}(\Sigma_{\vec{e}})$ is an isomorphism. By induction, we may assume $A^i(U) \to H^{2i}(U)$ is an isomorphism. By the $5$-lemma, we see that $A^i(X) \to H^{2i}(X)$ is also an isomorphism.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main}]
Corollary \ref{ratch} determines $A_{\mathbb{Q}}^*(\mathcal{B}_{r,\ell})$. Lemma \ref{5} shows $A^*(\mathcal{B}_{r,\ell})$ is a subring of $A_{\mathbb{Q}}^*(\mathcal{B}_{r,\ell})$ and Lemma \ref{int-gen} identifies it as
the subring generated by the Chern classes of the sheaves $\pi_*\mathcal{E}(m)$. Lemma \ref{coh} shows that the cycle class map is an isomorphism.
\end{proof}
To make this subring more explicit, we provide formulas for the Chern classes of $\pi_* \mathcal{E}(m)$ in terms of the rational generators (working modulo $\langle w_1, w_2 \rangle$). Recall that $a_1' = \ell$ is the relative degree of $\mathcal{E}$.
\begin{Lemma} \label{F} Let
\begin{equation} \label{Feq}
F(t) = \sum_{i=0}^\infty f_i t^i = \exp\left(\int \frac{a_1'(a_1 +a_2t +\ldots + a_r t^{r-1}) - (a_2' + a_3't + \ldots + a_r' t^{r-2})}{1 + a_1t + \ldots + a_rt^r}dt \right).
\end{equation}
Then
\begin{equation}\label{cherngen}
\sum_{i=0}^\infty c_i(\pi_*\mathcal{E}(m)) t^i = F(t)(1 + a_1t + \ldots + a_rt^r)^{m+1} \quad \mod \langle w_1, w_2, t^{mr+\ell+1} \rangle.
\end{equation}
\end{Lemma}
\begin{proof}
We will use Grothendieck-Riemann-Roch to compute the Chern characters and make use of formal manipulations that turn power sums (Chern characters) into elementary symmetric functions (Chern classes). It is convenient to package this information in generating functions.
Given some $\alpha_1, \ldots, \alpha_r$, let $\sigma_j := \sigma_j(\alpha_1, \ldots, \alpha_r) = \sum_{i_1 < i_2 < \cdots < i_j} \alpha_{i_1}\alpha_{i_2} \cdots \alpha_{i_j}$ denote the $j$th elementary symmetric function in the $\alpha_i$. For each $j \geq 0$, there is a polynomial $p_j(x_1, \ldots, x_r)$ such that $p_j(\sigma_1, \ldots, \sigma_r) = \alpha_1^j + \ldots + \alpha_r^j$. These polynomials satisfy
\begin{equation} \label{identity}
\log (1+x_1t +\ldots + x_rt^r) = \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j} p_j(x_1,\ldots,x_r) t^j.
\end{equation}
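In low degrees, $p_1 = x_1$ and $p_2 = x_1^2 - 2x_2$, and one checks directly that
\[\log(1 + x_1t + \ldots + x_rt^r) = x_1 t - \tfrac{1}{2}(x_1^2 - 2x_2)t^2 + O(t^3),\]
in agreement with \eqref{identity}.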
For any vector bundle $E$, the $j$th Chern character is related to the Chern classes by
\[ \mathrm{ch}_j(E) = \frac{1}{j!} p_j(c_1(E), \ldots, c_r(E)).\]
Each Chern character of the universal $\mathcal{E}$ on $\pi: \mathbb{P} \mathcal{W} \to \mathcal{B}_{r,\ell}$ is expressible as $\mathrm{ch}_j(\mathcal{E}) = \pi^*(c_j) + \pi^*(c_j')z $ for some $c_j \in A_\mathbb{Q}^{j}(\mathcal{B}_{r,\ell})$ and $c_j' \in A_\mathbb{Q}^{j-1}(\mathcal{B}_{r,\ell})$.
In what follows, we work modulo the ideal $\langle w_1, w_2 \rangle$, so that $z^2 = 0$. We have
\begin{align} \label{c}
c_j = \frac{1}{j!} p_j(a_1, \ldots, a_r) \qquad \text{and} \qquad c_j' = \frac{1}{j!} \sum_{i=1}^r a_i' \frac{\partial p_j}{\partial x_i}(a_1, \ldots, a_r).
\end{align}
We write $c = c_0 + c_1 + c_2 + \ldots$ and $c' = c_1' + c_2' + \ldots$, so $\mathrm{ch}(\mathcal{E}) = \pi^*(c) + \pi^*(c')z$.
The relative tangent bundle of $\pi: \mathbb{P} \mathcal{W} \rightarrow \mathcal{B}_{r,\ell}$ has Todd class
$\mathrm{td}(T_\pi) = 1 + \frac{1}{2} c_1(T_\pi) = 1+z.$
Moreover, $\mathrm{ch}(\mathcal{E}(m)) = \mathrm{ch}(\mathcal{E})\mathrm{ch}(\O_{\mathbb{P} \mathcal{W}}(m)) = \mathrm{ch}(\mathcal{E})(1+mz)$.
On $\mathcal{U}_m$, we have $R^1\pi_* \mathcal{E}(m) =0$ so Grothendieck-Riemann-Roch tells us
\begin{align*}
\mathrm{ch}(\pi_*\mathcal{E}(m)) &= \pi_*(\mathrm{ch}(\mathcal{E}(m))\mathrm{td}(T_\pi)) \\
&= \pi_*((\pi^*(c)+\pi^*(c')z)(1+(m+1)z)) = c' + (m+1)c.
\end{align*}
To recover the Chern classes, we evaluate
{\small
\begin{align}
&\exp\left(\sum_{j=1}^\infty (j-1)! (-1)^{j+1}\mathrm{ch}_j(\pi_*\mathcal{E}(m)) t^j\right) = \exp\left(\sum_{j=1}^\infty (j-1)! (-1)^{j+1}(c_{j+1}' + (m+1)c_j )t^j\right) \notag \\
&\qquad \qquad =\exp\left(\sum_{i=1}^r a_i'\sum_{j=1}^\infty \frac{(-1)^{j+1}}{j(j+1)} \frac{\partial p_{j+1}}{\partial x_i}(a_1,\ldots,a_r)t^j\right) (1+a_1t+\ldots+a_rt^r)^{m+1} \label{exp}.
\end{align}}
To evaluate the infinite sums inside \eqref{exp}, we consider
\begin{align}
&\frac{d}{d t} \left(\sum_{j=1}^\infty \frac{(-1)^{j+1}}{j(j+1)} \frac{\partial p_{j+1}}{\partial x_i}(a_1, \ldots, a_r) t^j\right) = \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j+1} \frac{\partial p_{j+1}}{\partial x_i}(a_1, \ldots, a_r) t^{j-1} \notag \\
&\qquad\qquad = \sum_{j=2}^\infty \frac{(-1)^j}{j} \frac{\partial p_j}{\partial x_i}(a_1, \ldots, a_r)t^{j-2}. \notag
\intertext{Note that $\partial p_j/\partial x_i = 0$ if $j < i$. Therefore taking the partial derivative of \eqref{identity} with respect to $x_i$, we see that the above is equal to}
&\qquad\qquad = \frac{-1}{t^2} \left(\frac{\partial}{\partial x_i} \log(1 + a_1 t + \ldots + a_r t^r) - \delta_{i1} t\right) \notag \\
&\qquad\qquad = \begin{cases} (a_1 + a_2 t + \ldots + a_rt^{r-1})/(1 + a_1 t + \ldots + a_r t^r) & \text{if $i = 1$} \\ -t^{i-2}/(1 + a_1t + \ldots + a_rt^r) & \text{if $i \geq 2$.} \end{cases} \label{rats}
\end{align}
Substituting the formal integral of the terms in \eqref{rats} into \eqref{exp} and exponentiating \eqref{exp} then gives the formula \eqref{cherngen}.
\end{proof}
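For orientation, the first coefficients of $F$ are easily extracted from \eqref{Feq}: the integrand is $(a_1'a_1 - a_2') + O(t)$, so, recalling $a_1' = \ell$,
\[F(t) = 1 + (\ell a_1 - a_2')t + O(t^2), \qquad \text{i.e.} \qquad f_0 = 1, \quad f_1 = \ell a_1 - a_2'.\]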
\begin{remark}
The Chern classes of $\pi_*\mathcal{E}(m)$ (not just modulo $\langle w_1, w_2 \rangle$) are also expressible in terms of exponentials of formal integrals of rational functions in the $a_i$ and $a_i'$ and the Chern classes of $\mathcal{W}$. Equation \eqref{c} is made up of terms with higher partial derivatives of the polynomials $p_j$ multiplied by the Chern classes of $\mathcal{W}$. The sums of these higher partial derivatives that appear in expanding \eqref{exp} can again be collected into rational functions and formal integrals of rational functions in the $a_i$.
However, we no longer have the sequence $0 \rightarrow \pi_*\mathcal{E}(m-1) \rightarrow \pi_*\mathcal{E}(m) \rightarrow \mathcal{E}|_{B \times \{0\}} \rightarrow 0$ that holds for vector bundles on trivial families $B \times \mathbb{P}^1$ so the relationship between twists is not as nice.
\end{remark}
\begin{proof}[Proof of Theorem \ref{m2}]
By Vistoli's theorem, we have $A^*(\mathcal{B}_{r,\ell}^\dagger) = A^*(\mathcal{B}_{r,\ell})/\langle w_1, w_2 \rangle$. By Lemma \ref{F}, after setting $w_1 = w_2 = 0$, the Chern classes of $\pi_*\mathcal{E}(m)$ for all $m$ are expressible in terms the $f_i$ in \eqref{Feq} and $a_1, \ldots, a_r$. To see that the cycle class map is an isomorphism, one may argue as in Lemmas \ref{affbun} and \ref{coh}, but using the splitting loci $\Sigma^{\dagger}_{\vec{e}}$ on $\mathcal{B}_{r,\ell}^\dagger$. One has $\Sigma_{\vec{e}}^{\dagger} \cong B\Aut(\O(\vec{e}))$, so the proof is essentially the same after dropping the $\mathrm{BGL}_2$ factor from Lemma \ref{affbun}.
\end{proof}
\begin{Corollary} \label{nfg}
The integral Chow rings $A^*(\mathcal{B}_{r,\ell})$ and $A^*(\mathcal{B}_{r,\ell}^\dagger)$ are not finitely generated as $\mathbb{Z}$ algebras.
\end{Corollary}
\begin{proof}
Since $A^*(\mathcal{B}_{r,\ell})$ surjects onto $A^*(\mathcal{B}_{r,\ell}^\dagger)$, it suffices to prove the claim for $\mathcal{B}_{r,\ell}^\dagger$.
Suppose to the contrary that $A^*(\mathcal{B}_{r,\ell}^\dagger)$ were finitely generated as a $\mathbb{Z}$-algebra. Then there would exist a finite set of primes $\{p_1, \ldots, p_n\}$ such that for all $\eta \in A^*(\mathcal{B}_{r,\ell}^\dagger)$, there exist $m_1, \ldots, m_n \in \mathbb{Z}$ such that $p_1^{m_1} \cdots p_n^{m_n} \cdot \eta \in \mathbb{Z}[a_1, \ldots, a_r, a_2', \ldots, a_r']$. Let $q$ be a prime not in $\{p_1, \ldots, p_n\}$. Let us consider the coefficient $f_q$ in \eqref{Feq}. Performing the formal integral, we see that $F(t) = \exp(-a_2' t + \ldots)$, so $(a_2')^q$ appears in $f_q$ with coefficient $\frac{(-1)^q}{q!}$. Hence, $p_1^{m_1} \cdots p_n^{m_n} \cdot f_q \notin \mathbb{Z}[a_1, \ldots, a_r, a_2', \ldots, a_r']$ for any $m_1, \ldots, m_n$.
\end{proof}
As an example of the utility of the formulas in Lemma \ref{Feq}, we show that $A^*(\mathcal{B}_{2,1}^\dagger)$ is not isomorphic to $A^*(\mathcal{B}_{2,0}^\dagger)$. A similar argument shows $A^*(\mathcal{B}_{2,1})$ is not isomorphic to $A^*(\mathcal{B}_{2,0})$.
\begin{Corollary} \label{b2}
(1) The integral Chow rings $A^*(\mathcal{B}_{2,1}^\dagger)$ and $A^*(\mathcal{B}_{2,0}^\dagger)$ are not isomorphic. Hence, $\mathcal{B}_{2,0}^\dagger$ is not isomorphic to $\mathcal{B}_{2,1}^\dagger$.
(2) The integral Chow rings $A^*(\mathcal{B}_{2,1})$ and $A^*(\mathcal{B}_{2,0})$ are not isomorphic. Hence, $\mathcal{B}_{2,0}$ is not isomorphic to $\mathcal{B}_{2,1}$.
\end{Corollary}
\begin{proof}
(1) Let $\sum f_i t^i = F(t)$ as in \eqref{Feq}.
Then $A^*(\mathcal{B}_{2,\ell}^\dagger) \subset \mathbb{Q}[a_1, a_2, a_2']$ is the subring generated by the classes $f_i$ together with $a_1, a_2, a_2'$. We must show that we obtain non-isomorphic rings (not just different subrings) when $\ell = a_1' = 0$ versus when $\ell = a_1' = 1$.
Setting $\ell = a_1' = 0$, we see $A^2(\mathcal{B}_{2,0}^\dagger)$ is generated by
\begin{align*}
f_2 = \frac{1}{2}(a_1a_2' + a_2'^2), \quad a_1^2, \quad a_1a_2', \quad a_2 \qquad \qquad &(\ell = 0). \\
\intertext{On the other hand, setting $\ell = a_1' = 1$, we see that
$A^2(\mathcal{B}_{2,1}^\dagger)$ is generated by}
f_2 = \frac{1}{2}(-a_1a_2' + a_2'^2 + a_2), \quad a_1^2, \quad a_1a_2', \quad a_2 \qquad \qquad &(\ell =1).
\end{align*}
Now already, we can see that we have produced different subrings of $\mathbb{Q}[a_1, a_2, a_2']$. To see that our rings are not isomorphic though, we look at classes in codimension $4$.
Let us consider the group $A^4(\mathcal{B}_{2,\ell}^\dagger)/(A^1(\mathcal{B}_{2,\ell}^\dagger) \cdot A^3(\mathcal{B}_{2,\ell}^\dagger) + A^2(\mathcal{B}_{2,\ell}^\dagger)^2)$. By Theorem \ref{m2}, this group is generated by $f_4$. When $\ell = 0$, we have
\[f_4 = \frac{1}{24} \cdot a_2' \cdot (6a_1^3 + 11a_1^2a_2' + 6a_1a_2'^2 + a_2'^3 - 12a_1a_2 - 8a_2a_2') \in A^4(\mathcal{B}_{2,0}^\dagger). \]
In particular, $24f_4 \in A^1(\mathcal{B}_{2,0}^\dagger) \cdot A^3(\mathcal{B}_{2,0}^\dagger)$. On the other hand, when $\ell = 1$, we claim that no element in the coset $f_4 + (A^1(\mathcal{B}_{2,1}^\dagger) \cdot A^3(\mathcal{B}_{2,1}^\dagger) + A^2(\mathcal{B}_{2,1}^\dagger)^2)$ has the property that a multiple of it lies in $A^1(\mathcal{B}_{2,1}^\dagger) \cdot A^3(\mathcal{B}_{2,1}^\dagger)$. To see this, we compute
\begin{align*}
f_4 &= \frac{1}{24}(-2a_1^3a_2' - a_1^2 a_2'^2 + 2a_1a_2'^3 + a_2'^4 + 2a_1^2a_2 + 6a_1a_2a_2' - 2 a_2 a_2'^2-3a_2^2) \in A^4(\mathcal{B}_{2,1}^\dagger) \\
&= -\frac{1}{8} a_2^2 + \ldots.
\end{align*}
But there is no integral class in
$A^1(\mathcal{B}_{2,\ell}^\dagger) \cdot A^3(\mathcal{B}_{2,\ell}^\dagger) + A^2(\mathcal{B}_{2,\ell}^\dagger)^2$ that involves $\frac{1}{8} a_2^2$. Thus, there is no adjustment of $f_4$ by integral classes from lower codimension so that the result is divisible by a codimension $1$ class.
(2) The strategy here is similar (the formulas are just more complicated because we must also keep track of $w_1$ and $w_2$). The codimension of the complement of $\mathcal{U}_{3, 2, \ell} \subset \mathcal{B}_{2,\ell}$ is $6 + \ell$. In particular, for $\ell= 0, 1$, we have $A^4(\mathcal{B}_{2,\ell}) = A^4(\mathcal{U}_{3,2,\ell})$.
By Lemma \ref{genslem}, we have that $A^4(\mathcal{B}_{2,\ell})/(A^1(\mathcal{B}_{2,\ell}) \cdot A^3(\mathcal{B}_{2,\ell}) + A^2(\mathcal{B}_{2,\ell})^2)$ is generated by $t_4 := c_4(\pi_*\mathcal{E}(2))$ and $u_4 := c_4(\pi_*\mathcal{E}(3))$. These classes are determined by the splitting principle and Grothendieck--Riemann--Roch, and one can calculate them quickly using the Schubert2 package in Macaulay2. One then observes that if $\ell = 0$, then $t_4$ and $u_4$ are expressible as sum of a class in
\[\mathbb{Z}[a_1, a_2', a_2, w_1,w_2]_4 \subset A^1(\mathcal{B}_{2,\ell}) \cdot A^3(\mathcal{B}_{2,\ell}) + A^2(\mathcal{B}_{2,\ell})^2,\]
plus a class divisible by $a_2'$. That is, when $\ell = 0$, we can choose generators of
\[A^4(\mathcal{B}_{2,\ell})/(A^1(\mathcal{B}_{2,\ell}) \cdot A^3(\mathcal{B}_{2,\ell}) + A^2(\mathcal{B}_{2,\ell})^2)\]
so that a multiple lies in $A^1(\mathcal{B}_{2,\ell}) \cdot A^3(\mathcal{B}_{2,\ell})$. On the other hand, if $\ell = 1$, then $t_4$ and $u_4$ contain an $a_2^2$ term with denominator $8$. However, all codimension $2$ classes have denominators at most $2$. Thus, there is no adjustment of $t_4$ or $u_4$ by integral classes from lower codimension so that the result is divisible by a codimension $1$ class.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
The main aim of this paper is to show that suitable weak solutions to the Navier-Stokes equations, whose $\dot{B}^{-1}_{\infty,\infty}$-norm is bounded, have the Type I singularities (or Type I blowups) only. To be more precise in the statement of our results, we need to define certain notions.
\begin{definition}
\label{sws} Let $\Omega$ be a domain in $\mathbb R^3$ and let $Q_T:=\Omega\times ]0,T[$. It is said that a pair of functions $v$ and $q$ is a suitable weak solution to the Navier-Stokes equations in $Q_T$ if the following conditions are fulfilled:
(i) $v\in L_\infty(\delta,T;L_{2,loc}(\Omega))\cap L_{2}(\delta,T;W^1_{2,loc}(\Omega)),\quad q\in L_\frac 32(\delta,T;L_{\frac 32,loc}(\Omega))
$ for any $\delta\in ]0,T]$;
(ii) $v$ and $q$ satisfy the Navier-Stokes equations
$$\partial_tv+v\cdot\nabla v-\Delta v=-\nabla q,\qquad {\rm div}\,v=0$$
in $Q_T$ in the sense of distributions;
(iii) for $Q(z_0,R)\subset \Omega\times ]0,T[$, the local energy inequality
$$\int\limits_{B(x_0,R)}\varphi|v(x,t)|^2dx+2\int\limits^t_{t_0-R^2}\int\limits_{B(x_0,R)}\varphi|\nabla v|^2dxd\tau\leq
$$
$$\leq \int\limits^t_{t_0-R^2}\int\limits_{B(x_0,R)}\bigg (|v|^2(\partial_t\varphi+\Delta \varphi)+v\cdot\nabla \varphi(|v|^2+2q)\bigg )dxd\tau $$
holds for a.a. $t\in ]t_0-R^2,t_0[$ and for all non-negative test functions $\varphi\in C^\infty_0(B(x_0,R)\times ]t_0-R^2,t_0+R^2[)$. \end{definition}
Let us introduce the following scaled energy quantities:
$$ A(z_0,r):=\sup\limits_{t_0-r^2<t<t_0}\frac 1r\int\limits_{B(x_0,r)}|v(x,t)|^2dx,\qquad E(z_0,r):=\frac 1r\int\limits_{Q(z_0,r)}|\nabla v|^2dxdt,
$$
$$C(z_0,r):=\frac 1{r^2}\int\limits_{Q(z_0,r)}|v|^3dxdt,\qquad D(z_0,r):=\frac 1{r^2}\int\limits_{Q(z_0,r)}|q|^{\frac 3 2}dxdt,$$
$$G(z_0,r):=\max\{A(z_0,r),E(z_0,r),C(z_0,r)\},$$$$ g(z_0,r):=\min\{A(z_0,r),E(z_0,r),C(z_0,r)\}.$$
Here, $Q(z_0,r):=B(x_0,r)\times ]t_0-r^2,t_0[$ and $B(x_0,r)$ is the ball of radius $r$ centred at a point $x_0\in \mathbb R^3$.
The important feature of the above quantities is that all of them are invariant with respect to the Navier-Stokes scaling.
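Indeed, if one sets $v_\lambda(x,t):=\lambda v(x_0+\lambda x,t_0+\lambda^2 t)$ and $q_\lambda(x,t):=\lambda^2 q(x_0+\lambda x,t_0+\lambda^2 t)$, then the change of variables $y=x_0+\lambda x$, $s=t_0+\lambda^2 t$ gives, for example,
$$\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B(0,r)}|v_\lambda(x,t)|^2dx=\sup\limits_{t_0-(\lambda r)^2<s<t_0}\frac 1{\lambda r}\int\limits_{B(x_0,\lambda r)}|v(y,s)|^2dy=A(z_0,\lambda r),$$
and the same computation applies to $E$, $C$ and $D$.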
Our main result is as follows.
\begin{theorem}
\label{maintheorem} Let $\Omega=\mathbb R^3$. Assume that a pair $v$ and $q$ is a suitable weak solution to the Navier-Stokes equations in $Q_T$. Moreover, it is supposed that
\begin{equation}
\label{mainassumption}
v\in {L_\infty(0,T; \dot{B}^{-1}_{\infty,\infty}(\mathbb R^3))}.
\end{equation}
Then, for any $z_0\in \mathbb R^3\times ]0,T]$, we have the estimate
\begin{equation}
\label{mainresult}
\sup\limits_{0<r<r_0}G(z_0,r)\leq c[r_0^{\frac 1 2}+\|v\|^2_{L_\infty(0,T; \dot{B}^{-1}_{\infty,\infty}(\mathbb R^3))}+\|v\|^6_{L_\infty(0,T;\dot{ B}^{-1}_{\infty,\infty}(\mathbb R^3))} ],\end{equation}
where $r_0\leq \frac 12\min\{1,t_0\}$ and $c$ depends on $C(z_0,1)$ and $D(z_0,1)$ only.
\end{theorem}
Let us recall one of definitions of the norm in the space $\dot{B}^{-1}_{\infty,\infty}(\mathbb R^3)=\{f \in S': \|f\|_{\dot{B}^{-1}_{\infty,\infty}(\mathbb R^3)}<\infty\}$, which is the following:
$$\|f\|_{\dot{B}^{-1}_{\infty,\infty}(\mathbb R^3)}:=\sup\limits_{t>0}t^\frac 1 2\|w\|_{L_\infty(\mathbb R^3)},$$
where $S'$ is the space of tempered distributions,
$w$ is the solution to the Cauchy problem for the heat equation with initial datum $f$.
\begin{definition}
\label{Type1def} Assume that $z_0=(x_0,t_0)$ is a singular point of $v$, i.e., there is no parabolic vicinity of $z_0$ where $v$ is bounded. We call $z_0$ Type I singularity (or Type I blowup) if there exists a positive number $r_1$ such that
$$\sup\limits_{0<r<r_1}g(z_0,r)<\infty.$$
\end{definition}
According to Definition \ref{Type1def}, any suitable weak solution satisfying assumption (\ref{mainassumption}) has Type I singularities only. In particular, the arguments used in \cite{Seregin2012} show that axially symmetric suitable weak solutions to the Navier-Stokes equations have no Type I blowups. This is an improvement of what has been known so far, see papers \cite{and} and \cite{Seregin2012}, where condition (\ref{mainassumption}) is replaced by the stronger one
$$v\in L_\infty(0,T;BMO^{-1}(\mathbb R^3)).$$
Regarding other regularity results on axially symmetric solutions to the Navier-Stokes equations, we refer to papers \cite{BZ2010,CL2002,CSYT2008,CSYT2009,HL2008,JX2003,KNSS2009,La1968,LZ2012,LNZ2016,LZ2017,
Pan2016,Seregin2009,Seregin2007,UY1968,WZ2012,Wei2016}.
Another important consequence is that the smallness of $\|v\|_{L_\infty(0,T;\dot{B}^{-1}_{\infty,\infty}(\mathbb R^3))}$ implies regularity, see also \cite{CH2010,HL2017}.
\section{Proof of the Main Result}
In this section, Theorem \ref{maintheorem} is proved. First, we recall the known multiplicative inequality, see \cite{HMZW2011}.
\begin{lemma}\label{interbesov} For any $u\in\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})\cap\dot{H}^{1}(\mathbb{R}^{3})$,
the following is valid:
\begin{equation}\label{inebesov}
\|u\|_{L_{4}(\mathbb{R}^{3})}\leq c\|u\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}^{\frac{1}{2}}\|\nabla u\|_{L_2{(\mathbb R^3)}}^{\frac{1}{2}},\end{equation}
where $\dot{H}^1(\mathbb R^3)$ is a homogeneous Sobolev space.
\end{lemma}
In fact, a weaker version of (\ref{inebesov}) with $\|u\|_{L^{4,\infty}}$ instead of $ \|u\|_{L_{4}(\mathbb{R}^{3})}$ is needed. Here, $L^{4,\infty}(\mathbb R^3)$ is a weak Lebesgue space. An elementary proof of a weaker inequality is given in \cite{Ledoux2003}.
The second auxiliary statement is about cutting-off in the space $\dot B^{-1}_{\infty,\infty}(\mathbb R^3)$.
\begin{lemma}\label{localbesov}
Let $u\in \dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})$ and $\phi\in C_{0}^{\infty}(\mathbb{R}^{3})$. Then
$$\|u\phi\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}\leq c(|{\rm spt}\, \phi|)\|u\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}.$$
\end{lemma}
We have not found a proof of Lemma \ref{localbesov} in the literature, so we present one in the Appendix. Our proof is elementary and based on typical PDE arguments.
A scaled version of the previous lemma is as follows.
\begin{lemma}\label{localversion} For any $u\in\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})\cap H^{1}(B(2))$, the estimate
\begin{equation}\label{localver}
\|u\|_{L^{4,\infty}(B)}\leq c\|u\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}^{\frac{1}{2}}\|u\|^{\frac{1}{2}}_{H^{1}(B(2))}
\end{equation}
is valid for a universal constant $c$. Moreover, if $u\in\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})\cap H^{1}(B(x_0,2R))$, then
\begin{equation}\label{localscaled}
\|u\|_{L^{4,\infty}(B_{R}(x_{0}))}\leq c\|u\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}^{\frac{1}{2}}\left(\|\nabla u\|_{L_{2}(B_{2R}(x_{0}))}+\frac{1}{R}\|u\|_{L_{2}(B_{2R}(x_{0}))}\right)^{\frac{1}{2}}
\end{equation}
with a universal constant $c$.
\end{lemma}
Here, we use notation for the ball centred at the origin $B(R)=B(0,R)$ and $B=B(1)$.
\begin{proof}
It follows from Lemma \ref{interbesov} that for all $\phi\in C_{0}^{\infty}(\mathbb{R}^{3})$,
$$
\|u\phi\|_{L^{4,\infty}(\mathbb{R}^{3})}\leq c\|u\phi\|_{\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3})}^{\frac{1}{2}}\|u\phi\|_{\dot{H}^{1}(\mathbb{R}^{3})}^{\frac{1}{2}}.
$$
Taking a cut-off function $\phi$ such that $\phi=1$ in $B$, $\phi=0$ outside $B(2)$, and $0\leq\phi\leq1$ for $1\leq|x|\leq2$, we get inequality \eqref{localver} from Lemma \ref{localbesov}.
To prove inequality \eqref{localscaled}, one can use scaling and shift $x=x_0+Ry$, $x\in B(x_0,2R)$, $y\in B(2)$ in \eqref{localver}.
\end{proof}
In order to prove the main result, we need the following auxiliary inequalities for $C(z_{0},r)$.
\begin{lemma}
For any $0<r\leq R<\infty$, we have
\begin{equation}\label{C}
C(z_{0},r)\leq c\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3}))}^{\frac{3}{2}}\left(A^{\frac{3}{4}}(z_{0},2r)+E^{\frac{3}{4}}(z_{0},2r)\right),
\end{equation}
and
\begin{equation}\label{C2}
C(z_{0},r)\leq c\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3}))}^{\frac{3}{2}}
\left(\frac{R}{r}\right)^{\frac{3} {4}}\left(A^{\frac{3}{4}}(z_{0},R)+E^{\frac{3}{4}}(z_{0},R)\right).
\end{equation}
\end{lemma}
\begin{proof} Obviously, \eqref{C2} easily follows from \eqref{C}. So, we need to prove the first inequality only. By the H\"older inequality in Lorentz spaces, we have
$$\|u(\cdot,t)\|_{L_3(B(x_0,r))}\leq cr^\frac 14\|u(\cdot,t)\|_{L^{4,\infty}(B(x_0,r))}
$$
and thus, by \eqref{localscaled},
$$
C(z_{0},r) =\frac{1}{r^{2}}\int_{t_{0}-r^{2}}^{t_{0}}\|u(\cdot,t)\|_{L_{3}(B(x_{0},r))}^{3}dt
$$$$\leq c\frac{1}{r^{\frac{3}{4}}}\|u\|_{L_\infty(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3}))}^{\frac{3}{2}}\Big(\int_{t_{0}-(2r)^{2}}^{t_{0}}\|\nabla u(\cdot,t)\|_{L_{2}(B(x_{0},2r))}^{2}+$$$$+\frac{1}{r^{2}}\|u(\cdot,t)\|_{L_{2}(B(x_{0},2r))}^{2}dt\Big)^{\frac{3}{4}}
$$$$\leq c\frac{1}{r^{\frac{3}{4}}}\|u\|_{L_\infty(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^{3}))}^{\frac{3}{2}}\Big(\int_{t_{0}-(2r)^{2}}^{t_{0}}\|\nabla u(\cdot,t)\|_{L_{2}(B(x_{0},2r))}^{2}dt+
$$$$+\sup\limits_{-(2r)^{2}+t_0\leq t<t_0}\|u(\cdot,t)\|_{L_{2}(B(x_{0},2r))}^{2}\Big)^{\frac{3}{4}}.
$$
This completes the proof of inequality \eqref{C}.
\end{proof}
Now we are going to justify our main result.
\begin{proof}[Proof of Theorem \ref{maintheorem}] From the local
energy inequality, it follows that, for any $0<r<\infty$,
\begin{equation}\label{en1}
A(z_{0},r)+E(z_{0},r)\leq c\left(C^{\frac{2}{3}}(z_{0},2r)+C(z_{0},2r)+D(z_{0},2r)\right).
\end{equation}
For the pressure $q$, we have the decay estimate
\begin{equation}
D(z_{0},r)\leq c\left(\frac{r}{R}D(z_{0},R)+\left(\frac{R}{r}\right)^{2}C(z_{0},R)\right),\label{pressureD}
\end{equation}
which is valid for any $0<r<R<\infty$.
Assume that $0<r\leq\frac \rho 4<\rho\leq1$. Combining \eqref{pressureD} and \eqref{en1}, we find
$$
A(z_{0},r)+E(z_{0},r)+D(z_{0},r) \leq
$$$$ \leq c\left(C^{\frac{2}{3}}(z_{0},2r)+C(z_{0},2r)+\left(\frac{\rho}{r}\right)^{2}C(z_{0},\frac{\rho}{2})+\frac{r}{\rho}D(z_{0},\frac{\rho}{2})\right).
$$
Now, let us estimate each term on the right hand side of the last inequality. From \eqref{C}, \eqref{C2}, and Young's inequality with
an arbitrary positive constant $\delta$, we can derive
\[C(z_0,2r)\leq c\delta\left(A(z_0,\rho)+E(z_0,\rho)\right)+c\delta^{-3}\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^6\left(\frac{\rho}{r}\right)^3.\]
Similarly,
\[C^{\frac{2}{3}}(z_0,2r)\leq c\delta\left(A(z_0,\rho)+E(z_0,\rho)\right)+c\delta^{-1}\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^2\left(\frac{\rho}{r}\right),\]
and
\[\left(\frac{\rho}{r}\right)^{2}C(z_0,\frac{\rho}{2})\leq c \delta\left(A(z_0,\rho)+E(z_0,\rho)\right)+c\delta^{-3}\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^6\left(\frac{\rho}{r}\right)^8.\]
Denote $\mathcal{E}(r)=A(z_0,r)+E(z_0,r)+D(z_0,r)$. By a simple inequality $D(z_0,\rho/2)\leq cD(z_0,\rho)$,
$$\mathcal{E}(r)\leq c (\delta+\frac r \rho) \mathcal{E}(\rho)+c\Big\{\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^2\Big(\frac{\rho}{r}\Big)\delta^{-1}+$$
$$+\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^6\Big[\Big(\frac{\rho}{r}\Big)^3+\Big(\frac{\rho}{r}\Big)^8\Big]\delta^{-3}\Big\}.
$$
Letting $r=\theta \rho$ and $\delta=\theta$, and choosing $\theta$ such that $2c\theta^{1/2}\leq 1$, we find
$$\mathcal{E}(\theta \rho)\leq \theta^{1/2}\mathcal{E}(\rho)+c\left\{\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^2\theta^{-2} +\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^{6}\theta^{-11}\right\}.$$
Standard iteration gives us that for $0<r\leq \frac{1}{2}$,
$$
\mathcal{E}(r)\leq c\left(r^{\frac{1}{2}}\mathcal{E}(1)+\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^2+\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^6\right).
$$
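Indeed, abbreviating the curly bracket above by $K$, iterating the previous inequality starting from $\rho=1$ gives
$$\mathcal{E}(\theta^k)\leq \theta^{k/2}\mathcal{E}(1)+K\sum_{j=0}^{k-1}\theta^{j/2}\leq \theta^{k/2}\mathcal{E}(1)+\frac{K}{1-\theta^{1/2}},$$
and a general $0<r\leq \frac 12$ is treated by picking $k$ so that $\theta^{k+1}<r\leq \theta^k$, at the cost of a constant depending only on $\theta$.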
Taking into account \eqref{C}, we get in addition that
$$
C(z_{0},r)\leq c\left(r^{\frac{1}{2}}\mathcal{E}(1)+\|u\|_{L_{\infty}(0,T;\dot{B}_{\infty,\infty}^{-1}(\mathbb{R}^3))}^6\right).
$$
This completes the proof of Theorem \ref{maintheorem}.
\end{proof}
\section{Introduction}
Spacetime outside of a stationary, charged, rotating black-hole is described by a member of the subextremal
family of Kerr--Newman solutions of the Einstein electrovacuum equations.
One of the most important open problems in general relativity is the nonlinear stability of the exterior Kerr(--Newman) metric (see \cite{K07}). At present, the only global nonlinear stability result in the asymptotically flat setting is that for Minkowski space, proved by Christodoulou and Klainerman in \cite{CK93}. Following the philosophy of their monumental proof, the first step in the journey toward resolution of the nonlinear stability problem is the analysis of the linear stability problem using sufficiently robust techniques. The simplest such linear problem is that of the scalar wave equation
\begin{equation}
\label{eq:wave0}
\square_g \psi = 0,
\end{equation}
which may be thought of as a ``poor-man's version'' of the linearised Einstein equations (taken around a subextremal Kerr--Newman metric $g$).
Thus, the boundedness and decay in time of such $\psi$ on a Kerr--Newman background may be thought of as stability of this spacetime for linear scalar perturbations.
This linear stability result is proved in this paper's companion, \cite{C14-1}, following the approach taken for the Kerr case by Dafermos, Rodnianski and Shlapentokh-Rothman in \cite{DR10} and \cite{DRS14}.
As in the Kerr case, one of the major difficulties in understanding \eqref{eq:wave0} is that of \textit{superradiance}, the fact that the conserved $\partial_t$ energy is not positive definite and thus does not control the solution $\psi$. After an appropriate frequency localisation in the frequency parameters $\w$ and $m$ (corresponding to the Killing fields $\partial_t$ and $\partial_\phi$ respectively), the superradiant frequencies are seen to be those satisfying
\begin{equation}
\label{eq:superrad}
0 \le {m\omega}\le \frac{a m^2}{2Mr_+ - Q^2} .
\end{equation}
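Here $r_+ = M + \sqrt{M^2 - a^2 - Q^2}$ is the radius of the event horizon (see the next section). Since $r_+$ is a root of $\Delta = r^2 - 2Mr + a^2 + Q^2$, one has $r_+^2 + a^2 = 2Mr_+ - Q^2$, so the upper bound in \eqref{eq:superrad} is exactly $m^2\omega_+$, where $\omega_+ := a/(r_+^2 + a^2)$ is the angular velocity of the horizon.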
In particular, the $\partial_t$ energy identity does not preclude finite-energy exponentially growing mode solutions (with explicit $t$, $\phi$ dependence $e^{-i \w t} e^{i m \phi}$ ) associated with the frequencies \eqref{eq:superrad}, with $\w$ in the upper half-plane. The statement that such modes do not exist is known as \textit{mode stability}. In the Kerr case, this has indeed been proven by Whiting in the celebrated \cite{Wh89}.
The proof of quantitative boundedness and decay for solutions of \eqref{eq:wave0} in the Kerr case given in \cite{DRS14} in fact depended on a quantitative refinement of Whiting's \cite{Wh89}. The necessary refinement was proved very recently by Shlapentokh-Rothman in \cite{SR} by first extending \cite{Wh89} to exclude resonances on the real axis and then refining this qualitative statement to a quantitative estimate.\footnote{In the case $\abs{a} \ll M$, one need not appeal to Whiting's \cite{Wh89} (or its refinement \cite{SR}) as the small parameter may be exploited to deal directly with superradiance. A boundedness result had been obtained for $\abs{a} \ll M$ in \cite{DR11} followed by decay results in \cite{AnBl}, \cite{DR09-1} and \cite{TaTo}. For the situation in the extremal case $\abs a = M$, see \cite{Ar11-2} and \cite{Ar12}. For the case where $\Lambda > 0$, see \cite{Dy10} and for the $\Lambda < 0$ case, see \cite{Ga12}, \cite{HS13-1} and \cite{HS13-2}.}
Turning to the Kerr--Newman spacetimes, even the analogue of Whiting's mode stability is absent in the literature. In the present paper, we will prove for these spacetimes both the qualitative mode stability results (in the upper half-plane and on the real axis) as well as the quantitative estimate in the spirit of \cite{SR}. In particular, the latter result is needed for the general boundedness and decay results presented in the companion paper \cite{C14-1}. The precise mode stability results are stated here in \S\ref{sec:results} and the estimate needed in \cite{C14-1} is presented here in Theorem \ref{thm:ILED}.
In the Kerr case, the crucial ingredients in the proof of mode stability given in \cite{Wh89} and \cite{SR} are the remarkable transformation properties of the radial ODE satisfied by the modes. Miraculously, all the essential elements of this structure are preserved in passing from the Kerr to the Kerr--Newman solution. In particular, we show that the radial ODE can be represented as a confluent Heun equation (See \S\ref{sec:inhomogeneous}). We then define the \textit{Whiting transform} for $u(\w,m,\lambda,r)$ with $\mathrm{Im}(\w) \ge 0$ (see \eqref{eq:WhitingTransform} for the definition). The Whiting transform takes the solution $u^*$ of a confluent Heun equation to $\tilde u$ which solves another confluent Heun equation with different coefficients (See Proposition \ref{prop:WhitingProperties}). There are three pivotal facts about this transform:
\begin{enumerate}[{(a)}]
\item The potential in the confluent Heun equation satisfied by $\tilde u$ possesses certain positivity properties. (See Proposition \ref{prop:positivity}.)
\item $\tilde u$ has `good' asymptotics near the horizon and near null infinity. (See Propositions \ref{prop:asymptUpper} and \ref{prop:asymptReal}.)
\item For $\w \neq 0$ on the real axis, the limit of $u$ at the horizon is a positive multiple of the limit of $\tilde u$ at $r \rightarrow \infty$. (See Proposition \ref{prop:asymptReal}.)
\end{enumerate}
The statements above were shown to be true for the Kerr case in \cite{Wh89} and \cite{SR}; there is no a priori reason why one would expect these properties to carry over to the Kerr--Newman case. It is thus a fortunate fact that the potential and $\Delta$ parameter for the Kerr--Newman case differ from those in the Kerr case in such a way that the conditions (a), (b) and (c) still hold.
This is discussed further in \S\ref{sec:Whiting}.
For an introduction to many concepts relevant to this paper and an overview of the Kerr case, the reader is referred to the lecture notes \cite{DR08}, the survey paper \cite{DR12} and the recent \cite{DRS14}. For background on the Kerr--Newman spacetimes, see \cite{MR2014957}, which deals with the Dirac equation.
\section{Preliminaries}
\subsection{The Kerr--Newman spacetime}
A subextremal Kerr--Newman manifold describes a stationary spacetime in which there is a rotating charged black hole. The Kerr--Newman metric depends on three physical parameters: the mass $M$, angular momentum density $a$ and charge density $Q$. These parameters are expressed in ``natural units'' where the gravitational constant and speed of light have been set to unity ($G = c = 1$).
Here we consider the \textit{subextremal} family of Kerr--Newman spacetimes in which a black hole is present. Subextremal means that $0 \le a^2 + Q^2 < M^2.$
The other cases, $a^2 + Q^2 = M^2$ (extreme Kerr--Newman) and $a^2 + Q^2 > M^2$ (fast Kerr--Newman) have profoundly different structure.
We will often drop the dependence of the metric on the parameters $(a,Q,M)$ and denote an arbitrary member of $\set{g_{a,Q,M} ~:~ a^2 + Q^2 < M^2}$ by $g$.
We set, for a fixed triplet of parameters $(a, Q,M)$,
$$
r_\pm := M \pm \sqrt{M^2 - a^2 - Q^2}
$$
and define the manifold $\mathcal M$ to be covered by a ``Boyer--Lindquist'' coordinate chart\footnote{This coordinate chart is global modulo the degeneration of polar coordinates.}
$$
\mathcal M = \set{(t, r, \theta, \phi) \in \mathbb R \times (r_+, \infty) \times \mathbb S^2}.
$$
The Kerr--Newman metric in these coordinates is
\begin{eqnarray} \label{eq:metric}
g_{M,a,Q}
&=&
-\frac{\Delta}{\rho^2}\left(dt - a \sin^2\theta d\phi \right)^2
+
\frac{\sin^2\theta}{\rho^2}\Big((r^2+a^2)d\phi - a dt\Big)^2
+
\frac{\rho^2}{\Delta}dr^2
+
\rho^2 d\theta^2, \\
\mathrm{where}~
\Delta &=& r^2 - 2Mr + a^2 + Q^2 = (r - r_+)(r - r_-)
\nonumber\\
\mathrm{and}~
\rho^2 &=& r^2 + a^2 \cos^2\theta .\nonumber
\end{eqnarray}
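Note that the factorization of $\Delta$ above is immediate from the definition of $r_\pm$, since
$$
r_+ + r_- = 2M \qquad \mathrm{and} \qquad r_+ r_- = a^2 + Q^2.
$$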
It will be useful to define another coordinate $r^*(r) : (r_+, \infty) \rightarrow (-\infty, \infty)$ (up to a constant) by
$$
\frac{dr^*}{dr} = \frac{r^2 + a^2}{\Delta}.
$$
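Since $\Delta(r_+) = 0$, the coordinate $r^*$ diverges logarithmically at the horizon:
$$
\frac{dr^*}{dr} \sim \frac{r_+^2 + a^2}{(r_+ - r_-)(r - r_+)} = \frac{2Mr_+ - Q^2}{(r_+ - r_-)(r - r_+)} \qquad \mathrm{as}~ r \rightarrow r_+,
$$
where we used the identity $r_+^2 + a^2 = 2Mr_+ - Q^2$ (a consequence of $\Delta(r_+) = 0$), while $r^* \sim r$ as $r \rightarrow \infty$.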
The manifold $\mathcal M$ can be extended to a larger manifold $\tilde{\mathcal M}$.
The degeneration of the Boyer--Lindquist coordinates at $r = r_+$ is remedied by introducing the Kerr--Newman star coordinate chart $(t^*,r,\theta, \phi^*)$, where:
\begin{equation}
\label{eq:changeCoords}
\left\{
\begin{array}{ll}
t^* = t + \bar t(r), & ~~ \frac{d\bar t}{dr} = \frac{r^2 + a^2}{\Delta},
\\
\mathrm{and} ~~\phi^* = \phi + \bar \phi(r), & ~~ \frac{d\bar \phi}{dr} = \frac{a}{\Delta}.
\end{array}
\right.
\end{equation}
From this, one sees that the metric extends smoothly to a metric $\tilde g$ (defined by the expression arising from applying \eqref{eq:changeCoords} to \eqref{eq:metric}) on
$$
\mathcal{\tilde M} = \set{ (t^*, r, \theta, \phi^*) \in \mathbb R \times (r_-, \infty) \times \mathbb S^2}.
$$
Note that $\mathcal H^+ := \set{r = r_+} = \partial \mathcal{ M} \subset \tilde{\mathcal{M}}$ is a null hypersurface in $\tilde{\mathcal{M}}$. We shall refer to $\mathcal H^+$ as the horizon.
\subsection{Mode solutions of the wave equation}
For the Kerr--Newman metric in Boyer--Lindquist coordinates, the wave equation is
\begin{equation}
\label{eq:waveKN}
\frac{1}{\rho^2\sin\theta}\left[\left( a^2\sin^2\theta - \frac{(a^2 + r^2)^2 }{\Delta} \right) \partial_t^2\psi
-
\frac{a^2}{\Delta}\partial_\phi^2\psi
-
\frac{2a (r^2 + a^2-\Delta )}{\Delta}\partial_t\partial_\phi\psi
+
\partial_r(\Delta \partial_r \psi)
+
\Lang_{\mathbb S^2} \psi\right]
= 0,
\end{equation}
where $\Lang_{\mathbb S^2}$ denotes the (unit) spherical Laplacian:
$$
\Lang_{\mathbb S^2} = \frac{1}{\sin\theta} \partial_\theta(\sin\theta \partial_\theta) + \frac{1}{\sin^2\theta}\partial_\phi^2.
$$
A general subextremal Kerr--Newman metric possesses only the two Killing fields $\partial_t$ and $\partial_\phi$. Nonetheless, Carter discovered in \cite{Ca68-2} that \eqref{eq:waveKN} can be formally separated. This is related to the existence of an additional hidden symmetry. We use this to make the following definition:
\begin{definition}\label{def:modeSol}
Let $(\mathcal M, g)$ be a subextremal Kerr--Newman spacetime. A smooth solution $\psi$ of the wave equation \eqref{eq:waveKN} is called a \emph{mode solution} if there exist $(\w,m,\ell) \in (\mathbb C \setminus \set 0) \times \mathbb Z \times \mathbb Z$ with $\ell \ge \abs{m}$
such that
$$
\psi(t,r,\theta,\phi) = R\ind(r) S\ind(\theta) e^{i m \phi} e^{-i \w t},
$$
where
\begin{enumerate}[1.]
\item $S\ind$ solves the following Sturm-Liouville problem
\begin{equation}
\displaystyle{\frac{1}{\sin\theta}
\frac{d}{d\theta}
\left(\sin\theta \frac{dS\ind}{d\theta}\right)
-
\left( \frac{m^2}{\sin^2\theta} - a^2 \w^2 \cos^2\theta \right) S\ind
+
\lambda\ind S\ind = 0} \label{eq:angularODE}
\end{equation}
with the boundary condition that
\begin{equation}
e^{i m \phi}S\ind(\theta)~\text{extends smoothly to}~\mathbb S^2,
\label{eq:angularBC}
\end{equation}
with $S\ind$ an eigenfunction with corresponding eigenvalue $\lambda\ind$ of the angular ODE \eqref{eq:angularODE}.\footnote{The Sturm--Liouville problem admits a set of eigenfunctions $\set{S\ind}_{\ell = \abs m}^\infty$ and real eigenvalues $\set{\lambda\ind}_{\ell = \abs m}^\infty$. The eigenfunctions $\set{S\ind}$ are called ``oblate spheroidal harmonics'' and define an orthonormal basis for $L^2(\sin\theta d\theta)$.}
\item $R$ solves the radial equation
\begin{eqnarray}\label{eq:Carter-r}
\left[
\partial_r(\Delta \partial_r)
-
\w^2
\left(
a^2 - \frac{(a^2 + r^2)^2 }{\Delta}
\right)
+
\frac{a^2m^2}{\Delta}
-
\frac{2am \omega (2Mr - Q^2)}{\Delta}
- \lambda\ind
\right]
R
= 0.
\end{eqnarray}
\item $\displaystyle{R(r)(r-r_+)^{-\frac{i(am - (2Mr_+ - Q^2)\w)}{r_+ - r_-} } }$ is smooth at $r = r_+$.\footnote{We will subsequently denote this as
$R(r) \sim (r-r_+)^{\frac{i(am - (2Mr_+ - Q^2)\w)}{r_+ - r_-} }$ at $r = r_+$. }
\item There exist constants $\set{C_k}_{k=0}^\infty$ such that for any $N \ge 1$,
$$
\displaystyle{R(r^*) = \frac{e^{i\w r^*} }{r} \sum_{k = 0}^N C_k r^{-k} + O(r^{-N-2}),~}
$$
for large $r$.\footnote{We will subsequently denote this as $R(r^*) \sim r^{-1} e^{i\w r^*}$ as $r \rightarrow \infty$. }
\end{enumerate}
\end{definition}
The boundary condition \eqref{eq:angularBC} and those in points 3 and 4 above are uniquely determined by requiring that $\psi$ extends smoothly to the horizon $\mathcal H^+$ and has finite energy along asymptotically flat hypersurfaces for $Im(\w) > 0$ and along hyperboloidal hypersurfaces for $Im(\w) \le 0$. See the discussion in \cite[Appendix D]{SR} for details, cf. \cite{Dy10} and \cite{Wa13}.
It is convenient to define
\begin{equation} \label{eq:defu}
u_{m\ell}^{(a\omega)}(r^*) = \sqrt{r^2 + a^2}R_{m\ell}^{(a\omega)}(r)
\end{equation}
which satisfies
\begin{equation} \label{eq:Carter}
\frac{d^2}{(dr^*)^2}u\ind(r^*) + \left(\omega^2 - V\ind(r)\right)u\ind = 0,
\end{equation}
where
\begin{eqnarray*}
V\ind(r)
=
\frac{2am\omega (r^2 + a^2 - \Delta)
-
a^2 m^2
+
\Delta \cdot (\lambda\ind + a^2\omega^2)}{(r^2 + a^2)^2}
+
\frac{\Delta( r^2 + \Delta + 2Mr)}{(r^2 + a^2)^3}
-
\frac{3 \Delta^2 r^2}{(r^2 + a^2)^4}.
\end{eqnarray*}
Note that, for $\w \in \mathbb R$, the potential $V\ind$ is real even though $R\ind$ is complex-valued.
We will often drop the indices $\w, m, \ell$ when there is no risk of confusion. We will also adopt the convention that $u'$ denotes a derivative with respect to $r^*$.
\subsection{The Wronskian}
\label{sec:Wronskian}
Through asymptotic analysis of \eqref{eq:Carter}, one can make the following definitions:
\begin{definition}
Let $u_{hor}(r^*, \w, m, \ell)$ be the unique function satisfying
\begin{enumerate}[1.]
\item $u_{hor}'' + (\w^2 - V) u_{hor} = 0$.
\item $u_{hor} \sim (r-r_+)^{\frac{i(am - (2Mr_+ - Q^2)\w)}{r_+ - r_-} }$ as $r^* \rightarrow -\infty$.
\item $\displaystyle{ \abs{ \left( (r(r^*)-r_+)^{-\frac{i(am - (2Mr_+ - Q^2)\w)}{r_+ - r_-} } u_{hor} \right) (-\infty)}^2 = 1 }$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $u_{out} (r^*, \w, m, \ell )$ be the unique function satisfying
\begin{enumerate}[1.]
\item $u_{out}'' + (\w^2 - V) u_{out} = 0$.
\item $u_{out} \sim e^{i\w r^*}$ as $r^* \rightarrow \infty$.
\item $ \abs{ \left( u_{out} e^{-i\w r^*} \right) (\infty) }^2 = 1$.
\end{enumerate}
\end{definition}
One then defines the Wronskian
\begin{equation} \label{eq:W}
W(\w,m,\ell ) = u_{hor}(r^*) u_{out}'(r^*) - u_{hor}'(r^*) u_{out}(r^*) .
\end{equation}
The Wronskian is independent of $r^*$ and may therefore be evaluated at any fixed $r^*$. It vanishes if and only if the two solutions are linearly dependent, in which case $\abs{W^{-1}} = \infty$. The quantitative mode stability result will be an explicit upper bound for $\abs{W^{-1}}$, so that $u_{out}$ and $u_{hor}$ are linearly independent and any solution of the Carter ODE \eqref{eq:Carter} can be expressed as a superposition of those solutions.
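Indeed, since $u_{hor}$ and $u_{out}$ both solve \eqref{eq:Carter},
$$
\frac{dW}{dr^*} = u_{hor} u_{out}'' - u_{hor}'' u_{out} = \left(V - \w^2\right) u_{hor} u_{out} - \left(V - \w^2\right) u_{hor} u_{out} = 0.
$$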
\subsection{The inhomogeneous equation}
\label{sec:inhomogeneous}
In the proof of Theorem \ref{thm:quantModeStab}, we will consider the following inhomogeneous form of \eqref{eq:Carter-r},
\begin{equation}
\left[
\partial_r(\Delta \partial_r)
-
\w^2
\left(
a^2 - \frac{(a^2 + r^2)^2 }{\Delta}
\right)
+
\frac{a^2m^2}{\Delta}
-
\frac{2am \omega (2Mr - Q^2)}{\Delta}
- \lambda\ind
\right]
R_{m\ell}^{(a\omega)}
= F ,
\label{eq:CarterF}
\end{equation}
where $F$ is a compactly supported smooth function on $(r_+,\infty)$.
The corresponding inhomogeneous version of \eqref{eq:Carter} is then
\begin{equation}
\frac{d^2}{(dr^*)^2}u\ind(r^*)
+
\left(\omega^2 - V\ind(r)\right)u\ind
=
H := \frac{\Delta F}{(r^2 + a^2)^{1/2}}.
\label{eq:CarterH}
\end{equation}
\section{Statement of mode stability results}
\label{sec:results}
For a subextremal Kerr--Newman spacetime $(\mathcal M, g)$, we have the following results.
\begin{theorem}[Quantitative mode stability on the real axis]\label{thm:quantModeStab}
Let
$$
\mathcal F \subset \set{(\w, m, \ell) \in \mathbb R \times \mathbb Z \times \mathbb{Z} ~|~ \ell \ge \abs{m} }
$$
be a frequency range for which
$$
C_{\mathcal F}
:=
\sup_{(\w,m,\ell) \in \mathcal F}
\left(\abs{\w} + \abs{\w}^{-1} + \abs{m} + \abs{\lambda\ind }\right)
< \infty.
$$
Then the Wronskian $W$ given by \eqref{eq:W} satisfies
$$
\sup_{(\w,m,\ell) \in \mathcal F} \abs{W^{-1}}
\le
G(C_{\mathcal F}, a, Q , M),
$$
where the function $G$ can, in principle, be given explicitly.
\end{theorem}
In proving the quantitative result above, we will also obtain the following qualitative results.
\begin{theorem}[Mode Stability on the real axis]
\label{thm:modeStabReal}
There exist no non-trivial mode solutions corresponding to $ \w \in \mathbb R \setminus \set{0}$.
\end{theorem}
\begin{theorem}[Mode Stability]
\label{thm:modeStabUpper}
There exist no non-trivial mode solutions corresponding to $Im (\w) > 0$.
\end{theorem}
Theorem \ref{thm:modeStabUpper} is the analogue of Whiting's original mode stability result \cite{Wh89}. Theorem \ref{thm:modeStabReal} is the analogue of Shlapentokh-Rothman's extension \cite{SR} of Whiting's mode stability result to the real axis. Theorem \ref{thm:quantModeStab} is the quantitative refinement of Theorem \ref{thm:modeStabReal} needed in the companion paper \cite{C14-1} for the proof of linear stability of subextremal Kerr--Newman black holes.
Note that for non-superradiant frequencies $\w$, $m$, i.e. those outside of the range \eqref{eq:superrad}, Theorem \ref{thm:modeStabReal} and Theorem \ref{thm:modeStabUpper} follow immediately from the energy identity (see \cite[\S 1.5 \& \S 1.6]{SR}). In what follows, we will not however make a distinction between superradiant and non-superradiant frequencies.
\section{The Whiting transform}
\label{sec:Whiting}
The problem with trying to derive energy estimates for the Carter ODE \eqref{eq:Carter} is that the boundary condition at $r^* = -\infty$ may give a non-positive term due to superradiance. To deal with this, we will first cast \eqref{eq:Carter} as a confluent Heun equation \eqref{eq:carterHeun}. Applying the Whiting transform \eqref{eq:WhitingTransform} to \eqref{eq:carterHeun}, we will obtain a new confluent Heun equation \eqref{eq:carterWhiting} with different coefficients and boundary conditions that allow for a useful energy estimate.
\subsection{The confluent Heun equation}
\label{sec:CHE}
We rescale $R$ as follows. Let
\begin{equation}
u^* := e^{{i \w r}}
(r- r_-)^{-\eta}
(r- r_+)^{-\xi}
R(r)
\end{equation}
where
$$
\eta := -\frac{i \left(a m-\omega \left(2 M {r_-}-Q^2\right)\right)}{{r_+}-{r_-}}
\qquad \mathrm{and} \qquad
\xi := \frac{i \left(a m-\omega \left(2 M {r_+}-Q^2\right)\right)}{{r_+}-{r_-}}.
$$
Then $u^*$ satisfies the following confluent Heun equation:
\begin{equation} \label{eq:carterHeun}
(r-r_+)(r-r_-)\frac{d^2 u^*}{dr^2}
+
\left(
\gamma (r - r_+)
+
\delta (r- r_-)
+
p(r-r_+)(r-r_-)
\right)
\frac{du^*}{dr}
+
\left(
\alpha p (r- r_-)
+
\sigma
\right) u^* = G
\end{equation}
where
\begin{eqnarray*}
\gamma &:=& 2\eta + 1,
\\
\delta &:=& 2\xi + 1,
\\
p &:=&-2 i \w,
\\
\alpha &:=&1,
\\
\sigma &:=& 2 a m \w - 2 \w r_- i - \lambda\ind - a^2 \w^2
\\
\mathrm{and~}
G &:=& e^{{i \w r}}
(r- r_-)^{-\eta}
(r- r_+)^{-\xi}
F .
\end{eqnarray*}
This can be verified by a direct calculation, generalising the analogous computation in \cite{Wh89}.
Note that, as in the (subextremal) Kerr case, $r_+$ and $r_-$ are distinct roots of $\Delta$. If $\Delta$ had more roots, or if these roots were not distinct, the Carter ODE would lie in a different class of equations.
\subsection{The transformed equation}
We now generalise the Whiting transformation to the Kerr--Newman case.
\begin{proposition}
\label{prop:WhitingProperties}
Let $Im(\w) \ge 0$, $\w \ne 0$, and let $R$ solve \eqref{eq:CarterF} with the boundary conditions of Definition \ref{def:modeSol}. Define $\tilde u$ by
\begin{equation} \label{eq:WhitingTransform}
\tilde u(x^*)
:=
(x^2 + a^2)^{1/2}
(x- r_+)^{-2iM\w}
e^{-i\w x}
\int_{r_+}^\infty
e^{\frac{2 i \w}{r_+ - r_- } (x- r_-)(r - r_-)}
(r- r_-)^\eta
(r- r_+)^\xi
e^{-i\w r}
R(r) dr
\end{equation}
where
$$
\eta := -\frac{i \left(a m-\omega \left(2 M {r_-}-Q^2\right)\right)}{{r_+}-{r_-}}
\qquad \mathrm{and} \qquad
\xi := \frac{i \left(a m-\omega \left(2 M {r_+}-Q^2\right)\right)}{{r_+}-{r_-}}.
$$
Then $\tilde u(x)$ is smooth on $(r_+,\infty)$ and satisfies the following confluent Heun equation:
\begin{equation} \label{eq:carterWhiting}
\tilde u'' + \Phi \tilde u = \tilde H,
\end{equation}
where primes denote derivatives with respect to $x^*$ (and $\frac{dx^*}{dx} = \frac{x^2 + a^2}{\Delta}$),
\begin{eqnarray*}
\tilde H(x^*) &:=&\frac{ (x- r_+)(x - r_-)}{(x^2 + a^2)^2} \tilde G(x),
\\
\tilde G(x) &:=&
\frac{(x^2 + a^2)^{1/2}}{
(x- r_+)^{2iM\w}}
e^{-i\w x}
\int_{r_+}^\infty
e^{\frac{2 i \w}{r_+ - r_- } (x- r_-)(r - r_-)}
(r- r_-)^\eta
(r- r_+)^\xi
e^{-i\w r}
F(r) dr
\\
\mathrm{and}~~ \Phi(x^*)
&:=&
\frac{(x-r_-) (x-r_+) }{\left(a^2+x^2\right)^4}
\left(
\left(2x^2 -a^2\right) ( r_- r_+)-2 M x (x^2 - 2a^2) -3 a^2x^2
\right)
\\
&&
+
\frac{(x-r_-) (x-r_+) }{\left(a^2+x^2\right)^2}
\left(
\frac{4 a m (x-M) \omega }{r_--r_+}
-
\lambda\ind -a^2 \omega ^2 \right.
\\
&&
\left.
+
\frac{8 M^2 (x-M) (x-r_-) \omega ^2}{(r_--r_+) (r_+-x)}
+
\frac{(x-r_-) \left((r_+-r_-) (x-r_+)-4 Q^2\right) \omega ^2}{r_+-r_-}\right)
\end{eqnarray*}
\end{proposition}
\begin{proof}[{\textbf{Proof}}]
It turns out that the proof is a direct modification of the computations in \cite[\S 4]{SR}. Let us remark on the fortuitous structure of the Kerr--Newman spacetimes that makes this so. We have already remarked in \S\ref{sec:CHE} that \eqref{eq:carterHeun} is a confluent Heun equation and thus (at least formally) admits non-trivial transformations. The exponents $\eta$ and $\xi$ are obtained from the indicial equations associated with \eqref{eq:Carter}. They are the unique exponents that give the correct asymptotics at $r_+$ and $r_-$.
The definitions of $\eta$, $\xi$, $r_+$ and $r_-$ for the Kerr--Newman case differ from those in the Kerr case, but the potential $V\ind$, the parameter $\Delta$ and the asymptotics of mode solutions of \eqref{eq:Carter} have the same structure. The convergence of the integral in \eqref{eq:WhitingTransform} thus follows as in \cite[\S 4]{SR}.
\end{proof}
\noindent
\textbf{Remark.} The Whiting transform is a shifted, rescaled Fourier transform of a rescaled version of $R$. This fact will be crucial in showing that the vanishing of $\tilde u$ forces $R$ to vanish.
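Concretely, writing $f(r) := (r- r_-)^{\eta}(r- r_+)^{\xi}e^{-i\w r}R(r)$, the integral in \eqref{eq:WhitingTransform} equals
$$
\int_{r_+}^\infty e^{i z(x) (r - r_-)} f(r)\, dr, \qquad z(x) := \frac{2\w(x- r_-)}{r_+ - r_-},
$$
i.e.~the Fourier transform of $f$ (extended by zero) evaluated along the ray $\set{z(x) ~|~ x > r_+}$.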
\subsection{Asymptotics of the transformed solution}
The good asymptotic properties of $\tilde u$ (c.f.~(b) and (c) of the introduction) are encapsulated in the following two propositions.
\begin{proposition}
\label{prop:asymptUpper}
Let $\w$ and $\tilde u$ be as in the statement of Proposition \ref{prop:WhitingProperties}. If $Im(\w) > 0$ then
\begin{enumerate}[1.]
\item $\tilde u = O \left( (x - r_+)^{2M Im(\w) } \right)$ as $x \rightarrow r_+$.
\item $\tilde u' = O\left( (x - r_+)^{2M Im(\w) } \right)$ as $x \rightarrow r_+$.
\item $\tilde u = O\left( e^{- Im(\w) x^{1 + 2M Im(\w) }} \right)$ as $x \rightarrow \infty$.
\item $\tilde u' = O\left( e^{- Im(\w) x^{1 + 2M Im(\w) }} \right)$ as $x \rightarrow \infty$.
\end{enumerate}
\end{proposition}
\begin{proposition}
\label{prop:asymptReal}
Let $\w$ and $\tilde u$ be as in the statement of Proposition \ref{prop:WhitingProperties}. If $\w \in \mathbb{R} \setminus \set 0$ then
\begin{enumerate}[1.]
\item $\tilde u$ and $\tilde u'$ are uniformly bounded.
\item
$\abs{\tilde u(\infty)}^2
=
\frac{(r_+ - r_-)^2 \abs{\Gamma(2\xi + 1)}^2 }{4(2 M r_+ - Q^2)\w^2 }
\abs{u(-\infty)}^2$, where $\Gamma(z) := \int_0^\infty e^{-t} t^{z-1} dt$ is the Gamma function.
\item $\tilde u' - i \w \tilde u = O\left( x^{-1} \right)$ as $x^* \rightarrow \infty$.
\item $\tilde u' + i \w r_+^{-1}(r_+ - r_-) \tilde u= O\left( x - r_+ \right)$ as $x \rightarrow r_+$.
\end{enumerate}
\end{proposition}
The proofs of these propositions are direct modifications of the computations in \cite[\S 4]{SR}.
For all the results above, except Proposition \ref{prop:asymptReal}.2, the difference between the Kerr and Kerr--Newman case is encapsulated within the different definitions of $r_+$ and $r_-$.
Proposition \ref{prop:asymptReal}.2 is exceptional in that we see an explicit difference from the Kerr case. This is due to the presence of $(2 M r_+ - Q^2)$ in the null generator of the Kerr--Newman horizon.
Proposition \ref{prop:asymptReal}.2 is crucial in proving the quantitative result Theorem \ref{thm:quantModeStab} as it provides a correspondence between the horizon asymptotics of the solution of the Carter ODE and the large $r^*$ asymptotics of the transformed solution. This correspondence is what allows for the quantitative estimate of the horizon flux in terms of the inhomogeneity $F$ (see the proof of Proposition \ref{prop:WronskianEst}).
We can now prove the qualitative Theorems \ref{thm:modeStabReal} and \ref{thm:modeStabUpper}.
\section{Proofs of mode stability}
\label{sec:proofs}
\subsection{Qualitative results}
\label{sec:qualitative}
The final element of the structure necessary to prove mode stability for the Kerr--Newman spacetimes is the following positivity property (c.f.~(a) of the introduction):
\begin{proposition}
\label{prop:positivity}
Under the conditions of Proposition \ref{prop:WhitingProperties},
$$
Im(\Phi \bar \w) \ge 0.
$$
If $\w \in \mathbb R \setminus \set 0$, then $\Phi$ is real-valued.
\end{proposition}
\begin{proof}[{\textbf{Proof}}]
The second statement is clear from the definition of $\Phi$.
A (tedious) computation (writing $\w = \w_R + i \w_I$) shows that
\begin{eqnarray*}
Im(\Phi \bar \w)
&=&
\frac{(x-r_-) (x-r_+) }{\left(a^2+x^2\right)^2}
Im\left((-\lambda\ind -a^2\w^2) \bar \w\right)
+
\frac{(x-r_-)^2 (x-r_+)^2 \w_I \abs{\w}^2 }{
\left(a^2+x^2\right)^2}
\\
&&
+
\frac{(x-r_-) (x-r_+) }{\left(a^2+x^2\right)^4}
{(\w_I)
\left(x^2(r_+ - a^2 - Q^2) + r_-(x^2 + a^2)(x-r_+) + 2xa^2(x + r_- - r_+)\right)}
\\
&&
+
\frac{(x-r_-) (x-r_+) \w_I \abs{\w}^2 }{
\left(a^2+x^2\right)^2}
\frac{(x-r_-) \left(8 M^2 (x-M)-4 Q^2 (x-r_+)+(r_+-r_-) (x-r_+)^2\right)}
{(r_+-r_-) (x-r_+)}.
\end{eqnarray*}
To see that $Im\left((-\lambda\ind -a^2\w^2) \bar \w\right) \ge 0$, multiply \eqref{eq:angularODE} by $\overline{ \w S\ind}$ and integrate by parts over $[0,\pi]$.
The positivity of the other terms follows from the following chain of inequalities
$$
0 \le r_- \le M \le r_+ \le x
$$
and the subextremal condition $a^2 + Q^2 < M^2$.
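Indeed, setting $s := \sqrt{M^2 - a^2 - Q^2}$, the subextremal condition gives
$$
0 \le s \le M, \qquad r_- = M - s \in [0, M], \qquad r_+ = M + s \ge M,
$$
while $r_+ \le x$ holds throughout the domain of integration.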
\end{proof}
We define the \textit{microlocal energy current}
$$
\tilde Q_T := Im(\tilde u' \overline{\w \tilde u}).
$$
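If $\tilde u$ solves \eqref{eq:carterWhiting}, then, writing $\w = \w_R + i \w_I$ as above, a direct computation gives
$$
(\tilde Q_T)' = -\w_I \abs{\tilde u'}^2 - Im(\Phi \bar \w)\abs{\tilde u}^2 + Im\left(\tilde H \, \overline{\w \tilde u}\right);
$$
the homogeneous ($\tilde H = 0$) and real-frequency special cases of this identity are used repeatedly below.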
\begin{proof}[Proof of Theorem \ref{thm:modeStabUpper} (Mode stability in the upper half-plane)]
Let $\w = \w_R + i\w_I$ with $\w_I > 0$ and consider a mode solution of \eqref{eq:waveKN} with $(u\ind, S\ind, \lambda\ind)$.
Define $\tilde u$ to be the Whiting transform \eqref{eq:WhitingTransform} of $u\ind$. Then Proposition \ref{prop:asymptUpper} implies that $\tilde Q_T(\pm \infty) = 0$ so
$$
0
=
-\int_{-\infty}^\infty (\tilde Q_T)' dr^*
=
\int_{-\infty}^\infty \w_I \abs{\tilde u'}^2 + Im(\Phi \bar \w) \abs{\tilde u}^2 \,dr^*.
$$
Since Proposition \ref{prop:positivity} guarantees that $Im(\Phi \bar \w) \ge 0$, we conclude that $\tilde u$, the Whiting transform of $u$, vanishes. Hence
$$
\tilde R(x) := \int_{r_+}^\infty
e^{\frac{2 i \w}{r_+ - r_- } (x- r_-)(r - r_-)}
(r- r_-)^\eta
(r- r_+)^\xi
e^{-i\w r}
R(r) dr = 0.
$$
Extending $R$ by $0$ and writing, as in the remark following Proposition \ref{prop:WhitingProperties}, $f(r) = (r- r_-)^\eta (r- r_+)^\xi e^{-i\w r} R(r)$, we see that $\tilde R$ is (up to a change of variable) the Fourier transform
$$
\hat R(z) := \int_{-\infty}^\infty
e^{i z (r- r_-)}
f(r)\, dr ,
$$
namely $\tilde R(x) = \hat R\left( 2\w (x- r_-)/(r_+ - r_-) \right)$.
Since $f$ is supported in $[r_+,\infty) \subset [0,\infty)$, $\hat R$ can be extended holomorphically into the upper half plane.
Since $\tilde R$ vanishes on $x \in (r_+, \infty)$, $\hat R = 0$ on the ray $\set{z = 2\w (x - r_-)/(r_+ - r_-)~| ~ x\in (r_+,\infty)}$, which lies in the open upper half plane because $Im(\w) > 0$. Analyticity then implies that $\hat R$ and hence $R$ vanish everywhere.
\end{proof}
\begin{lemma}[Unique continuation \cite{SR}]
\label{lemma:UniqueContinuation} Suppose that we have a solution $u(r^*) : (-\infty,\infty) \rightarrow \mathbb C$ to
$$
u'' + (\w^2 - V )u = 0
$$
such that
\begin{enumerate}[1.]
\item $\w \in \mathbb R \setminus \set 0$,
\item $u$ is uniformly bounded and $(\abs{u'}^2 + \abs{u}^2 )(\infty) = 0$,
\item $V$ is real, uniformly bounded, $V = O(r^{-1})$ as $r \rightarrow \infty$ and $V' = O(r^{-2})$ as $r \rightarrow \infty$.
\end{enumerate}
Then $u$ is identically $0$.
\end{lemma}
\begin{proof}[{\textbf{Proof}}]
This follows exactly as in \cite[\S 6]{SR}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:modeStabReal} (Mode stability on the real axis)]
Let $\w \in \mathbb R \setminus \set 0$ and consider a mode solution of \eqref{eq:waveKN} with $(u\ind, S\ind, \lambda\ind)$.
Define $\tilde u$ by \eqref{eq:WhitingTransform}. By Proposition \ref{prop:positivity}, $\Phi$ is real, so
$(\tilde Q_T)' = 0$. Hence $\tilde Q_T(\infty) - \tilde Q_T(-\infty) = 0$.
The boundary conditions from Proposition \ref{prop:asymptReal} then imply that
$$
\w^2\abs{\tilde u(\infty)}^2
+
\abs{\tilde u'(\infty)}^2
+
\w^2 \frac{r_+ - r_-}{r_+} \abs{\tilde u(-\infty)}^2
+
\frac{r_+ }{r_+- r_-} \abs{\tilde u'(-\infty)}^2 = 0.
$$
By Lemma \ref{lemma:UniqueContinuation}, we conclude that $\tilde u$ vanishes.
Extending $R$ by $0$, we see that
$$
\tilde R(y) := \int_{-\infty}^\infty
e^{\frac{2 i \w}{r_+ - r_- } (y- r_-)(r - r_-)}
(r- r_-)^\eta
(r- r_+)^\xi
e^{-i\w r}
R(r) dr
$$
vanishes for $\set{y \in (r_+,\infty)}$. Since the Fourier transform of a non-trivial function supported in $(0, \infty)$ cannot vanish on an open set, $R$ must vanish everywhere.
\end{proof}
\subsection{Quantitative results}
\label{sec:quantitative}
The strategy is to express $u$ in terms of the functions $u_{out}$ and $u_{hor}$ and the Wronskian $W$ defined in \S\ref{sec:Wronskian} and obtain an estimate for $W^{-1}$ in terms of $u(-\infty)$. This quantity is then estimated using the ODE \eqref{eq:CarterH}.
\begin{proposition}
\label{prop:WronskianEst}
Define $\mathcal F$ as in Theorem \ref{thm:quantModeStab}. For $(\w,m,\ell) \in \mathcal F$ let $u$ solve \eqref{eq:CarterH} with $H(x^*)$ a smooth, compactly supported function. Then there exists a positive constant $C := C({\mathcal F}, a, Q , M)$ such that
$$
\abs{u(-\infty)}^2
\le
C \int_{r_+}^\infty \abs{F(r)}^2r^4 \,dr .
$$
\end{proposition}
\begin{proof}[{\textbf{Proof}}]
Since $(\tilde Q_T)' = \w Im(\tilde H \overline{\tilde u})$,
$$
\int_{-\infty}^\infty \w Im(\tilde H \overline{\tilde u})\,dr^*
=
\tilde Q_T(\infty) - \tilde Q_T(-\infty).
$$
The boundary conditions from Proposition \ref{prop:asymptReal} imply that
$$
\w^2\abs{\tilde u(\infty)}^2
+
\abs{\tilde u'(\infty)}^2
+
\w^2 \frac{r_+ - r_-}{r_+} \abs{\tilde u(-\infty)}^2
+
\frac{r_+ }{r_+- r_-} \abs{\tilde u'(-\infty)}^2
= \int_{-\infty}^\infty \w Im(\tilde H \overline{\tilde u})\,dr^* .
$$
So changing variables, applying the Plancherel identity and the Cauchy--Schwarz inequality (with a parameter $\epsilon > 0$), we have
$$
\w^2 \abs{\tilde u(\infty)}^2
\le
\int_{-\infty}^\infty \w Im(\tilde H \overline{\tilde u})\,dr^*
\le
C \left(
\epsilon^{-1}\int_{r_+}^\infty \abs{F(r)}^2r^4 dr
+
\epsilon \int_{r_+}^\infty \abs{R(r)}^2 dr
\right).
$$
Then by Proposition \ref{prop:asymptReal}
$$
\abs{ u(-\infty)}^2
=
\frac{4 \w^2(2M r_+ - Q^2)}{(r_+ - r_-)^2\abs{\Gamma(2\xi + 1)}^2 } \abs{\tilde u(\infty)}^2
\le
C \left(
\epsilon^{-1}\int_{r_+}^\infty \abs{F(r)}^2r^4 dr
+
\epsilon \int_{r_+}^\infty \abs{R(r)}^2 dr
\right).
$$
Finally,
$$
\int_{r_+}^\infty \abs{R(r)}^2 dr
\le
C \int_{r_+}^\infty \abs{F(r)}^2r^4 dr,
$$
by the same argument as found in \cite[\S 5]{SR}. Combining the last two displays and choosing, say, $\epsilon = 1$ yields the claim.
\end{proof}
For the quantitative result, we construct solutions to the Carter ODE in terms of the Wronskian and apply the proposition above.
\begin{lemma}
\label{lemma:inhomContruction}
Let $H(x^*)$ be compactly supported. For any $(\w,m,\ell) \in \mathcal F$ (where $\mathcal F$ is as defined in Theorem \ref{thm:quantModeStab}), the function
$$
u({r^*}) = W(\w,m,\ell)^{-1}
\left(
u_{out}(r^*)
\int_{-\infty}^{r^*}
u_{hor}(x^*) H(x^*) dx^*
+
u_{hor}(r^*)
\int^{\infty}_{r^*}
u_{out}(x^*) H(x^*)
dx^*
\right)
$$
satisfies
$$
u'' + (\w^2 - V) u = H
$$
and the boundary conditions of a mode solution (see Definition \ref{def:modeSol}).
\end{lemma}
\begin{proof}[{\textbf{Proof}}]
This is verified by a direct calculation.
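In outline: differentiating under the integral signs, the boundary contributions cancel and
$$
u'(r^*) = W^{-1}
\left(
u_{out}'(r^*)
\int_{-\infty}^{r^*}
u_{hor}(x^*) H(x^*)\, dx^*
+
u_{hor}'(r^*)
\int^{\infty}_{r^*}
u_{out}(x^*) H(x^*)\,
dx^*
\right);
$$
differentiating once more and using $u_{hor}'' = (V - \w^2)u_{hor}$ and $u_{out}'' = (V - \w^2)u_{out}$ together with the definition \eqref{eq:W} of $W$ yields $u'' = (V - \w^2)u + H$, as required.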
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:quantModeStab} (Quantitative mode stability on the real axis)]
Define $u$ by Lemma \ref{lemma:inhomContruction}. Then
$$
\abs{u(-\infty)}^2
=
\abs{W^{-2}} \abs{\int^{\infty}_{-\infty}
u_{out}(x^*) H(x^*)
dx^* }^2.
$$
Rearranging this expression and applying Proposition \ref{prop:WronskianEst} we find that
$$
\abs{W^{-2}}
=
\frac{ \abs{u(-\infty)}^2 }
{ \abs{\int^{\infty}_{-\infty}
u_{out}(x^*) H(x^*)
dx^* }^2 }
\le
C
\frac{\int_{r_+}^{\infty} \abs{
(r^2 + a^2)^{1/2} \Delta^{-1} H }^2 r^4
\,dr }
{ \abs{\int^{\infty}_{-\infty}
u_{out}(x^*) H(x^*)
dx^* }^2 } .
$$
Note that by Proposition \ref{prop:asymptReal}, for sufficiently large $x$, $\abs{u_{out}(x^*) - e^{i\w x^*} }< Cx^{-1}$ for an explicit $C$. Since $W$ is independent of $H$ we choose a compactly supported $H$ for which the right hand side of the estimate above is finite. We thus have a quantitative estimate for $\abs{W^{-2}} $.
\end{proof}
\section{Application: Integrated local energy decay}
We now apply Theorem \ref{thm:quantModeStab} to prove Theorem \ref{thm:ILED}, which provides a quantitative energy decay estimate for solutions of the wave equation \eqref{eq:waveKN} on subextremal Kerr--Newman spacetimes which are supported in a compact range of superradiant frequencies. This is the estimate appealed to in \cite{C14-1} to control the horizon term $|{u\ind(-\infty)}|^2$ in the bounded superradiant frequency region.
We wish to apply Carter's separation to the solution of \eqref{eq:waveKN}. In order to perform this separation, we must be able to take the Fourier transform in time. We therefore deal with solutions of \eqref{eq:waveKN} which belong to the following class of functions.
\begin{definition}
A smooth function $f(t,r,\theta,\phi)$ is said to be admissible if for any multi-indices $\alpha$, $\beta$ such that $\abs \alpha \ge 1$, $\abs \beta \ge 0$, we have
\begin{enumerate}[1.]
\item
$\displaystyle{
\int_{r > r_0}
\int_{\mathbb S^2}
\abs{\partial^\alpha f}^2
|_{t=0} r^2 ~
\sin\theta dr~ d\theta ~d\phi
< \infty }$
for sufficiently large $r_0$.
\item
$\displaystyle{
\int_{0}^\infty
\abs{\partial^\beta f}^2
dt
< \infty }$
for any $(r,\theta, \phi) \in (r_+, \infty) \times \mathbb S^2$.
\item
$\displaystyle{
\int_{0}^\infty
\int_{K}
\abs{\partial^\beta f}^2~
\sin\theta ~dr ~d\theta~ d\phi ~dt
< \infty }$
for any compact $K \subset (r_+, \infty) \times \mathbb S^2$.
\end{enumerate}
For an admissible function $f$ we also define
\begin{equation} \label{eq:energy}
\abs{\partial f}^2 := \abs{(\partial_t + \partial_{r^*})f}^2
+
\Delta\abs{(\partial_t - \partial_{r^*})f}^2
+
r^{-2}\left(\sin^{-2}\theta\abs{\partial_\phi f}^2 + \abs{\partial_\theta f}^2 \right).\footnote{The apparent degeneration of this energy as $r \rightarrow \infty$ is due to the hyperboloidal nature of $\Sigma_0$. The term $\Delta\abs{(\partial_t - \partial_{r^*})f}^2$ converges to the transversal derivative at the horizon.}
\end{equation}
\end{definition}
The main application of Theorem \ref{thm:quantModeStab} in \cite{C14-1} is to admissible solutions $\psi$ of \eqref{eq:waveKN} which are cut off as follows.
\begin{definition}
Let $\Sigma_0$ be a spacelike hyperboloidal hypersurface connecting the horizon $\mathcal H^+$ and future null infinity. Let $\Sigma_1$ be the time 1 image of $\Sigma_0$ under the flow generated by $\partial_t$. Then define a smooth cut-off $\gamma$ which is identically $0$ in the past of $\Sigma_0$ and identically $1$ in the future of $\Sigma_1$.
We define $\psi_{\mbox{\Rightscissors}} := \gamma \psi$, which satisfies the inhomogeneous wave equation
\begin{equation} \label{eq:cutOffWaveEq}
\square_g \psi_{{\mbox{\Rightscissors}}} = F,
~~~\text{where}~~~F = (\square_g \gamma)\psi + 2 \nabla^\mu\gamma\nabla_\mu \psi.
\end{equation}
\end{definition}
\begin{proposition}[Carter's separation]
Admissible solutions $f$ of \eqref{eq:waveKN} and \eqref{eq:cutOffWaveEq} can be expressed as
\begin{equation} \label{eq:decomposition}
f(t, r, \theta, \phi) = \overbrace{\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \underbrace{ \sum_{m,\, \ell \ge \abs{m}} R\ind(r) \cdot S^{(a\omega)}_{m\ell}(\cos\theta) e^{i m \phi} }_{\mathrm{Oblate ~spheroidal ~expansion}} e^{-i\omega t} }^{\mathrm{Fourier ~transform}} ~d\omega.
\end{equation}
The function $R\ind$ corresponding to $f = \psi$ solves \eqref{eq:Carter-r}.
The function $R\ind$ corresponding to $f = \psi_{\mbox{\Rightscissors}}$ satisfies the inhomogeneous equation \eqref{eq:CarterF} with $F = F\ind$, the Fourier transform of $F$ projected to the oblate spheroidal harmonic corresponding to $\lambda\ind$. The rescaled function $u\ind$ satisfies \eqref{eq:CarterH} with $H = H\ind := \Delta (r^2 + a^2)^{-1/2} F\ind$, where this equality is to be understood in the sense of $L^2_{\w \in \mathcal B }\ell^2_{ m, \ell \in \mathcal C}$. Note moreover that this $H$ is not compactly supported.
\end{proposition}
\begin{proof}[{\textbf{Proof}}]
See \cite[\S 5]{C14-1}.
\end{proof}
\begin{theorem}
\label{thm:ILED}
Let $\psi_{\mbox{\Rightscissors}}$ be an admissible solution of \eqref{eq:cutOffWaveEq} and let
$\mathcal B \subset \mathbb R$
and
$$
\mathcal C \subset \set{( m, \ell) \in \mathbb Z \times \mathbb{Z} ~|~ \ell \ge \abs{m} }
$$
such that
$$
C_{\mathcal B}
:=
\sup_{\w \in \mathcal B}
\left(\abs{\w} + \abs{\w}^{-1} \right)
< \infty
~\mathrm{and}~~
C_{\mathcal C}
:=
\sup_{m,\ell \in \mathcal C}
\left( \abs{m} + \abs{\lambda\ind}\right)
< \infty.
$$
Then, for any $r_+ < r_0 < r_1 < \infty$, there exists a constant $K := K(r_0,r_1, C_{\mathcal{B}}, C_{\mathcal{C}}, a ,Q, M)$ such that
\begin{equation} \label{eq:ILED}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\left(
\left(
\abs{u\ind(-\infty)}^2 + \abs{u\ind(\infty)}^2
\right)
+
\int_{r_0}^{r_1} \abs{\partial_{r^*}u\ind}^2 + \abs{u\ind}^2 ~ dr^*
\right)
~d\w
\le
K \int_{\Sigma_0} \abs{\partial\psi}^2 ,
\end{equation}
where $\abs{\partial\psi}^2$ is defined by \eqref{eq:energy}, $u\ind =\sqrt{r^2 + a^2} R\ind$ and each $R\ind$ solves \eqref{eq:CarterF} for $\w \in \mathcal B$ and $(m,\ell) \in \mathcal C$.
\end{theorem}
\begin{proof}[{\textbf{Proof}}]
For $u$ satisfying the hypotheses of the theorem, we have for any $r^* \in (-\infty, \infty)$,
\begin{eqnarray}
\label{eq:representation}
u({r^*}) = W(\w,m,\ell )^{-1}
\left(
u_{out}(r^*)
\int_{-\infty}^{r^*}
u_{hor}(x^*) H(x^*) dx^*
+
u_{hor}(r^*)
\int^{\infty}_{r^*}
u_{out}(x^*) H(x^*)
dx^*
\right),
\\
\label{eq:drepresentation}
u'({r^*}) = W(\w,m,\ell )^{-1}
\left(
u_{out}'(r^*)
\int_{-\infty}^{r^*}
u_{hor}(x^*) H(x^*) dx^*
+
u_{hor}'(r^*)
\int^{\infty}_{r^*}
u_{out}(x^*) H(x^*)
dx^*
\right),
\end{eqnarray}
where the identities above hold in the sense of $L^2_{\w \in \mathcal B }\ell^2_{ m, \ell \in \mathcal C}$ (see \cite[\S 3]{SR} for the full derivation of this representation).\footnote{Roughly speaking, this is the converse of Lemma \ref{lemma:inhomContruction}.}
By the construction of $u_{hor}$ and $u_{out}$, there exists a positive $K := K( C_{\mathcal{B}}, C_{\mathcal{C}}, a ,Q, M)$ such that
\begin{equation} \label{eq:uHorOutBdd}
\sup_{r^* \in \mathbb R, \w \in \mathcal B, (m, \ell) \in \mathcal C} \left(\abs{u_{hor}} +\abs{u_{out}} \right)
< K < \infty,
\end{equation}
Evaluating \eqref{eq:representation} at $r^* = -\infty$ and taking \eqref{eq:uHorOutBdd} into account,
\begin{equation} \label{eq:horizonControl}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\abs{u\ind(-\infty)}^2
~d\w
\le
K
\limsup_{r^* \rightarrow -\infty}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\abs{
\int^{\infty}_{r^*}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
~ d\w.
\end{equation}
For the term $\abs{u(\infty)}^2$ we apply the microlocal energy current $Q_T := Im(u'\,\overline{\w u})$, now associated to $u$ itself:
\begin{eqnarray*}
\w^2 \abs{u\ind(\infty)}^2
&=&
Q_T(\infty)
=
Q_T(-\infty)
+
\int_{-\infty}^\infty (Q_T)' dr^*
\\
&=&
\frac{\w\left(am - (2Mr_+ - Q^2)\w\right)}{2Mr_+ - Q^2} \abs{u\ind(-\infty)}^2
+
\w \int_{-\infty}^\infty Im(H\ind \bar u\ind) dr^*
\end{eqnarray*}
So by \eqref{eq:horizonControl},
\begin{eqnarray}
\label{eq:infinityControl}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\abs{u\ind(\infty)}^2
~d\w
&\le&
K
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\abs{
\int^{\infty}_{-\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
~ d\w
\nonumber
\\
&& +
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\w \int_{-\infty}^\infty Im(H\ind \bar u\ind) dr^*
~ d\w.
\end{eqnarray}
For the integral term, we begin by taking $R_1$ much larger than $r_1$ and applying \eqref{eq:representation}:
\begin{eqnarray*}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\sup _{r^* \in (r_0,r_1)}\abs{u\ind}^2
~d\w
&\le&
K
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\left(
\sup _{r^* \in [r_0,r_1]}
\abs{
\int_{-\infty}^{r^*}
u_{hor}(x^*) H\ind(x^*) dx^*
}^2
\right.
\nonumber \\
&& \left.
+
\sup _{r^* \in [r_0,r_1]}
\abs{
\int_{r^*}^{R_1}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
\right.
\nonumber \\
&& \left.
+
\abs{
\int_{R_1} ^{\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
\right) ~ d\w
\nonumber
\\
&\le&
K
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\left(
\int_{r_+}^{R_1}
\abs{F}^2
dr
+
\abs{
\int_{R_1} ^{\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
\right) ~ d\w \nonumber.
\end{eqnarray*}
This estimate may be integrated over $(r_0,r_1)$ to obtain
\begin{equation}
\label{eq:integralControl0}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\int_{r_0}^{r_1}\abs{u\ind}^2
~d\w
\le
K
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\left(
\int_{r_+}^{R_1}
\abs{F}^2
dr
+
\abs{
\int_{R_1} ^{\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
\right) ~ d\w .
\end{equation}
The same argument, with \eqref{eq:representation} replaced with \eqref{eq:drepresentation} yields
\begin{equation}
\label{eq:integralControl1}
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\int_{r_0}^{r_1}\abs{(u\ind)' }^2
~d\w
\le
K
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
W^{-2}
\left(
\int_{r_+}^{R_1}
\abs{F}^2
dr
+
\abs{
\int_{R_1} ^{\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
\right) ~ d\w .
\end{equation}
Collecting \eqref{eq:horizonControl}, \eqref{eq:infinityControl}, \eqref{eq:integralControl0} and \eqref{eq:integralControl1} and applying Theorem \ref{thm:quantModeStab} to control $W^{-2}$, we have
\begin{eqnarray*}
&& \int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\left(
\left(
\abs{u\ind(-\infty)}^2 + \abs{u\ind(\infty)}^2
\right)
+
\int_{r_0}^{r_1} \abs{\partial_{r^*}u\ind}^2 + \abs{u\ind}^2 ~ dr^*
\right)
~d\w
\\
&\le&
K G
\int_\mathcal{B}
\sum_{m,\ell \in \mathcal C}
\left[
\abs{
\int_{R_1} ^{\infty}
u_{out}(x^*) H\ind(x^*)
dx^*
}^2
+
\int_{r_+}^{R_1}
\abs{F}^2
dr
+
\w \int_{-\infty}^\infty Im(H\ind \bar u\ind) dr^* \right]
~ d\w.
\end{eqnarray*}
It remains to control the right hand side of this estimate by $\int_{\Sigma_0} \abs{\partial\psi}^2$.
The control of the first term is achieved using the proof of \cite[Lemma 3.3]{SR}. The remaining terms are controlled using the methods in \cite[\S 7]{C14-1}.
\end{proof}
\begin{remark}
We can replace the hyperboloidal hypersurface $\Sigma_0$ with an asymptotically flat hypersurface in Theorem \ref{thm:ILED} as follows. Let $\Sigma_0^*$ be an asymptotically flat hypersurface that agrees with $\Sigma_0$ for $\set{r \le R}$ and which lies in the past of $\Sigma_0$. Choosing $R$ large enough that $T = \partial_t$ is timelike in $\set{r \ge R}$, applying the $T$ energy estimate immediately implies that
$$
\int_{\Sigma_0} \abs{\partial\psi}^2 \le C\int_{\Sigma_0^*} \abs{\nabla_{g_{\Sigma_0^*}} \psi }^2 + \abs{n_{\Sigma_0^*} \psi}^2,
$$
so we can then replace the right hand side of \eqref{eq:ILED} by
this integral over an asymptotically flat hypersurface.
\end{remark}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
In a recent paper Strange\cite{S12} discussed some quantum-mechanical
properties of an electron in a constant magnetic field. Since the system is
axially symmetric along the field direction (chosen to be the $z$ axis) then
the projection of the angular momentum along that axis is a constant of the
motion. The motion of the electron is free along the $z$ axis and bounded on
the plane $x-y$. Restricting the motion of the electron to this plane
Strange discussed the uncertainty relation for the azimuthal angle $\phi $
and the $z$--component of the angular momentum $L_{z}$ that he assumed to be
$\Delta \phi \Delta L_z \ge \hbar$. However, he did not take into account
some of the subtleties of this uncertainty relation that make it quite
different from that for a cartesian coordinate and its conjugate linear
momentum. The $\phi -L_{z}$ uncertainty relation was discussed by several
authors in the past\cite{J64,K65,PT69,K70,P79,C01}. There is even an
interesting series of pedagogical articles on the subject\cite{PT69, K70,
P79, C01}, not without some controversy\cite{PT69, K70}. According to those
papers the uncertainty relation invoked by Strange is incorrect. For this
reason we deem it worthwhile to carry out a more detailed analysis of the
results derived by this author, particularly because the $\phi-L_{z}$
uncertainty relation is suitable for an undergraduate course on quantum
mechanics\cite{PT69, K70, P79, C01}.
In section \ref{sec:uncertainty} we derive the uncertainty
relation for an arbitrary pair of observables following
Chisolm\cite{C01}. In section \ref {sec:example} we first outline
Strange's results based on the incorrect $\phi -L_{z}$ uncertainty
relation and then derive an exact one following
Kraus\cite{K65,K70} and Chisolm\cite{C01}. We also contrast the
exact uncertainty relation with the incorrect one by means of a
state that is somewhat more general than the one chosen by
Strange. Finally, in section \ref{sec:conclusions} we summarize
the main results of this paper and draw conclusions.
\section{The uncertainty relations}
\label{sec:uncertainty}
In order to make this paper sufficiently self-contained and
facilitate the discussion of the uncertainty relation for the
electron in a constant magnetic field\cite{S12} in what follows we
derive the uncertainty relation for an arbitrary pair of
observables. There are different ways of deriving it\cite{P79,C01}
and in what follows we resort to the well known Schwarz
inequality\cite{C01}. To this end consider the usual complex inner
product in quantum mechanics in terms of the bra-ket notation:
$\left\langle f\right| \left. g\right\rangle =\left\langle
g\right| \left. f\right\rangle ^{*}$. The Schwarz inequality
states that
\begin{equation}
\left| \left\langle f\right| \left. g\right\rangle \right| ^{2}\leq
\left\langle f\right| \left. f\right\rangle \left\langle g\right| \left.
g\right\rangle \label{eq:schwarz}
\end{equation}
for any two vectors $\left| f\right\rangle $ and $\left| g\right\rangle $ in
the state vector space. Chisolm\cite{C01} derived a somewhat more general
uncertainty relation from the obvious expression
\begin{equation}
\left| \left\langle f\right| \left. g\right\rangle \right| ^{2}=\frac{1}{4}
\left( \left\langle f\right| \left. g\right\rangle +\left\langle g\right|
\left. f\right\rangle \right) ^{2}+\frac{1}{4}\left| \left\langle f\right|
\left. g\right\rangle -\left\langle g\right| \left. f\right\rangle \right|
^{2}
\end{equation}
However, for present purposes it is sufficient to take into account that
\begin{equation}
\left| \left\langle f\right| \left. g\right\rangle \right| \geq \frac{1}{2}
\left| \left\langle f\right| \left. g\right\rangle -\left\langle g\right|
\left. f\right\rangle \right|
\end{equation}
(that is to say $\left| \left\langle f\right| \left.
g\right\rangle \right| \geq \left| \mathrm{Im}\left\langle
f\right| \left. g\right\rangle \right| $) that leads to
\begin{equation}
\sqrt{\left\langle f\right| \left. f\right\rangle \left\langle g\right|
\left. g\right\rangle }\geq \frac{1}{2}\left| \left\langle f\right| \left.
g\right\rangle -\left\langle g\right| \left. f\right\rangle \right|
\label{eq:schwarz2}
\end{equation}
Let $\left| \psi \right\rangle $ be the state of the system normalized to
unity ($\left\langle \psi \right| \left. \psi \right\rangle =1$) and $\hat{A}
$ and $\hat{B}$ the Hermitean operators for two quantum-mechanical
observables. We define $\left| f\right\rangle =\left( \hat{A}-\left\langle
\hat{A}\right\rangle \right) \left| \psi \right\rangle $ and $\left|
g\right\rangle =\left( \hat{B}-\left\langle \hat{B}\right\rangle \right)
\left| \psi \right\rangle $, where $\left\langle \hat{Q}\right\rangle
=\left\langle \psi \right| \hat{Q}\left| \psi \right\rangle $, so that
\begin{eqnarray}
\left\langle f\right| \left. f\right\rangle &=&\left\langle \hat{A}
^{2}\right\rangle -\left\langle \hat{A}\right\rangle ^{2}=\left( \Delta
A\right) ^{2} \nonumber \\
\left\langle g\right| \left. g\right\rangle &=&\left\langle \hat{B}
^{2}\right\rangle -\left\langle \hat{B}\right\rangle ^{2}=\left( \Delta
B\right) ^{2} \label{eq:DeltaA_DeltaB}
\end{eqnarray}
Since $\left\langle f\right| \left. g\right\rangle =\left\langle \hat{A}\psi
\right| \left. \hat{B}\psi \right\rangle -\left\langle \hat{A}\right\rangle
\left\langle \hat{B}\right\rangle $, it follows from equation~(\ref
{eq:schwarz2}) that
\begin{equation}
\Delta A\Delta B\geq \frac{1}{2}\left| \left\langle \hat{A}\psi \right|
\left. \hat{B}\psi \right\rangle -\left\langle \hat{B}\psi \right| \left.
\hat{A}\psi \right\rangle \right| \label{eq:uncertaiinty_gen}
\end{equation}
If $\hat{B}\left| \psi \right\rangle $ belongs to the domain of $\hat{A}$
and $\hat{A}\left| \psi \right\rangle $ to the domain of $\hat{B}$ then we
can write
\begin{eqnarray}
\left\langle \hat{A}\psi \right| \left. \hat{B}\psi \right\rangle
&=&\left\langle \psi \right| \hat{A}\hat{B}\left| \psi \right\rangle
\nonumber \\
\left\langle \hat{B}\psi \right| \left. \hat{A}\psi \right\rangle
&=&\left\langle \psi \right| \hat{B}\hat{A}\left| \psi \right\rangle
\label{eq:turn_over}
\end{eqnarray}
and thus obtain the standard uncertainty relation
\begin{equation}
\Delta A\Delta B\geq \frac{1}{2}\left| \left\langle \left[ \hat{A},\hat{B}
\right] \right\rangle \right| \label{eq:uncertainty}
\end{equation}
where $\left[ \hat{A},\hat{B}\right] =\hat{A}\hat{B}-\hat{B}\hat{A}$ is the
well known commutator. The interested reader will find a more detailed
discussion of the domains and ranges of operators in the literature already
cited\cite{J64,K65,PT69,K70,P79,C01}.
Before applying the results of this section to a particular model
in the next one it is worth stressing the fact that equation
(\ref{eq:uncertainty}) is valid provided that the root-mean-square
deviations $\Delta A$ and $\Delta B$ are calculated according to
equation~(\ref{eq:DeltaA_DeltaB}) and that equations
(\ref{eq:turn_over}) hold for the chosen state $\left| \psi
\right\rangle $. If the chosen state and operators do not satisfy
the latter conditions we can still use the more general inequality
(\ref {eq:uncertaiinty_gen}).
\section{Uncertainty relation for the azimuthal angle and angular momentum}
\label{sec:example}
Strange\cite{S12} described the motion of the electron in the
$x-y$ plane in polar coordinates $x=r\cos \phi $, $y=r\sin \phi $,
where $0\leq r=\sqrt{x^{2}+y^{2}}<\infty $ and $0\leq \phi <2\pi
$. For simplicity we omit the variable $r$ that is not relevant to
the discussion of the uncertainty relation for $\hat{\phi}$ and
$\hat{L}_{z}$ that commutes with the Hamiltonian operator of the
system. In the coordinate representation we define these operators
as follows:
\begin{eqnarray}
\hat{\phi}\psi (\phi ) &=&\phi \psi (\phi ) \nonumber \\
\hat{L}_{z}\psi (\phi ) &=&-i\hbar \frac{\partial }{\partial \phi }\psi
(\phi ) \label{eq:operators}
\end{eqnarray}
where $\psi (\phi )=\left\langle \phi \right| \left. \psi
\right\rangle $. Although it has been argued that this definition
of the quantum-mechanical operator for the azimuthal angle may not
be correct\cite{J64,PT69,K70} we keep it here because it is
relevant to the discussion of the results obtained by
Strange\cite{S12}. Besides, Chisolm\cite{C01} already chose this
definition of $\hat{\phi}$ in his discussion of the uncertainty
relations. We assume the state vectors to be periodic functions of
period $2\pi $ ($f(\phi +2\pi )=f(\phi )$) and choose the standard
inner product
\begin{equation}
\left\langle f\right| \left. g\right\rangle =\int_{0}^{2\pi }f(\phi
)^{*}g(\phi )\,d\phi \label{eq:inner_prod_phi}
\end{equation}
Strange\cite{S12} stated that ``The azimuthal angle-angular
momentum uncertainty relation is $\Delta \phi \Delta L\geq \hbar
$''. The origin of this uncertainty relation is unclear as it
differs from the standard one $\Delta x\Delta p\geq \hbar /2$ for
the coordinate $x$ and its conjugate momentum $p$. In order to
verify this uncertainty relation he later chose ``an equally
weighted sum of the $m=0$ and $m=1$ state''. Since he did not
write the state explicitly we suppose that it was of the form
\begin{equation}
\psi _{S}(\phi )=\frac{1}{2\sqrt{\pi }}\left( 1+e^{i\phi }\right)
\label{eq:psi_Strange}
\end{equation}
from which we obtain $\left\langle \hat{L}_{z}\right\rangle =\hbar
/2$, $\left\langle \hat{L}_{z}^{2}\right\rangle =\hbar ^{2}/2$ and
$\Delta L_{z}=\hbar /2$ in agreement with his results. Arguing
that ``the uncertainty in angle arises directly from the fact that
the origin of the angular coordinate is arbitrary'' he chose
$\left( \Delta \phi \right) _{S}=\pi $ and obtained $\left( \Delta
\phi \right) _{S}\Delta L_{z}=\pi \hbar /2$. However, in section
\ref{sec:uncertainty} we showed that the uncertainty relation
(\ref{eq:uncertainty}) is valid if the root-mean-square deviations
are calculated as in equation (\ref{eq:DeltaA_DeltaB}). In the
present case the inequality holds for $\Delta \phi =\sqrt{2+\pi
^{2}/3}$ and, therefore, also for $\left( \Delta \phi \right)
_{S}>\Delta \phi $.
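For the reader's convenience, we record the elementary integrals behind these values: since $\left| \psi _{S}(\phi )\right| ^{2}=\frac{1}{2\pi }\left( 1+\cos \phi \right) $,
\begin{equation*}
\left\langle \hat{L}_{z}\right\rangle =\frac{\hbar }{4\pi }\int_{0}^{2\pi }\left( 1+e^{-i\phi }\right) e^{i\phi }\,d\phi =\frac{\hbar }{2},\qquad \left\langle \hat{L}_{z}^{2}\right\rangle =\frac{\hbar ^{2}}{2},
\end{equation*}
\begin{equation*}
\left\langle \hat{\phi}\right\rangle =\pi ,\qquad \left\langle \hat{\phi}^{2}\right\rangle =\frac{4\pi ^{2}}{3}+2,\qquad \left( \Delta \phi \right) ^{2}=2+\frac{\pi ^{2}}{3}.
\end{equation*}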
The results just discussed are valid for the particular state (\ref
{eq:psi_Strange}). It is convenient to derive the $\phi -L_{z}$ uncertainty
relation for an arbitrary wave function $\psi (\phi )$ of period $2\pi $. If
we integrate $\left\langle \hat{L}_{z}\psi \right| \left. \phi \psi
\right\rangle $ by parts we obtain\cite{C01}
\begin{equation}
\left\langle \hat{L}_{z}\psi \right| \left. \phi \psi \right\rangle
=\left\langle \psi \right| \left. \hat{L}_{z}\phi \psi \right\rangle +i\hbar
2\pi \left| \psi (2\pi )\right| ^{2}
\end{equation}
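Hence, for $\psi $ normalized to unity,
\begin{equation*}
\left\langle \hat{L}_{z}\psi \right| \left. \phi \psi \right\rangle -\left\langle \phi \psi \right| \left. \hat{L}_{z}\psi \right\rangle =\left\langle \left[ \hat{L}_{z},\hat{\phi}\right] \right\rangle +i\hbar 2\pi \left| \psi (2\pi )\right| ^{2}=i\hbar \left( 2\pi \left| \psi (2\pi )\right| ^{2}-1\right) ,
\end{equation*}
where we used $\left[ \hat{L}_{z},\hat{\phi}\right] =-i\hbar $,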
and equation (\ref{eq:uncertaiinty_gen}) leads to the exact inequality
\begin{equation}
\Delta \phi \Delta L_{z}\geq \frac{\hbar }{2}\left| 2\pi \left| \psi (2\pi
)\right| ^{2}-1\right| \label{eq:Delta_phi_Delta_Lz}
\end{equation}
already derived earlier by other authors\cite{K65,K70,C01}. The
reason why $\left\langle \hat{L}_{z}\psi \right| \left. \phi \psi
\right\rangle \neq \left\langle \psi \right| \left.
\hat{L}_{z}\phi \psi \right\rangle $ is that $\phi \psi (\phi )$,
unlike $\psi (\phi )$, is not a periodic function of period $2\pi
$ and, consequently, does not belong to the domain of
$\hat{L}_{z}$ (a more detailed discussion of this issue is
available in the articles already
cited\cite{J64,K65,PT69,K70,P79,C01}). However, note that when
$\left| \psi \right\rangle =\left| \psi _{S}\right\rangle $ the
right-hand-side of equation (\ref{eq:Delta_phi_Delta_Lz}) is
exactly $\hbar /2$ because $\left| \psi _{S}(2\pi )\right|
^{2}=1/\pi $. In other words, the `standard' uncertainty relation
$\Delta \phi \Delta L_{z}\geq \hbar /2$ is valid for the
particular wave function $\psi _{S}(\phi )$ chosen by Strange as
an illustrative example.
Since the right-hand side of equation
(\ref{eq:Delta_phi_Delta_Lz}) may be smaller than $\hbar /2$, the
standard uncertainty relation $\Delta \phi \Delta L_{z}\geq \hbar
/2$ is not guaranteed. We think that it is a worthy pedagogical
experiment to test its validity on other state functions. For
example, we can try a more general linear combination of the same
two states with $m=0$ and $m=1$:
\begin{equation}
\psi (a,\phi )=\frac{1}{\sqrt{2\pi }}\left( a+\sqrt{1-a^{2}}e^{i\phi }\right)
\label{eq:Psi(a,phi)}
\end{equation}
where $-1\leq a\leq 1$, which reduces to $\psi _{S}(\phi )$ when
$a=1/\sqrt{2}$. With this simple function we easily obtain
\begin{eqnarray}
R(a) &=&\frac{\hbar }{2}\left| 2\pi \left| \psi (a,2\pi )\right|
^{2}-1\right| =\hbar |a|\sqrt{1-a^{2}} \nonumber \\
\Delta L_{z} &=&\hbar |a|\sqrt{1-a^{2}}=R(a) \nonumber \\
\Delta \phi &=&\left( 4a\sqrt{1-a^{2}}+\frac{\pi ^{2}}{3}\right) ^{1/2}
\label{eq:R(a)_DLz_Dphi}
\end{eqnarray}
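These expressions follow from $\left| \psi (a,\phi )\right| ^{2}=\frac{1}{2\pi }\left( 1+2a\sqrt{1-a^{2}}\cos \phi \right) $ together with
\begin{equation*}
\left\langle \hat{L}_{z}\right\rangle =\hbar \left( 1-a^{2}\right) ,\qquad \left\langle \hat{L}_{z}^{2}\right\rangle =\hbar ^{2}\left( 1-a^{2}\right) ,\qquad \left\langle \hat{\phi}\right\rangle =\pi ,\qquad \left\langle \hat{\phi}^{2}\right\rangle =\frac{4\pi ^{2}}{3}+4a\sqrt{1-a^{2}}.
\end{equation*}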
Note that $\Delta L_{z}=0$ when $a=0$ or $a=\pm 1$ because $\psi
(0,\phi )$ and $\psi (\pm 1,\phi )$ are eigenfunctions of
$\hat{L}_{z}$, and that in both cases $\Delta \phi =\pi
/\sqrt{3}$. Besides, since $\Delta L_{z}=R(a)$, the inequality $\Delta \phi \Delta L_{z}\geq R(a)$ gives $\Delta \phi \geq 1$ whenever $R(a)\neq 0$; in fact, \eqref{eq:R(a)_DLz_Dphi} shows that $\left( \Delta \phi \right) ^{2}\geq \pi ^{2}/3-2>1$ for all $-1\leq a\leq 1$.
Fig.~\ref{Fig:DphiDLz} shows that $\Delta \phi \Delta L_{z}=\hbar
/2$ at four points: $a_{1}\approx -0.91$, $a_{2}\approx -0.41$,
$a_{3}\approx 0.25$ and $a_{4}\approx 0.97$. The standard
uncertainty relation $\Delta \phi \Delta L_{z}\geq \hbar /2$ holds
only for $a_{1}\leq a\leq a_{2}$ and $a_{3}\leq a\leq a_{4}$,
while, on the other hand, the exact one $\Delta \phi \Delta
L_{z}\geq R(a)$ is valid for all $a$. In addition to it,
$R(a)=\hbar /2$ only for $a=\pm 1/\sqrt{2}$, that is to say, for
an equally weighted sum of the states with $m=0$ and $m=1$.
Fig.~\ref{Fig:Dphi} shows that $\pi >\Delta \phi >1$ for all $-1\leq a\leq 1$
so that if the uncertainty relation holds for the root-mean-square deviation
$\Delta \phi $ then it also holds for $\left( \Delta \phi \right) _{S}=\pi $
as argued above.
Fig.~\ref{Fig:piDLz} shows that $\pi \Delta L_{z}=\hbar /2$ at
four points $a_{1}^{\prime }=-a_{4}^{\prime }\approx -0.99$ and
$a_{2}^{\prime }=-a_{3}^{\prime }\approx -0.16$ and that $\pi
\Delta L_{z}\geq \hbar /2$ for $a_{1}^{\prime }\leq a\leq
a_{2}^{\prime }$ and $a_{3}^{\prime }\leq a\leq a_{4}^{\prime }$.
This uncertainty relation fails for $a$ outside those intervals.
We thus see that the inequality invoked by Strange\cite{S12}
(which he arbitrarily chose to be $\Delta \phi \Delta L_{z}\geq
\hbar $) is not valid for all possible states of the system.
For simplicity we have restricted the discussion of the
uncertainty relation to states that depend only on the azimuthal
angle. By no means does such restriction invalidate the
conclusions drawn from the state (\ref {eq:Psi(a,phi)}) that are
illustrated in figures \ref{Fig:DphiDLz}, \ref {Fig:Dphi} and
\ref{Fig:piDLz}. However, as a further pedagogical exercise it is
worth taking into account the actual motion of the electron on the
$x-y $ plane. If we repeat the calculation for states $f(r,\phi
)=\left\langle r,\phi \right| \left. f\right\rangle $ and the
inner product
\begin{equation}
\left\langle f\right| \left. g\right\rangle =\int_{0}^{\infty
}\int_{0}^{2\pi }f(r,\phi )^{*}g(r,\phi )r\,d\phi \,dr
\end{equation}
we obtain the exact uncertainty relation
\begin{equation}
\Delta \phi \Delta L_{z}\geq \frac{\hbar }{2}\left| 2\pi \rho (2\pi
)-1\right| \label{eq:Delta_phi_Delta_Lz_r_phi}
\end{equation}
where
\begin{equation}
\rho (\phi )=\int_{0}^{\infty }\left| \psi (r,\phi )\right| ^{2}r\,dr
\label{eq:rho(phi)}
\end{equation}
Equation (\ref{eq:Delta_phi_Delta_Lz_r_phi}) is a generalization
of the uncertainty relation (\ref{eq:Delta_phi_Delta_Lz}) that was
derived earlier by Kraus\cite{K65,K70}. Note that equation
(\ref{eq:Delta_phi_Delta_Lz_r_phi}) is suitable for the $(r,\phi
)$-dependent states chosen by Strange\cite {S12} to illustrate the
probability backflow. For example, using Strange's three-term
wavefunction (his equation (11) properly normalized)\cite{S12} we
obtain $\Delta \phi \Delta L_{z}\approx 1.99\hbar $ and
$\frac{\hbar }{2}\left| 2\pi \rho (2\pi )-1\right| \approx
0.844\hbar $ that satisfy the uncertainty relation
(\ref{eq:Delta_phi_Delta_Lz_r_phi}). Exactly in the same way we
can easily generalize the uncertainty relations derived by
Chisolm\cite{C01} that provide tighter lower bounds to the
products of root-mean-square deviations.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have shown that the uncertainty relation invoked
by Strange\cite{S12} in his discussion of the probability backflow
is only valid for a particular set of wave functions. The electron
in a constant magnetic field is a suitable example for showing
that the $\phi -L_{z}$ uncertainty relation should be applied
carefully because it is different from the $x-p$ one. In order to
keep the discussion as simple as possible we have avoided more
complicated issues like the correct form of the operator for the
azimuthal angle and of its root-mean-square
deviation\cite{J64,K65,PT69,K70}. Instead, we have kept the most
straightforward definitions of both the operator $\hat{\phi}$ and
its root-mean-square deviation $\Delta \phi $\cite{C01} that
proved suitable for the analysis of the results obtained by
Strange\cite{S12}.
Finally, we point out that in the case of the motion of a particle
in three dimensions one can easily derive uncertainty relations
similar to equation (\ref{eq:Delta_phi_Delta_Lz_r_phi}) that
generalize those derived earlier by other
authors\cite{K65,K70,C01}.
\section{Introduction}
The fractional order Sobolev spaces $W^{s,p}(\RN)$, $p\in[1,\infty)$, $s\in(0,1)$, endowed with the Gagliardo--Slobodeckij seminorm, which is defined for
smooth compactly supported functions $u$ as
\begin{equation*}
|u|_{s,p}^p = \int_{\RN}\int_{\RN} \left(\frac{|u(x) - u(y)|}{|x-y|^s}\right)^p \frac1{|x-y|^N} \dx\dy,
\end{equation*}
have played an important role in the theory of partial differential equations and its applications for a long time (see the introductory section of \cite{Hitchhiker}). Much as it is tempting to think that
\begin{align}
\lim_{s\to1^-} |u|_{s,p}^p &\approx \int_{\RN} |\nabla u(x)|^p\dx \label{intro:s=1_wrong}\\
\intertext{or}
\lim_{s\to0^+} |u|_{s,p}^p &\approx \int_{\RN} |u(x)|^p\dx, \nonumber
\end{align}
the Gagliardo--Slobodeckij seminorm notoriously fails to capture these limiting cases\textemdash to that end, it is sufficient to consider any nonconstant $u\in\mathcal C_0^\infty(\RN)$ and observe that $|u|_{s,p}^p$ converges to $\infty$ as $s\to1^-$ or $s\to0^+$. Nevertheless, it was discovered around 20~years ago that these ``defects'' can be, in a sense, ``fixed'' by introducing certain compensatory factors. Namely, for every $u\in\mathcal C_0^\infty(\RN)$, a special case of what is now often called the Bourgain--Brezis--Mironescu formula \cite{BBM} tells us that
\begin{equation}
\lim_{s\to 1^-} (1-s) \int_{\RN}\int_{\RN} \left(\frac{|u(x) - u(y)|}{|x-y|^s}\right)^p \frac1{|x-y|^N} \dx\dy = C(N,p) \int_{\RN} |\nabla u(x)|^p\dx. \label{intro:BBM_formula}
\end{equation}
Moreover, V.G.~Maz'ya and T.~Shaposhnikova proved in \cite{MS} that
\begin{equation}
\lim_{s\to 0^+} s \int_{\RN}\int_{\RN} \left(\frac{|u(x) - u(y)|}{|x-y|^s}\right)^p \frac1{|x-y|^N} \dx\dy = C(N,p) \int_{\RN} |u(x)|^p \dx. \label{intro:MS_formula}
\end{equation}
Recently, a completely different approach, not involving integration of fractional difference quotients at all, to repairing \eqref{intro:s=1_wrong} was taken by H.~Brezis, J.~Van Schaftingen and P.-L.~Yung. They proved in \cite{BvSY} that, instead of introducing a compensatory factor, the limit as $s\to1^-$ can be recovered if the strong $L^p$ norm of fractional difference quotients is replaced by the weak $L^{p,\infty}$ quasi-norm. More precisely, they obtained the following result.
Let $p\in[1,\infty)$ and $u\in\mathcal C_0^\infty(\RN)$ and define
\begin{equation*}
E_{\lambda,1} = \left|\left\{(x,y)\in \R^{2N}\colon x\neq y, \frac{|u(x) - u(y)|^p}{|x-y|^{N+p}}\geq \lambda^p\right\}\right|_{2N},
\end{equation*}
where $|\cdot|_{2N}$ stands for the Lebesgue measure on $\R^{2N}$. In \cite{BvSY} it was shown that
\begin{align*}
c(N,p)\int_{\RN} |\nabla u(x)|^p \dx \leq &\sup_{\lambda > 0} \lambda^p E_{\lambda,1} \leq C(N)\int_{\RN} |\nabla u(x)|^p\dx \\
\intertext{and}
\lim_{\lambda\to\infty} \lambda^p E_{\lambda,1} &= C(N,p) \int_{\RN} |\nabla u(x)|^p\dx.
\end{align*}
Following this innovatory approach, Q.~Gu and P.-L.~Yung established in \cite{GY} other, possibly even more unanticipated, formulae. They complement the Maz'ya--Shaposhnikova formula \eqref{intro:MS_formula} in the same way the result of Brezis, Van Schaftingen and Yung complements the Bourgain--Brezis--Mironescu formula \eqref{intro:BBM_formula}. Namely the result of \cite{GY} asserts that, for every $u\in L^p(\RN)$, $p\in[1, \infty)$,
\begin{align}
c(N)\int_{\RN} |u(x)|^p\dx \leq &\sup_{\lambda > 0} \lambda^p E_{\lambda,0} \leq C(N)\int_{\RN} |u(x)|^p\dx \label{intro:GU_Yung_equivalent_norm}
\intertext{and}
\lim_{\lambda\to\infty} \lambda^p E_{\lambda,0} &= C(N) \int_{\RN} |u(x)|^p\dx, \label{intro:GU_Yung_limit}
\end{align}
where
\begin{equation*}
E_{\lambda,0} = \left|\left\{(x,y)\in \R^{2N}\colon x\neq y, \frac{|u(x) - u(y)|^p}{|x-y|^{N}}\geq \lambda^p\right\}\right|_{2N}.
\end{equation*}
The classical results \eqref{intro:BBM_formula} and \eqref{intro:MS_formula} were recently considerably strengthened in \cite{ACPS0, ACPS1, FB-S} by replacing the $p$-th power in the integrals with Orlicz functions, thus allowing for non-polynomial growth.
The aim of this short paper is to similarly extend the new developments of \cite{GY}, i.e.~\eqref{intro:GU_Yung_equivalent_norm} and \eqref{intro:GU_Yung_limit}, by replacing the $p$-th power with a general Orlicz function globally satisfying the $\Delta_2$ condition. In this way, we express the Orlicz modular in terms of measures of certain level sets, without using any integral. Our proof technique is based on the argument presented in \cite{GY}, appropriately extended to the Orlicz framework.
\medskip
In what follows, we introduce some basic notations and definitions, needed for understanding the setting we will be considering in our main result. A proper detailed treatment of Orlicz functions and classes may be found e.g.~in \cite{RR}.
\medskip
A \emph{Young function} $\Phi\colon [0,\infty)\to[0,\infty)$ is any continuous convex function vanishing at $0$. Note that Young functions are nondecreasing. We say that a Young function $\Phi$ (globally) satisfies the \emph{$\Delta_2$ condition} if there is $k > 0$ such that $\Phi(2t)\leq k\Phi(t)$, for every $t>0$. Then necessarily $k\geq2$, which follows from the convexity of $\Phi$. We denote by $\Delta_2(\Phi)$ the infimum over all such $k$.
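For instance, $\Phi(t)=t^p$, $p\in[1,\infty)$, satisfies the $\Delta_2$ condition with $\Delta_2(\Phi)=2^p$, since $\Phi(2t)=2^p\Phi(t)$ for every $t>0$; on the other hand, the Young function $\Phi(t)=e^t-1$ does not satisfy the $\Delta_2$ condition globally, because $\Phi(2t)/\Phi(t)\to\infty$ as $t\to\infty$.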
Given a Young function $\Phi$, we say that a measurable function $u\colon \RN\to\R$ belongs to the \emph{Orlicz class $\Lphi$}, and write $u\in\Lphi$, if
\begin{equation*}
\int_{\RN}\Phi(|u(x)|)\dx < \infty.
\end{equation*}
If $\Phi$ satisfies the $\Delta_2$ condition, $u\in\Lphi$ implies
\begin{equation*}
\int_{\RN}\Phi(\gamma|u(x)|)\dx < \infty, \qquad \text{for every $\gamma > 0$}.
\end{equation*}
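Indeed, if $\Phi(2t)\leq k\Phi(t)$ for every $t>0$, then choosing $n\in\mathbb{N}$ such that $2^n\geq\gamma$ and iterating the $\Delta_2$ inequality, the monotonicity of $\Phi$ yields
\begin{equation*}
\int_{\RN}\Phi(\gamma|u(x)|)\dx \leq \int_{\RN}\Phi(2^n|u(x)|)\dx \leq k^n\int_{\RN}\Phi(|u(x)|)\dx < \infty.
\end{equation*}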
As usual, $\omega_N$ denotes the volume of the unit ball in $\RN$.
\section{Main result}
\begin{thm}
Let $\Phi$ be a Young function satisfying the $\Delta_2$ condition. Let $u\in\Lphi$ and for every $t>0$ define
$$
E_t = \left\{ (x,y)\in\R^{2N}\colon\ x\ne y,\ \frac{\Phi(|u(x)-u(y)|)}{|x-y|^N} \ge \Phi(t) \right\}.
$$
Then
\begin{equation}\label{2}
2\omega_N \irn \Phi(|u(x)|)\dx = \lim_{t\to 0^+} \Phi(t)\,\lenn{E_t}.
\end{equation}
Furthermore,
\begin{equation}\label{main:equivalent_expression_modular}
2\omega_N \irn \Phi(|u(x)|)\dx\leq \sup_{t>0} \Phi(t)\,\lenn{E_t} \leq 2\omega_N \Delta_2(\Phi) \irn \Phi(|u(x)|)\dx.
\end{equation}
\end{thm}
\begin{proof}
For every $t>0$ define the set
$$
H_t = \{ (x,y)\in E_t\colon\ |y|>|x|\}
$$
and observe that, thanks to symmetry, it satisfies $\lenn{H_t}=\frac12 \lenn{E_t}$.
First, we suppose that $u$ has compact support; i.e.,~there exists $R>0$ such that
$$
\operatorname{supp} u \subset B_R.
$$
Notice that, if $(x,y)\in H_t$, then necessarily $x\in B_R$, otherwise we would have $x,y\in \R^N\setminus B_R$ and thus
$u(x)=u(y)=0$, which would imply $(x,y)\notin H_t$.
For a fixed $x\in B_R$ define the sets
$$
H_{t,x} = \{ y\in\RN\colon\ (x,y)\in H_t\} = \left\{ y\in\RN \colon |y|>|x|,\ \frac{\Phi(|u(x)-u(y)|)}{|x-y|^N} \ge \Phi(t) \right\}
$$
and
$$
H_{t,x,R} = H_{t,x}\setminus B_R = \left\{ y\in\RN \colon |y|>R,\ \frac{\Phi(|u(x)|)}{|x-y|^N} \ge \Phi(t) \right\}.
$$
Obviously, we have
\begin{equation}\label{3}
H_{t,x,R} \subset H_{t,x} \subset H_{t,x,R}\cup B_R.
\end{equation}
The first inclusion together with the definition of $H_{t,x,R}$ implies
\begin{equation}\label{4}
\len{H_{t,x}}\ge \len{H_{t,x,R}} \ge \omega_N\frac{\Phi(|u(x)|)}{\Phi(t)} - \omega_NR^N,
\end{equation}
while the second inclusion in \eqref{3} implies
\begin{equation}\label{5}
\len{H_{t,x}} \le \omega_N\frac{\Phi(|u(x)|)}{\Phi(t)} + \omega_NR^N.
\end{equation}
Since $x\in B_R$ was arbitrarily chosen, we may integrate \eqref{4} and \eqref{5} over $B_R$ with respect to $x$ and multiply by $\Phi(t)$ to get
\begin{align*}
\omega_N \int_{B_R} \Phi(|u(x)|) \dx - \Phi(t)\omega_N^2R^{2N} & \le \Phi(t)\int_{B_R}\len{H_{t,x}} \dx \\
& \le \omega_N \int_{B_R} \Phi(|u(x)|) \dx + \Phi(t)\omega_N^2R^{2N}.
\end{align*}
Recalling that $u$ is supported in $B_R$ and $\lenn{H_t}=\frac12 \lenn{E_t}$, we may further rewrite this as
\begin{align}
2\omega_N \irn \Phi(|u(x)|) \dx - 2\Phi(t)\omega_N^2R^{2N} & \le \Phi(t)\lenn{E_t} \notag\\
& \le 2\omega_N \irn \Phi(|u(x)|) \dx + 2\Phi(t)\omega_N^2R^{2N}.\label{8}
\end{align}
Letting $t\to 0^+$, we obtain \eqref{2}.
Now we are going to extend the result beyond compactly supported functions. Suppose that $u\colon\R^N\to \R$ is measurable.
For any fixed $t>0$, the set $E_t$ satisfies
\begin{equation}\label{6}
E_t \subset \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi(2|u(x)|)}{|x-y|^N} \ge \Phi(t)\right\}
\cup \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi(2|u(y)|)}{|x-y|^N} \ge \Phi(t)\right\}.
\end{equation}
Indeed, if $(x,y)\in\R^{2N}$ is not contained in either of the two sets on the right-hand side, then, by monotonicity and convexity of $\Phi$,
\begin{align*}
\Phi(|u(x)-u(y)|) & \le \Phi(|u(x)|+|u(y)|)\\
& \le \frac12 \Phi\left(2|u(x)|\right) + \frac12\Phi\left(2|u(y)|\right) \\
& < \Phi(t)|x-y|^N,
\end{align*}
hence $(x,y)\notin E_t$. This shows \eqref{6}.
Using the symmetry of the two sets on the right-hand side of \eqref{6}, we obtain
\begin{align*}
\lenn{E_t} \le 2 \irn \irn \chi_{\left\{ (x,y)\in\R^{2N}\colon\ |x-y|^N\le \frac{\Phi(2|u(x)|)}{\Phi(t)}\right\}}(x,y) \dy \dx
= 2\frac{\omega_N}{\Phi(t)} \irn \Phi(2|u(x)|) \dx,
\end{align*}
hence
\begin{equation}\label{7}
\sup_{t>0}\Phi(t)\lenn{E_t} \le 2 \omega_N \irn \Phi(2|u(x)|) \dx.
\end{equation}
Notice that neither the assumption $u\in\Lphi$ nor the $\Delta_2$ condition of $\Phi$ has been used yet, so this estimate in fact holds for any measurable $u$ and any Young function $\Phi$. If $\Phi$ satisfies the $\Delta_2$ condition, \eqref{7} readily implies the second inequality in \eqref{main:equivalent_expression_modular}.
Assume that $u\in \Lphi$. Choose $R>0$ and define
\begin{equation}\label{14}
u_R = u\chi_{B_R}, \qquad v_R = u - u_R.
\end{equation}
Furthermore, choose $t>0$, $\lambda\in(0,1)$ and define
\begin{align*}
A_1 & = \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi\left(\frac1\lambda|u_R(x)-u_R(y)|\right)}{|x-y|^N} \ge \Phi(t)\right\}
\intertext{and}
A_2 & = \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi\left(\frac1{1-\lambda}|v_R(x)-v_R(y)|\right)}{|x-y|^N} \ge \Phi(t)\right\}.
\end{align*}
Then $E_t\subset A_1 \cup A_2$. Similarly as before, this can be seen by using monotonicity and convexity of $\Phi$ to get
\begin{align*}
\Phi(|u(x)-u(y)|) & \le \Phi(|u_R(x)-u_R(y)| + |v_R(x)-v_R(y)|) \\
& \le \lambda \Phi\left(\frac1\lambda|u_R(x)-u_R(y)|\right) + (1-\lambda) \Phi\left(\frac1{1-\lambda}|v_R(x)-v_R(y)|\right),
\end{align*}
from which the inclusion follows.
Observe that the set $A_1$ is obtained by replacing $u$ with $\frac{u_R}\lambda$ in the definition of $E_t$. Since $\frac{u_R}\lambda$ is compactly supported and belongs to $\Lphi$ (since $\Phi$ satisfies the $\Delta_2$ condition),
we may use the previously obtained estimate \eqref{8} with $A_1$ and $\frac{u_R}\lambda$ in place of $E_t$ and $u$, respectively, to get
\begin{equation*}
\Phi(t)\lenn{A_1} \le 2\omega_N \irn \Phi\left(\frac{|u_R(x)|}\lambda\right) \dx + 2\Phi(t)\omega_N^2R^{2N}.
\end{equation*}
Analogously, applying \eqref{7} to the function $\frac{v_R}{1-\lambda}$ in place of $u$ ($A_2$ plays the role of $E_t$ for this function), we get
\begin{equation*}
\Phi(t)\lenn{A_2} \le 2 \omega_N \irn \Phi\left(\frac{2|v_R(x)|}{1-\lambda}\right) \dx.
\end{equation*}
As $E_t\subset A_1 \cup A_2$, this gives
\begin{equation*}
\Phi(t)\lenn{E_t} \le 2\omega_N \left[ \irn \Phi\left(\frac{|u_R(x)|}\lambda\right) \dx + \Phi(t)\omega_NR^{2N} + \irn \Phi\left(\frac{2|v_R(x)|}{1-\lambda}\right) \dx\right].
\end{equation*}
Since $|u_R|\le |u|$, $|v_R|\le |u|$, $u\in\Lphi$ and $\Phi$ satisfies the $\Delta_2$ condition, both integrals above are finite regardless of the choice of $\lambda$,
and the second integral (with a fixed $\lambda$) vanishes as $R\to\infty$ by the dominated convergence theorem. Hence, consecutively letting $t\to 0^+$, $R\to \infty$ and $\lambda\to 1^-$, we finally obtain
\begin{equation}\label{12}
\limsup_{t\to 0^+} \Phi(t)\lenn{E_t} \le 2\omega_N \irn \Phi(|u(x)|) \dx.
\end{equation}
It remains to show the opposite inequality for the lower limit.
Fix $R>0$, $\lambda\in(0,1)$ and define $u_R$, $v_R$ as in \eqref{14}. Then $|u|-|v_R|=|u_R|$ and, by convexity of $\Phi$, for any $(x,y)\in\R^{2N}$ we have
\begin{equation}\label{13}
\frac1\lambda \Phi(\lambda|u_R(x)-u_R(y)|) - \frac{1-\lambda}\lambda \Phi\left( \frac{\lambda}{1-\lambda}|v_R(x)-v_R(y)|\right) \le \Phi(|u(x)-u(y)|) .
\end{equation}
For any $t>0$ define
\begin{align*}
A_3 & = \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi\left(\lambda|u_R(x)-u_R(y)|\right)}{|x-y|^N} \ge \Phi(t)\right\}
\intertext{and}
A_4 & = \left\{ (x,y)\in \R^{2N}\colon\ \frac{\Phi\left(\frac\lambda{1-\lambda}|v_R(x)-v_R(y)|\right)}{|x-y|^N} \ge \Phi(t)\right\}.
\end{align*}
Whenever $(x,y)\in A_3\setminus A_4$, we have
\begin{align*}
\Phi(t) & = \frac1\lambda \Phi(t) - \frac{1-\lambda}\lambda \Phi(t)\\
& < \frac1{|x-y|^N} \left( \frac1\lambda \Phi(\lambda|u_R(x)-u_R(y)|) - \frac{1-\lambda}\lambda \Phi\left( \frac{\lambda}{1-\lambda}|v_R(x)-v_R(y)|\right) \right) \\
& \le \frac{\Phi(|u(x)-u(y)|)}{|x-y|^N},
\end{align*}
where the last inequality follows from \eqref{13}. This shows that $(x,y)\in E_t$. Thus, we have $E_t \supset A_3\setminus A_4$.
Proceeding analogously as before and realizing that $A_3$ plays the role of $E_t$ for the compactly supported function $\lambda u_R$, we use \eqref{8} to obtain
$$
\Phi(t)\lenn{A_3} \ge 2\omega_N \left[ \irn \Phi(\lambda|u_R(x)|)\dx - \Phi(t)\omega_NR^{2N}\right].
$$
Similarly, an appropriate interpretation of \eqref{7} yields
$$
\Phi(t)\lenn{A_4} \le 2\omega_N \irn \Phi\left(\frac{2\lambda}{1-\lambda}|v_R(x)|\right)\dx.
$$
Hence,
\begin{align*}
\Phi(t)\lenn{E_t} & \ge \Phi(t)\left(\lenn{A_3} - \lenn{A_4}\right) \\
& \ge 2\omega_N \left[ \irn \Phi\left(\lambda|u_R(x)|\right) \dx - \Phi(t)\omega_NR^{2N} - \irn \Phi\left(\frac{2\lambda}{1-\lambda}|v_R(x)|\right) \dx\right].
\end{align*}
Letting $t\to 0^+$, $R\to \infty$ and $\lambda\to 1^-$, in this order, now yields
\begin{equation}\label{15}
\liminf_{t\to 0^+} \Phi(t)\lenn{E_t} \ge 2\omega_N \irn \Phi(|u(x)|) \dx.
\end{equation}
Once again, the $\Delta_2$ condition of $\Phi$ as well as the assumption $u\in\Lphi$ are both required in this step. This clearly implies the first inequality in \eqref{main:equivalent_expression_modular}. Finally, combining \eqref{15} with \eqref{12}, we arrive at \eqref{2}, and so the proof is complete.
\end{proof}
\begin{rem}
Applying the theorem to $\Phi(t)=t^p$, $p\in[1,\infty)$, we recover \cite[Theorem~1]{GY} with the same multiplicative constants.
\end{rem}
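\medskip
\noindent As an illustrative numerical sanity check of \eqref{2} (not part of the formal argument), the following Python sketch approximates $\Phi(t)\,\lenn{E_t}$ on a finite grid in dimension $N=1$ with $\Phi(t)=t$ and a hat function $u$; the particular choices of $u$, $t$ and the grid step $h$ are assumptions made purely for illustration, and the deviation from the limit is controlled by \eqref{8}.
\begin{verbatim}
import numpy as np

def u(x):
    return np.maximum(1.0 - np.abs(x), 0.0)   # hat function, supp u = [-1, 1]

t, h = 0.05, 0.05
L = 1.0 + 2.0 / t   # E_t lies in {|x - y| <= 2||u||_inf / t}, so this box suffices
grid = np.arange(-L, L, h)
X, Y = np.meshgrid(grid, grid, indexing="ij")
diff = np.abs(X - Y)
with np.errstate(divide="ignore", invalid="ignore"):
    in_Et = (diff > 0) & (np.abs(u(X) - u(Y)) / diff >= t)
lhs = t * in_Et.sum() * h * h     # Phi(t) * |E_t| on the grid
rhs = 2 * 2 * u(grid).sum() * h   # 2 * omega_1 * int |u| dx, with omega_1 = 2
print(lhs, rhs)                   # lhs tends to rhs as t -> 0+
\end{verbatim}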
\IEEEPARstart{I}{t} is well-known that the success of machine learning methods rests on a large amount of training data. However, when training samples \emph{are hard to collect}, how to improve model performance with \emph{small datasets} then becomes a great challenge. When facing multiple related tasks together, Multi-Task Learning (MTL) \cite{mtl} is well-known as a good solution against such a challenge. In a general sense, MTL refers to the learning paradigm where multiple tasks are trained jointly under certain constraints leveraging knowledge transfer across some/all of the tasks. As typical evidence for the wisdom behind MTL, an early study \cite{limit2} shows that training a large number of similar tasks together could significantly improve the generalization performance when the data annotations of each individual task are insufficient. Nowadays, such wisdom has been widely adopted by the machine learning community, which makes MTL a crucial building block for a plethora of applications ranging from scene parsing \cite{cv1}, attribute learning \cite{cv2}, text classification \cite{nlp1}, sequence labeling \cite{nlp2}, to travel time estimation \cite{dm1}, \emph{etc}.
The fundamental belief of MTL lies in the fact that sharing knowledge among multiple tasks often results in an improvement in generalization performance. Based on this belief, a great number of studies have been carried out to explore the problem of how to share valuable knowledge across different tasks. The early studies on this topic argue that knowledge should be shared across all the tasks. For example, in the work of \cite{simp}, knowledge is transferred by sharing common and sparse features across all the tasks. However, \cite{non-overlapping1} later points out that if not all the tasks are indeed related, sharing common features with dissimilar and hard tasks often results in performance degradation, which is termed \textit{negative transfer}. To address this issue, recent studies in the odyssey against \textit{negative transfer} usually fall into two branches.\\
\indent One line of studies casts the task grouping problem as a clustering problem. At the very beginning, \cite{clus_early} proposes an MTL algorithm which first constructs the task clusters and then learns the model parameters separately. Seeing that this stage-wise method could not guarantee the optimality of the learned clusters and parameters, a significant number of studies start to explore how to integrate clustering and multi-task learning into a unified framework. Generally speaking, such work could be classified into two categories. The first class of methods adopts a Bayesian learning framework, which assumes that the task-specific parameters subject to cluster-leveraging priors such as mixtures of Gaussian prior \cite{MG} and Dirichlet process prior \cite{Dp1,Dp2,Dp3}. The second class of methods formulates the clustering problem as regularization terms. More specifically, such terms are developed to: (a) penalize small between-cluster variance and large within-cluster variance, (b) relax mixed-integer programs \cite{non-overlapping1, non-overlap2}, (c) encourage structural sparsity \cite{metag,structual,k-support}, or (d) encourage latent task representation \cite{overlap1,overlap2}.\\
\indent The other line of studies realizes that knowledge transfer should not be treated symmetrically. In fact, transferring knowledge from easy to hard tasks is generally safe, while transferring knowledge along the opposite direction is the major source of negative transfer and should be suppressed. Motivated by this, \cite{amtl} proposes the first MTL method to leverage asymmetry. In this work, the task parameters are assumed to lie in the column space spanned by themselves, and the asymmetry is then realized by leveraging sparse representation coefficients coming from each task. Since then, some improvements have been made on this framework via (a) making the penalty adaptive to the correlation between predictors \cite{self2}; (b) latent task representation \cite{self1,trace_lasso}; (c) grouping constraints \cite{asym-group} and (d) robust constraints \cite{asym-robust}.\\
\indent For the majority of existing methods, the negative transfer issue is only modeled as task correlation/grouping. With the following motivating examples, it could be seen that, even when the tasks are reasonably grouped, sharing redundant features across different task groups still bears the risk of over-fitting. This suggests that \emph{negative transfer might as well take place across features and tasks}.
\begin{exam}
Consider the personalized learning problem, where the prediction of the decision results coming from a given user is regarded as a task, and the features capture different concepts of the given object that a user might be interested in. When making decisions toward an object, users often form disentangled groups in terms of their points-of-interest (color/shape/texture, etc.). Consequently, blindly sharing irrelevant features across groups is dangerous.
\end{exam}
\begin{exam}
In bioinformatics, seeking out connections between genetic markers and phenotypes is often regarded as a crucial problem. MTL could be applied to this problem, for example, when we expect to simultaneously predict a set of different diseases from genetic clues. Here every specific disease prediction is regarded as a task, and features come from the expressions of different markers. Under this scenario, since different groups of diseases involve different biological functions, it is natural to observe them correlated with different sets of genes. Sharing common genes across disease groups is then not reasonable, and might lead to negative transfer even when we have a good grouping of the diseases.
\end{exam}
\begin{exam}
Another example comes from the natural language processing tasks. For instance, consider the topic classification problem where identifying a specific topic is regarded as a task and the embeddings of words in a document are regarded as features. It is often observed that different groups of topics are relevant to different subsets of words. Sports-related topics often involve keywords such as \emph{athletics}, \emph{soccer}, \emph{gymnastics}, while Politics-related keywords often include \emph{governance}, \emph{election}, \emph{parliaments}. Sharing common words across these two topic groups then bears a high risk of over-fitting.
\end{exam}
Motivated by these examples, our goal in this paper is to seek out solutions for \textit{negative transfer} in a more \emph{general} manner such that the features could come into play for the grouping process. To realize our goal, we need to \emph{include} the co-partition of tasks and features as an important component of MTL. To this end, we formulate a new learning framework named Task-Feature Collaborative Learning (TFCL). Specifically, we construct our framework with three steps.\\
\noindent\textbf{ {Step 1}}. In the first step, we propose a base model that achieves the co-grouping target with a novel regularizer based on block-diagonal structure learning of a bipartite graph with features and tasks being its nodes.\\
\noindent \textbf{{Step 2}}. Since the resulting optimization problem of the base model, denoted in short as $(\bm{P})$, is non-smooth and non-convex, we propose a surrogate problem $(\bm{P}^\star)$ as an approximation. Through developing an optimization algorithm for $(\bm{P}^\star)$, we prove that it can simultaneously achieve the global convergence for $(\bm{P})$ and $(\bm{P}^\star)$ under certain assumptions. Besides the convergence analysis, it is also noteworthy that the intermediate solution produced by the optimization method implicitly provides an embedding for each feature and task. With these embeddings, we further show that the optimization
algorithm could leverage the expected block-diagonal structure if the parameters are carefully chosen. This naturally leverages a grouping effect across task/feature, where transferring across groups is suppressed. \\
\noindent \textbf{{Step 3}}. With the base model elaborated, we then turn our focus to a more comprehensive model and target at an application problem called \emph{ personalized attribute prediction}, where the personalized attribute annotation prediction for a given user is regarded as a task. To obtain a flexible model, we simultaneously consider (a) the consensus factor that captures the popular interests shared by the users, which allows group overlapping (b) the co-grouping factor in our base model (c) the abnormal factor that excludes outlier users (tasks) from grouping. We also prove that this method inherits all the theoretical properties of TFCL.\\
In a nutshell, the main contributions of this paper are three-fold:
\begin{enumerate}
\item [(C1)] In the core of the base model of TFCL framework lies a novel block-diagonal regularizer leveraging the task-feature co-partition. This allows us to explore a more general negative transfer effect simultaneously at the task- and feature-level.
\item[(C2)] An optimization algorithm is designed for the base model with a theoretical guarantee for the global convergence property. Moreover, we provide a theoretical guarantee for leveraging the expected block-diagonal structure.
\item[(C3)] Finally, we propose a more practical extension for the personalized attribute prediction problem based on TFCL with enhanced flexibility.
\end{enumerate}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig1.png}
\caption{
{\textbf{Illustration of the Base Model of the Task-Feature Collaborative Learning Framework}}. \textbf{Left}: We form the task-feature relations as an auxiliary bipartite graph $\mathcal{G}_{BI}$ with tasks and features being the nodes, and $\mathcal{L}_{\Gbi}$ being the Graph Laplacian. To separate all the tasks and features into $k$ groups, we expect to cut $\mathcal{G}_{BI}$ into $k$ connected components. \textbf{Middle}: If we reconsider this requirement from the Graph Laplacian, as is shown in Thm. \ref{thm:graph}, it is equivalent to force the smallest $k$ eigenvalues of $\mathcal{L}_{\Gbi}$ to be zero. Since directly doing this is intractable, we turn to minimize their sum as a relaxation, which gives birth to a novel regularizer based on Thm. \ref{thm:eig}. \textbf{Right}: Now we shift our attention to the model parameters $\bm{W}$. The proposed regularizer facilitates a generalized block-diagonal structure (up to permutations) toward $\bm{W}$, with each block containing a specific group of nodes in $\mathcal{G}_{BI}$. In the next section, based on Prop. \ref{prop:sol}, Thm. \ref{eigsol}-\ref{thm:global}, we will construct an optimization method for TFCL with global convergence guarantee. Moreover, we will also show in Thm. \ref{thm:group} that, negative transfer across blocks, marked as crosses here, could be effectively suppressed based on the algorithm.
}
\label{fig:illu}
\end{figure*}
\section{Related Work}
In this section, we briefly review the recent advances in block-diagonal structural learning, multi-task learning and personalized attribute prediction that are closely related to our study.
\noindent \textbf{Block-Diagonal Structural Learning.} The idea to learn block-diagonal structures could be traced back to the clustering problem. For the clustering problem, a set of data points are required to be grouped into several clusters in an unsupervised manner. As a representative type of method, graph-based clustering methods (\emph{e.g.} spectral clustering \cite{spec1,spec2} and subspace clustering \cite{SSC,LRR}) solve this problem in a two-stage way: (a) a proper sample-sample affinity matrix is first obtained to capture the correlations among different sample points; (b) given the affinity matrix, the clustering problem is formulated as a graph partitioning problem via minimizing some spectral relaxations of the normalized cut. Under this framework, if the affinity matrix has a clear block-diagonal structure, then each of the block components naturally forms a cluster. Consequently, leveraging the block-diagonal structure of the affinity matrix could significantly improve the performance of such graph-based clustering methods. Motivated by this intuition, researchers started to find implicit regularization terms to preserve the block-diagonal structural properties of the affinity matrix \cite{s3c,sscomp,DDSSC,SCTwist,ssqp,lrsc,spsc1}. However, as suggested by \cite{lublock}, the implicit regularizers could not deal with the off-diagonal noises from the null spaces of the feature inputs. Then \cite{diag1,diag2} present the first trial to develop explicit block-diagonal regularizers in the graph-based clustering framework as a better solution against this issue. Most recently, this framework has been successfully extended to subspace clustering frameworks to embrace self-expression \cite{lublock,xie2017implicit,yang2019split}. Along this line of research, the most related studies to our work come from the explicit block-diagonal regularizations. However, they differ significantly from our work in two dimensions. First, they target homogeneous sample grouping, where the block-diagonal property is merely limited to square matrices with the $i$-th column and $i$-th row representing the same sample point. In this paper, the task-feature co-grouping pursuit calls for a more generalized definition of block-diagonality. To this end, we propose a generalized block-diagonal structural learning framework which is available for arbitrary-size matrices where the $i$-th column/row refers to heterogeneous types of nodes (task/feature in our case).
Second, concerning the optimization method, they only provide a subsequence convergence guarantee, leaving the global convergence property an open problem. By contrast, we will show that our proposed method could guarantee the global convergence property with a specific construction of an auxiliary surrogate problem.
We will have a closer look at the connection between the related work and our method in Sec.\ref{sec:disscus}. \\
\noindent \textbf{Multi-task Learning.} In the introduction, we have provided a brief review of the majority of methods attacking the negative transfer issue in MTL. Here, we further discuss MTL methods that are closely related to our work. Firstly, from the structural learning perspective, the asymmetric MTL methods mentioned in the introduction section \cite{amtl,self1,asym-robust,asym-group} also leverage block-diagonal structures. Just as mentioned in the last paragraph, they only consider homogeneous block-diagonal structures at the task-level, which does not meet our requirement of heterogeneous block-diagonal structures across tasks and features. From the task-feature collaborative learning perspective, to the best of our knowledge, there are only two MTL studies that also explicitly learn the task-feature co-grouping structures. As a beginning trial, \cite{cluster} explores how different features play a role in multi-task relationships. Specifically, it designs a novel multi-task model via leveraging feature-specific task clusters. However, the features in \cite{cluster} are considered separately, with the complicated feature correlations left unconsidered. \cite{comtl} turns to learn the feature-task correlations based on a co-clustering-guided regularization. However, there is no direct guarantee to ensure that the regularization scheme could explicitly leverage the block-diagonal structure. More importantly, it does not provide an explicit connection between the feature-task co-clustering and the negative transfer across tasks and features.
\noindent \textbf{Personalized Attribute Predictions.} Attribute learning has long been playing a central role in many machine learning and computer vision problems. While most attribute learning methods adopt consensus annotations, recently, with the rise of the crowdsourcing platform, there is an emerging wave to study how to train user-specific models for personalized annotations. An early trial presented in \cite{user1} learns user-specific attributes with an adaption process. More precisely, a general model is first trained based on a large pool of data. Then a small user-specific dataset is employed to adapt the trained model to user-specific predictors. \cite{user2} argues that one attribute might have different interpretations for different groups of persons. Correspondingly, a shade discovery method is proposed therein to leverage group-wise user-specific attributes. The common issue of these methods is that they only adopt a stage-wise training scheme. Most recently, the work presented in \cite{user3} starts an early trial to jointly learn personalized annotations across multiple attributes. In this paper, we will have a closer look at the negative transfer issue in this application with a fine-grained modeling of the user-feature correlations based on the proposed task-feature collaborative learning method.
\section{Task-Feature Collaborative Learning: The General Framework}
In this section, we propose the base model for Task-Feature Collaborative Learning (TFCL), which suppresses negative transfer across tasks and features.
In a nutshell, a summary of our method is illustrated in Fig.\ref{fig:illu}. In this section, our main assumption is that tasks and features could be simultaneously clustered into different groups. \textit{Nonetheless, our work does not restrict to the co-grouping assumption. In section \ref{sec:per}, we will extend TFCL to include outlier tasks and consensus features. }\\
\noindent\textbf{Notations}. The notations adopted in this paper are enumerated as follows. $\mathbb{S}_m$ denotes the set of all symmetric matrices in $\mathbb{R}^{m \times m}$. Given $\bm{A} \in \mathbb{S}_N$, we number the $N$ eigenvalues of $\bm{A}$ in an ascending order as $\lambda_1(\bm{A}) \le \lambda_2(\bm{A}) \le \cdots \le \lambda_N(\bm{A})$. $\left<\cdot,\cdot\right>$ denotes the inner product for two matrices or two vectors. Given two matrices $\boldsymbol{A}$ and $\boldsymbol{B}$, $\boldsymbol{A} \oplus \boldsymbol{B}$ denotes the direct sum of two matrices, and we say $\boldsymbol{A} \succeq \boldsymbol{B} $, if $\boldsymbol{A} - \boldsymbol{B}$ is positive semi-definite. For distributions, $\mathbb{U}(a,b)$ denotes the uniform distribution and $\mathbb{N}(\mu, \sigma^2)$ denotes the normal distribution.
For a given matrix $ \bm{A} \in \mathbb{R}^{m \times n}$, the null space is defined as $\mathcal{N}(\bm{A}) =\{\bm{x} \in \mathbb{R}^{n}: \bm{A}\bm{x} = \bm{0}\}$. Given $\bm{A} \in \mathbb{S}_{m}$, if $\lambda$ is an eigenvalue of $\bm{A}$, $\mathbb{EIG}_{A}(\lambda) = \mathcal{N}(\bm{A} - \lambda\bm{I})$ is the subspace spanned by the eigenvectors associated with $\lambda$. Given a matrix $\bm{A} = [\bm{a}_1, \cdots, \bm{a}_n]$, we denote $\bm{A}_{m:n} = [\bm{a}_m, \cdots, \bm{a}_n]$. We have $\iota_\mathcal{A}(x)= 0$, if $x \in \mathcal{A}$, and $\iota_\mathcal{A}(x)= +\infty$, otherwise.
\noindent\textbf{Standard multi-task learning paradigm}. Given $T$ tasks to be learned simultaneously, we denote the training data as: \[\mathcal{S} = \left\{(\boldsymbol{X}^{(1)}, \boldsymbol{y}^{(1)}), \cdots, (\boldsymbol{X}^{(T)}, \boldsymbol{y}^{(T)})\right\}.\] For $\mathcal{S}$, $\boldsymbol{X}^{(i)} \in \mathbb{R}^{n_{i} \times d}$ is the input feature matrix for the $i$-th task, where $n_{i}$ denotes the number of instances and $d$ represents the feature dimension. Each row of $\boldsymbol{X}^{(i)} $ represents the feature vector for a corresponding instance in the task. $\boldsymbol{y}^{(i)}$ is the corresponding response or output variable for the $i$-th task. We train a linear model $\boldsymbol{g}^{(i)}(\boldsymbol{x}) = \boldsymbol{W}^{(i)^\top}\boldsymbol{x}$ for each involved task. The \textit{parameter matrix} $\boldsymbol{W} \in \mathbb{R}^{d \times T}$ is defined as the concatenation of the task coefficients, $\boldsymbol{W} = [\boldsymbol{W}^{(1)}, \cdots, \boldsymbol{W}^{(T)}]$. Following the standard MTL learning paradigm, $\boldsymbol{W}$ could be solved from a regularized problem
$
\argmin_{\boldsymbol{W}} \mathcal{J}(\boldsymbol{W}) + \alpha \cdot \varOmega(\boldsymbol{W})
$, where $\ell_i$ denotes the empirical risk for the $i$-th task, $\mathcal{J}(\boldsymbol{W}) = \sum_i \ell_i$, $\varOmega(\boldsymbol{W})$ denotes a proper regularizer. In the rest of this section, we derive a proper regularizer that suppresses the negative transfer collaboratively from both the task- and feature-level.
\noindent Under the linear model, for the $i$-th task, the prediction of the response could be written as $\hat{{y}} = \boldsymbol{W}^{(i)^\top}\boldsymbol{x} = \sum_{j = 1}^d W_{ji}x_j$. Accordingly, if $W_{ji} = 0$, $\hat{{y}}$ becomes irrelevant to the $j$-th feature, while for those nonzero $W_{ji}$s, $\hat{{y}}$ tends to have a stronger dependence on the features with a greater value of $|W_{ji}|$. In this way, $|W_{ij}|$ provides a proper expression of the correlation between feature $i$ and task $j$. To alleviate negative transfer from both dissimilar tasks and dissimilar features, our basic setting in TFCL is to automatically separate the tasks and features into a given number of groups, where each group only contains similar tasks and features. In this scenario, negative transfer arises when $W_{ij} \neq 0$ while feature $i$ and task $j$ are not in the same group. Inspired by this, \textit{our goal is to search for a simultaneous partition of tasks and features into $k$ groups, where we expect $|W_{ij}|$ to vanish when feature $i$ and task $j$ are not in the same group}. At first glance, formulating this constraint as a regularizer is difficult. However, if we turn to define this constraint on an auxiliary bipartite graph, then a simple regularization term realizing this constraint could be constructed. Specifically, we define a bipartite graph $\mathcal{G}_{BI} = (\mathcal{V}_{BI},\mathcal{E}_{BI},\bm{A}_{BI})$. The vertices of $\mathcal{G}_{BI}$ include all the tasks and features. Denoting $\mathcal{V}_{T}$ as the set of all tasks and $\mathcal{V}_{F}$ as the set of all features, the vertex set $\mathcal{V}_{BI}$ is defined as $\mathcal{V}_{BI}=\mathcal{V}_{T}\cup \mathcal{V}_{F}$. The affinity matrix $\bm{A}_{BI}$ is in the form
$
\bm{A}_{{BI}} = \begin{bmatrix}
\boldsymbol{0} & |\boldsymbol{W}|\\
|\boldsymbol{W}|^\top & \boldsymbol{0}\\
\end{bmatrix}
$,
then the edge set naturally becomes $\mathcal{E}_{BI}=\{(i,j)| \bm{A}_{{BI}_{{i,j}}} > 0 \}$. Besides the graph itself, we also employ the graph Laplacian matrix defined as $\mathcal{L}_{\Gbi} = diag(\bm{A}_{BI}\boldsymbol{1})-\bm{A}_{BI}$.
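\noindent For concreteness, the following NumPy sketch builds $\bm{A}_{BI}$ and $\mathcal{L}_{\Gbi}$ from a given parameter matrix; it is an illustration of the construction above rather than part of the formal development.
\begin{verbatim}
import numpy as np

def bipartite_laplacian(W):
    """Graph Laplacian of the task-feature bipartite graph induced by W (d x T)."""
    d, T = W.shape
    A = np.zeros((d + T, d + T))
    A[:d, d:] = np.abs(W)          # feature-task affinities |W|
    A[d:, :d] = np.abs(W).T
    return np.diag(A.sum(axis=1)) - A
\end{verbatim}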
With $\mathcal{G}_{BI}$ defined, we could then find insight from spectral graph theory. In fact, to guarantee a simultaneous grouping of the tasks and features, it suffices to guarantee that the induced bipartite graph $\mathcal{G}_{BI}$ has $k$ connected components. Then the following theorem states that this is equivalent to restricting the dimension of the null space of $\mathcal{L}_{\Gbi}$.
\begin{thm} \label{thm:graph}\cite{spectral}
The eigenvalue $0$ of $\mathcal{L}_{\Gbi}$ has a multiplicity of $k$, i.e., $\dim(\mathcal{N}(\mathcal{L}_{\Gbi}))=k$, if and only if the bipartite graph $\mathcal{G}_{BI}$ has $k$ connected components. Moreover, denote $\mathcal{G}(i)$ as the set of tasks and features belonging to the $i$-th component; we then have $\mathbb{EIG}_{\mathcal{L}_{\Gbi}}(0) = span(\boldsymbol{\iota}_{\mathcal{G}(1)},\boldsymbol{\iota}_{\mathcal{G}(2)}, \cdots, \boldsymbol{\iota}_{\mathcal{G}(k)})$, where $\boldsymbol{\iota}_{\mathcal{G}(i)} \in \mathbb{R}^{(d+T) \times 1}$, $[\boldsymbol{\iota}_{\mathcal{G}(i)}]_j =1 \ \text{if} \ j \in \mathcal{G}(i) $, otherwise $[\boldsymbol{\iota}_{\mathcal{G}(i)}]_j =0 $.
\end{thm}
The theorem above implies a way to realize our goal: adopting a regularizer that forces the smallest $k$ eigenvalues to be zero. Let $N= d + T$ denote the total number of nodes in $\mathcal{G}_{BI}$. This requirement is equivalent to forcing $rank(\mathcal{L}_{\Gbi}) =N-k$, which is known to be an NP-hard problem. In this case, we turn to minimizing the sum of the bottom $k$ eigenvalues $\sum_{i=1}^k \lam{i}$ as a tractable relaxation. According to the well-known extremal property of eigenvalues due to Fan \cite{fan}, the sum of the $k$ smallest eigenvalues of $\mathcal{L}_{\Gbi}$ could be formulated as a minimization problem:
\begin{equation*}
\sum_{i=1}^k \lambda_i(\mathcal{L}_{\mathcal{G}_{BI}}) = \min_{\boldsymbol{E}} tr(\boldsymbol{E}\mathcal{L}_{\mathcal{G}_{BI}}\boldsymbol{E}^\top), \ s.t. \ \boldsymbol{E}\boldsymbol{E}^\top = \boldsymbol{I}_{k}.
\end{equation*}
At first glance, the above problem is not convex due to the non-convex constraint $\boldsymbol{E}\boldsymbol{E}^\top = \boldsymbol{I}_{k}$. Recently, a convex reformulation of this problem has drawn attention from the machine learning and computer vision communities \cite{lublock}. The rationale behind this new formulation is given by the following theorem.
\begin{thm}\label{thm:eig}
Let $\Gamma=\{\boldsymbol{U}: \boldsymbol{U} \in \mathbb{S}_{N},\ \boldsymbol{I} \succeq \boldsymbol{U} \succeq \boldsymbol{0}, tr(\boldsymbol{U})=k \} $, then $\forall \boldsymbol{A} \in \mathbb{S}_N$:
\begin{equation*}
\sum_{i=1}^k\lambda_i(\boldsymbol{A}) = \min_{\boldsymbol{U} \in \Gamma} \left<\boldsymbol{A}, \ \boldsymbol{U}\right>,
\end{equation*}
with an optimal value reached at $\boldsymbol{U} = \boldsymbol{V}_k\boldsymbol{V}^\top_k$, where $\boldsymbol{V}_k$ represents the eigenvectors of the smallest $k$ eigenvalues of $\boldsymbol{A}$.
\end{thm}
\begin{proof}
In this proof, we denote the eigenvalue decomposition of $\boldsymbol{A}$ as
\begin{equation}
\boldsymbol{A} = \boldsymbol{Q} \Lambda \boldsymbol{Q}^\top, \ \Lambda = diag(\lambda_1(\boldsymbol{A}), \cdots, \lambda_N(\boldsymbol{A})).
\end{equation}
For any element $\boldsymbol{U}$ in the feasible set $\Gamma$, we have:
$
\left<\boldsymbol{A}, \boldsymbol{U}\right> = \sum_{i}C_{ii}\lambda_i(\boldsymbol{A}),
$
where $\boldsymbol{C} = \boldsymbol{Q}^\top \boldsymbol{U} \boldsymbol{Q}$. Since $\boldsymbol{C}$ has the same eigenvalues as $\boldsymbol{U}$, we have $\boldsymbol{C} \in \Gamma$ if and only if $\boldsymbol{U} \in \Gamma$.
Then we have:
\begin{equation}\label{equi}
\min_{\boldsymbol{U} \in \Gamma}\left<\boldsymbol{A}, \boldsymbol{U}\right> \Longleftrightarrow \min_{\boldsymbol{C} \in \Gamma}\sum_{i}C_{ii}\lambda_i(\boldsymbol{A}).
\end{equation}
Define $\boldsymbol{e}^i \in \mathbb{R}^{N \times 1} $, $\boldsymbol{e}^i_i = 1 $ and $\boldsymbol{e}^i_s = 0$ , if $s \neq i$, then we reach the fact that:
\[C_{ii} = \dfrac{\boldsymbol{e}^{i^\top}\boldsymbol{C}\boldsymbol{e}^{i}}{\boldsymbol{e}^{i^\top}\boldsymbol{e}^{i}}.\] We could then attain the following inequality based on the extremal property of the top/bottom eigenvalue of $\boldsymbol{C}$:
\begin{equation}
\begin{split}
0 &\le \lambda_1(\boldsymbol{C}) = \min\limits_{\boldsymbol{x}} \dfrac{\boldsymbol{x}^{\top}\boldsymbol{C}\boldsymbol{x}}{\boldsymbol{x}^{\top}\boldsymbol{x}} \le C_{ii}\\ &\le \max\limits_{\boldsymbol{x}} \dfrac{\boldsymbol{x}^{\top}\boldsymbol{C}\boldsymbol{x}}{\boldsymbol{x}^{\top}\boldsymbol{x}} = \lambda_N(\boldsymbol{C}) \le 1.
\end{split}
\end{equation}
The minimum of (\ref{equi}) is reached at $\sum_{i=1}^k\lambda_i(\boldsymbol{A}) $ when $C_{ii} = 0, i > k $, $C_{ii} = 1, i \le k$.
This directly shows that $\sum_{i=1}^k \lambda_i(\boldsymbol{A}) = \min_{\boldsymbol{U} \in \Gamma} \left<\boldsymbol{A}, \ \boldsymbol{U}\right>$.
Now it only remains to prove that $\boldsymbol{U} = \boldsymbol{V}_k \boldsymbol{V}^\top_k$ is an optimal solution by showing $\sum_{i=1}^k \lambda_i(\boldsymbol{A}) = \left< \bm{A}, \bm{U}\right> $. Since $\boldsymbol{V}_k$ collects the eigenvectors associated with the smallest $k$ eigenvalues of $\boldsymbol{A}$, we have $\boldsymbol{Q} = [\boldsymbol{V}_k, \boldsymbol{V}^\bot_k]$, where $\boldsymbol{V}^\bot_k$ denotes the eigenvectors associated with the largest $N-k$ eigenvalues, and we have $\boldsymbol{V}^\top_k\boldsymbol{V}^\bot_k = \boldsymbol{0}$ and $\boldsymbol{V}^{\bot^\top}_k\boldsymbol{V}_k = \boldsymbol{0}$. In this sense, we obtain:
\begin{equation}
\begin{split}
\bm{C} &= \boldsymbol{Q}^\top \boldsymbol{U}\boldsymbol{Q} =\begin{bmatrix}
\boldsymbol{V}^\top_k \\
\boldsymbol{V}^{\bot^\top}_k \\
\end{bmatrix} \boldsymbol{V}_k \boldsymbol{V}^\top_k [ \boldsymbol{V}_k, \boldsymbol{V}^{\bot}_k] \\ &= \begin{bmatrix}
\boldsymbol{I}_k & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{0}
\end{bmatrix}
\end{split}.
\end{equation}
The claim then follows from $\left< \bm{A}, \bm{U}\right> = \sum_{i} C_{ii} \lambda_i(\bm{A}) = \sum_{i=1}^k \lambda_i(\bm{A})$.
\end{proof}
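\noindent As a quick numerical check of Thm. \ref{thm:eig} (illustrative only; the random symmetric matrix below is an arbitrary assumption):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2.0                # a random symmetric matrix
k = 3
w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
U = V[:, :k] @ V[:, :k].T          # U = V_k V_k^T, feasible for Gamma
print(np.isclose(w[:k].sum(), np.trace(A @ U)))   # expected: True
\end{verbatim}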
Combining the multi-task empirical loss $\mathcal{J}(\boldsymbol{W})$, the regularization term proposed above, and an $\ell_2$ penalty on $\bm{W}$, we reach our proposed optimization problem $(\boldsymbol{P})$:
\begin{equation*}
(\boldsymbol{P}) \ \min_{\boldsymbol{W},\boldsymbol{U}\in \Gamma} \ \mathcal{J}(\boldsymbol{W}) +
\alpha_1 \cdot \left<\mathcal{L}_{\Gbi},\boldsymbol{U}\right> + \frac{\alpha_2}{2} \cdot \norm{\bm{W}}_F^2.
\end{equation*}
\section{Optimization}
\indent In this section, instead of solving $(\boldsymbol{P})$ directly, we first propose an optimization method to solve a surrogate problem $(\boldsymbol{P}^\star)$ written as
\begin{equation*}
(\boldsymbol{P}^\star) \ \min_{\boldsymbol{W},\boldsymbol{U}\in \Gamma}
\left\{\begin{aligned}
& \mathcal{J}(\boldsymbol{W}) +
\alpha_1 \cdot \left<\mathcal{L}_{\Gbi},\boldsymbol{U}\right> + \frac{\alpha_2}{2} \cdot \norm{\bm{W}}_F^2\\
& + \frac{\alpha_3}{2} \cdot \norm{\bm{U}}_F^2
\end{aligned}\right\}.
\end{equation*}
We will soon see that, under certain conditions, our algorithm, though originally targeted at $(\boldsymbol{P}^\star)$, surprisingly produces a sequence that simultaneously converges to a critical point of $(\boldsymbol{P})$ and $(\boldsymbol{P}^\star)$.
Since the term $\left<\mathcal{L}_{\Gbi},\boldsymbol{U}\right>$ is non-smooth and non-convex with respect to $\bm{W}$, we adopt a Proximal Gradient Descent (PGD) \cite{fista} framework in our algorithm. As a basic preliminary, we assume that the gradient of the loss function, i.e., $\nabla_{\boldsymbol{W}} \mathcal{J}(\boldsymbol{W})$, is $\varrho$-Lipschitz continuous. Following the PGD framework, for each iteration step $t$, given a constant $C > \varrho$ and the historical solution $\boldsymbol{W}^{t-1}$, the parameters $\boldsymbol{W}^t$ and $\boldsymbol{U}^t$ could be updated from the following problem:
\begin{equation*}
(\boldsymbol{Prox}) \ \min_{\boldsymbol{W},\boldsymbol{U} \in \Gamma}
\left\{
\begin{aligned}
&\dfrac{1}{2} \norm{\boldsymbol{W} -\widetilde{\boldsymbol{W}}^t }_F^2 + \frac{\alpha_1}{C} \left<\mathcal{L}_{\Gbi},\boldsymbol{U}\right> \label{Ptheta}\\
&+ \frac{\alpha_2}{2C} \cdot \norm{\bm{W}}_F^2 + \frac{\alpha_3}{2C}\norm{\bm{U}}_F^2 \\
\end{aligned}
\right\},
\end{equation*}
where
$\widetilde{\boldsymbol{W}}^t = \boldsymbol{W}^{t-1} - \dfrac{1}{C}\nabla_{\boldsymbol{W}}\mathcal{J}(\boldsymbol{W}^{t-1})$.
\subsection{Subroutines}
Solving $\boldsymbol{(Prox)}$ involves two subroutines: one optimizes $\boldsymbol{U}$ with $\boldsymbol{W}$ given, and the other optimizes $\boldsymbol{W}$ with $\boldsymbol{U}$ given.
\noindent \textbf{Update $\boldsymbol{U}$, fix $\boldsymbol{W}$}: This involves the following subproblem:
\begin{equation}\label{eq:usub}
\min_{\bm{U}} \left<\LGbi, \bm{U}\right> + \frac{\alpha_3}{2C}\norm{\bm{U}}_F^2, \ \ s.t. \ \ \bm{U} \in \Gamma
\end{equation}
Unfortunately, Thm. \ref{thm:eig} only gives a solution to this problem when $\alpha_3 = 0$, in which case it degenerates to the ordinary truncated eigenvalue minimization problem widely adopted in previous studies \cite{yang2019split,liu2019robust,xie2017implicit,lublock}. In the forthcoming theorem, inspired by \cite{sdpopt,stopca}, we show that, with a moderate magnitude of $\alpha_3$, the subproblem still has a closed-form solution. More interestingly, we show that this is also a solution for $\alpha_3 = 0$, as illustrated in Fig.\ref{fig:eigillu}.
\begin{thm}\label{eigsol}
Let $\lam{0} = 0, \lam{N+1} = +\infty$. Let $\bm{V} = [\bm{v}_1, \bm{v}_2, \cdots, \bm{v}_N]$ be the associated eigenvectors for $\lam{1}, \cdots, \lam{N}$. Furthermore, set
\begin{align*}
p &= \max\{i: \lam{i} < \lam{i+1},\ 0 \le i < k\},\\
q &= \min\{i: \lam{i} < \lam{i+1},\ i \ge k \},\\
\Delta{p} &= \lam{p+1} - \lam{p}, \qquad
\Delta{q} = \lam{q+1} - \lam{q},\\
\breve{\delta}(\mathcal{L}_{\Gbi}) &= \min\{\Delta{p}, \Delta{q}\}.
\end{align*}
For all $\mathcal{L}_{\Gbi} \neq \bm{0}$ and $0 \le \frac{\alpha_3}{2C} < \breve{\delta}(\mathcal{L}_{\Gbi})$, the optimal solution of (\ref{eq:usub}) is:\\
\begin{equation}\label{eq:opt}
\begin{array}{lll}
\bm{U}^\star= \bm{V}\tilde{\Lambda}\bm{V}^\top, & \tilde{\Lambda} = diag(\bm{c}), & {c}_i = \begin{cases}
1 & i\le p , \\
\frac{k-p}{q-p} & q \ge i >p, \\
0 & \text{otherwise}.
\end{cases}
\end{array}
\end{equation}
\end{thm}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{peig.png}
\caption{\label{fig:eigillu} \textbf{Illustration of the Solution in Thm. \ref{eigsol}.} In this figure, we plot the values of $c_i$ with respect to the corresponding eigenvalues. We see that Thm. \ref{eigsol} considers the multiplicity of $\lam{k}$. This makes our algorithm stable even when the eigengap $\lam{k+1} - \lam{k}$ is zero. }
\end{figure}
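\noindent A minimal NumPy sketch of the closed-form solution \eqref{eq:opt} reads as follows; the tolerance used to decide eigenvalue ties, and the fallback $p=0$ when no strict gap exists below $k$, are implementation assumptions.
\begin{verbatim}
import numpy as np

def u_star(L, k, tol=1e-10):
    """Closed-form minimizer of <L, U> + (alpha3/2C)||U||_F^2 over Gamma,
    following Thm. eigsol, assuming alpha3/2C is below the spectral-gap bound."""
    lam, V = np.linalg.eigh(L)                      # ascending eigenvalues
    N = lam.shape[0]
    ext = np.concatenate(([0.0], lam, [np.inf]))    # lam_0 = 0, lam_{N+1} = inf
    gaps = [i for i in range(N + 1) if ext[i] < ext[i + 1] - tol]
    p = max((i for i in gaps if i < k), default=0)  # last strict gap below k
    q = min(i for i in gaps if i >= k)              # first strict gap at/after k
    c = np.zeros(N)
    c[:p] = 1.0
    c[p:q] = (k - p) / (q - p)                      # note: trace(U*) = k
    return (V * c) @ V.T                            # V diag(c) V^T
\end{verbatim}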
\noindent We have three interesting remarks for this theorem.
\begin{rem}[Grouping Effect under an ideal condition] We now provide a remark for the grouping power of $\bm{V}$. Under an ideal case, we assume that the bipartite graph has exactly $k$ connected components. Since $c_i = 0 , \forall i > k$, only $\bm{V}_{1:k}$ is relevant to the computation of $\bm{U}^\star$. We then investigate the grouping power of $\bm{f}_i \in \mathbb{R}^{k}, i =1,2,\cdots, N$, which is denoted as the transpose of the $i$-th row of $\bm{V}_{1:k}$. We denote the components as $\mathcal{G}(1) ,\cdots, \mathcal{G}(k)$, and the number of nodes in each component as $n_{\mathcal{G}(1)}, \cdots, n_{\mathcal{G}(k)}$, respectively. According to Thm. \ref{thm:graph}, up to some orthogonal transformation, $\boldsymbol{f}_i \in \mathbb{R}^{k \times 1}$ becomes:
\begin{equation}
{f}_{i,j} =\begin{cases}
\dfrac{1}{\sqrt{n_{\mathcal{G}(j)}}}, &i \in \mathcal{G}(j) \\
0, &otherwise
\end{cases}.
\end{equation}
In this way, we see that $\boldsymbol{f}_i$ has a strong discriminative power indicating which group the underlying task/feature belongs to. In Sec.\ref{sec:group}, we will revisit this property with a detailed theoretical analysis with more practical considerations.
\end{rem}
\begin{rem}
Different from the original result in Thm. \ref{thm:eig} that only considers the case when $\alpha_3 = 0$, Thm. \ref{eigsol} allows $\alpha_3 > 0$, which provides strong convexity to the $\bm{U}$-subproblem. This makes the global convergence property available to the algorithm. More interestingly, we could also prove that the final algorithm converges globally to the critical points for both $(\bm{P}^\star)$ itself and the original problem $(\bm{P})$. The readers will soon see this in the next subsection.
\end{rem}
\begin{rem} \label{rem:ident}
Here we define $\bm{V}_{a:b}$ as the eigenvectors associated with $\lam{a}, \cdots, \lam{b}$. As shown in Fig.\ref{fig:eigillu}, our algorithm can work even when the eigengap $\lam{k+1} -\lam{k}$ vanishes. Note that, whenever $\lam{k} = \lam{k+1}$, the solution of Thm. \ref{thm:eig} is not well-defined. In this case, $ \lam{k} $ must have a multiplicity greater than 1. Without loss of generality, we assume that $\mathcal{L}_{\Gbi}$ has $s$ distinct eigenvalues $ [\lamc{1}, \cdots, \lamc{s}] $ in its first $k$ smallest eigenvalues $[\lam{1}, \cdots, \lam{k}]$, where $1 \le s < k$. In fact, $\bm{V}_{1:k}$ could not span the whole subspace $\oplus_{i=1}^s\mathbb{EIG}_{\mathcal{L}_{\Gbi}}(\breve{\lambda}_i)$ in this case (it only contains $k$ out of $q$ bases of this subspace). In this sense, the solution is not identifiable since $\bm{V}_{1:k}\bm{V}_{1:k}^\top$ is not unique toward changes (either through rotations or through different ways to select $k$ bases out of the $q$ bases) of the eigenvectors. This means that we can observe completely different results when the subset of eigenvectors is chosen differently. As a concrete example, we construct a bipartite graph with an affinity matrix:
\[
A = \begin{bmatrix}
\bm{0}& \bm{W}\\
\bm{W}^\top& \bm{0}\\
\end{bmatrix}, ~ ~ \text{and} ~ ~
\bm{W} = \begin{bmatrix}
1 & 1 & & & &\\
1& 1 & & & &\\
& &2 &2 & &\\
& &2 & 2& &\\
& && & 3&3 \\
\end{bmatrix}.
\]
Obviously, the multiplicity of zero eigenvalue for the corresponding Graph Laplacian matrix is 3. We denote the eigenvectors as $\bm{v}_1,\bm{v}_2, \bm{v}_3$. Let $k =2$, next, we now show that the outer product $\bm{V}\bm{V}^\top$ is not unique. To do this, picking $\bm{V}_{1:2} = [\bm{v}_1,\bm{v}_2]$ and $\bm{V}_{2:3} = [\bm{v}_2,\bm{v}_3]$, we calculate the corresponding outer products $\bm{V}_{1:2}\bm{V}_{1:2}^\top$ and $\bm{V}_{2:3}\bm{V}_{2:3}^\top$ and visualize them in Fig.\ref{fig:subspace}. We see that the matrices are completely different, making the preceding subproblem ill-defined since it leads to completely different solutions.\\
It is interesting to remark here that this issue will not take place if we employ Thm.\ref{eigsol} instead. In this case, we have $\bm{U}^\star = \bm{V}_{1:p}\bm{V}_{1:p}^\top + \frac{k-p}{q-p}\bm{V}_{p+1:q}\bm{V}_{p+1:q}^\top$. Moreover, by the definition of $q$ and $p$, we know that $\bm{V}_{p+1:q}$ spans $\mathbb{E}_1= \mathbb{EIG}_{\mathcal{L}_{\Gbi}}(\lamc{s})$ and obviously $\bm{V}_{1:p}$ spans $\mathbb{E}_2 = \bigoplus_{i=1}^{s-1} \mathbb{EIG}_{\mathcal{L}_{\Gbi}}(\lamc{i})$. This implies that $\bm{V}_{1:p}\bm{V}_{1:p}^\top$ forms the orthogonal projector onto $\mathbb{E}_2$ and $\bm{V}_{p+1:q}\bm{V}_{p+1:q}^\top$ forms the orthogonal projector onto $\mathbb{E}_1$. According to the basic properties of orthogonal projectors, we know $\bm{U}^\star$ is well-defined and invariant. To sum up, the advantage of Thm.\ref{eigsol} against traditional variational formulations of $\sum_{i=1}^k \lam{i}$ is shown in Tab.\ref{tab:eigform}. Note that all three formulations therein yield the same optimal value. However, they have different optimal solutions with different degrees of stability.
\end{rem}
\begin{figure}[h]
\centering
\subfigure[$\bm{V}_{1:2}\bm{V}_{1:2}^\top$]{
\includegraphics[width=0.45\columnwidth]{sub12.png}
}
\subfigure[$\bm{V}_{2:3}\bm{V}_{2:3}^\top$]{
\includegraphics[width=0.45\columnwidth]{sub23.png}
}
\caption{Visualizations of the eigenvector outer-products, which shows that $\bm{V}\bm{V}^\top$ is not identifiable when we need to pick 2 out of 3 bases from the eigenspace of zero.}
\label{fig:subspace}
\end{figure}
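\noindent The following NumPy snippet reproduces the example above numerically (an illustrative sketch; the matrix is the one displayed in this remark):
\begin{verbatim}
import numpy as np

W = np.zeros((5, 6))
W[0:2, 0:2] = 1.0
W[2:4, 2:4] = 2.0
W[4:5, 4:6] = 3.0                     # three diagonal blocks
A = np.block([[np.zeros((5, 5)), W],
              [W.T, np.zeros((6, 6))]])
Lap = np.diag(A.sum(axis=1)) - A      # zero eigenvalue has multiplicity 3
lam, V = np.linalg.eigh(Lap)
P12 = V[:, 0:2] @ V[:, 0:2].T         # one choice of 2 of the 3 zero-eigenvectors
P23 = V[:, 1:3] @ V[:, 1:3].T         # a different choice
print(np.allclose(P12, P23))          # False: V V^T is not identifiable here
\end{verbatim}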
{
\begin{table}[htbp]
\centering
\caption{\label{tab:eigform} Different formulations of $\sum_{i=1}^k \lam{i}$, where Ident. represents the identifiability of $\bm{U} = \bm{V}\bm{V}^\top$ when the eigengap $\lam{k+1} - \lam{k}$ vanishes. }
\scriptsize
\begin{tabular}{lccc}
\toprule
& Convex &
$
\begin{array}{l}
\text{Strongly} \\
\text{Convex}
\end{array}$
&
Ident. \\
\midrule
$\begin{array}{l}
\min_{\bm{V}} \ \ tr(\bm{V} \mathcal{L}_{\Gbi} \bm{V}^\top)\\
s.t. \ \ \bm{V}\bm{V}^\top = \bm{I}_k
\end{array} $
& \ding{53} & \ding{53} & \ding{53} \\
\midrule
$\begin{array}{l}
\min_{\bm{U}} \ \ \left<\LGbi, \bm{U}\right> \\
s.t. \ \ \bm{U} \in \Gamma
\end{array}$
& \ding{52} & \ding{53} & \ding{53} \\
\midrule
$
\begin{array}{l}
\min_{\bm{U}} \ \ \left<\LGbi, \bm{U}\right> + \lambda \cdot \norm{\bm{U}}_F^2\\
s.t. \ \ \bm{U} \in \Gamma \\
0 \le \lambda \le \breve{\delta}(\mathcal{L}_{\Gbi}) \\
(Ours)
\end{array} $ &
\ding{52} & \ding{52} & \ding{52} \\
\bottomrule
\end{tabular}%
\end{table}%
}
\noindent Now we proceed to solve the next subproblem.\\
\noindent \textbf{Update $\boldsymbol{W}$, fix $\boldsymbol{U}$}: The following proposition shows that when $\boldsymbol{U}$ is fixed, one could cast the $\boldsymbol{W}$ subproblem as a specific elastic net proximal mapping problem:
\begin{prop}\label{prop:sol}
The optimal solution of $\boldsymbol{W}$ subproblem of $(\boldsymbol{Prox})$ is:
\begin{equation}\label{eq:wsol}
\bm{W}^\star = sgn(\bm{\widetilde{W}})\left(\left|\dfrac{\bm{\widetilde{W}}}{1+\frac{\alpha_2}{C}}\right| - \frac{\alpha_1}{C+\alpha_2} \bm{D}\right)_+,
\end{equation} where
${D}_{ij} = \norm{\boldsymbol{f}_i - \boldsymbol{f}_{d+j} }^2 $.
\end{prop}
\begin{proof}
With the fact that
\begin{equation}
\begin{split}
&\left<\LGbi, \bm{U}\right> \\
&= \left<diag(\begin{bmatrix} 0 &|\boldsymbol{W}| \\ |\boldsymbol{W}|^\top &0
\end{bmatrix}\boldsymbol{1})-\begin{bmatrix} 0 &|\boldsymbol{W}| \\
|\boldsymbol{W}|^\top &0
\end{bmatrix},\boldsymbol{U}\right> \\
& = \left<diag(\boldsymbol{U})\boldsymbol{1}^\top -\boldsymbol{U},\begin{bmatrix} 0 &|\boldsymbol{W}| \\
|\boldsymbol{W}|^\top &0
\end{bmatrix} \right>,
\end{split}
\end{equation}
we could reformulate the problem as:
\begin{equation*}
\min_{\boldsymbol{W}} \left\{\begin{aligned}
& \ \frac{1}{2}||\boldsymbol{W} -\widetilde{\boldsymbol{W}}||_F^2+ \frac{\alpha_1}{C} \cdot \left<\Delta^{(1)} + \Delta^{{(2)}^\top}, |\boldsymbol{W}|\right> \\ &+\frac{\alpha_2}{2C}\cdot \norm{\bm{W}}_F^2\\
\end{aligned}\right\},
\end{equation*}
where
\begin{align}
&\Delta= diag(\boldsymbol{U})\boldsymbol{1}^\top-\boldsymbol{U},\\
&\Delta^{(1)}= \Delta(1:d,(d+1):end),\\
&\Delta^{(2)}= \Delta((d+1):end,1:d).
\end{align}
\noindent Furthermore, we have
\[ \Delta^{(1)}_{ij} + \Delta^{(2)}_{ji} =U_{ii} + U_{d+j,d+j} -U_{i, d+j}- U_{d+j, i} = \norm{\boldsymbol{f}_i - \boldsymbol{f}_{d+j}}_2^2. \]
\noindent Since the objective function is $\left(1+ \frac{\alpha_2}{C}\right)$-strongly convex, the optimal solution is unique. The claimed formula then follows from the proximal mapping of the $\ell_1$ norm \cite{fista}.
\end{proof}
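\noindent A one-line NumPy sketch of the update \eqref{eq:wsol} is given below; \texttt{W\_tilde} and \texttt{D} are assumed to be precomputed arrays, and the scalars correspond to $\alpha_1$, $\alpha_2$ and $C$.
\begin{verbatim}
import numpy as np

def w_update(W_tilde, D, alpha1, alpha2, C):
    """Elastic-net style proximal step solving the W subproblem."""
    shrunk = np.abs(W_tilde) / (1.0 + alpha2 / C) - (alpha1 / (C + alpha2)) * D
    return np.sign(W_tilde) * np.maximum(shrunk, 0.0)
\end{verbatim}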
With the embedding vectors fixed, the algorithm turns to learn $\boldsymbol{W}$ with a sparsity-inducing strategy, where ${W}_{ij}$ is activated if the magnitude of $\widetilde{W}_{ij}$ dominates the embedding distance between feature $i$ and task $j$. Moreover, the following remark reveals how transfer takes place across features and tasks.
\begin{rem}
The $\boldsymbol{W}$ subproblem enjoys an alternative formulation in the following:
\begin{equation}\label{eq:alter}
\begin{split}
\argmin_{\boldsymbol{W}} \left< \bm{D}, |\boldsymbol{W}|\right>
\ s.t. \ \boldsymbol{W} \in \mathcal{B}_{c(\alpha)}(\boldsymbol{W},\widetilde{\boldsymbol{W}}^t),
\end{split}
\end{equation}
where $\mathcal{B}_{c(\alpha)} = \left\{\boldsymbol{W}: \norm{\boldsymbol{W} - \widetilde{\boldsymbol{W}}^t}_F^2 \le c(\alpha_1), \norm{\bm{W}}_F^2 \le c(\alpha_2) \right\}$. It is noteworthy that (\ref{eq:alter}) shares a striking resemblance with the discrete optimal transport problem seeking the smallest cost of transporting information across two sets: features and tasks. Borrowing insights from the optimal transport problem \cite{optimal}, the transfer costs between feature $i$ and task $j$ are measured via the squared $\ell_2$ distance between their embeddings, ${D}_{ij}$. Since tasks/features belonging to the same group tend to share very similar embeddings, the intra-group transfer is encouraged via a small transportation cost ${D}_{ij}$. On the contrary, negative transfer across different groups is penalized with a much larger transportation cost ${D}_{ij}$. Different from existing MTL studies, this shows that our method also models the negative transfer issue from the perspective of task-feature transfer.
\end{rem}
\subsection{Theoretical Analysis}
\subsubsection{Convergence Analysis}\label{sec:conv}
With the subroutines clarified, we now turn to discuss the optimization method in a global view. To reach a critical point of $(\boldsymbol{Prox})$, we have to alternately optimize $\boldsymbol{U}$ and $\boldsymbol{W}$ until convergence before changing the reference point $\widetilde{\bm{W}}^{t}$. This induces a bi-level looping: \textit{an outer loop} is responsible for changing the reference point and \textit{an inner loop} is responsible for solving $\boldsymbol{W}^k$ and $\boldsymbol{U}^k$ given the reference point, which significantly increases the computational burden. However, we find practically that one round of the inner loop is sufficient to guarantee the convergence property. We summarize this method in Alg.~\ref{alg:opt}. Now we can prove the global convergence property for Alg.~\ref{alg:opt}, and more importantly, this property holds for both the surrogate loss and the original loss.
\begin{thm}[\textbf{Global Convergence Property for Alg.~\ref{alg:opt} with respect to } $(\bm{P}^\star)$]\label{thm:glob_star}
\ Let $\{\bm{W}^t, \bm{U}^t\}$ be the sequence generated by Alg. \ref{alg:opt}. Furthermore, assume that $\mathcal{J}(\cdot)$ is a definable function, that $\mathcal{J}(\bm{W})$ is bounded from below, and that its gradient is $\varrho$-Lipschitz continuous. Then, picking $C > \varrho$ and $0 < \alpha_3 < 2C\min_t \breve{\delta}(\mathcal{L}_{\Gbi}^t)$, for all finite and feasible initializations, the following facts hold:
\begin{enumerate}[itemindent=0pt, leftmargin =15pt,label={(\arabic*)}]
\item The parameter sequence $\{\boldsymbol{W}^t, \boldsymbol{U}^t\}_t$ converges to a critical point $(\bm{W}^*, \bm{U}^*)$ of the problem $(\bm{P}^\star)$.
\item The loss sequence converges to the loss of critical point $(\bm{W}^*,\bm{U}^*)$ of the problem $(\bm{P}^\star)$.
\item For each $t \in \mathbb{N}$, there exists a subgradient $\bm{g}_t$ such that $\dfrac{1}{T} \sum\limits_{t=1}^T\norm{\bm{g}_t}^2 \rightarrow 0$ as $T \rightarrow +\infty$, with rate $\mathcal{O}(\frac{1}{T})$.
\end{enumerate}
\end{thm}
\begin{thm}(\textbf{Global Convergence Property for Alg.~\ref{alg:opt} with respect to $\bm{P}$}).\label{thm:global}
\ Under the same condition as Thm. \ref{thm:glob_star}, the sequence $\{\bm{W}^t, \bm{U}^t\}_t$ generated by Alg.~\ref{alg:opt} also satisfies (1)-(3) with respect to the original problem $(\bm{P})$.
\end{thm}
\begin{rem}
With the help of Thm.~\ref{eigsol}, we can reach the global convergence property for the original problem in Thm.~\ref{thm:global}. Unfortunately, this does not always hold if we adopt Thm.~\ref{thm:eig} directly. One reason is that it is hard to guarantee the identifiability discussed in Rem.~\ref{rem:ident} for the $\bm{U}^t$ sequence even if $\bm{W}^t$ converges to a critical point. Another reason is that, without Thm.~\ref{eigsol}, it is hard to meet the sufficient descent condition required by the global convergence property (see our appendix). By contrast, our algorithm, though developed to solve $(\bm{P}^\star)$, also converges to a critical point of $(\bm{P})$. This shows that optimizing the surrogate loss does not affect the quality of the solution. Moreover, since the choice of $\alpha_3$ is irrelevant to the algorithm as long as it lies in $(0, 2C\min_t\breve{\delta}(\mathcal{L}_{\Gbi}^t))$, we do not need to tune this hyperparameter explicitly.
\end{rem}
\begin{algorithm}[t]
\caption{TFCL for $(\bm{P})$}
\label{alg:opt}
\begin{algorithmic}
\STATE {\bfseries Input:} Dataset $\mathcal{S}$, $\alpha_1$, $\alpha_2$, $k$, $C\ (C > \varrho)$.
\STATE {\bfseries Output:} Solution $\boldsymbol{W}$, $\boldsymbol{U}$.
\STATE Initialize $\boldsymbol{W}^0$, $\boldsymbol{U}^0 \in \Gamma$, $t=1$.
\REPEAT
\STATE \textbf{\texttt{U SUBROUTINE}}:
\STATE \ \ {Calculate} $\mathcal{L}_{\Gbi}$ with $\bm{W}^{t-1}$.
\STATE \ \ $\bm{V}^{t} \gets $ {the eigenvectors of} $\mathcal{L}_{\Gbi}$.
\STATE \ \ $\bm{U}^{t} \gets \bm{V}^{t}\widetilde{\Lambda}\bm{V}^{t}{^\top}$, {according to Thm. \ref{eigsol}}.
\STATE \textbf{\texttt{W SUBROUTINE}}:
\STATE \ \ {Calculate} $\bm{R}^t$ with $\bm{U}^{t}$.
\STATE \ \ $\widetilde{\boldsymbol{W}}^t \gets \boldsymbol{W}^{t-1} - (1/C)\cdot \nabla_{\boldsymbol{W}}\mathcal{J}(\boldsymbol{W}^{t-1})$.
\STATE \ \ $\bm{W}^{t} \gets sgn(\bm{\widetilde{W}}^{t})\left(\left|\dfrac{\bm{\widetilde{W}}^{t}}{1+\frac{\alpha_2}{C}}\right| - \frac{\alpha_1}{C+\alpha_2} \bm{D}^t\right)_+$.
\STATE $t \gets t + 1$ .\\
\UNTIL{Convergence}
\STATE $\boldsymbol{W} = \boldsymbol{W}^{t-1}$, $\boldsymbol{U} = \boldsymbol{U}^{t-1}$.
\end{algorithmic}
\end{algorithm}
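For concreteness, the following Python sketch mirrors the update order of Alg.~\ref{alg:opt}. Here \texttt{grad\_J} (the gradient of the smooth loss $\mathcal{J}$) is passed in as a callable, the construction of $\widetilde{\Lambda}$ is a placeholder for the choice prescribed by Thm.~\ref{eigsol}, and \texttt{embedding\_cost\_matrix} refers to the earlier sketch; this is an illustration of the alternating structure, not a reference implementation.
\begin{verbatim}
import numpy as np

def bipartite_laplacian(A):
    """Laplacian of the (d+T)-node bipartite graph whose (feature,
    task) edge weights are A = |W| (standard construction)."""
    d, T = A.shape
    adj = np.zeros((d + T, d + T))
    adj[:d, d:], adj[d:, :d] = A, A.T
    return np.diag(adj.sum(axis=1)) - adj

def tfcl(grad_J, d, T, k, alpha1, alpha2, C, n_iter=100):
    """Sketch of Alg. 1: alternate the U (spectral) and W
    (proximal-gradient) subroutines until convergence."""
    W = 0.01 * np.random.randn(d, T)      # finite, feasible start
    for _ in range(n_iter):
        # U SUBROUTINE: eigenvectors of the current Laplacian.
        L_bi = bipartite_laplacian(np.abs(W))
        _, V = np.linalg.eigh(L_bi)
        Lam = np.diag((np.arange(d + T) < k).astype(float))  # placeholder
        U = V @ Lam @ V.T                 # U = V Lam V^T (Thm. eigsol)
        # W SUBROUTINE: shrinkage weighted by embedding distances.
        D = embedding_cost_matrix(L_bi, k, d, T)
        W_tilde = W - grad_J(W) / C
        W = np.sign(W_tilde) * np.maximum(
            np.abs(W_tilde) / (1 + alpha2 / C)
            - alpha1 / (C + alpha2) * D, 0.0)
    return W, U
\end{verbatim}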
\subsubsection{Task-Feature Grouping Effect} \label{sec:group}
In this subsection, we show how the proposed algorithm differentiates task-feature groups in the model weights. Specifically, we have the following theorem.\\
\indent In an ideal case, if $\sum_{i=1}^k \lam{i} = 0$, then according to Thm.~\ref{thm:graph}, $\mathcal{G}_{BI}$ must be $k$-connected and $\bm{W}_{i,j} \neq 0$ if and only if feature $i$ and task $j$ are in the same component of this bipartite graph.\\
\indent More practically, we often observe $\sum_{i=1}^k \lam{i} \neq 0$, which makes the arguments above inapplicable. Instead of assuming $\sum_{i=1}^k \lam{i} = 0$, for a well-trained model it is reasonable to assume that the final objective function is small at the very end of the algorithm. Motivated by this, the following theorem shows that we can still recover the grouping structure under this much weaker assumption.
\begin{thm}(\textbf{Grouping Effect})\label{thm:group}
Assume that Alg.~\ref{alg:opt} terminates at the $\mathcal{T}$-th iteration with $\mathcal{F}(\bm{W}^{\mathcal{T}-1}, \bm{U}^{\mathcal{T}-1}) \le \epsilon_{\mathcal{T}-1}$. Denote $\mathsf{Supp}(\bm{A}) = \left\{(i,j): A_{i,j} \neq 0 \right\}$, $\mathcal{H}_K = \left\{\bm{W}: \norm{\bm{W}}_F \le K \right\}$, and $C_0 = \left(\dfrac{2}{\alpha_2} \cdot \epsilon_{\mathcal{T}-1} \right)^{1/2}$. We further assume that for all $0 < \kappa < \infty$, $\sup_{\norm{\bm{W}}_F \le \kappa}\norm{\nabla_{\boldsymbol{W}}\mathcal{J}(\boldsymbol{W})}_\infty \le \varpi(\kappa) < \infty$, and that there is a matrix $\bm{W}^\star \in \mathcal{H}_{C_0}$ whose corresponding bipartite graph $\mathcal{G}^\star$ has $k$ connected components, with the Graph Laplacian matrix $\mathcal{L}_{\Gbi}^\mathcal{T}$ giving the ground-truth grouping. Moreover, denote by $n_i$ the number of nodes in the $i$-th group of the graph, and let $n^\uparrow_1 = \max_{i} n_i$ and $n^\uparrow_2 = \max_{j,\, n_j \le n^\uparrow_1} n_j$. With the following notations:
\begin{equation*}
\begin{split}
&\kappa_0 = C_0 + \dfrac{\varpi(C_0)}{C},~ \delta_1 = \frac{C}{\alpha_1} \kappa_0,~ \delta_2 = \frac{C}{\alpha_1} \delta_0,~\beta = \dfrac{1}{n^\uparrow_1} + \dfrac{1}{n^\uparrow_2},\\
& \rho = \frac{C_0}{\lambda_{k+1}\left(\mathcal{L}_{\Gbi}^\mathcal{T}\right)}, ~ \xi= \rho \cdot(\sqrt{d+T} + \sqrt{2}),
\end{split}
\end{equation*}
we have:
\begin{enumerate}
\item[(a)] \textbf{(no-false-positive-grouping)} If $\lambda_{k+1}(\mathcal{L}_{\Gbi}^\mathcal{T}) > \lambda_{k}(\mathcal{L}_{\Gbi}^\mathcal{T}) > 0$, $\frac{\sqrt{2}}{32} \cdot \beta > \xi $, and $ 8\sqrt{2}\xi < \delta_1 < \beta- 8\sqrt{2} \xi $, we have:
\begin{equation*}\label{eq:g1}
\mathsf{Supp}(\bm{W}^\mathcal{T}) \subseteq \big\{(i,j): \mathcal{G}(i) = \mathcal{G}(j) \big\} = \mathsf{Supp}(\bm{W}^\star) ,
\end{equation*}
where $\mathcal{G}(i)$ is the connected component of the bipartite graph $\mathcal{G}^\star$ that $i$ belongs to.
\item[(b)] \textbf{(correct-grouping) }If we further assume that $\min_{(i,j)}|\widetilde{\boldsymbol{W}}^{\mathcal{T}}_{i,j}| \ge \delta_{0}> 0$, \[ 8\sqrt{2}\xi < \min \left\{\delta_1,\delta_2 \right\} \le \max \left\{\delta_1,\delta_2 \right\} < \beta - 8\sqrt{2} \xi, \] we get that:
\begin{equation*}\label{eq:g2}
\mathsf{Supp}(\bm{W}^\mathcal{T}) = \mathsf{Supp}(\bm{W}^\star).
\end{equation*}
\end{enumerate}
\end{thm}
\begin{rem} We have the following remarks for the theorem:
\begin{enumerate}
\item[(a)] The assumption that one can find a $\bm{W}^\star \in \mathcal{H}_{C_0}$ consistent with the GT structure is always achievable, since if $\bm{W}^\star \notin \mathcal{H}_{C_0}$, one can instead pick $\bm{W}' = C_0 \cdot \frac{\bm{W}^\star}{||\bm{W}^\star||_F}$, which lies within the F-norm ball and has the same support set.
\item[(b)] Thm. \ref{thm:group} states that if the $k$-th eigengap of $\mathcal{L}_{\Gbi}$ exists, the hyperparameters are chosen as $\alpha_2 = o\left(\frac{(d+T)\cdot \epsilon_{\mathcal{T}-1}}{\beta^2 \cdot \lambda_{k+1}(\mathcal{L}_{\Gbi}^\mathcal{T})^2}\right)$, $\alpha_1 = \mathcal{O}(C\kappa_0)$, and the inequalities on $\xi$ and $\delta_1$ are ensured, then we reach the no-false-positive grouping, where $|W^\mathcal{T}_{ij}|$ is activated as nonzero only if feature $i$ and task $j$ belong to the same group in the GT. Moreover, with extra assumptions on the intermediate variable $|\widetilde{W}^\mathcal{T}_{ij}|$, and with $\alpha_1 = \mathcal{O}(C\cdot(\delta_0 \vee \kappa_0))$, $\alpha_2 = o\left(\frac{(d+T)\cdot \epsilon_{\mathcal{T}-1}}{\beta^2 \cdot \lambda_{k+1}(\mathcal{L}_{\Gbi}^\mathcal{T})^2}\right)$, we can hopefully reach a correct grouping, where $|W^\mathcal{T}_{ij}|$ is activated as nonzero if and only if feature $i$ and task $j$ belong to the same group in the GT.
\end{enumerate}
\end{rem}
\subsection{Discussion}\label{sec:disscus}
To end this section, we provide a discussion on the relationship between our base model and the work of \cite{lublock} and \cite{comtl}.
Similar to \cite{comtl}, we adopt the major assumption that features and tasks should be grouped into different clusters. However, our model differs significantly from this work. In \cite{comtl}, co-clustering is realized via the k-means assumption, without an explicit guarantee of leveraging the block-diagonal structure. Inspired by \cite{lublock}, we provide an explicit regularizer from spectral graph theory for the clustering problem. In our work, we generalize the regularization in \cite{lublock} to a bipartite graph and apply it to the MTL problem. More importantly, we also provide a generalized closed-form solution for the variational form of the truncated eigenvalue sum problem, as shown in Thm.\ref{eigsol}. As an important property, it identifies a specific global solution of the original problem (which is not strongly convex) as the unique solution of the strongly convex problem \eqref{eq:usub}. This not only makes $\bm{V}_k\bm{V}_k^\top$ identifiable but also leads to the global convergence result, which is missing in \cite{lublock} and \cite{comtl}. Last but not least, compared with \cite{lublock}, we also adopt a different optimization method, which makes the optimization procedure simpler. It also offers us a chance to establish a close connection with the optimal transport (OT) problem, suggesting that the $\bm{W}$ subproblem approximates the OT problem with the distances between spectral embeddings as the transportation cost. Moreover, this leads to our analysis of the grouping effect under practical considerations, as shown in Thm.\ref{thm:group} and Thm.\ref{thm:group_app}, which is also new compared with the existing studies.
\section{Personalized Attribute Prediction}\label{sec:per}
\indent So far, we have developed a novel multi-task learning method based on the desire for task-feature collaborative learning. In this section, we extend this method to a specific application problem which we call personalized attribute prediction. In this problem, we are given a set of personal annotations of visual attributes (e.g., \emph{smile} for human faces, \emph{comfortable} for shoes) for a variety of images, collected on crowdsourcing platforms. Our goal is to predict the user-specific annotations for unseen images, so that the results cater to personalized demands, which often span a wide spectrum. This problem greatly matches the multi-task learning scenario, since each user typically annotates only a limited number of images.
\subsection{Extended Model}\label{sec:app_model}
To model the personalized annotation process, we regard each user's annotation prediction as a single task. For a given attribute, we assume that there are $T$ users who take part in the annotation. Further, we assume that the $i$-th user labeled $n_i$ images with $n_{+,i}$ positive labels and $n_{-,i}$ negative labels. In this setting, $\boldsymbol{X}^{(i)} \in \mathbb{R}^{n_{i} \times d}$ becomes the input features of the images that the $i$-th user labeled, whereas $\boldsymbol{y^{(i)}} \in \{-1,1\}^{n_{i}}$ becomes the corresponding label vector. If $y^{(i)}_k =1$, then the user thinks that the given attribute is present in the $k$-th image; otherwise we have $y^{(i)}_k =-1$. Moreover, we denote $\mathcal{S}_{+,i}= \{k \ | \ y^{(i)}_k = 1\}$ and $\mathcal{S}_{-,i}= \{k \ | \ y^{(i)}_k = -1\}$. The diversity in personalized annotations allows us to employ different models for different users. In this spirit, we adopt a linear learner $\boldsymbol{g}^{(i)}(\boldsymbol{x})= \boldsymbol{W}^{(i)^\top}\boldsymbol{x}$ for each task (user) $i$.
A naive way to solve this problem is to learn user-specific models separately. However, adopting completely independent models might lead to disastrous over-fitting due to the limited number of annotations from each user. To prevent this issue, we apply a coarse-to-fine decomposition of $\boldsymbol{W}$:
\begin{equation}
\boldsymbol{W}^{(i)} = \boldsymbol{\varTheta}_{c} +\boldsymbol{\varTheta}^{(i)}_g + \boldsymbol{\varTheta}^{(i)}_p.
\end{equation}
Here, the coarse-grained component $\boldsymbol{\varTheta}_{c}$ captures the consensus pattern shared across all users. This pattern typically consists of common sense and the superficial semantic information that is easily accepted by almost all users. The finer-grained component $\boldsymbol{\varTheta}_{g}= [\boldsymbol{\varTheta}^{(1)}_{g}, \cdots, \boldsymbol{\varTheta}^{(T)}_{g}]$ captures the grouping pattern where our TFCL method works. Specifically, it explains the majority of the diversity in the results, where different groups of users (tasks) tend to favor different results based on different priorities over the features. Considering the negative transfer issue, sharing information across dissimilar users and features clearly leads to over-fitting. Our block-diagonal regularizer then comes into play to restrict the structure of $\boldsymbol{\varTheta}_{g}$ against negative transfer. The finest-grained component $\boldsymbol{\varTheta}_{p}= [\boldsymbol{\varTheta}^{(1)}_{p}, \cdots, \boldsymbol{\varTheta}^{(T)}_{p}]$ captures the personalized patterns that are completely user-specific. $\boldsymbol{\varTheta}_{p}$ is not activated for all users. Rather, it is only activated for the hard tasks corresponding to extremely personalized users and malicious users. In this way, $\boldsymbol{\varTheta}_{p}$ offers us a chance to separate the abnormal tasks from the co-grouping factor, which keeps the model away from negative transfer from hard tasks. To sum up, we have a concluding remark on how this decomposition scheme increases the flexibility of our base model.
\begin{rem}
In the extended model, there are two extra terms: $\boldsymbol{\varTheta}_{p}$ and $\boldsymbol{\varTheta}_{c}$. As a task-wise sparse parameter, $\boldsymbol{\varTheta}_{p}$ serves as a detector for hard tasks. This is beneficial for suppressing negative transfer. In fact, the negative effect might come from transferring knowledge from hard tasks (with poor performance) to easy tasks (with good performance), and having a non-zero $\boldsymbol{\varTheta}_{p}$ helps to prevent the hard tasks from grouping with easy tasks. Meanwhile, $\boldsymbol{\varTheta}_{c}$ is a common factor that allows different groups to share overlapping features. With these two extra components, we reach a comprehensive model with sharing, grouping, and the effect of hard tasks considered.
\end{rem}
With the decomposition scheme, we then provide an objective function for the proposed model. To realize the functionality of the three components, we provide different regularization based on their characterizations. For the common factor $\boldsymbol{\varTheta}_c$, we simply adopt the most widely-used $\ell_2$ regularization. For $\boldsymbol{\varTheta}_{g}$, we employ our task-feature collaborative learning framework. To do this, we reformulate $\mathcal{G}_{BI}$ with the users, features and $\boldsymbol{\varTheta}_{g}$. Moreover, the graph Laplacian is defined as $\bm{\varTheta}_{\mathcal{G}}$ which is obtained by replacing $\bm{W}$ in the original $\mathcal{L}_{\Gbi}$ with $\bm{\varTheta}_g $. For $\boldsymbol{\varTheta}_{p}$, we adopt the $\ell_{1,2}$-norm to induce column-wise (user-wise) sparsity.
Finally, the empirical loss for task $i$ is denoted as $\ell_i$. Following the standard preference learning paradigm, for a given user, we expect the positively labeled instances to always rank higher than the negative ones, so that the instances with the top predicted scores always match the user's comprehension of the attribute. This motivates us to optimize the Area Under the ROC Curve (AUC) metric in our model. Specifically, we adopt the squared surrogate loss for AUC \cite{onepass}:
\begin{equation*}
\begin{split}
&\mathcal{J}(\bm{\varTheta}_c ,\bm{\varTheta}_g ,\bm{\varTheta}_p ) = \sum_{i=1}^T \ell_i,\\
&\ell_i = \sum\limits_{x_p \in \mathcal{S}_{+,i}}\sum\limits_{x_q \in \mathcal{S}_{-,i}} \frac{s\Big(\boldsymbol{g}^{(i)}(\boldsymbol{x}_p) -\boldsymbol{g}^{(i)}(\boldsymbol{x}_q) \Big)}{n_{+,i}n_{-,i}}.\\
\end{split}
\end{equation*}
where $s(t) = (1-t)^2$. Note that the reasons for choosing the squared surrogate loss are two-fold. Theoretically, it is proved in the previous literature \cite{consis} that the squared loss results in a Bayesian optimal classifier that is consistent with the true 0-1 AUC loss
\[\sum_{i}\sum_{x_p \in \mathcal{S}_{+,i}}\sum_{x_q \in \mathcal{S}_{-,i}} \frac{1}{n_{+,i}n_{-,i}} \cdot I\left[\bm{g}^{(i)}(\bm{x}_p) > \bm{g}^{(i)}(\bm{x}_q)\right]\] in an asymptotic sense. Practically, as discussed in the next subsection, we can easily accelerate the calculation of the AUC loss. With all the above-mentioned settings, our objective function can be written in the form:
\begin{equation*}\label{genform}\small
\begin{split}
(\boldsymbol{Q}) \min_{\boldsymbol{\varTheta}, \boldsymbol{U} \in \Gamma} & ~ \left\{\begin{split}
&\mathcal{J}(\bm{\varTheta}_c ,\bm{\varTheta}_g ,\bm{\varTheta}_p )+ \alpha_1 \left<\bm{\varTheta}_{\mathcal{G}}, \boldsymbol{U}\right> +\frac{\alpha_2}{2} \norm{\bm{\varTheta}_g }_{F}^2 \\ & +\frac{\alpha_3}{2}\norm{\bm{\varTheta}_c }_2^2 + \alpha_4 \norm{\bm{\varTheta}_p }_{1,2} + \iota_{\Gamma}(\bm{U})
\end{split}\right\}
\end{split}.
\end{equation*}
\subsection{Extended Optimization}
In this subsection, we will provide a fast extended algorithm to optimize $(\bm{Q})$. We first define the AUC comparison graph. Then we provide acceleration methods to speed-up loss and gradient evaluation. Finally, we provide an extended optimization method to solve problem $(\bm{Q})$.
\noindent \textbf{AUC comparison graph}. To begin with, we define an AUC comparison graph to represent the sparse comparisons needed to calculate AUC. For each user $i$, the graph is defined as $\mathcal{G}_{AUC}^{(i)} = (\mathcal{V}^{(i)}, \mathcal{E}^{(i)},\mathcal{W}^{(i)})$. Here $\mathcal{V}^{(i)}$ denotes the set of vertices, consisting of all the samples that user $i$ labeled. Similarly, $\mathcal{E}^{(i)}$ represents the edge set $\{(j,k): y^{(i)}_j \neq y^{(i)}_k \}$. Moreover, for all edges $(j,k) \in \mathcal{E}^{(i)}$, we have a weight matrix $\mathcal{W}^{(i)}$ such that $\mathcal{W}^{(i)}_{j,k} = \frac{1}{n_{+,i}n_{-,i}}$. Given $\mathcal{W}^{(i)}$, the Laplacian matrix $\mathcal{L}_{AUC}^{(i)}$ of $\mathcal{G}_{AUC}^{(i)}$ could be expressed as: $\mathcal{L}_{AUC}^{(i)} = diag(\mathcal{W}^{(i)}\boldsymbol{1}) - \mathcal{W}^{(i)}.$\\
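As a sanity check of this construction, the small numpy sketch below (ours, for illustration only) builds $\mathcal{W}^{(i)}$ and $\mathcal{L}_{AUC}^{(i)}$ directly from a $\pm 1$ label vector:
\begin{verbatim}
import numpy as np

def auc_laplacian(y):
    """AUC comparison-graph Laplacian for one user (a sketch).
    y is the +/-1 label vector; every positive-negative pair is an
    edge of weight 1/(n_plus * n_minus), as defined above."""
    y_tilde = (1 + y) / 2.0            # 1 on positives, 0 on negatives
    n_plus, n_minus = y_tilde.sum(), (1 - y_tilde).sum()
    Wmat = (np.outer(y_tilde, 1 - y_tilde)
            + np.outer(1 - y_tilde, y_tilde)) / (n_plus * n_minus)
    return np.diag(Wmat.sum(axis=1)) - Wmat
\end{verbatim}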
\textbf{Loss Evaluation}. With the definition of $\mathcal{L}_{AUC}^{(i)}$, we could reformulate the empirical loss $\mathcal{J}$ as:
\begin{equation*}
\begin{split}
\ell_i& = \sum\limits_{x_p \in \mathcal{S}_{+,i}}\sum\limits_{x_q \in \mathcal{S}_{-,i}} \frac{s\Big(\boldsymbol{g}^{(i)}(\boldsymbol{x}_p) -\boldsymbol{g}^{(i)}(\boldsymbol{x}_q) \Big)}{n_{+,i}n_{-,i}}\\
&=\frac{1}{2} (\boldsymbol{\widetilde{y}}^{(i)}-\hat{\boldsymbol{y}}^{(i)})^\top \mathcal{L}_{AUC}^{(i)}(\boldsymbol{\widetilde{y}}^{(i)}-\hat{\boldsymbol{y}}^{(i)})
\end{split}
\end{equation*}
\begin{equation*}
\mathcal{J}(\bm{\varTheta}_c ,\bm{\varTheta}_g ,\bm{\varTheta}_p ) = \sum_{i=1}^{T} \ell_i,
\end{equation*}
where $\boldsymbol{\widetilde{y}}^{(i)} = \frac{1+ \boldsymbol{y}^{(i)}}{2}$, $\boldsymbol{\hat{y}}^{(i)} = \boldsymbol{X}^{(i)}(\boldsymbol{\varTheta}_c + \boldsymbol{\varTheta}^{(i)}_g + \boldsymbol{\varTheta}^{(i)}_p )$.\\
\noindent \textbf{Gradient Computation}: According to the quadratic formulation of the AUC loss, we can calculate the gradients $\nabla_{\boldsymbol{\varTheta}_c}\mathcal{J}$,
$\nabla_{\boldsymbol{\varTheta}_g}\mathcal{J}$, $\nabla_{\boldsymbol{\varTheta}_p}\mathcal{J}$ as follows:
\[\begin{array}{ll}
\nabla_{\boldsymbol{\varTheta}_c}\mathcal{J} &= \sum_{i} \boldsymbol{X}^{(i)\top}\mathcal{L}_{AUC}^{(i)}\left(\boldsymbol{X}^{(i)}\boldsymbol{W}^{(i)} - \widetilde{\bm{y}}^{(i)} \right),\\
\nabla_{\boldsymbol{\varTheta}^{(i)}_g}\mathcal{J} &= \boldsymbol{X}^{(i)\top}\mathcal{L}_{AUC}^{(i)}\left(\boldsymbol{X}^{(i)}\boldsymbol{W}^{(i)} - \widetilde{\bm{y}}^{(i)} \right),\\
\nabla_{\boldsymbol{\varTheta}^{(i)}_p}\mathcal{J} &= \boldsymbol{X}^{(i)\top}\mathcal{L}_{AUC}^{(i)}\left(\boldsymbol{X}^{(i)}\boldsymbol{W}^{(i)} - \widetilde{\bm{y}}^{(i)} \right).
\end{array}\]
\noindent \textbf{Efficient Computation}. According to the definition of $\mathcal{G}_{AUC}^{(i)} $ and $\mathcal{E}^{(i)}$, the affinity matrix of $\mathcal{G}_{AUC}^{(i)} $ could be written as:
\begin{equation*}
\mathcal{W}^{(i)} = \frac{1}{n_{+,i}n_{-,i}}[\boldsymbol{\tilde{y}}^{(i)}(\boldsymbol{1}-\boldsymbol{\tilde{y}}^{(i)})^\top + (\boldsymbol{1}-\boldsymbol{\tilde{y}}^{(i)})(\boldsymbol{\tilde{y}}^{(i)})^\top ].
\end{equation*}
Correspondingly, $\mathcal{L}_{AUC}^{(i)}$ could be simplified as:
\begin{equation}\label{key}
\mathcal{L}_{AUC}^{(i)} = diag\left(\frac{\boldsymbol{\tilde{y}^{(i)}}}{n_{+,i}}+ \frac{\boldsymbol{1} - \boldsymbol{\tilde{y}^{(i)}}}{n_{-,i}}\right) -\mathcal{W}^{(i)}
\end{equation}
Denote $\bm{R}^{(i)} = \bm{X}^{(i)}\bm{W}^{(i)} - \widetilde{\bm{y}}^{(i)}$; we are now ready to speed up the loss evaluation $\sum_i \bm{R}^{(i)^\top} \mathcal{L}_{AUC}^{(i)} \bm{R}^{(i)}$.
We have:
\begin{equation}\label{eq:loss_eve}
\begin{split}
\bm{R}^{(i)^\top} \mathcal{L}_{AUC}^{(i)}\bm{R}^{(i)} = &\bm{R}^{(i)^\top} \Bigg(diag\left(\frac{\boldsymbol{\tilde{y}^{(i)}}}{n_{+,i}}+ \frac{\boldsymbol{1} -
\boldsymbol{\tilde{y}^{(i)}}}{n_{-,i}}\right)\Bigg) \bm{R}^{(i)} \\
&- {\bm{R} }^{(i)}_+{{\bm{R} }^{(i)}_-} - {{\bm{R} }^{(i)}_-}{{\bm{R} }^{(i)}_+},
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
{{\bm{R}}^{(i)}_+} =\frac{1}{n_{+,i}} \bm{R}^{(i)^\top} \boldsymbol{\tilde{y}}^{(i)},\ \ {{\bm{R}}^{(i)}_-} =\frac{1}{n_{-,i}} \bm{R}^{(i)^\top}(\boldsymbol{1} -\boldsymbol{\tilde{y}}^{(i)}).\\
\end{split}
\end{equation*}
Similarly, we have the following simplification for the gradients:
\begin{equation}\label{eq:grad_eve}
\begin{split}
\bm{X}^{(i)^\top}\mathcal{L}_{AUC}^{(i)}\bm{R}^{(i)} = &\bm{X}^{(i)^\top} \Bigg(diag\left(\frac{\boldsymbol{\tilde{y}}^{(i)}}{n_{+,i}}+ \frac{\boldsymbol{1} -
\boldsymbol{\tilde{y}}^{(i)}}{n_{-,i}}\right)\Bigg)\bm{R}^{(i)}\\ &- {\bm{X}_+^{(i)}}\bm{R}_-^{(i)} - {\bm{X}_-^{(i)}}\bm{R}_+^{(i)},
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
{\bm{X}_+^{(i)}} =\frac{1}{n_{+,i}} \bm{X}^{(i)^\top} \boldsymbol{\tilde{y}}^{(i)},\ \ {\bm{X}_-^{(i)}} =\frac{1}{n_{-,i}} \bm{X}^{(i)^\top} (\boldsymbol{1} -\boldsymbol{\tilde{y}}^{(i)}).\\
\end{split}
\end{equation*}
From \refeq{eq:loss_eve} and \refeq{eq:grad_eve}, we see that the complexity of computing the loss and the gradient could be reduced from at most $\mathcal{O}(\sum_i n_i^2)$ and $\mathcal{O}(\sum_i n_i^2\cdot d)$, respectively, to $\mathcal{O}(\sum_i n_i \cdot d)$ and $\mathcal{O}(\sum_i n_i\cdot d)$, respectively. Applying this rule, we can compute the loss function and its gradients in time linear in the sample size.\\
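The diagonal-plus-rank-two structure above translates directly into code. The numpy sketch below (ours; a single user and a single weight vector for simplicity) evaluates the squared AUC surrogate and its gradient in $\mathcal{O}(n_i\cdot d)$ without ever forming $\mathcal{L}_{AUC}^{(i)}$:
\begin{verbatim}
import numpy as np

def auc_loss_and_grad(X, y, w):
    """Squared AUC surrogate and gradient for one user in O(n*d),
    via the diagonal-plus-rank-two form of L_AUC (a sketch)."""
    y_tilde = (1 + y) / 2.0
    n_plus, n_minus = y_tilde.sum(), (1 - y_tilde).sum()
    R = X @ w - y_tilde                  # residual vector
    diag = y_tilde / n_plus + (1 - y_tilde) / n_minus
    R_plus = y_tilde @ R / n_plus        # scalar contraction R_+
    R_minus = (1 - y_tilde) @ R / n_minus
    LR = diag * R - R_minus * y_tilde - R_plus * (1 - y_tilde)
    loss = 0.5 * R @ LR                  # R^T L_AUC R / 2
    grad = X.T @ LR                      # X^T L_AUC R, no n x n matrix
    return loss, grad
\end{verbatim}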
Next, we extend the optimization method proposed in the last section to solve the problem.
\noindent \textbf{Optimization}. It can be proved that the empirical loss $\mathcal{J} = \sum_{i=1}^T\ell_i$ has Lipschitz continuous gradients with respect to $\boldsymbol{\varTheta}= [\boldsymbol{\varTheta}_c, vec(\boldsymbol{\varTheta}_g), vec(\boldsymbol{\varTheta}_p)]$ for bounded input $\boldsymbol{X}$. We denote the Lipschitz constant as $\varrho_{\varTheta}$ (see our appendix).
Picking $C > \varrho_{\varTheta}$ at each iteration $t$, we can solve for the parameters via the following subproblems:
\begin{equation}
(\boldsymbol{Prox_g})\ \argmin_{\boldsymbol{\varTheta}_g,\boldsymbol{U} \in \Gamma}
\left\{
\begin{split}
&\dfrac{1}{2} \norm{\boldsymbol{\varTheta}_g-\widetilde{\boldsymbol{\varTheta}}_g^t }_F^2 + \frac{\alpha_1}{C} \left<\boldsymbol{\varTheta}_{\mathcal{G}},\boldsymbol{U}\right> \\
&+ \frac{\alpha_2}{2C} \norm{\bm{\varTheta}_g }_F^2\\
\end{split}
\right\},
\label{p2}
\end{equation}
\begin{equation}
\begin{split}
(\boldsymbol{Prox_c})& \ \argmin_{\boldsymbol{\varTheta}_c} \dfrac{1}{2} \norm{\boldsymbol{\varTheta}_c -\widetilde{\boldsymbol{\varTheta}}_c^t }_2^2 + \frac{\alpha_3}{2C} \norm{\boldsymbol{\varTheta}_c}_2^2 \label{p1},
\end{split}
\end{equation}
\begin{equation}
\begin{split}
(\boldsymbol{Prox_p})& \ \argmin_{\boldsymbol{\varTheta}_p}
\dfrac{1}{2} \norm{\boldsymbol{\varTheta}_p -\widetilde{\boldsymbol{\varTheta}}_p^t }_F^2 + \frac{\alpha_4}{C} \norm{\boldsymbol{\varTheta}_p}_{1,2} \label{p3},
\end{split}
\end{equation}
where
\begin{equation}\label{eq:tg}
\boldsymbol{\widetilde{\varTheta}}_g^t= \boldsymbol{\varTheta}_g^{t-1} - \dfrac{1}{C}\nabla_{\boldsymbol{\varTheta}_g}\mathcal{J}(\boldsymbol{\varTheta}^{t-1}),
\end{equation}
\begin{equation}\label{eq:tc}
\boldsymbol{\widetilde{\varTheta}}_c^t= \boldsymbol{\varTheta}_c^{t-1} - \dfrac{1}{C}\nabla_{\boldsymbol{\varTheta}_c}\mathcal{J}(\boldsymbol{\varTheta}^{t-1}),
\end{equation}
\begin{equation}\label{eq:tp}
\boldsymbol{\widetilde{\varTheta}}_p^t= \boldsymbol{\varTheta}_p^{t-1} - \dfrac{1}{C}\nabla_{\boldsymbol{\varTheta}_p}\mathcal{J}(\boldsymbol{\varTheta}^{t-1}).
\end{equation}
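Both $(\boldsymbol{Prox_c})$ and $(\boldsymbol{Prox_p})$ admit simple closed-form solutions, sketched below in numpy (names ours): a ridge shrinkage for the common factor, and a column-wise (user-wise) group soft-thresholding for the $\ell_{1,2}$-regularized personalized factor.
\begin{verbatim}
import numpy as np

def prox_c(theta_tilde, alpha3, C):
    """Closed form of (Prox_c): ridge shrinkage (a sketch)."""
    return theta_tilde / (1.0 + alpha3 / C)

def prox_p(Theta_tilde, alpha4, C):
    """Closed form of (Prox_p): column-wise group soft-thresholding
    induced by the l_{1,2} norm (a sketch)."""
    norms = np.linalg.norm(Theta_tilde, axis=0, keepdims=True)
    scale = np.maximum(1.0 - (alpha4 / C) / np.maximum(norms, 1e-12), 0.0)
    return Theta_tilde * scale
\end{verbatim}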
Similar to Alg.~\ref{alg:opt}, we adopt Alg.~\ref{alg:opt_app} to optimize the parameters.\\
\indent At the end of this section, we show that Alg.~\ref{alg:opt_app} inherits the theoretical merits of Alg.~\ref{alg:opt}.
\begin{thm} \label{thm:conv_app} Denote by \[
\widetilde{\mathcal{F}} =\left\{\begin{split}
&\mathcal{J}({\bm{\varTheta}_c },\bm{\varTheta}_g ,\bm{\varTheta}_p )+ \alpha_1 \left<\bm{\varTheta}_{\mathcal{G}}, \boldsymbol{U}\right> + \frac{\alpha_2}{2} \norm{\bm{\varTheta}_g }_{F}^2\\ & +\frac{\alpha_3}{2}\norm{{\bm{\varTheta}_c }}_2^2+ \alpha_4 \norm{\bm{\varTheta}_p }_{1,2} + \iota_{\Gamma}(\bm{U})
\end{split}\right\} \] the loss function and denote by $(\bm{\varTheta}_c^t ,\bm{\varTheta}_g^t ,\bm{\varTheta}_p^t ,\boldsymbol{U}^t)$ the parameter obtained at iteration $t$. If the data is bounded in the sense that:
$\forall i, ~\norm{\boldsymbol{X}^{(i)}}_2 =\vartheta_{X_i} < \infty, ~n_{+,i} \ge 1, ~n_{-,i} \ge 1$, then picking $C > \varrho_{\varTheta}$, where $\varrho_{\varTheta}
=3T\sqrt{(2T+1)}\max_{i} \left\{\dfrac{n_i\vartheta^2_{X_i}}{n_{+,i}n_{-,i}}\right\}$, the following properties hold for Alg.~\ref{alg:opt_app}:
\begin{enumerate}[itemindent=0pt, leftmargin =15pt,label={(\arabic*)}]
\item The parameter sequence $(\bm{\varTheta}_c^t ,\bm{\varTheta}_g^t ,\bm{\varTheta}_p^t , \boldsymbol{U}^t)$ converges to a critical point $(\bm{\varTheta}_c ^\star,\bm{\varTheta}_g ^\star,\bm{\varTheta}_p ^\star,\boldsymbol{U}^\star)$ of the problem $\bm{Q}$.
\item The loss sequence $\{\widetilde{\mathcal{F}}_t\}_t$ converges to the loss $\widetilde{\mathcal{F}}^\star$ of a critical point of the problem $(\bm{Q})$.
\item For each $t \in \mathbb{N}$, there exists a subgradient $\bm{g}_t$ such that $\dfrac{1}{T} \sum\limits_{t=1}^T\norm{\bm{g}_t}^2 \rightarrow 0$ as $T \rightarrow +\infty$, with rate $\mathcal{O}(\frac{1}{T})$.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:group_app}
Assume that Alg.~\ref{alg:opt_app} terminates at the $\mathcal{T}$-th iteration with $\tilde{\mathcal{F}}_{\mathcal{T}-1} \le \epsilon^\mathcal{A}_{\mathcal{T}-1}$, where $\tilde{\mathcal{F}}_{\mathcal{T}-1}$ is the objective function at the $(\mathcal{T}-1)$-th iteration. Furthermore, assume that there is a matrix $\bm{\varTheta_g}^\star \in \mathcal{H}_{C^\mathcal{A}_0}$ whose corresponding bipartite graph $\mathcal{G}^\star$ has $k$ connected components, with the Graph Laplacian matrix $\bm{\varTheta}_{\mathcal{G}}^\mathcal{T}$ giving the ground-truth grouping. Moreover, denote by $n_i$ the number of nodes in the $i$-th group of the graph, and let $n^\uparrow_1 = \max_{i} n_i$ and $n^\uparrow_2 = \max_{j,\, n_j \le n^\uparrow_1} n_j$. With the following notations:
\begin{equation*}
C^\mathcal{A}_0 = \left(2 \cdot \dfrac{ \epsilon^\mathcal{A}_{\mathcal{T}-1}}{\alpha_2}\right)^{1/2}, \kappa^\mathcal{A}_0 = C^\mathcal{A}_0 + \frac{\varkappa(\xi_c,\xi_g,\xi_p)}{C}, \delta^\mathcal{A}_1 = \frac{C}{\alpha_1} \kappa^\mathcal{A}_0
\end{equation*}
\begin{equation*}
\delta^\mathcal{A}_2 = \frac{C}{\alpha_1} \delta^\mathcal{A}_0 , ~~\xi^\mathcal{A}= (\sqrt{d+T} + \sqrt{2}) \cdot \frac{C^\mathcal{A}_0}{\lambda_{k+1}(\bm{\varTheta}^\mathcal{T}_\mathcal{G})}
\end{equation*}
\begin{equation*}
\begin{split}
&\xi_c =\left(2 \cdot \frac{ \epsilon^\mathcal{A}_{\mathcal{T}-1} }{\alpha_3}\right)^{1/2}, \ \xi_g = C^\mathcal{A}_0, \ \xi_p = {\frac{ \epsilon^\mathcal{A}_{\mathcal{T}-1} }{\alpha_4}}, ~\beta^\mathcal{A}= \dfrac{1}{n^\uparrow_1} + \dfrac{1}{n^\uparrow_2}, \\
& \varkappa(\xi_c,\xi_g,\xi_p)
= \sum_{i=1}^T \dfrac{n_i\vartheta_{X_i}}{\sqrt{n_{+,i}}\,n_{-,i}} \left((\xi_c+\xi_g+\xi_p)\dfrac{\vartheta_{X_i}}{\sqrt{n_{+,i}}}+ 1 \right),
\end{split}
\end{equation*}
\noindent the following facts hold for the grouping effect of $\bm{\varTheta}_g$ in Alg.\ref{alg:opt_app}~:
\begin{enumerate}
\item[(a)] \textbf{(no-false-positive-grouping)} If $\lambda_{k+1}(\bm{\varTheta}_{\mathcal{G}}^\mathcal{T}) > \lambda_{k}(\bm{\varTheta}_{\mathcal{G}}^\mathcal{T}) \ge 0$, $\frac{\sqrt{2}}{32}\cdot \beta^\mathcal{A} > \xi^\mathcal{A} $, and $ 8\sqrt{2}\xi^\mathcal{A} < \delta^\mathcal{A}_1 < \beta^\mathcal{A}- 8\sqrt{2} \xi^\mathcal{A} $, we have:
\begin{equation*}\label{eq:g1}
\mathsf{Supp}(\bm{\varTheta}_g^\mathcal{T}) \subseteq \big\{(i,j): \mathcal{G}(i) = \mathcal{G}(j) \big\} = \mathsf{Supp}(\bm{\varTheta}_g^\star) ,
\end{equation*}
where $\mathcal{G}(i)$ is the connected component of the bipartite graph $\mathcal{G}^\star$ that $i$ belongs to.
\item[(b)] \textbf{(correct-grouping)} If we further assume that $\min_{(i,j)}|\widetilde{\boldsymbol{\varTheta}}^{\mathcal{T}}_{i,j}| \ge \delta^\mathcal{A}_{0}> 0$, and $ 8\sqrt{2}\xi^\mathcal{A} < \min \left\{\delta^\mathcal{A}_1,\delta^\mathcal{A}_2 \right\} \le \max \left\{\delta^\mathcal{A}_1,\delta^\mathcal{A}_2 \right\} < \beta^\mathcal{A}- 8\sqrt{2} \xi^\mathcal{A} $, we get that:
\begin{equation*}\label{eq:g2}
\mathsf{Supp}(\bm{\varTheta}_g^\mathcal{T}) = \mathsf{Supp}(\bm{\varTheta}_g^\star).
\end{equation*}
\end{enumerate}
\end{thm}
\begin{algorithm}[!h]
\caption{TFCL for $(\bm{Q})$}
\label{alg:opt_app}
\begin{algorithmic}
\STATE {\bfseries Input:} Dataset $\mathcal{S}$, $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$, $k$, $C (C> \varrho_\varTheta)$.
\STATE {\bfseries Output:} Solution $\bm{\varTheta}_c $, $\bm{\varTheta}_g $, $\bm{\varTheta}_p $, $\boldsymbol{U}$.
\STATE Initialize $\boldsymbol{\varTheta}_c^0$, $\boldsymbol{\varTheta}_g^0$, $\boldsymbol{\varTheta}_p^0$, $\boldsymbol{U}^0 \in \Gamma$, $t=1$.
\REPEAT
\STATE Calculate $\boldsymbol{\widetilde{\varTheta}}_c^t$, $\boldsymbol{\widetilde{\varTheta}}_g^t$, and $\boldsymbol{\widetilde{\varTheta}}_p^t$, respectively, from Eq.(\ref{eq:tc})-Eq.(\ref{eq:tp}).
\STATE Solve $\boldsymbol{\varTheta}^t_c$ from (\ref{p1}).
\STATE Solve $\boldsymbol{\varTheta}^t_p$ from (\ref{p3}).
\STATE Invoke Alg.\ref{alg:opt} with $\mathcal{S}$, $\alpha_1$, $\alpha_2$, $k$, $C$; return $\bm{\varTheta}_g ^{t}, \bm{U}^t$.
\STATE $t = t + 1$ .
\UNTIL{Convergence}
\STATE $\boldsymbol{\varTheta}_c = \boldsymbol{\varTheta}_c^{t-1}$, $\boldsymbol{\varTheta}_g = \boldsymbol{\varTheta}_g^{t-1}$,$\boldsymbol{\varTheta}_p = \boldsymbol{\varTheta}_p^{t-1}$, $\boldsymbol{U} = \boldsymbol{U}^{t-1}$.
\end{algorithmic}
\end{algorithm}
\section{Experiments}
In this section, we explore the performance of our algorithm on both synthetic and real data. In Section \ref{sec:set} and Section \ref{sec:comp}, we first elaborate on the settings and the competitors adopted in our experiments. Then, in Section \ref{sec:sim}, we investigate the performance of our proposed algorithm on a simulated dataset. Subsequently, in Section \ref{sec:shoes} and Section \ref{sec:sun}, we present experimental results showing how our method performs on real-world personalized annotation datasets.
\subsection{Experimental Settings}\label{sec:set}
We adopt the average of the user-wise AUC scores as our evaluation metric. For all the experiments, hyper-parameters are tuned on the training and validation sets (accounting for 85\% of the total instances), and the results on the test set are recorded. The experiments are repeated 15 times for each involved algorithm. For the competitors, given the predictions $[\hat{\bm{y}}^{(1)}, \cdots, \hat{\bm{y}}^{(T)}]$, we adopt the instance-wise squared loss:
\[
\sum_i \frac{1}{2} \cdot ||\bm{y}^{(i)} - \hat{\bm{y}}^{(i)} ||_2^2
\]
as the loss function. For our proposed algorithm, we adopt the squared AUC loss as the final loss function to improve the performance. Meanwhile, for fairness, we also record how our proposed method works with the instance-wise squared loss as an intermediate result.
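For reference, the evaluation metric can be computed in a few lines (a sketch using scikit-learn; variable names ours):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def user_average_auc(y_true_list, y_score_list):
    """Mean of the per-user AUC scores (a sketch)."""
    scores = [roc_auc_score(y, s)
              for y, s in zip(y_true_list, y_score_list)]
    return float(np.mean(scores))
\end{verbatim}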
\subsection{Competitors}\label{sec:comp}
Now we briefly introduce the competitors adopted in this paper.
\begin{itemize}
\item \textbf{LASSO} \cite{lasso} where each task learner is regularized with an $\ell_1$-norm constraint.
\item \textbf{rMTFL} \cite{rMTFL} assumes that the model $\boldsymbol{W}$ can be decomposed into two components: a consensus component and a group-sparse component.
\item \textbf{RAMUSA} \cite{RAMUSA} adopts a capped trace norm regularizer to minimize only the singular values smaller than an adaptively tuned threshold.
\item \textbf{CoCMTL} \cite{comtl} realizes the task-specific co-clustering via minimizing the truncated sum-of-squares of the singular values of the task matrix.
\item \textbf{NC-CMTL} \cite{tclog} explores shared information among different tasks with a non-convex low-rank spectral regularizer and a robust re-weighting scheme.
\item \textbf{VSTGMTL} \cite{vstg} implements simultaneous variable selection and learning with a low-rank decomposition.
\item \textbf{AMTL} \cite{amtl} provides asymmetric transfer between tasks with a sparse selection on the asymmetric transfer matrix.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{sim_res_box.png}
\caption{AUC ($\uparrow$) comparison on the Simulated Dataset }\label{fig:box}
\end{figure}
\begin{figure}[h]
\begin{center}
\subfigure[obj]{\label{fig:conv:obj}
\includegraphics[width=0.3\columnwidth]{iter_loss.png}
}
\subfigure[$||\bm{W}^t - \bm{W}^{t-1}||$]{\label{fig:conv:dw}
\includegraphics[width=0.3\columnwidth]{iter_W.png}
}
\subfigure[$||\bm{U}^t - \bm{U}^{t-1}||$]{\label{fig:conv:du}
\includegraphics[width=0.3\columnwidth]{iter_B.png}
}\
\end{center}
\caption{\label{fig:conv}Convergence curves for (a) loss function, (b) parameter $\bm{W}$ in terms of the difference between two successive iterations $||\bm{W}^t - \bm{W}^{t-1}||$, (c) $||\bm{U}^t - \bm{U}^{t-1}||$. }
\end{figure}
\begin{table}[htbp]
\centering
\setlength{\belowcaptionskip}{10pt}%
\caption{Ablation Study for simulated dataset}
\begin{tabular}{c|cc}
\multicolumn{1}{c|}{\multirow{2}[1]{*}{Algorithm}} & \multicolumn{1}{c}{TFCL} & \multicolumn{1}{c}{TFCL} \\
& \multicolumn{1}{c}{ours} & \multicolumn{1}{c}{w/o AUC loss} \\
\midrule
AUC & \textbf{93.66} & 92.46 \\
\end{tabular}%
\label{tab:abl}%
\end{table}%
\begin{figure}[ht]
\begin{centering}
\subfigure[iter 0]{
\includegraphics[width=0.3\columnwidth]{iter0.png}
}
\subfigure[iter 1]{
\includegraphics[width=0.3\columnwidth]{iter1.png}
}
\subfigure[iter 2]{
\includegraphics[width=0.3\columnwidth]{iter2.png}
}\
\subfigure[iter 3]{
\includegraphics[width=0.3\columnwidth]{iter3.png}
}
\subfigure[iter 4]{
\includegraphics[width=0.3\columnwidth]{iter4.png}
}
\subfigure[iter 5]{
\includegraphics[width=0.3\columnwidth]{iter5.png}
}
\end{centering}
\caption{\label{fig:embed}\textbf{Evolution of Spectral Embeddings}. We plot the corresponding embeddings $\bm{f}_1 \cdots, \bm{f}_{d+T}$ in the first five iterations in this group of figures. The results suggest that spectral embeddings rapidly form stable and clear clusters after the second iteration.}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[CoCMTL]{
\includegraphics[width=0.3\columnwidth]{W_comtl.png}
}
\subfigure[RAMUSA]{
\includegraphics[width=0.3\columnwidth]{W_ramusa.png}
}
\subfigure[rMTFL]{
\includegraphics[width=0.3\columnwidth]{W_rmtfl.png}
}
\subfigure[LASSO]{
\includegraphics[width=0.3\columnwidth]{W_lasso.png}
}
\subfigure[NC-CMTL]{
\includegraphics[width=0.3\columnwidth]{tclog.png}
}
\subfigure[AMTL]{
\includegraphics[width=0.3\columnwidth]{W_amtl.png}
}
\subfigure[VSTGMTL]{
\includegraphics[width=0.3\columnwidth]{w_vstg.png}
}
\subfigure[TFCL (Ours)]{
\includegraphics[width=0.3\columnwidth]{W_ours.png}
}
\subfigure[Ground Truth]{
\includegraphics[width=0.3\columnwidth]{W_GT.png}
}
\caption{\textbf{Structural Recovery on the Simulated Dataset.} The $x$-axis represents the users, and the $y$-axis represents the features. Compared with the competitors, TFCL recovers a clearer block-diagonal structure consistent with the ground truth. }
\label{fig:struc}
\end{figure}
\begin{figure*}[!h]
\begin{center}
\subfigure[Shoes BR]{\label{fig:perf_shoes_brown}
\includegraphics[width=0.3\textwidth]{brown.png}
}
\subfigure[Shoes CM]{\label{fig:perf_shoes_comf}
\includegraphics[width=0.3\textwidth]{comfort.png}
}
\subfigure[Shoes FA]{\label{fig:perf_shoes_fash}
\includegraphics[width=0.3\textwidth]{fashion.png}
}
\subfigure[Shoes FM]{\label{fig:perf_shoes_fm}
\includegraphics[width=0.3\textwidth]{formal.png}
}
\subfigure[Shoes OP]{\label{fig:perf_shoes_op}
\includegraphics[width=0.3\textwidth]{open.png}
}
\subfigure[Shoes OR]{\label{fig:perf_shoes_br}
\includegraphics[width=0.3\textwidth]{ornate.png}
}
\subfigure[Shoes PT]{\label{fig:perf_shoes_pt}
\includegraphics[width=0.3\textwidth]{pointy.png}
}
\subfigure[SUN CL]{\label{fig:perf_sun_cl}
\includegraphics[width=0.3\textwidth]{cluttered.png}
}
\subfigure[SUN MO]{\label{fig:perf_sun_mo}
\includegraphics[width=0.3\textwidth]{modern.png}
}
\subfigure[SUN OA]{\label{fig:perf_sun_oa}
\includegraphics[width=0.3\textwidth]{open_area.png}
}
\subfigure[SUN RU]{\label{fig:perf_sun_ru}
\includegraphics[width=0.3\textwidth]{rustic.png}
}
\subfigure[SUN SO]{\label{fig:perf_sun_sm}
\includegraphics[width=0.3\textwidth]{soothing.png}
}
\end{center}
\caption{ \label{fig:bar}\textbf{Overall AUC comparisons with Boxplots.} The scatters represent all the results coming from all the attributes of the Shoes and Sun datasets, each with 15 repetitions over the data splits. To show the statistical trends, we plot boxplots for the two datasets, respectively. }
\end{figure*}
\begin{figure}[h]
\centering
\subfigure[Shoes BR]{\label{fig:ab:br}
\includegraphics[width=0.3\columnwidth]{BR.png}
}
\subfigure[Shoes CM]{
\includegraphics[width=0.3\columnwidth]{CM.png}
}
\subfigure[Shoes FA]{
\includegraphics[width=0.3\columnwidth]{FA.png}
}
\subfigure[Shoes FM]{
\includegraphics[width=0.3\columnwidth]{FM.png}
}
\subfigure[Shoes OP]{
\includegraphics[width=0.3\columnwidth]{OP.png}
}
\subfigure[Shoes OR]{
\includegraphics[width=0.3\columnwidth]{OR.png}
}
\subfigure[Shoes PT]{{\label{fig:ab:PT}}
\includegraphics[width=0.3\columnwidth]{PT.png}
}
\subfigure[Sun CL]{\label{fig:ab:CL}
\includegraphics[width=0.3\columnwidth]{CL.png}
}
\subfigure[Sun MO]{
\includegraphics[width=0.3\columnwidth]{MO.png}
}
\subfigure[Sun OA]{
\includegraphics[width=0.3\columnwidth]{OA.png}
}
\subfigure[Sun RU]{
\includegraphics[width=0.3\columnwidth]{RU.png}
}
\subfigure[Sun SO]{\label{fig:ab:SO}
\includegraphics[width=0.3\columnwidth]{SO.png}
}
\caption{\textbf{Ablation Results (\uppercase\expandafter{\romannumeral 1})} The $y$-axis represents the average AUC score on the test set, and the $x$-axis represents different algorithms: \textbf{Org} shows the performance of our original TFCL algorithm; \textbf{w/o\_AUC} shows the performance of our algorithm when the AUC loss is replaced with the squared loss; \textbf{w/o\_G} shows the performance when our proposed co-grouping factor is removed from the model. }
\label{fig:ab1}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[Shoes BR]{\label{fig:ab2:br}
\includegraphics[width=0.3\columnwidth]{BR1.png}
}
\subfigure[Shoes CM]{
\includegraphics[width=0.3\columnwidth]{CM1.png}
}
\subfigure[Shoes FA]{
\includegraphics[width=0.3\columnwidth]{FA1.png}
}
\subfigure[Shoes FM]{
\includegraphics[width=0.3\columnwidth]{FM1.png}
}
\subfigure[Shoes OP]{
\includegraphics[width=0.3\columnwidth]{OP1.png}
}
\subfigure[Shoes OR]{
\includegraphics[width=0.3\columnwidth]{OR1.png}
}
\subfigure[Shoes PT]{{\label{fig:ab2:PT}}
\includegraphics[width=0.3\columnwidth]{PT1.png}
}
\subfigure[Sun CL]{\label{fig:ab2:CL}
\includegraphics[width=0.3\columnwidth]{CL1.png}
}
\subfigure[Sun MO]{
\includegraphics[width=0.3\columnwidth]{MO1.png}
}
\subfigure[Sun OA]{
\includegraphics[width=0.3\columnwidth]{OA1.png}
}
\subfigure[Sun RU]{
\includegraphics[width=0.3\columnwidth]{RU1.png}
}
\subfigure[Sun SO]{\label{fig:ab2:SO}
\includegraphics[width=0.3\columnwidth]{SO1.png}
}
\caption{{\textbf{Ablation Results (\uppercase\expandafter{\romannumeral 2})} The $y$-axis represents the average AUC score on the test set, and the $x$-axis represents different algorithms: TFCL(CoC) shows the performance of the TFCL algorithm with our co-grouping regularizer replaced by the corresponding regularizer in CoCMTL; TFCL(ours) shows the performance of our original algorithm.} }
\label{fig:ab2}
\end{figure}
\begin{figure}[!h]
\centering
\subfigure[Shoes Brown]{\label{fig:dist_shoes_brown}
\includegraphics[width=0.45\columnwidth]{brown_dist.png}
}
\subfigure[SUN Open Area]{\label{fig:dist_sun_oa}
\includegraphics[width=0.45\columnwidth]{open_area_dist.png}
}
\caption{\textbf{Fine-grained comparison based on User AUC Score Distributions}. We plot the user-specific performance distribution produced by all the involved algorithms for (a) the Brown attribute of the Shoes dataset and (b) the Open Area attribute of the Sun dataset, to investigate whether TFCL benefits the performance distribution over users. The results show that TFCL tends to produce a more compact performance distribution. }
\end{figure}
\subsection{Simulated Dataset}\label{sec:sim}
To test the effectiveness of the basic TFCL framework, we generate a simple simulated annotation dataset with 100 simulated users, where the features and scores are produced according to linear models with a block-diagonal task matrix. For each user, 200 samples are generated as $\boldsymbol{X}^{(i)} \in \mathbb{R}^{200\times80}$ with $\boldsymbol{x}^{(i)}_k \sim \mathcal{N}(0,\boldsymbol{I}_{80})$. We generate a \textit{block-diagonal} task matrix $\boldsymbol{W}$ as follows. Specifically, we create 5 blocks for $\boldsymbol{W}$, in the sense that $\bm{W} = \bigoplus_{i=1}^5\boldsymbol{W}_i$, where $\boldsymbol{W}_1 \in \mathbb{R}^{20 \times 20}$, $\boldsymbol{W}_2 \in \mathbb{R}^{20 \times 20}$, $\boldsymbol{W}_3 \in \mathbb{R}^{10 \times 20}$, $\boldsymbol{W}_4 \in \mathbb{R}^{20 \times 20}$, $\boldsymbol{W}_5 \in \mathbb{R}^{10 \times 20}$. For each block, the elements are generated from the distribution $\mathcal{N}(C_i, 2.5^2)$ (via element-wise sampling), where $C_i \sim \mathcal{U}(0,K_i)$ is the centroid of the corresponding cluster, with $K_1= 5, K_2 =5, K_3 =10, K_4 = 15, K_5=20$. For each user, the scores are generated as $\boldsymbol{s}^{(i)}= \boldsymbol{X}^{(i)}\boldsymbol{W}^{(i)}+\boldsymbol{\epsilon}^{(i)}$, where $\boldsymbol{\epsilon}^{(i)} \in \mathbb{R}^{200 \times 1}$ and $\boldsymbol{\epsilon}^{(i)} \sim \mathcal{N}(0,0.1^2\boldsymbol{I}_{200})$. To generate the labels $\boldsymbol{Y}^{(i)}$ for each $i$, the top 50 instances with the highest scores are labeled as 1, while the remaining instances are labeled as -1.\\
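A compact numpy sketch of this generation protocol (the random seed and variable names are ours) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sizes_feat, sizes_task = [20, 20, 10, 20, 10], [20, 20, 20, 20, 20]
K = [5, 5, 10, 15, 20]

# Block-diagonal task matrix W (80 features x 100 users).
W = np.zeros((sum(sizes_feat), sum(sizes_task)))
r = c = 0
for df, dt, Ki in zip(sizes_feat, sizes_task, K):
    center = rng.uniform(0, Ki)              # cluster centroid C_i
    W[r:r + df, c:c + dt] = rng.normal(center, 2.5, (df, dt))
    r, c = r + df, c + dt

# Per-user samples, noisy scores, and top-50 positive labels.
X = rng.normal(size=(100, 200, 80))          # user x sample x feature
Y = np.empty((100, 200))
for i in range(100):
    s = X[i] @ W[:, i] + rng.normal(0, 0.1, 200)
    ranks = np.argsort(np.argsort(-s))       # 0 = highest score
    Y[i] = np.where(ranks < 50, 1, -1)
\end{verbatim}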
\noindent{\textbf{Performance Comparison.}} The performance of all the involved algorithms on the simulated dataset is recorded in Fig.\ref{fig:box}. The corresponding results show that our proposed algorithm consistently outperforms other competitors.
In particular, over the average results for 15 repetitions, our method achieves an AUC score of 93.66, while the second-best method obtains a score of 91.07. This leads to a 2.59 AUC improvement with respect to the second-best algorithm.\\
\noindent \textbf{Ablation Study.} Next, we carry out an ablation study to see how the AUC loss and the grouping factor each contribute. Specifically, the corresponding results are shown in Tab.\ref{tab:abl}, where we compare our original performance with a baseline in which the AUC loss is replaced with the instance-wise weighted squared loss. The results show that: (1) the original method outperforms this baseline, which suggests that adopting AUC optimization is effective; (2) the baseline outperforms the other competitors, which suggests that the grouping factor is more effective than the competitors' mechanisms on the simulated dataset.\\
\noindent\textbf{Convergence analysis.} As shown in Fig.\ref{fig:conv:obj}-Fig.\ref{fig:conv:du}, the proposed method enjoys good convergence properties for both the objective function and the parameter sequence, which coincides with the theoretical results. \\
\noindent \textbf{Visualization of the Spectral Embeddings.} To show how the spectral embeddings evolve across the algorithm iterations, we plot the corresponding embeddings for the first five iterations in Fig.\ref{fig:embed}. The results suggest that the spectral embeddings rapidly form stable and clear clusters after the second iteration. This fact validates our theoretical analysis concerning the grouping power of spectral embeddings. From Fig.\ref{fig:conv} and Fig.\ref{fig:embed}, one can find a close connection between the iteration curve and the evolution of the embedding space. To see this, recall the details of Alg.~\ref{alg:opt}: the spectral embeddings are optimized along with $\bm{V}$ in one of our subproblems, and thus contribute to the reduction of the loss function. Practically, this is validated by Fig.\ref{fig:conv} and Fig.\ref{fig:embed}. In Fig.\ref{fig:conv}, we see that the loss decreases quickly and converges after 5 iterations. In Fig.\ref{fig:embed}, one can see that the embeddings also converge to their corresponding clusters within 5 iterations. \\
\textbf{Structure Recovery.} Besides generalization performance, we can also empirically verify the ability of our algorithm to recover the expected structure of the parameters $\boldsymbol{W}$. With the same simulated dataset, we compare the parameters $\boldsymbol{W}$ learned by the involved algorithms with the Ground Truth in Fig.\ref{fig:struc}. The results show that our proposed method recovers a much clearer structure than the competitors. Meanwhile, we see that all the competitors can roughly recover a block-diagonal outline. However, different methods suffer from different degrees of off-diagonal noise. This can be understood from an algebraic analysis. For linear models, the predictive function is defined as $\hat{\bm{y}}(\bm{X}) = \bm{X}\bm{W}$. Moreover, we note that the true scores are generated by the linear function $\bm{X}\bm{W}^\star$. If $\bm{X}$ is not of full rank, we have $\bm{X}\bm{W} = \bm{X}\bm{W}^\star$ whenever $\bm{W} = \bm{W}^\star + \bm{W}'$ with $\bm{W}' \in \mathsf{Null}(\bm{X})$. Generally, $\bm{W}'$ has off-diagonal elements and naturally leads to the noise observed in Fig.\ref{fig:struc}. Obviously, without a block-diagonal regularizer, it is hard to avoid $\bm{W}'$ even at a globally optimal solution. Moreover, since $\bm{W}'$ is only related to the observed data $\bm{X}$, there is a risk of over-fitting, especially in our case where the off-diagonal noise is not compatible with the simulated dataset. By eliminating the off-diagonal noise, our model shows the significant performance improvement seen in Fig.\ref{fig:box}.
\subsection{Shoes Dataset}\label{sec:shoes}
\textbf{Dataset Description.} The Shoes Dataset \cite{user2} is a popular attribute prediction benchmark, which consists of 14,658 online shopping shoe images with 7 attributes (BR: brown, CM: comfortable, FA: fashionable, FM: formal, OP: open, OR: ornate, PT: pointy). In this dataset, annotators with various backgrounds are invited to judge whether a specific attribute is present in an image. Specifically, each user is randomly assigned 50 images, and at least 190 users take part in the process for each attribute, which results in a total volume of 90,000 annotations.\\
\noindent \textbf{Pre-processing.} For input features, we adopt the GIST and color histogram representations provided in \cite{shoes}. Then we perform PCA to reduce the redundancy of these features before training. Meanwhile, we notice that users who provide almost exclusively one class of labels may induce large biases. To eliminate such effects, we manually remove users who give fewer than 8 annotations for the minority class.\\
\noindent \textbf{Performance comparison.}
The average performance over 15 repetitions on the Shoes dataset is shown on the left side of Fig.\ref{fig:bar}, where the scatters show the 15 observations over different dataset splits and the bar plots show the average performance over the 15 repetitions. We can make the following observations: 1) Our proposed algorithm consistently and significantly outperforms all the competitors for all the attributes on the Shoes dataset. It is worth mentioning that our method outperforms the second-best method by AUC scores of 6.22, 1.88, 2.34, 2.38, 3.66, 3.27, 2.85 in terms of \emph{BR}, \emph{CM}, \emph{FA}, \emph{FM}, \emph{OP}, \emph{OR}, \emph{PT}, respectively. 2) The models using low-rank constraints achieve much higher scores than those using sparsity constraints (LASSO and rMTFL). This is because there exist obvious correlations among users' annotations, and the low-rank assumption models these task (user) correlations much better. 3) AMTL outperforms the other low-rank constrained methods on all the datasets, as it explicitly models and reduces the influence of negative transfer via asymmetric learning. 4) The superiority of our method over the other feature-task correlation learning approaches (CoCMTL and VSTGMTL) justifies the TFCL framework. 5) Our proposed method outperforms AMTL on most attributes of both datasets. One possible reason is that our framework reasonably models the user annotation behaviors and is thus more suitable for the personalized attribute prediction problem. Moreover, our method makes an extra effort to prevent negative transfer across features and tasks.\\
\noindent \textbf{Ablation Study.} (\uppercase\expandafter{\romannumeral 1}) Our proposed algorithm has two major components: one is the grouping factor with its regularizer, and the other is the surrogate AUC loss. We now examine how these two components contribute to the improvements. Specifically, we remove these two parts respectively from our original model and show the corresponding results in Fig.\ref{fig:ab:br}-Fig.\ref{fig:ab:PT}. In these figures, \textbf{Org} shows the performance of our original TFCL algorithm, \textbf{w/o\_AUC} shows the performance when the AUC loss is replaced with the instance-wise squared loss, and \textbf{w/o\_G} shows the performance when our proposed co-grouping factor is removed from the model. From the experimental results, we have the following observations: 1) Our original algorithm outperforms both ablated variants, which shows that the joint effect of the two components is better than either single effect. 2) In most cases, removing the grouping factor causes a more significant performance reduction than replacing the AUC loss. This implies that the grouping effect tends to have a stronger impact on performance. (\uppercase\expandafter{\romannumeral 2}) Since the CoCMTL algorithm also develops a co-grouping regularizer, we further record the performance when our regularizer is replaced with the corresponding term in CoCMTL. This is shown in Fig.\ref{fig:ab2:br}-Fig.\ref{fig:ab2:PT}. The results suggest that our proposed regularizer outperforms the co-grouping regularizer in CoCMTL.\\
\noindent \textbf{Fine-grained comparison.} Having shown the effectiveness of our proposed method in the coarse-grained comparison, we then visualize the personalized predictions to draw fine-grained conclusions at the user level. Taking the \emph{brown} attribute on the Shoes dataset as an example, we visualize the test AUC score distributions over users for all the methods in Fig.\ref{fig:dist_shoes_brown}. As shown in this figure, our model achieves a higher mean and a lower performance variance than the competitors. On the contrary, the traditional approaches suffer from an obvious long-tail problem. The reason for such a long-tail effect might be two-fold: 1) the traditional methods are more sensitive to hard tasks, and 2) the negative transfer issue is not sufficiently addressed. This indicates that our method indeed promotes collaborative learning to improve the performance on hard tasks.\\
\subsection{Sun Dataset}\label{sec:sun}
\noindent \textbf{Dataset Description.} The Sun Dataset \cite{user2} contains 14,340 scene images from the SUN Attribute Database \cite{sundata}, with personalized annotations over 5 attributes (CL: Cluttered, MO: Modern, OA: Open Area, RU: Rustic, SO: Soothing). With a similar annotation procedure, 64,900 annotations are obtained in this dataset.\\
\noindent \textbf{Pre-processing.} For input features, we deploy the 2048-dim feature vectors extracted by the Inception-V3 \cite{inception} network for the Sun data. The reason for the different feature extraction strategies is that the images in the Shoes dataset are photographed on a white background, while images in the Sun dataset usually have much more complicated backgrounds. The rest of the pre-processing follows that of the Shoes dataset.\\
\noindent \textbf{Performance comparison.}
The average performance over 15 repetitions on the Sun dataset is shown on the right side of Fig.\ref{fig:bar}, where the scatters show the 15 observations over different dataset splits and the bar plots show the average performance over the 15 repetitions. Similar to the Shoes dataset, we make the following observations: 1) Our proposed algorithm consistently and significantly outperforms all the competitors for all the attributes. Our method outperforms the second-best method by AUC scores of 1.95, 2.95, 0.44, 2.37, 1.29 in terms of \emph{CL}, \emph{MO}, \emph{OA}, \emph{RU}, and \emph{SO}, respectively. 2) Moreover, we have observations similar to 2)-5) for the Shoes dataset.\\
\noindent \textbf{Ablation Study.} Similar to the Shoes dataset, we show the corresponding ablation results for the Sun dataset in Fig.\ref{fig:ab:CL}-Fig.\ref{fig:ab:SO} and Fig.\ref{fig:ab2:CL}-Fig.\ref{fig:ab2:SO}. From the experimental results, we have the following observations: 1) Our original algorithm outperforms both ablated variants for all attributes except Soothing. 2) Removing the grouping factor causes a more significant performance reduction than replacing the squared AUC loss. 3) Our proposed regularizer outperforms the co-grouping regularizer in CoCMTL.
\\ \noindent \textbf{Fine-grained comparison.} We then visualize the user-specific performance distribution for the Sun dataset. Taking the \emph{Open Area} attribute as an example, we visualize the test AUC score distributions over users for all the methods in Fig.\ref{fig:dist_sun_oa}. As shown in this figure, our model achieves a more compact distribution than the competitors. This again shows that our method alleviates the negative transfer issue.\\
\section{Conclusion}
In this paper, we develop a novel multi-task learning method called TFCL, which prevents negative transfer simultaneously at the feature and task levels via a co-grouping regularization. An optimization method is then proposed to solve for the model parameters, which iteratively solves convex subproblems. Moreover, we provide a novel closed-form solution for one of the subproblems, which paves the way for our proof of the global convergence property. Meanwhile, the solution produced by the optimization method reveals a close connection between our method and the optimal transport problem, which brings new insight into how negative transfer can be prevented across features and tasks. We further extend the TFCL method to the problem of personalized attribute prediction via a hierarchical model decomposition scheme. To validate the proposed methods, we perform systematic experiments on a simulated dataset and two real-world datasets. Results on the simulated dataset show that TFCL can indeed recover the correct co-grouping structure with good performance, and results on the real-world datasets further verify the effectiveness of our proposed model on the problem of personalized attribute prediction.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec1}
The known perturbation theory \cite{seq,Diracpqm} is an extremely
important tool for describing real quantum systems, as it turns out
to be very difficult to find exact solutions to the Schr\"odinger
equation for Hamiltonians of even moderate complexity. Recently, a
way to overcome this difficulty has appeared, because we obtained
the exact solution of the Schr\"odinger equation \cite{seq} in
general quantum systems independent of time \cite{My1}. However,
this does not mean that the perturbation theory is unnecessary,
because our exact solution is still an infinite power series of the
perturbation. Our solution is called ``exact'' in the sense that it
includes all order approximations of the perturbation. In practice,
if we do not intend to apply our exact solution to investigations of
the formal theory of quantum mechanics, we often need to cut our
exact solution series off at some given order approximation in the
calculation of concrete problems. Perhaps one argues that our exact
solution then goes back to the usual perturbation theory, and is, at
most, an explicit form that can bring an efficiency improvement.
Nevertheless, this is not the case. Such a view, in fact, ignores
the significance of the general term in an infinite series, and
forgets the technologies for dealing with an infinite series in
present mathematics and physics. From our point of view, since the
general term is known, we can systematically and reasonably absorb
the partial contributions from some high order even all order
approximations into the lower order approximations, just as one does
in quantum field theory via summation over a series of Feynman
diagrams of different orders but with similar features. In this
paper, based on such a method we develop our ``dynamical
rearrangement and summation" idea, and then propose an improved
scheme of perturbation theory via introducing several useful skills
and methods.
It is very interesting that we find a flaw in the usual perturbation
theory, that is, the perturbing parameter is introduced too early,
so that the contributions from the high order even all order
approximations of the diagonal and off-diagonal elements of the
perturbing Hamiltonian matrix are, respectively, inappropriately
dropped and prematurely cut off. For some systems, the influence of
this flaw on the calculation precision cannot be neglected as the
evolution time increases. This motivates us to set our starting
point to introduce the perturbing parameter as late as possible in
order to guarantee the generality and precision. This is natural
from a mathematical point of view if we think of the perturbing
parameter in a general perturbation theory as a formal multiplier.
Based on this starting point, we propose the Hamiltonian redivision
skill and further methods so as to overcome the above flaw in the
usual perturbation theory, viz. the Hamiltonian redivision allows
the contributions from all order approximations of the diagonal
elements of the perturbing Hamiltonian matrix to be absorbed in our
improved form of the perturbed solution. Hence, this skill advances
the calculation precision in theory, extends the application range
of the perturbation theory and can remove degeneracies in some
systems.
Since our exact solution series has apparent divergences, we provide
the methods of perturbing Hamiltonian matrix product decomposition
in order to separate the contraction terms with apparent divergences
from the anti-contraction terms without apparent divergences. Here,
``apparent" refers to an untrue thing, that is, the apparent
divergences are not real singularities and they can be eliminated by
mathematical and/or physical methods, while the ``perturbing
Hamiltonian matrix" refers to the representation matrix of the
perturbing Hamiltonian in the unperturbed Hamiltonian
representation. Then, by a limit process we can eliminate these
apparent divergences in the contraction terms. Furthermore, we apply
the ``dynamical rearrangement and summation" idea for the sake of
absorbing the partial contributions from the high order even all
order approximations in our perturbed solution. In terms of these
useful ideas, skills and methods we build an improved scheme of
perturbation theory. Without any doubt, they are given definitely
dependent on our exact solution. In fact, our exact solution
inherits the distinguished feature of a $c$-number function form
just like the Feynman \cite{Feynman} path integral expression, and
keeps the advantage of the Dyson series \cite{Dyson}, that is, a
power series of the perturbation. At the same time, our exact
solution is so explicit that when applying it to a concrete quantum
system, all we need to do is only the calculation of the perturbing
Hamiltonian matrix and the limits of elementary functions.
As is well known, a key idea of the existing perturbation theory for
studying the time evolution of a system is to split the Hamiltonian
of the system into two parts, that is \begin{equation} H=H_0+H_1,\end{equation} where the
eigenvalue problem of the so-called unperturbed Hamiltonian $H_0$ is
solvable, and the so-called perturbing Hamiltonian $H_1$ is the rest
of the Hamiltonian. In other words, this splitting is chosen in such
a manner that the solutions of $H_0$ are known as \begin{equation}\label{h0eeq}
H_0\ket{\Phi^\gamma}=E_\gamma \ket{\Phi^\gamma}, \end{equation} where
$\ket{\Phi^\gamma}$ is the eigenvector of $H_0$ and $E_\gamma$ is
the corresponding eigenvalue. All the $\ket{\Phi^\gamma}$, in which
$\gamma$ runs over all possible values, form a representation of
the unperturbed Hamiltonian. It must be emphasized that the
principle of the Hamiltonian split is not just the best solvability
mentioned above in more general cases. If there are degeneracies,
the Hamiltonian split is also restricted by the condition that the
degeneracies can be completely removed via the usual diagonalization
procedure of the degenerate subspaces and the Hamiltonian redivision
proposed in this paper; or specially, if the remained degeneracies
are allowed, it requires that the off-diagonal elements of the
perturbing Hamiltonian matrix between any two degenerate levels
always vanish, so as to let our improved scheme of perturbation
theory work well. As an example, this has been discussed in our
serial study \cite{My3}. In addition, if the cut-off approximation
of the perturbation is necessary, it requires that, for our improved
scheme of perturbation theory, the off-diagonal elements of the
$H_1$ matrix are small enough compared with the diagonal elements of
the $H=H_0+H_1$ matrix in the unperturbed representation.
Nevertheless, there are some known shortcomings in the existing
perturbation theory: for example, when $H_1$ is not so small
compared with $H_0$ that the high order approximations should be
considered, and/or when the partial contributions from the higher
order approximations become relatively important to the studied
problems, and/or when the evolution time is long enough, the usual
perturbation theory might be difficult to calculate to sufficient
precision in an effective way, or even not be practically feasible,
since the lower order approximation might break the physical
symmetries and/or constraints. In order to overcome these
difficulties and problems, we recently studied and obtained the
exact solution of general quantum systems by explicitly expressing
the time evolution operator as a $c$-number function and a power
series of the perturbation including all order approximations
\cite{My1}. In this paper, our purpose is to build an improved
scheme of perturbation theory based on our exact solution \cite{My1}
so that physical problems are calculated more accurately and
effectively. For simplicity, we focus on the cases of Schr\"odinger
dynamics \cite{seq}. It is direct to extend our improved scheme to
the cases of von Neumann dynamics \cite{vonneumann}.
As is well known, quantum dynamics and its perturbation theory have
been sufficiently studied and have many successful applications.
Many famous physicists created their elegant formalisms and obtained
marvelous results. Any attempt to improve part of this content, or
to add some new content as well as some new methods, must be very
difficult to realize. However, our endeavors have brought returns,
for example, our exact solution \cite{My1}, perturbation theory and
open system dynamics \cite{My3} in general quantum systems
independent of time.
In this paper, we start by proposing our ideas, skills and methods.
We explicitly obtain the improved forms of the zeroth, first, second
and third order approximations of the perturbed solution absorbing
partial contributions from the high order even all order
approximations, find the improved transition probability, specially,
the revised Fermi's golden rule, and provide an operational scheme
to calculate the perturbed energy and perturbed state. Furthermore,
by studying a concrete example of a two state system, we illustrate
clearly that our solution is more efficient and more accurate than
the usual perturbative method. In short, our exact solution and
perturbative scheme are formally explicit, actually calculable,
operationally efficient, and conclusively more accurate (to the
needed precision).
This paper is organized as follows: in Sec. \ref{sec2} we find a
flaw of the usual perturbation theory and introduce the Hamiltonian
redivision to overcome it. Then, we propose the skill of the
perturbing Hamiltonian matrix product decomposition in order to
separate the contraction terms with apparent divergences from the
anti-contraction terms without apparent divergences. By a limit
process we can eliminate these apparent divergences. More
importantly, we use the ``dynamical rearrangement and summation"
idea so that the partial contributions from the high order even all
order approximations are absorbed in our perturbed solution and the
above flaw is further overcome; in Sec. \ref{sec3} we obtain the
improved forms of the zeroth, first, second and third order
perturbed solutions of dynamics absorbing partial contributions from
the high order even all order approximations; in Sec. \ref{sec4} we
deduce the improved transition probability, specially, the revised
Fermi's golden rule. In Sec. \ref{sec5} we provide a scheme to
calculate the perturbed energy and the perturbed state; in Sec.
\ref{sec6} we study an example of a two state system in order to
concretely illustrate that our solution is more effective and more
accurate than the usual method; in Sec. \ref{sec7} we summarize our
conclusions and give some discussions. Finally, we write an appendix
as well as a supplement where some expressions are calculated in
order to derive the improved forms of the perturbed solutions.
\section{Skills and methods in the improved scheme of perturbation
theory}\label{sec2}
In our recent work \cite{My1}, by splitting a Hamiltonian into two
parts, using the solvability of the eigenvalue problem of one part
of the Hamiltonian, proving a useful identity and deducing an
expansion formula for the power of an operator binomial, we obtained
an explicit and general form of the time evolution operator in the
representation of the solvable part (unperturbed part) of the
Hamiltonian. Then we found an exact solution of the
Schr\"{o}dinger equation for general quantum systems independent of
time \begin{eqnarray} \label{ouress} \ket{\Psi(t)}&=&\sum_{l=0}^\infty
\mathcal{A}_l(t)\ket{\Psi(0)}=\sum_{l=0}^\infty\sum_{\gamma,\gamma^\prime}
A_l^{\gamma\gamma^\prime}(t)
\left[\diracsp{\Phi^{\gamma^\prime}}{\Psi(0)}\right]\ket{\Phi^\gamma},
\end{eqnarray} where \begin{eqnarray} \mathcal{A}_l(t)&=&\sum_{\gamma,\gamma^\prime}
A_l^{\gamma\gamma^\prime}(t)\ket{{\Phi}^{\gamma}}\bra{{\Phi}^{\gamma^\prime}},\\
A_0^{\gamma\gamma^\prime}(t)&=&{\rm e}^{-{\rm i}
E_{\gamma}
t}\delta_{\gamma\gamma^\prime}, \\
\label{Aldefinition}
A_l^{\gamma\gamma^\prime}(t)&=&\sum_{\gamma_1,\cdots,\gamma_{l+1}}\left[
\sum_{i=1}^{l+1}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,l])}\right]\left[
\prod_{j=1}^{l}H_1^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma_1\gamma}\delta_{\gamma_{l+1}\gamma^\prime},\end{eqnarray} where
all
$H_1^{\gamma_j\gamma_{j+1}}=\bra{\Phi^{\gamma_j}}H_1\ket{\Phi^{\gamma_{j+1}}}$
form the so-called ``perturbing Hamiltonian matrix", that is, the
representation matrix of the perturbing Hamiltonian in the
unperturbed Hamiltonian representation, while \begin{eqnarray}
d_1(E[\gamma,l])&=&\prod_{i=1}^{l}\left(E_{\gamma_{1}}
-E_{\gamma_{i+1}}\right),\\
d_i(E[\gamma,l])&=&
\prod_{j=1}^{i-1}\left(E_{\gamma_{j}}
-E_{\gamma_{i}}\right)\!\!\!\prod_{k=i+1}^{l+1}\left(E_{\gamma_{i}}
-E_{\gamma_{k}}\right),\\[-3pt] d_{l+1}(E[\gamma,l])
&=&\prod_{i=1}^{l}\left(E_{\gamma_{i}}-E_{\gamma_{l+1}}\right),\end{eqnarray}
here $2\leq i \leq l$.
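To make the structure of this general term concrete, the following
is a minimal numerical sketch (in Python; the helper name
\texttt{A\_l} is ours and the sketch is for illustration only) that
evaluates $A_l^{\gamma\gamma^\prime}(t)$ by brute-force summation
over index paths, assuming the energies along every index path are
pairwise distinct so that no limit process is needed:
\begin{verbatim}
import itertools
import numpy as np

def A_l(E, H1, l, t):
    """Brute-force evaluation of A_l^{gamma gamma'}(t).
    E : 1-d numpy array of unperturbed energies E_gamma
    H1: perturbing Hamiltonian matrix in the H_0 representation
    Assumes distinct energies along every index path, so all
    denominators d_i(E[gamma, l]) are nonzero."""
    n = len(E)
    A = np.zeros((n, n), dtype=complex)
    if l == 0:
        return np.diag(np.exp(-1j * E * t))
    for path in itertools.product(range(n), repeat=l + 1):
        Ep = E[list(path)]
        # product of matrix elements H1^{gamma_j gamma_{j+1}}
        g = np.prod([H1[path[j], path[j + 1]] for j in range(l)])
        s = 0j
        for i in range(l + 1):  # i = 1, ..., l+1 in the text
            d = np.prod([Ep[j] - Ep[i] for j in range(i)]) \
                * np.prod([Ep[i] - Ep[k] for k in range(i + 1, l + 1)])
            s += (-1) ** i * np.exp(-1j * Ep[i] * t) / d
        A[path[0], path[-1]] += s * g
    return A
\end{verbatim}
Summing such terms over $l$ up to a cutoff reproduces the truncated
series (\ref{ouress}); degenerate index paths require the limit
process discussed below.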
It is clear that there are apparent divergences in the above
solution. For example, \begin{eqnarray}
A_1^{\gamma\gamma^\prime}(t)&=&\left[\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{E_{\gamma}-E_{\gamma^\prime}}-\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right]H_1^{\gamma\gamma^\prime}.\end{eqnarray}
When $E_{\gamma}=E_{\gamma^\prime}$ (which can appear in the
summation or in degenerate cases), it is formally of the type
$\infty-\infty$, while its limit is $(-{\rm i}
H_1^{\gamma\gamma^\prime} t)\,{\rm e}^{-{\rm i} E_\gamma t}$. As pointed out
in our paper \cite{My1}, we need to understand
$A_l^{\gamma\gamma^\prime}(t)$ in the sense of a limit. Moreover, in
practice, we should show how to calculate such limits in order to
eliminate the apparent divergences.
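As a quick illustration, this limit can be checked with a computer
algebra system; the following sympy sketch (for illustration only)
reproduces it symbolically:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', real=True)
E, Ep = sp.symbols('E E_prime', real=True)

# the bracket of A_1 before multiplying by H_1^{gamma gamma'}
bracket = (sp.exp(-sp.I * E * t) - sp.exp(-sp.I * Ep * t)) / (E - Ep)

# the limit E' -> E removes the apparent divergence
print(sp.limit(bracket, Ep, E))   # -> -I*t*exp(-I*E*t)
\end{verbatim}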
Now, the key problems are how and when to introduce the cut-off
approximation in order to obtain the perturbed solution. For
studying and solving them, we first need to propose some skills and
methods in this section. These skills and methods profit from the
fact that the general term is clearly known and explicitly expressed
in our exact solution. By using them we can derive the improved
forms of the perturbed solution absorbing the partial contributions
from the high order even all order approximations of the
perturbation. It will be seen that all the steps are well-regulated
and the only calculational technology needed is finding the limits
of elementary functions. In other words, our exact solution and
perturbation theory are easy to calculate and operate, and they have
better precision and higher efficiency. Frankly speaking, before we
knew our exact solution, we were puzzled by too many irregular terms
and a troublesome dependence on previous calculation steps.
Moreover, one is often anxious about the precision of the results in
such calculations because those terms proportional to $t^a {\rm e}^{-{\rm i}
E_{\gamma_i} t}$ $(a=1,2,\cdots)$ in the high order approximations
might not be ignorable as time increases. Considering the
contributions of these terms can obviously improve the precision.
However, since the known perturbation theory does not give the
general term, the task of reasonably absorbing the high order
approximations is impossible there.
Our exact solution gives the explicit form of any order
approximation, that is, a general term of an arbitrary order
perturbed solution, and its form is simply the summation of a power
series of the perturbing Hamiltonian. Enlightened by this general
term of the arbitrary order perturbed solution, we use two skills
and the ``dynamical rearrangement and summation" method to build an
improved scheme of perturbation theory, which are respectively
expressed in the following two subsections.
\subsection{Hamiltonian Redivision}
The first skill starts from the decomposition of the perturbing
Hamiltonian matrix, that is, the matrix of $H_1$ in the
representation of $H_0$, into its diagonal part and off-diagonal
part: \begin{equation}\label{H1d2}
H_1^{\gamma_j\gamma_{j+1}}=h_1^{\gamma_j}\delta_{\gamma_j\gamma_{j+1}}
+g_1^{\gamma_j\gamma_{j+1}},\end{equation} so as to distinguish them, because
the diagonal and off-diagonal elements can be dealt with in
different ways. In addition, it allows the concrete expression of a
given order approximation to be easily calculated. Note that
$h_1^{\gamma_j}$ has been chosen as the diagonal elements and
$g_1^{\gamma_j\gamma_{j+1}}$ has been set as the off-diagonal
elements: \begin{equation} g_1^{\gamma_j\gamma_{j+1}}=g_1^{\gamma_j\gamma_{j+1}}
(1-\delta_{\gamma_j\gamma_{j+1}}).\end{equation}
As examples, for the first order approximation, it is easy to
calculate that \begin{equation}
\label{A1h}A_1^{\gamma\gamma^\prime}(h)=\sum_{\gamma_1,\gamma_{2}}\left[
\sum_{i=1}^{2}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,1])}\right]
\left(h_1^{\gamma_1}\delta_{\gamma_1\gamma_{2}}\right)
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_{2}}=\frac{(-{\rm i}
h_1^{\gamma} t)}{1!}{\rm e}^{-{\rm i} E_{\gamma}
t}\delta_{\gamma\gamma^\prime},\end{equation} \begin{eqnarray}\label{A1g}
A_1^{\gamma\gamma^\prime}(g)&=&\sum_{\gamma_1,\gamma_{2}}\left[
\sum_{i=1}^{2}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,1])}\right]
g_1^{\gamma_1\gamma_2}(1-\delta_{\gamma_1\gamma_2})
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_{2}}
=\left[\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{E_{\gamma}-E_{\gamma^\prime}}-\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right]g_1^{\gamma\gamma^\prime}.\end{eqnarray}
Note that here and hereafter we use the symbol
$A_i^{\gamma\gamma^\prime}$ to denote the contribution from the
$i$th order approximation, which is defined by (\ref{Aldefinition}),
while its argument indicates the product form of the perturbing
Hamiltonian matrix elements. However, for the second order
approximation, since \begin{eqnarray}
\label{H12deq}\prod_{j=1}^{2}H_1^{\gamma_j\gamma_{j+1}}&=&
\left(h_1^{\gamma_1}\right)^2\delta_{\gamma_1\gamma_{2}}\delta_{\gamma_2\gamma_{3}}
+h_1^{\gamma_1}g_1^{\gamma_2\gamma_{3}}\delta_{\gamma_1\gamma_2} +
g_1^{\gamma_1\gamma_{2}}h_1^{\gamma_2}\delta_{\gamma_2\gamma_3}
+g_1^{\gamma_1\gamma_{2}}g_1^{\gamma_2\gamma_{3}},\end{eqnarray} we need to
calculate the mixed products of the diagonal and off-diagonal
elements of the perturbing Hamiltonian matrix. Obviously, we have \begin{eqnarray} \label{A2hh}
A_2^{\gamma\gamma^\prime}(hh)&=&\sum_{\gamma_1,\gamma_{2},\gamma_3}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right]
\left(h_1^{\gamma_1}\delta_{\gamma_1\gamma_{2}}h_1^{\gamma_2}\delta_{\gamma_2\gamma_{3}}\right)
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_{3}} =\frac{(-{\rm i}
h_1^{\gamma} t)^2}{2!}{\rm e}^{-{\rm i} E_{\gamma}
t}\delta_{\gamma\gamma^\prime},\end{eqnarray} \begin{eqnarray} \label{A2hg}
A_2^{\gamma\gamma^\prime}(hg)&=&\sum_{\gamma_1,\gamma_2,\gamma_{3}}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right] h_1^{\gamma_1}g_1^{\gamma_2\gamma_{3}}
\delta_{\gamma_1\gamma_2}\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_3}\nonumber\\
& =&\left[-\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{(E_{\gamma}-E_{\gamma^\prime})^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime} t}}{(E_{\gamma}-E_{\gamma^\prime})^2}+(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right]h_1^{\gamma}g_1^{\gamma\gamma^\prime},\end{eqnarray}
\begin{eqnarray}
\label{A2gh}A_2^{\gamma\gamma^\prime}(gh)&=&\sum_{\gamma_1,\gamma_2,\gamma_{3}}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right]
h_1^{\gamma_2}g_1^{\gamma_1\gamma_{2}}\delta_{\gamma_2\gamma_3}
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_3}\nonumber\\
& =&\left[\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{(E_{\gamma}-E_{\gamma^\prime})^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime} t}}{(E_{\gamma}-E_{\gamma^\prime})^2}-(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right]g_1^{\gamma\gamma^\prime}h_1^{\gamma^\prime},\end{eqnarray}
\begin{eqnarray}
\label{A2gg}A_2^{\gamma\gamma^\prime}(gg)&=&\sum_{\gamma_1,\gamma_2,\gamma_{3}}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right]
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_{3}}
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_3}\nonumber\\
&=&\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma^\prime})}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma_1}-E_{\gamma^\prime})}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{(E_{\gamma}-E_{\gamma^\prime})(E_{\gamma_1}-E_{\gamma^\prime})}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}. \end{eqnarray} In the usual
time-dependent perturbation theory, the zeroth order approximation
of the time evolution of a quantum state keeps its original form \begin{equation}
\ket{\Psi^{(0)}(t)}={\rm e}^{-{\rm i} E_\gamma t}\ket{\Phi^\gamma},\end{equation} where
we have set the initial state as $\ket{\Phi^\gamma}$ for simplicity.
By using our solution, we easily calculate the contribution of
every order approximation from the product of the completely diagonal
elements $h$ of the perturbing Hamiltonian matrix to this zeroth
order approximation \begin{eqnarray} \sum_{\gamma_1,\cdots,\gamma_{l+1}}\left[
\sum_{i=1}^{l+1}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,l])}\right]
\left(\prod_{j=1}^{l}h_1^{\gamma_j}\delta_{\gamma_j\gamma_{j+1}}\right)
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_{l+1}}=\frac{(-{\rm i}
h_1^{\gamma} t)^l}{l!}{\rm e}^{-{\rm i} E_{\gamma}
t}\delta_{\gamma\gamma^\prime}.\end{eqnarray} Therefore, we can absorb the
contributions of all order approximation parts from the product of
completely diagonal elements $h$ of the perturbing Hamiltonian
matrix into this zeroth order approximation to obtain \begin{equation}
\label{0thawithde} \ket{{\Psi^\prime}^{(0)}(t)}={\rm e}^{-{\rm i}
\left(E_\gamma+h_1^\gamma\right) t}\ket{\Phi^\gamma}.\end{equation} Similarly,
by calculation, we can deduce that up to the second approximation,
the perturbed solution has the following form \begin{eqnarray}
\label{2thawithde}
\ket{\Psi^\prime(t)}&=&\sum_{\gamma,\gamma^\prime}\left\{{\rm e}^{-{\rm i}
\left(E_{\gamma}+h_1^\gamma\right)t}\delta_{\gamma\gamma^\prime} +
\left[\frac{{\rm e}^{-{\rm i} \left(E_{\gamma}+h_1^\gamma\right) t}-{\rm e}^{-{\rm i}
\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)
t}}{\left(E_{\gamma}+h_1^{\gamma}\right)
-\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma^\prime}\right.\nonumber\\
& & +\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma}+h_1^\gamma\right)t}}{\left[\left(E_{\gamma}+h_1^\gamma\right)
-\left(E_{\gamma_1}+h_1^{\gamma_1}\right)\right]\left[\left(E_{\gamma}+h_1^\gamma\right)
-\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)\right]}\right.\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma_1}+h_1^{\gamma_1}\right)t}}{\left[\left(E_{\gamma}+h_1^\gamma\right)
-\left(E_{\gamma_1}+h_1^{\gamma_1}\right)\right]\left[\left(E_{\gamma_1}+h_1^{\gamma_1}\right)
-\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)\right]}\nonumber\\
& &\left.\left.+\frac{{\rm e}^{-{\rm i}\left(
E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)t}}{\left[\left(E_{\gamma}+h_1^\gamma\right)
-\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)\right]\left[\left(E_{\gamma_1}+h_1^{\gamma_1}\right)
-\left(E_{\gamma^\prime}+h_1^{\gamma^\prime}\right)\right]}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\right\}\nonumber\\
& &
\left[\diracsp{\Phi^{\gamma^\prime}}{\Psi(0)}\right]\ket{\Phi^\gamma}+\mathcal{O}(H_1^3).
\end{eqnarray} However, for the higher order approximations, the
corresponding calculation is heavy. In fact, it is unnecessary to
calculate the contributions from those terms with the diagonal
elements of $H_1$ once the following skill is introduced, which is
the reason why we omit the relevant calculation details; here we
mention them only to verify the correctness of our exact solution in
this way.
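As a consistency check, the resummation that produces Eq.
(\ref{0thawithde}) can be verified numerically; the following is a
minimal sketch (Python; the numbers are arbitrary illustrations, not
taken from any particular system):
\begin{verbatim}
import math
import numpy as np

E_gamma, h_gamma, t = 1.3, 0.2, 5.0

# partial sum of sum_l (-i h t)^l / l! times e^{-i E t}
series = sum((-1j * h_gamma * t) ** l / math.factorial(l)
             for l in range(40))
lhs = series * np.exp(-1j * E_gamma * t)

# improved zeroth order phase of Eq. (0thawithde)
rhs = np.exp(-1j * (E_gamma + h_gamma) * t)
assert abs(lhs - rhs) < 1e-12
\end{verbatim}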
The results (\ref{0thawithde}) and (\ref{2thawithde}) are not
surprising because of the fact that the Hamiltonian is re-divisible.
Actually, we can furthermore use a trick of redivision of the
Hamiltonian so that the new $H_0$ contains the diagonal part of
$H_1$, that is, \begin{eqnarray}
H_0^\prime&=&H_0+\sum_{\gamma}h_1^\gamma\ket{\Phi^\gamma}\bra{\Phi^\gamma},\\
H_1^\prime&=&H_1-\sum_{\gamma}h_1^\gamma\ket{\Phi^\gamma}\bra{\Phi^\gamma}
=\sum_{\gamma,\gamma^\prime}g_1^{\gamma\gamma^\prime}\ket{\Phi^\gamma}\bra{\Phi^{\gamma^\prime}}.\end{eqnarray}
In other words, without loss of generality, we can always choose
$H_1^\prime$ to have only off-diagonal elements in the
$H_0^\prime$ (or $H_0$) representation, and \begin{equation}
H_0^\prime\ket{\Phi^\gamma}=\left(E_\gamma+h_1^\gamma\right)\ket{\Phi^\gamma}
=E_{\gamma}^\prime\ket{\Phi^\gamma}.\end{equation} It is clear that this
redivision does not change the representation of the unperturbed
Hamiltonian, but does change the corresponding eigenvalues. Although
our skill is very simple, it seems not to have been sufficiently
spread and understood, judging from the fact that some recent
textbooks of quantum mechanics still keep the contributions from the
diagonal elements of the perturbing Hamiltonian matrix in the
expression of the second order perturbed state. It is clear that the
direct cut-off approximation in the usual perturbation theory drops
the contributions from all higher order approximations of the
diagonal elements of the perturbing Hamiltonian matrix. From our
point of view, this flaw results from the usual perturbation theory
introducing the perturbing parameter too early.
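A minimal numerical sketch of this redivision (Python; the function
name \texttt{redivide} is ours and the matrices are arbitrary
illustrations) reads:
\begin{verbatim}
import numpy as np

def redivide(E, H1):
    """Move the diagonal of H1 into the unperturbed part.
    Returns the shifted energies E' = E + h and the purely
    off-diagonal perturbation g."""
    h = np.real_if_close(np.diag(H1))   # diagonal elements h_1^gamma
    g = H1 - np.diag(np.diag(H1))       # off-diagonal remainder
    return E + h, g

E = np.array([0.0, 1.0, 2.5])
H1 = np.array([[0.10, 0.05, 0.00],
               [0.05, -0.20, 0.02],
               [0.00, 0.02, 0.30]])
E_prime, g = redivide(E, H1)   # g now has a zero diagonal
\end{verbatim}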
If there is degeneracy, our notation has to be changed as \begin{eqnarray}
E_{\gamma_i}&\rightarrow&E_{\gamma_i a_{\gamma_i}}=E_{\gamma_i},\\
\delta_{\gamma_i\gamma_j}&\rightarrow&
\delta_{\gamma_i\gamma_j}\delta_{a_{\gamma_i} a_{\gamma_j}},\\
\eta_{\gamma_i\gamma_j}&\rightarrow& \eta_{\gamma_i\gamma_j}+
\delta_{\gamma_i\gamma_j}\eta_{a_{\gamma_i} a_{\gamma_j}}.\end{eqnarray}
Thus, we can find \begin{equation} A_1^{\gamma a_{\gamma},\gamma^\prime
a_{\gamma^\prime}}(h)=\frac{(-{\rm i} h_1^{\gamma} t)}{1!}{\rm e}^{-{\rm i}
E_{\gamma}
t}\delta_{\gamma\gamma^\prime}\delta_{a_{\gamma}a_{\gamma^\prime}},\end{equation}
\begin{eqnarray} \label{A1gd} A_1^{\gamma a_{\gamma},\gamma^\prime
a_{\gamma^\prime}}(g)&=&\left[\frac{{\rm e}^{-{\rm i} E_{\gamma a_\gamma}
t}}{E_{\gamma a_\gamma}-E_{\gamma^\prime
a_{\gamma^\prime}}}-\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime
a_{\gamma^\prime}} t}}{E_{\gamma a_\gamma}-E_{\gamma^\prime
a_{\gamma^\prime}}}\right]g_1^{\gamma a_{\gamma},\gamma^\prime
a_{\gamma^\prime}}\eta_{\gamma\gamma^\prime}+\frac{(-{\rm i} g_1^{\gamma
a_{\gamma},\gamma a_{\gamma^\prime}} t)}{1!}{\rm e}^{-{\rm i} E_{\gamma
a_\gamma} t}\delta_{\gamma\gamma^\prime}.\hskip 1.0cm\end{eqnarray} This
seems to bring some complications. However, we can use the trick in
the usual degenerate perturbation theory, that is, we are free to
choose our base set of unperturbed kets $\ket{\Phi^{\gamma
a_{\gamma}}}$ in such a way that $H_1$ is diagonalized in the
corresponding degenerate subspaces. In other words, we should find
the linear combinations of the degenerate unperturbed kets that
re-span the zeroth-order eigen subspaces of $H_0$ so that \begin{eqnarray}
\bra{\Phi^{\gamma a_{\gamma}}}H_1\ket{\Phi^{\gamma b_{\gamma}}}&=&
g_1^{\gamma a_{\gamma},\gamma b_{\gamma}}=d_1^{\gamma a_\gamma}
\delta_{a_\gamma b_\gamma}.\end{eqnarray} (If some of the $d_1^{\gamma
a_\gamma}$ still take the same values, this procedure can be
repeated in general.) This means that $g_1^{\gamma a_{\gamma},\gamma
a_{\gamma^\prime}}=0$. Then, we use our redivision skill again, that
is \begin{eqnarray} H_0^{\prime\prime}&=&H_0+\sum_{\gamma\notin D,\;
\gamma}h_1^\gamma\ket{\Phi^\gamma}\bra{\Phi^\gamma} +\sum_{\gamma\in
D,\;a_{\gamma}}d_1^{\gamma a_{\gamma}}
\ket{\Phi^{\gamma a_{\gamma}}}\bra{\Phi^{\gamma a_{\gamma}}},\\
H_1^{\prime\prime}&=&H_1-\sum_{\gamma\notin D,\;
\gamma}h_1^\gamma\ket{\Phi^\gamma}\bra{\Phi^\gamma}-\sum_{\gamma\in
D,\;a_{\gamma}}d_1^{\gamma a_{\gamma}} \ket{\Phi^{\gamma
a_{\gamma}}}\bra{\Phi^{\gamma a_{\gamma}}}.\end{eqnarray} where $D$ is a set
of all degenerate subspace-indexes. Thus, the last term in Eq.
(\ref{A1gd}) vanishes, \begin{eqnarray} \label{A1gdnew} A_1^{\gamma
a_{\gamma},\gamma^\prime a_{\gamma^\prime}}(g)&=&\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma a_\gamma} t}}{E_{\gamma a_\gamma}-E_{\gamma^\prime
a_{\gamma^\prime}}}-\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime
a_{\gamma^\prime}} t}}{E_{\gamma a_\gamma}-E_{\gamma^\prime
a_{\gamma^\prime}}}\right]g_1^{\gamma a_{\gamma},\gamma^\prime
a_{\gamma^\prime}}\eta_{\gamma\gamma^\prime}.\end{eqnarray} In fact, under
the precondition that $H_1$ is diagonal in the degenerate subspaces,
we can directly do the replacement \begin{equation}
g_1^{\gamma_i\gamma_j}\rightarrow g_1^{\gamma_i
a_{\gamma_i},\gamma_j a_{\gamma_j}}\eta_{\gamma_i\gamma_j}\end{equation} to go
from the non-degenerate case to the degenerate case. For simplicity,
we always assume that $H_1$ has been diagonalized in the degenerate
subspaces from now on.
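A sketch of this diagonalization within the degenerate subspaces
(Python; the helper name \texttt{block\_diagonalize\_H1} is ours)
might look like:
\begin{verbatim}
import numpy as np

def block_diagonalize_H1(E, H1, tol=1e-9):
    """Rotate the basis inside each degenerate subspace of H_0 so
    that the Hermitian H1 becomes diagonal there (the usual trick
    of degenerate perturbation theory)."""
    n = len(E)
    U = np.eye(n, dtype=complex)
    for e in np.unique(np.round(E / tol) * tol):
        idx = np.where(np.abs(E - e) < tol)[0]
        if len(idx) > 1:
            block = H1[np.ix_(idx, idx)]
            _, vecs = np.linalg.eigh(block)
            U[np.ix_(idx, idx)] = vecs
    return U, U.conj().T @ H1 @ U   # new basis and rotated H1
\end{verbatim}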
It must be emphasized that the Hamiltonian redivision skill leads to
the fact that the new perturbed solution can be obtained by the
replacement \begin{equation} \label{rdenergy} E_{\gamma_i}\rightarrow
E_{\gamma_i}+h_1^{\gamma_i}\end{equation} in the non-degenerate perturbed
solution and its conclusions. With degeneracy present, if our method
is to work well, the degeneracy should be completely removed in the
diagonalization procedures of the degenerate subspaces and the
Hamiltonian redivision, that is, for any given degenerate subspace,
$d_1^{\gamma a}\neq d_1^{\gamma b}$ if $a\neq b$. In other words,
$E_{\gamma a}^{\prime\prime}\neq E_{\gamma b}^{\prime\prime}$ if
$a\neq b$. This means that no eigenvalues of $H_0^{\prime\prime}$
are the same any longer, so we can go back to the non-degenerate
cases. Or specially, if we allow the remained degeneracies, the
off-diagonal elements of the perturbing Hamiltonian matrix between
any two degenerate levels must always vanish. This implies that
there is no extra contribution from the degeneracies in any
approximation beyond the zeroth order. It is important to remember
these facts. However, how must we proceed if the degeneracies are
not completely removed by the usual diagonalization procedure and
our Hamiltonian redivision, and the special cases with remained
degeneracies stated above are not valid? This is known to be a
challenge in the usual perturbation theory. Although our exact
solution can apply to such cases, the form of the perturbed solution
will get complicated because more apparent divergences need to be
eliminated, and then some new terms proportional to powers of the
evolution time will appear in general. We will study this problem in
the near future. Based on the above reasons, we do not consider the
degenerate case from now on.
From the statements above, we have seen that there are two
equivalent ways to obtain the same perturbed solution and its
conclusions. One of them is to redefine the energy level
$E_{\gamma_i}$ as $E^\prime_{\gamma_i}$ (or
$E^{\prime\prime}_{\gamma_i}$), think of $E^\prime_{\gamma_i}$ (or
$E^{\prime\prime}_{\gamma_i}$) as explicitly independent of the
perturbing parameter from the redefined point of view, and then use
the method in the usual perturbation theory to obtain the result
from the redivided $H_1^\prime$ (or $H_1^{\prime\prime}$). The other
way is to directly derive the perturbed solution from the original
Hamiltonian by using the standard procedure, but with the
rearrangement and summation carried out just as we have done above.
From our point of view, this is because the perturbing parameter is
only a formal multiplier in mathematics and it can be introduced
after redefining $E^\prime_{\gamma_i}$. This is natural, although
this problem seems not to have been noticed for a long time. The
first skill, that is, the Hamiltonian redivision skill, will again
be applied in our scheme to obtain the improved forms of the
perturbed energy and perturbed state in Sec. \ref{sec3}.
The Hamiltonian redivision not only overcomes the flaw of the usual
perturbation theory, but also has three obvious advantages. Firstly,
it advances the calculation precision of the perturbation theory
because it makes the contributions from all order approximations of
the diagonal elements of the perturbing Hamiltonian matrix naturally
included. Secondly, it extends the applicable range of the
perturbation theory for the same reason, since the diagonal elements
of the perturbing Hamiltonian no longer need to be small. Lastly, it
can be used to remove degeneracies, which is important for the
perturbation theory.

For simplicity, in the following, we omit the ${}^\prime$ (or
${}^{\prime\prime}$) in $H_0$, $H_1$ as well as $E_\gamma$, and
always let $H_1$ have only its off-diagonal part and let $H_0$ have
no degeneracy unless particularly stated.
\subsection{Perturbing Hamiltonian matrix product decomposition and apparent divergence elimination}
In this subsection, we present the second important skill
enlightened by our exact solution, that is, the perturbing
Hamiltonian matrix product decomposition, which is a technology to
separate the contraction terms with apparent divergences from the
anti-contraction terms without apparent divergences; we can then
eliminate these apparent divergences by a limit process. More
importantly, we propose the so-called ``dynamical rearrangement and
summation" idea in order to absorb the partial contributions from
the high order even all order approximations of the perturbing
Hamiltonian into the lower order terms of our perturbation theory.
It is a key method in our improved scheme of perturbation theory.
Let us start with the second order approximation. Since we have
taken $H_1^{\gamma_j\gamma_{j+1}}$ only with the off-diagonal part
$g_1^{\gamma_j\gamma_{j+1}}$, the contribution from the second order
approximation of the perturbing Hamiltonian is only
$A_2^{\gamma\gamma^\prime}(gg)$ in Eq. (\ref{A2gg}). However, we
find that in the above expression of $A_2^{\gamma\gamma^\prime}(gg)$,
the apparent divergence has not been completely eliminated, or the
limit has not been completely found, because we have not excluded
the case $E_\gamma=E_{\gamma^\prime}$ (or $\gamma=\gamma^\prime$).
This problem can be fixed by introducing a perturbing Hamiltonian
matrix product decomposition \begin{equation}
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}
=g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}\delta_{\gamma_1\gamma_3}
+g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}\eta_{\gamma_1\gamma_3},
\end{equation} where $\eta_{\gamma_1\gamma_3}=1-{\delta}_{\gamma_1\gamma_3}$.
Thus, the contribution from the second order approximation is made
of two terms: the so-called contraction term with the $\delta$
function factor and the so-called anti-contraction term with the
$\eta$ function factor. Obviously, the contraction term has the
apparent divergence while the anti-contraction term has no apparent
divergence. Hence, in order to eliminate the apparent divergence in
the contraction term, we only need to find its limit. It must be
emphasized that we only consider the non-degenerate case here and
hereafter for simplicity. When degeneracy happens, two indexes with
the same main energy level number will not have the
anti-contraction.
In terms of the above skill, we find that the contribution from the
second order approximation is made of the corresponding contraction
and anti-contraction terms \begin{equation}
{A}_2^{\gamma\gamma^\prime}({gg})={A}_2^{\gamma\gamma^\prime}(gg;c)
+{A}_2^{\gamma\gamma^\prime}(gg;n),\end{equation} where
\begin{eqnarray} \label{A2ggc}
{A}_2^{\gamma\gamma^\prime}(gg;c)&=&\sum_{\gamma_1,\gamma_2,\gamma_{3}}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right]
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_{3}}\delta_{\gamma_1\gamma_3}
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_3}\nonumber\\
&=& \sum_{\gamma_1}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2}+(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma}t}}{E_\gamma-E_{\gamma_1}}\right]
\left|g_1^{\gamma\gamma_1}\right|^2\delta_{\gamma\gamma^\prime},
\end{eqnarray}
\begin{eqnarray} \label{A2ggn}
{A}_2^{\gamma\gamma^\prime}(gg;n)&=&\sum_{\gamma_1,\gamma_2,\gamma_{3}}\left[
\sum_{i=1}^{3}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,2])}\right]
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_{3}}\eta_{\gamma_1\gamma_3}
\delta_{\gamma\gamma_1}\delta_{\gamma^\prime\gamma_3}\nonumber\\
&=&\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma^\prime})}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma_1}-E_{\gamma^\prime})}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{(E_{\gamma}-E_{\gamma^\prime})(E_{\gamma_1}-E_{\gamma^\prime})}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\hskip 1.0cm\end{eqnarray}
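The limit structure of the contraction term can be checked
numerically; in the following sketch (Python; \texttt{generic\_term}
is our own helper, and the common factor
$|g_1^{\gamma\gamma_1}|^2$ is omitted on both sides), a doubly
occurring energy is slightly detuned and the generic series term
converges to the closed contraction bracket of Eq. (\ref{A2ggc}):
\begin{verbatim}
import numpy as np

E0, E1, t = 1.0, 2.3, 4.0

def generic_term(Ea, Eb, Ec):
    # sum_{i=1}^{3} (-1)^{i-1} e^{-i E_i t} / d_i for l = 2
    d1 = (Ea - Eb) * (Ea - Ec)
    d2 = (Ea - Eb) * (Eb - Ec)
    d3 = (Ea - Ec) * (Eb - Ec)
    return (np.exp(-1j * Ea * t) / d1
            - np.exp(-1j * Eb * t) / d2
            + np.exp(-1j * Ec * t) / d3)

# closed contraction bracket of Eq. (A2ggc)
closed = (-np.exp(-1j * E0 * t) / (E0 - E1) ** 2
          + np.exp(-1j * E1 * t) / (E0 - E1) ** 2
          + (-1j * t) * np.exp(-1j * E0 * t) / (E0 - E1))

for eps in [1e-3, 1e-5, 1e-7]:
    print(eps, abs(generic_term(E0, E1, E0 + eps) - closed))
# the differences shrink as eps -> 0
\end{verbatim}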
The above method can be extended to higher order approximations
by introducing the skill of perturbing Hamiltonian matrix product
decomposition, or simply the $g$-product decomposition when
the perturbing Hamiltonian matrix is off-diagonal. For a sequential
product of off-diagonal elements $g$ with the form $\prod_{k=1}^m
g_1^{\gamma_k\gamma_{k+1}}$ ($m\geq 2$), we define its $(m-1)$th
decomposition by \begin{equation} \label{gpd} \prod_{k=1}^m
g_1^{\gamma_k\gamma_{k+1}}=\left(\prod_{k=1}^m
g_1^{\gamma_k\gamma_{k+1}}\right)\delta_{\gamma_1\gamma_{m+1}}
+\left(\prod_{k=1}^m
g_1^{\gamma_k\gamma_{k+1}}\right)\eta_{\gamma_1\gamma_{m+1}}.\end{equation}
When we calculate the contributions from the $n$th order
approximation, we first carry out the $(n-1)$ first
decompositions, that is \begin{equation} \label{rdofgp}\prod_{k=1}^n
g_1^{\gamma_k\gamma_{k+1}}=\left(\prod_{k=1}^n
g_1^{\gamma_k\gamma_{k+1}}\right)\left[\prod_{k=1}^{n-1}
\left(\delta_{\gamma_k\gamma_{k+2}}+\eta_{\gamma_k\gamma_{k+2}}\right)\right].
\end{equation} Obviously, from the fact that $H_1$ is usually taken to be
Hermitian, it follows that \begin{equation}
g_1^{\gamma_{j}\gamma_{j+1}}g_1^{\gamma_{j+1}\gamma_{j+2}}\delta_{\gamma_j\gamma_{j+2}}
=\left|g_1^{\gamma_{j}\gamma_{j+1}}\right|^2\delta_{\gamma_j\gamma_{j+2}}.\end{equation}
When the contribution from a given order approximation is
considered, the summation over one of two subscripts will lead to
the contraction of the $g$-product. More generally, for the
contraction of a $g$-product with an even number of factors, \begin{equation} \left(\prod_{j=1}^m
g_1^{\gamma_j\gamma_{j+1}}
\prod_{k=1}^{m-1}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{m+1}\gamma^\prime}
=\left|g_1^{\gamma\gamma_2}\right|^m\left(\prod_{k=1}^{m-1}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{m+1}\gamma^\prime}\delta_{\gamma\gamma^\prime},\end{equation}
and for the contraction of a $g$-product with an odd number of factors, \begin{equation}
\left(\prod_{j=1}^m g_1^{\gamma_j\gamma_{j+1}}
\prod_{k=1}^{m-1}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{m+1}\gamma^\prime}
=\left|g_1^{\gamma\gamma^\prime}\right|^{m-1}\left(\prod_{k=1}^{m-1}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{m+1}\gamma^\prime}g_1^{\gamma\gamma^\prime},\end{equation}
where $\delta_{\gamma_1\gamma}\delta_{\gamma_{m+1}\gamma^\prime}$ is
a factor appearing in the expression of our solution.
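The bookkeeping of Eq. (\ref{rdofgp}) is mechanical, as the
following sketch indicates (Python; the generator name
\texttt{first\_decompositions} is ours): each of the $n-1$ index
pairs $(\gamma_k,\gamma_{k+2})$ independently receives a $\delta$
(contraction, \texttt{'c'}) or an $\eta$ (anti-contraction,
\texttt{'n'}) factor.
\begin{verbatim}
import itertools

def first_decompositions(n):
    """Enumerate the 2**(n-1) terms of the first decomposition of
    an n-factor g-product: one 'c' (delta) or 'n' (eta) factor per
    index pair (gamma_k, gamma_{k+2}), k = 1, ..., n-1."""
    pairs = [(k, k + 2) for k in range(1, n)]   # 1-based index pairs
    for pattern in itertools.product('cn', repeat=n - 1):
        yield ''.join(pattern), pairs

for pattern, pairs in first_decompositions(3):
    print(pattern, pairs)   # cc, cn, nc, nn with pairs (1,3), (2,4)
\end{verbatim}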
Then, we consider, in turn, all possible second decompositions,
third decompositions, and up to the $(n-1)$th decomposition. It
must be emphasized that after calculating the contributions from the
terms of lower decompositions, some terms in the higher
decompositions may be trivial because there are some symmetric and
complementarily symmetric indexes in the corresponding results, that
is, the products of these results and the given
$\delta_{\gamma_k\gamma_{k^\prime}}$ or
$\eta_{\gamma_k\gamma_{k^\prime}}$ are zero. In other words, such
higher decompositions do not need to be considered. As an
example, let us analyze the contribution from the third order
approximation. It is clear that the first decomposition of a
sequential product of three off-diagonal elements becomes \begin{eqnarray}
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}
&=&g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}
\delta_{\gamma_1\gamma_3}\delta_{\gamma_2\gamma_4}
+g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}
\delta_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}
\nonumber\\
&
&+g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}
\eta_{\gamma_1\gamma_3}\delta_{\gamma_2\gamma_4}
+g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}
\eta_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}.\end{eqnarray} Thus, the
related contribution is just divided into $4$ terms \begin{equation}
A_3^{\gamma\gamma^\prime}(ggg)=A_3^{\gamma\gamma^\prime}(ggg;cc)
+A_3^{\gamma\gamma^\prime}(ggg;cn)+A_3^{\gamma\gamma^\prime}(ggg;nc)
+{A}_3^{\gamma\gamma^\prime}(ggg;nn).\end{equation} In fact, by calculation we
know that the second decompositions of the former three terms do not
need to be considered; only the second decomposition of the last
term is nontrivial. This means that \begin{equation}
{A}_3^{\gamma\gamma^\prime}(ggg;nn)={A}_3^{\gamma\gamma^\prime}(ggg;nn,c)
+{A}_3^{\gamma\gamma^\prime}(ggg;nn,n),\end{equation} where we have added
$\delta_{\gamma_1\gamma_4}$ in the definition of
${A}_3^{\gamma\gamma^\prime}(ggg;nn,c)$, and
$\eta_{\gamma_1\gamma_4}$ in the definition of
${A}_3^{\gamma\gamma^\prime}(ggg;nn,n)$. Obviously, in the practical
process, this feature largely simplifies the calculations. It is
easy to see that the number of all the terms with contractions and
anti-contractions is $5$. For convenience and clearness, we call the
contributions from the different terms in the decomposition of the
$g$-product the contractions and anti-contractions of the
$g$-product. Of course, the contraction and anti-contraction refer
to the meaning after summation(s) over the subscript(s) in general.
Moreover, here and hereafter, we drop the argument $gg\cdots g$ in
the $i$th order approximation $A_i$ since its meaning is already
indicated by $i$ after the Hamiltonian is redivided. For example,
the explicit expressions of all the contraction and anti-contraction
terms in the third order approximation $A_3$ can be calculated as
follows:
\begin{eqnarray}\label{A3cc} {A}_3^{\gamma\gamma^\prime}(cc)&=
&\sum_{\gamma_1,\cdots,\gamma_{4}}\left[
\sum_{i=1}^{4}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,3])}\right] \left[\prod_{j=1}^3
g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{2}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{4}\gamma^\prime}\nonumber\\
& =& \left[-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^3}+\frac{2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^3}+(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma}t}}
{\left(E_\gamma-E_{\gamma^\prime}\right)^2}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2}\right]
\left|g_1^{\gamma\gamma^\prime}\right|^2g_1^{\gamma\gamma^\prime},\end{eqnarray}
\begin{eqnarray}\label{A3cn}
{A}_3^{\gamma\gamma^\prime}(cn)&=&\sum_{\gamma_1,\cdots,\gamma_{4}}\left[
\sum_{i=1}^{4}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,3])}\right] \left[\prod_{j=1}^3
g_1^{\gamma_j\gamma_{j+1}}\right]\delta_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}
\delta_{\gamma_1\gamma}\delta_{\gamma_{4}\gamma^\prime}\nonumber\\
& =& \sum_{\gamma_1}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_\gamma-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_\gamma-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &\left. -\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}
{\left(E_\gamma-E_{\gamma_1}\right)\left(E_\gamma-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma\gamma_1}\right|^2g_1^{\gamma\gamma^\prime}
\eta_{\gamma_1\gamma^\prime},\end{eqnarray}
\begin{eqnarray} \label{A3nc} {A}_3^{\gamma\gamma^\prime}(nc)&=
&\sum_{\gamma_1,\cdots,\gamma_{4}}\left[
\sum_{i=1}^{4}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,3])}\right] \left[\prod_{j=1}^3
g_1^{\gamma_j\gamma_{j+1}}\right]\eta_{\gamma_1\gamma_3}\delta_{\gamma_2\gamma_4}
\delta_{\gamma_1\gamma}\delta_{\gamma_{4}\gamma^\prime}\nonumber\\
& =& \sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_\gamma-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_\gamma-E_{\gamma^\prime}\right)^2}
\right.\nonumber\\
& &\left.+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}+(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}t}}
{\left(E_\gamma-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma^\prime}
\left|g_1^{\gamma_1\gamma^\prime}\right|^2\eta_{\gamma\gamma_1},\end{eqnarray}
\begin{eqnarray} \label{A3nn-c} {A}_3^{\gamma\gamma^\prime}(nn,c)
&=&\sum_{\gamma_1,\cdots,\gamma_{4}}\left[
\sum_{i=1}^{4}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,3])}\right] \left[\prod_{j=1}^3
g_1^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma_1\gamma}\delta_{\gamma_{4}\gamma^\prime}
\eta_{\gamma_1\gamma_3}
\eta_{\gamma_2\gamma_4}{\delta}_{\gamma\gamma^\prime}
\nonumber\\
&=& \sum_{\gamma_1\gamma_2}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)}\right.\nonumber\\
& &\left. -\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)}
\right]g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma^\prime}
\eta_{\gamma\gamma_2}{\delta}_{\gamma\gamma^\prime}, \end{eqnarray}
\begin{eqnarray} \label{A3nn-n} {A}_3^{\gamma\gamma^\prime}(nn,n)
&=&\sum_{\gamma_1,\cdots,\gamma_{4}}\left[
\sum_{i=1}^{4}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,3])}\right] \left[\prod_{j=1}^3
g_1^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma_1\gamma}\delta_{\gamma_{4}\gamma^\prime}
\eta_{\gamma_1\gamma_3}
\eta_{\gamma_2\gamma_4}\eta_{\gamma\gamma^\prime}
\nonumber\\
&=& \sum_{\gamma_1\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}
-\frac{{\rm e}^{-{\rm i} E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& & \left.+\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}
\right]\nonumber\\
& &\times
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma^\prime}
\eta_{\gamma\gamma_2}\eta_{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma^\prime}.\end{eqnarray} In the above calculations, the
technologies used are mainly finding limits, changing and summing
over dummy indexes, as well as the replacement
$g_1^{\gamma_i\gamma_j}\eta_{\gamma_i\gamma_j}=g_1^{\gamma_i\gamma_j}$,
since $g_1^{\gamma_i\gamma_j}$ is already off-diagonal.
It must be emphasized that, in our notation,
$A_i^{\gamma\gamma^\prime}$ represents the contribution from the
$i$th order approximation. The other independent variables are
divided into $i-1$ groups and are arranged sequentially
corresponding to the order of the $g$-product decomposition. That
is, the first variable group represents the first decomposition, the
second variable group represents the second decomposition, and so
on. Every variable group is a bit-string made of the three possible
elements $c,n,k$, and its length is equal to the number of index
pairs in the related order of the $g$-product decomposition, that
is, for the $j$th decomposition in the $i$th order approximation its
length is $i-j$. In each variable group, $c$ corresponds to a
$\delta$ function, $n$ corresponds to an $\eta$ function and $k$
corresponds to $1$ (no decomposition). Their sequence in the
bit-string corresponds to the sequence of the contraction and/or
anti-contraction index string. From the above analysis and
statements, the index string of the $j$th decomposition in the $i$th
order approximation is: \begin{equation}
\prod_{k=1}^{i-j}\left(\gamma_k,\gamma_{k+1+j}\right). \end{equation} For
example, for $A_5$, the first variable group $cccn$ refers to the
first decomposition in the fifth order approximation, and the
corresponding terms include the factor
$\delta_{\gamma_1\gamma_3}\delta_{\gamma_2\gamma_4}
\delta_{\gamma_3\gamma_5}\eta_{\gamma_4\gamma_6}$ in the definition
of $A_5(cccn)$. Similarly, $cncc$ means inserting the factor
$\delta_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}
\delta_{\gamma_3\gamma_5}\delta_{\gamma_4\gamma_6}$ into the
definition of $A_5(cncc)$. When there are nontrivial second
contractions, for instance, the two variable groups $(ccnn,kkc)$
represent that the definition of $A_5(ccnn,kkc)$ has the factor
$\left(\delta_{\gamma_1\gamma_3}\delta_{\gamma_2\gamma_4}
\eta_{\gamma_3\gamma_5}\eta_{\gamma_4\gamma_6}\right)\delta_{\gamma_3\gamma_6}$.
Since there are fully trivial decompositions (bit-strings made of
only $k$), we omit their related variable groups for simplicity.
Furthermore, we pack up all the contraction and anti-contraction
terms in the following way so that we can conveniently obtain the
improved forms of the perturbed solution of dynamics absorbing the
partial contributions from the high order even all order
approximations. We first decompose $A_3^{\gamma\gamma^\prime}$,
which is a summation of all the above terms, into three parts
according to ${\rm e}^{-{\rm i} E_{\gamma_i}t}$, $(-{\rm i} t) {\rm e}^{-{\rm i} E_{\gamma_i}t}$
and $(-{\rm i} t)^2{\rm e}^{-{\rm i} E_{\gamma_i}t}/2$: \begin{equation}
A_3^{\gamma\gamma^\prime}=A_3^{\gamma\gamma^\prime}({\rm e})+A_3^{\gamma\gamma^\prime}(t{\rm e})
+A_3^{\gamma\gamma^\prime}(t^2{\rm e}).\end{equation} Secondly, we decompose each of
its terms into three parts according to ${\rm e}^{-{\rm i} E_{\gamma}t}$,
${\rm e}^{-{\rm i} E_{\gamma_1}t}$ ($\sum_{\gamma_1}{\rm e}^{-{\rm i} E_{\gamma_1}t}$)
and ${\rm e}^{-{\rm i} E_{\gamma^\prime}t}$: \begin{eqnarray}
A_3^{\gamma\gamma^\prime}({\rm e})&=&A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma}t})+A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_3^{\gamma\gamma^\prime}(t{\rm e})&=&A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma}t})+A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_3^{\gamma\gamma^\prime}(t^2{\rm e})&=&A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma}t})+A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}). \end{eqnarray} Finally, we again decompose every term
in the above equations into the diagonal and off-diagonal parts
about $\gamma$ and $\gamma^\prime$: \begin{eqnarray}
A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}),\\
A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_3^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}) ,\\
A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_3^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \end{eqnarray} where $E_{\gamma_i}$ takes $
E_{\gamma}, E_{\gamma_1}$ and $E_{\gamma^\prime}$.
According to the above way, it is easy to obtain \begin{eqnarray}
A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma}t};{\rm D})&=&
-\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i} E_{\gamma}
t}\left[\frac{1}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)^2}+\frac{1}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma_2}\right)}\right]g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma}\delta_{\gamma\gamma^\prime},\\
A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma}t};{\rm N})&=&
-\sum_{\gamma_1}{\rm e}^{-{\rm i} E_{\gamma}
t}\left[\frac{1}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{1}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right]g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}
g_1^{\gamma\gamma^\prime}\nonumber\\
& &+\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i}
E_{\gamma}t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma_2}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)},\end{eqnarray}
\begin{eqnarray} A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma_1}t};{\rm D})&=&
\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i} E_{\gamma_1}
t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma}\delta_{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)},\\
A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma_1}t};{\rm N})&=&
-\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i}
E_{\gamma_1}t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)},\end{eqnarray}
\begin{eqnarray} A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma_2}t};{\rm D})&=&
-\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i} E_{\gamma_2}
t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma}\delta_{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)},\\
A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma_2}t};{\rm N})&=&
\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i}
E_{\gamma_2}t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma_2}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)},\end{eqnarray}
\begin{eqnarray} A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma^\prime}t};{\rm
D})&=&0,\\ A_3^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma^\prime}t};{\rm N})&=& \sum_{\gamma_1}{\rm e}^{-{\rm i}
E_{\gamma^\prime}
t}\left[\frac{1}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}+\frac{1}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]g_1^{\gamma^\prime\gamma_1}g_1^{\gamma_1\gamma^\prime}
g_1^{\gamma\gamma^\prime}\nonumber\\
& &-\sum_{\gamma_1,\gamma_2}{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}.\end{eqnarray}
At the end of this subsection, we would like to point out that the
main purpose of introducing the $g$-product decomposition and
calculating the contractions and anti-contractions of the
$g$-product is to eliminate the apparent divergences and find all
the limits of the contributions of the $g$-product contraction
terms. This is important for expressing the results in a form with
physical significance.
\section{Improved forms of perturbed solution of dynamics}\label{sec3}
In fact, the final aim of using the $g$-product decomposition and then
calculating the limits of the contraction terms is to absorb the
partial contributions from the high order approximations of the
off-diagonal elements of the perturbing Hamiltonian matrix into the
lower order approximations in our improved scheme of perturbation
theory. In this section, making use of the skills and methods stated
in the previous section, we obtain the zeroth, first, second and
third order improved forms of perturbed solutions with the above
features.
Mathematically, the process of obtaining the improved forms of
perturbed solutions is a technique for handling an infinite
series: according to certain principles and the form of the general
term, we rearrange the terms, grouping together those with the same
features; we then sum all of the terms in such a particular
group so that they become a compact function at a given precision;
finally, the infinite series is transformed into a new series form
that directly relates to the studied problem. More concretely,
since we are concerned with the evolution of the system in time $t$, we
group together those terms $(-{\rm i} y_i t) {\rm e}^{-{\rm i} x_i t}$, $(-{\rm i} y_i t)^2
{\rm e}^{-{\rm i} x_i t}/2!$, $(-{\rm i} y_i t)^3 {\rm e}^{-{\rm i} x_i t}/3!$, $\cdots$
sharing the same factor function $f$, and then sum
them to obtain an exponential function
$f\exp\left[-{\rm i}\left(x_i+y_i\right)t\right]$. The physical reason for
doing this is that such an exponential function represents the system
evolution in theory, and it has an obvious physical significance in
the calculation of the transition probability and the perturbed energy.
Through this rearranging and summing, the terms with factors
$t^a{\rm e}^{-{\rm i} E_{\gamma_i}t}$, $(a=1,2,\cdots)$ in the higher order
approximations are absorbed into the improved lower order approximations;
we thus advance the precision, particularly when the evolution
time $t$ is long enough. We call this the ``dynamical rearrangement
and summation'' method.
\subsection{Improved form of the zeroth order perturbed solution of dynamics}
Let us start with the zeroth order perturbed solution of dynamics.
In the usual perturbation theory, it is well known that \begin{equation}
\ket{\Psi^{(0)}(t)}=\sum_{\gamma}{\rm e}^{-{\rm i} E_\gamma
t}\diracsp{\Phi^\gamma}{\Psi(0)}\ket{\Phi^\gamma}=\sum_{\gamma\gamma^\prime}{\rm e}^{-{\rm i}
E_\gamma
t}\delta_{\gamma\gamma^\prime}a_{\gamma^\prime}\ket{\Phi^\gamma},\end{equation}
where $a_{\gamma^\prime}=\diracsp{\Phi^{\gamma^\prime}}{\Psi(0)}$.
Now, we would like to improve it so that it can absorb the partial
contributions from higher order approximations. Actually, we can
find that $A_2(c)$ and $A_3(nn,c)$ have terms proportional to
$(-{\rm i} t)$ \begin{eqnarray}\label{0th1} & & (-{\rm i} t){\rm e}^{-{\rm i} E_{\gamma}
t}\left[\sum_{\gamma_1}\frac{1}{E_{\gamma}-E_{\gamma_1}}
\left|g_1^{\gamma\gamma_1}\right|^2\right]
\delta_{\gamma\gamma^\prime},\\
\label{0th2} & &(-{\rm i} t){\rm e}^{-{\rm i} E_{\gamma}
t}\left[\sum_{\gamma_1,\gamma_2}\frac{1}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma_2})}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma^\prime}\right]
\delta_{\gamma\gamma^\prime}.\end{eqnarray} Introduce the notation \begin{eqnarray}
G^{(2)}_{\gamma}&=&\sum_{\gamma_1}\frac{1}{E_{\gamma}-E_{\gamma_1}}\left|g_1^{\gamma\gamma_1}\right|^2,\\
G^{(3)}_{\gamma}&=&\sum_{\gamma_1,\gamma_2}\frac{1}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma_2})}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma}.\end{eqnarray}
It is clear that $G_\gamma^{(a)}$ has the dimension of energy, and we
will see that it can be called the $a$th revised energy. Let us add
the terms (\ref{0th1}), (\ref{0th2}) and the related terms in
$A_4(t{\rm e}^{-{\rm i} E_{\gamma} t},{\rm D}),A_4(t^2{\rm e}^{-{\rm i}
E_{\gamma}t},{\rm D}),A_5(t{\rm e}^{-{\rm i} E_{\gamma} t},{\rm D})$,
$A_5(t^2{\rm e}^{-{\rm i} E_{\gamma} t},{\rm D})$, $A_6(t^2{\rm e}^{-{\rm i} E_{\gamma}
t},{\rm D})$ and $A_6(t^3 {\rm e}^{-{\rm i} E_{\gamma} t})$ given in Appendix
\ref{a1} together, that is, \begin{eqnarray} A_{\rm
I0}^{\gamma\gamma^\prime}(t)&=&{\rm e}^{-{\rm i} E_{\gamma}t}\left[1+(-{\rm i}
t)\left(G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma+G^{(5)}_\gamma\right)\right.\nonumber\\
& &\left.+ \frac{(-{\rm i}
t)^2}{2!}\left(G^{(2)}_\gamma+G^{(3)}_\gamma\right)^2+ \frac{(-{\rm i}
t)^2}{2!}2 G^{(2)}_\gamma G^{(4)}_\gamma+\cdots
\right]\delta_{\gamma\gamma^\prime}. \end{eqnarray} Although we have not
carried out further calculations, from the mathematical symmetry and
the physical concept we can expect that \begin{eqnarray} A_{\rm
I0}^{\gamma\gamma^\prime}(t)&=&{\rm e}^{-{\rm i} E_{\gamma}t}\left[1+(-{\rm i}
t)\left(G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma+G^{(5)}_\gamma\right)\right.\nonumber\\
& &\left.+ \frac{(-{\rm i}
t)^2}{2!}\left(G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma+G^{(5)}_\gamma\right)^2+\cdots
\right]\delta_{\gamma\gamma^\prime}. \end{eqnarray} The new terms added to the
above equation ought, we think, to appear in $A_7(t)$, $A_8(t)$,
$A_9(t)$ and $A_{10}(t)$, or to come from introducing the higher
order approximations. So we have \begin{equation} A_{\rm
I0}^{\gamma\gamma^\prime}(t)={\rm e}^{-{\rm i}\left(E_\gamma+G^{(2)}_\gamma
+{G}^{(3)}_\gamma+G^{(4)}_\gamma+G^{(5)}_\gamma\right)
t}\delta_{\gamma\gamma^\prime}\end{equation} and then obtain the improved form
of the zeroth order perturbed solution of dynamics \begin{equation} \label{ips0}
\ket{\Psi^{(0)}(t)}_{\rm I}=\sum_{\gamma\gamma^\prime}A_{\rm
I0}^{\gamma\gamma^\prime}(t)a_{\gamma^\prime}\ket{\Phi^\gamma}.\end{equation}
It is clear that $G^{(2)}_\gamma$ is real. In fact, $G^{(3)}_\gamma$
is also real. In order to prove this, we exchange the dummy indices
$\gamma_1$ and $\gamma_2$ and take the complex conjugate of
$G^{(3)}_\gamma$, that is \begin{eqnarray}
{G^{(3)}_{\gamma}}^*&=&\sum_{\gamma_1,\gamma_2}\frac{1}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma_2})}
\left(g_1^{\gamma\gamma_2}\right)^*\left(g_1^{\gamma_2\gamma_1}\right)^*
\left(g_1^{\gamma_1\gamma}\right)^*\nonumber\\
&=&
\sum_{\gamma_1,\gamma_2}\frac{1}{(E_{\gamma}-E_{\gamma_1})(E_{\gamma}-E_{\gamma_2})}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma}\nonumber\\
&=& G^{(3)}_\gamma, \end{eqnarray} where we have used the relation
$\left(g_1^{\beta_1\beta_2}\right)^*=g_1^{\beta_2\beta_1}$ for any
$\beta_1$ and $\beta_2$, since $H_1$ is Hermitian. Similar analyses can
be applied to $G^{(4)}_\gamma$ and $G^{(5)}_\gamma$. These mean that
${\rm e}^{-{\rm i}
\left(G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma+G^{(5)}_\gamma\right)
t}$ is still an oscillatory factor.
\subsection{Improved form of the first order perturbed solution of
dynamics}
Furthermore, in order to absorb the partial contributions from the
approximations higher than the zeroth order, we need to consider the
contributions from the off-diagonal elements in the higher order
approximations.
The well-known usual first order perturbed part of dynamics is \begin{eqnarray}
\ket{\Psi^{(1)}(t)}&=&\sum_{\gamma,\gamma^\prime}\left[\frac{{\rm e}^{-{\rm i}
E_\gamma t}}{E_{\gamma}-E_{\gamma^\prime}}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right]H_1^{\gamma\gamma^\prime}\ket{\Phi^\gamma}
= \sum_{\gamma,\gamma^\prime}\left[\left(\frac{{\rm e}^{-{\rm i} E_\gamma
t}}{E_{\gamma}-E_{\gamma^\prime}}-\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}\right)g_1^{\gamma\gamma^\prime}\right]\ket{\Phi^\gamma}.\end{eqnarray}
It must be emphasized that, for simplicity, $H_1$ is taken here to
contain only the off-diagonal part. That is, we have used the
Hamiltonian redivision skill whenever the perturbing Hamiltonian
matrix has diagonal elements.
Thus, from $A_3(t{\rm e}^{-{\rm i} E_{\gamma} t},{\rm N})$ and $A_4(t{\rm e}^{-{\rm i}
E_{\gamma} t},{\rm N})$, $A_4(t^2{\rm e}^{-{\rm i} E_{\gamma}t},{\rm D})$,
$A_5(t^2{\rm e}^{-{\rm i} E_{\gamma} t},{\rm N})$, $A_6(t^2{\rm e}^{-{\rm i} E_{\gamma}
t},{\rm N})$, which are defined and calculated in the Appendix
\ref{a1}, it follows that \begin{eqnarray} A_{\rm
I1}^{\gamma\gamma^\prime}(t)&=&\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}\left[1+(-{\rm i}
t)\left(G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma\right)+
\frac{(-{\rm i} t)^2}{2!}\left(G^{(2)}_\gamma\right)^2+ \frac{(-{\rm i}
t)^2}{2!}2 G^{(2)}_\gamma G^{(3)}_\gamma+\cdots
\right]g_1^{\gamma\gamma^\prime}\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}\left[1+(-{\rm i}
t)\left(G^{(2)}_{\gamma^\prime}+G^{(3)}_{\gamma^\prime}+G^{(4)}_{\gamma^\prime}\right)+
\frac{(-{\rm i} t)^2}{2!}\left(G^{(2)}_{\gamma^\prime}\right)^2+
\frac{(-{\rm i} t)^2}{2!}2 G^{(2)}_{\gamma^\prime}
G^{(3)}_{\gamma^\prime}+\cdots
\right]g_1^{\gamma\gamma^\prime}.\hskip 0.5cm\end{eqnarray} Therefore, by
rewriting \begin{equation} A_{\rm
I1}^{\gamma\gamma^\prime}(t)=\left(\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma+G^{(3)}_\gamma+G^{(4)}_\gamma\right)
t}}{E_{\gamma}-E_{\gamma^\prime}}-\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma^\prime}+G^{(2)}_{\gamma^\prime}+G^{(3)}_{\gamma^\prime}+G^{(4)}_{\gamma^\prime}\right)
t}}{E_{\gamma}-E_{\gamma^\prime}}\right)g_1^{\gamma\gamma^\prime},\end{equation}
we obtain the improved form of the first order perturbed solution of
dynamics \begin{eqnarray} \label{ips1} \ket{\Psi^{(1)}(t)}_{\rm
I}&=&\sum_{\gamma,\gamma^\prime}A_{\rm I1}^{\gamma\gamma^\prime}(t)
a_{\gamma^\prime}\ket{\Phi^\gamma}.\hskip 0.5cm\end{eqnarray}
\subsection{Improved forms of the second and third order perturbed solutions}
Likewise, it is not difficult to obtain \begin{eqnarray} A_{\rm
I2}^{\gamma,\gamma^\prime}(t)&=&\sum_{\gamma_1}\left\{-\left[\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma+G^{(3)}_\gamma\right) t}-{\rm e}^{-{\rm i}
\left(E_{\gamma_1}+G^{(2)}_{\gamma_1}+G^{(3)}_{\gamma_1}\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}\right.\nonumber\\
& &+\left[\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma}+G^{(2)}_{\gamma}+G^{(3)}_{\gamma}\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma_1}+G^{(2)}_{\gamma_1}+G^{(3)}_{\gamma_1}\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &\left.\left. +\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma^\prime}+G^{(2)}_{\gamma^\prime}+G^{(3)}_{\gamma^\prime}\right)
t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}\right\},\end{eqnarray}
\begin{eqnarray} A_{\rm
I3}^{\gamma,\gamma^\prime}(t)&=&\sum_{\gamma_1,\gamma_2}\left[-\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2}-\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)}\right.\nonumber\\
& &\left.+\frac{{\rm e}^{-{\rm i} \left(E_{\gamma_1}+G^{(2)}_{\gamma_1}\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)}-\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma_2}+G^{(2)}_{\gamma_2}\right)
t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma}\delta_{\gamma\gamma^\prime}
\nonumber\\
& &-\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{{\rm e}^{-{\rm i}
\left(E_\gamma+G^{(2)}_\gamma\right)
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}g_1^{\gamma\gamma^\prime}\nonumber\\
& &+\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma}+G^{(2)}_{\gamma}\right)
t}\eta_{\gamma\gamma_2}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma_1}+G^{(2)}_{\gamma_1}\right)
t}\eta_{\gamma_1\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &\left. +\frac{{\rm e}^{-{\rm i}
\left(E_{\gamma_2}+G^{(2)}_{\gamma_2}\right)
t}\eta_{\gamma\gamma_2}}{\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma^\prime}.\end{eqnarray}
Therefore, the improved forms of the second- and third order
perturbed solutions are, respectively, \begin{eqnarray} \label{ips2}
\ket{\Psi^{(2)}(t)}_{\rm I}&=&\sum_{\gamma,\gamma^\prime}A_{\rm
I2}^{\gamma,\gamma^\prime}(t)
a_{\gamma^\prime}\ket{\Phi^\gamma},\\
\label{ips3} \ket{\Psi^{(3)}(t)}_{\rm
I}&=&\sum_{\gamma,\gamma^\prime}A_{\rm I3}^{\gamma,\gamma^\prime}(t)
a_{\gamma^\prime}\ket{\Phi^\gamma}.\end{eqnarray}
\subsection{Summary}
Obviously, our improved form of perturbed solution of dynamics up to
the third order approximation is \begin{eqnarray}
\ket{\Psi(t)}&=&\sum_{i=0}^3\ket{\Psi^{(i)}(t)}_{\rm
I}+\mathcal{O}(H_1^4).\end{eqnarray} However, this solution absorbs the
contributions from the whole $A_l^{\gamma\gamma^\prime}(t{\rm e})$,
$A_l^{\gamma\gamma^\prime}(t^2{\rm e})$ parts up to the fifth order
approximation and the whole $A_l^{\gamma\gamma^\prime}(t^2{\rm e})$,
$A_l^{\gamma\gamma^\prime}(t^3{\rm e})$ parts in the sixth order
approximation. After considering the contractions and
anti-contractions, we find that the result corresponds to the replacement
\begin{equation} {\rm e}^{-{\rm i} E_{\gamma_i} t}\rightarrow {\rm e}^{-{\rm i}
\widetilde{E}_{\gamma_i} t}, \end{equation} in the
$A_l^{\gamma\gamma^\prime}({\rm e})$ part, where \begin{equation}
\widetilde{E}_{\gamma_i}={E}_{\gamma_i}+h^{\gamma_i}+\sum_{a=2}G_{\gamma_i}^{(a)},
\end{equation} $i=0,1,2,\cdots$, and $\gamma_0=\gamma$. Here, we have absorbed
the possible contributions from the diagonal elements of the
perturbing Hamiltonian matrix. Although the upper bound of the summation
index $a$ differs from the approximation order in the finished
calculations, we conjecture, based on the physical concept and the
mathematical symmetry, that it may be taken up to at least 5.
For $a\geq 5$, the forms should be similar. In our view, this form
is so delicate that it can hardly arise by accident. Perhaps there
is a fundamental formula behind it. Nevertheless, at present we do
not know how to prove it strictly and generally.
Actually, as soon as we carry out further calculations, we can
absorb the contributions from higher order approximations. Moreover,
these calculations are not difficult and are programmable, because we
only need to compute limits and summations. Therefore, the
advantages of our solution become clear in our improved
forms of the perturbed solution of dynamics. In other words, they offer
clear evidence that our improved scheme surpasses the
existing method in precision and efficiency. In the following
several sections, we will demonstrate these points clearly.
\section{Improved transition probability and revised Fermi's golden rule}\label{sec4}
One of the interesting applications of our perturbed solution is the
calculation of the transition probability in general time-independent
quantum systems. It ameliorates the well-known conclusion,
because our solution absorbs the contributions from the high order
approximations of the perturbing Hamiltonian. Moreover, in terms of
our improved forms of perturbed solutions, it is easy to obtain the
high order transition probabilities. In addition, our scheme is also
suitable for the case of sudden perturbation.
Let us start with the following perturbative expansion of the state
evolution with time $t$: \begin{equation}
\ket{\Psi(t)}=\sum_{\gamma}c_\gamma(t)\ket{\Phi^\gamma}=\sum_{n=0}^\infty
\sum_{\gamma}c_\gamma^{(n)}(t)\ket{\Phi^\gamma}.\end{equation} When we take
the initial state as $\ket{\Phi^\beta}$, from our improved first
order perturbed solution, we immediately obtain \begin{equation} c_{\gamma,{\rm
I}}^{(1)}=\frac{{\rm e}^{-{\rm i} \widetilde{E}_\gamma t}-{\rm e}^{-{\rm i}
\widetilde{E}_{\beta}t}}{E_{\gamma}-E_{\beta}}
g_1^{\gamma\beta},\end{equation} where \begin{equation}
\widetilde{E}_{\gamma_i}=E_{\gamma_i}+h_1^{\gamma_i}+G_{\gamma_i}^{(2)}
+G_{\gamma_i}^{(3)}+G_{\gamma_i}^{(4)}.\end{equation} Here, we use the
subscript ``I" for distinguishing it from the usual result. Omitting
a unimportant phase factor ${\rm e}^{-{\rm i} \widetilde{E}_{\gamma}t}$, we
can rewrite it as \begin{equation} c_{\gamma,{\rm I}}^{(1)}=\frac{
g_1^{\gamma\beta}}{E_{\gamma}-E_{\beta}}
\left(1-{\rm e}^{{\rm i}\widetilde{\omega}_{\gamma\beta}t}\right),\end{equation} where
$\widetilde{\omega}_{\gamma\beta}=\widetilde{E}_{\gamma}-\widetilde{E}_{\beta}.$
Obviously, it is different from the well-known conclusion \begin{equation}
c_{\gamma}^{(1)}=\frac{ g_1^{\gamma\beta}}{E_{\gamma}-E_{\beta}}
\left(1-{\rm e}^{{\rm i} {\omega}_{\gamma\beta}t}\right),\end{equation} where $
\omega_{\gamma\beta}=E_{\gamma}-E_{\beta}$. Therefore, our result
contains the partial contributions from the high order
approximations.
Considering the transition probability from $\ket{\Phi^\beta}$ to
$\ket{\Phi^\gamma}$ after time $T$, we have \begin{equation} P_{\rm
I}^{\gamma\beta}(T)=\frac{\left|g_1^{\gamma\beta}\right|^2}{\omega_{\gamma\beta}^2}
\left|1-{\rm e}^{{\rm i}\widetilde{\omega}_{\gamma\beta}T}\right|^2
=\left|g_1^{\gamma\beta}\right|^2\frac{\sin^2\left(\widetilde{\omega}_{\gamma\beta}T/2\right)}
{\left({\omega}_{\gamma\beta}/2\right)^2}.\end{equation}
In terms of the relation \begin{equation}
\sin^2x-\sin^2y=\frac{1}{2}\left[\cos(2y)-\cos(2x)\right],\end{equation} we
have the revision part of the transition probability \begin{equation}
\vartriangle\!\!P_{\rm I}^{\gamma\beta}(T)
=2\left|g_1^{\gamma\beta}\right|^2\frac{\cos\left({\omega}_{\gamma\beta}T\right)
-\cos\left(\widetilde{\omega}_{\gamma\beta}T\right)}
{\left({\omega}_{\gamma\beta}\right)^2}.\end{equation}
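Explicitly, this revision part is obtained by writing
\begin{equation}
\vartriangle\!\!P_{\rm I}^{\gamma\beta}(T)=P_{\rm I}^{\gamma\beta}(T)-P^{\gamma\beta}(T)
=\left|g_1^{\gamma\beta}\right|^2\,
\frac{\sin^2\left(\widetilde{\omega}_{\gamma\beta}T/2\right)
-\sin^2\left({\omega}_{\gamma\beta}T/2\right)}
{\left({\omega}_{\gamma\beta}/2\right)^2},
\end{equation}
where $P^{\gamma\beta}(T)$ denotes the usual first order transition
probability, and then taking $x=\widetilde{\omega}_{\gamma\beta}T/2$ and
$y={\omega}_{\gamma\beta}T/2$ in the above relation.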
Plotting \begin{equation}
\frac{\sin^2\left(\widetilde{\omega}_{\gamma\beta}T/2\right)}{\left({\omega}_{\gamma\beta}/2\right)^2}=
\left(\frac{\widetilde{\omega}_{\gamma\beta}}{{\omega}_{\gamma\beta}}\right)^2
\frac{\sin^2\left(\widetilde{\omega}_{\gamma\beta}T/2\right)}
{\left(\widetilde{\omega}_{\gamma\beta}/2\right)^2},\end{equation} we can see
that it has a well-defined peak centered at
$\widetilde{\omega}_{\gamma\beta}=0$. Just as what has been done in
the usual case, we can extend the integral range as
$-\infty\rightarrow\infty$. Thus, the revised Fermi's golden rule
\begin{equation} w=w_{\rm F}+\vartriangle\!\!w,\end{equation} where the usual Fermi's
golden rule is \cite{Fermi} \begin{equation} w_{\rm F}=2\pi\rho(E_\beta)
\left|g_1^{\gamma\beta}\right|^2,\end{equation} in which $w$ denotes the
transition rate, $\rho(E_\gamma)$ is the density of final states,
and we have used the integral formula \begin{equation}
\int_{-\infty}^\infty\frac{\sin^2x}{x^2}\,{\rm d} x=\pi, \end{equation} while the
revision part is \begin{equation} \vartriangle\!\!w=2\int_{-\infty}^\infty {\rm d}
E_\gamma \rho\left(E_{\gamma}\right)\left|g_1^{\gamma\beta}\right|^2
\frac{\cos\left({\omega}_{\gamma\beta}T\right)
-\cos\left(\widetilde{\omega}_{\gamma\beta}T\right)}
{T\left({\omega}_{\gamma\beta}\right)^2}.\end{equation} It is clear that $
\widetilde{\omega}_{\gamma\beta}$ is a function of $E_\gamma$, and
hence a function of $\omega_{\gamma\beta}$. For simplicity, we only
consider $\widetilde{\omega}_{\gamma\beta}$ up to its second order
approximation, that is, \begin{equation}
\widetilde{\omega}_{\gamma\beta}=\widetilde{\omega}\left(\omega_{\gamma\beta}\right)
={\omega}_{\gamma\beta}+\sum_{\gamma_1}\left[
\frac{\left|g_1^{\gamma\gamma_1}\right|^2}{{\omega}_{\gamma\beta}-{\omega}_{\gamma_1\beta}}
-\frac{\left|g_1^{\beta\gamma_1}\right|^2}{{\omega}_{\beta\gamma_1}}\right]+\mathcal{O}(H_1^3).\end{equation}
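Note that this is simply
$\widetilde{\omega}_{\gamma\beta}={\omega}_{\gamma\beta}+G^{(2)}_{\gamma}
-G^{(2)}_{\beta}+\mathcal{O}(H_1^3)$ written out in terms of frequencies, since
\begin{equation}
G^{(2)}_{\gamma}=\sum_{\gamma_1}\frac{\left|g_1^{\gamma\gamma_1}\right|^2}{E_{\gamma}-E_{\gamma_1}}
=\sum_{\gamma_1}\frac{\left|g_1^{\gamma\gamma_1}\right|^2}{{\omega}_{\gamma\beta}-{\omega}_{\gamma_1\beta}},
\qquad
G^{(2)}_{\beta}=\sum_{\gamma_1}\frac{\left|g_1^{\beta\gamma_1}\right|^2}{{\omega}_{\beta\gamma_1}}.
\end{equation}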
Again based on ${\rm d} E_\gamma={\rm d}{\omega}_{\gamma\beta}$, we have \begin{equation}
\vartriangle\!\!w=2\int_{-\infty}^\infty {\rm d} \omega_{\gamma\beta}
\rho\left(\omega_{\gamma\beta}+E_{\beta}\right)\left|g_1^{\gamma\beta}\right|^2
\frac{\cos\left[{\omega}_{\gamma\beta}T\right]
-\cos\left[\widetilde{\omega}_{\gamma\beta}\left(\omega\right)T\right]}
{T\left({\omega}_{\gamma\beta}\right)^2}.\end{equation} It does not seem
easy to deduce the general form of this integral. In order to
simplify it, we can use the fact that
$\widetilde{\omega}_{\gamma\beta}-{\omega}_{\gamma\beta}$ is a
small quantity, since \begin{equation}
\vartriangle\!\!\omega_{\gamma\beta}=\widetilde{\omega}_{\gamma\beta}-{\omega}_{\gamma\beta}
=\sum_{i=2}^4 \left(G_{\gamma}^{(i)}-G_{\beta}^{(i)}\right).\end{equation} For
example, we can approximately take \begin{equation}
\cos\left({\omega}_{\gamma\beta}T\right)
-\cos\left(\widetilde{\omega}_{\gamma\beta}T\right)\approx
T\left(\widetilde{\omega}_{\gamma\beta}-{\omega}_{\gamma\beta}\right)
\sin\left[\frac{\left(\widetilde{\omega}_{\gamma\beta}
+{\omega}_{\gamma\beta}\right)T}{2}\right],\end{equation}
then calculate the integral. We will study it in our other
manuscript (in preparation).
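Indeed, the above approximation is simply the small
$\vartriangle\!\!\omega_{\gamma\beta}T$ limit of the exact product formula
\begin{equation}
\cos\left({\omega}_{\gamma\beta}T\right)-\cos\left(\widetilde{\omega}_{\gamma\beta}T\right)
=2\sin\left[\frac{\left(\widetilde{\omega}_{\gamma\beta}+{\omega}_{\gamma\beta}\right)T}{2}\right]
\sin\left[\frac{\left(\widetilde{\omega}_{\gamma\beta}-{\omega}_{\gamma\beta}\right)T}{2}\right],
\end{equation}
in which the second sine factor has been replaced by its argument.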
Obviously, the revision comes from the contributions of the high order
approximations. The physical effects resulting from our solution,
whether important or not, should be investigated in some
concrete quantum systems. Recently, we have reconsidered the transition
probability and the perturbed energy for a Hydrogen atom in a constant
magnetic field \cite{Ourtp1}. We find that the results obtained by using
our improved scheme are indeed more satisfactory in calculational
precision and efficiency. We will discuss more examples in our
future manuscripts (in preparation).
It is clear that the relevant results can be obtained from the usual
results by replacing $\omega_{\gamma\beta}$ in the exponential
power with $\widetilde{\omega}_{\gamma\beta}$. Thus, one thing
is true: as the time $t$ evolves, the
${\rm e}^{\pm{\rm i}\left(\widetilde{\omega}_{\gamma\beta}t/2\right)}$ term in
the improved transition probability can become very different from
${\rm e}^{\pm{\rm i}\left({\omega}_{\gamma\beta}t/2\right)}$ in the
traditional one, which might lead to totally different results. To
save space, we do not discuss this further here.
In fact, there is no difficulty in obtaining the second and third
order transition probabilities in terms of our improved forms of
perturbed solutions in the previous section. Higher order
transition probabilities can also be given effectively and accurately
by our scheme.
\section{Improved forms of perturbed energy and perturbed state}\label{sec5}
Now we study how to calculate the improved forms of the perturbed energy
and the perturbed state. For simplicity, we only study them up to
the improved second order approximation. Based on the experience
with the first skill in Sec. \ref{sec6}, we can, in fact, set a new
$\widetilde{E}$ and then use the technology of the usual
perturbation theory. That is, we denote \begin{equation}
\widetilde{E}_{\gamma_i}=E_{\gamma_i}+G_{\gamma_i}^{(2)}
+G_{\gamma_i}^{(3)},\end{equation} so that the perturbed solution up to the
second order approximation takes the form
\begin{eqnarray} \label{ipsto2o}
\ket{\Psi(t)}&=&\sum_{\gamma,\gamma^\prime}\left\{{\rm e}^{-{\rm i}\widetilde{E}_\gamma
t}\delta_{\gamma\gamma^\prime}+\left[\frac{{\rm e}^{-{\rm i}
\widetilde{E}_\gamma t}-{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma^\prime}t}}{E_{\gamma}-E_{\gamma^\prime}}\right]
g_1^{\gamma\gamma^\prime}-\sum_{\gamma_1}\frac{{\rm e}^{-{\rm i}
\widetilde{E}_\gamma t}-{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}
\right.\nonumber\\
& & +\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &\left.\left. +\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}\right\}
a_{\gamma^\prime}\ket{\Phi^\gamma}+\mathcal{O}(H_1^3).\end{eqnarray} Note
that $E_{\gamma_i}$ can contain the diagonal element
$h_1^{\gamma_i}$ of the original $H_1$; we do not write
$h_1^{\gamma_i}$ explicitly, and we take the new $H_1$ matrix to be
off-diagonal in the $H_0$ representation.
Since \begin{equation} \ket{\Psi(t)}=\sum_{\gamma,\gamma^\prime}{\rm e}^{-{\rm i}
{E}_T
t}\delta_{\gamma\gamma^\prime}a_{\gamma^\prime}\ket{\Phi^\gamma},\end{equation}
we have \begin{eqnarray}\label{ipee} E_T
a_{\gamma}&=&\widetilde{E}_{\gamma}a_\gamma+\sum_{\gamma^\prime}\left\{\frac{
\widetilde{E}_\gamma-\widetilde{E}_{\gamma^\prime}}{E_{\gamma}-E_{\gamma^\prime}}
g_1^{\gamma\gamma^\prime}-\sum_{\gamma_1}\frac{\widetilde{E}_\gamma-\widetilde{E}_{\gamma_1}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}
+\sum_{\gamma_1}\left[\frac{
\widetilde{E}_{\gamma}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right.\right.\nonumber\\
& &\left.\left.-\frac{
\widetilde{E}_{\gamma_1}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
+\frac{\widetilde{E}_{\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}\right\}
a_{\gamma^\prime}.\end{eqnarray}
In the usual perturbation theory, $H_1$ is taken as a perturbing
part of the form \begin{equation} H_1=\lambda v,\end{equation} where $\lambda$ is a real
number called the perturbing parameter. It must be
emphasized that $\widetilde{E}_{\gamma_i}$ can be taken as
explicitly independent of the perturbing parameter $\lambda$, because we
introduce $\lambda$ as a formal multiplier after the redefinition. In
other words, $\widetilde{E}_{\gamma_i}$ has absorbed the terms
added to it and formed a new quantity. This approach has been seen in
our Hamiltonian redivision skill. Without loss of generality, we
further take $H_1$ only with the off-diagonal form, that is \begin{equation}
H_1^{\gamma_1\gamma_2}={g}_1^{\gamma_1\gamma_2}=\lambda
{v}^{\gamma_1\gamma_2}.\end{equation} Then, we expand both the desired
expansion coefficients $a_\gamma$ and the energy eigenvalues $E_T$
in a power series of perturbation parameter $\lambda$: \begin{eqnarray}
E_T&=&\sum_{l=0}^\infty \lambda^l E_{T,{\rm I}}^{(l)},\quad
a_\gamma=\sum_{l=0}^\infty \lambda^l a_{\gamma;{\rm I}}^{(l)}.\end{eqnarray}
\subsubsection{Improved 0th approximation}
If we set $\lambda=0$, eq.(\ref{ipee}) yields \begin{equation} E_{T,{\rm
I}}^{(0)} a_{\gamma;{\rm I}}^{(0)}=\widetilde{E}_\gamma
a_{\gamma;{\rm I}}^{(0)},\end{equation} where $\gamma$ runs over all levels.
Now let us focus on the level $\gamma=\beta$; then \begin{equation}
\label{0the} E_{T,{\rm I}}^{(0)}=\widetilde{E}_\beta .\end{equation} When the
initial state is taken as $\ket{\Phi^\beta}$, \begin{equation} \label{0ths}
a_{\gamma;{\rm I}}^{(0)}=\delta_{\gamma\beta}.\end{equation} Obviously, the
improved form of perturbed energy is different from the results in
the usual perturbative theory because it absorbs the contributions
from the higher order approximations. However, the so-called improved
form of the perturbed state is the same as the usual result.
\subsubsection{Improved 1st approximation}
Again from eq.(\ref{ipee}) it follows that \begin{equation} E_{T,{\rm I}}^{(0)}
a_{\gamma;{\rm I}}^{(1)} +E_{T,{\rm I}}^{(1)}a_{\gamma;{\rm
I}}^{(0)} =\widetilde{E}_\gamma a_{\gamma;{\rm I}}^{(1)}
+\sum_{\gamma^\prime}\frac{\widetilde{E}_\gamma-\widetilde{E}_{\gamma^\prime}}{E_\gamma-E_{\gamma^\prime}}
v^{\gamma\gamma^\prime}a_{\gamma^\prime;{\rm I}}^{(0)}.\end{equation} When
$\gamma=\beta$, it is easy to obtain \begin{equation} E_{T,{\rm I}}^{(1)}=0.
\end{equation} If $\gamma\neq\beta$, then \begin{equation} a_{\gamma;{\rm
I}}^{(1)}=-\frac{1}
{\left(E_\gamma-E_{\beta}\right)}{v}^{\gamma\beta}.\end{equation} It is clear
that the first order results are the same as those in the usual
perturbation theory.
\subsubsection{Improved 2nd approximation}
Likewise, the following equation \begin{eqnarray} & & E_{T,{\rm I}}^{(2)}
a_{\gamma;{\rm I}}^{(0)}+ E_{T,{\rm
I}}^{(1)}a_{\gamma;I}^{(1)}+E_{T,{\rm I}}^{(0)}a_{\gamma;{\rm
I}}^{(2)} =\widetilde{E}_\gamma a_{\gamma;{\rm I}}^{(2)}
+\sum_{\gamma^\prime}\frac{\widetilde{E}_\gamma-\widetilde{E}_{\gamma^\prime}}{E_\gamma-E_{\gamma^\prime}}
{v}^{\gamma\gamma^\prime}a_{\gamma^\prime;{\rm I}}^{(1)}
-\sum_{\gamma_1,\gamma^\prime}\frac{\widetilde{E}_\gamma-\widetilde{E}_{\gamma_1}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2}
v^{\gamma\gamma_1}v^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}a_{\gamma^\prime;{\rm
I}}^{(0)} \nonumber\\
& &\quad +\sum_{\gamma_1,\gamma^\prime}\left[\frac{
\widetilde{E}_{\gamma}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)} -\frac{
\widetilde{E}_{\gamma_1}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
+\frac{\widetilde{E}_{\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
v^{\gamma\gamma_1}v^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}
a_{\gamma^\prime;{\rm I}}^{(0)}.\hskip 0.5cm \end{eqnarray} is obtained, and
it yields \begin{equation} E_{T,{\rm I}}^{(2)}=0,\end{equation} if we take $\gamma=\beta$.
When $\gamma\neq\beta$, we have \begin{eqnarray} a_{\gamma;{\rm
I}}^{(2)}&=&\sum_{\gamma_1}\frac{1}
{\left(E_{\gamma}-E_{\beta}\right)
\left(E_{\gamma_1}-E_{\beta}\right)}
v^{\gamma\gamma_1}v^{\gamma_1\beta}\eta_{\gamma\beta}.\end{eqnarray} It is
consistent with the off-diagonal part of the usual result. In fact,
since we have taken $H_1^{\gamma\gamma^\prime}$ to be off-diagonal,
it does not have a diagonal part. However, we think this form is more
appropriate. In addition, we do not consider the revision part
introduced by normalization. Meanwhile, $E_{T,{\rm I}}^{(2)}=0$ is a new
result.
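This vanishing is natural: with $g_1^{\beta\gamma_1}=\lambda v^{\beta\gamma_1}$,
the quantity
\begin{equation}
G^{(2)}_{\beta}=\lambda^2\sum_{\gamma_1}\frac{\left|v^{\beta\gamma_1}\right|^2}{E_{\beta}-E_{\gamma_1}}
\end{equation}
is precisely the usual second order energy correction; it has already been
absorbed into the zeroth order result $E_{T,{\rm I}}^{(0)}=\widetilde{E}_\beta$,
so nothing remains at the second order.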
\subsubsection{Summary}
Now we can see that, up to the improved second order approximation, \begin{equation}
E_{T,\beta}\approx \widetilde{E}_\beta=E_{\beta}+G_{\beta}^{(2)}
+G_{\beta}^{(3)}.\end{equation} Compared with the usual results, they are
consistent in the first two orders. This is not strange, since the
physical law is the same. However, our improved form of the perturbed
energy contains a third order term. In other words, our solution
is effective in obtaining the contributions from the high order
approximations. The possible physical reason is that a
redefined form of the solution is obtained.
In particular, when we allow $H_1^{\gamma\gamma^\prime}$ to have
diagonal elements, the improved second order approximation
becomes \begin{equation} E_{T,\beta}\approx
E_{\beta}+h_1^{\beta}+G_{\beta}^{(2)} +G_{\beta}^{(3)}.\end{equation}
Likewise, if we redefine \begin{equation}
\widetilde{E}_{\gamma_i}=E_{\gamma_i}+h_1^{\gamma_i}+G_{\gamma_i}^{(2)}
+G_{\gamma_i}^{(3)}+G_{\gamma_i}^{(4)},\end{equation} then, considering only
the first order approximation, we can obtain \begin{equation} E_{T,\beta}\approx
E_{\beta}+h_1^{\beta}+G_{\beta}^{(2)}
+G_{\beta}^{(3)}+G_{\beta}^{(4)}. \end{equation} In fact, the reason lies in our
conjecture in the previous section: the correct form of the redefined
$\widetilde{E}_{\gamma_i}$ should lead to \begin{equation} E_{T,\beta}\approx
E_{\beta}+h_1^{\beta}+G_{\beta}^{(2)}
+G_{\beta}^{(3)}+G_{\beta}^{(4)} +G_{\beta}^{(5)}+\cdots. \end{equation} This
implies that our improved scheme absorbs the partial, even whole,
significant contributions from the high order approximations. In
addition, based on the fact that the improved second order correction
is actually zero, it is possible that the corrections in our scheme
fade away more rapidly than those in the usual perturbation
theory.
Actually, the main advantage of our solution lies in its dynamical
development. The contributions from the high order approximations
play more important roles in the relevant physical problems, such as
entanglement dynamics and decoherence processes. For the improved
perturbed energy, its high order part has an obvious physical meaning.
However, for the improved form of the perturbed state, we find it to be
the same as in the existing perturbation theory up to the second
order approximation.
\section{Example and application}\label{sec6}
In order to concretely illustrate that our exact solution and the
improved scheme of perturbation theory are indeed more effective and
more accurate, let us study a simple example: the two-state system,
which appears in most quantum mechanics textbooks. Its
Hamiltonian can be written as\begin{equation} H=\left(\begin{array}{cc}
E_1&V_{12}\\
V_{21}& E_2\end{array}\right),\end{equation} where we have used the basis
formed by the unperturbed energy eigenvectors, that is \begin{equation}
\ket{\Phi^1}=\left(\begin{array}{c}1\\0\end{array}\right),\quad
\ket{\Phi^2}=\left(\begin{array}{c}0\\1\end{array}\right). \end{equation} In
other words: \begin{equation} H_0\ket{\Phi^\gamma}=E_\gamma\ket{\Phi^\gamma},
\quad (\gamma=1,2)\end{equation} \vskip -0.5cm \noindent where \vskip
-0.5cm\begin{equation} H_0=\left(\begin{array}{cc}
E_1&0\\
0& E_2\end{array}\right).\end{equation} Thus, the perturbing
Hamiltonian is taken as \begin{equation} H_1=\left(\begin{array}{cc}
0& V_{12}\\
V_{21}& 0\end{array}\right).\end{equation} This two-state system has the
following eigenequation: \begin{equation}
H\ket{\Psi^\gamma}=E^T_{\gamma}\ket{\Psi^\gamma}. \end{equation} It is easy to
obtain its solution, namely the corresponding eigenvectors and eigenvalues:
\begin{eqnarray}
\ket{\Psi^1}&=&\frac{1}{\sqrt{4|V|^2+(\omega_{21}+{\omega}_{21}^T)^2}}
\left(\begin{array}{c}\omega_{21}+{\omega}_{21}^T\\-2V_{21}\end{array}\right),\\
\ket{\Psi^2}&=&\frac{1}{\sqrt{4|V|^2+(\omega_{21}-{\omega}_{21}^T)^2}}
\left(\begin{array}{c}\omega_{21}-{\omega}_{21}^T\\-2V_{21}\end{array}\right);
\end{eqnarray}
\begin{eqnarray} E_1^T=\frac{1}{2}\left(E_1+E_2-{\omega}_{21}^T\right),\quad
E_2^T=\frac{1}{2}\left(E_1+E_2+{\omega}_{21}^T\right); \end{eqnarray} where
$|V|=|V_{12}|=|V_{21}|$, $\omega_{21}=E_2-E_1$,
${\omega}_{21}^T=E_2^T-E_1^T=\sqrt{4|V|^2+\omega_{21}^2}$, and we
have set $E_2> E_1$ without loss of generality.
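These eigenvalues can be checked directly: they are the roots of the
characteristic equation
\begin{equation}
\lambda^2-\left(E_1+E_2\right)\lambda+E_1E_2-|V|^2=0,
\end{equation}
whose discriminant is
$\left(E_1+E_2\right)^2-4\left(E_1E_2-|V|^2\right)=\omega_{21}^2+4|V|^2
=\left({\omega}_{21}^T\right)^2$.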
Obviously the transition probability from state 1 to state 2 is
\begin{eqnarray} P^T(1\rightarrow 2)&=&\left|\bra{\Phi^2}{\rm e}^{-{\rm i} H
t}\ket{\Phi^1}\right|^2=\left|\sum_{\gamma_1,\gamma_2=1}^2\diracsp{\Phi^2}{\Psi^{\gamma_1}}
\bra{\Psi^{\gamma_1}}{\rm e}^{-{\rm i} H
t}\ket{\Psi^{\gamma_2}}\diracsp{\Psi^{\gamma_2}}{\Phi^1}\right|^2
=|V|^2\frac{\sin^2\left({\omega}_{21}^T
t/2\right)}{(\omega_{21}^T/2)^2}. \end{eqnarray}
In the usual perturbation theory, up to the second order
approximation, the well-known perturbed energies are \begin{eqnarray}
E_1^P=E_1-\frac{|V|^2}{\omega_{21}},\quad
E_2^P=E_2+\frac{|V|^2}{\omega_{21}}. \end{eqnarray} Meanwhile, under the first
order approximation, the transition probability from state 1 to
state 2 is \begin{equation} P(1\rightarrow
2)=|V|^2\frac{\sin^2\left(\omega_{21}t/2\right)}{(\omega_{21}/2)^2}.
\end{equation}
Using our improved scheme, only up to the first order approximation, we
get the corresponding perturbed energies \begin{eqnarray}
\widetilde{E}_1=E_1-\frac{|V|^2}{\omega_{21}}+\frac{|V|^4}{\omega_{21}^3},\quad
\widetilde{E}_2=E_2+\frac{|V|^2}{\omega_{21}}-\frac{|V|^4}{\omega_{21}^3},
\end{eqnarray} where we have used the facts that \begin{eqnarray}
G_1^{(2)}&=&-\frac{|V|^2}{\omega_{21}}=-G_2^{(2)},\quad
G_1^{(3)}=G_2^{(3)}=0,\quad
G_1^{(4)}=\frac{|V|^4}{\omega_{21}^3}=-G_2^{(4)}. \end{eqnarray} Here
$G^{(3)}$ vanishes because an odd number of purely off-diagonal
transitions cannot return to the initial state in a two-state
system. Obviously,
under the first order approximation, our scheme yields the
transition probability from state 1 to state 2 as \begin{equation} P_{\rm
I}(1\rightarrow
2)=|V|^2\frac{\sin^2\left(\widetilde{\omega}_{21}t/2\right)}{(\omega_{21}/2)^2},
\end{equation} where
$\widetilde{\omega}_{21}=\widetilde{E}_2-\widetilde{E}_1$. Therefore
we can say our scheme is more effective. Moreover, we notice that
\begin{eqnarray}
{E}_1^T&=&E_1-\frac{|V|^2}{\omega_{21}}+\frac{|V|^4}{\omega_{21}^3}
+\mathcal{O}(|V|^6) =\widetilde{E}_1+\mathcal{O}(|V|^6)
={E}_1^P+\frac{|V|^4}{\omega_{21}^3}+\mathcal{O}(|V|^6),\\
{E}_2^T&=&E_2+\frac{|V|^2}{\omega_{21}}
-\frac{|V|^4}{\omega_{21}^3}+\mathcal{O}(|V|^6)
=\widetilde{E}_2+\mathcal{O}(|V|^6) =
{E}_2^P-\frac{|V|^4}{\omega_{21}^3}+\mathcal{O}(|V|^6), \end{eqnarray} which
follow from the expansion
${\omega}_{21}^T=\omega_{21}\sqrt{1+4|V|^2/\omega_{21}^2}
=\omega_{21}+2|V|^2/\omega_{21}-2|V|^4/\omega_{21}^3+\mathcal{O}(|V|^6)$; \vskip
-0.5cm and \begin{eqnarray} P^T(1\rightarrow
2)&=&|V|^2\frac{\sin^2\left(\omega_{21}t/2\right)}{(\omega_{21}/2)^2}
+|V|^2\left[\frac{t\sin\left(
\omega_{21}t\right)}{2(\omega_{21}/2)^2}-\frac{\sin^2\left(
\omega_{21}t/2\right)}{(\omega_{21}/2)^3}\right]
\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)
+\mathcal{O}[\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)^2]\\
&=& P_{\rm I}(1\rightarrow 2)-|V|^2\frac{\sin^2\left(
\omega_{21}t/2\right)}{(\omega_{21}/2)^3}
\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)
+\mathcal{O}[\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)^2]\\
&=& P(1\rightarrow 2)+|V|^2\left[\frac{t\sin\left(
\omega_{21}t\right)}{2(\omega_{21}/2)^2}-\frac{\sin^2\left(
\omega_{21}t/2\right)}{(\omega_{21}/2)^3}\right]
\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)
+\mathcal{O}[\left(\widetilde{\omega}_{21}-{\omega}_{21}\right)^2].\end{eqnarray}
Therefore, we can say that our scheme is more accurate.
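As a quick numerical cross-check of these statements (a minimal sketch:
the values $E_1=0$, $E_2=1$, $V=0.1$ and the time grid are our
illustrative choices, not taken from the text), one may compare the
exact, the usual first order and the improved first order transition
probabilities directly:
\begin{verbatim}
import numpy as np

# Two-state system: H0 = diag(E1, E2) with off-diagonal coupling |V|.
E1, E2, V = 0.0, 1.0, 0.1
w21 = E2 - E1
wT = np.sqrt(4*V**2 + w21**2)            # exact splitting omega_21^T
wI = w21 + 2*V**2/w21 - 2*V**4/w21**3    # improved omega~_21 = E~_2 - E~_1

t = np.linspace(0.0, 60.0, 6001)
P_exact    = V**2 * np.sin(wT*t/2)**2 / (wT/2)**2
P_usual    = V**2 * np.sin(w21*t/2)**2 / (w21/2)**2
P_improved = V**2 * np.sin(wI*t/2)**2 / (w21/2)**2

print("max |P_exact - P_usual|    =", np.abs(P_exact - P_usual).max())
print("max |P_exact - P_improved| =", np.abs(P_exact - P_improved).max())
# The improved probability tracks the exact one far better at long
# times, because its oscillation frequency agrees with omega_21^T
# up to corrections of order |V|^6.
\end{verbatim}
For these parameters, the phase error of the usual first order formula
is expected to grow like $2|V|^2t/\omega_{21}$, while that of the
improved one is only of order $|V|^6t$.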
\section{Discussion and conclusion}\label{sec7}
In this paper, our improved scheme of perturbation theory has been
proposed based on our exact solution in general quantum systems
\cite{My1}. Because our exact solution has a general term that is a
$c$-number function and proportional to a power of the perturbing
Hamiltonian, it provides the possibility of considering the partial
contributions from the high order, even all order, approximations,
while our dynamical rearrangement and summation method helps us to
realize this possibility. Since the contributions from the high
order, even all order, approximations are absorbed into the lower order
approximations, our scheme becomes an improved one.
It must be emphasized that our improved scheme of perturbation
theory is built largely upon a series of skills and methods that we
have found and developed. The Hamiltonian redivision skill
overcomes a flaw of the usual perturbation theory, improves the
calculation precision, extends the applicable range and removes the
possible degeneracies; the perturbing Hamiltonian matrix product
decomposition method separates the contraction terms from the
anti-contraction terms, eliminates the apparent divergences in the
power series of the perturbing Hamiltonian and provides the
groundwork for the ``dynamical rearrangement and summation''. We
have seen these ideas, skills and methods to be very useful.
Actually, the starting point of delaying the introduction of the
perturbing parameter as long as possible plays an important, even
key, role in our improved scheme of perturbation theory. It
enlightened us to seek the above skills and methods.
In passing from our exact solution to our improved scheme of
perturbation theory, we do not directly use the cut-off
approximation; rather, we first deal with the power series of the
perturbation so that the contributions from some high order, even
all order, approximations can be absorbed into the orders lower than
the cut-off order as far as possible. Hence, our improved scheme of
perturbation theory is physically reasonable, mathematically clear
and methodologically skillful. This provides the guarantee for achieving
high efficiency and high precision. Through finding the improved
forms of the perturbed solutions of dynamics, we generally demonstrate
this conclusion. Furthermore, we prove the correctness of this
conclusion by calculating the improved forms of the transition
probability, the perturbed energy and the perturbed state. In
particular, we obtain the revised Fermi's golden rule. Moreover, we
illustrate the advantages of our improved scheme in an easily
understood example, the two-state system. All of this indicates the
physical reasons and the evidence why our improved scheme of
perturbation theory is actually calculable, operationally efficient
and conclusively more accurate.
From the features of our improved scheme, we believe that it will
have interesting applications in the calculation of entanglement
dynamics and decoherence process as well as the other physical
quantities dependent on the expanding coefficients.
In fact, a given lower order approximation of the improved form of the
perturbed solution, absorbing the partial contributions from the
higher order, even all order, approximations, is obtained by our
dynamical rearrangement and summation method, just like the ``Feynman
diagram'' summation that has been done in quantum field theory.
It is emphasized that these contributions have to be significant in
physics. Keeping the time evolution form is our physical idea, and
absorbing the high order approximations with the factors $t^a{\rm e}^{-{\rm i}
E_{\gamma_i}t}$, $(a=1,2,\cdots)$ into the improved lower order
approximations definitely can advance the precision. Therefore,
using our dynamical rearrangement and summation method is
appropriate and reasonable in our view.
For a concrete example, besides some technological and
calculational work, extensive physical background
knowledge is needed to account for the significance of the related
results. That is, since the differences between the conclusions of our
improved scheme and those of the usual perturbation theory lie in the
high order approximation parts, we have to study the revisions
(differences) to find out whether they are important or unimportant
for the studied problems. In addition, our conjecture about the
perturbed energy is based on physical symmetry and mathematical
considerations; it is still open in the strict sense. As to the
degenerate cases, including, in particular, the case of vanishing
off-diagonal elements of the perturbing Hamiltonian matrix between
any two degenerate levels, we have discussed how to deal with them,
except for the very complicated cases in which the degeneracy cannot
be completely removed by the trick of diagonalizing the degenerate
subspaces and the Hamiltonian redivision skill, while the
off-diagonal elements of the perturbing Hamiltonian matrix between
some degenerate levels do not vanish when the remaining degeneracies
are allowed.
It must be emphasized that the study of the time evolution operator
plays a central role in quantum dynamics and perturbation theory.
Because of the universal significance of our general and explicit
expression of the time evolution operator, we expect that it will
have more applications in quantum theory. Besides the above studies
through the perturbative method, it is even more interesting to apply
our exact solution to the formal study of quantum dynamics, in
order to show the advantages of our exact solution further and more
powerfully.
In summary, our results can be thought of as theoretical
developments of perturbation theory, and they are helpful for
understanding the theory of quantum mechanics and for providing some
powerful tools for the calculation of physical quantities in general
quantum systems. Together with our exact solution \cite{My1} and
open system dynamics \cite{My3}, they can finally form the
foundation of the theoretical formalism of quantum mechanics in
general quantum systems. Further study on the quantum mechanics of
general quantum systems is in progress.
\section*{Acknowledgments}
We are grateful to all the collaborators of our quantum theory group in
the Institute for Theoretical Physics of our university. This work
was funded by the National Fundamental Research Program of China
under No. 2001CB309310, and partially supported by the National
Natural Science Foundation of China under Grant No. 60573008.
\begin{appendix}
\renewcommand{\theequation}{\thesection\arabic{equation}}
\section{The calculations of the high order terms}\label{a1}
Since we have taken $H_1$ with only the off-diagonal part, it is
enough to calculate the contributions from it. In Sec. \ref{sec6}
the contributions from the first, second and third order
approximations have been given. In this appendix, we would like to
find the contributions from the fourth to the sixth order
approximations. The calculational techniques we use are mainly
the limit process, dummy index changing and summation, as well as
the replacement
$g_1^{\gamma_i\gamma_j}\eta_{\gamma_i\gamma_j}=g_1^{\gamma_i\gamma_j}$,
since $g_1^{\gamma_i\gamma_j}$ is already off-diagonal. These
calculations are not difficult, but they are a little lengthy.
\subsection{$l=4$ case}
For the fourth order approximation, the contributions from the first
decomposition consist of eight terms: \begin{eqnarray} \label{A4fc}
A_4^{\gamma\gamma^\prime}&=&A_4^{\gamma\gamma^\prime}(ccc)
+A_4^{\gamma\gamma^\prime}(ccn)+A_4^{\gamma\gamma^\prime}(cnc)
+A_4^{\gamma\gamma^\prime}(ncc)\nonumber\\
& & +A_4^{\gamma\gamma^\prime}(cnn)+A_4^{\gamma\gamma^\prime}(ncn)
+A_4^{\gamma\gamma^\prime}(nnc)+{A}_4^{\gamma\gamma^\prime}(nnn).
\end{eqnarray} The first four terms have no nontrivial second
contractions, while the fifth to seventh terms each have one nontrivial
second contraction, as follows: \begin{eqnarray} {A}_4^{\gamma\gamma^\prime}(cnn)
&=&{A}_4^{\gamma\gamma^\prime}(cnn,kc) +
{A}_4^{\gamma\gamma^\prime}(cnn,kn),\\
{A}_4^{\gamma\gamma^\prime}(ncn)
&=&{A}_4^{\gamma\gamma^\prime}(ncn,c)
+{A}_4^{\gamma\gamma^\prime}(ncn,n), \\
A_4^{\gamma\gamma^\prime}(nnc)
&=&{A}_4^{\gamma\gamma^\prime}(nnc,ck) +
{A}_4^{\gamma\gamma^\prime}(nnc,nk). \end{eqnarray} In addition, the last
term in eq.(\ref{A4fc}) has two nontrivial second contractions, and
its fourth term also has a nontrivial third contraction. Hence \begin{eqnarray}
{A}_4^{\gamma\gamma^\prime}(nnn)&=&{A}_4^{\gamma\gamma^\prime}(nnn,cc)
+{A}_4^{\gamma\gamma^\prime}(nnn,cn)+{A}_4^{\gamma\gamma^\prime}(nnn,nc)
+{A}_4^{\gamma\gamma^\prime}(nnn,nn),\\
{A}_4^{\gamma\gamma^\prime}(nnn,nn)&=&{A}_4^{\gamma\gamma^\prime}(nnn,nn,c)
+{A}_4^{\gamma\gamma^\prime}(nnn,nn,n).\end{eqnarray} Altogether, we have
fifteen terms ($4+2+2+2+5$), which are the contributions from all the
contractions and anti-contractions of the fourth order approximation.
First, let us calculate the first four terms, which have only the first
contractions and anti-contractions, that is, more than two
$\delta$ functions (or fewer than two $\eta$ functions):
\begin{eqnarray}\label{A4ccc} {A}_4^{\gamma\gamma^\prime}(ccc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\delta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1}\left[\frac{3{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^4}-\frac{3{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^4}-(-{\rm i}
t)\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3}\right.\nonumber\\
& &\left.-(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3}+\frac{(-{\rm i}
t)^2}{2}\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2}\right]
\left|g_1^{\gamma\gamma_1}\right|^4\delta_{\gamma\gamma^\prime}.\end{eqnarray}
\begin{eqnarray}\label{A4ccn} {A}_4^{\gamma\gamma^\prime}(ccn)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma_1\gamma_3}{\delta}_{\gamma_2\gamma_4}\eta_{\gamma_3\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3
\left(E_{\gamma}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& & +\frac{2{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}+(-{\rm i}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)}
\nonumber\\
& & \left. +(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
\right]\left|g_1^{\gamma\gamma_1}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\hskip 1.2cm\end{eqnarray}
\begin{eqnarray}\label{A4cnc} {A}_4^{\gamma\gamma^\prime}(cnc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}\delta_{\gamma_3\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)^3} +\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma_2}\right)^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3\left(E_{\gamma}-E_{\gamma_2}\right)}
\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^3\left(E_{\gamma_1}-E_{\gamma_2}\right)}
+\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_\gamma-E_{\gamma_2}\right)^3\left(E_{\gamma_1}-E_{\gamma_2}\right)}-(-{\rm i}
t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2}\nonumber\\
& &\left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)}
+\frac{(-{\rm i} t)^2}{2}\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)}\right]\nonumber\\
& &\times\left|g_1^{\gamma\gamma_1}\right|^2
\left|g_1^{\gamma\gamma_2}\right|^2
\eta_{\gamma_1\gamma_2}\delta_{\gamma\gamma^\prime}.\hskip 1.2cm
\end{eqnarray}
\begin{eqnarray}\label{A4ncc} {A}_4^{\gamma\gamma^\prime}(ncc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\eta_{\gamma_1\gamma_3}{\delta}_{\gamma_2\gamma_4}{\delta}_{\gamma_3\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{2{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^3}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& &-\frac{2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^3}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2} -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\nonumber\\
& & \left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right]\left|g_1^{\gamma_1\gamma^\prime}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\hskip 1.2cm\end{eqnarray}
Then, we calculate the three terms with a single first
contraction, that is, with one $\delta$ function. Because one
$\delta$ function cannot eliminate the whole apparent singularity,
we also need to find the nontrivial second contraction and/or
anti-contraction terms. \begin{eqnarray}\label{A4cnn-kc}
{A}_4^{\gamma\gamma^\prime}(cnn,kc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
{\delta}_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}
\eta_{\gamma_3\gamma_5}{\delta}_{\gamma_2\gamma^\prime}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1}\left[-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^3}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}-\frac{2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^3
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}\nonumber\\
& &\left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma\gamma^\prime}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}.\hskip 1.2cm \end{eqnarray}
\begin{eqnarray}\label{A4cnn-kn} {A}_4^{\gamma\gamma^\prime}(cnn,kn)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
{\delta}_{\gamma_1\gamma_3}\eta_{\gamma_2\gamma_4}
\eta_{\gamma_3\gamma_5}\eta_{\gamma_2\gamma^\prime}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_\gamma-E_{\gamma_2}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_\gamma-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\nonumber\\
& &\left. +(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma\gamma_1}\right|^2
g_1^{\gamma\gamma_2}g_1^{\gamma_2\gamma^\prime}
\eta_{\gamma_1\gamma_2}\eta_{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma^\prime}. \hskip 1.2cm\end{eqnarray}
\begin{eqnarray}\label{A4ncn-c} {A}_4^{\gamma\gamma^\prime}(ncn,c)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\eta_{\gamma_1\gamma_3}{\delta}_{\gamma_2\gamma_4}\eta_{\gamma_3\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}{\delta}_{\gamma\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma_2}\right)^2}-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^3
\left(E_{\gamma}-E_{\gamma_2}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)^2}\right.\nonumber\\
& &+\frac{2{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^3
\left(E_{\gamma_1}-E_{\gamma_2}\right)} +\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)^2}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma_2}\right)}\nonumber\\
& &\left. +(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)}\right]\left|g_1^{\gamma\gamma_1}\right|^2
\left|g_1^{\gamma_1\gamma_2}\right|^2\eta_{\gamma\gamma_2}\delta_{\gamma\gamma^\prime}.
\hskip 1.2cm\end{eqnarray}
\begin{eqnarray}\label{A4ncn-n} {A}_4^{\gamma\gamma^\prime}(ncn,n)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\eta_{\gamma_1\gamma_3}{\delta}_{\gamma_2\gamma_4}\eta_{\gamma_3\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\eta_{\gamma\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& & +\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)^2
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\nonumber\\
& &\left.-(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma_1\gamma_2}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma_2}\eta_{\gamma_2\gamma^\prime}
\eta_{\gamma\gamma^\prime}. \hskip 1.2cm\end{eqnarray}
\begin{eqnarray}\label{A4nnc-ck} {A}_4^{\gamma\gamma^\prime}(nnc,ck)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\eta_{\gamma_1\gamma_3}
\eta_{\gamma_2\gamma_4}{\delta}_{\gamma_3\gamma_5}
{\delta}_{\gamma_1\gamma_4}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1}\left[-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^3}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}-\frac{2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^3
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}\nonumber\\
& &\left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma\gamma^\prime}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}.\hskip 1.2cm \end{eqnarray}
\begin{eqnarray}\label{A4nnc-nk} {A}_4^{\gamma\gamma^\prime}(nnc,nk)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]
\eta_{\gamma_1\gamma_3}
\eta_{\gamma_2\gamma_4}{\delta}_{\gamma_3\gamma_5}
\eta_{\gamma_1\gamma_4}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& & +\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_\gamma-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)^2}\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\nonumber\\
& &\left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\right]
\left|g_1^{\gamma_2\gamma^\prime}\right|^2
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma_2}\eta_{\gamma_1\gamma_2}
\eta_{\gamma\gamma^\prime}.\hskip 1.2cm \end{eqnarray}
Finally, we calculate ${A}_4^{\gamma\gamma^\prime}(nnn)$ by
considering the second decompositions at its two positions; the
first three terms read \vskip -0.5cm \begin{eqnarray}\label{A4nnn-cc}
{A}_4^{\gamma\gamma^\prime}(nnn,cc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\eta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma_4}\delta_{\gamma_2\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}
\nonumber\\
&=&\sum_{\gamma_1}\left[-\frac{2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^3}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}-\frac{2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^3
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}+(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}\nonumber\\
& &\left.-(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\right]g_1^{\gamma\gamma^\prime}
g_1^{\gamma^\prime\gamma_1}g_1^{\gamma_1\gamma}g_1^{\gamma\gamma^\prime}.
\hskip 1.2cm\end{eqnarray}
\vskip -0.5cm \begin{eqnarray}\label{A4nnn-cn}
{A}_4^{\gamma\gamma^\prime}(nnn,cn)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\eta_{\gamma_k\gamma_{k+2}}\right)
{\delta}_{\gamma_1\gamma_4}\eta_{\gamma_2\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}
\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)} \nonumber\\
& & \left. +(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma}
g_1^{\gamma\gamma^\prime}\eta_{\gamma_1\gamma^\prime}\eta_{\gamma_2\gamma^\prime}.
\hskip 0.8cm\end{eqnarray}
\vskip -0.5cm \begin{eqnarray}\label{A4nnn-nc}
{A}_4^{\gamma\gamma^\prime}(nnn,nc)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\eta_{\gamma_k\gamma_{k+2}}\right)
\eta_{\gamma_1\gamma_4}{\delta}_{\gamma_2\gamma_5}
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}
\nonumber\\
&=&\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right.\nonumber\\
& &+\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)^2}\nonumber\\
& & -\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}-\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\nonumber\\
& &\left. -(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\right]
g_1^{\gamma\gamma^\prime}g_1^{\gamma^\prime\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma_1}\eta_{\gamma\gamma_2}.
\hskip 0.8cm\end{eqnarray}
while the fourth term requires the third decomposition,
that is \begin{eqnarray}\label{A4nnn-nn-c}
{A}_4^{\gamma\gamma^\prime}(nnn,nn,c)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\eta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}
\delta_{\gamma\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2,\gamma_3}\left[-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)^2}-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma}-E_{\gamma_3}\right)}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_3}\right)}\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma_3}\right)}+\frac{{\rm e}^{-{\rm i}
E_{\gamma_3}t}}{\left(E_{\gamma}-E_{\gamma_3}\right)^2\left(E_{\gamma_1}-E_{\gamma_3}\right)
\left(E_{\gamma_2}-E_{\gamma_3}\right)}\nonumber\\
& &\left. +(-{\rm i} t)\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}
g_1^{\gamma_3\gamma}\eta_{\gamma\gamma_2} \eta_{\gamma_1\gamma_3}
{\delta}_{\gamma\gamma^\prime}.\hskip 1.2cm \end{eqnarray}
\begin{eqnarray}\label{A4nnn-nn-n} {A}_4^{\gamma\gamma^\prime}(nnn,nn,n)&=
&\sum_{\gamma_1,\cdots,\gamma_{5}}\left[
\sum_{i=1}^{5}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_i}
t}}{d_i(E[\gamma,4])}\right]
\left[\prod_{j=1}^4g_1^{\gamma_j\gamma_{j+1}}\right]\left(
\prod_{k=1}^{3}\eta_{\gamma_k\gamma_{k+2}}\right)
\delta_{\gamma_1\gamma}\delta_{\gamma_{5}\gamma^\prime}
\eta_{\gamma\gamma^\prime}\nonumber\\
&=&\sum_{\gamma_1,\gamma_2,\gamma_3}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}\right.\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma_3}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}\nonumber\\
& &+\frac{{\rm e}^{-{\rm i}
E_{\gamma_2}t}}{\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma_3}\right)\left(E_{\gamma_2}-E_{\gamma^\prime}\right)}\nonumber\\
& &-\frac{{\rm e}^{-{\rm i}
E_{\gamma_3}t}}{\left(E_{\gamma}-E_{\gamma_3}\right)\left(E_{\gamma_1}-E_{\gamma_3}\right)
\left(E_{\gamma_2}-E_{\gamma_3}\right)\left(E_{\gamma_3}-E_{\gamma^\prime}\right)}\nonumber\\
& &\left. +\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)\left(E_{\gamma_3}-E_{\gamma^\prime}\right)}\right]\nonumber\\[7pt]
& & \times
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}
g_1^{\gamma_3\gamma^\prime}\eta_{\gamma\gamma_2}\eta_{\gamma\gamma_3}\eta_{\gamma\gamma^\prime}
\eta_{\gamma_1\gamma_3}\eta_{\gamma_1\gamma^\prime}
\eta_{\gamma_2\gamma^\prime}. \end{eqnarray}
Now, all 15 contraction and/or anti-contraction terms in the fourth
order approximation have been calculated.
In order to absorb the contributions from the fourth order
approximation into the improved forms of the lower order perturbed
solutions, we first decompose $A_4^{\gamma\gamma^\prime}$, which is
the sum of all the above terms, into three parts according to their
factor forms ${\rm e}^{-{\rm i} E_{\gamma_i}t}$, $(-{\rm i} t) {\rm e}^{-{\rm i}
E_{\gamma_i}t}$ and $(-{\rm i} t)^2{\rm e}^{-{\rm i} E_{\gamma_i}t}/2$, that is
\begin{equation}\label{A4dto}
A_4^{\gamma\gamma^\prime}=A_4^{\gamma\gamma^\prime}({\rm e})+A_4^{\gamma\gamma^\prime}(t{\rm e})
+A_4^{\gamma\gamma^\prime}(t^2{\rm e}).\end{equation} Secondly, we decompose each of
its terms into three parts according to the factor forms
${\rm e}^{-{\rm i} E_{\gamma}t}, {\rm e}^{-{\rm i} E_{\gamma_1}t}$
($\sum_{\gamma_1}{\rm e}^{-{\rm i} E_{\gamma_1}t}$) and ${\rm e}^{-{\rm i}
E_{\gamma^\prime}t}$, that is \begin{eqnarray}
A_4^{\gamma\gamma^\prime}({\rm e})&=&A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma}t})+A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_4^{\gamma\gamma^\prime}(t{\rm e})&=&A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma}t})+A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_4^{\gamma\gamma^\prime}(t^2{\rm e})&=&A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma}t})+A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}). \end{eqnarray} Finally, we again decompose every term
in the above equations into its diagonal and off-diagonal parts
with respect to $\gamma$ and $\gamma^\prime$, that is \begin{eqnarray}
A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_4^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}),\\
A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_4^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \\
A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_4^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \end{eqnarray} where $E_{\gamma_i}$ takes the values
$E_{\gamma}$, $E_{\gamma_1}$ and $E_{\gamma^\prime}$.
If we are not concerned with the improved forms of perturbed
solutions of the fourth order or higher, we only need to write down
the second and third terms in eq.(\ref{A4dto}) and calculate their
diagonal and off-diagonal parts respectively. Based on the results
calculated above, it is straightforward to obtain \begin{eqnarray}
\label{A4gammaD} A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_\gamma
t};{\rm D}\right)&=& \left(-{\rm i} t\right){\rm e}^{-{\rm i} E_\gamma
t}\left[\sum_{\gamma_1}\frac{-2
\left|g_1^{\gamma\gamma_1}\right|^4}{\left(E_{\gamma}-E_{\gamma_1}\right)^3}
-\sum_{\gamma_1,\gamma_2}\frac{\left|g_1^{\gamma\gamma_1}\right|^2
\left|g_1^{\gamma\gamma_2}\right|^2\eta_{\gamma_1\gamma_2}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2}\right.
\nonumber\\
& &
-\sum_{\gamma_1,\gamma_2}\frac{\left|g_1^{\gamma\gamma_1}\right|^2
\left|g_1^{\gamma\gamma_2}\right|^2\eta_{\gamma_1\gamma_2}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)}
+\sum_{\gamma_1,\gamma_2}\frac{\left|g_1^{\gamma\gamma_1}\right|^2
\left|g_1^{\gamma_1\gamma_2}\right|^2\eta_{\gamma\gamma_2}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)}\nonumber\\
&
&\left.+\sum_{\gamma_1,\gamma_2,\gamma_3}\frac{g_1^{\gamma\gamma_1}
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma}\eta_{\gamma\gamma_2}\eta_{\gamma_1\gamma_3}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)}\right]\delta_{\gamma\gamma^\prime}.\end{eqnarray}
Substituting the relation
$\eta_{\beta_1\beta_2}=1-\delta_{\beta_1\beta_2}$, using the
technique of index exchange, and introducing the definition of the
so-called $a$th revision energy $G_\gamma^{(a)}$: \begin{eqnarray}
{G}^{(2)}_\gamma&=&\sum_{\gamma_1}\frac{\left|g_1^{\gamma\gamma_1}\right|^2}
{E_{\gamma}-E_{\gamma_1}}\end{eqnarray}
\begin{eqnarray} G^{(4)}_\gamma &=
&\sum_{\gamma_1,\gamma_2,\gamma_3}\frac{g_1^{\gamma\gamma_1}
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma}\eta_{\gamma\gamma_2}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)}-\sum_{\gamma_1,\gamma_2}
\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}g_1^{\gamma\gamma_2}g_1^{\gamma_2\gamma}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)},
\end{eqnarray} we can simplify eq.(\ref{A4gammaD}) to the following concise
form: \begin{eqnarray} A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_\gamma t};{\rm
D}\right)&=&-\left(-{\rm i} t\right){\rm e}^{-{\rm i} E_\gamma
t}\left[\sum_{\gamma_1}\frac{
G^{(2)}_\gamma}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}
\left|g_1^{\gamma\gamma_1}\right|^2-G^{(4)}_\gamma\right]\delta_{\gamma\gamma^\prime}.
\end{eqnarray}
Similar calculation and simplification lead to \begin{eqnarray}
A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_{\gamma} t}; {\rm
N}\right)=(-{\rm i} t){\rm e}^{-{\rm i} E_{\gamma} t}\left[
\frac{{G}^{(3)}_{\gamma}g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}
+\sum_{\gamma_1} \frac{{G}^{(2)}_{\gamma}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}
\right], \end{eqnarray} where \begin{eqnarray} G^{(3)}_\gamma &=
&\sum_{\gamma_1,\gamma_2}\frac{g_1^{\gamma\gamma_1}
g_1^{\gamma_1\gamma_2}g_1^{\gamma_2\gamma}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)}.
\end{eqnarray} To save space, the corresponding details are omitted. In
fact, the calculation is not difficult, but it must be done with
sufficient care, especially in the cases of higher order approximations.
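As an aside (our remark, added for clarity rather than taken from
the original derivation), the secular factors $(-{\rm i} t)$ that appear
whenever a contraction forces two energies in a denominator
$d_i(E[\gamma,l])$ to coincide can be traced back to the elementary
limit
\begin{equation}
\lim_{E_{\gamma_1}\rightarrow E_{\gamma}}
\frac{{\rm e}^{-{\rm i} E_{\gamma}t}-{\rm e}^{-{\rm i} E_{\gamma_1}t}}
{E_{\gamma}-E_{\gamma_1}}=(-{\rm i} t)\,{\rm e}^{-{\rm i} E_{\gamma}t},
\end{equation}
that is, the derivative of ${\rm e}^{-{\rm i} Et}$ with respect to $E$;
energy coincidences of higher multiplicity generate the higher
powers of $(-{\rm i} t)$ in the same way.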
In the same way, we can obtain: \begin{eqnarray}
A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_{\gamma_1} t}; {\rm D}
\right)&=&\left(-{\rm i}
t\right)\sum_{\gamma_1}\frac{G^{(2)}_{\gamma_1}{\rm e}^{-{\rm i} E_{\gamma_1}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}
\left|g_1^{\gamma\gamma_1}\right|^2\delta_{\gamma\gamma^\prime},\\
A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_{\gamma_1} t} ; {\rm
N}\right)&=&-(-{\rm i} t)\sum_{\gamma_1}
\frac{{G}^{(2)}_{\gamma_1}{\rm e}^{-{\rm i} E_{\gamma_1} t}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\end{eqnarray}
\begin{eqnarray} A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_{\gamma^\prime}t};
{\rm D}\right)&=&0,\\
A_4^{\gamma\gamma^\prime}\left(t{\rm e}^{-{\rm i} E_{\gamma^\prime}t}; {\rm
N}\right)&=&-(-{\rm i} t){\rm e}^{-{\rm i} E_{\gamma^\prime}
t}\left[
\frac{{G}^{(3)}_{\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma^\prime}\right.\nonumber\\
& &\left. +\sum_{\gamma_1} \frac{{G}^{(2)}_{\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}\right].
\end{eqnarray}
For the terms with the factor $t^2{\rm e}$, only one is nonzero, that is
\begin{eqnarray} A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}\right)
&=&A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i} E_{\gamma} t};{\rm D}\right)
=\frac{(-{\rm i} G^{(2)}_\gamma t)^2}{2!}{\rm e}^{-{\rm i} E_{\gamma} t}\delta_{\gamma\gamma^\prime}, \end{eqnarray}
since \begin{eqnarray} A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i} E_{\gamma_1}
t};{\rm D}\right)&=&A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime} t};{\rm D}\right)=0,\\
A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i} E_{\gamma} t};{\rm
N}\right)&=&A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i} E_{\gamma_1}
t};{\rm N}\right)=A_4^{\gamma\gamma^\prime}\left(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime} t};{\rm N}\right)=0. \end{eqnarray}
We can see that these terms can be absorbed into (or merged with)
the lower order approximations to obtain the improved forms of
perturbed solutions.
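To see schematically how this absorption works (a sketch we add
here, assuming the pattern of secular terms continues in the same
way at all orders), note that the terms proportional to powers of
$(-{\rm i} G^{(2)}_\gamma t)$ are just the low order expansion of a
shifted phase factor:
\begin{equation}
{\rm e}^{-{\rm i} E_{\gamma}t}\left[1+(-{\rm i} G^{(2)}_\gamma t)
+\frac{(-{\rm i} G^{(2)}_\gamma t)^2}{2!}+\cdots\right]
={\rm e}^{-{\rm i}\left(E_{\gamma}+G^{(2)}_\gamma\right)t}.
\end{equation}
Absorbing them thus amounts to replacing the unperturbed energy
$E_\gamma$ by the revised energy $E_\gamma+G^{(2)}_\gamma$ in the
improved forms of the perturbed solutions.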
\subsection{$l=5$ case}
Now let us consider the case of the fifth order approximation
($l=5$). From eq.(\ref{gpd}) it follows that the first
decomposition of the $g$-product has $2^4=16$ terms. They can be
divided into 5 groups \begin{equation} A_5^{\gamma\gamma^\prime}= \sum_{i=0}^4
\mathcal{A}_5^{\gamma\gamma^\prime}(i;\eta),\end{equation} where $i$ indicates
the number of $\eta$ functions. Obviously \begin{equation}
\mathcal{A}_5^{\gamma\gamma^\prime}(0;\eta)={A}_5^{\gamma\gamma^\prime}(cccc),\end{equation}
\begin{eqnarray}
\mathcal{A}_5^{\gamma\gamma^\prime}(1;\eta)={A}_5^{\gamma\gamma^\prime}(cccn)
+{A}_5^{\gamma\gamma^\prime}(ccnc)+{A}_5^{\gamma\gamma^\prime}(cncc)
+{A}_5^{\gamma\gamma^\prime}(nccc),\end{eqnarray} \begin{eqnarray}
\mathcal{A}_5^{\gamma\gamma^\prime}(2;\eta)&=&{A}_5^{\gamma\gamma^\prime}(ccnn)
+{A}_5^{\gamma\gamma^\prime}(cncn)+{A}_5^{\gamma\gamma^\prime}(cnnc)\nonumber\\
& & +{A}_5^{\gamma\gamma^\prime}(nccn)
+{A}_5^{\gamma\gamma^\prime}(ncnc)
+{A}_5^{\gamma\gamma^\prime}(nncc),\end{eqnarray} \begin{eqnarray}
\mathcal{A}_5^{\gamma\gamma^\prime}(3;\eta)&=&{A}_5^{\gamma\gamma^\prime}(cnnn)
+{A}_5^{\gamma\gamma^\prime}(ncnn)
+{A}_5^{\gamma\gamma^\prime}(nncn)
+{A}_5^{\gamma\gamma^\prime}(nnnc), \end{eqnarray} \begin{equation}
\mathcal{A}_5^{\gamma\gamma^\prime}(4;\eta)={A}_5^{\gamma\gamma^\prime}(nnnn).\end{equation}
Here, we have used the notations stated in Sec. \ref{sec6}.
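As a simple consistency check (our addition), the sizes of these
five groups are binomial coefficients,
\begin{equation}
2^4=\sum_{i=0}^{4}\binom{4}{i}=1+4+6+4+1,
\end{equation}
in agreement with the numbers of terms listed above for
$\mathcal{A}_5^{\gamma\gamma^\prime}(0;\eta)$ through
$\mathcal{A}_5^{\gamma\gamma^\prime}(4;\eta)$.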
By calculation, we find that
$\mathcal{A}^{\gamma\gamma^\prime}_5(0;\eta)$ and every term of
$\mathcal{A}^{\gamma\gamma^\prime}_5(1;\eta)$ have only nontrivial
first contractions and/or anti-contractions. However, every term of
$\mathcal{A}^{\gamma\gamma^\prime}_5(2;\eta)$ can have one nontrivial
second, third or fourth contraction or anti-contraction, that is \begin{eqnarray}
A^{\gamma\gamma^\prime}_5(ccnn)&=&A^{\gamma\gamma^\prime}_5(ccnn,kkc)+A^{\gamma\gamma^\prime}_5(ccnn,kkn),\\
A^{\gamma\gamma^\prime}_5(cncn)&=&A^{\gamma\gamma^\prime}_5(cncn,kc)+A^{\gamma\gamma^\prime}_5(cncn,kn),\\
A^{\gamma\gamma^\prime}_5(cnnc)&=&A^{\gamma\gamma^\prime}_5(cnnc,kck)+A^{\gamma\gamma^\prime}_5(cnnc,knk),\\
A^{\gamma\gamma^\prime}_5(nccn)&=&A^{\gamma\gamma^\prime}_5(nccn,c)+A^{\gamma\gamma^\prime}_5(nccn,n),\\
A^{\gamma\gamma^\prime}_5(ncnc)&=&A^{\gamma\gamma^\prime}_5(ncnc,ck)+A^{\gamma\gamma^\prime}_5(ncnc,nk),\\
A^{\gamma\gamma^\prime}_5(nncc)&=&A^{\gamma\gamma^\prime}_5(nncc,ckk)+A^{\gamma\gamma^\prime}_5(nncc,nkk).
\end{eqnarray} Similarly, every term of
$\mathcal{A}^{\gamma\gamma^\prime}_5(3;\eta)$ can have two higher
order contractions and/or anti-contractions: \begin{eqnarray}
A^{\gamma\gamma^\prime}_5(cnnn)&=&A^{\gamma\gamma^\prime}_5(cnnn,kcc)
+A^{\gamma\gamma^\prime}_5(cnnn,kcn)\nonumber\\
& &+A^{\gamma\gamma^\prime}_5(cnnn,knc)
+A^{\gamma\gamma^\prime}_5(cnnn,knn),\\
A^{\gamma\gamma^\prime}_5(ncnn)&=&A^{\gamma\gamma^\prime}_5(ncnn,kkc,ck)
+A^{\gamma\gamma^\prime}_5(ncnn,kkn,ck)\nonumber\\
& & +A^{\gamma\gamma^\prime}_5(ncnn,kkc,nk)+A^{\gamma\gamma^\prime}_5(ncnn,kkn,nk),\\
A^{\gamma\gamma^\prime}_5(nncn)&=&A^{\gamma\gamma^\prime}_5(nncn,ckk,kc)
+A^{\gamma\gamma^\prime}_5(nncn,ckk,kn)\nonumber\\
& &+A^{\gamma\gamma^\prime}_5(nncn,nkk,kc)+A^{\gamma\gamma^\prime}_5(nncn,nkk,kn),\\
A^{\gamma\gamma^\prime}_5(nnnc)&=&A^{\gamma\gamma^\prime}_5(nnnc,cck)
+A^{\gamma\gamma^\prime}_5(nnnc,cnk)\nonumber\\
& &+A^{\gamma\gamma^\prime}_5(nnnc,nck)
+A^{\gamma\gamma^\prime}_5(nnnc,nnk).\end{eqnarray} Moreover, their last
terms, with two higher order anti-contractions, can have one more
nontrivial higher order contraction or anti-contraction: \begin{eqnarray}
A^{\gamma\gamma^\prime}_5(cnnn,knn)&=&A^{\gamma\gamma^\prime}_5(cnnn,knn,kc)
+A^{\gamma\gamma^\prime}_5(cnnn,knn,kn),\\
A^{\gamma\gamma^\prime}_5(ncnn,kkn,nk)&=&A^{\gamma\gamma^\prime}_5(ncnn,kkn,nk,c)
+A^{\gamma\gamma^\prime}_5(ncnn,kkn,nk,n),\\
A^{\gamma\gamma^\prime}_5(nncn,nkk,kn)&=&A^{\gamma\gamma^\prime}_5(nncn,nkk,kn,c)
+A^{\gamma\gamma^\prime}_5(nncn,nkk,kn,n),\\
A^{\gamma\gamma^\prime}_5(nnnc,nnk)&=&A^{\gamma\gamma^\prime}_5(nnnc,nnk,ck)
+A^{\gamma\gamma^\prime}_5(nnnc,nnk,nk). \end{eqnarray} In the case of
$A^{\gamma\gamma^\prime}_5(nnnn)$, the second decompositions act at
three positions, which results in \begin{eqnarray}
\hskip -0.3in
A^{\gamma\gamma^\prime}_5(nnnn)&=&A^{\gamma\gamma^\prime}_5(nnnn,ccc)
+A^{\gamma\gamma^\prime}_5(nnnn,ccn)+A^{\gamma\gamma^\prime}_5(nnnn,cnc)
+A^{\gamma\gamma^\prime}_5(nnnn,ncc)\nonumber\\
&
&+A^{\gamma\gamma^\prime}_5(nnnn,cnn)+A^{\gamma\gamma^\prime}_5(nnnn,ncn)
+A^{\gamma\gamma^\prime}_5(nnnn,nnc)+A^{\gamma\gamma^\prime}_5(nnnn,nnn).
\end{eqnarray} In the above expression, the fifth through seventh terms have
third or fourth contractions and anti-contractions, while the eighth
term has two third contractions and anti-contractions:
\begin{eqnarray}
A^{\gamma\gamma^\prime}_5(nnnn,cnn)&=&A^{\gamma\gamma^\prime}_5(nnnn,cnn,kc)
+A^{\gamma\gamma^\prime}_5(nnnn,cnn,kn),\\
A^{\gamma\gamma^\prime}_5(nnnn,ncn)&=&A^{\gamma\gamma^\prime}_5(nnnn,ncn,c)
+A^{\gamma\gamma^\prime}_5(nnnn,ncn,n),\\
A^{\gamma\gamma^\prime}_5(nnnn,nnc)&=&A^{\gamma\gamma^\prime}_5(nnnn,nnc,ck)
+A^{\gamma\gamma^\prime}_5(nnnn,nnc,nk),\\
A^{\gamma\gamma^\prime}_5(nnnn,nnn)&=&A^{\gamma\gamma^\prime}_5(nnnn,nnn,cc)
+A^{\gamma\gamma^\prime}_5(nnnn,nnn,cn)\nonumber\\
& &
+A^{\gamma\gamma^\prime}_5(nnnn,nnn,nc)+A^{\gamma\gamma^\prime}_5(nnnn,nnn,nn).
\end{eqnarray} In addition, $A^{\gamma\gamma^\prime}_5(nnnn,nnn,nn)$ splits into
its fourth contraction and anti-contraction parts \begin{equation}
A^{\gamma\gamma^\prime}_5(nnnn,nnn,nn)=A^{\gamma\gamma^\prime}_5(nnnn,nnn,nn,c)
+A^{\gamma\gamma^\prime}_5(nnnn,nnn,nn,n).\end{equation} According to the
above analysis, the contribution from the fifth order approximation
consists of $52$ terms once all of the contractions and
anti-contractions have been found.
Just like we have done in the $l=4$ case, we decompose
\begin{equation}\label{A5dto}
A_5^{\gamma\gamma^\prime}=A_5^{\gamma\gamma^\prime}({\rm e})+A_5^{\gamma\gamma^\prime}(t{\rm e})
+A_5^{\gamma\gamma^\prime}(t^2{\rm e}),\end{equation} where \begin{eqnarray}
A_5^{\gamma\gamma^\prime}({\rm e})&=&A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma}t})+A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_2}t})+A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_5^{\gamma\gamma^\prime}(t{\rm e})&=&A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma}t})+A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_2}t})+A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i} E_{\gamma^\prime}t}),\\
A_5^{\gamma\gamma^\prime}(t^2{\rm e})&=&A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma}t})+A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_1}t})+A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_2}t})+A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}). \end{eqnarray} Moreover, every term in the above equations
has diagonal and off-diagonal parts with respect to $\gamma$ and
$\gamma^\prime$, that is \begin{eqnarray} A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_5^{\gamma\gamma^\prime}({\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}),\\
A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_5^{\gamma\gamma^\prime}(t{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \\
A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_5^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \end{eqnarray} where $E_{\gamma_i}$ takes the values
$E_{\gamma}$, $E_{\gamma_1}$, $E_{\gamma_2}$ and $E_{\gamma^\prime}$.
If we are not concerned with the improved forms of perturbed
solutions higher than the fourth order one, we only need to write
down the second and third terms in eq.(\ref{A5dto}). We have
calculated them, and the results are given in the supplement of
Ref. \cite{My2}.
Based on these contraction and anti-contraction expressions, we
can, by rearrangement and summation, obtain \begin{eqnarray}
A_5(t{\rm e}^{-{\rm i} E_{\gamma} t},{\rm D})&=& -(-{\rm i}
G_\gamma^{(3)}t)\sum_{\gamma_1}\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}
\delta_{\gamma\gamma^\prime}\nonumber\\
& & -(-{\rm i}
G_\gamma^{(2)}t)\sum_{\gamma_1,\gamma_2}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)}\right.
\nonumber\\ & & \left. +\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2}\right]
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2} g_1^{\gamma_2\gamma}
\delta_{\gamma\gamma^\prime}+ (-{\rm i}
G_\gamma^{(5)}t){\rm e}^{-{\rm i} E_{\gamma}t}\delta_{\gamma\gamma^\prime}, \end{eqnarray} where \begin{eqnarray}
G_\gamma^{(5)}&=&\sum_{\gamma_1,\gamma_2,\gamma_3,\gamma_4}
\frac{g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma_4}g_1^{\gamma_4\gamma}
\eta_{\gamma\gamma_2}\eta_{\gamma\gamma_3}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)\left(E_{\gamma}-E_{\gamma_4}\right)}\nonumber\\
& & -\sum_{\gamma_1,\gamma_2,\gamma_3}\left[
\frac{g_1^{\gamma\gamma_1}g_1^{\gamma\gamma_2}
g_1^{\gamma_1\gamma}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma}}
{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)}
+\frac{g_1^{\gamma\gamma_1}g_1^{\gamma\gamma_2}
g_1^{\gamma_1\gamma}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)^2
\left(E_{\gamma}-E_{\gamma_3}\right)}\right.\nonumber\\
& &\left.+\frac{g_1^{\gamma\gamma_1}g_1^{\gamma\gamma_2}
g_1^{\gamma_1\gamma}g_1^{\gamma_2\gamma_3}g_1^{\gamma_3\gamma}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma_3}\right)^2}\right].\hskip 1.2cm\end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma_1} t},{\rm D})&=& (-{\rm i}
G_\gamma^{(3)}t)\sum_{\gamma_1}\frac{{\rm e}^{-{\rm i} E_{\gamma_1}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}
\delta_{\gamma\gamma^\prime}\nonumber\\
& & +(-{\rm i} G_\gamma^{(2)}t)\sum_{\gamma_1,\gamma_2}\frac{{\rm e}^{-{\rm i}
E_{\gamma_1}
t}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2} g_1^{\gamma_2\gamma}
\delta_{\gamma\gamma^\prime}\end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma_2} t},{\rm D})&=& -(-{\rm i}
G_\gamma^{(2)}t)\sum_{\gamma_1,\gamma_2}\frac{{\rm e}^{-{\rm i} E_{\gamma_2}
t}}{\left(E_{\gamma}-E_{\gamma_2}\right)^2\left(E_{\gamma_1}-E_{\gamma_2}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2} g_1^{\gamma_2\gamma}
\delta_{\gamma\gamma^\prime}\end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma^\prime} t},{\rm D})&=& 0\end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma} t},{\rm N})&=&(-{\rm i} G_\gamma^{(4)}
t)\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}+
(-{\rm i} G_\gamma^{(3)} t)\sum_{\gamma_1}\frac{{\rm e}^{-{\rm i} E_{\gamma}
t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma^\prime}\right)}
\nonumber\\
& &-(-{\rm i} G_\gamma^{(2)} t)\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i}
E_{\gamma} t} g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}
g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)^2\left(E_{\gamma}-E_{\gamma^\prime}\right)}
+\frac{{\rm e}^{-{\rm i} E_{\gamma} t}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}
g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)^2}\right]\nonumber\\
& & +(-{\rm i} G_\gamma^{(2)} t)\sum_{\gamma_1,\gamma_2}\frac{{\rm e}^{-{\rm i}
E_{\gamma} t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma_2}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma}-E_{\gamma_2}\right)
\left(E_{\gamma}-E_{\gamma^\prime}\right)} \end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma_1} t},{\rm N})&=&-(-{\rm i}
t)\sum_{\gamma_1}\frac{G_{\gamma_1}^{(3)}{\rm e}^{-{\rm i}
E_{\gamma_1}t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma_1}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
\nonumber\\
& &-(-{\rm i} t)\sum_{\gamma_1,\gamma_2}\frac{G_{\gamma_1}^{(2)}{\rm e}^{-{\rm i}
E_{\gamma_1}t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)} \end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma_2} t},{\rm N})&=&(-{\rm i}
t)\sum_{\gamma_1,\gamma_2}\frac{G_{\gamma_2}^{(2)}{\rm e}^{-{\rm i}
E_{\gamma_2}t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma\gamma_2}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma_2}\right)\left(E_{\gamma_1}-E_{\gamma_2}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)} \end{eqnarray}
\begin{eqnarray} A_5(t{\rm e}^{-{\rm i} E_{\gamma^\prime} t},{\rm N})&=&-(-{\rm i}
G_{\gamma^\prime}^{(4)} t)\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)}+
(-{\rm i} G_{\gamma^\prime}^{(3)} t)\sum_{\gamma_1}\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime} t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}
\eta_{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
\nonumber\\
& &+(-{\rm i} G_{\gamma^\prime}^{(2)}
t)\sum_{\gamma_1}\left[\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime} t}
g_1^{\gamma^\prime\gamma_1}g_1^{\gamma_1\gamma^\prime}
g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)^2
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)} +\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime} t}
g_1^{\gamma^\prime\gamma_1}g_1^{\gamma_1\gamma^\prime}
g_1^{\gamma\gamma^\prime}}{\left(E_{\gamma}-E_{\gamma^\prime}\right)
\left(E_{\gamma_1}-E_{\gamma^\prime}\right)^2}\right]\nonumber\\
& & -(-{\rm i} G_{\gamma^\prime}^{(2)}
t)\sum_{\gamma_1,\gamma_2}\frac{{\rm e}^{-{\rm i} E_{\gamma^\prime}
t}g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma_2}
g_1^{\gamma_2\gamma^\prime}\eta_{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}}
{\left(E_{\gamma}-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)
\left(E_{\gamma_2}-E_{\gamma^\prime}\right)} \end{eqnarray}
For the parts with $t^2{\rm e}$, we have \begin{eqnarray} A_5(t^2{\rm e}^{-{\rm i} E_{\gamma}
t},{\rm D})&=&\frac{(-{\rm i} t)^2}{2!} 2
G_{\gamma}^{(2)}G_{\gamma}^{(3)}\delta_{\gamma\gamma^\prime}{\rm e}^{-{\rm i}
E_{\gamma} t},\\
A_5(t^2{\rm e}^{-{\rm i} E_{\gamma_1} t},{\rm D})&=&A_5(t^2{\rm e}^{-{\rm i}
E_{\gamma_2} t},{\rm D})=A_5(t^2{\rm e}^{-{\rm i} E_{\gamma^\prime} t},{\rm
D})=0.\end{eqnarray}
\begin{eqnarray} A_5(t^2{\rm e}^{-{\rm i} E_{\gamma} t},{\rm N})&=&\frac{(-{\rm i} t)^2}{2!}
\left(G_{\gamma}^{(2)}\right)^2\frac{{\rm e}^{-{\rm i}
E_{\gamma} t}}{E_{\gamma}-E_{\gamma^\prime}}g_1^{\gamma\gamma^\prime},\\
A_5(t^2{\rm e}^{-{\rm i} E_{\gamma_1} t},{\rm N})&=&A_5(t^2{\rm e}^{-{\rm i}
E_{\gamma_2} t},{\rm N})=0,\\
A_5(t^2{\rm e}^{-{\rm i} E_{\gamma^\prime} t},{\rm N})&=&-\frac{(-{\rm i} t)^2}{2!}
\left(G_{\gamma^\prime}^{(2)}\right)^2\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}
t}}{E_{\gamma}-E_{\gamma^\prime}}g_1^{\gamma\gamma^\prime}.\end{eqnarray}
It is clear that the above diagonal and off-diagonal parts of
$A_5^{\gamma\gamma^\prime}(t{\rm e})$ and
$A_5^{\gamma\gamma^\prime}(t^2{\rm e})$ indeed have the expected forms and
can be absorbed reasonably into the lower order approximations in
order to obtain the improved forms of the perturbed solutions.
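In fact (an observation we add here, under the assumption that the
absorption proceeds order by order as described), the coefficients
of the $t^2{\rm e}$ parts found above are exactly those demanded by
exponentiation of the revision energies: expanding
\begin{equation}
{\rm e}^{-{\rm i}\left(G^{(2)}_\gamma+G^{(3)}_\gamma+\cdots\right)t}
=1-{\rm i}\left(G^{(2)}_\gamma+G^{(3)}_\gamma+\cdots\right)t
+\frac{(-{\rm i} t)^2}{2!}\left[\left(G^{(2)}_\gamma\right)^2
+2G^{(2)}_\gamma G^{(3)}_\gamma+\cdots\right]+\cdots,
\end{equation}
one recognizes in the cross term $2G^{(2)}_\gamma G^{(3)}_\gamma$
precisely the coefficient of $A_5(t^2{\rm e}^{-{\rm i} E_{\gamma}t},{\rm D})$
obtained above.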
\subsection{$l=6$ case}
Now let us consider the case of the sixth order approximation
($l=6$). From eq.(\ref{gpd}) it follows that the first
decomposition of the $g$-product has $2^5=32$ terms. Like the $l=5$
case, they can be divided into 6 groups \begin{equation}
A_6^{\gamma\gamma^\prime}= \sum_{i=0}^{5}
\mathcal{A}_6^{\gamma\gamma^\prime}(i;\eta),\end{equation} where $i$ indicates
the number of $\eta$ functions. Obviously \begin{equation}
\mathcal{A}_6^{\gamma\gamma^\prime}(0;\eta)={A}_6^{\gamma\gamma^\prime}(ccccc),\end{equation}
\begin{eqnarray}
\mathcal{A}_6^{\gamma\gamma^\prime}(1;\eta)&=&{A}_6^{\gamma\gamma^\prime}(ccccn)
+{A}_6^{\gamma\gamma^\prime}(cccnc)+{A}_6^{\gamma\gamma^\prime}(ccncc)\nonumber\\
& &
+{A}_6^{\gamma\gamma^\prime}(cnccc)+{A}_6^{\gamma\gamma^\prime}(ncccc),\end{eqnarray}
\begin{eqnarray} \mathcal{A}_6^{\gamma\gamma^\prime}(2;\eta)&=&
{A}_6^{\gamma\gamma^\prime}(cccnn)+
{A}_6^{\gamma\gamma^\prime}(ccncn)+
{A}_6^{\gamma\gamma^\prime}(cnccn)+
{A}_6^{\gamma\gamma^\prime}(ncccn)\nonumber\\ & & +
{A}_6^{\gamma\gamma^\prime}(ccnnc) +
{A}_6^{\gamma\gamma^\prime}(cncnc) +
{A}_6^{\gamma\gamma^\prime}(nccnc)+
{A}_6^{\gamma\gamma^\prime}(cnncc)\nonumber\\ & &+
{A}_6^{\gamma\gamma^\prime}(ncncc) +
{A}_6^{\gamma\gamma^\prime}(nnccc),\end{eqnarray} \begin{eqnarray}
\mathcal{A}_6^{\gamma\gamma^\prime}(3;\eta)&=&
{A}_6^{\gamma\gamma^\prime}(ccnnn)
+{A}_6^{\gamma\gamma^\prime}(cncnn) +
{A}_6^{\gamma\gamma^\prime}(cnncn) +
{A}_6^{\gamma\gamma^\prime}(cnnnc)\nonumber\\ & & +
{A}_6^{\gamma\gamma^\prime}(nccnn) +
{A}_6^{\gamma\gamma^\prime}(ncncn) +
{A}_6^{\gamma\gamma^\prime}(ncnnc) +
{A}_6^{\gamma\gamma^\prime}(nnccn)\nonumber\\ & & +
{A}_6^{\gamma\gamma^\prime}(nncnc) +
{A}_6^{\gamma\gamma^\prime}(nnncc), \end{eqnarray} \begin{eqnarray}
\mathcal{A}_6^{\gamma\gamma^\prime}(4;\eta)&=&{A}_6^{\gamma\gamma^\prime}(cnnnn)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn)
+{A}_6^{\gamma\gamma^\prime}(nncnn)\nonumber\\ & & +
{A}_6^{\gamma\gamma^\prime}(nnncn)+
{A}_6^{\gamma\gamma^\prime}(nnnnc) \end{eqnarray}
\begin{equation}
\mathcal{A}_6^{\gamma\gamma^\prime}(5;\eta)={A}_6^{\gamma\gamma^\prime}(nnnnn).\end{equation}
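Again as a consistency check (our addition), the sizes of these six
groups are binomial coefficients,
\begin{equation}
2^5=\sum_{i=0}^{5}\binom{5}{i}=1+5+10+10+5+1,
\end{equation}
in agreement with the numbers of terms listed above for
$\mathcal{A}_6^{\gamma\gamma^\prime}(0;\eta)$ through
$\mathcal{A}_6^{\gamma\gamma^\prime}(5;\eta)$.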
Considering furthermore the higher order contractions and
anti-contractions, we have \begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(cccnn)
&=& {A}_6^{\gamma\gamma^\prime}(cccnn, kkkc) +
{A}_6^{\gamma\gamma^\prime}(cccnn, kkkn),\\
{A}_6^{\gamma\gamma^\prime}(ccncn) &=&
{A}_6^{\gamma\gamma^\prime}(ccncn, kkc) +
{A}_6^{\gamma\gamma^\prime}(ccncn, kkn),\\
{A}_6^{\gamma\gamma^\prime}(ccnnc) &=&
{A}_6^{\gamma\gamma^\prime}(ccnnc, kkck) +
{A}_6^{\gamma\gamma^\prime}(ccnnc, kknk),\\
{A}_6^{\gamma\gamma^\prime}(cnccn) &=&
{A}_6^{\gamma\gamma^\prime}(cnccn, kc) +
{A}_6^{\gamma\gamma^\prime}(cnccn, kn),\\
{A}_6^{\gamma\gamma^\prime}(cncnc) &=&
{A}_6^{\gamma\gamma^\prime}(cncnc, kck) +
{A}_6^{\gamma\gamma^\prime}(cncnc, knk),\\
{A}_6^{\gamma\gamma^\prime}(cnncc) &=&
{A}_6^{\gamma\gamma^\prime}(cnncc, kckk) +
{A}_6^{\gamma\gamma^\prime}(cnncc, knkk),\\
{A}_6^{\gamma\gamma^\prime}(ncccn) &=&
{A}_6^{\gamma\gamma^\prime}(ncccn, c) +
{A}_6^{\gamma\gamma^\prime}(ncccn, n),\\
{A}_6^{\gamma\gamma^\prime}(nccnc) &=&
{A}_6^{\gamma\gamma^\prime}(nccnc, ck) +
{A}_6^{\gamma\gamma^\prime}(nccnc, nk),\\
{A}_6^{\gamma\gamma^\prime}(ncncc) &=&
{A}_6^{\gamma\gamma^\prime}(ncncc, ckk) +
{A}_6^{\gamma\gamma^\prime}(ncncc, nkk),\\
{A}_6^{\gamma\gamma^\prime}(nnccc) &=&
{A}_6^{\gamma\gamma^\prime}(nnccc, ckkk) +
{A}_6^{\gamma\gamma^\prime}(nnccc, nkkk) \end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(ccnnn) &=&
{A}_6^{\gamma\gamma^\prime}(ccnnn, kkcc)
+ {A}_6^{\gamma\gamma^\prime}(ccnnn, kkcn)
+ {A}_6^{\gamma\gamma^\prime}(ccnnn, kknc)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(ccnnn, kknn, kkc)
+ {A}_6^{\gamma\gamma^\prime}(ccnnn, kknn,
kkn),\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(cncnn) &=&
{A}_6^{\gamma\gamma^\prime}(cncnn, kkkc, kck)
+ {A}_6^{\gamma\gamma^\prime}(cncnn, kkkc, knk)
+ {A}_6^{\gamma\gamma^\prime}(cncnn, kkkn, kck)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(cncnn, kkkn, knk, kc)
+ {A}_6^{\gamma\gamma^\prime}(cncnn, kkkn, knk, kn),\\
{A}_6^{\gamma\gamma^\prime}(cnncn) &=&
{A}_6^{\gamma\gamma^\prime}(cnncn, kckk, kkc)
+ {A}_6^{\gamma\gamma^\prime}(cnncn, kckk, kkn)
+ {A}_6^{\gamma\gamma^\prime}(cnncn, knkk, kkc)\nonumber\\
& &+
{A}_6^{\gamma\gamma^\prime}(cnncn, knkk, kkn, kc)
+ {A}_6^{\gamma\gamma^\prime}(cnncn, knkk, kkn, kn),\\
{A}_6^{\gamma\gamma^\prime}(cnnnc) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnc, kcck)
+ {A}_6^{\gamma\gamma^\prime}(cnnnc, kcnk)
+ {A}_6^{\gamma\gamma^\prime}(cnnnc, knck)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(cnnnc, knnk, kck)+ {A}_6^{\gamma\gamma^\prime}(cnnnc, knnk,
knk),\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nccnn) &=&
{A}_6^{\gamma\gamma^\prime}(nccnn, kkkc, ck)
+ {A}_6^{\gamma\gamma^\prime}(nccnn, kkkc, nk)
+ {A}_6^{\gamma\gamma^\prime}(nccnn, kkkn, ck)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nccnn, kkkn, nk, c) + {A}_6^{\gamma\gamma^\prime}(nccnn, kkkn, nk,
n),\\
{A}_6^{\gamma\gamma^\prime}(ncncn) &=&
{A}_6^{\gamma\gamma^\prime}(ncncn, ckc)
+ {A}_6^{\gamma\gamma^\prime}(ncncn, ckn)
+ {A}_6^{\gamma\gamma^\prime}(ncncn, nkc)\nonumber\\
& & + {A}_6^{\gamma\gamma^\prime}(ncncn, nkn, c) +
{A}_6^{\gamma\gamma^\prime}(ncncn, nkn, n),\\
{A}_6^{\gamma\gamma^\prime}(ncnnc) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnc, kkck, ckk)
+ {A}_6^{\gamma\gamma^\prime}(ncnnc, kkck, nkk)
+ {A}_6^{\gamma\gamma^\prime}(ncnnc, kknk, ckk)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(ncnnc, kknk, nkk, ck)
+ {A}_6^{\gamma\gamma^\prime}(ncnnc, kknk, nkk, nk),\\
{A}_6^{\gamma\gamma^\prime}(nnccn) &=&
{A}_6^{\gamma\gamma^\prime}(nnccn, cc)
+ {A}_6^{\gamma\gamma^\prime}(nnccn, cn)
+ {A}_6^{\gamma\gamma^\prime}(nnccn, nc)\nonumber\\
& & + {A}_6^{\gamma\gamma^\prime}(nnccn, nn, c) +
{A}_6^{\gamma\gamma^\prime}(nnccn, nn, n),\\
{A}_6^{\gamma\gamma^\prime}(nncnc) &=&
{A}_6^{\gamma\gamma^\prime}(nncnc, ckkk, kck)
+ {A}_6^{\gamma\gamma^\prime}(nncnc, ckkk, knk)
+ {A}_6^{\gamma\gamma^\prime}(nncnc, nkkk, kck)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nncnc, nkkk, knk, ck)
+ {A}_6^{\gamma\gamma^\prime}(nncnc, nkkk, knk, nk),\\
{A}_6^{\gamma\gamma^\prime}(nnncc) &=&
{A}_6^{\gamma\gamma^\prime}(nnncc, cckk)
+ {A}_6^{\gamma\gamma^\prime}(nnncc, cnkk)
+ {A}_6^{\gamma\gamma^\prime}(nnncc, nckk) \nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nnncc, nnkk, c) + {A}_6^{\gamma\gamma^\prime}(nnncc, nnkk, n)
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(cnnnn) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnn, kccc)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, kccn)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, kcnc)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, kncc)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, kcnn)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, kncn)\nonumber\\
& & + {A}_6^{\gamma\gamma^\prime}(cnnnn, knnc)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, knnn),
\end{eqnarray} \begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(cnnnn, kcnn) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnn, kcnn, kkc) +
{A}_6^{\gamma\gamma^\prime}(cnnnn, kcnn, kkn),\\
{A}_6^{\gamma\gamma^\prime}(cnnnn, kncn) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnn, kncn, kc) +
{A}_6^{\gamma\gamma^\prime}(cnnnn, kncn, kn),\\
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnc) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnc, kck) +
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnc, knk), \end{eqnarray} \begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnn) &=&
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnn, kcc)
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, knnn, kcn)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, knnn, knc)+
{A}_6^{\gamma\gamma^\prime}(cnnnn, knnn, knn, kc)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(cnnnn, knnn, knn, kn).
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(ncnnn) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnn, kkcc, ckk)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kkcc, nkk)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kkcn, ckk) +
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknc, ckk)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kkcn, nkk)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kknc, nkk)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, ckk)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkk),
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(ncnnn, kkcn, nkk) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnn, kkcn, nkk, c) +
{A}_6^{\gamma\gamma^\prime}(ncnnn, kkcn, nkk, n),\\
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknc, nkk) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknc, nkk, ck)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kknc, nkk, nk),\ \ \ \ \ \ \ \\
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, ckk) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, ckc) +
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, ckn), \end{eqnarray}
\begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkk) &=&
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkc, ck)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkc, nk)\nonumber\\
& &
+
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkn, ck)
+ {A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkn, nk, c)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(ncnnn, kknn, nkn, nk, n)
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nncnn) &=&
{A}_6^{\gamma\gamma^\prime}(nncnn, ckkc, kck)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, ckkc, knk)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nncnn, ckkn, kck)+
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkc, kck)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nncnn, ckkn, knk)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, nkkc, knk)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, kck)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk),
\end{eqnarray} \begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nncnn, ckkn, knk) &=&
{A}_6^{\gamma\gamma^\prime}(nncnn, ckkn, knk, kc)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, ckkn, knk, kn),\ \ \ \ \ \ \ \\
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkc, knk) &=&
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkc, knk, ck)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, nkkc, knk, nk),\\
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, kck) &=&
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, kck, c) +
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, kck, n), \end{eqnarray} \begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk) &=&
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk, cc)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk, cn)\nonumber\\
& &
+
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk, nc)
+ {A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk, nn, c)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nncnn, nkkn, knk, nn, n)
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnncn) &=&
{A}_6^{\gamma\gamma^\prime}(nnncn, cckk, kkc)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, cckk, kkn)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnncn, cnkk, kkc)
+
{A}_6^{\gamma\gamma^\prime}(nnncn, nckk, kkc)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnncn, cnkk, kkn)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, nckk, kkn)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, kkc)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, kkn),
\end{eqnarray} \begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnncn, cnkk, kkn) &=&
{A}_6^{\gamma\gamma^\prime}(nnncn, cnkk, kkn, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, cnkk, kkn, kn),\ \ \ \ \ \ \\
{A}_6^{\gamma\gamma^\prime}(nnncn, nckk, kkn) &=&
{A}_6^{\gamma\gamma^\prime}(nnncn, nckk, kkn, c) +
{A}_6^{\gamma\gamma^\prime}(nnncn, nckk, kkn, n),\\
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, kkc) &=&
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, ckc) +
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, nkc), \end{eqnarray} \begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, kkn) &=&
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, ckn, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, ckn, kn)\nonumber\\
& &
+
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, nkn, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, nkn, kn, c)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nnncn, nnkk, nkn, kn, n).
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnnnc) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnc, ccck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, ccnk)
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, cnck)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, ncck)+
{A}_6^{\gamma\gamma^\prime}(nnnnc, cnnk)
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, ncnk)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, nnck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk),
\end{eqnarray} \begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnnnc, cnnk) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnc, cnnk, kck) +
{A}_6^{\gamma\gamma^\prime}(nnnnc, cnnk, knk),\\
{A}_6^{\gamma\gamma^\prime}(nnnnc, ncnk) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnc, ncnk, ck) +
{A}_6^{\gamma\gamma^\prime}(nnnnc, ncnk, nk),\\
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnck) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnck, c) +
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnck, n),\end{eqnarray} \begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk, cck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk, cnk)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk, nck) +
{A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk, nnk, ck)\nonumber\\
& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnc, nnnk, nnk, nk).
\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnnnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, cccc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cccn)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, ccnc)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cncc)
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nccc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, ccnn)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cncn)
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nccn)\nonumber\\ & &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, ncnc)
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn)\nonumber\\
& & + {A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn)
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncn)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc) \nonumber\\
& &+ {A}_6^{\gamma\gamma^\prime}(nnnnn,
nnnn),\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnnnn, ccnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, ccnn, kkc) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, ccnn, kkn),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, cncn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, cncn, kc) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, cncn, kn),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnc) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnc, kck) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnc, knk),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, nccn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nccn, c) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nccn, n),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnc) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnc, ck) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnc, nk),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncc) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncc, ckk) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncc, nkk), \end{eqnarray} \begin{eqnarray}
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn, kcc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn, kcn)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn, knc)
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn, knn, kc)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, cnnn, knn, kn),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn, kkc, ck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn, kkc, nk)\nonumber\\& &
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn, kkn, ck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn, kkn, nk, c) \nonumber\\& &+
{A}_6^{\gamma\gamma^\prime}(nnnnn, ncnn, kkn, nk, n),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncn, ckk, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nncn, ckk, kn)\nonumber\\& &
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncn, nkk, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nncn, nkk, kn, c)\nonumber\\& & +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nncn, nkk, kn, n),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc, cck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc, cnk)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc, nck)+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc, nnk, ck)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnc, nnk, nk),\\
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ccc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ccn)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, cnc) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ncc)\nonumber\\& &
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, cnn)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ncn)\nonumber\\
& & +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn),\end{eqnarray}
\begin{eqnarray} \!\!\!\!{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, cnn)\!\!\!
&=&\!\!\!
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, cnn, kc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, cnn, kn),\ \ \ \ \\
\!\!\!\!{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ncn)\!\!\!
&=&\!\!\! {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ncn, c) +
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, ncn, n),\ \ \ \ \ \\
\!\!\!\!{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnc)\!\!\!
&=&\!\!\!
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnc, ck)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnc, nk),\end{eqnarray}
\begin{eqnarray} {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn) &=&
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn, cc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn, cn)\nonumber\\& &
+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn, nc)
+ {A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn, nn, c) \nonumber\\& &+
{A}_6^{\gamma\gamma^\prime}(nnnnn, nnnn, nnn, nn, n).
\end{eqnarray}
Thus, the contribution from the sixth-order approximation comprises $203$
terms once all of the contractions and anti-contractions have been worked
out.
Just as we did in the $l=4$ and $l=5$ cases, we decompose
\begin{eqnarray}\label{A6dto1}
A_6^{\gamma\gamma^\prime}&=&A_6^{\gamma\gamma^\prime}({\rm e})+A_6^{\gamma\gamma^\prime}(t{\rm e})
+A_6^{\gamma\gamma^\prime}(t^2{\rm e})+A_6^{\gamma\gamma^\prime}(t^3{\rm e})\\
\label{A6dto2}&=&A_6^{\gamma\gamma^\prime}({\rm e},t{\rm e})+A_6^{\gamma\gamma^\prime}(t^2{\rm e},t^3{\rm e})
.\end{eqnarray} For our purposes, we only calculate the second term
$A_6^{\gamma\gamma^\prime}(t^2{\rm e},t^3{\rm e})$ in Eq.~(\ref{A6dto2}).
Without loss of generality, we decompose it into \begin{eqnarray}
A_6^{\gamma\gamma^\prime}(t^2{\rm e},t^3{\rm e})&=&A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma}t},t^3{\rm e}^{-{\rm i}
E_{\gamma}t})+A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_1}t},t^3{\rm e}^{-{\rm i}
E_{\gamma_1}t})\nonumber\\
& &+A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t},t^3{\rm e}^{-{\rm i} E_{\gamma^\prime}t}).\end{eqnarray} Moreover,
every term in the above equations has its diagonal and off-diagonal
parts with respect to $\gamma$ and $\gamma^\prime$, that is, \begin{eqnarray}
A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}),\\
A_6^{\gamma\gamma^\prime}(t^3{\rm e}^{-{\rm i}
E_{\gamma_i}t})&=&A_6^{\gamma\gamma^\prime}(t^3{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm D})+A_6^{\gamma\gamma^\prime}(t^3{\rm e}^{-{\rm i}
E_{\gamma_i}t};{\rm N}), \end{eqnarray} where $E_{\gamma_i}$ takes the values
$E_{\gamma}$, $E_{\gamma_1}$, and $E_{\gamma^\prime}$.
Based on our calculations, we find that, among all $203$ contraction and
anti-contraction expressions, there are $91$ nonvanishing terms and $112$
vanishing terms with $t^2{\rm e}$ or $t^3{\rm e}$ factor parts (see the
supplementary material, Ref.~\cite{My2}). Therefore, by rearrangement
and summation, we obtain the following concise forms:
\begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i} E_{\gamma}t};{\rm D})&=&\frac{(-{\rm i}
t)^2}{2!}\left(G_\gamma^{(3)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma}t}\delta_{\gamma\gamma^\prime}+\frac{(-{\rm i} t)^2}{2!}2
G_\gamma^{(2)}G_\gamma^{(4)}{\rm e}^{-{\rm i}
E_{\gamma}t}\delta_{\gamma\gamma^\prime}\nonumber\\
& &-\frac{(-{\rm i}
t)^2}{2!}\sum_{\gamma_1}\frac{\left(G_\gamma^{(2)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}.
\end{eqnarray}
\begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i} E_{\gamma_1}t};{\rm D})&=&\frac{(-{\rm i}
t)^2}{2!}\sum_{\gamma_1}\frac{\left(G_\gamma^{(2)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)^2}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma}\delta_{\gamma\gamma^\prime}.
\end{eqnarray} \begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i} E_{\gamma^\prime}t};{\rm D})&=&0.\end{eqnarray}
\begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i} E_{\gamma}t};{\rm N})&=&\frac{(-{\rm i} t)^2}{2!}2
G_\gamma^{(2)}G_\gamma^{(3)}\frac{{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)}g_1^{\gamma\gamma^\prime}\nonumber\\
& &+\frac{(-{\rm i}
t)^2}{2!}\sum_{\gamma_1}\frac{\left(G_\gamma^{(2)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_\gamma-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\end{eqnarray}
\begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i}
t)^2}{2!}\sum_{\gamma_1}\frac{\left(G_{\gamma_1}^{(2)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma_1}t}}{\left(E_\gamma-E_{\gamma_1}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\end{eqnarray}
\begin{eqnarray} A_6^{\gamma\gamma^\prime}(t^2{\rm e}^{-{\rm i} E_{\gamma^\prime}t};{\rm N})&=&-\frac{(-{\rm i}
t)^2}{2!}2
G_{\gamma^\prime}^{(2)}G_{\gamma^\prime}^{(3)}\frac{{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)}g_1^{\gamma\gamma^\prime}\nonumber\\
& &+\frac{(-{\rm i}
t)^2}{2!}\sum_{\gamma_1}\frac{\left(G_{\gamma^\prime}^{(2)}\right)^2{\rm e}^{-{\rm i}
E_{\gamma^\prime}t}}{\left(E_\gamma-E_{\gamma^\prime}\right)\left(E_{\gamma_1}-E_{\gamma^\prime}\right)}
g_1^{\gamma\gamma_1}g_1^{\gamma_1\gamma^\prime}\eta_{\gamma\gamma^\prime}.
\end{eqnarray} Their forms are indeed as expected and can reasonably be
absorbed into the lower-order approximations in order to obtain the
improved forms of the perturbed solutions.
\end{appendix}
|
1,116,691,500,401 | arxiv | \section{Introduction}
Reinforcement Learning (RL) aims at the creation of agents and systems that are capable of functioning in real-world environments~\cite{sutton2018reinforcement}.
A common RL task involves decision-making and control, which given some information about the current state of the environment, must determine the best action to take in order to maximise long-term success.
In this regard, RL allows an agent to improve its decision-making process while operating, to learn without supervision, and to adapt to changing circumstances~\cite{rlsurvey}.
In classical, autonomous RL~\cite{sutton2018reinforcement} the agent interacts with its environment learning by trial-and-error.
The agent explores the environment and learns solely from the rewards it receives (see grey box within Figure \ref{fig:HumanAdviceIntRL}).
RL has shown success in different domains such as inventory management~\cite{giannoccaro2002inventory}, robot scenarios~\cite{robocup, churamani2020icub}, and game environments~\cite{tdgammon, barros2020learning}, among others.
However, RL has difficulty learning in large state spaces.
As environments become larger, the agent's training time increases and finding a solution can become impractical~\cite{mankowitz2019challenges, cruz2018action}.
Interactive Reinforcement Learning (IntRL) is an alternative to RL in which an advisor interacts with an RL agent in real-time~\cite{thomaz2005real}.
The advisor can provide extra information to the agent regarding its behaviour or future actions it should perform.
In this regard, the advice can be either evaluative or informative~\cite{li2019human}.
The former is an evaluation the advisor gives to the agent indicating how good or bad was the last action performed.
The latter is a suggestion given to the agent indicating what action to perform next from the current state.
Human advisors are usually used in IntRL since they achieve good performance in areas such as problem-solving, forward planning, and teaching.
Moreover, they have a large collection of knowledge and experiences to draw upon when encountering new environments and problems~\cite{moonlanding}.
IntRL utilises these skills of humans to assist the agent with its own learning and decision-making.
This approach has been shown to considerably improve the agent's learning speed and can allow RL to scale to larger or more complex problems~\cite{scale}.
Figure \ref{fig:HumanAdviceIntRL} shows the IntRL approach with a human advisor included providing either evaluative or informative advice to the learner agent.
There are two major barriers to humans providing information to RL agents.
The first is the time required by the human.
In this regard, it is important that the mechanisms used to provide advice to the agent serve to reduce the number of interactions required.
The second barrier is the skill needed by the human to provide the information.
Humans usually need both programming skills and knowledge of the problem dynamics to encode information relevant to the agent's learning~\cite{arzate2020survey, bignold2020conceptual}.
A principle of IntRL is that the method to provide information to the agent should be understandable and usable by people without programming skills or deep problem domain expertise~\cite{thomaz2005real, amershi2014power}.
Therefore, the time required by a human advisor should remain as low as possible to reduce the burden on the human, and methods for providing information to an agent should be accessible to users without programming or machine learning expertise.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/HumanAdviceIRL.pdf}
\caption{
Interactive reinforcement learning approach.
In classical, autonomous reinforcement learning, the learner agent performs an action $a_t$ from a state $s_t$ and the environment produces a response leading the agent to a new state $s_{t+1}$ and receiving a reward $r_{t+1}$.
Interactive reinforcement learning adds a human advisor for assistance.
Whereas the advisor also observes the environment's response, they can provide either evaluative or informative advice to the learner agent.
}
\label{fig:HumanAdviceIntRL}
\end{figure}
In this work, we aim to reduce the obligation of the human advisor while improving the learning speed of the agent.
We address the question of which of the approaches, evaluative or informative, is the preferred instructional approach for humans.
To this aim, we carry out an analysis of human engagement with twenty participants with no prior knowledge of machine learning techniques.
In our experiments, ten users give evaluative advice to the RL agent while ten users give informative advice in a simulated scenario.
From the performed interactions, we analyse the advice accuracy and the advice availability of each assistive approach.
We also present an analysis of how evaluative advice may be affected by reward bias when teaching the RL agent.
Therefore, this work studies the distinction between advice delivery styles, i.e., evaluative and informative (also known as reward-shaping and policy-shaping respectively), and how humans engage with and prefer to teach artificial agents.
While the evaluative and informative labels describe the method the human uses to instruct the agent, reward-shaping and policy-shaping describe how the agent incorporates the provided advice, thus taking the agent's viewpoint.
This work is organised in the following sections.
Section 2 presents an overview of prior research on evaluative and informative advice, including a discussion of how they compare.
It also discusses prior studies of human engagement involving these two interactive approaches.
Section 3 introduces the experimental methodology used in this work including further details of IntRL learning framed within the assisted RL taxonomy and how human-sourced advice has been obtained.
Section 4 describes the IntRL scenario used during the experiments; this includes the key features of the environment along with the interactions related to the particular scenario with the participants in the experiment.
Section 5 presents the results including the users' self-evaluation of the experience and the characteristics of the interactive steps in terms of the frequency, accuracy, and availability of the advice.
Finally, in Section 6 the main conclusions obtained from this work are presented.
\section{Reinforcement Learning and Interactive\\Human-sourced Advice}
Learning from the ground up can be a challenging task.
While humans and artificial agents using RL are both capable of learning new tasks, it is evident that any extra information regarding the task can significantly reduce the learning time~\cite{cruz2018multi,sharma2007transfer,taylor2007transfer}.
For humans, we can get advice from peers, teachers, the Internet, books, or videos, among other sources.
By incorporating advice, humans can learn what the correct behaviour looks like, build upon existing knowledge, evaluate current behaviour, and ultimately reduce the amount of time spent performing the wrong actions~\cite{shin2020biased}.
For artificial agents, the benefits of advice are the same.
For instance, advice may be used to construct or supplement the reward function, resulting in an improved evaluation of the agent's actions or increasing the utility of the reward function so that fewer experiences are required to learn a behaviour~\cite{grzes2017reward, marom2018belief}.
The advice can also be used to influence the agent's policy, either directly or through the action selection method, in order to reduce the search space.
There are many possible information sources for agents to use.
For instance, external information can come from databases~\cite{shah2016interactive}, labelled sets~\cite{deep1,deep2}, cases~\cite{kang1995multiple,compton1991ripple}, past experiences~\cite{taylor2009transfer}, other agents~\cite{multi1,multi2}, contextual perception~\cite{cruz2016learning}, and from humans~\cite{learningbydemonstration}.
Human-supplied advice is contextually relevant information that comes from a human as a result of observation or awareness of the agent's current behaviour or goal.
This information is commonly used to supplement, construct, or alter the RL process.
Human-sourced advice can be noisier, less accurate, and less consistent than other information sources.
However, the critical benefit is that the advice is contextually relevant and can be applied to aid the agent in its current situation or goal.
IntRL may use human-sourced advice~\cite{millan2019human} or simulated-users~\cite{ayala2019reinforcement} to directly interact with the agent while it is learning/operating~\cite{thomaz2005real}.
The focus for IntRL is limited to the use of advice during the learning process, not before or after.
This limitation requires interactive techniques to be easy for an agent to extract information from, and easy for humans to add information to, so that the learning process is not slowed down.
This limitation also means that the agent or policy should not be reset when new information is provided, as that is conceptually similar to creating a new agent rather than interacting with an existing one.
When humans interact with the agent, they may either provide additional rewards in response to the agent's performance~\cite{thomaz2007asymmetric} or recommend actions to the agent to guide the exploration process~\cite{moreira2020deep}.
\subsection{Evaluative Advice}
Evaluative advice is information that critiques current or past behaviour of an agent~\cite{ng1999policy, brys2014combining}.
Advice that supplements, improves, or creates a reward function is considered to be evaluative as it is a reaction to an agent’s behaviour rather than a direct influence on an agent's decision-making.
The source of the advice is what separates evaluative advice from the reward function.
A typical reward function is defined for a specific environment, whereas evaluative advice originates from an observer of the agent or other external sources~\cite{marthi2007automatic, bignold2020conceptual}.
Figure \ref{fig:InformativeEvaluativeIntRL} shows in green evaluative advisors supplementing the reward received from the environment.
Humans providing evaluative advice do not need to know the solution to a problem~\cite{rosman2014giving}; it is enough for them to be able to assess the result of an action and then decide whether it was the correct action to take.
For instance, in the training an agent manually via evaluative reinforcement (TAMER) framework~\cite{tamer,knox2009interactively}, a human user continually critiques the RL agent's actions.
The human observes the agent, and in response to the agent's actions, provides a simple yes/no evaluation of its choice of action.
This Boolean evaluation acts as an additional reward signal, supplementing the reward function from the environment.
This bare minimum of human influence is enough to significantly decrease the time required by the agent to learn the required task~\cite{tamer}.
Another example of evaluative advice is the convergent actor-critic by humans (COACH) approach~\cite{macglashan2017interactive}.
In this approach, a human trainer may give positive or negative feedback to a virtual dog learning to reach a goal position.
The human feedback was divided into punishment and reward and labelled with different levels, such as 'mild electroshock', 'bad dog', 'good dog', and 'treats'.
Using COACH, the agent was able to learn the task facing multiple feedback strategies.
Recently, this approach has been extended as Deep COACH~\cite{arumugam2019deep}, representing the agent's policy with deep neural networks.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/InformativeEvaluativeIntRL.pdf}
\caption{
Interactive reinforcement learning approach using evaluative and informative advice.
While the informative advisor may suggest an action to be performed by the agent, the evaluative advisor may suggest a reward to supplement the reward obtained from the environment.
}
\label{fig:InformativeEvaluativeIntRL}
\end{figure}
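To make the reward-shaping mechanism concrete, the following minimal sketch shows how a binary human critique can be folded into the reward before a standard learning update. The \texttt{env}, \texttt{agent}, and \texttt{get\_critique} interfaces are hypothetical placeholders, not part of TAMER or COACH.
\begin{verbatim}
# Minimal sketch of evaluative (reward-shaping) advice in a learning loop.
# `env`, `agent`, and `get_critique` are hypothetical interfaces.

def evaluative_step(env, agent, state, get_critique):
    action = agent.select_action(state)
    next_state, reward, done = env.step(action)

    # The observer critiques the last action: +1 (approve), -1 (disapprove),
    # or 0 when no advice is given during this step.
    critique = get_critique(state, action)
    shaped_reward = reward + critique  # human signal supplements the reward

    agent.update(state, action, shaped_reward, next_state)
    return next_state, done
\end{verbatim}
Note that the agent's decision-making is untouched; the human only alters the learning signal after the action has been taken.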
\subsection{Informative Advice}
Informative advice is information that aids an agent in its decision-making~\cite{kessler2019active, paniz2020useful}.
Advice that recommends actions to take or avoid, suggests exploration strategies, provides information about the environment or proactively alters what action an agent may take is considered to be informative.
Informative methods primarily focus on transferring information from the human and encoding it into the agent's policy, either directly, by altering the policy, or indirectly by influencing the agent's decision-making process~\cite{lin2020review}.
Figure \ref{fig:InformativeEvaluativeIntRL} shows in brown informative advisors suggesting an action to be taken.
Providing informative advice can be challenging for two reasons, the first of which is the human factor.
Informative advice typically requires the human to know what the correct action is for a given state ahead of time.
Not only does this require a greater understanding of the environment and the agent's position within it, but it also requires a more substantial commitment of time and effort to provide the advice.
The time and effort required increase as the size of the environment and the number of available actions increase~\cite{cruz2017agent}.
The second reason utilising informative advice is challenging is that encoding information sourced from a human into a form an agent can understand can be a complicated process, as it is more informationally dense than evaluative advice~\cite{grizou2013robot}.
For instance, an implementation of informative advice in IntRL is the ADVISE algorithm~\cite{griffith2013policy}.
In ADVISE, a human observing an agent in operation can recommend actions to take at any given step, which the agent may choose to follow.
This methodology allows the human to guide the agent through parts of the environment which they are familiar with.
This can result in a significant improvement over existing IntRL methods and a reduced need for exploration.
Another example of informative advice was presented in~\cite{cruz2018multi} in which a robot learned a cleaning task using human-provided interactive feedback.
In this domestic scenario, seven actions could be advised to the agent using multi-modal audiovisual feedback.
The provided advice was integrated into the learning process with an affordance-driven~\cite{cruz2016learning} IntRL approach.
After the experiments, the robot collected more reward, and collected it faster, when tested against different minimal confidence-level thresholds and different levels of affordance availability.
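In contrast to the evaluative sketch above, informative advice intervenes before the action is executed. A minimal sketch, again with hypothetical \texttt{env}, \texttt{agent}, and \texttt{get\_recommendation} interfaces, might look as follows.
\begin{verbatim}
# Minimal sketch of informative (policy-shaping) advice in a learning loop.
# `get_recommendation` returns a recommended action, or None when the
# advisor stays silent; all interfaces are hypothetical.

def informative_step(env, agent, state, get_recommendation):
    advice = get_recommendation(state)
    # Follow the advisor whenever advice is available; otherwise the
    # agent selects an action with its usual policy.
    action = advice if advice is not None else agent.select_action(state)

    next_state, reward, done = env.step(action)
    agent.update(state, action, reward, next_state)
    return next_state, done
\end{verbatim}
Here the human directly shapes which state-action pairs are experienced, which is why informative advice can reduce the search space before any poor behaviour is exhibited.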
\subsection{Evaluative versus Informative}
Evaluative advice has been more widely utilised in prior research because implementations are simpler to encode, since the focus tends to be on the result of a decision rather than on what decision should be made~\cite{pilarski2012between}.
This is because it is easier to determine whether an action was correct once the result of that action is available.
Most implementations of evaluative advice alter or supplement the reward function of the environment.
Encoding information to alter the reward function is generally straightforward, as the primary focus is on whether to increase or decrease the reward given to the agent, as opposed to informative implementations that attempt to alter the decision-making policy~\cite{amir2016interactive}.
Additionally, providing an evaluation requires less human effort than determining what information or action is relevant for a given state, as the information sought is typically a Boolean or a scalar measurement.
Overall, evaluative advice is more direct to obtain, implement, and encode than its informative counterpart.
Informative advice tends to be more informationally dense than evaluative advice.
While this does make sourcing and encoding the information difficult, it does provide more benefit to the agent~\cite{pilarski2012between}.
Evaluative advice only reinforces behaviour after that behaviour has been exhibited, whereas informative advice can promote or discourage behaviour before it is presented.
Advice that recommends taking or avoiding actions will reduce the search space for the agent, resulting in improved learning time.
The downside of this is that if the agent never performs actions that are preemptively discouraged, and the advice is not optimal, then the optimal policy may not be found~\cite{cruz2018improving}.
A direct comparison of the two styles is difficult as the implementations of human-sourced advice vary.
Griffith et al.~\cite{griffith2013policy} compared the effects of informative versus evaluative advice on artificial agents using their informative algorithm ADVISE, against the evaluative algorithm TAMER.
Both algorithms utilise IntRL agents and advice is given on a step by step basis.
The ADVISE algorithm prompts the advisor for a recommended action which the agent can then follow, while TAMER prompts the advisor for a binary evaluation of the previously taken action.
In the experiments, each agent is assisted by a simulated human, making the advice comparable.
The ADVISE algorithm allows the advisor to recommend an action and therefore the number of bits of information provided is equal to $\log_2(n_a)$ where $n_a$ is the number of possible actions (e.g., if there are eight possible actions, $n_a = 8$, then each piece of informative advice provides three bits of information).
In contrast, TAMER allows the human to provide a binary evaluation (i.e., correct/incorrect) which provides only a single bit of information.
Therefore the information gain from ADVISE is greater than that from TAMER and may bias the results.
However, the experiments show that informative advice is more beneficial to the agent regardless of advice accuracy for the majority of cases.
The use of a simulated human as an oracle in these experiments allowed for the provision of consistent advice that does not suffer from biases introduced by real humans.
However, if the behaviour of actual human advice-givers differs from that of the simulated human in terms of accuracy and/or engagement, then the impact on agent behaviour may not reflect that observed in this study.
Therefore it is important to develop an understanding of the properties of actual human advice.
\subsection{Human Engagement}
Human engagement and teaching styles when working with interactive machine learning agents have previously been studied~\cite{amershi2014power,crowdsourcing}; however, such studies have mainly focused on assessing human commitment independent of the type of advice.
For instance, Amershi et al.~\cite{amershi2014power} presented a comprehensive study looking at the engagement between humans and interactive machine learning.
The study included some case studies demonstrating the use of humans as information sources in machine learning.
This work highlighted the need for increased understanding of how humans engage with machine learning algorithms, and what teaching styles the users preferred.
A study by Thomaz and Breazeal~\cite{thomaz2008teachable}, later confirmed by Knox and Stone \cite{knox2012reinforcement}, found that human tutors tend to have a positive bias when teaching machines, opting to reward rather than punish RL agents.
This bias leads to agents favouring the rewards provided by the human over the reward function of the environment.
The positive bias was observed in humans providing evaluative advice, as it tends to be provided as a reward \cite{thomaz2008teachable}.
Due to its characteristics, no such bias has been tested for or observed yet in informative-assisted agents.
Knox and Stone~\cite{knox2013learning} later mitigated the consequences of the positive bias in RL agents by developing an agent that valued human reward gained in the long term rather than the short term.
Another study performed by Cakmak and Thomaz~\cite{cakmak2010optimality} investigated the strategy of teachers when tutoring machine learning agents.
The study found that humans providing advice to a system over an extended period experienced frustration and boredom when bombarded with questions from the agent.
The stream of questions to the teachers caused some participants to ``turn their brain off'' or ``lose track of what they were teaching'' according to self-reports~\cite{cakmak2010designing}.
Similar results were obtained using a movie recommendation system developed for Netflix, where participants were repeatedly asked to state if the system was right or wrong~\cite{guillory2011simultaneous,guillory2011online}.
The previous studies suggest that participants do not like being prompted for input repeatedly, particularly when the input can be repetitive.
Current IntRL systems do not prompt the user for information; instead, they allow the advisor to step in whenever they wish.
Nevertheless, input into these systems is repetitive and requires the users to provide advice on a state-by-state basis \cite{moreira2020deep}, leaving current systems susceptible to the same issues of frustration and interruption as the active learning systems reported above.
Regardless, it is still not clear whether these issues translate to IntRL scenarios.
Therefore, the remainder of this paper reports details and results of an experiment carried out to establish the characteristics of advice provided by humans interacting with an IntRL agent, and to assess whether these properties alter depending on whether evaluative or informative advice is being provided.
\section{Experimental Methodology}
In this section, we describe the IntRL methodology used during the experiments and frame the approach within an assisted RL framework.
Moreover, we outline the method to collect human advice including participants' characteristics, induction process, experiment details, and after-experience questionnaire.
\subsection{Interactive Reinforcement Learning Methodology}
Assisted reinforcement learning (ARL)~\cite{bignold2020conceptual} is a general framework proposed to incorporate external information into traditional RL.
The framework uses a conceptual taxonomy, including processing components and communication links, to describe the transmission, modification, and modality of the sourced information.
The processing components comprise the information source, advice interpretation, external model, and assisted agent, whereas the communication links are temporality, advice structure, and agent modification.
ARL agents aim to gather as much information from an external source as possible, as this can lead to improved performance within the environment.
A concrete example of an ARL agent is an IntRL agent.
As previously mentioned, an IntRL agent can be advised with externally-sourced information to support the learning process at any time during training.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{img/EvaluativeInformativeIRL.pdf}
\caption{Interactive reinforcement learning method used to compare human engagement in evaluative and informative advice.
The method is presented using the assisted reinforcement learning taxonomy, defining processing components (dotted red squares) and communication links (underlined green parallelograms) for each advice delivery style.
The evaluative and informative methods differ in advice interpretation, advice structure, and agent modification.
All the other processing components and communication links are common to both and located at the centre. }
\label{fig:EvaluativeInformativeIntRL}
\end{figure}
In this work, two different learner agents attempt to solve the Mountain Car problem~\cite{sutton2018reinforcement} using IntRL (more details about the experimental problem are given in the next section): the first agent accepts evaluative advice and the other receives informative advice.
Figure \ref{fig:EvaluativeInformativeIntRL} shows the IntRL approach framed within the ARL framework~\cite{bignold2020conceptual} using both evaluative and informative advice.
The figure shows the processing components using dotted red squares and the communication links using green parallelograms with underlined text.
Using the ARL taxonomy, there are some common processing components and communication links that are adopted similarly by both approaches.
The common elements are information source, temporality, external model, and the assisted agent, which are adopted by the ARL framework as human-sourced advice, interactive assistance, an immediate model, and a Q-learning agent.
All the other processing components and communication links differ between evaluative and informative advice.
For the evaluative approach, advice interpretation, advice structure, and agent modification are adopted by the ARL framework as binary advice to reward conversion, state-action pair value, and reward-shaping respectively.
For the informative approach, they are adopted as advice to action selection conversion, state-action lookup, and policy-shaping respectively.
As this approach relies on human trainers as an external information source, the higher the human engagement, the greater the opportunity to transfer knowledge to the agent.
The accuracy of the advice and information gain as a result of the advice provided is also important, as they contribute to the policy being learned by the agent~\cite{cruz2018improving}.
We aim to measure the human engagement, accuracy of advice, and the information gain for evaluative and informative advice for IntRL.
To this aim, we perform experiments using two IntRL agents implemented with the temporal-difference learning method Q-learning.
The performance of the agent, or its ability to solve the problem, is not the main focus of this paper.
A comparison of evaluative and informative advice, in terms of the performance of the agents, has been investigated in a prior study~\cite{griffith2013policy}.
In the context of this work, human engagement is a measure of the number of interactions, the total time spent constructing interactions, and the distribution of interactions over the time the agent is operating.
The observing human is given an opportunity to provide information once per step of the agent, and if the human does provide some advice during that step, then the interaction is recorded.
However, a measure of the number of interactions is not sufficient, as the time and effort required to provide an interaction may differ between informative and evaluative advice methods.
As a result, the interaction time is also recorded.
Moreover, the accuracy of the information provided to the agent affects its performance within the environment~\cite{cruz2018improving}.
In this regard, advice accuracy is a measure of how accurate the information provided by the human is, compared to the optimal action to take for each state the agent encounters.
This can be calculated by comparing the advice provided by the human against the known optimal policy for this task.
\subsection{Human-sourced Advice}
During the experiments, twenty people participated, ten for each advice delivery style.
Each participant was able to communicate with an RL agent while observing its current state and performance.
A participant interacting with the evaluative agent had the option of providing an agreement or disagreement (yes/no) to the agent's choice of action for the last time step.
This binary evaluation was then used by the agent to supplement the reward it receives from the environment.
A positive evaluation added $+1$ to the reward, while a negative evaluation added $-1$.
Likewise, a participant interacting with the informative agent had the option of suggesting an action for the agent's next step, either left or right.
If the agent was recommended an action then that action was taken; otherwise, the agent operated as a usual RL agent.
Each participant, regardless of teaching style, had three possible options each step.
For the evaluative advice participants, the options were: agree, disagree, or do nothing.
Whereas, for the informative advice participants, the options were to recommend: left, right, or to do nothing.
The participants chosen for the experiment had not had significant exposure to machine learning, and were not familiar with the Mountain Car environment.
Before beginning the experiment, each participant was given a five-minute induction to the Mountain Car problem, and then asked to complete a short questionnaire.
The induction introduced the aim of the agent, the dynamics of the environment, the action space, and, most significantly, the optimal solution to the problem.
The solution was described to the participants to give them all an equal understanding and to reduce the time spent exploring the environment themselves, so that they could focus on assisting the agent.
When the induction was complete, the participant was asked to complete a questionnaire.
The full questionnaire consists of seven questions, the first two of which assess the participants' general knowledge of machine learning techniques and their understanding of the Mountain Car problem.
After completing the first two questions, the participant is ready to begin the experiment.
The remaining five questions were answered after the subject had completed their interaction with the agent.
The participant was given 500ms to provide advice to the agent each step.
To provide advice to the agent, the participant pressed one of two keys on the keyboard to indicate either approval/disapproval of the agent's last choice in action when using evaluative advice, or to recommend the left/right action for the agent to take next when using informative advice.
Therefore, the input mechanism was dependent on the advice delivery style being tested.
If the human provided advice within the 500ms window, an interaction had taken place and the time taken to create that interaction was recorded.
If the human did not provide advice within the time window provided, then no interaction was recorded, and the agent operated as usual.
Additionally, the human could change the duration of the time window by 25\% during the experiment by pressing the +/- keys.
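A sketch of this per-step advice-capture loop is given below; \texttt{poll\_keyboard} is a hypothetical non-blocking helper returning the pressed key (or \texttt{None}) within the current window, and the exact key bindings are illustrative.
\begin{verbatim}
# Sketch of the per-step advice window; `poll_keyboard(timeout)` is a
# hypothetical non-blocking helper returning the pressed key or None.

window = 0.5   # initial 500 ms advice window (seconds)

def capture_advice(poll_keyboard):
    global window
    key = poll_keyboard(timeout=window)
    if key == '+':
        window *= 1.25          # lengthen the window by 25%
    elif key == '-':
        window *= 0.75          # shorten the window by 25%
    elif key is not None:
        return key              # advice key: approve/disapprove, or left/right
    return None                 # no interaction recorded this step
\end{verbatim}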
The experiments ran until the participant believed the agent had learned the correct behaviour, or until they tired of providing advice, at which point the agent was terminated.
After the participant had chosen to stop providing advice, they were asked to complete the remainder of the questionnaire.
The remaining five questions aimed to assess understanding of the Mountain Car problem now that the participants had experienced the environment.
They also aimed to capture the participants' perception of their level of engagement, the accuracy of their advice, and the agent's understanding of the advice supplied.
The full questionnaire form is supplied in Appendix A.
\section{Interactive Reinforcement Learning Scenario}
In this section, we describe the key features of the experimental environment including the agent's representation, state and action representation, and reward function.
Furthermore, we complement the human-agent interactive methodology described in the previous section by indicating the script given to the participants.
\subsection{Features of the Environment}
The Mountain Car environment is a standard continuous-state testing domain for RL~\cite{sutton2018reinforcement, moore1991variable}.
In the environment, an underpowered car must drive from the bottom of a valley to the top of a steep hill.
Since the gravity in the environment is stronger than the engine of the car, the car cannot drive straight up the side of the mountain.
In order for the car to reach the top of the mountain, it must build up enough inertia and velocity.
Figure \ref{fig:MountainCarScenario} illustrates the mountain car environment and its key features.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{img/MountainCarScenario.pdf}
\caption{A detailed graphical representation of the Mountain Car environment.
The agent begins on the line at a random position within the yellow box and must travel to the green goal state.
To do so, the agent accelerates towards the first (1) key position until its velocity is reduced to zero by gravity.
At this point, the agent turns and accelerates towards the second (2) key position, again, until its velocity is reduced to zero.
Finally, the agent accelerates down the hill again, building up velocity to reach the goal state.}
\label{fig:MountainCarScenario}
\end{figure}
In our experiments, an RL agent controls the actions of the car.
The car begins at a random position within the starting region, with a low velocity.
In order to reach the goal position, the agent must build up enough momentum.
To do so, the agent accelerates towards the goal until its velocity is reduced to zero by gravity.
At this point, the agent turns and accelerates in the opposite direction, towards the highest position it can reach, again until its velocity is reduced to zero.
Finally, the agent accelerates down the hill again, building up velocity to reach the goal state.
Should the agent not get high enough up the mountain to reach the goal position, it repeats the process of accelerating in the opposite direction until zero velocity is reached and then turning around.
The key to the agent solving the Mountain Car problem is to increase its own velocity ($v$).
The agent's mass ($m$), the magnitude of acceleration ($a$), and the force of gravity ($G$) are constant.
As the agent's acceleration is lower than the gravity acting upon it, pulling the agent to the lowest point of the environment, the agent must accelerate at the correct moments, and in the correct direction, to increase its velocity.
The optimal solution to the Mountain Car problem is to accelerate in the current direction of travel and take a random action when velocity is zero.
An example of a rule formulation denoting this behaviour is shown in Eq. \eqref{eq:reward}.
The policy states that the agent's next action ($A_t$) should be to accelerate right if its velocity is greater than $0$ (i.e., keep moving right), to accelerate left if its velocity is less than $0$ (i.e., keep moving left), and to take a random action if its velocity is $0$.
\begin{equation}
A_t = \left\{
\begin{array}{l r}
+1 & v > 0\\
-1 & v < 0\\
\in\{-1, 1\} & v = 0\\
\end{array}
\right.
\label{eq:reward}
\end{equation}
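Since this rule serves as the oracle against which the human-sourced advice is later scored, it can be written directly as a small function; a sketch of Eq.~\eqref{eq:reward}, with the tie at zero velocity broken at random:
\begin{verbatim}
import random

def optimal_action(velocity):
    """Oracle policy: accelerate in the current direction of travel."""
    if velocity > 0:
        return +1                      # keep accelerating right
    if velocity < 0:
        return -1                      # keep accelerating left
    return random.choice([-1, +1])     # random direction at zero velocity
\end{verbatim}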
The agent controlling the car has three actions to choose from in any state: to accelerate left, to accelerate right, or not to accelerate at all.
The graphical representation of these possible actions is shown in Figure \ref{fig:MountainCarActions}.
At each step, the agent receives a reward of $-1$, and no reward when reaching the goal state.
This reward encourages the agent to reach the goal in as few steps as possible to maximise the reward.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/MountainCarActions.pdf}
\caption{A graphical representation of the Mountain Car agent.
The entire rectangle (blue and red) represents the car.
The blue box indicates which action the agent has chosen to perform, either to accelerate left, to accelerate right, or not to accelerate at all and continue moving in its current direction of travel.}
\label{fig:MountainCarActions}
\end{figure}
The agent's state consists of two state variables, position and velocity, which are represented as real numbers.
The position variable $p$ represents the agent's position within the environment and ranges from $-1.2$ to $0.6$, i.e. $p \in [-1.2, 0.6]$, with the lowest point of the environment residing at $-0.53$.
The velocity of the agent $v$ ranges from $-0.07$ to $0.07$, i.e. $v \in [-0.07, 0.07]$.
A velocity greater than zero indicates the agent is travelling to the right or increasing its position.
If the agent collides with the edge of the environment on the left ($p=-1.2$) then the agent's velocity is set to 0.
In this work, the RL agent utilises discrete state variables.
Therefore, twenty bins for each state variable have been used, creating a total of 400 ($20\times20$) states.
Of these 400 states, some may never be visited by the RL agent; for example, it is impossible for the agent to be at the top of the left mountain ($p=-1.2$) while having a high positive velocity ($v=0.07$).
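A sketch of this discretisation, mapping the continuous state variables onto the $20\times20$ grid (the bin boundaries are an assumption; only the ranges and bin counts are taken from the text):
\begin{verbatim}
N_BINS = 20
P_MIN, P_MAX = -1.2, 0.6      # position range
V_MIN, V_MAX = -0.07, 0.07    # velocity range

def discretise(position, velocity):
    """Map a continuous (position, velocity) pair to one of 400 states."""
    p_bin = min(int((position - P_MIN) / (P_MAX - P_MIN) * N_BINS),
                N_BINS - 1)
    v_bin = min(int((velocity - V_MIN) / (V_MAX - V_MIN) * N_BINS),
                N_BINS - 1)
    return p_bin * N_BINS + v_bin      # single index in [0, 399]
\end{verbatim}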
\subsection{Interaction with Experiment's Participants}
As indicated in the previous section, twenty persons with no experience in machine learning participated as trainers.
During the experiments, the agents were given a low learning rate, manually tuned to extend the time the agent would take to learn a suitable behaviour on its own.
This was chosen so that the focus would be on the human's input rather than on the agent's capabilities.
Both the evaluative and informative agents were given a learning rate of $\alpha=0.25$, a discounting factor of $\gamma=0.9$, and used an $\epsilon$-greedy action selection strategy with an epsilon of $\epsilon=0.05$.
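With these settings, the tabular update applied at every step is standard one-step Q-learning; a sketch with the stated hyperparameters (the table shape assumes the $400$-state, $3$-action discretisation above):
\begin{verbatim}
import numpy as np

ALPHA, GAMMA, EPSILON = 0.25, 0.9, 0.05
Q = np.zeros((400, 3))        # 400 discretised states, 3 actions

def select_action(state):
    """Epsilon-greedy action selection over the tabular Q-function."""
    if np.random.rand() < EPSILON:
        return np.random.randint(3)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """One-step Q-learning update with the hyperparameters above."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])
\end{verbatim}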
In order to prevent the participants from becoming too familiar with the environment and biasing the training, each person ran only one learning episode.
The Mountain Car environment has been chosen since an optimal policy for the problem is known.
Having a known optimal policy for the environment allows the accuracy of the human-sourced information to be measured.
Additionally, the Mountain Car problem has a low state and action space, allowing for the humans to observe the impact of their interactions relatively quickly, as the agent is likely to encounter the same state-action pairs frequently.
At the beginning of the experiments, the script given to the participants for describing the optimal solution is outlined below:
\begin{quote}
``The car [agent] begins at the bottom of the valley, between two mountains.
The car aims to drive to the top of the mountain on the right side.
However, the car does not have the power to drive directly up the mountainside; instead, it needs to build up momentum.
Momentum is gained by driving as high as possible on one side of the mountain, then turning around and accelerating in the opposite direction.
When the car reaches the highest point it can on the opposite side, the process is repeated.
Eventually, the car will gain enough speed to reach the top of the mountain.''
\end{quote}
\section{Results and Discussion}
In this section, we analyse the main results obtained from the experimental scenario.
First, we present user's self-evaluations in terms of the level of task understanding, engagement with the interactive agent, self-reported accuracy, and ability to follow advice.
Thereafter, we discuss the characteristics of the given advice, such as frequency, accuracy, and availability.
\subsection{User's Self-Evaluation}
As previously mentioned, before each participant began interacting with the agent, they were asked to answer two questions from the questionnaire (see Appendix A).
The purpose of the questionnaire is to assess the participants' understanding of the problem environment and their interactions with the agent.
The first question asked was whether the participant had previously been involved in a machine learning study.
None of the twenty participants reported being involved in a machine learning study previously.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/SelfUnderstanding.pdf}
\caption{Participants' self-reported level of understanding of the solution and dynamics of the Mountain Car environment.
The participants rated their understanding on a scale of 0 to 10 before and after assisting the agent. The standard deviation is shown over the bars for each approach and group.}
\label{fig:SelfUnderstanding}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/SelfEngagement.pdf}
\caption{Participants' self-reported level of engagement with the agent.
Participants reported that they (a) could have spent more time with the agent, (b) were happy with how much time they provided, or (c) spent too much time with the agent.
No significant differences are shown between the two groups.}
\label{fig:SelfEngagement}
\end{figure}
Participants were then provided with a brief explanation of the dynamics of the environment and of the optimal behaviour.
Subsequently, before starting the experiment, they were then asked to rate their level of understanding of the environment on a scale of 0 to 10.
After interacting with the agent, the participants were asked the same question again.
Figure \ref{fig:SelfUnderstanding} shows the average self-reported level of understanding from the two groups of participants, i.e., evaluative and informative groups, and both before and after the experiments.
Interestingly, there is a small difference in the participants' self-reported understanding of the environment before they began interacting with the agent.
The only difference in the explanation given to the two groups was the details on how they can interact with the agent.
The participants giving evaluative advice were asked to rate the agent's choice of action as good or bad, while the participants giving informative advice were asked to recommend an action, either left or right.
The difference in reported understanding before the experiment may indicate that evaluative advice delivery is easier to understand.
Additionally, a change in the participants' self-reported level of understanding is observed after the experiment.
Although the informative group shows a greater change in understanding than the evaluative group, this is due to its lower initial self-reported understanding.
After assisting the agent, the two groups reported a greater understanding of the environment, with no significant difference between them.
Moreover, after finishing the experiment, participants were also asked to report how they felt about their level of engagement with the agent.
They were given the following three options:
\begin{enumerate}[label=(\alph*)]
\item I could have spent more time interacting with the agent.
\item I am happy with how much time I interacted with the agent.
\item I spent too much time interacting with the agent.
\end{enumerate}
Figure \ref{fig:SelfEngagement} shows the participants' reported level of engagement with the agent indicating no significant difference between the two groups.
In both cases, the majority of participants were content with the level of engagement they had with the agent.
The participants were asked to report what they thought their level of accuracy was throughout the experiment.
Participants were given six different options to answer, ranging from always incorrect to always correct.
Figure \ref{fig:SelfAccuracy} shows the self-reported results.
The results obtained indicate that participants in the informative group were more confident in the advice they provided to the agent.
Finally, participants were asked to rate how well they thought the agent followed their advice.
On a scale from 0 (never), to 10 (always), participants scored the agent's ability to follow the given advice.
The obtained results, summarised in Figure \ref{fig:SelfFollowAdvice}, show that participants using informative advice perceived the agent as better able to follow advice when compared to participants using evaluative advice.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/SelfAccuracy.pdf}
\caption{Participants' self-reported level of advice accuracy.
Participants rated the accuracy of the advice they provided to the agent from `Always Incorrect' to `Always Correct'.
The informative group shows more confidence in the advice they give to the agent.
}
\label{fig:SelfAccuracy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/SelfFollowAdvice.pdf}
\caption{Average of participants' self-reported feeling of how well the agent followed the advice provided.
The participants scored the agent's ability to follow advice using a scale from 0 (never) to 10 (always).
The informative group perceives the agent to better follow the provided advice.
The standard deviation is shown over the bars for each approach.}
\label{fig:SelfFollowAdvice}
\end{figure}
We computed Student's t-test to assess the statistical difference between the self-reported results from the two groups.
Table~\ref{table:SelfTstudent} shows the obtained t-scores along with the p-values for the level of understanding of the environment, before and after the experiment, as well as the reported agent's ability to follow advice.
At the conventional $0.05$ level, none of these differences is statistically significant: the gap in self-reported understanding of the environment prior to the experiments is only marginal ($p = 0.0541$), the perceived agent's ability to follow the provided advice does not reach significance ($p = 0.1146$), and the difference in self-reported understanding after the experiments is clearly not significant, confirming the results reported in Figure \ref{fig:SelfUnderstanding}.
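For reference, the reported scores can be reproduced with a standard two-sample Student's t-test, e.g. via SciPy (whose \texttt{ttest\_ind} assumes equal variances by default); the arrays below are hypothetical placeholders for the two groups' questionnaire responses, which are not reproduced here.
\begin{verbatim}
from scipy import stats

# Hypothetical placeholder scores (one per participant, n = 10 per group).
evaluative_scores  = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
informative_scores = [8, 7, 9, 8, 7, 8, 9, 7, 8, 8]

t_score, p_value = stats.ttest_ind(evaluative_scores, informative_scores)
print(f"t = {t_score:.4f}, p = {p_value:.4f}")
\end{verbatim}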
\begin{table}[bt]
\centering
\caption{Student's t-test for comparison of self-reported results for evaluative and informative advisors.}
\label{table:SelfTstudent}
\begin{tabular}{lcc}
\hline
\textbf{Evaluation} & \textbf{t-score} & \textbf{p-value}\\
\hline
Understanding of the environment (before) & $t = 2.0608$ & $p = 0.0541$\\
Understanding of the environment (after) & $t = 0.3943$ & $p = 0.6980$\\
Agent's ability to follow advice & $t = 1.6584$ & $p = 0.1146$\\
\hline
\end{tabular}
\end{table}
\subsection{Characteristics of the Provided Advice}
From the assistance provided to the agent, we kept a record of the number of interactive steps and their percentage relative to the total number of steps.
Figure \ref{fig:AverageInteractiveEpisodes} displays the number of steps that each set of participants interacted with the agent to provide assistance.
In the boxplot, the cross marker represents the mean, dots are outliers, and the quartile calculation uses exclusive median.
Overall, the two groups combined provided advice in 9.15 steps on average; however, the data collected show a large variation in engagement between the two types of advice.
Participants providing informative advice assisted over twice as many steps as participants providing evaluative advice.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{img/AverageInteractiveEpisodes3.pdf}
\caption{Number of steps in which participants provided advice to the learner agent in the Mountain Car environment.
The number of interactive steps is more than twice as high for participants providing informative advice as for those providing evaluative advice.
}
\label{fig:AverageInteractiveEpisodes}
\end{figure}
As demonstrated in previous work~\cite{griffith2013policy}, agents assisted by informative advice learn more quickly than agents assisted by evaluative advice.
The increase in learning speed results in fewer steps per episode for environments with a termination condition.
This decrease in steps per episode for informative-assisted agents gives fewer opportunities for the user to provide advice, as only one interaction may occur each step.
As a result, the number of interactions per episode is not necessarily a suitable measure of engagement.
Therefore, the number of steps in which interaction occurred, relative to the total number of steps, is used to measure engagement.
Figure \ref{fig:AverageInteractiveRate} shows the interaction rate as a percentage for the two sets of participants.
As before, the boxplot uses cross markers to represent the mean and exclusive median for quartile calculation.
The interaction percentage is the ratio of interactions to interaction opportunities.
Using this measurement, the length of the episode is disregarded.
The results show that participants using an informative advice delivery method interact almost twice as often as their evaluative counterparts.
Despite the higher rate of interaction shown by participants using informative advice, both groups self-reported they were happy with their level of engagement with the agent, as shown in Figure \ref{fig:SelfEngagement}.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{img/AverageInteractiveRate2.pdf}
\caption{Percentage of steps in which participants provided advice to the learner agent on the Mountain Car problem.
The percentage is computed as the ratio of interactions to interaction opportunities.
The informative advice rate is almost twice as high in comparison to evaluative advice.
}
\label{fig:AverageInteractiveRate}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{img/AccuracyAdviceProvided2.pdf}
\caption{The percentage of interactions in which the advice provided was optimal for the state-action pair.
Participants providing informative advice were around twice as accurate as participants using evaluative advice, and showed less variability.
}
\label{fig:AccuracyAdviceProvided}
\end{figure}
While training the agent, the availability and accuracy of the provided assistance by the advisors were recorded.
Figure \ref{fig:AccuracyAdviceProvided} displays the accuracy percentage of the advice provided by each of the groups of participants.
Cross markers represent the mean and exclusive median is used for quartile calculation.
An accurate interaction is one that provided the optimal advice for the agent in that state.
Therefore, accuracy is a measurement of the number of correct interactions compared to the total interactions.
Informative interactions are almost twice as accurate as evaluative interactions and also show much less variability.
These results also reflect the self-reported level of advice accuracy shown in Figure \ref{fig:SelfAccuracy}.
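Given the oracle policy sketched earlier, this accuracy measure reduces to a simple comparison over the logged interactions; a sketch, where the interaction log format is an assumption:
\begin{verbatim}
def advice_accuracy(interactions):
    """Fraction of logged interactions whose advice was optimal.

    `interactions` is a hypothetical log of (velocity, advised_action)
    pairs; either direction counts as optimal at zero velocity.
    """
    def is_optimal(velocity, advised):
        if velocity > 0:
            return advised == +1
        if velocity < 0:
            return advised == -1
        return True

    if not interactions:
        return 0.0
    correct = sum(is_optimal(v, a) for v, a in interactions)
    return correct / len(interactions)
\end{verbatim}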
We also computed Student's t-test to assess the statistical difference between the two groups in terms of the advice provided.
Table~\ref{table:AdviceTstudent} shows the obtained t-scores along with the p-values for the average interactive steps, the average interactive rate, and the accuracy of the advice provided.
While the difference between the two groups is statistically significant for the average interactive steps ($p = 0.0370$) but not for the average interactive rate ($p = 0.1097$), it is clearest for the accuracy of the advice provided, given the extremely low p-value.
\begin{table}[bt]
\centering
\caption{Student's t-test for comparison of the provided advice from evaluative and informative advisors.}
\label{table:AdviceTstudent}
\begin{tabular}{lcc}
\hline
\textbf{Evaluation} & \textbf{t-score} & \textbf{p-value}\\
\hline
Average interactive steps & $t = 2.2530$ & $p = 0.0370$\\
Average interactive rate & $t = 1.6828$ & $p = 0.1097$\\
Accuracy of the advice provided & $t = 14.5772$ & $p = \num{2.0778e-11}$\\
\hline
\end{tabular}
\end{table}
One hypothesis for the large difference in accuracy is latency.
In this context, latency is the time it takes for the human to decide on the advice to provide, and then input it into the agent.
It is possible that if the human is too late in providing advice, then the advice will inadvertently be provided to the state after the one intended.
For the Mountain Car environment, a late interaction is more likely to remain accurate in the next state for informative advice than it is for evaluative advice.
This is due to the layout of the state-space and the nature of untrained agents.
The optimal action for a state in the Mountain Car environment is likely to be the same as its neighbouring states.
This is due to the optimal behaviour being to accelerate in a single direction until velocity reaches 0.
Therefore, a recommended action that is received in the state after the one intended is likely to be the correct action, regardless of latency.
This does not apply to evaluative advice.
The participants assisting the evaluative agent do not provide a recommended action, instead, they critique the agent's last choice.
An untrained agent has a largely random action selection policy and is therefore not likely to choose the same action twice in a row.
As the agent's chosen action may have changed by the time it receives advice from the user, the accuracy is more affected.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/AgentAccuracy.pdf}
\caption{State-based accuracy of (a) informative and (b) evaluative participants for the Mountain Car environment.
Informative advice is in general more accurate than evaluative advice, except in states in the middle of the environment, where the optimal action changes.
Latency has a greater effect on evaluative advice, since there is a low probability that delayed advice is still useful in the next state.}
\label{fig:AgentAccuracy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{img/AdviceAvailability.pdf}
\caption{State-based availability of (a) informative and (b) evaluative participants for the Mountain Car environment.
Participants using informative advice achieved higher velocities in the environment, and as a consequence, more states were visited in comparison to the evaluative advice approach.
}
\label{fig:AdviceAvailability}
\end{figure}
This hypothesis is supported by the state breakdown of the advice accuracy.
Figure \ref{fig:AgentAccuracy} shows the accuracy of participants' advice for each state in the environment for (a) informative and (b) evaluative advice respectively.
The darker the colour, the more accurate the advice supplied by the participants for that state.
The comparison of the two heatmaps supports the earlier observations of the accuracy shown in Figure \ref{fig:AccuracyAdviceProvided}; informative is much more accurate than evaluative advice.
The informative advice method (Figure \ref{fig:AgentAccuracy}a) shows that the states with the most inaccuracy are in the middle of the environment, where the optimal action changes.
This inaccuracy is likely not due to poor participant knowledge, but rather providing delayed advice, after the agent has moved beyond the centre threshold.
The evaluative advice method (Figure \ref{fig:AgentAccuracy}b) shows that accuracy differs wildly across the environment and does not have an obvious pattern.
The poor result for accuracy of evaluative advice is likely due to the latency of advice delivery coupled with the lower probability that the advice will still be accurate to the following state compared to informative advice.
Additionally, evaluative advice may have lower accuracy as it requires the human to assess each state-action pair.
On the other hand, informative advice may require less time to assess each state, as the human may be following a set of rules for action recommendation, and the next state is easier to predict than the agent's next action choice.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{img/EvaluativeAdviceBias2.pdf}
\caption{Reward bias of evaluative advice.
Above 50\% means that the advisor provided more positive evaluation than negative evaluation.
In general, participants provided much more positive advice, confirming prior findings that people are more likely to provide feedback on actions they view as correct than on incorrect actions.}
\label{fig:EvaluativeAdviceBias}
\end{figure}
Figure \ref{fig:AdviceAvailability} shows the availability of participants' advice for each state in the environment for (a) informative and (b) evaluative advice respectively.
Availability in this context is a measure of how often the user provides advice in a state compared to the number of times the agent visited the state.
The darker a state is on the heatmaps, the more often the user provides advice for that state.
The agent that was assisted by informative advice (Figure \ref{fig:AdviceAvailability}a) was able to achieve higher velocities in the environment, and as a result, visited more states in comparison to the evaluative advice method (Figure \ref{fig:AdviceAvailability}b).
One pattern that can be observed in the results is that the states on the edges show higher advice availability than those in the centre.
These edge states are visited when the agent has learned a suitable behaviour, making the evaluation and recommendation of actions easier on the user, and increasing engagement.
The edge states tend to be the last states the users provided advice, before voluntarily ending the experiment.
Finally, we tested the presence of the reward bias of the participants providing evaluative advice as it has been reported in existing literature~\cite{amershi2014power}.
In this regard, a deviation from fifty percent indicates reward bias, i.e., above $50\%$ means that the advisor provided more positive evaluation than negative evaluation.
On average, participants provided positive advice $66.22\%$ of the time, with individual rates of positive evaluations ranging from a minimum of $57.14\%$ to a maximum of $100.00\%$.
Figure \ref{fig:EvaluativeAdviceBias} shows the reward bias of the participants providing evaluative advice.
The results collected show that all participants provided more positive evaluation than negative evaluation.
\section{Conclusions}
The human trial performed in this work has investigated the engagement of human advice-givers when assisting interactive reinforcement learning agents.
The assessment was performed using two methods for providing assistance, namely, evaluative and informative advice.
Evaluative advice assesses the past performance of an agent, while informative advice supplements future decision-making.
Previous work in the field has compared the performance of interactive reinforcement learning agents under the influence of each assistance method, finding that informative-assisted agents learn faster.
However, to the best of our knowledge, studies on human engagement when providing advice using each assistance method have not been performed.
The results obtained from the human trial show that advice-givers providing informative advice outperformed those that used evaluative advice.
Humans using an informative advice-giving method demonstrated more accurate advice, assisted the agent for longer, and provided advice more frequently.
Additionally, informative advice-givers rated the ability of the agent to follow advice higher, perceived their own advice to be of higher accuracy, and were similarly content with their engagement with the agent as the evaluative advice-giving participants.
Future work will consider the use of simulated users as a method of replicating the general behaviour of participants from this experiment.
Including simulated users would allow experiments to run faster, keep experimental conditions under control, and repeat the process as many times as needed.
The findings from this study can be used to create simulated users which more closely reflect the behaviour of actual human advisers.
\section*{Acknowledgments}
This work has been partially supported by the Australian Government Research Training Program (RTP) and the RTP Fee-Offset Scholarship through Federation University Australia.
\bibliographystyle{ieeetr}
|
1,116,691,500,402 | arxiv | \subsection{Polymer Property Calculation for Automated Training Dataset Generation}
For creating the training dataset, we have collected representative homo-polymer names in the IUPAC nomenclature standard, from multiple polymer classes\cite{polyinfowebsite}. We have then converted their individual monomer unit names to SMILES format (with their head and tail units tagged) using the Open Parser for Systematic IUPAC nomenclature - OPSIN\cite{opsinwebsite}. Based on our analysis of the gas separation process, we have selected three suitable figures of merit, or target properties, for polymer membranes: glass transition temperature ($\displaystyle T_g$ in K), half-decomposition temperature ($\displaystyle T_{d,1/2}$ in K), and permeability ($\displaystyle P$) for $\displaystyle {CO_2}$ (in Barrer). $\displaystyle T_g$ is the temperature above which segmental motions of polymer chains occur such that they negatively affect a polymer membrane's mechanical stability. $\displaystyle T_g$ also defines the transition limit between glassy and rubbery polymers (temperature below and above $\displaystyle T_g$, respectively). Glassy polymers dominate the Robeson upper bound\cite{Robeson2008,Robeson2015} due to a higher solubility coefficient\cite{Robeson2015}, or, in other words, better selectivity. However, rubbery polymers have lower solubility and higher diffusion coefficients\cite{Robeson2015}, i.e., higher permeability and lower selectivity. Similarly, $\displaystyle T_{d,1/2}$, defined as the temperature at which the loss of weight during pyrolysis (at a constant rate of temperature rise) reaches 50 percent of its final value, should be reasonably high, as it is a measure of chemical stability. A high permeability for $\displaystyle {CO_2}$ is desirable as a measure of the gas flux through the membrane. However, it is limited by a trade-off with the membrane's selectivity $\displaystyle P_{CO_2}/P_{N_2}$. For creating the training dataset, we have collected literature data for $\displaystyle P_{CO_2}$ and combined it with calculated data for $\displaystyle T_{d,1/2}$ and $\displaystyle T_g$.
Calculation of $\displaystyle T_{d,1/2}$:
Best structure-property correlations were established with the first-order (bond) connectivity index $^1\chi^V$, the number of hydrogen atoms $N_H$, and the number of vertices $N_{vertices}$ in the hydrogen-suppressed graph representation of a polymer's monomer\cite{jozefbook}. The functional relation for $T_{d,1/2}$ was obtained through a linear regression against the best correlation descriptors:
\begin{equation} \label{eq:tdhalf}
T_{d,1/2}=1000\,(7.17N_{vertices}-2.31N_H+12.52\,^1\chi^V)/M_m
\end{equation}
\noindent where $M_m$ is the molar mass of the monomer unit.
Fig.\ref{fig2}b displays the calculation steps performed by the PPPE engine for poly(vinyl butyral). Starting with the (i) hydrogen-suppressed graph representation of poly(vinyl butyral) monomer and its (ii) alternative representation with the square brackets not intersecting the bonds, the (iii) valence connectivity indices $\delta^V$ in the vertices and the (iv) bond indices $\beta^V$ in the edges are calculated according to eq.\ref{eq:deltav} and \ref{eq:betav}, respectively:
\begin{equation} \label{eq:deltav}
\delta^V=\frac{Z^V-N_H}{Z-Z^V-1}
\end{equation}
\begin{equation} \label{eq:betav}
\beta^V_{ij}=\delta^V_i\delta^V_j
\end{equation}
\noindent where $Z^V$ is the number of valence electrons of an atom, $N_H$ is the number of hydrogen atoms bonded to it, and $Z$ is its atomic number. $\beta^V_{ij}$ is the product of $\delta^V$ at the two vertices ($i$ and $j$) which define a given edge or bond.
The first-order (bond) connectivity index $^1\chi^V$ of the entire molecule is defined through the summation over the edges of the hydrogen-suppressed graph:
\begin{equation} \label{eq:chi1v}
^1\chi^V=\sum_{edges}\frac{1}{\sqrt{\beta^V}}
\end{equation}
By combining Eq.\ref{eq:tdhalf} and Eq.\ref{eq:chi1v}, counting the number of vertices and hydrogen atoms, and calculating the molar mass of poly(vinyl butyral), we obtain $T_{d,1/2}=646\,$K, which is in agreement with the experimental value of 645\,K\cite{jozefbook}.
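As a minimal, self-contained sketch of Eqs.~(\ref{eq:tdhalf})--(\ref{eq:chi1v}) (not the PPPE implementation itself), the snippet below evaluates $\delta^V$, $^1\chi^V$ and $T_{d,1/2}$ from a monomer SMILES string using RDKit; the example SMILES is hypothetical and the head/tail tagging is ignored for brevity.
\begin{verbatim}
# Minimal sketch of the T_{d,1/2} estimate from a monomer SMILES (RDKit).
from rdkit import Chem
from rdkit.Chem import Descriptors

pt = Chem.GetPeriodicTable()

def delta_v(atom):
    zv = pt.GetNOuterElecs(atom.GetAtomicNum())   # valence electrons Z^V
    return (zv - atom.GetTotalNumHs()) / (atom.GetAtomicNum() - zv - 1)

def t_d_half(smiles):
    mol = Chem.MolFromSmiles(smiles)              # hydrogen-suppressed graph
    n_vertices = mol.GetNumAtoms()
    n_h = sum(a.GetTotalNumHs() for a in mol.GetAtoms())
    chi1v = sum((delta_v(b.GetBeginAtom()) * delta_v(b.GetEndAtom())) ** -0.5
                for b in mol.GetBonds())          # first-order bond index
    return 1000.0 * (7.17 * n_vertices - 2.31 * n_h + 12.52 * chi1v) \
           / Descriptors.MolWt(mol)               # M_m: molar mass

print(t_d_half("C=Cc1ccccc1"))                    # hypothetical monomer SMILES
\end{verbatim}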
\subsection{Hyperparameter Optimization and Limited Discrepancy Search}
The procedure referred to as feature selection identifies a subset of features for achieving accurate predictions, rather than using the entire set of the original features\cite{Chandrashekar2014}. In other words, feature selection allows a machine learning algorithm to learn a model in a lower-dimensional space. The dimensionality reduction typically leads to computational performance enhancements.
Hyperparameter optimization (HPO) is also key for enhancing the model performance. There are many HPO algorithms available in the literature, including grid search and Bayesian Optimization, see for example, reference\cite{Yang2020}.
In theory, hyperparameter configurations are specific to the feature set used to train a machine learning model. One set of hyperparameter configurations that works well for one feature set might not be the best for another feature set. On the other hand, both feature selection and HPO typically require intensive computation. For example, given \textit{N} features, finding an optimal feature set requires examining up to $\displaystyle 2^N$ possible feature sets. For \textit{M} hyperparameters, each of which has \textit{b} configurations after its possible values are discretized, there are $\displaystyle b^M$ possible choices for the hyperparameter configurations. An optimal feature set and hyperparameter configuration therefore need to be found out of $\displaystyle 2^Nb^M$ combinations. In practice, feature selection and HPO are performed separately to reduce the computational overhead, e.g., HPO is performed after feature selection has produced an optimized feature set with default hyperparameter configurations. However, this approach might not yield a good combination of feature set and hyperparameter configurations.
We have optimized the average $\displaystyle R^2$ score of a three-fold cross-validation with 10 repeats. To that end, we have developed a systematic local search algorithm that simultaneously performs feature selection and HPO for a non-linear machine learning model. This approach leads to an optimized hyperparameter configuration specific to a selected feature set. To reduce the computational overhead, our approach focuses only on small, promising search spaces where optimized solutions are likely to occur.
We discretize the possible values for each hyperparameter and formulate feature selection and HPO as a variable-value assignment task. This means that each feature and each hyperparameter corresponds to a variable to which one value needs to be assigned. The variable for a feature is set to either true or false, while the variable for each hyperparameter is set to one of the discretized hyperparameter values.
Our approach is based on limited discrepancy search (LDS)\cite{Harvey1995,Korf1996}. The idea behind LDS has been studied in the artificial intelligence community and has found a variety of applications, e.g., in\cite{Takeda2020}. LDS starts with an initial solution, i.e., an initial variable-value assignment, and keeps refining it until a satisfactory solution is obtained.
Our current implementation calculates the initial solution passed to LDS as follows: It first calculates optimized hyperparameter configurations based on grid search with the whole feature set. With these hyperparameter configurations, it then computes the initial feature set based on so-called Sequential Backward Selection (SBS)\cite{Chandrashekar2014}. Our SBS implementation starts with the whole feature set and repeats a greedy elimination of the one feature whose removal improves the score most, until no further improvement is obtained.
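A minimal sketch of this elimination loop is given below, assuming a scikit-learn regressor and the repeated three-fold cross-validation described above; the model choice and names are illustrative, not our production code.
\begin{verbatim}
# Minimal sketch of Sequential Backward Selection (SBS): repeatedly drop the
# single feature whose removal most improves the cross-validated R^2 score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

def cv_score(X, y, feats):
    cv = RepeatedKFold(n_splits=3, n_repeats=10, random_state=0)
    model = RandomForestRegressor(random_state=0)
    return cross_val_score(model, X[:, feats], y, cv=cv, scoring="r2").mean()

def sbs(X, y):
    feats = list(range(X.shape[1]))
    best = cv_score(X, y, feats)
    while len(feats) > 1:
        trials = {f: cv_score(X, y, [g for g in feats if g != f])
                  for f in feats}
        f_drop = max(trials, key=trials.get)
        if trials[f_drop] <= best:       # no elimination improves the score
            break
        best = trials[f_drop]
        feats.remove(f_drop)
    return feats, best
\end{verbatim}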
The solution refinement step of LDS consists of a series of local search controlled by the notion of \textit{discrepancy}. Given the current best solution \textit{bs}, LDS assumes that a better solution exists in a search space whose solutions are similar to \textit{bs}. In our implementation, the discrepancy for a solution \textit{s} is defined as the number of variables whose assigned values have differences between \textit{bs} and \textit{s}. A smaller discrepancy indicates that \textit{s} is more similar to \textit{bs}.
LDS introduces a discrepancy threshold \textit{d} and performs local search in an iterative manner. After setting \textit{bs} to the initial solution calculated by SBS, LDS performs depth-first search with \textit{d}=1 and attempts to find a better solution than \textit{bs} in a search space where solutions are located that have a different value than \textit{bs} only for one variable. If no better solution is found, LDS increments \textit{d} and performs local search with \textit{d}=2. If no better solution is found again, LDS performs search with \textit{d}=3, and so on. If a better solution is found, LDS resets \textit{d}=1 and \textit{bs} to the better solution and restarts a local search with \textit{d}=1. LDS repeats these steps until the allocated time is used up or \textit{d} reaches a preset, maximum value.
There are several implementation choices for LDS to select the next variable for updating its value. Before performing a new iteration of local search, our current implementation orders variables in ascending order of the following formula: $\displaystyle w_1v$(\textit{x}) + $\displaystyle w_2u$(\textit{x}), where $\displaystyle w_1$ and $\displaystyle w_2$ are constants, $\displaystyle v$(\textit{x}) is the number of times variable \textit{x} is selected in local search, and $\displaystyle u$(\textit{x}) is the number of times variable \textit{x} fails to improve \textit{bs}. This formula attempts to keep unchanged the values of the variables that have contributed to improving the score, as well as to prioritize the variables that have not yet been explored sufficiently. For the purpose of this study, we have chosen $\displaystyle w_1$=2 and $\displaystyle w_2$=1.
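The refinement loop can be summarized by the simplified sketch below, which assumes a generic \texttt{score()} over complete variable-value assignments and discretized \texttt{domains}; the variable-ordering heuristic above and the time budget are omitted, and trials in which some chosen variables keep their value (effectively a lower discrepancy) are tolerated for brevity.
\begin{verbatim}
# Simplified sketch of the LDS refinement loop. 'domains' maps each variable
# (feature flag or hyperparameter) to its candidate values; 'score' evaluates
# a complete assignment, e.g., via repeated cross-validation.
from itertools import combinations, product

def lds(best, domains, score, d_max=3):
    best_score, d = score(best), 1
    while d <= d_max:
        improved = False
        for variables in combinations(domains, d):
            for values in product(*(domains[v] for v in variables)):
                if all(best[v] == val for v, val in zip(variables, values)):
                    continue                  # no discrepancy at all
                trial = dict(best)
                trial.update(zip(variables, values))
                s = score(trial)
                if s > best_score:
                    best, best_score, improved = trial, s, True
                    break
            if improved:
                break
        d = 1 if improved else d + 1          # reset to d = 1 on improvement
    return best, best_score
\end{verbatim}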
In Supplemental Fig.\ref{supfig1}, we show a comparison of regression results obtained with and without the application of hyperparameter optimization.
\subsection{Feature Vector Optimization}
Based on graph theory and atomic configurations, there exist multiple feature types which can be combined for application of machine learning models, among them the number of heavy atoms, number of rings, substructures, fingerprints, Coulomb matrix, dipole moment, potential energy and experimental conditions\cite{Takeda2020}.
By using Eq.\ref{eq:1}, we estimate feature vector values fv based on a target property value $\displaystyle v_p$ and a regression model $\displaystyle f_p$ by minimizing the score of each feature vector $\displaystyle v$. More specifically, the minimization is performed over the square error of the estimated value which is normalized by the prediction variance $\displaystyle \sigma_p^2$ to which a penalty function is added to account for violations of structural constraints. The violation of structural constraints is evaluated by means of the realizability of a molecular structure connected by sub-structures in the corresponding feature vector:
\begin{equation} \label{eq:1}
\newcommand{\mathop{\rm arg~min}\limits}{\mathop{\rm arg~min}\limits}
{\rm fv} = \mathop{\rm arg~min}\limits_{v \in I^n} \{ \frac{|v_p - f_p(v)|^2}{\sigma_p^2} + {\rm violation}(v) \}
\end{equation}
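As a minimal sketch of evaluating Eq.~(\ref{eq:1}) for a candidate feature vector (the regressor \texttt{model}, the variance \texttt{sigma\_p2} and the penalty \texttt{violation} are assumed to come from the trained regression stage and the structural-constraint checker; all names are illustrative):
\begin{verbatim}
# Minimal sketch of the inverse-design objective: score a candidate integer
# feature vector v against a target property value v_p.
import numpy as np

def objective(v, v_p, model, sigma_p2, violation):
    pred = model.predict(np.asarray(v).reshape(1, -1))[0]
    return abs(v_p - pred) ** 2 / sigma_p2 + violation(v)

# e.g., pick the best vector from an enumerated candidate pool (hypothetical):
# fv = min(candidates, key=lambda v: objective(v, v_p, model, s2, violation))
\end{verbatim}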
\subsection{Generative Molecular Design}
The Molecular-Customized McKay’s Canonical Construction Path Algorithm creates molecular structures efficiently, exhaustively, and without isomorphic duplication, i.e., no two generated structures are identical up to a relabeling of their vertices. A simplified version of the algorithm with an idealized construction example is visualized in Supplemental Fig.\ref{supfig2}a. Based on the root molecule graph, one vertex is extended from each orbit of the automorphism group. The graph is grown by performing a generation step to add a new vertex to extendable vertices of an existing graph, starting from an initial single vertex. At each generation step, a canonical labeling step of the current graph is performed. The labeling algorithm assigns ordinals to all the vertices of the isomorphic graphs, providing a unique vertex addition or construction order for obtaining the graph. If a new vertex coincides with the last vertex in the construction order, the generation step continues. Otherwise, the generation step terminates - it is pruned - as its construction path generates a duplicate, isomorphic graph. Note that the vertex here corresponds to a molecular structure and that adding a new vertex means adding an atom or a sub-structure in this application.
The advanced version of the generative algorithm \cite{Hama2020,Takeda2022} incorporates user-customized design constraints such as, for example, expected or unexpected sub-structures in SMILES format, and the inverse-designed feature vectors such as, for example, the number of heavy atoms, rings, and occurrences of fragment structures, and then converts them into molecular structures. Constraint functions capture design rules such as, for example, disallowing triple bonds between carbon atoms, limiting the number of molecular rings in the structure to between 4 and 9, or including preferential molecular substructures. For the purpose of this study, all constraints have been merged with the extracted feature vectors and best regression models for subsequent iterations of optimized structure generation.
An example with a ring of six atoms is shown in Supplemental Fig.\ref{supfig2}b. In a first step, the orbits of the automorphism group are obtained from the SMILES representation of a given sub-structure. We then create the isomorphic equivalent graph by replacing the atom name with the SMILES name and the minimum index number (indices 1 and 3). In this step, those vertices without "free hand" are eliminated which helps identifying the symmetry of the graph. For better handling, the isomorphic equivalent graph is further simplified to a single vertex representation by selecting vertices with minimum index number in each orbit, whereas other vertices are replaced by dummy atoms. Finally, we obtain the orbits of the automorphism group and the minimum index number of each orbit is selected to be an extending vertex of the sub-structure.
Supplemental Fig.\ref{supfig2}c shows a construction example. During the generation of a molecular structure as a colored graph (graph of various atoms) and by adding a vertex with a connecting edge one by one, the algorithm minimizes the number of vertices in order to improve the performance of the canonical labeling which is a bottleneck routine of the process. In the root graph, an extending vertex which has a minimum label in an orbit of an automorphism is considered to be a single vertex graph. In order to extend the vertices, it is replaced by an isomorphic equivalent representation. The new vertex is extended and canonical labeling of the entire graph is performed. Once the canonical construction path is validated, the original representation will be recovered. The new structure will be tested against the pre-defined design constraints. The cycle repeats until it fulfills pre-set requirements such as number of generated results with pre-defined target property values.
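To convey the essential idea of duplicate-free generation, the toy sketch below grows unlabeled graphs one vertex at a time and prunes isomorphic duplicates by brute force with networkx; this brute-force check stands in for the far more efficient canonical-labeling test of McKay's algorithm, and all chemistry-specific constraints (atom types, valences, design rules) are omitted.
\begin{verbatim}
# Toy illustration of exhaustive, duplicate-free graph growth. A brute-force
# isomorphism test stands in for the canonical construction path pruning.
import networkx as nx

def is_duplicate(g, seen):
    return any(nx.is_isomorphic(g, h) for h in seen)

def grow_trees(n_max):
    level = [nx.path_graph(1)]           # start from a single vertex
    for _ in range(n_max - 1):
        nxt = []
        for g in level:
            n = g.number_of_nodes()
            for v in g.nodes:            # extend from every existing vertex
                h = g.copy()
                h.add_edge(v, n)         # attach one new vertex
                if not is_duplicate(h, nxt):
                    nxt.append(h)
        level = nxt
    return level

# Counts of non-isomorphic trees on 1..7 vertices: 1, 1, 1, 2, 3, 6, 11
print([len(grow_trees(k)) for k in range(1, 8)])
\end{verbatim}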
\subsection{Computational Representation of Polymer Membrane}
For the physical validation of AI-predicted CO$_2$-permeability, we have created a method to automatically design a polymer membrane representation which is suitable for molecular dynamics simulation, see right box in Fig.\ref{fig1}. In a first step, the SMILES strings of AI-designed monomers are indexed to indicate the position of head and tail atoms so they can be used as input for PySIMM\cite{Fortunato2017,pysimmwebsite}. We have then used the Force Field Assisted Linear Self-Avoiding Random Walk application in PySIMM\cite{Fortunato2017} to build a linear polymer chain with a maximum number of about 800 heavy atoms, which are defined as atoms other than hydrogen. This way, we have kept the length of the polymer chain rather constant, independent of the monomer size. For describing the interactions between intra-chain and inter-chain atoms, we have used the DREIDING force field\cite{Mayo1990}.
Once the chain building step is completed, PySIMM saves the LAMMPS\cite{Plimpton1995} topology file with the associated force field parameters. Then, the polymer chains are packed into a 3D box using Packmol\cite{Martnez2009}. The 3D simulation box is periodic in the x, y, z directions. We are aware of the limited accuracy of applying force-field parameters generated automatically by PySIMM for polymer modeling, and OPLS-AA parameters\cite{Jorgensen1996} can be adopted for improved accuracy. For defining the membrane thickness, the z dimension of the box is set to 6 nm. The dimensions of the box in x and y are defined by a multiplication factor of the polymer chain size. The number of polymer chains is defined by the total number of atoms in the polymer membrane - 20,000 in the present case. To keep the membrane thickness in the z-direction fixed at 6 nm, rigid walls are placed in the x, y membrane planes. To avoid interactions between periodic images in the z-direction, a vacuum layer with a thickness of 5 nm is placed at each side of the polymer membrane.
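For concreteness, a condensed sketch of the chain-building step is given below; it assumes the pysimm interfaces used in that package's published examples (\texttt{system.read\_mol}, \texttt{forcefield.Dreiding}, \texttt{apps.random\_walk}), and the file names and parameters are illustrative rather than those of our production pipeline.
\begin{verbatim}
# Condensed sketch of building one linear chain with pysimm (illustrative
# file names; force-field typing and head/tail linker tagging are assumed
# to be handled as in the pysimm examples).
from pysimm import system, forcefield
from pysimm.apps.random_walk import random_walk

mono = system.read_mol('monomer.mol')     # AI-designed monomer, head/tail tagged
ff = forcefield.Dreiding()
mono.apply_forcefield(ff, charges='gasteiger')

n_heavy = sum(1 for p in mono.particles if p.elem != 'H')
nmon = max(1, 800 // n_heavy)             # target ~800 heavy atoms per chain
chain = random_walk(mono, nmon, forcefield=ff)
chain.write_lammps('polymer_chain.lmps')  # LAMMPS topology + parameters
\end{verbatim}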
The system then undergoes an equilibration process that consists of a nine-step compression-relaxation sequence, similar to the approach in reference\cite{Kong2019}: (1) energy minimization with isothermal and isochoric (NVT) MD simulation at 1 K for 100 ps, (2) NVT MD simulation at 300 K for 100 ps, (3) isothermal and isobaric (NPT) MD simulation at 300 K and 1 atm for 100 ps, (4) NPT MD simulation at 300 K and from 1 atm to 3000 atm for 100 ps, (5) NPT MD simulation at 300 K and 3000 atm for 300 ps, (6) NVT MD simulation at 800 K for 100 ps, (7) NVT MD simulation at 300 K for 100 ps, (8) NPT MD simulation at 300 K and 1000 atm for 300 ps, where steps (6)-(8) are repeated 30 times, and (9) NPT MD simulation at 300 K and 1 atm for 10,000 ps.
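This schedule can be driven, for example, through the LAMMPS Python interface; the sketch below encodes the nine steps with the damping constants quoted in the next paragraph (LAMMPS \texttt{real} units: femtoseconds and atmospheres). The force-field style definitions, walls and PPPM setup are omitted, and the data file name is illustrative.
\begin{verbatim}
# Sketch of the nine-step compression-relaxation schedule via the LAMMPS
# Python interface (setup of styles, walls and PPPM omitted).
from lammps import lammps

lmp = lammps()
lmp.command('units real')                 # fs, atm
# ... DREIDING pair/bond/angle style definitions would go here ...
lmp.command('read_data polymer_membrane.data')
lmp.command('timestep 1.0')               # 1 fs

def nvt(T, ps):
    lmp.command(f'fix eq all nvt temp {T} {T} 100.0')   # Tdamp = 0.1 ps
    lmp.command(f'run {int(ps * 1000)}')
    lmp.command('unfix eq')

def npt(T, p0, p1, ps):                   # Pdamp = 1 ps
    lmp.command(f'fix eq all npt temp {T} {T} 100.0 iso {p0} {p1} 1000.0')
    lmp.command(f'run {int(ps * 1000)}')
    lmp.command('unfix eq')

nvt(1, 100); nvt(300, 100)                               # steps (1)-(2)
npt(300, 1, 1, 100); npt(300, 1, 3000, 100)              # steps (3)-(4)
npt(300, 3000, 3000, 300)                                # step (5)
for _ in range(30):                                      # steps (6)-(8) x 30
    nvt(800, 100); nvt(300, 100); npt(300, 1000, 1000, 300)
npt(300, 1, 1, 10000)                                    # step (9)
\end{verbatim}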
To account for long-range electrostatic interactions, we have adopted the reciprocal-space Particle-Particle Particle-Mesh (PPPM) method. For all calculations, we have used a 1 fs time step and a cutoff radius of 1.4 nm for both van der Waals and Coulomb interactions. To control temperature and pressure, we have used Nose-Hoover thermostats and barostats with relaxation times of 0.1 ps and 1 ps, respectively.
All MD simulations were carried out with the LAMMPS package\cite{Plimpton1995,Brown11,Brown12,Brown13}.
For further information regarding the effects of chosen force fields, chain lengths, membrane thicknesses and the equilibration process protocol, see Supplementary Information Figures \ref{supfig4} and \ref{supfig5}.
\subsection{Automated Membrane Validation with Molecular Dynamics Simulation}
Two types of Molecular Dynamics (MD) simulation methods have been used to investigate transport through membranes: Equilibrium MD (EMD) and Non-Equilibrium MD (NEMD). NEMD is ideally suited to represent an experimental membrane system in which an external driving force, such as a chemical potential or pressure gradient, is applied to the membrane. Specifically, we have chosen Constant Pressure Difference Molecular Dynamics (CPDMD), an NEMD approach, to evaluate membrane-based gas filtration\cite{Kong2019}.
For benchmarking purposes, as shown in Supplemental Fig.\ref{supfig3}, we have chosen representative homo-polymers covering a broad CO$_2$-permeability range. For six of these homo-polymers, we have performed five independent CPDMD simulations each, using the simulation box set-up of Fig.\ref{fig1}. The results are shown in Supplemental Fig.\ref{supfig6}. Overall, we obtain reasonable agreement with literature values for BZ-CF3, IBPA, PIM-PI-EA and PEO, despite the large error bars for BZ-CF3 and PEO. The simulated CO$_2$-permeabilities of TDA1-DM and PI-5 are higher than the literature values; however, one of the PI-5 samples is close to the experimental value. We note that due to the amorphous nature of polymers, both experimental and simulation results typically exhibit large error bars\cite{Kong2019}.
To set up a CPDMD simulation, we have placed the membrane at the center of the simulation box with a fixed, rigid wall at each side of the membrane, 10 nm away from its surface, as shown in Fig.\ref{fig1}. To avoid interactions with periodic images in the z-direction, we have placed a 5 nm vacuum layer beyond each rigid wall. The carbon atoms in the 5 \AA~ surface layer of the membrane were fixed in the z-direction by a harmonic potential with a force constant of 5.0 kcal/(mol \AA$^2$). Following \cite{Kong2019}, we have estimated the number of CO$_2$ molecules in the feed chamber using the ideal gas law $N_{CO2}=N_A pV/(RT)$, where $N_A$ is Avogadro's constant, $R$ is the gas constant, $p$ is the pressure set to 10 atm, $T$ is the temperature set to 300 K, and $V$ is the feed chamber volume, see Fig.\ref{fig1}. We have then performed NVT MD simulations at 300 K. Due to the pressure gradient, CO$_2$ molecules are absorbed within the membrane and, subsequently, transported to the permeate side. To maintain the same initial pressure gradient of 10 atm, we have added CO$_2$ molecules into the feed chamber while removing the molecules at the permeate side to produce a pseudo vacuum. We have run the addition/removal processes in cycles with a time interval of 200 ps following \cite{Liu2019}. We have used the DREIDING force field\cite{Mayo1990} for describing the interactions between intra-chain and inter-chain atoms. For CO$_2$ molecules, we have used the rigid model TraPPE force field\cite{Potoff2001}. For the CO$_2$/polymer LJ interactions, we have applied the Lorentz–Berthelot mixing rules. All MD simulations were performed with the LAMMPS package\cite{Plimpton1995,Brown11,Brown12,Brown13} using the same parameters described in the previous Methods subsection.
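The feed-chamber loading is a direct application of the ideal gas law above; a minimal sketch with an illustrative chamber volume:
\begin{verbatim}
# Number of CO2 molecules in the feed chamber: N = N_A p V / (R T).
N_A = 6.02214076e23           # 1/mol
R   = 8.314462618             # J/(mol K)
p   = 10 * 101325.0           # 10 atm in Pa
T   = 300.0                   # K
V   = 12e-9 * 12e-9 * 10e-9   # illustrative chamber volume, m^3

print(round(N_A * p * V / (R * T)))   # molecules to insert
\end{verbatim}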
From the $N_{CO2}-t$ slope, the permeability $P_{CO2}$ can be estimated following
\begin{equation} \label{eq:2}
P_{CO2}=\frac{(\Delta N_{CO2}/N_A)l}{A\Delta t p}
\end{equation}
\noindent where $\Delta N_{CO2}$ is the number of CO$_2$ molecules permeated within time duration $\Delta t$, $N_A$ is Avogadro's constant, $l$ and $A$ are the membrane thickness and area, respectively, and $p$ is the partial pressure - 10 atm in this case - in the feed chamber.
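As a worked illustration of Eq.~(\ref{eq:2}), the conversion of the permeate-count slope to Barrer can be sketched as follows; the trajectory arrays are illustrative placeholders, while the conversion constants (molar volume at STP, cmHg per atm) are standard.
\begin{verbatim}
# Estimate P_CO2 in Barrer from the permeate count N(t).
# 1 Barrer = 1e-10 cm^3(STP) cm / (cm^2 s cmHg).
import numpy as np

N_A, V_STP = 6.02214076e23, 22414.0     # 1/mol, cm^3(STP)/mol
l, A = 6e-7, (20e-7) ** 2               # thickness 6 nm, area (20 nm)^2, in cm
dp = 10 * 76.0                          # 10 atm in cmHg

t = np.linspace(0, 50e-9, 200)          # illustrative steady-state window, s
N = 2.0e9 * t                           # illustrative permeate count

slope = np.polyfit(t, N, 1)[0]          # dN/dt, molecules per second
P = (slope / N_A) * V_STP * l / (A * dp)
print(f"P_CO2 = {P / 1e-10:.1f} Barrer")
\end{verbatim}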
The termination criterion for CPDMD simulations is discussed in detail in the Supplementary Information and shown in Supplemental Fig.\ref{supfig7}. The evolution of the simulated CO$_2$ density profile across a polymer membrane is shown in Supplemental Fig.\ref{supfig8}, complementing the simulation results shown in Fig.\ref{fig4}d for the same polymer.
\end{methods}
\begin{addendum}
\item[Acknowledgements] We acknowledge discussion with and support by Manuela F. B. Rodriguez, Rong Chang, Daiju Nakano and Bruno Flach (all IBM Research).
\item[Correspondence]
Correspondence and requests for materials should be addressed to [email protected]
\item[Supplementary Information]
Supplementary Information, including Supplementary Table 1 and Supplementary Figures S1-S8, are available as a pdf-file
\item[Code Availability - 1]
The Polymer Property Prediction (PPP) Engine is available at
\noindent
\url{https://github.com/IBM/polymer_property_prediction}
\item[Code Availability - 2]
The Jupyter notebook for polymer property predictions based on SMILES input is available - under doi:10.24435/materialscloud:jk-zm - at
\noindent
\url{https://archive.materialscloud.org/record/2022.65}
\item[Data Availability - 1]
The training dataset containing polymer candidates in SMILES format is available - under doi:10.24435/materialscloud:jk-zm - at
\noindent
\url{https://archive.materialscloud.org/record/2022.65}
\item[Data Availability - 2]
The dataset containing AI discovered polymer candidates in SMILES format is available - under doi:10.24435/materialscloud:jk-zm - at
\noindent
\url{https://archive.materialscloud.org/record/2022.65}
\end{addendum}
\clearpage
\newcommand{\beginsupplement}{%
\setcounter{table}{0}
\renewcommand{\thetable}{S\arabic{table}}%
\setcounter{figure}{0}
\renewcommand{\thefigure}{S\arabic{figure}}%
}
\section*{SUPPLEMENTARY INFORMATION}
\beginsupplement
\subsection{Hyperparameter Optimization and Limited Discrepancy Search}
We provide supplementary data, shown in Fig.\ref{supfig1}, for supporting the discussion in the Methods Section of the main manuscript.
\subsection{Generative Molecular Design}
We provide supplementary graphics material, shown in Fig.\ref{supfig2}, for supporting the discussion in the Methods Section of the main manuscript.
\subsection{Constant Pressure Difference Molecular Dynamics Protocol}
In this section, we discuss a series of studies that we have conducted for establishing the computational protocol outlined in the Methods Section of the main manuscript with regards to the Constant Pressure Difference Molecular Dynamics (CPDMD) simulations. This includes the determination of the equilibration process to obtain the morphology of the polymer membrane, the choice of the force field, the length of the polymer chains and, finally, the thickness of the polymer membrane. For our benchmark studies we have chosen eight representative homo-polymers, shown in Fig.\ref{supfig3}, covering a broad CO$_2$-permeability range.
In a first step, we have studied the effect of the force field on the CO$_2$-permeability. The atomic interactions in the polymer membranes are described by the following force fields: DREIDING\cite{Mayo1990}, GAFF\cite{Wang2004} and GAFF2\cite{He2020}. For constructing the polymer membrane, we have adopted a procedure referred to as ``annealing'', which is shown in Table \ref{suptable1}. We have built all polymer membranes with a fixed chain length of 30 monomers. In Fig.\ref{supfig4}, we show CPDMD simulation results for membranes having a thickness of 6 nm and 8 nm, respectively. Overall, the CO$_2$-permeability does not depend on the choice of force field or on the membrane thickness. While the CO$_2$-permeability values obtained are similar for the polymers analyzed here, we observe some variability due to the amorphous nature of polymer membranes, which is more pronounced for the thicker membranes. Based on these results, we chose the DREIDING force field because it allows us to cover a broader range of polymers. For the purpose of the main study, and to account for computational resources, the membrane thickness was set to 6 nm.
In a second step, we have investigated the equilibration process for obtaining the polymer membrane and the polymer chain length, as shown in Fig.\ref{supfig5}. Specifically, we have considered two different equilibration processes, referred to as ``compression'' and ``annealing'', which are shown in Table \ref{suptable1}. The polymer chain length is limited by the number of heavy atoms, i.e. atoms other than hydrogen, and we have chosen the following values: 200, 500, and 800. In Fig.\ref{supfig5}, we show the CO$_2$-permeabilities obtained for all polymers with the different chain lengths and equilibration processes used. By comparing the data, we obtain the best results with the ``compression'' equilibration and a chain length of 800 heavy atoms.
To confirm the choice of parameters, we have performed, for six of the homo-polymers shown in Fig.\ref{supfig3}, a set of five CPDMD simulations each. The results are shown in Fig.\ref{supfig6}. Overall, we obtain reasonable agreement with literature values for BZ-CF3, IBPA, PIM-PI-EA and PEO, despite the large error bars for BZ-CF3 and PEO. The simulated CO$_2$-permeabilities of TDA1-DM and PI-5 are higher than the literature values; however, one of the PI-5 samples is close to the experimental value. We note that due to the amorphous nature of polymers, both experimental and simulation results typically exhibit large error bars\cite{Kong2019}.
In a final step, we have determined the stop criterion for the CPDMD simulations. In Fig.\ref{supfig7}a, the vertical dashed lines indicate the times at which the polymer membranes reach the saturation level and, therefore, steady-state filtration. In other words, the number of CO$_2$ molecules inside the membrane as a function of time reaches a plateau. Similarly, the permeability curves in Fig.\ref{supfig7}b do not show significant change past that point in time which means that the time to stop the simulation is reached. To further exemplify the process, the evolution of the CO$_2$ density profile across a polymer membrane is shown in Fig.\ref{supfig8} for one of the top-three ranked polymers generated by the AI method.
\clearpage
\begin{table}
\includegraphics[width=\linewidth]{TableS1.pdf}
\caption{Comparison of two equilibration processes for obtaining the polymer membrane morphology.}
\label{suptable1}
\end{table}
\begin{figure}[h]
\includegraphics[width=\linewidth]{FigureS1.pdf}
\caption{Comparison of regression results obtained with (upper panel) and without (lower panel) the application of hyperparameter optimization.}
\label{supfig1}
\end{figure}
\begin{figure}[h]
\includegraphics[width=\linewidth]{FigureS2.pdf}
\caption{(a) Schematic visualization of the Molecular-Customized McKay’s Canonical Construction Path algorithm. Visual conceptions of the advanced versions of the Molecular-Customized McKay’s Canonical Construction Path algorithm with (b) sub-structure representations and (c) molecular construction example, respectively. Three levels of sub-structures with graph representations are considered: single vertex representation for canonical construction path check, isomorphic equivalent representation for extending vertex check, and original representation for counting fragment occurrences.}
\label{supfig2}
\end{figure}
\begin{figure}[h]
\includegraphics[width=\linewidth]{FigureS3.pdf}
\caption{Representative monomer units chosen for Constant Pressure Difference Molecular Dynamics simulations of polymers. Experimental permeability values for benchmarking purposes were obtained from the literature: BZ-O \cite{Powell2006}, BZ-CF3\cite{Powell2006}, IBPA\cite{Yampolskii2012}, PIM-PI-EA\cite{Rogan2014}, TDA1-DMN\cite{Ghanem2016}, PI-3\cite{Powell2006}, PI-5\cite{Powell2006} and PEO\cite{Powell2006}.}
\label{supfig3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{FigureS4.pdf}
\caption{Effect of choice of force field on simulated CO$_2$-permeability for polymer membranes having a thickness of (a) 6nm and (b) 8 nm, respectively. }
\label{supfig4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{FigureS5.pdf}
\caption{Effect of choice of polymer chain length and membrane equilibration process. (a) CO$_2$-permeability for representative monomer units. (b) Simulated CO$_2$-permeability versus experimental CO2 permeability for representative monomer units.}
\label{supfig5}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{FigureS6.pdf}
\caption{Constant Pressure Difference Molecular Dynamics (CPDMD) benchmark. For each representative monomer, the simulated CO$_2$-permeability values obtained for five polymer sample representations are plotted as open squares. The average permeability values obtained for each polymer are plotted as red squares. Experimental values obtained from the literature are plotted as solid squares: BZ-CF3 from \cite{Powell2006}, IBPA\cite{Yampolskii2012}, PIM-PI-EA\cite{Rogan2014}, TDA1-DMN\cite{Ghanem2016}, PI-5\cite{Powell2006} and PEO\cite{Powell2006}.}
\label{supfig6}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{FigureS7.pdf}
\caption{Constant Pressure Difference Molecular Dynamics (CPDMD) simulations of polymer membrane filtration. (a) Number of CO$_2$ molecules inside the polymer membrane. (b) CO$_2$-permeability of the polymer membrane as a function of time. The vertical dashed lines in (a) indicate the times at which the simulations reach a steady state: 4 ns, 23 ns, and 28 ns for IBPA, BZ-CF3, and PEO, respectively. The vertical dashed lines in (b) indicate the times at which the CO$_2$-permeability reaches steady state: 10 ns, 51 ns, and 58 ns for IBPA, BZ-CF3 and PEO, respectively.}
\label{supfig7}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{FigureS8.pdf}
\caption{CO$_2$ density profile across a polymer membrane simulated by using a representative, AI discovered monomer unit.}
\label{supfig8}
\end{figure}
\clearpage
\textbf{References}
|
1,116,691,500,403 | arxiv | \section*{Introduction}
In recent years, the introduction of Transformation Optics has shed a new light on the propagation of electromagnetic
waves in complex media and has proven to be an intuitive yet powerful tool for
engineering the flow of light at the sub-wavelength scale \cite{Pendry1780,LeonhardtPhilbin2006,Leonhardt1777}.
The theory is based on the invariance of Maxwell's equations under a change of coordinates,
resulting in equivalent permittivity and permeability profiles that are generally anisotropic, spatially varying and sometimes singular.
Perhaps the most popular application has been an invisibility cloak, which has been realized experimentally in various frequency regimes
for two dimensional and three dimensional setups \cite{Schurig977,valentine2009optical,ergin2010three}
thanks to the development of metamaterials and advanced manufacturing techniques \cite{chen2010transformation}.
However, the complexity of the required material properties makes practical realisation a hard task,
while the use of resonant meta-atoms to reach extreme parameters results usually in a narrow
frequency band of operation \cite{Oscar2012,Oscar2013}. There is thus a critical need for other approaches to achieve invisibility, or at least to reduce
diffraction significantly, such as mantle cloaking \cite{AluMantle2009}, optimized dielectric covers \cite{Sigmund2011,Vial2015} or by
introducing gain \cite{Lin2011,Mostafazadeh2013}.
Quite paradoxically, although it is a very common phenomenon in wave physics, relatively little is known regarding what does or does not cause
scattering when the material properties are allowed to vary rapidly in space \cite{Berry1990,Horsley2016,Philbin2016,horsley2015spatial}.
Finally, there is an ever increasing demand for controlling optical fields
at the nanoscale for applications ranging from medical diagnostics and sensing to optical devices and
optoelectronic circuitry \cite{zeng2014nanomaterials,Singh,Ozbay189,li2008harnessing}. In particular, local field enhancement is of paramount
importance in phenomena such as surface enhanced Raman scattering (SERS) \cite{SERS2013,stiles2008surface}, improved non-linear effects
\cite{EnhancNonlinear1,novotny2012,nphotonZayats}, optical antennae and the
control of the local density of states \cite{hoppener2012self,belacel2013controlling}.\\
In this paper we present a general purpose method to control the amplitude and/or phase of a wave propagating
in a two dimensional (2D) inhomogeneous isotropic medium. Although we focus our attention on media
that do not scatter an incident plane wave while producing a
specified amplitude and/or phase, the technique might be extended to arbitrary incident fields as well as to control the scattering pattern.
In addition, the method is not based on the geometrical optics approximation and is valid at every frequency.\\
\section{Governing equations}
We consider here linear, isotropic, lossless and possibly dispersive materials characterized by their $z$-invariant
relative permittivity $\varepsilon(\bm r)$ and relative permeability $\mu(\bm r)$, where $\bm r=(x,y)^{\rm T}$ is the position
vector. This medium is illuminated by a monochromatic electromagnetic wave of
angular frequency $\omega=c\,k_0$, amplitude $A_0(\bm r,k_0)$ and phase $\phi_0(\bm r,k_0)$, whose electric
field is linearly polarized along the $z$ axis, which is the so called Transverse Electric (TE) polarization,
so that $\bm E=E_z \bm z$.
Under these conditions, Maxwell's equations can be recast as the scalar wave equation:
\begin{equation}
\nabla\cdotp\left(\frac{1}{\mu}\,\nabla E_z\right) + k_0^2\,\varepsilon\, E_z = 0 .
\label{waveEq}
\end{equation}
By writing the total electric field in polar form as $E_z=A\mathrm{e}^{\rm i \phi}$ ($A$ and $\phi$ real), Eq.~(\ref{waveEq}) is
separated into the following two equations:
\begin{empheq}[left=\empheqlbrace]{align}
&\bm\nabla\cdotp\left(\frac{A^2}{\mu}\bm\nabla \phi\right)=0 \label{eqsimon1} \\
&( \bm{\nabla}\phi)^2-k_0^2\varepsilon\mu-\frac{\bm\nabla^2 A}{A}+ \frac{\bm\nabla \mu}{\mu}\cdotp\frac{\bm\nabla A}{A}=0 \label{eqsimon2}
\end{empheq}
The physical meaning of these two equations is well known:
the first is the continuity equation for the Poynting vector,
while the second is the \emph{exact} eikonal equation governing the motion of the rays \cite{holland1995quantum,PhilbinExactGO}.
They are usually solved by setting $\varepsilon$ and
$\mu$ as known quantities and then solving for $E_z$, \textit{i.e.} $A$ and $\phi$.
However, the methodology presented here allows us to fix arbitrarily two parameters and then compute
the two others using Eqs. (\ref{eqsimon1})-(\ref{eqsimon2}).\\
From now on we consider an incident homogeneous plane wave with constant amplitude $A_0$ and
phase $\phi_0(\bm r,k_0)=k_0 \bm n \cdotp \bm r$, with $\bm n =(\cos\theta_0,\sin\theta_0)^{\rm T}$ the unit vector defining the incidence direction.
The gradient of the phase can then be written as
$$\bm{\nabla}\phi=\bm n k_0+\bm{\nabla}\psi,$$ where $\psi$ is an additional phase term.
If $\bm{\nabla}\psi\rightarrow 0$ and
$A\rightarrow A_0$ as $r=\sqrt{x^2+y^2}\rightarrow +\infty$,
the incident wave remains plane and the material will be invisible. \\
\section{Controlling amplitude and permeability}
In this section we suppose that we fix $A$ and $\mu$.
Substituting $\bm{\nabla}\phi$ into Eq.~(\ref{eqsimon1}),
we obtain the following Poisson's equation for $\psi$
\begin{equation}
\bm\nabla\cdotp\left(\frac{A^2}{\mu}\bm\nabla \psi\right)= -k_0\bm n\cdotp\bm\nabla\left(\frac{A^2}{\mu}\right) ,
\label{poisson}
\end{equation}
which can be solved to give
$$ \bm \nabla \psi(\bm r) = -\frac{\mu(\bm r)k_0}{2\pi A^2(\bm r)}
\int {\rm d}^2\bm r'\frac{\bm r-\bm r'}{|\bm r-\bm r'|^2} \bm n \cdotp
\bm{\nabla}'\left(\frac{A^2(\bm r')}{\mu(\bm r')}\right).$$
This shows that if we specify the quantity $\zeta=A^2/\mu$ over space then the gradient of the phase changes in response to the change
in $\zeta$ in the same way the electric field responds to a charge density.
Substituting the above equation into (\ref{eqsimon2}) then determines a relationship between $\varepsilon$ and $\mu$. \\
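Numerically, Eq.~(\ref{poisson}) is a standard variable-coefficient Poisson problem. The following is a minimal finite-difference sketch (not the solver used in this work), which assembles a five-point stencil with face-averaged coefficients $\zeta=A^2/\mu$ and imposes $\psi=0$ on the boundary of a box chosen large enough that $\zeta$ is constant there; the grid and names are our own illustrative choices.
\begin{verbatim}
# Minimal finite-difference solve of div(zeta grad psi) = -k0 n . grad(zeta)
# on a uniform square grid, with psi = 0 on the boundary.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_psi(zeta, h, k0, n_vec):
    ny, nx = zeta.shape
    gy, gx = np.gradient(zeta, h)                  # axis 0 is y, axis 1 is x
    rhs = -(k0 * (n_vec[0] * gx + n_vec[1] * gy)).ravel()
    idx = np.arange(nx * ny).reshape(ny, nx)
    L = sp.lil_matrix((nx * ny, nx * ny))
    for j in range(ny):
        for i in range(nx):
            k = idx[j, i]
            if i in (0, nx - 1) or j in (0, ny - 1):
                L[k, k], rhs[k] = 1.0, 0.0         # Dirichlet: psi = 0
                continue
            for jj, ii in ((j, i - 1), (j, i + 1), (j - 1, i), (j + 1, i)):
                w = 0.5 * (zeta[j, i] + zeta[jj, ii]) / h**2  # face average
                L[k, idx[jj, ii]] += w
                L[k, k] -= w
    return spsolve(L.tocsr(), rhs).reshape(ny, nx)
\end{verbatim}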
In the following we further assume that $A$ and $\mu$ are dispersionless and introduce the frequency
independent quantities $\alpha=\phi/k_0$ and $\beta=\psi/k_0$.
Locally, the permittivity dispersion takes the form of a lossless Drude model
\begin{equation}
\varepsilon(\omega)=\varepsilon_{\infty}-\omega_{\rm p}^2/\omega^2,
\label{drudemodel}
\end{equation}
with the
permittivity at infinite frequency $\varepsilon_{\infty}$ and the plasma frequency $\omega_{\rm p}$ defined as:
\begin{align}
\varepsilon_{\infty}&= \frac{(\bm{\nabla} \alpha)^2}{\mu}=\frac{1}{\mu}\left[1+(\bm{\nabla} \beta)^2+2\,\bm n \cdotp \bm{\nabla} \beta \right],\label{drudeparam1}\\
\omega_{\rm p}^2&=\frac{c^2}{\mu} \left(\frac{\bm\nabla^2 A}{A}-\frac{\bm\nabla \mu}{\mu}\cdotp\frac{\bm\nabla A}{A}\right).
\label{drudeparam2}
\end{align}
The obtained permittivity is linear, spatially varying, with a $1/\omega^2$ dispersion and non-local since
$\varepsilon_{\infty}$ depends on the incidence direction $\bm n$.
On the basis of time reversal, a plane wave coming from the opposite direction
gives a total field with the same amplitude but an opposite phase as $\phi(-\bm n)=-\phi(\bm n)$, while invisibility
is maintained for the same permittivity
since $\varepsilon(-\bm n)=\varepsilon(\bm n)$, even if generally the amplitude and material profiles do not possess
any particular symmetry.\\
\subsection{A special case}
There is a particular situation for which we can get rid of the non-locality, and this happens when
$\bm{\nabla}\beta=\bm 0$, \textit{i.e.} when $\mu$ is proportional to $A^2$. In this case and
in the ray optics approximation we retrieve a medium with
unit index of refraction because $\varepsilon\rightarrow 1/\mu$ as $\omega\rightarrow +\infty$,
which is an inhomogeneous medium where all the waves travel in straight lines and without reflection.
Essentially, our approach can be understood by considering this limiting case $\varepsilon=1/\mu$ and extending
it to work for all frequencies and all incidences by adding dispersive and non-local terms into $\varepsilon$.
On the other side of the spectrum,
the medium becomes singular in the quasi-static limit since $\abs{\varepsilon}\rightarrow +\infty$ as $\omega\rightarrow 0$.
This behaviour is due to the fact that any permeability inhomogeneity will cause
large scattering at low frequencies, and one needs large changes in the permittivity to counteract this.\\
Without loss of generality, we now consider the case where $\mu=A^2$: this
implies that the phase is exactly given by $\bm{\nabla}\phi=\bm n k_0$ everywhere,
\textit{i.e.} the field is a plane wave with a non-uniform amplitude, and the Drude parameters simplify as
\begin{equation}
\varepsilon_{\infty}=\frac{1}{\mu}\quad \text{and} \quad
\omega_{\rm p}^2 =\frac{c^2}{\mu} \left( \frac{\bm\nabla^2\sqrt{\mu}}{\sqrt{\mu}}-\frac{\bm\nabla \mu}{\mu}\cdotp\frac{\bm\nabla \sqrt{\mu}}{\sqrt{\mu}}\right). \label{equchi}
\end{equation}
We note that in this case, $\varepsilon$ is frequency dispersive but does not depend on the incidence angle,
similarly to the P\"{o}schl-Teller profile
(which is reflectionless for all angles and depends on $\omega$, see e.g. \cite{Lekner2007}) as
the permittivity is analogous to the quantum potential for the Shr\"{o}dinger equation.\\
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{fig1}
\caption{Invisible material in the case $\mu=A^2$ with $80\%$ damping of the field in the centre.
(a) Permeability (top) and permittivity (bottom) profiles along the radial direction.
(b) Real part of the electric field $E_z$ for $\lambda_0/R=1$. \label{fig1}}
\end{figure}
As an example, suppose we want to obtain a field with a prescribed Gaussian amplitude $A=1-f\exp(-r^2/R^2)$,
and that $\mu=A^2$ (see blue line on the top panel of Fig.~\ref{fig1}~(a)), with
$R=700\,$nm and $f=0.8$. Note that this results in a permeability profile with values below unity, which seems to contradict our
assumption of neglecting frequency dispersion for $\mu$. Indeed, in practice we would likely only be able to realise
the $\mu$ profile containing regions of $\mu<1$ at a single frequency.
The calculated permittivity profile is shown for several wavelengths on Fig.~\ref{fig1}~(a) (bottom panel). As discussed previously,
the required $\varepsilon$ is roughly equal to $1/\mu$ for $\lambda_0/R=0.1$, while one needs more extreme permittivity values at longer wavelengths.
We solved the wave equation (\ref{waveEq}) using a Finite Element Method (FEM) for $\lambda_0/R=1$, with a plane wave of unit amplitude incident from the negative $x$ axis
and Perfectly Matched Layers (PML) to truncate the domain.
The real part of the electric field $E_z$ is plotted on Fig.~\ref{fig1}~(b) and reveals a clear damping of the field as well as no scattering and a planar
wavefront everywhere. The computed square norm of the field matches the required one perfectly
(see black circles on the top panel of Fig.~\ref{fig1}~(a)).\\
Note that the Transverse Magnetic (TM) polarization case can be treated similarly by replacing $E_z$ by $H_z$ and swapping $\varepsilon$ and $\mu$.
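As a cross-check, the radial profiles of Fig.~\ref{fig1}~(a) follow directly from Eq.~(\ref{equchi}); the short script below evaluates them with our own analytic radial derivatives of the Gaussian amplitude (a sketch reproducing the figure, not the FEM computation).
\begin{verbatim}
# Evaluate the mu = A^2 Drude profile analytically for A = 1 - f exp(-r^2/R^2):
# eps = (1/A^2) [1 - (Lap(A)/A - 2 (A'/A)^2) / k0^2],  with k0 = 2 pi/lambda0.
import numpy as np

R, f = 700e-9, 0.8
r = np.linspace(0.0, 4 * R, 400)
E = np.exp(-(r / R) ** 2)
A = 1 - f * E
dA = 2 * f * r / R**2 * E                      # dA/dr
lapA = 4 * f / R**2 * (1 - (r / R) ** 2) * E   # A'' + A'/r (radial Laplacian)

for lam_over_R in (0.1, 1.0, 10.0):
    k0 = 2 * np.pi / (lam_over_R * R)
    eps = (1 - (lapA / A - 2 * (dA / A) ** 2) / k0**2) / A**2
    print(lam_over_R, eps.min(), eps.max())
\end{verbatim}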
\subsection{The non-magnetic case}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{fig2}
\caption{Invisible material profile in the non-magnetic case ($\mu=1$) with arbitrary control of the amplitude.
(a) Specified amplitude (from a picture of James Clerk Maxwell).
(b) Computed amplitude.
(c) Permittivity profile.
(d) Real part of the electric field, showing the invisibility effect.
\label{fig2}}
\end{figure}
For practical reasons, we investigate the possibility
of having non-magnetic invisible profiles ($\mu=1$).
We solve Eq.(\ref{eqsimon1}) to obtain the phase and the parameters for the permittivity reduce to:
\begin{equation}
\varepsilon_{\infty}= 1+(\bm{\nabla} \beta)^2+2\,\bm n \cdotp \bm{\nabla} \beta \quad \text{and} \quad
\omega_{\rm p}^2= c^2 \frac{\bm\nabla^2 A}{A}
\label{epsi_diel_Eq}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{fig3}
\caption{Angular response of the permittivity profile of Fig.~\ref{fig2}~(c). Top: scattering cross section $\sigma_{\rm s}$
normalized to the profile size $D$. Bottom: average error on the amplitude ${\rm E_ r}$ defined by Eq.~(\ref{error_Eq}).
\label{fig2bis}}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{fig4}
\caption{Invisible metamaterial with sub-wavelength control of the amplitude. (a) Continuous and (b) metamaterial permittivity
profiles. (c) Central colour map: real part of the electric field at $\lambda_0=\SI{10.32}{\micro\meter}$, top and right panels:
target (black dashed lines) and calculated (red solid lines) amplitudes for $y=0$ and $x=0$ respectively.
(d) Left ordinate axis: permittivity dispersion of SiC (solid and dashed cyan lines for real and imaginary parts) and KBr (solid red line),
the horizontal dashed line indicates a zero value;
right ordinate axis: scattering cross section spectra of the metamaterial structure.
The vertical dashed line indicates $\lambda_0=\lambda_{\rm L}=\SI{10.32}{\micro\meter}$ at which we designed the
structure.
\label{fig3}}
\end{figure*}
To illustrate the arbitrariness of the choice of the amplitude, we used a profile extracted
from a grayscale image of James Clerk Maxwell depicted on Fig.~\ref{fig2}~(a), where dark values correspond
to a 50\% enhancement of the field, with a lateral ``size'' of approximately $D=6\lambda_0$.
The permittivity profile is displayed on Fig.~\ref{fig2}~(c), and presents small features and rapidly varying values
between $-0.5$ and $1.5$.
The real part of $E_z$ is displayed on Fig.~\ref{fig2}~(d), and proves clearly that the field is not a plane wave,
with a retarded phase on the left and an advanced phase on the right of the inhomogeneity,
but that this profile does not induce any scattering.
The required field enhancement is respected, as can be seen on
Fig.~\ref{fig2}~(b), with no more than 5\% relative error,
albeit with some small reflections due to numerical inaccuracies. This
proves the ability of the method to devise invisible non-magnetic media capable of shaping intricate magnitude patterns.
We then investigate the angular response of this permittivity profile in terms of invisibility and amplitude control.
To quantify this, we computed the scattering cross section $\sigma_{\rm s}$ normalized to the profile size $D$, along
with the average error on the amplitude ${\rm E_ r}$ defined as
\begin{equation}
{\rm E_ r}(\theta_0)=\frac{1}{S_\Omega}\int_\Omega {\rm d} \bm r\left\|1-\frac{|E_z(\theta_0)|}{A}\right\|
\label{error_Eq}
\end{equation}
where $\Omega=[24\lambda_0\times 24\lambda_0]$ is the computational window used (cf. Fig.~\ref{fig2}~(d)) with surface
$S_\Omega = (24\lambda_0)^2$. The results are plotted as a function of the incident angle $\theta_0$ on Fig.~\ref{fig2bis},
and clearly indicate a strong reduction of the scattering and an accurate reconstruction of the field magnitude for
the reference configuration ($\theta_0 = \pi$) as well as for the anti-parallel direction of incidence ($\theta_0 = 0$), as discussed before.
As expected, both effects are fairly narrow-band due to the non-locality of the permittivity.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{fig5}
\caption{Inverse design of amplitude and phase profiles (see text for definitions) represented in (e), (g), giving a desired
electric field (c). Required permittivity (a) and permeability (b) are the used to solve the wave equation (direct problem)
for a sanity check of the field (d), amplitude (f) and phase (h).
\label{fig4}}
\end{figure*}
\subsection{Metamaterial implementation}
As for a possible experimental verification of our method, we propose a metamaterial structure that approximates the permittivity profile given by
Eq.~(\ref{epsi_diel_Eq}) at $\lambda_0=\SI{10.32}{\micro\meter}$ with $A=1-f\exp(-r^2/R^2)$, $f=-0.9$,
and $R=\lambda_0/6.5=\SI{1587}{\nano\meter}$. The resulting continuous permittivity profile is given on
Fig.~\ref{fig3}~(a) and varies between $0.044$ and $2.239$. To be able to reach values of permittivity smaller than
unity, we use silicon carbide (SiC), a polaritonic material that has a strong dispersion
in the thermal infrared range given by the Drude-Lorentz model \cite{palik}
$\varepsilon_{\rm SiC}(\omega)=\varepsilon_{\infty}[1+(\omega_L^2-\omega_T^2)/(\omega_T^2-\omega^2+\rm i\Gamma\omega)]$,
with $\varepsilon_{\infty}=6.7$, $\omega_{\rm L}=\SI{1.82e14}{\radian\per\second}$,
$\omega_{\rm T}=\SI{1.49e14}{\radian\per\second}$ and $\Gamma=\SI{8.96e11}{\radian\per\second}$ (see solid and dashed
cyan lines on Fig.~\ref{fig3}~(d)). This material exhibits a dielectric to metallic transition
around $\lambda_0=\lambda_{\rm L}=\SI{10.32}{\micro\meter}$ so that $\varepsilon_{\rm i}(\lambda_0)=0.0009 - 0.0815\rm i$.
For values greater than unity, we use potassium bromide (KBr) with permittivity $\varepsilon_{\rm KBr}(\lambda_0)=2.3280$ \cite{li1976refractive}.
The hybrid metamaterial structure is a $51\times 51$ array of square unit cells of period $d=\lambda_0/27=\SI{377}{\nano\meter}$.
The continuous map of Fig.~\ref{fig3}~(a) is discretized at the centre $(x_i,y_j)$ of those unit cells resulting in a discrete set of
values $\varepsilon_{ij}=\varepsilon(x_i,y_j)$.
Since the period is much smaller than the wavelength, we can safely use an effective permittivity $\varepsilon _{\mathrm {eff} }$ given by
the Maxwell-Garnett homogenization formula:
$$
\frac{\varepsilon_{\rm eff}-\varepsilon_{\rm h}}{\varepsilon_{\rm eff}+2\,\varepsilon_{\rm h}}
= f\,\frac{\varepsilon_{\rm i}-\varepsilon_{\rm h}}{\varepsilon_{\rm i}+2\,\varepsilon_{\rm h}}
$$
where $\varepsilon _{\rm h}$ is the permittivity of the host medium (air in our case),
$\varepsilon _{\rm i}$ is the permittivity of the inclusions (either SiC or KBr),
$f=a^2/d^2$ is the filling fraction and $a$ is the length of the square section of the rods. The structure is then
constructed as follows: if $\varepsilon_{ij}<0.99$ we use SiC rods, if $\varepsilon_{ij}>1.01$ we use KBr rods, otherwise we just use air
(see Fig.~\ref{fig3}~(b)). The real part of the electric field is plotted on Fig.~\ref{fig3}~(c), and clearly illustrates the
invisibility effect and the sub-wavelength control of the amplitude. The top and left panels compare the target (black dashed lines)
and calculated (red solid lines) amplitudes for $y=0$ and $x=0$ respectively,
revealing a quasi-perfect match apart from a small scattering, mostly due to the truncation and discretization of the permittivity profile, and
a slightly weaker amplitude than expected, due to losses in SiC rods. The scattering cross section spectrum on Fig.~\ref{fig3}~(d) exhibits
a pronounced dip around $\lambda_0=\SI{10.32}{\micro\meter}$, which illustrates the strong reduction of diffraction resulting in a quasi-invisible
complex metamaterial.\\
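To make the construction rule above concrete, a small Python sketch inverting the Maxwell-Garnett relation to size the rods. The material values follow the text; the sample targets, and the choice of keeping only the real part of the (slightly complex) filling fraction obtained with lossy SiC, are our own simplifications.
\begin{verbatim}
import numpy as np

d = 377e-9                     # unit-cell period
eps_h = 1.0                    # host medium: air
eps_SiC = 0.0009 - 0.0815j     # SiC at lambda_0 (text value)
eps_KBr = 2.3280               # KBr at lambda_0

def rod_side(eps_target, eps_i):
    """Rod side a from the inverted Maxwell-Garnett formula."""
    lhs = (eps_target - eps_h)/(eps_target + 2*eps_h)
    rhs = (eps_i - eps_h)/(eps_i + 2*eps_h)
    f = lhs/rhs                # filling fraction f = a^2/d^2
    return d*np.sqrt(f.real)   # losses make f complex: keep Re(f)

def assign(eps_ij):
    if eps_ij < 0.99:          # sub-unity permittivity -> SiC rod
        return ("SiC", rod_side(eps_ij, eps_SiC))
    if eps_ij > 1.01:          # above unity -> KBr rod
        return ("KBr", rod_side(eps_ij, eps_KBr))
    return ("air", 0.0)        # close to unity -> leave the cell empty

print(assign(0.5))             # SiC rod, a ~ 237 nm
print(assign(2.0))             # KBr rod, a ~ 340 nm
\end{verbatim}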
\section{The inverse problem: controlling amplitude and phase}
Finally, we study the inverse problem of finding invisible material properties that give a pre-defined electric field.
To this aim, we fix the amplitude $A$ and the additional phase term $\psi$ and rewrite Eq.~(\ref{eqsimon1}) as:
\begin{equation}
A^2\bm\nabla \phi \cdotp\bm\nabla u = \bm\nabla\cdotp\left(A^2\bm\nabla \phi\right),
\end{equation}
with $u=\ln\mu$. This equation is then solved numerically and the obtained value of $\mu$ is plugged into Eq.~(\ref{eqsimon2})
to obtain $\varepsilon$.\\
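As a quick sanity check of this transport equation (our own one-dimensional reduction, not the two-dimensional solver used here): for a plane-wave phase $\phi=k_0x$ and amplitude $A(x)$, the equation collapses to $u'(x)=\mathrm{d}\ln A^2/\mathrm{d}x$, i.e.\ $\mu\propto A^2$, which a few lines of Python reproduce numerically.
\begin{verbatim}
import numpy as np

lam0 = 1.0
x = np.linspace(-5*lam0, 5*lam0, 2001)
dx = x[1] - x[0]

A = 1.0 - 0.3*np.exp(-x**2/lam0**2)     # sample amplitude profile
rhs = np.gradient(np.log(A**2), dx)     # d/dx ln A^2
u = np.cumsum(rhs)*dx                   # crude quadrature, u(x0) = 0
u -= u[0]

mu = np.exp(u)
mu_exact = (A/A[0])**2                  # analytic solution, mu ~ A^2
print(np.max(np.abs(mu - mu_exact)))    # small discretization error
\end{verbatim}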
For the following example, we set $\lambda_0=\SI{700}{\nano\meter}$, $R=\lambda_0$, $\theta_0=\pi/3$,
\begin{align*}
A=1 & - 0.3\,\mathrm{e}^{-\left[(x-2\lambda_0)^2+0.5(y+2\lambda_0)^2\right]/R^2} \\
& + 0.4\,\mathrm{e}^{-\left[0.6(x+2\lambda_0)^2+(y-2\lambda_0)^2\right]/R^2}
\end{align*}
and
$$
\psi = k_0\left[x''_a\,\mathrm{e}^{-\left[{x''_a}^2+0.4{y''_a}^2\right]/R^2} - 0.7\,x''_b\,\mathrm{e}^{-\left[0.5{x''_b}^2+{y''_b}^2\right]/R^2}\right]
$$
using the shifted and rotated coordinates:
\begin{align*}
&x''_a = n_x x'_a + n_y y'_a, \qquad &x'_a= x-2\lambda_0, \\
&y''_a = -n_y x'_a + n_x y'_a,\qquad &y'_a = y-2\lambda_0,\\
&x''_b = n_x x'_b + n_y y'_b ,\qquad &x'_b= x+2\lambda_0,\\
&y''_b = -n_y x'_b + n_x y'_b ,\qquad &y'_b = y+2\lambda_0 .
\end{align*}
This particular choice of amplitude and phase will give the following wave behaviour:
amplitude damping at $(+2\lambda_0,-2\lambda_0)$,
amplitude enhancement at $(-2\lambda_0,+2\lambda_0)$,
phase expansion at $(-2\lambda_0,-2\lambda_0)$ and
phase compression at $(+2\lambda_0,+2\lambda_0)$ (see Figures~\ref{fig4} (e), (g) and (c) for the specified amplitude,
additional phase and electric field respectively). The obtained material properties are
plotted on Figs.~\ref{fig4} (a) for the permittivity and (b) for the permeability.
These non-trivial profiles allow us to control the wave propagation quite arbitrarily in the near field
while being transparent to a specific incident plane wave. Note that, as stated before, the same profiles are still invisible
for a wave coming from the opposite direction, and maintain the amplitude control, but the phase now has the opposite sign.\\
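For completeness, a short Python sketch evaluating the prescribed $A$ and $\psi$ on a grid; we take $(n_x,n_y)=(\cos\theta_0,\sin\theta_0)$, which is suggested by, though not stated in, the rotated coordinates above, and the grid resolution is arbitrary.
\begin{verbatim}
import numpy as np

lam0, R = 700e-9, 700e-9
k0, th0 = 2*np.pi/lam0, np.pi/3
nx, ny = np.cos(th0), np.sin(th0)       # assumed incidence direction

s = np.linspace(-6*lam0, 6*lam0, 481)
x, y = np.meshgrid(s, s)

A = (1.0
     - 0.3*np.exp(-((x - 2*lam0)**2 + 0.5*(y + 2*lam0)**2)/R**2)
     + 0.4*np.exp(-(0.6*(x + 2*lam0)**2 + (y - 2*lam0)**2)/R**2))

def rot(xs, ys):
    """Shifted coordinates rotated into the (n_x, n_y) frame."""
    return nx*xs + ny*ys, -ny*xs + nx*ys

xa, ya = rot(x - 2*lam0, y - 2*lam0)
xb, yb = rot(x + 2*lam0, y + 2*lam0)
psi = k0*(xa*np.exp(-(xa**2 + 0.4*ya**2)/R**2)
          - 0.7*xb*np.exp(-(0.5*xb**2 + yb**2)/R**2))

iy, ix = np.unravel_index(np.argmin(A), A.shape)
print(x[iy, ix]/lam0, y[iy, ix]/lam0)   # ~ (2, -2): the damping spot
\end{verbatim}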
To double check the validity of our results, we solved the wave equation (\ref{waveEq}) employing the permittivity and permeability
obtained by our approach. The results are plotted in Figs.~\ref{fig4} (f), (h) and (d) for the amplitude,
additional phase and electric field respectively and match the required wave behaviour perfectly. The generality of this
inverse problem makes it quite versatile and reveals a family of amplitude and phase controlling invisible electromagnetic media.
\section*{Conclusion}
In conclusion, we have presented a flexible and systematic methodology to derive isotropic and lossless material properties needed to manipulate
the amplitude and phase of the electromagnetic field in an arbitrary way, for planar propagation.
In addition, our work provides a contribution to the understanding of what
governs scattering in this type of media.
Since it is based on the scalar wave equation, it could easily be extended to other fields such as acoustics or fluid dynamics.
In particular we have applied this method to derive a large class of invisible permittivity and permeability profiles.
We illustrated these concepts through numerical examples for TE polarized plane waves
using both $\varepsilon$ and $\mu$ and obtained omni-directional
invisibility and control of the amplitude. Then we studied the case of non-magnetic materials and showed that one can obtain invisibility
and fashion the spatial variation of the magnitude of the electric field for two anti-parallel directions of incidence.
A metamaterial structure working in the infrared has been proposed, exhibiting sub-wavelength control of waves and invisibility at the
same time. Finally, we tackled the inverse problem of finding non-scattering material properties that give a specified electric
field. These results open a new route towards achieving invisibility with isotropic materials,
and may offer an alternative paradigm for the design of nanophotonic devices with enhanced performances.\\
\begin{acknowledgments}
This work was funded by the Engineering and Physical
Sciences Research Council (EPSRC), UK, under a
Programme Grant (EP/I034548/1) ``The Quest for
Ultimate Electromagnetics using Spatial Transformations
(QUEST)''.
\end{acknowledgments}
\input{main.bbl}
\end{document}
|
1,116,691,500,404 | arxiv | \section{Introduction}
\label{sec:intro}
The exotic antiprotonic helium atoms $\bar{p}\mathrm{He}^{+}$ are formed
when negatively charged antiprotons are slowed down in helium and later
captured by the Coulomb field of the helium nuclei. A fraction of about
3\% of the antiprotons are captured in metastable states with lifetimes
on the order of microseconds, and this has allowed for a series of
high-precision spectroscopy experiments~\cite{yamaz02} that have
produced, among other things, highly accurate values of fundamental
particle characteristics such as the electron-to-antiproton mass ratio
\cite{hori11} and the antiproton dipole magnetic moment~\cite{pask09}.
These high-accuracy goals required various systematic effects to be
accounted for, the density broadening and shift of the spectral lines
being among the most important~\cite{tori99}. The density
effects have been evaluated for a gaseous target in the semiclassical
approach \cite{baka00} using a pairwise interaction potential of an
antiprotonic and an ordinary helium atom, calculated {\em ab initio} in
the frame of the symmetrized Rayleigh-Schr\"{o}dinger theory
\cite{jezi98}, and the results were in agreement with the experimental
data taken at helium gas densities up to 127~g/l~\cite{yamaz02}. The
attempts of the ASACUSA collaboration at CERN to perform laser spectroscopy
of antiprotonic atoms in liquid helium~\cite{hori-private} made us
revisit the subject, since the collective degrees of freedom in the
liquid phase (sound waves in the liquid consisting of the neutral $^4$He
atoms) provide a~new mechanism for shifting and broadening the atomic
spectra, in addition to those investigated in gaseous helium. We
expected, however, that collisions of the neutral
$\bar{p}\mathrm{He}^{+}$ atom with the neighboring neutral $^4$He atoms
give the dominant contribution to the shift and broadening, both in the
gaseous and liquid helium targets. The reason was that the
characteristic changes of the $\bar{p}\mathrm{He}^{+}$ energy levels due
to these collisions are a~few orders of magnitude greater than the
typical energy of the sound phonons associated with the momentum
transfer between the laser photons and the liquid. At the same time, we
neglected the effects of inelastic collisions of
$\bar{p}\mathrm{He}^{+}$ with $^4$He atoms since the typical thermal
collision energies $\varepsilon_T$ of the order of
$10^{-4}$--$10^{-3}$~eV are much smaller than the separation
$|\Delta\varepsilon|\gtrsim{}0.1$~eV between the energy levels of states
with adjacent values of the quantum numbers. The detailed calculations
of the collisional quenching rate of antiprotonic helium
in~\cite{quench} show that collisional quenching is indeed strongly
suppressed, in agreement with experimental data.
We focus on the transitions
$|i\rangle=(n,\ell)=(39,35)\to|f\rangle=(n',\ell')=(38,34)$
(transition~I) and $(37,34)\to(36,33)$ (transition~II), which are of
major interest for the experimentalists~\cite{hori11}. For transition~I,
the corresponding photon wavelength equals
$\lambda_0=5972.570$~\AA~\cite{tori99} and the resonance energy is
$E_0=2.07589$~eV. The natural width~$\Gamma_n$ is determined by the
Auger decay rate $R_A$ of the final state
\begin{equation}
\label{eq:gam_Auger}
\Gamma_n = \hbar/\tau_n \approx \hbar R_A \,.
\end{equation}
For this line, the experimental rate is
$R_A\approx{}1.11\times{}10^8\mathrm{s}^{-1}$~\cite{yamag02} and
therefore $\Gamma_n\approx{}0.73\times{}10^{-7}$~eV, the corresponding
frequency is $\nu_n\approx{}0.018$~GHz, and the lifetime
$\tau_n=9.0$~ns. In the case of transition~II, the analogous parameters
are: $\lambda_0=4707.220$~\AA~\cite{tori99}, $E_0=2.63392$~eV,
$R_A\approx{}2.2\times{}10^8\mathrm{s}^{-1}$~\cite{hori98},
$\Gamma_n\approx{}1.4\times{}10^{-7}$~eV, $\nu_n\approx{}0.035$~GHz, and
$\tau_n=4.5$~ns.
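These numbers follow directly from Eq.~(\ref{eq:gam_Auger}), together with $\nu_n=R_A/2\pi$ and $\tau_n=1/R_A$; a few lines of Python reproduce them:
\begin{verbatim}
import math

hbar = 6.582119569e-16                   # eV s

for name, R_A in [("I", 1.11e8), ("II", 2.2e8)]:   # Auger rates, 1/s
    Gamma = hbar*R_A                     # natural width, eV
    nu = R_A/(2*math.pi)/1e9             # GHz
    tau = 1e9/R_A                        # ns
    print(name, Gamma, nu, tau)
# I : 7.31e-08 eV, 0.018 GHz, 9.0 ns
# II: 1.45e-07 eV, 0.035 GHz, 4.5 ns
\end{verbatim}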
In Sec.~\ref{sec:liquid_he} we report the estimations of the line
shift and broadening due to the collective motion in liquid
$^4$He, using the Van Hove formalism~\cite{vanh54}. In
Sec.~\ref{sec:single-atom}, we evaluate the line shift due to
collisions of $\bar{p}\mathrm{He}^{+}$ with $^4$He atoms by making
use of the quasistatic limit of the results of Ref.~\cite{baka00}
in a form that allows for exploiting the experimental data on the
pair correlation function in liquid~$^4$He. Unfortunately, this
method cannot be extended to the evaluation of the line
broadening. Section~\ref{sec:concl} includes a brief discussion of
the results.
\section{Line shift and broadening due to the collective dynamics of
liquid helium}
\label{sec:liquid_he}
In this section, the contributions to the line shift and broadening due to
the dynamics of liquid $^4$He are estimated in terms of the
quantum-mechanical response function~$\mathcal{S}(\vec{q},\omega)$,
which was introduced by Van Hove~\cite{vanh54} and discussed in detail
for various targets in many textbooks (see e.g., Ref.~\cite{love84}).
The quantities $\hbar\omega$ and $\hbar\vec{q}$ denote, respectively,
the energy and momentum transfers to a~given target. The response
function depends on the target properties at a~fixed temperature and
density. On the other hand, $\mathcal{S}(\vec{q},\omega)$ is independent
of the nature of interaction of an impinging particle with the target.
The influence of liquid $^4$He dynamics on the line shift and
broadening can be estimated using the method developed by Singwi
and Sj\"olander~\cite{sing60} for describing the $\gamma$ quantum
absorption or emission by a~nucleus located in a~condensed target.
According to Ref.~\cite{sing60}, the cross section~$\sigma$ for
photon absorption or emission can be expressed in the simplified
form
\begin{equation}
\label{eq:sig_abs}
\sigma(E) = \frac{\mathcal{A}}{\hbar}\, \mathcal{S}_i(q,\omega) \,,
\end{equation}
when the natural resonance width~$\Gamma_n$ is so small that the
analogous cross section $\sigma_0$ for the nucleus set at a~fixed
position can be approximated by the $\delta$-function profile
\begin{equation}
\label{eq:delta_profile}
\sigma_0(E) = \mathcal{A}\, \delta(E-E'_0) \,,
\end{equation}
in which $\mathcal{A}$ represents the strength of the resonance, $E$ is
the energy of an absorbed or emitted photon, and $E'_0$ is the resonance
energy. The function $\mathcal{S}_i(q,\omega)$ denotes the Van Hove
single-particle response function, which is the incoherent fraction of
the total response function~$\mathcal{S}(q,\omega)$. In the case of
laser-stimulated transitions in the antiprotonic helium
\begin{equation}
\label{eq:transf}
\hbar q = p \,, \qquad \hbar\omega = E-E'_0 \,,
\end{equation}
where $p$ is the absolute value of momentum of the absorbed or emitted
photon. The resonance energy $E'_0=E_0+\Delta{}E_0$ includes here the
line shift $\Delta{}E_0$ due to the pairwise interaction. Since
$\Delta{}E_0\ll{}E_0$ in liquid helium, as shown in
Sec.~\ref{sec:single-atom}, one can take $E'_0\approx{}E_0$ in numerical
estimates of the collective effects.
In the case of a perfect gas, a harmonic solid, or a particle diffusing
in a classical fluid according to the Langevin equation, the response
function can be rigorously derived~\cite{vanh54}. However, an exact form
of the response function in normal fluid (He~I) and superfluid helium
(He~II) is not known yet. Therefore, $\mathcal{S}(q,\omega)$ for liquid
helium is usually expressed in terms of simple analytical functions,
such as the Gaussian or Lorentzian function, with several parameters to
be determined in experiments. Extensive measurements of the response
function for liquid helium at various conditions were performed for many
years, using x-ray and neutron scattering. The experimental data are
available only for the momentum transfers $q\gtrsim{}0.1$~\AA$^{-1}$. In
the case of the $\bar{p}^4\mathrm{He}^{+}$ atom, the characteristic
momentum transfer equals $q=p/\hbar=2\pi/\lambda_0$, which gives
$q=0.0011$~\AA$^{-1}$ for transition~I and $q=0.0013$~\AA$^{-1}$ for
transition~II. This is two orders of magnitude smaller than in the case
of neutron-scattering experiments. Nevertheless, some conclusions can be
drawn using the available analytical models and experimental parameters.
It is well known from theory and experiment that the response function
for liquid $^4$He contains contributions from one-phonon and multiphonon
processes. A~dispersion relation for low-energy phonons in superfluid
$^4$He was first proposed by Landau~\cite{land41a,land41b}
\begin{equation}
\label{eq:phon_disp}
\omega_\mathrm{pho} = c_s \, q_\mathrm{pho} \,,
\end{equation}
in which $\omega_\mathrm{pho}$ is the phonon frequency, $q_\mathrm{pho}$
denotes the phonon wave number, and $c_s$ represents the sound velocity in
the target. Therefore, the interaction of a~photon with a
$\bar{p}^4\mathrm{He}^{+}$ atom located in liquid $^4$He can lead to the
simultaneous creation or annihilation of phonons with the small
energy~$E_\mathrm{pho}$
\begin{equation}
\label{eq:E_pho}
E_\mathrm{pho} = \hbar c_s q = c_s p = h c_s /\lambda \,.
\end{equation}
The photon wavelength~$\lambda$ includes here the line shift due to the
pairwise potential.
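Numerically, Eq.~(\ref{eq:E_pho}) yields shifts of a few tenths of a gigahertz; a two-line Python check with an assumed representative first-sound velocity $c_s\approx238$~m/s (He~II at SVP near 1.2~K):
\begin{verbatim}
c_s = 238.0                              # m/s (assumed, He II at SVP)
for name, lam in [("I", 5972.570e-10), ("II", 4707.220e-10)]:
    print(name, c_s/lam/1e9, "GHz")
# I: 0.40 GHz, II: 0.51 GHz (cf. the tabulated line shifts below)
\end{verbatim}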
For many years, a~sharp one-phonon peak observed in the experimental
response function at low temperatures was ascribed to the superfluid
fraction of liquid $^4$He. This fraction is connected with the presence
of the Bose condensate. Therefore, such a~peak was expected to disappear
above the phase-transition temperature $T_\lambda=2.17$~K~\cite{wood78}.
However, the further extensive measurements at
$q\approx{}0.4$~\AA$^{-1}$~\cite{talb88,stir90,bogo04} proved that the
well-defined one-phonon peak does not disappear at $T>T_\lambda$. This
peak was then ascribed to the collective zero-sound mode, which is
independent of the presence of superfluidity. A similar mode is observed
in various classical liquids, at sufficiently low~$q$. Since the laser
spectroscopy of antiprotonic helium is characterized by very low
momentum transfers, the collective-mode contributions to the line shift
and broadening in the He~I region are also determined by this acoustic
one-phonon process.
The low-$q$ phonon processes in liquid~$^4$He are well described by the
following phenomenological one-phonon response
function~\cite{talb88,stir90}
\begin{equation}
\label{eq:S_1}
\begin{split}
\mathcal{S}_1(q,\omega) = \frac{\hbar}{\pi} [n_\mathrm{B}(\omega,T)+1]
Z(q,T) \Biggl[ & \frac{\Gamma_1(q,T)}
{\hbar^2[\omega-\omega_\mathrm{pho}(q,T)]^2+\Gamma_1^2(q,T)} \\
& -\frac{\Gamma_1(q,T)}
{\hbar^2[\omega+\omega_\mathrm{pho}(q,T)]^2+\Gamma_1^2(q,T)}
\Biggr] \,,
\end{split}
\end{equation}
in which $n_\mathrm{B}$ is the Bose factor for phonons
\begin{equation}
\label{eq:Bose_fac}
n_\mathrm{B}(\omega,T) = [\exp(\beta_T\omega)-1]^{-1} ,
\qquad \beta_T = (k_\mathrm{B} T)^{-1}
\end{equation}
and $k_\mathrm{B}$ is the Boltzmann constant. The one-phonon intensity
is denoted here by $Z(q,T)$ and $\Gamma_1(q,T)$ represents the half
width at half maximum of the one-phonon peak. Equation~(\ref{eq:S_1})
includes both the one-phonon creation and annihilation. The experimental
functions $Z(q,T)$, $\omega_\mathrm{pho}(q,T)$, and $\Gamma_1(q,T)$ for
the saturated vapor pressure (SVP) are presented in Ref.~\cite{stir90}
($q=0.4$~\AA$^{-1}$) and~Ref.~\cite{bogo04} ($q=0.22$~\AA$^{-1}$). In
general, for such values of~$q$, these parameters are insensitive to the
phase transition from He~II to~He~I. The phonon energy
$\omega_\mathrm{pho}$ practically does not change with temperature in
superfluid helium and in the vicinity of~$T_\lambda$. The same is true
for the peak intensity $Z(q,T)$. The width~$\Gamma_1$ rises
exponentially with rising temperature below $T\lesssim{}2.3$~K and there
is no abrupt change at~$T_\lambda$. The smallest width reported for
$q=0.4$~\AA$^{-1}$~\cite{stir90} is on the order of~1~GHz at about~1~K.
However, this result is entangled with the experimental energy
resolution of 20~GHz, which is taken into account using a~convolution of
the function~(\ref{eq:S_1}) with the Gaussian that describes the
resolution. The more accurate and reliable measurements~\cite{meze83}
using the neutron spin echo techniques give $\Gamma_1<1$~GHz at 1.35~K,
for $q=0.4$~\AA$^{-1}$. A~similar result is reported in
Ref.~\cite{klim03}. For $T\gtrsim2.3$~K, the width
$\Gamma_1$($q=0.4$~\AA$^{-1}$) increases slowly from 40 to 70~GHz at
$T\approx{}4$~K. Let us note that the observed behavior of $\Gamma_1$ as
a~function of temperature is connected with increasing randomness of the
considered system (see e.g., Ref.~\cite{alex09} and references therein).
In the antiprotonic helium case, the momentum transfers are smaller by
a~factor of 220--400 than the lowest experimental $q$ from
Refs.~\cite{stir90,bogo04,meze83,klim03}. Thus, knowledge of the
behavior of $\Gamma_1$ as a~function of small $q$ is very important. The
experimental data presented in Ref.~\cite{bogo04} show that $\Gamma_1$
linearly decreases towards very small values with decreasing~$q$, for
$q\lesssim{}0.7$~\AA$^{-1}$. This is consistent with theory which
predicts that the one-phonon peak has the $\delta$-function profile and
$\Gamma_1\to{}0$, in the limit $q\to{}0$. Using the linear
proportionality and the above-mentioned experimental data, one obtains
$\Gamma_1<10^{-2}$~GHz for $T\lesssim{}1$~K. Above this temperature,
$\Gamma_1$ exponentially rises to 0.1~GHz at 2.3~K and then slowly
increases to about 0.18~GHz at~4~K.
Since in the antiprotonic-helium experiments
$\beta_T\omega=\beta_T\omega_\mathrm{pho}\sim{}10^{-2}\ll{}1$, the
phonon population factor $n_\mathrm{B}(\omega,T)+1$
in~Eq.~(\ref{eq:S_1}) approximately equals $1/(\beta_T\omega)$ for the
phonon creation and $-1/(\beta_T\omega)$ for phonon annihilation. As
a~result, assuming that
$\mathcal{S}_i(q,\omega)\approx\mathcal{S}_1(q,\omega)$ is a~reasonable
approximation, the total photon cross section~(\ref{eq:sig_abs}) can be
expressed as follows
\begin{equation}
\label{eq:sig_abs_1}
\begin{split}
\sigma(E) = \frac{\mathcal{A}}{\pi} \, \frac{Z(q,T)}{\beta_T\omega}
\Biggl[ & \frac{\Gamma_1}
{[E-(E'_0+E_\mathrm{pho})]^2+\Gamma_1^2} \\
& +\frac{\Gamma_1}
{[E-(E'_0-E_\mathrm{pho})]^2+\Gamma_1^2}
\Biggr] .
\end{split}
\end{equation}
Thus, the resonance line is split into two lines, which are
characterized by the line shifts $\Delta{}E_1=E_\mathrm{pho}$ and
$-E_\mathrm{pho}$.
The absolute values of the line shifts $|\Delta{}E_1|=E_\mathrm{pho}$,
which were calculated using Eq.~(\ref{eq:E_pho}), are shown in
Table~\ref{table:E_pho} as functions of temperature.
\begin{table}[htb]
\begin{center}
\caption{The absolute values of the line shift $\Delta{}E_1$ in
superfluid and fluid $^4$He at SVP as functions of temperature.}
\label{table:E_pho}
\begin{ruledtabular}
\newcolumntype{.}{D{.}{.}{2.2}}
\begin{tabular}{. . .}
\multicolumn{1}{c}{Temperature}&
\multicolumn{2}{c}{Line shift [GHz]}\\
\cline{2-3}
\multicolumn{1}{c}{[K]}&
\multicolumn{1}{c}{Line~I}&
\multicolumn{1}{c}{Line~II}\\
\hline
1.20 & 0.40 & 0.50 \\
1.50 & 0.39 & 0.50 \\
1.75 & 0.39 & 0.49 \\
2.00 & 0.39 & 0.48 \\
2.17 & 0.36 & 0.46 \\
2.20 & 0.37 & 0.47 \\
2.50 & 0.37 & 0.47 \\
3.60 & 0.35 & 0.44 \\
4.00 & 0.32 & 0.40 \\
4.22 & 0.30 & 0.38 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}%
The velocities~$c_s(T)$ at SVP were taken from Ref.~\cite{mayn76} for
He~II and from Ref.~\cite{find38} for He~I. As observed in
experiments, the calculated phonon energies do not significantly change
in He~II and in the vicinity of $T_\lambda$. The calculated ratio
$\Gamma_1/|\Delta{}E_1|$ equals about 0.02 at $T\approx{}1$~K and
increases to about 0.5 at~4~K.
\section{Collisional shift of resonance lines}
\label{sec:single-atom}
In Refs.~\cite{baka00,hfi12} the density shift and broadening of the
spectral lines of antiprotonic helium atoms in gaseous helium were
evaluated in the frame of the semiclassical approach of
P.W.~Anderson~\cite{anderson}. In this approach the emitter --- the
antiprotonic atom --- is subject to full scale quantum treatment, while
the perturber --- the ordinary helium atom --- evolves classically. The
very good agreement of the theoretical results of
Refs.~\cite{baka00,hfi12} with the experimental data taken at a broad
range of helium gas %
densities up to 127~g/l
should be attributed to
(1) the use of an accurate pair-wise state-dependent potential for the
interaction of antiprotonic and ordinary helium atoms, calculated
{\em ab initio} with the symmetrized Rayleigh-Schr\"{o}dinger theory
\cite{jezi98};
(2) the use of curvilinear classical trajectories of the
perturbers determined by the interaction potential for the
{\em initial state} of the transition; and
(3) the fact that the conditions which justify the impact approximation
and the approximation of binary collisions --- typical collision
duration smaller by an order of magnitude than the average interval
between collisions, emitter excitation energies much larger than the
thermal collision energies, uncorrelated motion of the perturbers, etc.\
--- are satisfied for the densities and temperatures in consideration.
In liquid helium, however, the target density may be still higher, the
motion of the helium atoms cannot be considered uncorrelated, and this
approach cannot be applied. We therefore put the results of
Ref.~\cite{baka00} in a form that allows us to use phenomenological data
about the liquid helium target density instead of the theoretical
calculations, and this way we obtain a reliable estimate of the
collisional shift of the spectral lines. Unfortunately, the line
broadening cannot be evaluated this way since similar naive approaches
are known to produce wrong values, as pointed out in Ref.~\cite{alex09}.
In the semiclassical approach, the density shift $\Delta{}E_0$ of the
resonance energy $E_0$ of the transition $|i\rangle\rightarrow|f\rangle$
between the initial and final quantum states of the antiprotonic atom,
due to the interaction with the atoms of the surrounding helium gas, is
given by
\begin{equation}
\Delta E_0=N_0\, \bigg\langle 2\pi v\int \mathrm{d}b\, b \,
\sin\!\left(\int \mathrm{d}t\, \Delta V(\vec{R}_b(t))\right)
\bigg\rangle_{\! v} \,.
\label{prl0}
\end{equation}
Here $N_0$ is the number density of the helium gas,
$N_0=\varrho/M_\mathrm{He}$, $\varrho$ being the target density and
$M_\mathrm{He}$ being the $^4$He-atom mass;
$\Delta{}V(\vec{r})=V_f(\vec{r})-V_i(\vec{r})$ is the difference of the
state-dependent $\bar{p}^4\mathrm{He}^{+}$-He interaction potentials;
$\vec{R}_b(t)$ is the classical trajectory of a~He atom with impact
parameter~$b$, determined by the interaction potential in the %
{\em initial} state and parametrized with the proper time~$t$; and
$\langle\ldots\rangle_v$ denotes averaging over the Maxwell-distributed
asymptotic velocities~$v$ of the helium atom. Equation~(\ref{prl0}) has
been derived in the approximation of an ideal helium gas, pairwise
$\bar{p}^4\mathrm{He}^{+}$-He interaction, and under a~few more
assumptions discussed in detail in Ref.~\cite{baka00}.
At low target temperatures, the kinetic energy of most of the incident
helium atoms is small, as is the phase accumulated along
a~typical trajectory
\begin{equation}
\eta_b=\int \mathrm{d}t\,\Delta V(\vec{R}_b(t))\ll 1 \,,
\end{equation}
so that $\sin\eta_b\approx\eta_b$. We can then transform
Eq.~(\ref{prl0}) to the following form
\begin{equation}
\label{qst}
\Delta E_0 = \int \mathrm{d}^3 r\,\rho(\vec{r})
\left[ V_f(\vec{r}) - V_i(\vec{r}) \right] \,,
\end{equation}
where
\begin{equation}
\label{cdens}
\rho(\vec{r})=2\pi N_0 \bigg\langle \int \mathrm{d}b \, b
\int \mathrm{d}t \, \delta(\vec{r}-\vec{R}_b(t)) \bigg\rangle_{\! v}
\,.
\end{equation}
For spherically symmetric $V_{i,f}(\vec{r})=V_{i,f}(r)$, $\rho$~is also
symmetric: $\rho(\vec{r})=\rho(r)$. Equation~(\ref{qst}) has a simple
physical interpretation: due to the interaction with a helium atom at
position~$\vec{r}$, the energy levels of the initial and final states of
$\bar{p}^4\mathrm{He}^{+}$ (placed at the origin) are shifted by
$V_{i,f}(\vec{r})$, respectively, and the transition energy is shifted
by their difference. The observable shift is the average of the latter
over the spatial distribution $\rho(\vec{r})$ of the surrounding helium
atoms. This approximation will be referred to as ``quasistatic limit''.
Equation~(\ref{qst}) may be the starting point for a~self-consistent
approximate evaluation of the density shift, provided that the helium
gas density $\rho(\vec{r})$ is known. One could think of three different
estimates of $\rho({\mathbf r})$:
\begin{enumerate}
\item
The helium gas density $\rho_c(r)$ calculated from the classical
trajectories used in Ref.~\cite{baka00} is one estimate.
Equation~(\ref{cdens}) gives the algorithm of calculating $\rho_c(r)$
from the set of classical trajectories. The corresponding curve for
a~temperature of 5.4~K, renormalized to~1 at $r=10$~\AA{}, is shown in
Fig.~\ref{fig:dens}. [The waviness of $\rho_c$ is due to the overly
coarse discretization of the integrals in Eq.~(\ref{cdens}).]
\item
Another estimate is the helium gas density $\rho_q(r)$ equal to the
modulus squared of the two-body scattering wave function for the
system of point-like helium and $\bar{p}^4\mathrm{He}^{+}$ atoms
interacting via the potential in the initial state $V_i(r)$. For
a~comparison, in Fig.~\ref{fig:dens} we plot $\rho_q(r)$
evaluated~\footnote{The details of the calculation will be presented
in a separate paper.} for the helium temperature $T=5.8$~K and
renormalized to~1 at $r=10$~\AA~\cite{baka13}.
\item
A~third estimate is the helium gas density $\rho_\mathrm{exp}(r)$ from
experiment. Of course, there are no data on the helium density in the
neighborhood of an $\bar{p}^4\mathrm{He}^{+}$ atom, but as a first
approximation one can use the static pair correlation function
$g(\vec{r},T)$ for pure helium that gives the probability density of
finding a~helium atom located at $\vec{r}$ if the reference particle
is placed at the origin. At large $r$, $g(\vec{r},T)\to{}1$. Isotropic
media, such as liquid helium, can be described using the radial
function $\rho_\mathrm{exp}(r)=g(r,T)$. The pair correlation function
is usually determined by means of x-ray or neutron scattering.
\end{enumerate}
In the present work we evaluate the collisional line shift using
Eq.~(\ref{qst}) and the phenomenological density $\rho_\mathrm{exp}(r)$
extracted from the set of functions $g(r,T)$ of Ref.~\cite{sear79} which
were determined for fluid and superfluid $^4$He at the saturated vapor
pressure and various temperatures. We make no use of $\rho_c(r)$ or
$\rho_q(r)$ since they have been calculated under assumptions that may
not be valid in liquid helium; any partial results obtained with
$\rho_c(r)$ or $\rho_q(r)$ are shown solely for comparison. The
functions $g(r,T)$ for temperatures $T=1.77$ and 4.27~K are plotted in
Fig.~\ref{fig:dens}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{ab_fig1.eps}
\caption{(Color online) The experimental functions~$g(r,T)$ for
liquid $^4$He at~SVP and $T=$~1.77 and 4.27~K~\cite{sear79},
together with the classical $\rho_c(r)$ and quantum-mechanical
$\rho_q(r)$ probability densities which were calculated in the
two-particle approximation~\cite{baka13}. To emphasize the
similarity of their shape, the curves $\rho_c(r)$ and $\rho_q(r)$
in the plot have been {\em renormalized} to unit radial density at
$r=10$~\AA.}
\label{fig:dens}
\end{center}
\end{figure}%
This figure shows that the density distributions $\rho_\mathrm{exp}(r)$
in both the fluid and superfluid~$^4$He are very close. The helium atoms
cannot be closer than about~2~\AA{}. The maxima at about 3.5 and
6.8~\AA{} correspond to the first and second shell of neighbors,
respectively. The shape of the quantum-mechanical density $\rho_q(r)$,
plotted for comparison, is very similar, in particular at small
distances $r\lesssim{}2.5$~\AA. This proves that the radial pair correlation
function, which was obtained for a pure $^4$He target, can be applied
for calculating the line shift with the help of Eq.~(\ref{qst}). At
distances $r\gtrsim{}2.5$~\AA, the difference between the functions
$g(r,T)$ and $\rho_q(r)$ increases. This is due to the increasing role
of many-atom interactions which are not taken into account in
$\rho_q(r)$, while the experimental $\rho_\mathrm{exp}(r)$ describes the
real structure of the quantum liquid~$^4$He.
The potential curves $V_{(n,\ell)}(r)$, $V_{(n',\ell')}(r)$, and
$\Delta{}V(r)$, together with the correlation functions~\cite{sear79},
are shown in~Fig.~\ref{fig:gpot} for transition~I.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{ab_fig2.eps}
\caption{(Color online) The pairwise potential curves
$V_{(n,\ell)}(r)$, $V_{(n',\ell')}(r)$, and $\Delta{}V(r)$ for
transition~I, together with the pair correlation function~$g(r,T)$
at 1.77~K and 4.27~K~\cite{sear79}, versus the distance~$r$ between
the antiprotonic helium and the $^4$He atom.
\label{fig:gpot}}
\end{figure}%
One can see that the main contribution to the resonance energy
shift, Eq.~(\ref{qst}), comes from the interval
$2.2\lesssim{}r\lesssim{}3.4$~\AA, where both $\Delta{}V(r)$ and
$\rho_\mathrm{exp}(r)\equiv{}g(r,T)$ have significant magnitudes.
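In practice, Eq.~(\ref{qst}) reduces to a one-dimensional quadrature, $\Delta E_0=4\pi N_0\int\mathrm{d}r\,r^2 g(r,T)\Delta V(r)$. A schematic Python implementation follows; the crude stand-ins for $g(r,T)$ and $\Delta V(r)$ are ours, whereas the actual calculation uses the data of Ref.~\cite{sear79} and the {\em ab initio} pair potentials.
\begin{verbatim}
import numpy as np

M_He = 4.0026*1.66054e-27            # kg
N0 = 145.1/M_He                      # atoms/m^3 at 145.1 g/l (kg/m^3)

r = np.linspace(1.5e-10, 12e-10, 2000)            # m
g = np.clip((r - 2.0e-10)/1.0e-10, 0.0, 1.0)      # stand-in g(r,T)
dV = -2e-4*np.exp(-((r - 2.6e-10)/0.4e-10)**2)    # stand-in DeltaV, eV

dE0 = 4*np.pi*N0*np.trapz(r**2*g*dV, r)           # eV
print(dE0*2.417989e14/1e9, "GHz")    # ~ -40 GHz with these stand-ins
\end{verbatim}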
In order to check the validity of using the pairwise interaction
potentials for the determination of resonance shifts,
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{ab_fig3.eps}
\caption{(Color online) The number $n(r)$ of $^4$He atoms within the
sphere of radius~$r$ that surround an atom placed at $r=0$ in liquid
helium.}
\label{fig:neighbors_sea}
\end{figure}%
in~Fig.~\ref{fig:neighbors_sea} we plot the average number $n(r)$ of
$^4$He atoms that are located within the sphere of radius~$r$
\begin{equation}
\label{eq:neighbors}
n(r) = 4\pi N_0 \int_0^r \mathrm{d}r'\, r'^2 \rho_\mathrm{exp}(r')\,,
\end{equation}
for $T=1.77$ and 4.27~K at~SVP. The plot shows that $n(r)\leq{}1$ for
$r\leq{}3.1$--3.2~\AA, depending on the target density. Since
$\Delta{}V(r)$ has the largest absolute values within this interval
of~$r$, one can expect that Eq.~(\ref{qst}) establishes a~good
approximation to the resonance shifts.
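A minimal Python version of this quadrature (self-contained, reusing the stand-in $g(r)$ and SVP density of the previous sketch):
\begin{verbatim}
import numpy as np

M_He = 4.0026*1.66054e-27
N0 = 145.1/M_He                      # atoms/m^3 (SVP)
r = np.linspace(1.5e-10, 12e-10, 2000)
g = np.clip((r - 2.0e-10)/1.0e-10, 0.0, 1.0)      # stand-in g(r,T)

def n_of(r_max):
    m = r <= r_max
    return 4*np.pi*N0*np.trapz(r[m]**2*g[m], r[m])

print(n_of(3.0e-10))   # ~1 with this crude g; the experimental
                       # g(r,T) gives n(r)=1 near 3.1--3.2 Angstrom
\end{verbatim}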
The number density~$N_0$ in~Eq.~(\ref{cdens}) is a~factor that suggests
a~direct proportionality of the energy shift with respect to the helium
density. However, the function $\rho_\mathrm{exp}(r)=g(r,T)$ slightly
varies with temperature and thus also with the density, which at SVP is
determined by temperature. Therefore, the integration
in~Eq.~(\ref{qst}) leads to a higher-order correction to the
above-mentioned linear proportionality.
The resonance line shifts, which were calculated in the quasistatic
limit using the experimental helium density
$\rho_\mathrm{exp}(r)\equiv{}g(r,T)$ from~Ref.~\cite{sear79}, are shown
in Table~\ref{table:shift_sea}
\begin{table*}[htb]
\begin{center}
\caption{The line shift $\Delta{}E_0$ [GHz] and the reduced line
shift $\Delta{}E_0/\varrho$ [GHz$\cdot$l/g] for liquid $^4$He at~SVP,
calculated with Eq.~(\ref{qst}) using $g(r,T)$
from~Ref.~\cite{sear79}. For comparison, in the last two columns
are given the values of the reduced line shift in gaseous helium,
calculated in the semiclassical approach of Ref.~\cite{baka00}.}
\label{table:shift_sea}
\begin{ruledtabular}
\newcolumntype{.}{D{.}{.}{3.3}}
\begin{tabular}{. . . . . . . .}
\multicolumn{1}{c}{Temperature}&
\multicolumn{1}{c}{Density}&
\multicolumn{2}{c}{$\Delta{}E_0$}&
\multicolumn{2}{c}{$\Delta{}E_0/\varrho$}&
\multicolumn{2}{c}{$\Delta{}E_0/\varrho$ in $^4$He gas}\\
\cline{3-4} \cline{5-6} \cline{7-8}
\multicolumn{1}{c}{[K]}&
\multicolumn{1}{c}{[g/l]~~~}&
\multicolumn{1}{c}{Line~I}&
\multicolumn{1}{c}{Line~II}&
\multicolumn{1}{c}{Line~I}&
\multicolumn{1}{c}{Line~II}&
\multicolumn{1}{c}{Line~I}&
\multicolumn{1}{c}{Line~II}\\
\hline
1.00 & 145.1 & -64.1 & -37.7 & -0.442 & -0.260 & -0.509 & -0.181 \\
1.38 & 145.1 & -70.2 & -43.7 & -0.484 & -0.301 & & \\
1.77 & 145.3 & -63.1 & -36.9 & -0.434 & -0.254 & & \\
1.97 & 145.6 & -64.3 & -37.8 & -0.442 & -0.260 & & \\
2.07 & 145.8 & -67.5 & -40.9 & -0.463 & -0.280 & & \\
2.12 & 145.9 & -64.4 & -37.8 & -0.442 & -0.259 & & \\
2.15 & 146.0 & -67.0 & -40.3 & -0.459 & -0.276 & & \\
2.27 & 145.9 & -62.3 & -35.7 & -0.427 & -0.245 & & \\
3.00 & 141.2 & -61.0 & -35.3 & -0.432 & -0.250 & -0.582 & -0.203 \\
3.60 & 134.8 & -58.1 & -33.6 & -0.431 & -0.249 & -0.591 & -0.208 \\
4.27 & 124.0 & -53.3 & -30.9 & -0.429 & -0.249 & & \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table*}%
for the temperature interval $T=1.0$--4.27~K at~SVP. The corresponding
$^4$He densities are interpolated with the help of data
from~Ref.~\cite{donn98}. Although transition~II was not observed at gas
densities higher than 32~g/l~\cite{tori99}, the calculated resonance
shifts are also given here for this line, for the sake of comparison.
Note that there are no experimental data about the line shift and
broadening in liquid helium. The density effects have been studied
experimentally only in a~gaseous helium target at pressures
0.2--8.0~bars, temperatures 5.8--6.3~K and target densities ranging from
about 1.4 to~127~g/l \cite{tori99,hori06}, where the linear dependence on
the target density has been confirmed within the experimental accuracy.
The recent experimental results of Ref.~\cite{hori06} read
$\Delta{}E_0/\varrho=-0.63\pm{}0.03$~GHz$\cdot$l/g for transition~I and
$\Delta{}E_0/\varrho=-0.21\pm{}0.02$~GHz$\cdot$l/g for transition~II.
Table~\ref{table:shift_sea} shows that the values of the reduced shift
$|\Delta{}E_0|/\varrho$ in liquid $^4$He, calculated using the
phenomenological $\rho_\mathrm{exp}(r)$, differ from both the
experimental data and the semiclassical calculations for gaseous $^4$He.
The reduced resonance redshifts $|\Delta{}E_0|/\varrho$ as functions of
the upper limit~$r_\mathrm{max}$ of the integral Eq.~(\ref{qst}) are
plotted in~Fig.~\ref{fig:redshift_sea_hn1}.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{ab_fig4.eps}
\caption{(Color online) The reduced resonance redshifts
$|\Delta{}E_0|/\varrho$ for transition I and~II as functions
of~$r_\mathrm{max}$ for $g(r,4.27~\mathrm{K})$
from~Ref.~\cite{sear79}.
\label{fig:redshift_sea_hn1}}
\end{figure}%
In the case of line~I, the asymptotic value of~$\Delta{}E_0$ is
already reached at $r_\mathrm{max}\approx{}3.8$~\AA{}. This confirms
that the approximation of binary interactions is a good one in
this case. A~better approximation would be possible when the
three-body interaction potentials are calculated, which is a~much
more complicated task. For line~II, the asymptotic value
of~$\Delta{}E_0$ is achieved only at
$r_\mathrm{max}\approx{}9$~\AA\ since $\Delta{}V(r)$ changes sign
at $r=3.3$~\AA\ and $\Delta{}E_0(r_\mathrm{max})$ is not a
monotonic function. Thus, in this particular case, the binary
interaction approximation is not as good as for line~I.
\section{Conclusions}
\label{sec:concl}
The present work was motivated by the attempts of the ASACUSA
collaboration at CERN to perform high-accuracy spectroscopy measurements of
antiprotonic helium atoms in liquid helium media, where the
broadening and shift of the spectral lines due to the collective
many-body effects had not been investigated.
The resonance line shift of the antiprotonic helium atom located in
liquid $^4$He is the sum of the contribution $\Delta{}E_0$ from the
pairwise $\bar{p}\mathrm{He}^{+}$-$^4$He interaction and the
contribution $\Delta{}E_1$ due to the collective dynamics of the liquid.
The shift $\Delta{}E_0$ gives a correction on the order of
$|\Delta{}E_0|/E_0\sim{}10^{-4}$ to the resonance wavelength~$\lambda_0$
of an isolated $\bar{p}\mathrm{He}^{+}$ atom. The correction due to the
collective dynamics is much smaller, $|\Delta{}E_1|/E_0\sim{}10^{-6}$,
but may still be of importance for high-precision measurements.
The calculated values of $\Delta{}E_0/\varrho$ in
Table~\ref{table:shift_sea} exhibit appreciable variations (9\% for
line~I) with changing temperature, for $T<2.27$~K and almost constant
density. Thus, this phenomenon is apparent only in superfluid~$^4$He. At
higher temperatures, where $^4$He density at SVP significantly
decreases, $\Delta{}E_0/\varrho$ is practically constant. A~similar
effect was observed in the experiments using gaseous helium
targets~\cite{tori99,hori06}.
The reduced line width $\Gamma_1/\varrho$ that comes from the collective
motion in liquid~$^4$He ranges from about $10^{-4}$ to
$10^{-3}$~GHz$\cdot$l/g. Thus, the corresponding contribution to the
line broadening is much lower than the collisional broadening in
a~gaseous helium target as reported in Ref.~\cite{baka00}. As already
pointed out, our method for the evaluation of the density shift in the
quasistatic limit using phenomenological data about the helium target
density cannot be applied in calculating the collisional line
broadening, and we can only estimate the broadening in a~gaseous helium
target as an upper limit for the broadening in liquid helium.
The accuracy of the calculated quasistatic line shifts and broadening
could be improved if the potentials of the $\bar{p}\mathrm{He}^{+}$
interaction with two helium atoms were available. However, the
calculation of such potentials is much more complicated than in the case
of the pairwise potential.
Note that one of the methods of studying liquid-helium structure is the
observation of foreign atoms and ions implanted in a liquid $^4$He
target. Unfortunately, the presence of such an atom strongly affects its
helium vicinity. A~large cavity is created around the foreign atom, due
to the Pauli repulsion between the electrons in this atom and the
surrounding helium atoms (see e.g.,~Ref.~\cite{mate11}). The
antiprotonic helium atom, in contrast, having only one electron in the
1s~state, is a good candidate for such studies, as is indicated by the
similarity of the densities $\rho_\mathrm{exp}(r)\equiv{}g(r,T)$ and
$\rho_q(r)$.
\begin{acknowledgments}
We would like to thank M.~Hori and A.~S\'ot\'er for the valuable
discussions. This work has been performed under the framework of
collaboration between the Bulgarian Academy of Sciences and Polish
Academy of Sciences.
\end{acknowledgments}
|
1,116,691,500,405 | arxiv | \section{Introduction}
Deep learning-based methods have proven to be powerful in the development of automated image analysis software in digital pathology. This innovative field of research has been fostered by the creation of publicly available data sets of specific histological structures. One of the most extensively researched cell structures in the current literature is the mitotic figure (the microscopic appearance of a cell undergoing cell division) in neoplastic tissue. Quantification of the highest density of mitotic figures is one of the most important histological criteria for the assessment of biological tumor behavior, and this pattern has therefore drawn much research attention for computerized methods.
Manual enumeration of mitotic figures by pathology experts has some limitations, including high inter-rater inconsistency of pathologists in classifying individual cells as mitotic figures, as these exhibit a high degree of morphological variability and similarity to some non-mitotic structures. In previous studies, disagreement of classification occurred in 6.4-35.3\% \cite{malon2012mitotic}, and 68.2\% \cite{veta2015assessment} of labels. This calls for algorithm-assisted approaches in order to increase reproducibility, as it has been proven that algorithms can have substantial agreement with pathologists on the object level \cite{veta2016mitosis}. Poor consistency of expert classification is, however, also a potential bias for deep learning-based methods, as pathologists are the current gold standard for the assessment of morphological patterns, including mitotic figures, and the creation of histological ground truth datasets. Due to the high inter-observer discordance of pathologists, we suspect some variability in assigned labels if images are annotated a second time. The usage of pathologist-defined labels for machine learning methods is thus somewhat of a paradox, as algorithmic methods, which are trained with and tested on these partially noisy ground truths, aim to overcome cognitive and visual limitations of pathologists.
In order to assess the robustness of algorithms and the reproducibility of newly developed deep learning-based methods, it is necessary to test on several independent ground truth datasets. For these aspects, images should be independent, but the ground truth should ideally be consistent throughout the datasets. To date, several open access datasets are available with labels for mitotic figures in digitalized microscopy images of human breast cancer \cite{Roux:2014tt,Roux:2013kn,veta2018predicting} and canine cutaneous mast cell tumors \cite{bertram2019large}, which have been developed by three research groups with somewhat variable labeling methods. As several publications have compared their algorithmic approaches between these publicly available datasets (for example \cite{aubreville2019learning,akram2018leveraging,li2019weakly}), a strong difference in test performance is known for these datasets. However, the influence of variability in the ground truth labels on training and test performance is currently unknown. In the present work, we have developed an alternative ground truth dataset for one of those publicly available image sets and assessed the difference from the original dataset. This was done using a new labeling methodology, targeted towards improved identification of mitotic figure events, and supported by the use of deep learning.
\section{Related Work}
Most publicly available data sets with annotations for mitotic figures are from human breast cancer, due to the high prevalence and the high prognostic importance of the mitotic count for this tumor type. Roux \textit{et al.}~ were the first to present a data set, consisting of five cases scanned by two whole slide scanners (and one multi-spectral scanner) and annotated by a single pathologist (ICPR MITOS 2012, \cite{Roux:2013kn}). The biggest limitation of this dataset was the potential overlap of training and test images, which were retrieved from different locations of the same whole slide images (WSI). A year later, the MICCAI AMIDA 13 challenge introduced a new data set, covering in total 23 cases, which were evenly spread between training and test set \cite{veta2015assessment}. They were the first to acknowledge the potential bias (inter-rater variance) of a single pathologist and thus had two pathologists perform the task independently, with a panel of two additional pathologists judging discordant annotations (see Fig.~\ref{fig:tupac_label_method}). The following year, the group behind the MITOS 2012 data set introduced an extended data set at ICPR (ICPR MITOS 2014, \cite{Roux:2014tt}), consisting of 16 cases (11 for training and 5 for test), again scanned using two scanners, but this time including annotations from two pathologists. In case the pathologists disagreed, a third pathologist decided on the particular cell. The data set also includes an expert confidence score for each mitotic figure as well as for cells probably not mitotic figures (hard negative cells). The most recent mitotic figure dataset was part of the TUPAC16 challenge \cite{veta2018predicting}, incorporating all 23 AMIDA13 cases in the training set in addition to 50 new training cases and 23 new test cases. This dataset currently comprises the highest number of mitotic figure labels in human breast cancer.
Data about the agreement of experts in the MITOS 2014 data set can be extracted from the labels given by the challenge. Out of all 1,014 cells that were flagged by at least one pathologist as \textit{mitosis} or \textit{probably mitosis}, only 317 (31.26\%) were agreed by all pathologists to be mitotic figures, but for 749 (73.87\%) the expert consensus was \textit{mitosis}. For the MICCAI AMIDA 13 data set, Veta \textit{et al.}~ reported an agreement on 649 out of 2,038 (31.84\%) annotated cells between the two initial readers, and the consensus found 1,157 (56.77\%) to be actual mitotic figures \cite{veta2015assessment}. The fact that for both data sets the final consensus strongly exceeds the initial agreement highlights that spotting of rare mitotic figure events is a difficult component in the labeling process, which might lead to data set inconsistency.
For data sets, inclusion of real-life variance of stain and tissue quality is an advantage, as the data is much more representative of a realistic use case. Current datasets on mitotic figures exhibit some differences in staining and other characteristics, causing a certain domain shift \cite{aubreville2019learning} and somewhat limiting dataset transferability / robustness. Of the aforementioned datasets, the TUPAC16 dataset likely includes the highest variability due to the inclusion of the currently highest number of cases, which were retrieved from three laboratories and scanned with two different scanners \cite{veta2018predicting}. The consequence of the higher variability is an increased difficulty for the pattern recognition task of automatic mitotic figure detection, as also reflected by lower recognition scores achieved on the data set compared to the other data sets. However, this variability represents a more realistic use-case, and is highly beneficial for the development of algorithms to be used in heterogeneous clinical environments.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{TUPAC_Method_original.pdf}
\caption{Annotation workflow in the original AMIDA13/TUPAC16 data sets \cite{veta2015assessment,veta2018predicting}. The images were independently screened by two pathologists. All agreed mitotic figures were directly accepted as ground truth, while disagreed cases were submitted to a panel of two additional experts.}
\label{fig:tupac_label_method}
\end{figure}
\section{Material and Methods}
\subsection{Development of an Alternative Set of Labels}
Due to the relevance of the TUPAC16 dataset (see above), we have decided to use these images in the present study for assessment of the reproducibility of pathologist-defined labels. Available images from the TUPAC16 test and training set (N = 107 cases \cite{veta2018predicting}) were retrieved from the TUPAC challenge website. Cases from the AMIDA13 challenge were available as several separate, but often flanking image sections, which we stitched into single images by utilizing correlation at the image borders, wherever possible. The alternative dataset was developed in a similar way as published by Bertram \textit{et al.}~ \cite{bertram2019large}: First, one pathology expert screened all images twice (see Fig. \ref{fig:tupac_label_method_new}) using an open source software solution with a guided screening mode \cite{aubreville2018sliderunner}. Mitotic figures (MF) and similar structures (hard negatives, HN) were labeled. The dataset from the first screening of the training set included 5,833 labels (2,188 MF; 3,645 HN), and from the second screening 7,220 labels (2,218 MF; 5,002 HN).
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{TUPAC_Method_new.pdf}
\caption{Labeling approach used for the creation of the alternative set of labels, consisting of two steps. First, an expert screened the slides twice, and another expert performed a blind evaluation of all cells. Second, an algorithmic pipeline was used to detect cells potentially missed by the manual screening. For both steps, disagreed labels were re-evaluated by both experts in order to find a common consensus.}
\label{fig:tupac_label_method_new}
\end{figure}
The dataset was given to a second pathologist, who assigned a second label (MF or HN) in a blinded manner (label class obscured), supported by the annotation software through automatic presentation of image sections with unclassified objects. The second pathologist assigned the MF label in 2,272 cases and the HN label in 4,978 cases. Initial agreement for the class MF was found for 1,713 cells (61.69\%); the pathologists disagreed on 1,064 cells (14.74\% of all cells). All disagreed cells were re-evaluated by both experts, and the consensus of the manual dataset contained 1,898 MF and 5,340 HN.
Subsequently, labels from the first expert were used in an algorithm-aided pipeline for the detection of missed objects, as described in \cite{bertram2019large}. The pipeline extracted image patches around additionally detected mitotic figure candidates, sorted according to their model score. The algorithm-based screening additionally found 5,824 objects (mitotic figure candidates), which were then extracted as $128\times128$\,px image patches centered around the detection. Two experts independently assessed these patches (MF or HN) and agreed on all but 142 patches. All agreed objects were assigned to the dataset immediately, and disagreed objects were re-evaluated by joint assessment for consensus. The final augmented data set contains 1,999 MF and 10,483 HN. Please note that all numbers are given only for the training part of the set so as not to reveal information about the test set for further usage.
\subsection{Automatic Mitosis Detection Methods}
We evaluated the alternative labels using a standard, state-of-the-art object detection approach: We customized RetinaNet \cite{lin2017focal} based on a pre-trained ResNet-18 stem with an input size of $512\times512$\,px to have the object detection head only attached at the $32\times32$ resolution of the feature pyramid network. We chose four different sizes (scales) of bounding boxes to enable augmentation by zooming and rotation, but only used a 1:1 aspect ratio, since the bounding boxes were defined to be squares. We randomly chose 10 tumor cases to be our validation set, which was used for model selection based on the mAP metric. After model selection, we determined the optimum detection threshold on the concatenated training and validation set. Models were trained on both the original TUPAC16 label set and the novel, alternative set, and evaluated on the respective test sets using $F_1$ as metric.
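A comparable single-level setup can be sketched with torchvision's reference RetinaNet; this is an illustration only, not our exact training code, and the layer choice, anchor sizes and API details are assumptions.
\begin{verbatim}
import torch
from torchvision.models.detection import RetinaNet
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.models.detection.backbone_utils import \
    resnet_fpn_backbone

# ResNet-18 FPN backbone keeping only the stride-16 level, so that a
# 512x512 input yields 32x32 feature maps (torchvision >= 0.13 API;
# the FPN appends one extra pooled level, hence two map entries).
backbone = resnet_fpn_backbone(backbone_name="resnet18", weights=None,
                               returned_layers=[3])
anchors = AnchorGenerator(sizes=((20, 28, 36, 44),)*2,  # 4 scales (assumed)
                          aspect_ratios=((1.0,),)*2)    # square boxes only
model = RetinaNet(backbone, num_classes=2,              # mitosis vs. rest
                  anchor_generator=anchors, min_size=512, max_size=512)

model.eval()
preds = model([torch.rand(3, 512, 512)])  # [{'boxes','scores','labels'}]
\end{verbatim}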
Additionally, we calculated the model scores for individual cells of the data sets to assess model confidence. Since the test sets of both label sets are not publicly available, we used a three-fold cross-validation on the training set. For this, we disabled the threshold normally used within the non-maximum suppression of the model post-processing, which enabled us to derive model scores from the classification head of the model for all cells of our data set. We matched annotations in both data sets under the assumption that all annotations within a distance of 25 pixels refer to the same object.
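The matching itself amounts to a one-to-one assignment of points within a 25\,px radius; a self-contained Python sketch (toy coordinates, greedy nearest-first matching as an assumed strategy) that also derives precision, recall and $F_1$ from the matches:
\begin{verbatim}
import numpy as np

def match(a, b, radius=25.0):
    """Greedy one-to-one matching of (N,2) and (M,2) point arrays."""
    if len(a) == 0 or len(b) == 0:
        return []
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    pairs, used_a, used_b = [], set(), set()
    for idx in np.argsort(d, axis=None):   # closest pairs first
        i, j = divmod(idx, d.shape[1])
        if d[i, j] > radius:
            break
        if i not in used_a and j not in used_b:
            pairs.append((i, j)); used_a.add(i); used_b.add(j)
    return pairs

gt  = np.array([[100., 100.], [400., 380.], [900., 120.]])
det = np.array([[110., 108.], [420., 385.], [520., 600.]])
tp = len(match(gt, det))
prec, rec = tp/len(det), tp/len(gt)
print(tp, 2*prec*rec/(prec + rec))         # 2 matches -> F1 = 0.667
\end{verbatim}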
The complete training set and all code that was used for the evaluation are made available online\footnote{\url{https://github.com/DeepPathology/TUPAC_alternativeLabels}}. We encourage other research groups to use this alternative dataset for training their algorithms, and we will provide an evaluation of the performance of detection results on the test set upon a reasonable request to the corresponding author.
\section{Results}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Distribution.pdf}
\caption{Comparison of the original TUPAC16 and alternative label sets (training part only). The two expert teams agreed upon 1,239 mitotic figures, while the new set contains 760 additional labels for mitotic figures, and 246 out of 309 disagreed cells were labeled as hard negatives. The plot also shows the concatenated model scores given by a RetinaNet approach trained in three-fold cross-validation on the original (blue) and alternative (green) label set. }
\label{fig:results_distribution}
\end{figure}
Comparing the original and the new, alternative training label sets, we find that they agree on 1,239 MF annotations (53.59\%), while the two expert groups disagreed on 1,073 cells (46.41\%). As depicted in Fig. \ref{fig:results_distribution}, 246 of the MF identified in the original TUPAC16 label set were assigned to be hard examples in the alternative set, while 67 were not annotated at all. The alternative set assigned 760 further cells with the label MF that were not annotated in the original label set.
Looking at the concatenated model scores from the cross-validation experiment, we can state that the model trained on the alternative set shows an overall higher confidence for agreed mitotic figures. In contrast, MF labels only present in the original TUPAC16 dataset had an overall lower model score with a tendency towards higher values in the models trained on the original set (median values are 0.326 and 0.284). The group of labels newly assigned in the alternative set shows higher scores for the model trained on the alternative set (median value of 0.503 vs. 0.266), while the group of hard negatives has a very similar distribution with low model scores for both training label sets.
\begin{table}[]
\setlength{\tabcolsep}{6pt}
\resizebox{\linewidth}{!}{
\centering
\begin{tabular}{c|c|ccc}
\multirow{2}{*}{metric} & \multirow{2}{*}{training} & \multicolumn{3}{c}{test} \\
\cline{3-5}
& & TUPAC original & alternative(manual) & alternative(augmented) \\
\hline
\multirow{2}{*}{\textbf{$F_1$ score}} & TUPAC original & 0.549 & 0.587 & 0.563 \\
\cline{2-5}
& alternative (augmented) & 0.555 & 0.719 & 0.735 \\
\hline
\multicolumn{5}{c}{} \\
\hline
\multirow{2}{*}{\textbf{precision}} & TUPAC original & 0.540 & 0.682 & 0.699 \\
\cline{2-5}
& alternative (augmented) & 0.477 & 0.713 & 0.772 \\
\hline
\multicolumn{5}{c}{} \\
\hline
\multirow{2}{*}{\textbf{recall}} & TUPAC original & 0.559 & 0.515 & 0.471 \\
\cline{2-5}
& alternative (augmented) & 0.665 & 0.725 & 0.701 \\
\hline
\end{tabular}
}
\caption{Comparison of $F_1$ score, precision and recall achieved on the different label sets with a customized RetinaNet\cite{lin2017focal} approach.}
\label{tab:results_comparison}
\end{table}
The higher model confidence for mitotic figures on the alternative dataset in Fig. \ref{fig:results_distribution} coincides with a generally higher $F_1$ score in model performance on the test set (see Table \ref{tab:results_comparison}). We see a small improvement when using the data set augmented with the machine-learning-aided detections of potentially missed cells, related to a notable increase in precision.
\section{Discussion}
Labeled data is the foundation for training and testing of deep learning-based algorithms. Although a vast diversity of labeling methods has been applied for mitotic figure dataset development \cite{bertram2019large,marzahl2020fast,Roux:2013kn,Roux:2014tt,veta2015assessment,veta2018predicting}, the effects of these methods on algorithmic performance are currently not fully understood. With recent improvements of deep learning methods, the demand for high-quality data is also increasing. The currently highest reported $F_1$ score on the original TUPAC16 dataset is 0.669 \cite{li2019weakly}, which is significantly higher than the value achieved by our standard RetinaNet approach on the same labels ($F_1$ score: 0.549). Considering the difference between the present and the state-of-the-art results by Li \textit{et al.}~\cite{li2019weakly} on the original TUPAC16 dataset, it seems likely that the results on the alternative datasets may also be further improved by more advanced methods, which we encourage, as we have made the alternative datasets publicly available. However, instead of aiming to achieve the highest possible performance, we wanted to assess the effects of using different ground truth datasets of the same images with the same deep learning method. The major finding of the present study is that pathologist-defined labels are not necessarily reproducible even when using annotation protocols that take the consensus of several experts as the ground truth, and the differences may lead to notable variation in performance. In this case, the model trained and tested on the alternative dataset yielded an $F_1$ score that was 18.6 percentage points higher than that of the same model architecture trained and tested on the original label set. The present results indicate that comparing model performance between two different datasets should be done with caution.
The alternative dataset contains 28.80\% more mitotic figure labels in the training set. Some of these additional mitotic figures have a relatively low model score, which could call into question the unambiguous nature of the labels regardless of the overall higher $F_1$ score. However, the increased model scores for the algorithms trained on the alternative data, in comparison to the original data, indicate an overall higher consistency. Regardless, both datasets include numerous labels with low model scores, which could potentially be explained by the high morphological variability of mitotic figures and the availability of very few patches of some morphological variants for training. Large-scale datasets with even higher numbers of mitotic figure labels might potentially overcome this limitation. Additionally, different degrees of inconsistency have been described between pathologists \cite{malon2012mitotic,Roux:2014tt,veta2018predicting}, and pathologist-defined labels represent a somewhat noisy reference standard regardless of agreement or consensus by several pathologists.
Besides the difficulties in the classification of mitotic figures, differences in expert-defined labels may arise from a failure to identify rare events \cite{veta2016mitosis}. The higher number of mitotic figure labels with presumably high label consistency in the alternative datasets (see above) suggests that fewer mitotic figures were overlooked. The labeling method of the alternative dataset basically follows the paradigm of Viola and Jones \cite{viola2001rapid} of having an initially highly sensitive detection, followed by a secondary classification with high specificity achieved through dual expert consensus. High sensitivity in detecting potential mitotic figure labels was achieved by repeated manual screening of the images and an additional algorithmic augmentation. As the algorithmic detection of missed objects may potentially introduce a confirmation bias, image patches were reviewed by two pathologists independently. Final agreement on mitotic figures was only obtained for 2.4\% of the augmented cells, illustrating the desired high sensitivity / low specificity of this approach for algorithmic mitotic figure detection. This ensured the identification of presumably almost all mitotic figures (a high number of true positives and a low number of false negatives). Of note, adding this relatively low number of labels to the ground truth had a notable effect on performance of up to 1.6 percentage points, consistent with previous findings \cite{bertram2019large}.
Algorithmic approaches for dataset development have become more popular in recent years due to an increasing demand for datasets that is difficult to meet with solely manual approaches. As described above, algorithmically supported identification of missed candidates may improve dataset quality and requires algorithms with high sensitivity \cite{bertram2019large}. In contrast, enlargement of datasets (higher quantity) may be facilitated through algorithmic detections with high specificity, in order to ensure that mainly true positive and only few false positive labels are generated. This approach can be used for the creation of datasets with reduced expenditure of expert labor (crowd sourcing \cite{albarqouni2016aggnet} or expert-algorithm collaboration \cite{marzahl2020fast}), or for fully automated generation of additional data without pathologist-defined labels (pseudo-labels) \cite{akram2018leveraging}. Tellez \textit{et al.}~\cite{tellez2018whole} recently investigated another approach that used a specific stain for mitotic figures (immunohistochemistry with antibodies against phosphohistone H3) with computerized detection of reference labels and subsequent registration to images of the same tissue section with a standard, non-specific hematoxylin and eosin stain. Besides requiring minimal manual annotation effort, this method may eliminate expert-related inconsistency and inaccuracy.
In conclusion, this study shows considerable variability in pathologist-defined labels. A subsequent effect was evident on training of the models (variation of model scores) and performance testing (variation of $F_1$ score). This needs to be considered when the robustness of algorithms or the reproducibility of developed deep learning methods is to be tested on independent ground truth datasets with different labeling methods. Therefore, scores should be interpreted in relation to reference results on that specific dataset. Further studies on the reduction of expert-related inconsistency and inaccuracy are encouraged.
\bibliographystyle{splncs04}
\section{Introduction}
The perovskites are a very important class of materials with a diverse range of properties for potential applications like sensors\cite{Harwell2020, fergus2007}, dielectrics\cite{davies2008}, thermoelectrics\cite{Sukanti2017, maiti2019}, magnetic materials\cite{pardo2009}, multiferroics\cite{Tokunaga2009}, superconductivity\cite{Manju2020} etc. The cobaltate-based double perovskites in particular are complex crystals and often strongly correlated systems\cite{ikeda2016, pardo2009, Vasala2010, taskin2005, taskin2006}. The interest in these compounds arises due to spin and charge degrees of freedom owing to strong correlation among entities like oxygen content (carrier doping), rare earth ion radius and nonstoichiometry of the central B-site ions (charge and spin)\cite{pardo2009, Vasala2010, taskin2005, taskin2006}. Particularly, in the $AA^{\prime}B_2O_6$ type structure, the non-stoichiometry of the central (B site) Co ions introduces various exotic properties like electron-hole asymmetry\cite{taskin2005PRL}, metal-insulator transitions\cite{frontera2002}, magnetic phase transitions\cite{taskin2005}, oxygen ion mobility etc. in $ReBaCo_2O_6$. Besides, doping at the Co site or varying the oxygen content of the lattice induces either hole doping or electron doping in the system\cite{taskin2005, taskin2006}.
\begin{figure}[b]
\includegraphics[width=8.5cm]{Fig1}
\caption{The schematic diagram showing (a) the crystal structure and (b) the phase diagram of $GdBaCo_2O_{5+\delta}$, with the average Co oxidation state for $\delta$ = 0, 0.5 and 1.0, showing tetragonal, orthorhombic and tetragonal unit cells respectively. }
\label{Fig1}
\end{figure}
As shown in Figure \ref{Fig1}, the crystal structure of double perovskite type $RBaCo_2O_{5+\delta}$ (RBCO) (where R is a rare earth element) consists of a sequence of metal oxide layers like CoO$_2$-BaO-CoO$_2$-ReO$_\delta$ periodic layers along the $\bar{c}$-axis. The $\delta$-value, i.e. the oxygen content of the lattice, mainly depends on the valency of Co, which can be 2+, 3+ or 4+. In a regular double perovskite $AA^{\prime}B_2O_6$ lattice, i.e. $\delta$ = 1, the Co ions are 50:50 3+:4+, i.e. all $CoO_6$ octahedra (O), where 50\% of the Co ions have the 3+ oxidation state and 50\% have the 4+ oxidation state. For $\delta$ = 0, the Co ions are 50:50 2+:3+, i.e. all $CoO_5$ square pyramids (P), where 50\% of the Co ions have the 2+ oxidation state and 50\% have the 3+ oxidation state. However, at $\delta$ = 0.5, all the Co are in the 3+ state, i.e. there are exactly equal contributions of $CoO_5$ square pyramids and $CoO_6$ octahedra. This causes the crystal structure to change from tetragonal ($\delta$ = 0) to orthorhombic ($\delta$ = 0.5) to again tetragonal ($\delta$ = 1), as shown in Fig. \ref{Fig1}(b). This results in varying physical properties: for example, the magnetic signature of the composition varies due to either the high spin, low spin or intermediate spin state of the Co ion\cite{takubo2006, taskin2005}. Further, the incorporated oxygen ions play a vital role in the properties of layered cobaltates\cite{taskin2005, taskin2006, liu2011}. The weak bonding of oxygen in the $ReO_\delta$ layer enhances the oxygen diffusivity and hence increases the mobility of oxygen through the surface\cite{hermet2010, tsvetkov2014}. This in turn produces a large change in the transport properties of these systems, as it is proposed that the equilibrium $\delta$ value at a given temperature varies linearly with the logarithm of the oxygen partial pressure (except for $\delta$ = 0)\cite{taskin2005}. Because of this, the double perovskite (AA$^\prime$B$_2$O$_{5+\delta}$) systems are rich in phases, like antiferromagnetic insulator, ferromagnetic insulator, ferromagnetic metal, paramagnetic metal etc.\cite{taskin2005, ahmed2017}. They distinctly show an insulator to metal transition (for intermediate $\delta$ values) near room temperature (~340 K for GBCO) and an order-disorder phase transition at relatively high temperature (723 K for GBCO)\cite{liu2011}. This order-disorder phase transition is particularly characterized by a rearrangement of oxygen and its vacancies in the lattice. It may result in one-dimensionally ordered oxygen vacancies along the $\bar{a}$-axis at low temperature and two-dimensionally distributed vacancies in the ReO$_\delta$ plane at high temperature. This eases the mobility of oxygen, causing a change in the electronic transport of the material, which can efficiently be utilised for monitoring oxygen levels in various applications like oxygen storage or fuel cells\cite{tarancon2008}. The beauty of the layered rare earth cobaltates is that a change in annealing conditions can controllably alter the oxygen concentration over a wide range in the $ReO_\delta$ planes. Here, the size of the rare earth ion $Re^{+3}$ governs the equilibrium $\delta$ value at a given temperature. Thus, $Gd^{+3}$ was chosen because its intermediate size among the lanthanide series allows a wide range of oxygen concentrations. Besides, air-synthesized samples of Gd cobaltate show a $\delta$ value near 0.5, which allows the possibility of both electron and hole doping, i.e. either decreasing or increasing the $\delta$ value.
The most successful oxygen sensors utilise wide bandgap metal oxide semiconductors like ZrO$_2$ and work on either potentiometric or amperometric principles\cite{ramamoorthy2003, haaland1977}. These sensors require a constant reference on one side, which is compared with the ambient oxygen level. Further, to limit the current, a small pin hole is used, which is a tedious task to achieve. Moreover, most of these sensors (conductometric, amperometric or potentiometric) operate at much higher temperatures, i.e. 700 K and above, which is practically difficult to maintain as it requires a power-hungry heater. Besides, they have limited applicability for applications like high altitude air breathability monitoring, where the temperatures are fairly low.
In this paper, a simple, proof-of-concept oxygen sensor is proposed which exploits the sensitivity of the thermoemf to the oxygen partial pressure and operates at room temperature or even lower. The response is measured as a change in open circuit voltage for a constant temperature difference. The response is large below the metal insulator transition temperature of GBCO, which is 340 K. Thus, the crucial role of mobile lattice oxygen in these double perovskite structures is studied thoroughly and a potential device is proposed that allows monitoring of oxygen, with reliable sensitivity, using a more straightforward thermoemf measurement for use in monitoring ambient oxygen breathability and control.
\section{Experimental}
\subsection{Synthesis}
The $GdBaCo_2O_{5+\delta}$ was prepared by the solid-state synthesis technique. In a typical synthesis, gadolinium oxide (from Sigma Aldrich, Germany) was preheated at 800 $^o$C for 12 hrs to remove any absorbed volatile species. The stoichiometric amount of the rare earth oxide was mixed with $BaCO_3$ (from Sigma Aldrich, Germany) and $Co(NO_3)_2.6H_2O$ (from Sigma Aldrich, Germany) to obtain 10 g of $GdBaCo_2O_{5+\delta}$. The composition was ground for about 2 hours in ethanol medium and was heated at 850 $^o$C for 24 hrs. Further, it was cooled naturally and the grinding was repeated for 2 hrs. Subsequently, it was heated at 1100 $^o$C for 24 hrs.
\subsection{Material characterizations}
X-ray diffraction (XRD) was recorded to confirm the crystal structure and phase formation and to estimate the crystallite size. The high resolution XRD data were recorded in the 2$\theta$ range 5 to 90 degrees at a step size of 0.02 degrees at room temperature, with sufficient exposure time to get good quality data, on a Philips PANalytical X'Pert Pro diffractometer equipped with an accelerated detector. In order to reduce the grain size, planetary ball milling was done on a Fritsch system (model Pulverisette 7) for 12 hrs for each sample, and the XRD was repeated after ball milling to compare the crystallinity.
The samples were pelletized by cold pressing and heated at 1100 $^o$C for 24 hrs. To check the crystallinity and purity, the XRD was repeated. X-ray Photoelectron Spectroscopy (XPS) measurements were undertaken to verify the stoichiometry of each element in the compound. The electrical resistivity of the sintered pellet was studied in the van der Pauw geometry in an in-house built system as a function of temperature in vacuum ($10^{-2}$ torr). A Keithley 2401 source meter was used to supply the current and a Keithley 2700 digital multimeter was used to measure the potential drop in the four-probe configuration. The system was interfaced to a personal computer for data logging.
\begin{figure}[b]
\center
\includegraphics[width=8.5cm]{Fig2}
\caption{The schematic diagram of the thermoelectric gas sensor system, with a cartoon of the thermocouple junction made by crossing the wires shown in the top-right corner.}
\label{Fig2}
\end{figure}
\subsection{Thermoelectric gas sensing measurements}
An in-house Seebeck-based gas sensing system was developed to carry out the measurements. Figure \ref{Fig2} shows the schematic diagram of the gas sensing system. In a stainless steel chamber, the sample was placed on a heater stage, with a thin alumina substrate below, such that some part of the sample is floating. Thus, there naturally exists a small temperature gradient, which is exploited to generate and measure the thermoemf. The thermocouples are mounted on a spring base to ensure proper pressure at the point of contact. In order to sweep the temperature gradient from negative to positive (for Seebeck measurements), a secondary heater was placed touching the floating end of the sample. This heater was controlled using a Lakeshore 336 temperature controller for precise control.
The two thermocouples used for measuring the temperatures and the open circuit voltage are made by crossing two alumel-chromel wires (36 gauge) through four-bore alumina ceramics, as shown at the top-right in Fig. \ref{Fig2}. This design allows a precise junction to be made which is in contact with the sample and avoids the cold finger effect due to the thinner thermocouple junction\cite{iwanaga2011}. Besides, the negligible thermal mass of the thermocouple ensured high sensitivity, avoiding the time delay to attain thermal equilibrium.
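For clarity, the way a sample Seebeck coefficient is typically extracted from such a two-thermocouple arrangement can be sketched as follows (a minimal illustration, not the actual acquisition code; the wire correction value and the sign convention depend on which wire pair of the two thermocouples is used to read the sample voltage, and the quoted absolute Seebeck coefficient of chromel is an assumed approximate value):
\begin{verbatim}
# Sketch: Seebeck coefficient from two thermocouple readings.
# T_hot, T_cold: junction temperatures (K); dV: open-circuit voltage (V)
# read between the like wires (e.g. the two chromel legs) of the two
# thermocouples.  S_wire is the absolute Seebeck coefficient of that
# wire material near the measurement temperature (assumed value).
def seebeck_sample(T_hot, T_cold, dV, S_wire=21.5e-6):  # ~chromel, V/K
    dT = T_hot - T_cold
    # The wire contribution adds to the measured slope dV/dT.
    return dV / dT + S_wire

print(seebeck_sample(305.0, 300.0, 120e-6))  # example placeholder numbers
\end{verbatim}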
\section{Results and Discussion}
\subsection{Structural studies using X-ray Diffraction}
\begin{figure}[t]
\includegraphics[width=8cm]{Fig3}
\caption{Rietveld analysis of the X-ray diffraction pattern of as-prepared $GdBaCo_2O_{5.5}$ with $\delta$=0.5. The bottom panel shows the magnified view of peaks showing splitting in the (0kl) and (h0l) reflections}
\label{Fig3}
\end{figure}
The XRD patterns were collected for the as-synthesized powder as well as after ball milling. All the samples show the pure phase nature of the synthesized double perovskites (see supplementary information section Fig S1). The XRD peaks are considerably broadened due to the smaller crystallite size resulting from ball milling. The crystallite size was calculated from the full width at half maximum of the diffraction peaks using the Scherrer formula shown in equation \ref{scherrer},
\begin{equation}
t= \frac{0.9\lambda}{ \sqrt{\beta^2-\omega^2} \, \cos \theta}
\label{scherrer}
\end{equation}
where 2$\theta$ is the Bragg angle, $\beta$ is the measured full width at half maximum, $\omega$ is the instrumental broadening and $\lambda$ is the wavelength of the incident x-ray (Cu k$_\alpha$ = 1.5418 $\si{\angstrom}$). The crystallite sizes computed from the XRD patterns are 130 and 95 nm before and after ball milling respectively. Thus, the crystallite size (t) was significantly reduced after ball milling. This size reduction was undertaken to improve the sinterability of the powders so that high densities are obtained, ensuring high electrical conductivity of the sample. After ball milling, the samples were prepared for measurement by pelletization.
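A minimal sketch of the crystallite size evaluation via equation \ref{scherrer} (the peak position and width values used here are placeholders, not the measured ones):
\begin{verbatim}
import math

WAVELENGTH = 1.5418e-10   # Cu K-alpha (m)

def scherrer_size(two_theta_deg, beta_deg, omega_deg):
    """Crystallite size t from eq. (1): measured FWHM beta corrected
    for instrumental broadening omega (both in degrees of 2-theta)."""
    theta = math.radians(two_theta_deg / 2)
    width = math.radians(math.sqrt(beta_deg**2 - omega_deg**2))
    return 0.9 * WAVELENGTH / (width * math.cos(theta))

# Placeholder inputs: a peak at 2-theta = 23 deg with 0.12 deg FWHM
# and 0.05 deg instrumental broadening -> ~74 nm.
print(scherrer_size(23.0, 0.12, 0.05) * 1e9, "nm")
\end{verbatim}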
The X-ray diffraction (XRD) pattern of the GDCO powder sample is shown in Fig. \ref{Fig3}. It was analysed by the Rietveld refinement method to evaluate the crystal structure using the Fullprof (version 5.6) software package. As seen from Fig. \ref{Fig3}, the pattern showed excellent goodness of fit (GoF) between observed and calculated data for the orthorhombic (Pmmm) structure corresponding to the $\delta$ value of 0.5, i.e. $GdBaCo_2O_{5.5}$. The reduced $\chi^2$ value was 1.13, as shown in the figure, which signifies a very close fit. The lattice parameters obtained after refinement were a = 3.8793 $\si{\angstrom}$, b = 7.8350 $\si{\angstrom}$ and c = 7.5349 $\si{\angstrom}$. These values agree well with those reported in the literature for the orthorhombic phase having $\delta$ between 0.45 and 0.55\cite{taskin2005PRL}.
As mentioned earlier, the double perovskites can have variable lattice oxygen contents, primarily governed by the processing conditions. This large variation in oxygen site occupancy results in structural phase transitions as a function of oxygen content, i.e. of the precise $0 \leq \delta \leq 1$ value. When the structure has low oxygen content, i.e. $\delta<0.5$, the structure is tetragonal, i.e. two of the three crystallographic axes are degenerate and hence only one single peak is obtained for the degenerate (0kl) and (h0l) reflections for the same value of the $l$ index. On the other hand, when the $\delta$ value increases and is close to 0.5, the structure becomes orthorhombic (space group $Pmmm$), as systematically one oxygen from every alternate $CoO_6$ octahedron is missing. Thus, (0kl) and (h0l) no longer have the same spacing. In fact, the lattice constant along the $\bar{b}$-axis is doubled, as shown in Fig. \ref{Fig1} (however, the second order peak of (0 k/2 l) still appears close to (h0l)). Hence, (0kl) and (h0l) give two different Bragg peaks depending on the difference between the $\bar{a}$ and $\bar{b}$ lattice parameters. A further increase in $\delta$ makes the system again tetragonal and hence (0kl) and (h0l) again coincide for the same value of $l$. Thus, in our samples it was observed that GBCO shows two split peaks (see supplementary information Fig S1). Based on this, one may conclude that GBCO is completely orthorhombic and thus its $\delta$ value is close to 0.5, perhaps just below or slightly above.
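The splitting can be made quantitative with the refined lattice parameters quoted above. For an orthorhombic cell, $1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2$, so reflections that are degenerate in the tetragonal case (since $b \approx 2a$) separate slightly (a minimal sketch):
\begin{verbatim}
import math

a, b, c = 3.8793, 7.8350, 7.5349   # refined values (Angstrom)
lam = 1.5418                        # Cu K-alpha (Angstrom)

def two_theta(h, k, l):
    d = 1 / math.sqrt(h**2/a**2 + k**2/b**2 + l**2/c**2)
    return 2 * math.degrees(math.asin(lam / (2 * d)))

print(two_theta(1, 0, 0))   # ~22.9 deg  (d = a   = 3.879 A)
print(two_theta(0, 2, 0))   # ~22.7 deg  (d = b/2 = 3.918 A)
# A ~0.2 deg split: one peak in the tetragonal phase, two in Pmmm.
\end{verbatim}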
\subsection{X-ray Photoelectron spectroscopy}
\begin{figure*}
\center
\includegraphics[width= 16cm]{Fig4}
\caption{The X-ray Photoelectron spectra of (a) Ba $3d$ and Co $2p$, (b) Gd $4d$ and (c) O $1s$ core levels.}
\label{Fig4}
\end{figure*}
The X-ray Photoelectron Spectra of the as-synthesized sample were studied. The core level spectra of Ba $3d$, Co $2p$, Gd $4d$ and O $1s$ are shown in Fig. \ref{Fig4}. Although there are a number of reports in the literature showing X-ray photoelectron spectra (XPS) of double perovskites, most of them discuss the spectra qualitatively and do not show any deconvolution. In this work, we seek to analyse the XPS spectra of these materials, for the first time to the best of our knowledge, and establish a correlation with the observed electronic transport properties. The analysis of the XPS spectrum of these systems is rather challenging due to several complexities: the variable oxidation state of Co and the overlap of its $2p$ line with Ba $3d$, the variation in the oxygen content of the system ($\delta$-value) and the inherent magnetic moments of certain rare earth elements like Gd. Thus, the XPS analysis of these systems is non-trivial and hence could be of great relevance for researchers interested in these 112-type double perovskite systems. Ideally, the Co $2p$ level has the highest photoionisation cross-section. However, since the Co $2p$ and Ba $3d$ core levels overlap in binding energy, we have also investigated the next prominent core level spectra, i.e. Co $3p$ and Ba $4d$ respectively, and resolved the spectrum.
Fig. \ref{Fig4}(a) shows the XPS spectrum of the overlapping Co $2p$ and Ba $3d$ core levels. Here several peaks have been identified considering the spin-orbit splitting of the $3d$ ($l$=2) photoemission of Ba and the $2p$ ($l$=1) photoemission of Co. The Ba $3d_{5/2}$ and $3d_{3/2}$ lines have been fixed with reference to those observed in similar systems, such as by Maiti et al.\cite{Maiti2009} and others \cite{fetisov2015}. Maiti et al.\cite{Maiti2009} have shown that in the case of $YBa_2Cu_3O_{6+\delta}$, which is also an oxygen deficient perovskite system, the binding energy of the Ba $3d$ line lies between 777 and 778 eV for $\delta$ = 0.5 $\pm$ 0.05. The same has been confirmed by the spectra shown by Pramana et al.\cite{pramana2018}. Thus we fixed the Ba $3d_{5/2}$ line in the spectrum at this binding energy and adjusted the deconvolution for the Co ions accordingly. The Ba $3d_{3/2}$ line was marked considering the spin-orbit splitting energy of the Ba $3d$ line from the literature\cite{takubo2006}, which marks the same position for Ba $3d_{5/2}$ for $\delta$=0.5.
Fixing the oxidation state of the Co ions is challenging here because of the possibility of variable oxidation states and corresponding satellites. However, the presence of the satellites itself can be effectively used for fixing the oxidation state of transition metal ions\cite{biesinger2011}. Thus, the two humps seen at 785 and 789 eV (shown by a single broad peak) are ascribed to the satellite peaks of the $Co^{3+}$ and $Co^{4+}$ oxidation states respectively\cite{biesinger2011}. As shown by Takubo et al.\cite{takubo2006} in the case of Nd and Tb based rare earth cobaltates, the Co $2p_{3/2}$ line is expected at 780 eV. This is the most intense and hence most abundant oxidation state in the structure, i.e. $Co^{3+}$ in the octahedral (O) position for the $\delta=0.5$ composition. The next oxidation state, i.e. $Co^{4+}$, which is also in an octahedral site, is hardly 1 eV higher than its predecessor $Co^{3+}$(O). Besides, their $2p_{1/2}$ counterparts are included at 794 and 795 eV respectively.
In addition to the peaks mentioned above, there is a shoulder seen to the left of the peak, which could be attributed neither to $Co^{3+}$(O) nor to $Co^{4+}$(O). Besides, it could also not be $Co^{2+}$, because it is unlikely to occupy an octahedral site while having a 2+ charge; and even if it did, it would not have a binding energy so much lower than that of $Co^{3+}$(O). Thus, this has been identified as $Co^{3+}$ ions occupying a crystallographic site other than the octahedral one, which is the square pyramidal site (marked by P) in this case. The binding energy of a photoelectron ejected from a particular atom is highly sensitive to its local chemical environment in addition to the nuclear charge. Thus, like the chemical shift due to the variable oxidation state of the emitting atoms, it is also found that the binding energies of photoemitted electrons are slightly different for the same element having the same oxidation state but different crystallographic sites\cite{reiche2000}.
Fig. \ref{Fig4}(b) shows the Gd $4d$ core level of the XPS spectrum. Usually, in the case of a d orbital, which has azimuthal quantum number $l=2$, the two spin-orbit combinations expected are $j_1=l+s$ and $j_2=l-s$, where $s=1/2$ for the electron, which means only two lines with $j_1=5/2$, $j_2=3/2$ are expected. However, $Gd^{3+}$ being a $4f^7$ ion, it shows a strong Coulomb and exchange interaction between the $4d$ and $4f$ electrons, as presented in some of the literature\cite{lademan1996, talik2016}, in spite of the spin-integrated (rather than spin-resolved) collection of photoelectrons. The two intense lines correspond to the $^9D$ and $^7D$ final states, which show a spin-orbit interaction energy of nearly 6 eV. The spin-parallel state $^9D$ shows finer splitting due to spin-spin interaction, while the anti-parallel state $^7D$ could not be resolved. These levels further split into 6, depending on the separate $m_j$ values, due to the strong $4d-4f$ coupling of the Gd ion. The broad peak at 154 eV is ascribed to an energy loss feature.
The oxygen $1s$ spectrum shows three contributions, which have been identified as the lattice oxygen peak (at 528.5 eV), the peak corresponding to lattice oxygen vacancies (530.5 eV) and the surface chemisorbed oxygen from the atmosphere at about 532.5 eV. These assignments are in good agreement with those of similar systems \cite{takubo2006}.
\subsection{Transport Measurements}
\subsubsection{Electrical resistivity}
\begin{figure}
\includegraphics[width= 8cm]{Fig5}
\caption{(a) The variation of the electrical resistivity of the GDCO sample measured in two different systems and compared with the reported data. (b) The Seebeck coefficient measurement on the same sample across the metal insulator transition, marked by the first derivative of the Seebeck coefficient with temperature.}
\label{Fig5}
\end{figure}
The electrical resistivity and Seebeck coefficient of the samples were measured in an in-house built system. The measurement utilized the van der Pauw four-probe method using equation \ref{Resistivity_Eq}. The system was calibrated using a Ni metal foil as standard. A constant current (10 mA to 1 A) is applied and the voltage drop (V) has been measured as a function of temperature (T) for all the pellet samples.
\begin{equation}
\rho = \frac{\pi R }{ln 2}\times d
\label{Resistivity_Eq}
\end{equation}
where $\rho$ is the electrical resistivity, R is the sheet resistance and d is the sample thickness.
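A minimal sketch of evaluating equation \ref{Resistivity_Eq} (the resistance and thickness values below are placeholders, not the measured ones):
\begin{verbatim}
import math

def van_der_pauw_resistivity(R_sheet, thickness_m):
    """Eq. (2): rho = (pi * R / ln 2) * d, for a uniform sample."""
    return math.pi * R_sheet * thickness_m / math.log(2)

# Placeholder: 10 ohm sheet resistance, 1 mm thick pellet.
print(van_der_pauw_resistivity(10.0, 1e-3), "ohm m")
\end{verbatim}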
Fig. \ref{Fig5} shows the resistivity behaviour of the as-prepared sample in vacuum from 100 to 500 K. In the lower temperature range, the resistivity decreases gradually until 330 K, after which it shows a sudden drop by more than one order of magnitude, beyond which the resistivity shows a small positive slope. The same sample was measured in two different systems (marked as IISER and INST) to confirm the measurement data. Also, the data from Taskin et al.\cite{taskin2006} are shown for comparison and for matching the $\delta$ value reference. From Figure \ref{Fig5}(a) it is clear that the data obtained match very well with those of Taskin et al.\cite{taskin2006} for $\delta$ = 0.5. The small discrepancy in the transition temperature could be due to the fact that the data of Taskin et al.\cite{taskin2006} were measured on single crystals, while those of this study are on a polycrystalline pellet sample.
\begin{figure}
\includegraphics[width=8cm]{Fig6}
\caption{The valence band region of the X-ray photoelectron spectrum showing the crystal field energy levels of the Co ion at room temperature. The inset cartoon on the top right shows the change in electron occupancy at the Fermi level ($E_F$): when the system is heated beyond 350 K, $CoO_6$ undergoes a lattice distortion changing the density of states at $E_F$ from near zero (insulating) to non-zero (metallic) behaviour.}
\label{Fig6}
\end{figure}
Tarancon et al.\cite{tarancon2008} have performed high temperature XRD on GDCO and shown that at ~350 K a lattice distortion occurs, i.e. the lattice parameter $\bar{a}$ shrinks, so that the corresponding spacing shows a shift towards higher Bragg angle, while the lattice parameters $\bar{b}$ and $\bar{c}$ show a sudden elongation, so that the corresponding lattice spacings show a shift towards lower Bragg angle. This lattice distortion causes a change in the crystal field splitting of the $CoO_6$ octahedra, thereby changing the position of the $t_{2g}$ and $e_g$ orbitals relative to each other and bringing a non-zero density of states to the Fermi level. The low binding energy region of the XPS spectrum, which essentially depicts the valence band, is shown in Fig. \ref{Fig6}, recorded at room temperature. This shows a near flat region at $E_F$, i.e. zero binding energy. Because this is very close to the metal insulator transition (MIT) at about 350 K, there may be a small non-zero electron occupancy even at room temperature. According to the literature, the transition only occurs for compositions having a $\delta$ value of 0.5 and its vicinity, i.e. 0.45 $< \delta <$ 0.65. If the $\delta$ value is above or below this range, the resistivity shows a normal insulator-like exponential decrease.
The signature of the MIT is quite evident from the Seebeck coefficient variation of the sample, as shown in Fig. \ref{Fig5}(b). The Seebeck coefficient is fairly large in the insulating state and decreases gradually as the resistivity decreases. Beyond the MIT temperature, S shows a marked change of slope and has a very small value ($\mathtt{\sim 1 \mu V/K}$), marking a metallic behavior.
The first derivative of S with respect to temperature shows a clear minimum, which marks the MIT around 345 K. The thermoemf behaviour of this system has been studied thoroughly, and hence the oxygen content of the lattice can be deduced by corroborating the result with the literature. According to the detailed study conducted by Taskin et al.\cite{taskin2005, taskin2006}, the maximum MIT temperature is obtained for $\delta=0.5$, and the temperature decreases as the value of $\delta$ deviates from this value on either side. Moreover, the behavior of the Seebeck coefficient is the same for single crystals and polycrystalline bulk samples. From visual comparison it can be deduced that the Seebeck values obtained here for the as-prepared sample match those reported for $\delta$ = 0.497, or rather lie between those for 0.495 and 0.501.
\\
According to the literature, $Co^{3+}$ in octahedral coordination has a low spin (LS, $t_{2g}^6e_g^0$, S=0) state, whereas in square pyramidal coordination the same $Co^{3+}$ has an intermediate spin (IS, $t_{2g}^5e_g^1$, S=1) state. On the other hand, when the system is doped with electrons, the resulting $Co^{2+}$, which occupies square pyramidal coordination, has high spin (HS, $t_{2g}^5e_g^2$, S=3/2), and when doped with holes, the resulting $Co^{4+}$, which goes into octahedral coordination, has low spin (LS, $t_{2g}^5e_g^0$, S=1/2). Thus, although the change in $\delta$ upon incorporation or removal of oxygen is symmetric about $\delta$=0.5, the transport properties are not symmetric \cite{taskin2005, taskin2005PRL, taskin2006}. The system is more conducting for hole doping (i.e. $\delta >0.5$) and rather insulating for electron doping ($\delta<0.5$). This is due to the fact that the transport of the HS electrons of $Co^{2+}$ is suppressed due to spin blockade\cite{taskin2005PRL} in the background of $Co^{3+}$, which has LS. This is the reason the Seebeck coefficient of the system shows asymmetry in spite of the divergence at the $\delta$ value of 0.5. The Seebeck coefficient for the $\delta =0.5$ composition is rather negative at low temperature (<250 K), and becomes large and positive even for a small change in $\delta$, i.e. 0.501. The $\delta$ values in Fig. \ref{Fig7}(b) correlate well with these arguments.
\begin{figure}[b]
\includegraphics[width= 8cm]{Fig7}
\caption{(a) The Seebeck coefficient of the GDCO sample measured while changing the ambience between $N_2$ and $O_2$ atmospheres. (b) The Seebeck coefficient of the oxygen-annealed sample ($\delta > 0.5$), with the nominal MIT temperature marked by the vertical dashed line.}
\label{Fig7}
\end{figure}
Thus, when the sample ambience changes from oxygen rich to oxygen lean, the Seebeck value of the sample shows a large change compared to its resistivity, which is nearly the same across this range of $\delta$ values. Here we measured the change in the Seebeck coefficient by changing the atmosphere from $N_2$ to $O_2$ and back, observing the change in the potential difference for a fixed temperature gradient across the hot and cold ends of the sample. The system was calibrated using a Ni metal sample in the given temperature range. See Figures S5 and S6 in the supplementary information section for the Nickel calibration data and the data analysis of the Seebeck measurements.
As seen in Fig. \ref{Fig7}(a), the Seebeck coefficient in oxygen atmosphere is lower than that in nitrogen atmosphere. This is due to the diffusion of oxygen into the lattice, which dopes the system with holes and makes it more conducting by oxidising some of the $Co^{2+}$ to $Co^{3+}$, i.e. from square pyramidal to octahedral coordination. However, this change is conspicuous only for high Seebeck values. There may be a change in the Seebeck coefficient when it is low; however, the resolution of the measurement may be a limiting factor. Nevertheless, the ambient cycling at room temperature changes the oxygen stoichiometry.
In order to estimate the change in the Seebeck coefficient upon ambient cycling for a sample with a high initial $\delta$ value, the sample was ground in a mortar and pestle, annealed in ambient oxygen at 1000 $^oC$ for 2 hours and naturally cooled to room temperature. The sample so obtained showed a large Seebeck value, as shown in Figure \ref{Fig7}(b), which confirmed the value of $\delta$ to be greater than 0.5 (upon comparison with the data in the literature \cite{taskin2005}). The nominal MIT temperature is shown by the vertical dashed line in Figure \ref{Fig7}(b).
\begin{figure*}[]
\includegraphics[width= 15cm]{Fig8}
\caption{ The response of the Seebeck coefficient of GDCO ($\delta <$ 0.5) (a) at 333 K and (b) at 355 K. (c) The response of the Seebeck coefficient of GDCO ($\delta >$ 0.5) at 300 K for different gas ambiences, 100\% $N_2$ to $O_2$ and back. (d) For 20\% $O_2$ and 100\% $N_2$ and back at 300 K.}
\label{Fig8}
\end{figure*}
The Seebeck measurement data for the sample, which showed consistent results, are shown in Figure \ref{Fig8}. As mentioned earlier, after the first ambient cycling at room temperature the as-synthesized sample ($\delta =0.5$) showed a small change in oxygen stoichiometry ($\delta <0.5$), as manifested in the Seebeck coefficient. However, the resulting oxygen stoichiometry was fairly consistent until the sample was re-annealed at high temperature to give a larger $\delta$ ($>0.5$) value.
Thus, Figures \ref{Fig8}(a) and (b) show the response in the Seebeck coefficient for the $\delta <0.5$ sample below and above the MIT, i.e. 340 K. The response obtained at 333 K (Figure \ref{Fig8}(a)) shows a large and noticeable change in the Seebeck coefficient, whereas that obtained at 353 K (Figure \ref{Fig8}(b)) does not. The same trend is observed for the sample showing a large Seebeck coefficient (and $\delta >0.5$) at room temperature, 300 K, shown in Figure \ref{Fig8}(c). In either case, in the insulating state a large change in the Seebeck coefficient (10-15 $\mu$V/K) has been observed, while in the metallic state there is no significant change.
We also measured the response of the Seebeck coefficient for fractional oxygen atmospheres, such as ambient 20\% $O_2$ with balance 80\% $N_2$, against 100\% $N_2$. The result is shown in Figure \ref{Fig9}(a), wherein a clear sharp rise and fall is seen upon the change in atmosphere. Before measuring this response, the chamber was first evacuated and then filled with 100\% $N_2$ gas, and subsequently 20\% $O_2$ was introduced under flowing conditions. Similarly, the change was recorded for 10\% $O_2$. Notably, the change in the Seebeck coefficient for a given concentration of $O_2$ was practically the same irrespective of the initial Seebeck value. This means the change in the Seebeck coefficient was found to be independent of the initial $\delta$ value, except where S is really low. This change in S, i.e. $\Delta S$, was plotted against the given $O_2$ concentration in percentage. Surprisingly, the data were found to obey the power law of semiconductor gas sensors \cite{kamble2016SNB}, as shown in Figure \ref{Fig9}. This implies that the response of the material scales as a power of the concentration of the gas. Usually the response is measured as a change in resistance, unlike the change in the Seebeck coefficient in this case. The governing equation of the law is stated as
\begin{figure}[b]
\includegraphics[width= 8cm]{Fig9}
\caption{The change in Seebeck coefficient ($\Delta S$) for different $O_2$ concentrations in \% with power law fit.}
\label{Fig9}
\end{figure}
\begin{equation}
\Delta S = AC^\beta
\label{power law}
\end{equation}
where A is a scaling prefactor and $\beta$ is the exponent, which is governed by the gas-solid interaction. In this case the interaction is the diffusion of oxygen through the $ab$ plane, which leads to a change in the Seebeck coefficient of the system. This change could be due to the change in the carrier concentration (n). As mentioned by Taskin et al.\cite{taskin2005PRL}, even a small change in the $\delta$ value of 0.001, i.e. from 0.5 to 0.501, causes a large change in the carrier concentration of nearly $10^{19} cm^{-3}$. The Seebeck coefficient depends mainly on three quantities: the carrier concentration (n), the temperature (T) and the effective mass (m*), i.e. the slope of the density of states at $E_F$, as shown in the equation below.
\begin{equation}
S = \frac{8\pi^2 k_B^2}{3eh^2} m^{*} T \left[\frac{\pi}{3n}\right]^{2/3}
\end{equation}
Thus, if the temperature of the sample is held constant, only two quantities can change, i.e. n and m*. However, it is unlikely that m* would change drastically enough to give such a large change in the Seebeck coefficient. Moreover, when oxygen diffuses into the lattice and occupies the vacant sites along the ordered oxygen vacancies in the ab plane, it dopes the system with excess holes, making the system more conducting; thus it can produce a significant change in n. This change is very much evident in the S variation at lower temperature\cite{taskin2005}. Moreover, as the $\delta$ value decreases, the Seebeck coefficient diverges around $\delta = 0.5$ and shows first a low positive value ($\delta > 0.7$), then a large positive value ($0.7 > \delta > 0.5$), a large negative value ($0.5 > \delta > 0.45$) and a moderately large negative value ($\delta < 0.45$).
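To make the power-law analysis of equation \ref{power law} concrete, the fit can be performed as sketched below (the $(C, \Delta S)$ pairs here are illustrative placeholders, not the measured data):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def power_law(C, A, beta):
    # Eq. (5): Delta S = A * C**beta
    return A * C**beta

# Placeholder response data: O2 concentration (%) vs Delta S (uV/K).
C = np.array([5.0, 10.0, 20.0, 50.0])
dS = np.array([6.0, 8.2, 11.0, 17.5])

(A, beta), _ = curve_fit(power_law, C, dS, p0=(1.0, 0.5))
print(A, beta)  # exponent beta characterises the gas-solid interaction
\end{verbatim}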
After the virgin test of the change in ambience, it was found that the Seebeck coefficient of the sample showed a different trend, which did not match that of $\delta$ = 0.5, as seen in Fig. \ref{Fig7}(b). However, this trend was found to be very reproducible ($\delta < 0.5$) at room temperature.
The average oxygen content in the lattice ($\bar{\delta}$) has been found to show an exponential dependence on time, and the time constant ($\tau$) depends on temperature\cite{taskin2005}.
\begin{equation}
\bar{\delta}= \delta_\infty - [\delta_\infty - \delta_0] e^{(-t/\tau)}
\label{time_constant}
\end{equation}
where $\delta_\infty$ is the oxygen content at equilibrium (t = $\infty$) and $\delta_0$ is the initial oxygen content (t = 0). This time constant is governed by the diffusion coefficient of oxygen into the lattice. As per the detailed kinetics studied by Taskin et al.\cite{taskin2005}, as the temperature increases the diffusivity of oxygen increases, and hence at higher temperature the oxygen content ($\delta$) is lower at a given oxygen partial pressure ($P_{O_2}$). However, if after annealing at higher temperature for a long time (several hours) the sample is naturally cooled to room temperature at constant $P_{O_2}$, the lattice oxygen content rises as it cools and reaches the thermal equilibrium value of room temperature. Thus, the sample synthesized at atmospheric $P_{O_2} \mathtt{\sim 10^{-1}}$ bar shows a high $\delta$ value of near 0.5. The data obtained were fit with exponential functions according to equation \ref{time_constant}, and the values of the time constant $\tau$ were estimated for the response as well as the recovery. A very small value of about 10 sec is obtained for the time constant, which is remarkably fast for a bulk diffusion phenomenon.
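The extraction of $\tau$ from equation \ref{time_constant} amounts to fitting an exponential step response; a minimal sketch (with synthetic data standing in for the measured transient):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, S_inf, S_0, tau):
    """Same form as eq. (6), written for the measured signal."""
    return S_inf - (S_inf - S_0) * np.exp(-t / tau)

t = np.linspace(0, 60, 61)                  # seconds
signal = step_response(t, 15.0, 5.0, 10.0)  # synthetic, tau = 10 s
signal += np.random.normal(0, 0.1, t.size)  # measurement noise

(S_inf, S_0, tau), _ = curve_fit(step_response, t, signal,
                                 p0=(14.0, 6.0, 5.0))
print(tau)  # ~10 s, the response time constant
\end{verbatim}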
The kinetics of the response and recovery are governed by the diffusion coefficient of oxygen at a given temperature, which decides the time constant as shown in equation \ref{diffusivity}\cite{Sreya2019},
\begin{equation}
D=\frac{2 L^2}{\pi^2 \tau}
\label{diffusivity}
\end{equation}
where $L^2$ is the area of the surface through which the diffusion takes place and $\tau$ is the time constant of diffusion for the rise and recovery.
According to Taskin et al. \cite{taskin2005}, the diffusivity (D) of oxygen in GDCO is 3 $\times 10^{-8} cm^2/sec$ at $250^oC$. Moreover, this diffusion being a thermally activated process, the diffusivity would be much lower at room temperature. However, using this value and the typical surface area of our sample, i.e. 1 cm radius, one gets a time constant of nearly $10^{7} sec$, which is much larger than that found in our measurement. One of the reasons could be the smaller crystallite size (100 nm) in our case as compared to that of Taskin et al. (1 $\mu m$). Moreover, in our case, since the time constant is measured using transport measurements and the thermocouples rest on the surface, we get a near instantaneous change in the surface voltage, which saturates quickly.
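Explicitly, with equation \ref{diffusivity} and taking the diffusion area to be of order $L^2 \simeq 1~\mathrm{cm^2}$ (a rough representative estimate for a pellet of 1 cm radius),
\begin{equation}
\tau = \frac{2L^2}{\pi^2 D} \approx \frac{2\times 1~\mathrm{cm^2}}{\pi^2 \times 3\times10^{-8}~\mathrm{cm^2/sec}} \approx 7\times10^{6}~\mathrm{sec} \sim 10^{7}~\mathrm{sec},
\end{equation}
several orders of magnitude longer than the $\sim$10 sec observed here, which supports the surface-dominated interpretation given above.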
The idea of a thermoelectric gas sensor was first proposed by Rettig and Moos \cite{rettig2007}, who proposed a viable design and demonstrated the oxygen sensing abilities of $SrTi_{0.6}Fe_{0.4}O_3$ using thermoelectric principles at 750 $^oC$. Thus, one can prepare thin films of GDCO on a microheater platform which is inherently designed to give a small (calibrated) temperature difference at a given heater current, and thus measure the open circuit voltage. The higher the temperature gradient, the higher the voltage produced, and consequently the higher the resolution obtained between increasing oxygen concentrations. A similar thermoelectric-based sensor has been demonstrated by Masoumi et al. \cite{masoumi2019} using ZnO for Volatile Organic Compound (VOC) sensing. However, it also had a much higher operating temperature (400 $^oC$) and very poor selectivity among VOCs. Nonetheless, a similar power law dependence was observed. The primary mechanism identified for the change in the Seebeck coefficient upon exposure to VOCs was a change in the inter-grain barrier height due to carrier injection or trapping because of the gas-surface interaction\cite{hossein2018}, unlike the bulk oxygen diffusion in our case.
\section{Conclusion}
Here we demonstrate that a polycrystalline bulk ceramic sample of $GdBaCo_2O_{5.5}$ (GDCO) prepared by the solid state route showed an optimum oxygen content ($\delta$ = 0.5), resulting in the demonstration of a metal insulator transition close to room temperature and oxygen-ambient-dependent transport properties. We found that the Ba $3d$ and Co $2p$ core level regions overlap in XPS, making the analysis challenging. However, with reference to the literature on similar systems and to the transport properties, we have successfully analysed the XPS spectra to reveal a significant disproportionation of Co ions into 3+ and 4+, along with the $Ba^{2+}$ $3d$ photoemission. Moreover, interestingly, there exists a significant contribution of 3+ in non-octahedral coordination, which is characterised by a rather lower binding energy than the usual $Co^{n+}$ in an octahedral site. This binding energy is lower due to the absence of one high-electron-affinity oxide ion in the square pyramidal geometry compared to the octahedral one. The transport properties reveal that the as-prepared sample has $\delta$ = 0.5 and shows an MIT at nearly 340 K, which agrees well with the literature. Further, the Seebeck coefficient shows a step change at the MIT. It also exhibits a large change upon a change of the oxygen content of the ambience. The small response and recovery time constants (nearly 10 sec) obtained at room temperature depict the surface sensitive nature of the measurement method. Thus, this method of measuring the open circuit voltage for a small temperature difference has an edge over the existing oxygen sensors involving very high temperatures of operation ($\geq 500 ^oC$) and a reference oxygen level for comparison.
\begin{acknowledgements}
The authors are thankful for the funding received from the Science and Engineering Research Board (SERB), Govt. of India (Grant No EEQ/2018/000769). The authors are thankful to Dr Chandan Bera of INST Mohali for the electrical conductivity measurements at low temperature. The authors are also thankful to Prof Arun M Umarji for his guidance on the system of double perovskites.
\end{acknowledgements}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\nocite{*}
\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\underline{1}}{\underline{1}}
\newcommand{\underline{2}}{\underline{2}}
\newcommand{\underline{a}}{\underline{a}}
\newcommand{\underline{b}}{\underline{b}}
\newcommand{\underline{c}}{\underline{c}}
\newcommand{\underline{d}}{\underline{d}}
\newcommand{\underline{i}}{\underline{i}}
\newcommand{\underline{j}}{\underline{j}}
\newcommand{\underline{k}}{\underline{k}}
\newcommand{\underline{l}}{\underline{l}}
\newcommand{\underline{I}}{\underline{I}}
\newcommand{\underline{J}}{\underline{J}}
\newcommand{\underline{K}}{\underline{K}}
\newcommand{\underline{m}}{\underline{m}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\def\dt#1{{\buildrel {\hbox{\LARGE .}} \over {#1}}}
\def\double #1{#1{\hbox{\kern-2pt $#1$}}}
\newcommand{{\hat{m}}}{{\hat{m}}}
\newcommand{{\hat{n}}}{{\hat{n}}}
\newcommand{{\hat{p}}}{{\hat{p}}}
\newcommand{{\hat{q}}}{{\hat{q}}}
\newcommand{{\hat{a}}}{{\hat{a}}}
\newcommand{{\hat{b}}}{{\hat{b}}}
\newcommand{{\hat{c}}}{{\hat{c}}}
\newcommand{{\hat{d}}}{{\hat{d}}}
\newcommand{{\hat{e}}}{{\hat{e}}}
\newcommand{{\hat{M}}}{{\hat{M}}}
\newcommand{{\hat{N}}}{{\hat{N}}}
\newcommand{{\hat{A}}}{{\hat{A}}}
\newcommand{{\hat{B}}}{{\hat{B}}}
\newcommand{{\hat{C}}}{{\hat{C}}}
\newcommand{{\hat{i}}}{{\hat{i}}}
\newcommand{{\hat{j}}}{{\hat{j}}}
\newcommand{{\hat{k}}}{{\hat{k}}}
\newcommand{{\hat{l}}}{{\hat{l}}}
\newcommand{{\hat{\alpha}}}{{\hat{\alpha}}}
\newcommand{{\hat{\beta}}}{{\hat{\beta}}}
\newcommand{{\hat{\gamma}}}{{\hat{\gamma}}}
\newcommand{{\hat{\delta}}}{{\hat{\delta}}}
\newcommand{{\hat{\rho}}}{{\hat{\rho}}}
\newcommand{{\hat{\tau}}}{{\hat{\tau}}}
\newcommand{{\dot\gamma}}{{\dot\gamma}}
\newcommand{{\dot\delta}}{{\dot\delta}}
\newcommand{{\tilde{\sigma}}}{{\tilde{\sigma}}}
\newcommand{{\tilde{\omega}}}{{\tilde{\omega}}}
\renewcommand{\Bar}{\overline}
\newcommand{{\underline{\alpha}}}{{\underline{\alpha}}}
\newcommand{{\underline{\beta}}}{{\underline{\beta}}}
\newcommand{{\underline{\gamma}}}{{\underline{\gamma}}}
\newcommand{{\underline{\delta}}}{{\underline{\delta}}}
\newcommand{{\underline{\hal}}}{{\underline{{\hat{\alpha}}}}}
\newcommand{{\underline{\hbe}}}{{\underline{{\hat{\beta}}}}}
\newcommand{{\underline{\hga}}}{{\underline{{\hat{\gamma}}}}}
\newcommand{{\underline{\hde}}}{{\underline{{\hat{\delta}}}}}
\newcommand{{\underline{\hrh}}}{{\underline{{\hat{\rho}}}}}
\newcommand{{\nabla}}{{\nabla}}
\newcommand{{\bar{\sigma}}}{{\bar{\sigma}}}
\newcommand{{\theta}}{{\theta}}
\newcommand{{\bar{\theta}}}{{\bar{\theta}}}
\newcommand{\mathsf{Sp}}{\mathsf{Sp}}
\newcommand{\mathsf{SU}}{\mathsf{SU}}
\newcommand{\mathsf{SL}}{\mathsf{SL}}
\newcommand{\mathsf{GL}}{\mathsf{GL}}
\newcommand{\mathsf{SO}}{\mathsf{SO}}
\newcommand{\mathsf{U}}{\mathsf{U}}
\newcommand{\mathsf{S}}{\mathsf{S}}
\newcommand{\mathsf{PSU}}{\mathsf{PSU}}
\newcommand{\mathsf{PSL}}{\mathsf{PSL}}
\newcommand{\mathsf{OSp}}{\mathsf{OSp}}
\newcommand{\mathsf{Spin}}{\mathsf{Spin}}
\newcommand{\mathsf{Mat}}{\mathsf{Mat}}
\newcommand{\begin{subequations}}{\begin{subequations}}
\newcommand{\end{subequations}}{\end{subequations}}
\newcommand{{\oplus}}{{\oplus}}
\newcommand{{\ominus}}{{\ominus}}
\newcommand{{\bar{\theta}}}{{\bar{\theta}}}
\newcommand{{\overline{1}}}{{\overline{1}}}
\newcommand{{\overline{2}}}{{\overline{2}}}
\newcommand{{\bar{\Delta}}}{{\bar{\Delta}}}
\newcommand{{\bar{A}}}{{\bar{A}}}
\newcommand{{\bar{B}}}{{\bar{B}}}
\newcommand{{\bar{\Phi}}}{{\bar{\Phi}}}
\newcommand{{\bar{\chi}}}{{\bar{\chi}}}
\newcommand{{\mathbb D}}{{\mathbb D}}
\newcommand{{\mathbb \DB}}{{\mathbb \bar{D}}}
\newcommand{{\bar i}}{{\bar i}}
\newcommand{{\bar j}}{{\bar j}}
\newcommand{{\bar k}}{{\bar k}}
\newcommand{{\bar l}}{{\bar l}}
\newcommand{{\bar p}}{{\bar p}}
\newcommand{{\bar q}}{{\bar q}}
\newcommand{{\bar 1}}{{\bar 1}}
\newcommand{{\bar 2}}{{\bar 2}}
\newcommand{{\bar 0}}{{\bar 0}}
\newcommand{{\bar n}}{{\bar n}}
\newcommand{{\bar m}}{{\bar m}}
\newcommand{{\bar 4}}{{\bar 4}}
\newcommand{{\rm L}}{{\rm L}}
\newcommand{{\rm R}}{{\rm R}}
\newcommand{{\nabla}}{{\nabla}}
\newcommand{{\bar{\nabla}}}{{\bar{\nabla}}}
\newcommand{``}{``}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\setlength{\parindent}{0pt}
\begin{document}
\begin{titlepage}
\begin{flushright}
\par\end{flushright}
\vskip 1.5cm
\begin{center}
\textbf{\huge \bf Self-Dual Forms in Supergeometry I:}
\\
\vskip .2cm
\textbf{\Large \bf The Chiral Boson}
\vskip 1.5cm
\large {\bf C.~A.~Cremonini}$^{~a,b,}$\footnote{[email protected]},
\large {\bf P.~A.~Grassi}$^{~c,d,e,}$\footnote{[email protected]},
{\small
\vskip .5cm
\medskip
\centerline{$^{(a)}$ \it Dipartimento di Scienze e Alta Tecnologia (DiSAT),}
\centerline{\it Universit\`a degli Studi dell'Insubria, via Valleggio 11, 22100 Como, Italy}
\medskip
\centerline{$^{(b)}$ \it INFN, Sezione di Milano, via G.~Celoria 16, 20133 Milano, Italy}
\medskip
\centerline{$^{(c)}$
\it Dipartimento di Scienze e Innovazione Tecnologica (DiSIT),}
\centerline{\it Universit\`a del Piemonte Orientale, viale T.~Michel, 11, 15121 Alessandria, Italy}
\medskip
\centerline{$^{(d)}$
\it INFN, Sezione di Torino, via P.~Giuria 1, 10125 Torino, Italy}
\medskip
\centerline{$^{(e)}$
\it Arnold-Regge Center, via P.~Giuria 1, 10125 Torino, Italy}
}
\end{center}
\vskip 0.1cm
\begin{abstract}
Recent results of A. Sen on quantum field theory models with self-dual field strengths use
string field theory as a starting point. In the present work, we show that combining string field
theory and supergeometry we can provide a constructive method for all these models, for
any superspace representation and for any given background. The analysis is based on
the new concept of pseudoform, emerging in supergeometry, which opens a new page
in quantum field theory and, in particular, in supergravity. The present work deals with an explicit example, the case of the chiral boson multiplet in $d=2$.
\end{abstract}
\vfill{}
\vspace{1.5cm}
\end{titlepage}
\newpage\setcounter{footnote}{0}
\tableofcontents
\section*{Introduction}\label{sec:intro}
\addcontentsline{toc}{section}{\nameref{sec:intro}}
Recently the work of A. Sen \cite{Sen:2015nph,Sen:2019qit} has brought attention back to the
long-standing problem of constructing a consistent field theory with self-dual field strengths. There are several examples of
models such as $N=2$, $d=10$ Type $IIB$ supergravity (where the 5-form field strength for the Ramond-Ramond (RR) fields has to be self-dual), $d=6$ tensor multiplet,
self-dual Yang-Mills and the chiral boson. Several proposals have been considered, see for example the non-exhaustive list \cite{Floreanini:1987as,Henneaux:1988gg,Bastianelli:1989cu,Bastianelli:1989hi,McClain:1990sx,Wotzasek:1990zr,Devecchi:1996cp,Berkovits:1996nq,Berkovits:1996em,Pasti:1996vs,Pasti:1997gx,Schwarz:1997mc,DallAgata:1997gnw,DallAgata:1998ahf} and the more recent \cite{Lambert:2019diy,Lambert:2019khh,Buratti:2019guq,Mkrtchyan:2019opf,Andriolo:2020ykk,Bandos:2020hgy,1837177} (see in particular \cite{Townsend:2019koy}, which is mainly focused on the two-dimensional case) where the self-duality is a crucial requirement.
Moreover, there
are several {\it ad hoc} solutions which might solve the issue for special cases of interest (see e.g. \cite{AlvarezGaume:1983ig,Berkovits:1994wr,Berkovits:1996bf}); for example, one could violate manifest Lorentz invariance, or use an infinite number of auxiliary fields, or adopt non-polynomial Lagrangians.
Here we consider a geometrical approach in order to deal with the supersymmetric version of the problem, which can be easily generalized to any background of supergravity.
A full-fledged action whose variational principle provides the correct covariant equations (in finite number) was missing until the proposal of A. Sen.
The main problem is that, by a simple analysis of the equations of motion, one immediately sees that for each self-dual propagating degree of freedom,
its anti-self-dual companion propagates as well, jeopardizing the counting of degrees of freedom. Sen's approach \cite{Sen:2015nph,Sen:2019qit} is different:
it is based on the structure of the superstring field theory action,
where the closed string sector (with Neveu-Schwarz-Neveu-Schwarz (NSNS) and RR fields) is described by two string fields, $\Phi$ and $\Psi$. The equations
of motion fix them in terms of each other such that the self-dual part of the RR 5-form couples to the rest of the theory
while the anti-self-dual part decouples. The form of the action given in \cite{Sen:2015uaa} is
\begin{eqnarray}
\label{streA}
S_{_{SFT}} = \langle \Phi, Q \mathbb{Y} \Phi \rangle + \langle \Phi , Q \Psi \rangle + f(\Psi) \ ,
\end{eqnarray}
where $Q$ is the BRST operator, $\Phi$ and $\Psi$ are the string fields, $\mathbb{Y}$ is the Picture Changing Operator needed to provide a consistent kinetic term for the field $\Phi$, and $f(\Psi)$ is a potential term for the string field $\Psi$. The field $\Psi$ is endowed with self-duality properties and therefore the coupling between $\Phi$ and $\Psi$
determines which part of the field $\Phi$ couples to the rest of the theory. In particular, as shown in \cite{Sen:2015nph,Sen:2019qit}, if
$\Psi$ represents a self-dual form, only the self-dual part of $\Phi$ couples to the rest of the theory and the anti-self-dual part of it decouples from the rest, or vice versa.
We will tackle the problem of self-dual forms within the geometrical framework of \emph{rheonomy} \cite{cube}; in particular, this framework is efficient for (rigid) supersymmetric and supergravity models. The key point of this language is that it allows one to construct Lagrangians in terms of differential forms, hence keeping the symmetries of the theory manifest. Rheonomy allows one to study a large variety of models, such as extended supergravities ($N=2$ type $IIB$ supergravity or $d=6$ supergravity), where one eventually has to face the problem of self-dual field strengths. Some of these models are discussed in the books \cite{cube}, others in articles \cite{Castellani:1991jf,Castellani:1993ye}. However, despite the power of the framework, even the simple case of the chiral boson and its superpartner was not properly addressed.\footnote{To our knowledge, the case of the chiral boson has not been analysed in the rheonomic framework; nevertheless, in several works (\cite{cube,Castellani:1991jf,Castellani:1993ye}) the authors propose some solutions to deal with self-dual field strengths. Those solutions are not satisfactory since the rheonomic variational principle does not include self-duality conditions.}
This is due to the fact that the rheonomic Lagrangian has to be supplemented with a \emph{Picture Changing Operator} (PCO) in order to define a consistent action principle. The PCO usually considered in rheonomy is the component one, projecting the Lagrangian to its component version or, analogously, setting the odd variables $\theta$'s and their even differentials $d \theta$'s to zero. Since the geometric PCO is a de Rham cohomology representative, this choice is by no means unique. The problem in this context is that once the Lagrangian is projected via \emph{any} PCO, the resulting equations of motion do not correspond to a single chiral boson, as one would expect. We will show that this problem is related to the fact that the Maurer-Cartan (MC) equations (once the conventional constraint has been implemented) imply the equations of motion. Hence, the verification of the closure of the Lagrangian is ``trivialised'' in the sense that the MC equations put the Lagrangian on-shell. This is a long-standing problem in rheonomy and it is tied to the existence of auxiliary fields.
Before the implementation of the SFT-inspired method for describing self-dual forms, we will recall the construction of the Hodge dual on supermanifolds discussed in \cite{CCGir, Castellani:2015ata}. This is built on the complete complex of forms, it is an involutive operation and it acts as
\begin{eqnarray}
\label{HoA}
\star: \Omega^{(p|q)} \left( \mathcal{SM}^{(2n|2m)} \right) \rightarrow \Omega^{(2n-p|2m-q)} \left( \mathcal{SM}^{(2n|2m)} \right) \ ,
\end{eqnarray}
and therefore (anti) self-dual forms can be defined only if $p=n$ and $q=m$. Self-dual forms on supermanifolds are then immediately found to live in the \emph{pseudoform complex} $\Omega^{(n|m)}$. They can be lifted to integral forms by means of the PCO $\mathbb{Y}$, or be de-lifted to superforms by means of the PCO $Z$.
With all these ingredients, we are able to provide the action on a supermanifold $\mathcal{SM}^{(2n|2m)}$. Given a $(n-1|0)$-form $\Phi$, we have
\begin{eqnarray}\label{streAA}
S = \int_{_{\mathcal{SM}^{(2n|2m)}}} \hspace{-1cm} \left( \mathcal{L}^{(2n|0)} (\Phi) \wedge \mathbb{Y}^{(0|2m)} + d\Phi^{(n-1|0)} \wedge Q^{(n|2m)} \right) \ ,
\end{eqnarray}
where $Q^{(n|2m)}$ is a ``self-dual form'' (actually, it is an integral form obtained from a self-dual form, as will be explained in the following). The first term reproduces the kinetic term for the field $\Phi^{(n-1|0)}$, while the second term reproduces Sen's coupling between $\Phi^{(n-1|0)}$ and the rest of the theory. The form of the action resembles, at the level of quantum field theory on a supermanifold, the closed string field theory action (\ref{streA}). In the particular case of the chiral boson we will set $n=1, m=1$.
In the first section, we will review the rheonomic Lagrangian for the chiral boson in $N=1$, $d=2$, and the related problems. In the second section, we will give a self-contained description of self-dual (pseudo)forms on a supermanifold $\mathcal{SM}^{(2|2)}$. We will omit a description of form complexes on supermanifolds; for a complete discussion we refer the reader to \cite{Witten:2012bg,Catenacci:2018xsv,Catenacci:2019ksa,Cremonini:2019aao,Cremonini:2019xco,Cacciatori:2020hcm}. In the third section, we will present the SFT-inspired action and the counting of degrees of freedom. We also speculate on how to rewrite the action in the pseudoform language; this will be the object of future investigations.
\section{$N=1 , d=2$ Flat Supermanifolds}
We will begin by reviewing both the $N=1$, $d=2$ non-chiral model and the chiral one. We will briefly show how to construct the Lagrangian from the rheonomic prescriptions and the resulting (apparent, as we will discuss in full detail) equations of motion.
\subsection{The Non-chiral Model}
Let us briefly explain the basic geometry. We consider two bosonic coordinates $(z, \bar z)$ (defined as complex coordinates) and two Grassmann odd coordinates $(\theta, \bar\theta)$, corresponding to the superspace $N = 1$, $d = 2$. We also introduce the differentials $(dz, d\bar z, d\theta, d\bar\theta)$ and the flat supervielbeine
\begin{eqnarray}
\label{oneA}
V = dz + \theta d\theta\,, ~~~~~~
\bar V = d\bar z + \bar \theta d\bar\theta\,, ~~~~~~ \psi = d\theta\,,~~~~~
\bar\psi = d \bar \theta\,,
\end{eqnarray}
which are supersymmetric invariant under $\delta \theta = \epsilon, \delta \bar{\theta} = \bar{\epsilon}$ and $\delta z = \epsilon \theta, \delta \bar{z} = \bar{\epsilon} \bar{\theta}$:
\begin{eqnarray*}
\delta V &=& d \left( \delta z \right) + \delta \theta d \theta + \theta d \left( \delta \theta \right) = - \epsilon d \theta + \epsilon d \theta + \theta d \epsilon = 0 \ , \\
\delta \bar{V} &=& d \left( \delta \bar{z} \right) + \delta \bar{\theta} d \bar{\theta} + \bar{\theta} d \left( \delta \bar{\theta} \right) = - \bar{\epsilon} d \bar{\theta} + \bar{\epsilon} d \bar{\theta} + \bar{\theta} d \bar{\epsilon} = 0 \ , \\
\delta \psi &=& d \left( \delta \theta \right) = d \epsilon = 0 \ , \\
\delta \bar{\psi} &=& d \left( \delta \bar{\theta} \right) = d \bar{\epsilon} = 0 \ ,
\end{eqnarray*}
where we have used the odd parity of $\epsilon$ and $\bar{\epsilon}$ and the fact that we are considering rigid supersymmetries $d \epsilon = 0 = d \bar{\epsilon}$.
The forms in \eqref{oneA} satisfy the Maurer-Cartan algebra
\begin{eqnarray}
\label{oneAB}
d V = \psi\wedge \psi\,, ~~~~~~
d \bar V = \bar\psi\wedge \bar\psi\,, ~~~~~~ d\psi =0\,, ~~~~~~
d\bar\psi =0\,.
\end{eqnarray}
The corresponding algebra of vector fields is given by
\begin{equation}\label{algebra2d}
D = \partial_\theta - \theta \partial_z \ , \ \bar D = \partial_{\bar\theta} - \bar\theta \bar\partial_{\bar{z}} \ , \ D^2 = - \partial_z \ , \ \bar D^2 = - \bar\partial_{\bar{z}} \ , \ \left\{ D , \bar{D} \right\} = D \bar D + \bar D D=0 \ .
\end{equation}
In order to avoid clumsy notation we will denote $\partial_z \equiv \partial$ and $\bar{\partial}_{\bar{z}} \equiv \bar{\partial}$.
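As a quick consistency check of \eqref{algebra2d} (a one-line computation using only $\partial_\theta^2 = 0 = \theta^2$), one has
\begin{eqnarray*}
D^2 = \left( \partial_\theta - \theta \partial \right)\left( \partial_\theta - \theta \partial \right) = - \partial_\theta \theta \partial - \theta \partial \partial_\theta = - \partial + \theta \partial \partial_\theta - \theta \partial \partial_\theta = - \partial \ ,
\end{eqnarray*}
and analogously $\bar D^2 = - \bar\partial$.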
Let us now start by describing the non-chiral multiplet. This is given by a superfield $\Phi$ with the decomposition
\begin{eqnarray}
\label{oneB}
\Phi = \phi + \phi_\theta \theta + \phi_{\bar\theta} \bar\theta + \phi_{\theta\bar\theta} \theta \bar\theta \,, ~~~~
W = D \Phi \,, ~~
\bar W = \bar D \Phi\,, ~~
F = \bar D D \Phi\,.
\end{eqnarray}
The component fields $\phi (z,\bar z), \phi_\theta (z,\bar z) , \phi_{\bar\theta}(z,\bar z) $ and $ \phi_{\theta\bar\theta}(z,\bar z) $ are the spacetime fields; the first and last are even, while the second and third are odd. On the other hand, $(\Phi, W, \bar W, F)$ are the superfields whose first components are the component fields enumerated before. $\Phi$ and $F$ are even, while $W$ and $\bar W$ are odd superfields.
If we write the differential of each superfield we find the following relations (we use $\displaystyle d = V \partial + \bar{V} \bar{\partial} + \psi D + \bar{\psi} \bar{D}$)
\begin{eqnarray}
\label{oneC}
d \Phi &=& V \partial \Phi + \bar V \bar\partial \Phi + \psi W + \bar\psi \bar W\,, \nonumber \\
d W &=& V \partial W + \bar V \bar\partial W - \psi \partial \Phi + \bar \psi F\,, \nonumber \\
d \bar W &=& V \partial \bar W + \bar V \bar\partial \bar W - \bar\psi \bar \partial \Phi - \psi F\,, \nonumber \\
d F &=& V \partial F + \bar V \bar\partial F + \psi \partial \bar W - \bar \psi \bar\partial W\,.
\end{eqnarray}
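For instance, the parametrisation of $dW$ is simply the expansion of $d$ applied to $W = D \Phi$; schematically, for the $\psi$ and $\bar\psi$ components,
\begin{eqnarray*}
\psi D W = \psi D^2 \Phi = - \psi \partial \Phi \ , ~~~~~~ \bar\psi \bar D W = \bar\psi \bar D D \Phi = \bar\psi F \ ,
\end{eqnarray*}
reproducing the second line of \eqref{oneC}; the parametrisations of $d \bar W$ and $d F$ follow analogously.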
The last field $F$ is the auxiliary field and therefore we expect its equation of motion to be purely algebraic and to set it to zero. Before we write the rheonomic Lagrangian for the multiplet, let us first write down the equations of motion. If we set $F =0$, we see from the last equation of \eqref{oneC} that
\begin{eqnarray}
\label{oneD}
\partial \bar W =0\,, ~~~~~
\bar \partial W =0\,.
\end{eqnarray}
These equations imply that the superfield $W$ is holomorphic, $W = W(z)$, and that its conjugate $\bar W$ is anti-holomorphic. If we now substitute these constraints in (\ref{oneC}) we obtain
\begin{eqnarray}
\label{oneE}
d \Phi &=& V \partial \Phi + \bar V \bar\partial \Phi + \psi W + \bar\psi \bar W\,, \nonumber \\
d W &=& V \partial W - \psi \partial \Phi \,, \nonumber \\
d \bar W &=& \bar V \bar\partial \bar W - \bar\psi \bar\partial \Phi \,.
\end{eqnarray}
We can now impose the nilpotency condition for the exterior derivative (corresponding to the Bianchi identities), $d^2 =0$, leading to $\partial \bar \partial \Phi =0$. Hence, the full set of equations of motion reads
\begin{eqnarray}
\label{oneF}
\partial \bar \partial \Phi =0 \,, ~~~~~~~
\partial \bar W =0\,, ~~~~~~~
\bar \partial W =0\,, ~~~~~~~
F =0\,,
\end{eqnarray}
which are the free equations of the $N=1$, $d=2$ supermultiplet. The Klein-Gordon equation in $d=2$ implies that the superfield $\Phi$ decomposes into holomorphic and anti-holomorphic parts:
\begin{equation}
\partial \bar{\partial} \Phi = 0 \ \implies \ \Phi = \Phi_z + \Phi_{\bar z}\ ,
\end{equation}
and therefore we get the on-shell matching of the degrees of freedom. In particular, we can write the on-shell holomorphic and anti-holomorphic superfields as
\begin{eqnarray}
\label{oneG}
\Phi_z = \phi(z) + \phi_{\theta}(z) \theta\,, ~~~~~
\Phi_{\bar z} = \phi(\bar z) + \phi_{\bar\theta}(\bar z)\bar\theta\,, ~~~~~
\end{eqnarray}
factorizing into left- and right-movers. \\
Let us now focus on writing the action. In order to write an action in the first order formalism, we can introduce two additional auxiliary superfields $\xi$ and $\bar\xi$, to be used as Lagrange multipliers. Then, the action reads
\begin{eqnarray}
\label{oneH}
{\cal L}^{(2|0)} &=& (\xi V + \bar \xi \bar V) \wedge ( d \Phi - \psi W - \bar \psi \bar W) +
\left(\xi \bar \xi + \frac{F^2}{2}\right)V\wedge \bar V + \nonumber \\
&+&
W dW \wedge V - \bar W d \bar W \wedge \bar V - d\Phi \wedge (W \psi - \bar W \bar \psi)
- W \bar W \, \psi \wedge \bar \psi \ .
\end{eqnarray}
The corresponding equations of motion can be easily calculated; they read
\begin{eqnarray}
\label{oneHA}
&&V \wedge (d\Phi - \psi W - \bar\psi \bar W) + \bar \xi V\wedge \bar V =0 \ , \nonumber \\
&&\bar V \wedge (d\Phi - \psi W - \bar\psi \bar W) + \xi V\wedge \bar V =0 \ , \nonumber \\
&&(\xi V + \bar \xi \bar V) \psi + 2 d W \wedge V - W \psi\wedge \psi + d\Phi \wedge \psi - \bar W \psi \wedge \bar\psi=0 \ , \nonumber \\
&&(\xi V + \bar \xi \bar V) \bar\psi -
2 d \bar W \wedge \bar V + \bar W \bar\psi\wedge \bar\psi - d\Phi \wedge \bar\psi + W \psi \wedge \bar\psi=0 \ , \nonumber \\
&&d ( \xi V + \bar \xi \bar V) + d W \psi - d\bar W\bar \psi =0 \ , \nonumber \\
&& F = 0 \ .
\end{eqnarray}
It is an easy task to see that they imply the on-shell differentials (\ref{oneE}), the equations of motion (\ref{oneF}) and the relations for the new auxiliary fields in terms of $\Phi$
\begin{eqnarray}
\label{oneHB}
\xi = \partial \Phi \ , ~~~~~
\bar \xi = - \bar\partial \Phi \ .
\end{eqnarray}
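A sketch of this check: expanding $d\Phi = V \partial \Phi + \bar V \bar\partial \Phi + \psi D \Phi + \bar\psi \bar D \Phi$, the $V \wedge \bar V$ components of the first two equations of \eqref{oneHA} give
\begin{eqnarray*}
\bar\partial \Phi + \bar\xi = 0 \ , ~~~~~~ - \partial \Phi + \xi = 0 \ ,
\end{eqnarray*}
which is \eqref{oneHB}, while the $V \wedge \psi$ and $\bar V \wedge \bar\psi$ components enforce $W = D \Phi$ and $\bar W = \bar D \Phi$, consistently with the definitions \eqref{oneB}.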
The rheonomic action is a $(2|0)$ superform. It can be verified that it is closed by using only the algebraic equations of motion for $\xi$ and $\bar \xi$ (\ref{oneHB}) and the curvature parametrisations $d\Phi, dW, d\bar W$ and $d F$ given in (\ref{oneC}). Note that those equations are off-shell parametrisations of the curvatures and therefore they do not require the equations of motion stemming from (\ref{oneH}) (this a priori ``trivial'' consideration will be exactly the origin of the problem arising in the chiral case).
To move from the rheonomic Lagrangian to the component one, we have to use the component PCO ${\mathbb Y}^{(0|2)} = \theta \bar\theta \delta(\psi) \delta(\bar\psi)$ (we refer the reader to the Appendix for an introduction to integral forms, PCO's and the notations we use in the following sections):
\begin{eqnarray}
\label{oneI}
S &=& \int_{{\cal SM}} {\cal L}^{(2|0)}\wedge {\mathbb Y}^{(0|2)} = \nonumber \\
&=&
\int_{z,\bar{z},dz,d\bar{z}} \left[
(\xi_0 dz + \bar\xi_0 d\bar z)\wedge d \phi + \left(\xi_0 \bar \xi_0 + \frac12 {\phi_{\theta\bar\theta}^2} \right) dz\wedge d\bar z +
\phi_\theta d \phi_\theta \wedge dz + \phi_{\bar\theta} d \phi_{\bar\theta} \wedge d\bar z \right] = \nonumber \\
&=& \int_{z,\bar{z}} \left[
\xi_0 \bar{\partial}\phi - \bar\xi_0 \partial \phi + \xi_0 \bar \xi_0 + \frac12 {\phi_{\theta\bar\theta}^2} - \phi_\theta \bar{\partial} \phi_\theta + \phi_{\bar\theta} \partial \phi_{\bar\theta} \right] \ ,
\end{eqnarray}
where $\xi_0$ and $\bar \xi_0$ are the first components of the superfields $\xi$ and $\bar \xi$. Due to the choice of the component PCO, we have used the Dirac deltas to project out all pieces depending on the gravitinos $\psi, \bar{\psi}$, and the term $\theta \bar\theta$ to project out the dependence on the Grassmann coordinates. If we eliminate $\xi_0$, $\bar \xi_0$ and $\phi_{\theta \bar{\theta}}$ via their algebraic equations of motion, we obtain the usual action of the $d=2$ free sigma model \cite{Catenacci:2018jjj}:
\begin{equation}\label{oneIA}
\int_{z,\bar{z}} \left[ \partial \phi \bar{\partial} \phi - \phi_\theta \bar{\partial} \phi_\theta + \phi_{\bar\theta} \partial \phi_{\bar\theta} \right] \ .
\end{equation}
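Explicitly, the algebraic equations of motion give $\xi_0 = \partial \phi$, $\bar\xi_0 = - \bar\partial \phi$ (the first components of \eqref{oneHB}) and $\phi_{\theta\bar\theta} = 0$, so that
\begin{eqnarray*}
\xi_0 \bar{\partial} \phi - \bar\xi_0 \partial \phi + \xi_0 \bar\xi_0 = \partial \phi \bar{\partial} \phi \ ,
\end{eqnarray*}
which reproduces the bosonic part of \eqref{oneIA}.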
In order to obtain the superspace action, we instead have to consider the supersymmetric PCO:
\begin{eqnarray}
\label{oneL}
{\mathbb Y}^{(0|2)} = V \iota \delta(\psi) \wedge \bar V \bar \iota \delta(\bar\psi) \ .
\end{eqnarray}
By substituting this PCO in the action we obtain
\begin{eqnarray}
\label{oneM}
S &=& \int_{\cal SM} \left[ W \bar W \psi \wedge \bar \psi - d\Phi \wedge (W \psi + \bar W \bar \psi) \right] \wedge
V \iota \delta(\psi) \wedge \bar V \bar \iota \delta(\bar\psi) \nonumber \\
&=& - \int_{\cal SM} \Big( W \bar W - [\bar{D} \Phi W + D \Phi \bar W]\Big) V\wedge \bar V \wedge \delta(\psi) \delta(\bar \psi) = \nonumber \\
&=& \int_{z,\bar{z},\theta,\bar{\theta}} \Big[ - W \bar W + \bar{D} \Phi W + D \Phi \bar W \Big] \ .
\end{eqnarray}
If we eliminate the superfields $W$ and $\bar W$ via their algebraic equations of motion, we obtain the usual free $d=2$ superspace action in flat background:
\begin{equation}\label{oneN}
\int_{z,\bar{z},\theta,\bar{\theta}} D \Phi \bar{D} \Phi \ .
\end{equation}
It is easy to verify that \eqref{oneN} leads to \eqref{oneIA} (after the Berezin integration). We stress that we already knew a priori that the two actions describe the same field content: since the Lagrangian is closed, there is no dependence on the choice of the PCO (again, see the Appendix for further comments).
\subsection{The Chiral Model}
Let us now move to the chiral model. In this case we require that $\bar D \Phi =0$, which implies also that $\bar \partial \Phi =0$, due to the algebraic relations \eqref{algebra2d}. In this case, the rheonomic equations \eqref{oneC} become
\begin{eqnarray}
\label{oneO}
d \Phi = V \partial \Phi + \psi W\,, ~~~~~
d W = V \partial W - \psi \partial \Phi\,.
\end{eqnarray}
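The statement that chirality implies holomorphicity is a one-line check, using $\bar D^2 = - \bar\partial$ from \eqref{algebra2d}:
\begin{eqnarray*}
0 = \bar D \left( \bar D \Phi \right) = \bar D^2 \Phi = - \bar\partial \Phi \ .
\end{eqnarray*}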
There is no room for the auxiliary field $F$, and the field content is represented by one scalar field $\phi$ and one fermion $w$ only (the first components of the superfields $\Phi$ and $W$, respectively). Chirality implies that $\Phi$ is also holomorphic (which in turn implies that it is on-shell), and this matches with $w$, which is holomorphic on-shell. The free equations of motion are
\begin{eqnarray}
\label{oneP}
\bar \partial \Phi =0\,, ~~~~~~~
\bar \partial W =0\,.
\end{eqnarray}
Yet at this point it appears clear that writing an action encoding both equations (\ref{oneO}) and (\ref{oneP}) might be difficult. The key point is that we cannot use (\ref{oneO}) directly in the action, since they imply on-shell conditions. We will write down an action according to the rheonomic prescriptions, but we will then show that it is not consistent (in the sense that it does not describe the desired chiral boson).
Let us consider the rheonomic Lagrangian (the superspace is still $N=1$ and therefore, we still have the second coordinate $\bar\theta$)
\begin{eqnarray}
\label{oneQ}
{\cal L}^{(2|0)}_c = \Big[(\xi V + \bar\xi \bar V) \wedge (d\Phi - \psi W) + \xi \bar \xi V\wedge \bar V +
W d W \wedge V - d\Phi \wedge \psi W \Big] \ ,
\end{eqnarray}
obtained from \eqref{oneH} by setting $\bar W$ and $F$ to zero. Here a problem arises: if we want to check whether the Lagrangian is closed, we have to use the MC equations \eqref{oneO}, but these equations imply the equations of motion. Therefore ${\mathcal L}^{(2|0)}_c(\Phi, W, V, \psi)$ is ``trivially closed'', in the sense that when we verify its closure we are immediately on-shell. This is a priori a non-trivial problem, since we are not guaranteed that a modification of the PCO will leave the theory unchanged.
In order to exhibit these problems from a different point of view, let us consider the equations of motion \emph{without} the PCO, i.e., the equations obtained from \eqref{oneQ} assuming independence of the PCO (in other words, the closure of the Lagrangian). The equations of motion read
\begin{eqnarray}
\label{oneR}
V \wedge (d \Phi - \psi W) + \bar \xi V\wedge \bar V =0\,, \nonumber \\
\bar V \wedge (d \Phi - \psi W) + \xi V\wedge \bar V =0\,, \nonumber \\
d ( \xi V + \bar\xi \bar V) + d W \wedge \psi =0\,, \nonumber \\
\psi (\xi V + \bar \xi \bar V) + 2 d W\wedge V + d\Phi\wedge \psi =0\,.
\end{eqnarray}
From the first equation, we have
\begin{eqnarray}
\label{priA}
W = D\Phi\,, ~~~~~
\bar \xi = - \bar\partial \Phi\,, ~~~~~
\bar D \Phi =0\,;
\end{eqnarray}
from the second equation, we get
\begin{eqnarray}
\label{priAA}
\xi = \partial \Phi\,; ~~~~~
\end{eqnarray}
from the third equation, we get
\begin{eqnarray}
\label{priAB}
&&\bar\partial \xi - \partial \bar \xi =0\,, ~~~~~
D \xi + \partial W =0\,, ~~~~~
\bar D \xi=0\,, ~~~~
D \bar \xi + \bar \partial W =0\,, ~~~~\nonumber \\
&&\xi + D W =0\,, ~~~~~
\bar D W =0\,, ~~~
\bar\xi =0\,.
\end{eqnarray}
Combining all these equations, we obtain
\begin{eqnarray}
\label{priC}
&&W = D\Phi\,, ~~~~~ \xi = \partial \Phi\,, ~~~~ \bar \xi =0\,, ~~~~~ \bar D\Phi =0\,, \nonumber \\
&&\bar \partial \Phi=0\,, ~~~~~\bar D W =0\,, ~~~~~ \bar D \xi =0\,, \nonumber \\
&& \bar\partial W =0\,, ~~~~~\bar\partial \xi =0\,.
\end{eqnarray}
From this set of equations we see that $W$ and $\xi$ are defined in terms of the derivatives of $\Phi$, and that $\Phi$ is chiral, $\bar D\Phi =0$. The matching of bosonic and fermionic degrees of freedom clearly follows.
The action on the whole supermanifold, corresponding to the previous Lagrangian, reads
\begin{eqnarray}
\label{rheoB}
S = \int_{_{\mathcal{SM}}}{\mathcal L}^{(2|0)}_c(\Phi, W, V, \psi) \wedge \mathbb{Y}^{(0|2)} \ .
\end{eqnarray}
As we stressed several times, if the action were closed, we could change the PCO by exact terms without changing the field content; but since ${\mathcal L}^{(2|0)}_c(\Phi, W, V, \psi)$ is not closed (or worse, its MC equations imply the equations of motion), we cannot freely make such a choice. In the following paragraphs we will explicitly study the cases corresponding to some choices of the PCO. We will show that the \emph{correct} equations of motion coincide with a subset of \eqref{priC} and actually do not describe a single chiral boson (and its supersymmetric partner).
\subsubsection{The Component PCO}
Let us start by considering the component PCO ${\mathbb Y}^{(0|2)} = \theta \bar\theta \delta(\psi) \delta(\bar\psi)$. It projects the Lagrangian on a purely bosonic manifold and from \eqref{rheoB} we obtain
\begin{equation}
\label{oneRA}
S = \int_{\cal SM} {\cal L}^{(2|0)}_c \wedge \theta \bar\theta \delta(\psi) \delta(\bar\psi) = \int_{z,\bar{z}} \Big( \xi_0 \bar{\partial} \phi - \bar{\xi}_0 \partial \phi + \xi_0 \bar{\xi}_0 - w \bar{\partial} w \Big) \ .
\end{equation}
The action no longer describes a chiral boson: if we use the algebraic equations for $\xi_0$ and $\bar{\xi}_0$, we obtain the equations of motion
\begin{equation}
\partial \bar{\partial} \phi = 0 \ , \ \bar{\partial} w = 0 \ .
\end{equation}
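Explicitly, the algebraic equations give $\xi_0 = \partial \phi$ and $\bar\xi_0 = - \bar{\partial} \phi$, so that \eqref{oneRA} collapses to
\begin{eqnarray*}
S = \int_{z,\bar{z}} \Big( \partial \phi \bar{\partial} \phi - w \bar{\partial} w \Big) \ ,
\end{eqnarray*}
from which both equations follow at once.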
In particular, we have a propagating holomorphic fermion, while both the holomorphic and the anti-holomorphic parts of the boson propagate.
\subsubsection{The Half-Supersymmetric PCO}
Let us consider another PCO of the form
\begin{eqnarray}
\label{oneS}
{\mathbb Y}^{(0|2)} = V \iota \delta(\psi) \wedge \bar\theta \delta(\bar \psi) \ ,
\end{eqnarray}
which is manifestly supersymmetric on the holomorphic part only. It is easy to check that
it is indeed closed and not exact (see the Appendix for explicit calculations). Inserting it into the action we get
\begin{eqnarray}
\label{oneT}
S &=& \int_{\cal SM} {\cal L}^{(2|0)}_c \wedge V \iota \delta(\psi) \bar\theta \delta(\bar \psi) = \int_{\cal SM} \Big[ \bar \xi (D \Phi + W) - \bar\partial \Phi W \Big] \bar\theta V\wedge \bar V \delta(\psi) \delta(\bar \psi) = \nonumber \\
&=& \int_{z,\bar{z},\theta} \Big[ \bar \xi (D \Phi + W) - \bar\partial \Phi W \Big] = \int_{z,\bar{z}} \Big[ D \bar \xi (D \Phi + W) + \bar\xi ( \partial \Phi + D W) + \partial \Phi \bar \partial \Phi - W \bar \partial W \Big]_{\theta=\bar \theta =0}\,.
\end{eqnarray}
This action leads to four equations of motion, two of which are algebraic. The resulting two equations of motion read
\begin{equation}\label{oneTA}
\partial \bar{\partial} \phi = 0 \ , \ \bar{\partial} w = 0 \ ,
\end{equation}
exactly as in the previous case. The same field content would have been obtained by considering the conjugate PCO
\begin{eqnarray}
{\mathbb Y}^{(0|2)} = \theta \delta(\psi) \wedge \bar{V} \bar{\iota} \delta(\bar \psi) \ .
\end{eqnarray}
We can also consider a composite PCO as
\begin{eqnarray}
\label{oneUA}
{\mathbb Y}^{(0|2)} = V \iota \delta(\psi) \wedge \bar\theta \delta(\bar \psi) +
\theta \delta(\psi) \wedge \bar V \bar\iota \delta(\bar\psi)
\end{eqnarray}
where the relative coefficient is fixed by requiring the reality of the PCO. This PCO leads to
\begin{eqnarray}
\label{oneW}
S = \int_{\cal SM} \left[ \left( \bar \xi (D \Phi + W) - \bar\partial \Phi W \right) \bar\theta V\wedge \bar V \delta(\psi) \delta(\bar \psi) + \left( \xi \bar{D} \Phi - W \bar{D} W \right) \theta V \bar{V} \delta(\psi) \delta(\bar \psi) \right] \ ,
\end{eqnarray}
which can be seen to lead exactly to \eqref{oneTA}.
\subsubsection{The Supersymmetric PCO}
Finally, we can also use the supersymmetric PCO
\begin{eqnarray}
\label{oneU}
{\mathbb Y}^{(0|2)} = V \iota \delta(\psi) \wedge \bar V \bar\iota \delta(\bar \psi) \,,
\end{eqnarray}
with which we have
\begin{equation}
\label{oneZ}
S = \int_{\cal SM} {\cal L}^{(2|0)}_c \wedge V \iota \delta(\psi) \wedge \bar V \bar\iota \delta(\bar \psi) = \int_{z,\bar{z},\theta, \bar{\theta}} \left[ - \bar D\Phi W \right] \ ,
\end{equation}
whose equations of motion are
\begin{eqnarray}
\label{oneZA}
\bar D \Phi =0\,, ~~~~~~
\bar D W =0\,,
\end{eqnarray}
which imply $\bar \partial \Phi=0$ and $\bar\partial W=0$, i.e., the equations of motion for two chiral superfields. Each equation of \eqref{oneZA} describes a chiral boson and a chiral fermion, hence leading to two chiral bosons and their corresponding superpartners. This example not only shows that, again, this PCO is not suitable for describing a single chiral boson (and its superpartner), but also that in this case the field content is represented by two holomorphic bosons and two holomorphic fermions, in contrast to the field content we found for the component or half-supersymmetric PCO's. Actually, this confirms that the action is not closed and that the PCO's always project out a non-trivial part of the ``would-be'' equations of motion \eqref{priC}.
One interesting aspect that we have to consider is the following: using the supersymmetric PCO (\ref{oneU}) we have derived a superspace action (\ref{oneZ}) which is manifestly supersymmetric, being written in terms of superfields integrated on the superspace. Since the propagating degrees of freedom do not coincide with those of the component action, it is interesting to see which component action we obtain by performing the Berezin integration of the superspace one. Given the decompositions of the superfields $\Phi$ and $W$
\begin{eqnarray}
\label{decoAA}
\Phi = \phi + \phi_{\theta} \theta + \phi_{\bar{\theta}} \bar\theta + \phi_{\theta \bar{\theta}} \theta \bar\theta\,, ~~~~~~~~~
W = w + w_{\theta} \theta + w_{\bar{\theta}} \bar\theta + w_{\theta \bar{\theta}} \theta \bar\theta\,, ~~~~~~~~~
\end{eqnarray}
where the Grassmannian nature of the fields is understood (recall that the superfield $\Phi$ is even and the superfield $W$ is odd), we can compute the component action as
\begin{eqnarray}
\label{selfdecoC}
S = \int_{z,\bar{z}} \left( - \phi_{\theta} w_{\theta \bar{\theta}} + \phi_{\theta \bar{\theta}} w_{\bar{\theta}} - \bar{\partial} \phi w_{\theta} - \bar{\partial} \phi_{\theta} w \right) \ ,
\end{eqnarray}
where the Berezin integral has already been performed. After eliminating the fields that appear only algebraically, we are left with
\begin{eqnarray}
\label{decoDA}
S = - \int_{z,\bar{z}} \left( \bar{\partial} \phi w_{\theta} + \bar{\partial} \phi_{\theta} w \right)\,,
\end{eqnarray}
which is the Lagrangian of a $bc$-$\beta\gamma$ system used in the BRST quantization of the NSR superstring. It is supersymmetric and it propagates two chiral bosons, $\phi$ and $w_\theta$, and two chiral fermions, $w$ and $\phi_\theta$. \\
The goal of these subsections was to show that the rheonomic formulation of the chiral boson is affected by some non-trivial and intrinsic problems. The point is that the ``naive'' equations of motion \eqref{oneR} are incorrect. The correct equations of motion are
\begin{eqnarray}
\label{correcteom}
&&\Big( V \wedge (d \Phi - \psi W) + \bar \xi V\wedge \bar V \Big)\wedge \mathbb{Y}^{(0|2)} =0 \,, \nonumber \\
&&\Big( \bar V \wedge (d \Phi - \psi W) + \xi V\wedge \bar V \Big)\wedge \mathbb{Y}^{(0|2)} =0 \,, \nonumber \\
&&\Big( d ( \xi V + \bar\xi \bar V) + d W \wedge \psi \Big)\wedge \mathbb{Y}^{(0|2)} =0\,, \nonumber \\
&&\Big( \psi (\xi V + \bar \xi \bar V) + 2 d W\wedge V + d\Phi\wedge \psi \Big)\wedge \mathbb{Y}^{(0|2)} =0\,,
\end{eqnarray}
that is, we have to consider the projection with the PCO as well. This means that not all the equations of motion survive the projection. Despite the huge redundancy of \eqref{oneR}, we have seen that any choice of PCO leaves untouched an ``insufficient'' number of equations of motion, whose solutions do not coincide with the solutions of \eqref{oneR}. When considering theories with closed Lagrangians, \emph{any} choice of the PCO selects a subset of the rheonomic equations of motion whose solution coincides with that of the full rheonomic set.\\
Hence the necessity of finding an alternative, consistent way of treating the chiral boson Lagrangian (and in general any Lagrangian containing self-dual forms). In the following sections we will adapt the strategy pioneered by A. Sen in \cite{Sen:2015nph,Sen:2019qit} to our supergeometric context, showing that it actually solves the problem of self-duality in field theory.
\section{Self-Dual Forms in Supergeometry}
Before we describe the implementation of the SFT-inspired method, let us briefly recall some basic definitions and notations about self-duality in supergeometry. We will focus mainly on the case of our interest, $\mathcal{SM}^{(2|2)}$, although these results readily extend to higher dimensional cases. We will avoid a general treatment of Hodge theory on supermanifolds (which can be found in \cite{Castellani:2015ata}); we will give the basic definitions and results for the $\star$ operator in order to make the paper self-contained.
In this section we will focus on the ``flat operator'', which is defined as
\begin{eqnarray}\label{ADHAA}
\star : \Omega^{(p|q)} \left( \mathcal{SM}^{(n|m)} \right) &\rightarrow& \Omega^{(n-p|m-q)} \left( \mathcal{SM}^{(n|m)} \right) \nonumber \\
\omega^{(p|q)} &\rightarrow& \star \omega^{(p|q)} \ , \ \text{such that} \ \omega^{(p|q)} \wedge \star \omega^{(p|q)} = \mathpzc{B}er^{(n|m)} \ ,
\end{eqnarray}
where $\mathpzc{B}er^{(n|m)}$ is the volume form of the supermanifold. Notice that we are considering the action of the $\star$ operator leaving the components of the forms unchanged: when writing $\omega^{(p|q)} \wedge \star \omega^{(p|q)} = \mathpzc{B}er^{(n|m)}$ we are thus assuming the components to be constant (or simply equal to 1). If we restrict to the case $n=2=m$, we have
\begin{eqnarray}\label{ADHA}
\star : \Omega^{(p|q)} \left( \mathcal{SM}^{(2|2)} \right) &\rightarrow& \Omega^{(2-p|2-q)} \left( \mathcal{SM}^{(2|2)} \right) \nonumber \\
\omega^{(p|q)} &\rightarrow& \star \omega^{(p|q)} \ , \ \text{such that} \ \omega^{(p|q)} \wedge \star \omega^{(p|q)} = \mathpzc{B}er^{(2|2)} = V \bar{V} \delta \left( \psi \right) \delta \left( \bar{\psi} \right) \ .
\end{eqnarray}
From the definition \eqref{ADHAA}, the involution property of the $\star$ operator on a supermanifold $\mathcal{SM}^{(n|m)}$ follows:
\begin{equation}\label{ADHC}
\mathpzc{B}er^{(n|m)} = \omega \wedge \star \omega = \left( -1 \right)^{|\omega| \left( m + n - |\omega| \right)} \star \omega \wedge \omega \ ,
\end{equation}
where $|\omega|$ indicates the parity of the form $\omega$. Notice that for supermanifolds that admit self-dual forms, we have that both the bosonic dimension and the fermionic one are even, thus it follows
\begin{equation}\label{ADHD}
\mathpzc{B}er^{(n|m)} = \omega \wedge \star \omega = \left( -1 \right)^{|\omega|} \star \omega \wedge \omega \ .
\end{equation}
This leads to
\begin{equation}\label{ADHE}
\star^2 \omega = (-1)^{|\omega|} \omega \ .
\end{equation}
In the usual bosonic setting, given a manifold $\mathcal{M}^{(2n)}$, all the forms in a given space $\Omega^{(p)} \left( \mathcal{M}^{(2n)} \right)$ have the same parity, $|\omega | = p \mod 2 \ \forall \omega \in \Omega^{(p)} \left( \mathcal{M}^{(2n)} \right)$; it then follows from \eqref{ADHE} that $\star$ is an (anti-)involution of period 2, hence the operator splits the stable space $\displaystyle \Omega^{(n)} \left( \mathcal{M}^{(2n)} \right)$ into two parts, namely the self-dual and the anti-self-dual one. In the super-setting this is not the case, since a fixed space $\Omega^{(p|q)} \left( \mathcal{SM}^{(2m|2n)} \right)$ contains both even and odd forms. If we impose the self-duality constraint on the stable space $\Omega^{(m|n)} \left( \mathcal{SM}^{(2m|2n)} \right)$, the odd forms are identically set to 0:
\begin{equation}\label{ADHF}
\star \omega = \omega \ \implies \ - \omega = \star^2 \omega = \star \omega = \omega \ \implies \ \omega = 0 \ .
\end{equation}
Thus the constraint halves the dimension\footnote{This has to be interpreted carefully; recall that the spaces of pseudoforms are (countably) infinite-dimensional.} of the even forms and annihilates the odd forms. The same result holds for the anti-self-duality constraint. Thus, the net recipe for counting the degrees of freedom is: when considering a pseudoform $\omega \in \Omega^{(m|n)} \left( \mathcal{SM}^{(2m|2n)} \right)$, the (anti-)self-duality constraint annihilates the odd forms and halves the number of even forms. This will be shown explicitly in the following.
Let us now move to the concrete analysis of self-dual pseudoforms on $\mathcal{SM}^{(2|2)}$. The only stable space (w.r.t. the action of the operator $\star$ defined in \eqref{ADHA}) is $\Omega^{(1|1)} \left( \mathcal{SM}^{(2|2)} \right)$. We can separate any pseudoform $Q^{(1|1)} \in \Omega^{(1|1)} \left( \mathcal{SM}^{(2|2)} \right)$ into a piece along $\delta \left( \psi \right)$ and a second one along $ \delta \left( \bar{\psi} \right)$; thus we write
\begin{eqnarray}\label{SDFPB}
Q^{(1|1)} = Q^{(1|1)}_+ + Q^{(1|1)}_- \ ,
\end{eqnarray}
where in particular (we will sometimes use the shorter notations $\delta \equiv \delta \left( \psi \right)$, $\bar{ \delta} = \delta \left( \bar{\psi} \right)$, $\delta^{(n)} \equiv \iota^n \delta \left( \psi \right)$ and $\bar{\delta}^{(n)} \equiv \bar{\iota}^n \delta \left( \bar{\psi} \right)$)
\begin{eqnarray}
\nonumber Q^{(1|1)}_+ &=& \sum_{n=0}^\infty \left[ Q_{+,0,n} \bar{\psi}^{n+1} \delta^{(n)} + Q_{+,V,n} V \bar{\psi}^{n} \delta^{(n)} + Q_{+,\bar{V},n} \bar{V} \bar{\psi}^{n} \delta^{(n)} + Q_{+,V \bar{V},n} V \bar{V} \bar{\psi}^{n} \delta^{(n+1)} \right] = \\
\label{SDFPBA} && = \sum_{n=0}^\infty \bar{\psi}^n \iota^n \left[ Q_{+,0,n} \bar{\psi} + Q_{+,V,n} V + Q_{+,\bar{V},n} \bar{V} + Q_{+,V \bar{V},n} V \bar{V} \iota \right] \delta \left( \psi \right) \ \ ; \\
\nonumber Q^{(1|1)}_- &=& \sum_{n=0}^\infty \left[ Q_{-,0,n}\psi^{n+1} \bar{\delta}^{(n)} + Q_{-,V,n} V \psi^{n} \bar{\delta}^{(n)} + Q_{-,\bar{V},n} \bar{V} \psi^{n} \bar{\delta}^{(n)} + Q_{-,V \bar{V},n} V \bar{V} \psi^{n} \bar{\delta}^{(n+1)} \right] = \\
\label{SDFPBB} && = \sum_{n=0}^\infty \psi^n \bar{\iota}^n \left[ Q_{-,0,n} \psi + Q_{-,V,n} V + Q_{-,\bar{V},n} \bar{V} + Q_{-,V \bar{V},n} V \bar{V} \bar{\iota} \right] \delta \left( \bar{\psi} \right) \ \ .
\end{eqnarray}
We now want to analyse the self-duality prescription in order to construct an integral form $Q^{(1|2)}$ (as the one appearing in \eqref{streAA}), which is given by
\begin{equation}\label{SDFPA}
Q^{(1|2)} = Q^{(1|1)} \wedge \left( \mathbb{Y}^{(0|1)} + \bar{\mathbb{Y}}^{(0|1)} \right) \ ,
\end{equation}
where $\mathbb{Y}^{(0|1)} = V \iota \delta \left( \psi \right)$ and $\bar{\mathbb{Y}}^{(0|1)} = \bar{V} \bar{\iota} \delta \left( \bar{\psi} \right)$, while $Q^{(1|1)}$ is a self-dual pseudoform. Notice that, a priori, one could consider any linear combination of the two PCO's; here we will consider only the case shown in \eqref{SDFPA}, corresponding to a real PCO. For the sake of clarity, we are using the supersymmetric PCO's, while a priori one could use different PCO's, e.g. the component ones. In the next section we will briefly comment on the choice of component operators, showing that one arrives at analogous results. This is again far from trivial, since when introducing terms à la SFT one loses control of the notion of ``closure'' of the Lagrangian, because in this case the rheonomic Lagrangian is represented by a superform while the SFT term is represented by a pseudoform. \\
It is easy to verify that all the terms containing $\psi^2, \bar{\psi}^2, \iota^2$ or $\bar{\iota}^2$ vanish identically when inserted in \eqref{SDFPA}; thus we do not lose generality if we consider a pseudoform built as follows (recall $\delta' \equiv \iota \delta$, and similarly for $\bar{\delta}'$):
\begin{eqnarray}
\label{SDFPC} Q^{(1|1)}_- &=& Q_1 \psi \bar{\delta} + Q_2 V \bar{\delta} + Q_3 \bar{V} \bar{\delta} + Q_4 V \psi \bar{\delta}' + Q_5 \bar{V} \psi \bar{\delta}' + Q_6 V \bar{V} \bar{\delta}' \ , \\
\nonumber Q^{(1|1)}_+ &=& Q_7 \bar{\psi} \delta + Q_8 V \delta + Q_9 \bar{V} \delta + Q_{10} V \bar{\psi} \delta' + Q_{11} \bar{V} \bar{\psi} \delta' + Q_{12} V \bar{V} \delta' \ .
\end{eqnarray}
Before giving the final expression of the self-dual pseudoform built from \eqref{SDFPC}, and its corresponding integral form given by \eqref{SDFPA}, let us list some useful expressions derived from \eqref{ADHA}:
\begin{align}
\nonumber & \star \left[ \psi \delta \left( \bar{\psi} \right) \right] = V \bar{V} \iota \delta \left( \psi \right) \ \ , \ \ \star \left[ \bar{\psi} \delta \left( \psi \right) \right] = - V \bar{V} \bar{\iota} \delta \left( \bar{\psi} \right) \ \ , \ \ \star \left[ V \delta \left( \psi \right) \right] = - \bar{V} \delta \left( \bar{\psi} \right) \ \ , \\
\nonumber & \star \left[ V \delta \left( \bar{\psi} \right) \right] = \bar{V} \delta \left( \psi \right) \ \ , \ \ \star \left[ \bar{V} \delta \left( \psi \right) \right] = V \delta \left( \bar{\psi} \right) \ \ , \ \ \star \left[ \bar{V} \delta \left( \bar{\psi} \right) \right] = - V \delta \left( \psi \right) \ \ , \\
\nonumber & \star \left[ V \psi \bar{\iota} \delta \left( \bar{\psi} \right) \right] = \bar{V} \bar{\psi} \iota \delta \left( \psi \right) \ \ , \ \ \star \left[ \bar{V} \psi \bar{\iota} \delta \left( \bar{\psi} \right) \right] = - V \bar{\psi} \iota \delta \left( \psi \right) \ \ , \ \ \star \left[ V \bar{\psi} \iota \delta \left( \psi \right) \right] = - \bar{V} \psi \bar{\iota} \delta \left( \bar{\psi} \right) \ \ , \\
\label{ADHB} & \star \left[ \bar{V} \bar{\psi} \iota \delta \left( \psi \right) \right] = V \psi \bar{\iota} \delta \left( \bar{\psi} \right) \ \ , \ \ \star \left[ V \bar{V} \bar{\iota} \delta \left( \bar{\psi} \right) \right] = \bar{\psi} \delta \left( \psi \right) \ \ , \ \ \star \left[ V \bar{V} \iota \delta \left( \psi \right) \right] = - \psi \delta \left( \bar{\psi} \right) \ \ .
\end{align}
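As a consistency check of the involution property \eqref{ADHE}, one can compose any two dual entries of \eqref{ADHB}; for instance, for an even and an odd representative,
\begin{eqnarray*}
\star^2 \left[ V \delta \left( \psi \right) \right] = - \star \left[ \bar{V} \delta \left( \bar{\psi} \right) \right] = V \delta \left( \psi \right) \ , ~~~~~~ \star^2 \left[ \psi \delta \left( \bar{\psi} \right) \right] = \star \left[ V \bar{V} \iota \delta \left( \psi \right) \right] = - \psi \delta \left( \bar{\psi} \right) \ ,
\end{eqnarray*}
in agreement with the sign $\left( -1 \right)^{|\omega|}$.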
These expressions allow us to implement the self-duality prescription, which leads to
\begin{eqnarray}\label{SDFPE}
Q^{(1|1)} = Q_2 \left( V \bar{\delta} + \bar{V} \delta \right) + Q_3 \left( \bar{V} \bar{\delta} - V \delta \right) + Q_4 \left( V \psi \bar{\delta}' + \bar{V} \bar{\psi} \delta' \right) + Q_5 \left( \bar{V} \psi \bar{\delta}' - V \bar{\psi} \delta' \right) \ .
\end{eqnarray}
Notice in particular that, as we discussed previously, the terms corresponding to odd forms have been eliminated, while the degrees of freedom of the terms corresponding to bosonic forms have been halved. From this self-dual pseudoform we can generate the corresponding integral form as in \eqref{SDFPA}:
\begin{equation}\label{SDFPF}
Q^{(1|2)} = Q^{(1|1)} \left( \mathbb{Y}^{(0|1)} + \bar{\mathbb{Y}}^{(0|1)} \right) = \left( Q_3 + Q_5 \right) \left( V \bar{V} \delta \bar{\delta}' - V \bar{V} \delta' \bar{\delta} \right) \ .
\end{equation}
The pieces containing $Q_2$ or $Q_4$ are annihilated both by $\mathbb{Y}^{(0|1)}$ and $\bar{\mathbb{Y}}^{(0|1)}$.
It is worthwhile to consider the inverse problem, i.e., whether, given an integral form $Q^{(1|2)}$, one can reconstruct the corresponding self-dual pseudoform. In other words, we are asking if there is a way to give a consistent ``self-duality prescription'' on an integral form.\footnote{Clearly, the resulting integral form will not be self-dual in the usual sense; we are looking for an integral form originated by a self-dual pseudoform.} Let us then consider a generic $(1|2)$ integral form
\begin{equation}\label{SDFPG}
\tilde{Q}^{(1|2)} = A V \bar{V} \delta' \bar{\delta} + B V \bar{V} \delta \bar{\delta}' + C V \delta \bar{\delta} + D \bar{V} \delta \bar{\delta} \ .
\end{equation}
It is possible to obtain a self-dual pseudoform that generates $\tilde{Q}^{(1|2)}$ by using the new PCO's
\begin{equation}\label{SDFPH}
\mathbb{Y}^{(0|-1)\dagger} = \star \mathbb{Y}^{(0|1)} \star \equiv \mathbb{Y}^\dagger \ \ , \ \ \bar{\mathbb{Y}}^{(0|-1)\dagger} = \star \bar{\mathbb{Y}}^{(0|1)} \star \equiv \bar{\mathbb{Y}}^\dagger \ .
\end{equation}
Explicitly, we can apply the PCO $\tilde{\mathbb{Y}}^{\dagger} = \alpha \mathbb{Y}^{\dagger} + \beta \bar{\mathbb{Y}}^{\dagger}$, with $\alpha , \beta$ generic coefficients:
\begin{eqnarray}
\nonumber && \star \left( A V \bar{V} \delta' \bar{\delta} + B V \bar{V} \delta \bar{\delta}' + C V \delta \bar{\delta} + D \bar{V} \delta \bar{\delta} \right) = - A \psi - B \bar{\psi} + C \bar{V} - D V \ \ ; \\
\nonumber && \left( - A \psi - B \bar{\psi} + C \bar{V} - D V \right) \wedge \left( \alpha \mathbb{Y}^{(0|1)} + \beta \bar{\mathbb{Y}}^{(0|1)} \right) = \\
\nonumber && = \alpha A V \delta - \beta A \bar{V} \psi \bar{\delta}' - \alpha B V \bar{\psi} \delta' + \beta B \bar{V} \delta - C V \bar{V} \iota \delta - D V \bar{V} \bar{\iota} \bar{\delta} \ \ ; \\
\label{SDFPI} && \star \left( \alpha \mathbb{Y}^{(0|1)} + \beta \bar{\mathbb{Y}}^{(0|1)} \right) \star \tilde{Q}^{(1|2)} = - \alpha A \bar{V} \bar{\delta} + \beta A V \bar{\psi} \delta' + \alpha B \bar{V} \psi \bar{\delta}' - \beta B V \delta + C \psi \bar{\delta} - D \bar{\psi} \delta \ \ .
\end{eqnarray}
By comparing \eqref{SDFPI} with \eqref{SDFPE} we obtain
\begin{eqnarray}\label{SDFPJ}
&& Q_3 = - \alpha A \ \ , \ \ Q_3 = \beta B \ \ , \ \ Q_5 = \alpha B \ \ , \ \ Q_5 = - \beta A \ \ , \ \ C = 0 \ \ , \ \ D = 0 \ \ \implies \nonumber \\
&& \implies \ \ A = - \frac{\beta}{\alpha} B \ \ , \ \ A = - \frac{\alpha}{\beta} B \ \ \implies \ \ \alpha^2 = \beta^2 \ .
\end{eqnarray}
Notice that, as expected, we could have removed any piece which is either in the kernel of $\mathbb{Y}^{(0|1)}$ or in that of $\bar{\mathbb{Y}}^{(0|1)}$, i.e., the terms containing either $C$ or $D$. For the sake of clarity we can fix $\alpha = \beta = 1$ (corresponding again to the real PCO), so that it is possible to find the ``originating'' pseudoform if $A = - B$.
Hence, the self-duality prescription on integral forms reads
\begin{equation}\label{SDFPL}
\star \tilde{\mathbb{Y}}^{\dagger} \tilde{Q}^{(1|2)} = \tilde{\mathbb{Y}}^{\dagger} \tilde{Q}^{(1|2)} \ ,
\end{equation}
leading to
\begin{equation}\label{SDFPP}
\tilde{Q}^{(1|2)} = B \left( V \bar{V} \delta \bar{\delta}' - V \bar{V} \delta' \bar{\delta} \right) \ ,
\end{equation}
which is exactly of the same form as \eqref{SDFPF}. This shows that we reach the same result whether we start from a self-dual pseudoform and lift it to an integral form, or start from an integral form and apply the self-duality prescription to it.
Let us now discuss the analogous result carried out with the component PCO. In particular, let us consider \eqref{SDFPA} with the PCO's $\mathbb{Y} = \theta \delta$, $\bar{\mathbb{Y}} = \bar{\theta} \bar{\delta}$:
\begin{equation}\label{SDFPQ}
Q^{(1|2)} = Q^{(1|1)} \wedge \left( \theta \delta + \bar{ \theta} \bar{\delta} \right) \ \ , \ \ Q^{(1|1)} = \star Q^{(1|1)} \ \ .
\end{equation}
In this case, we see from \eqref{SDFPBA} and \eqref{SDFPBB} that only the following terms should be considered (the others are trivially 0 after the projection with the component PCO's):
\begin{equation}\label{SDFPQA}
Q^{(1|1)} = Q_1 V \delta + Q_2 \bar{V} \delta + Q_3 V \bar{V} \iota \delta + Q_4 V \bar{\delta} + Q_5 \bar{V} \bar{\delta} + Q_6 V \bar{V} \bar{\iota} \bar{\delta} \ .
\end{equation}
By imposing the self-duality constraint we obtain the most general self-dual pseudoform that fits into \eqref{SDFPQ}:
\begin{equation}\label{SDFPR}
Q^{(1|1)} = Q_1 \left( V \bar{\delta} + \bar{V} \delta \right) + Q_2 \left( V \delta - \bar{V} \bar{\delta} \right) \ \ .
\end{equation}
By inserting \eqref{SDFPR} in \eqref{SDFPQ} we obtain
\begin{equation}\label{SDFPS}
Q^{(1|2)} = \left( - Q_1 \theta - Q_2 \bar{\theta} \right) V \delta \bar{\delta} + \left( Q_1 \bar{\theta} - Q_2 \theta \right) \bar{V} \delta \bar{\delta} \ \ .
\end{equation}
A priori it may seem that this result is not compatible with the one we found in \eqref{SDFPF}, but this is not the case. In order to see this explicitly, let us count the number of degrees of freedom in the two cases. Suppose the following decomposition of a superfield $Q_i$:
\begin{equation}\label{SDFPSA}
Q_i = q_i + \theta q_{i \theta} + \bar{\theta} q_{i \bar{\theta}} + \theta \bar{\theta} q_{i \theta \bar{\theta}} \ .
\end{equation}
In \eqref{SDFPF} we have four degrees of freedom, since the two superfields appear only in the combination $Q_3 + Q_5$; in \eqref{SDFPS} we have that
\begin{eqnarray}
\label{SDFPSB} && - Q_1 \theta - Q_2 \bar{\theta} = - q_1 \theta - q_2 \bar{\theta} + \left[ \left( -1 \right)^{|q_{2\theta}|} q_{2\theta} - \left( -1 \right)^{|q_{1 \bar{\theta}}|} q_{1 \bar{\theta}} \right] \theta \bar{\theta} \ , \\
\label{SDFPSC} && Q_1 \bar{\theta} - Q_2 \theta = q_1 \bar{\theta} - q_2 \theta + \left[ \left( -1 \right)^{|q_{1 \theta}|} q_{1 \theta} + \left( -1 \right)^{|q_{2 \bar{\theta}}|} q_{2 \bar{\theta}} \right] \theta \bar{\theta} \ .
\end{eqnarray}
From \eqref{SDFPSB} and \eqref{SDFPSC} we count again four degrees of freedom, two bosonic and two fermionic as in the previous case.
After this preamble on self-dual forms on $\mathcal{SM}^{(2|2)}$ we are now ready to discuss the action for the chiral boson.
\section{The Action: Coupling with Self-Dual Pseudoform}
In the previous section we have given a consistent prescription for the self-duality condition on the supermanifold $\mathcal{SM}^{(2|2)}$. In this section we explicitly calculate the action defined in \eqref{streAA} (we also add the coupling with external fields) for the supersymmetric and the component PCO's, in order to show which degrees of freedom decouple from the theory and which ones propagate.
\subsection{Supersymmetric PCO}
By following the prescriptions given in the introduction, the action reads
\begin{eqnarray}
\label{TAWP0} S = \int_{\mathcal{SM}^{(2|2)}} \Big[ \mathcal{L}^{(2|0)}_c \wedge \mathbb{Y}^{(0|2)} + d \Phi^{(0|0)} \wedge Q^{(1|2)} + f^{(2|2)} \left( M,Q \right) \Big] \ ,
\end{eqnarray}
where the last term represents the coupling of the form $Q^{(1|2)}$ with the external fields generically denoted $M$. The first term contains the PCO $\mathbb{Y}^{(0|2)}$ which we now fix to $\mathbb{Y}^{(0|2)} = V \iota \delta \left( \psi \right) \bar{V} \bar{\iota} \delta \left( \bar{\psi} \right)$. In this case, the first term of \eqref{TAWP0} involves the superfields $\Phi$ and $W$ only:
\begin{equation}\label{TAWP0A}
\mathcal{L}^{(2|0)}_c \wedge \mathbb{Y}^{(0|2)} = \bar{D} \Phi W V \bar{V} \delta \left( \psi \right) \delta \left( \bar{\psi} \right) \ .
\end{equation}
Let us start by calculating the term $\displaystyle d \Phi^{(0|0)} \wedge Q^{(1|2)} $, with $Q^{(1|2)}$ given as in \eqref{SDFPF} (we define $Q_3 + Q_5 = \Lambda$):
\begin{align}\label{TAWPA}
d \Phi^{(0|0)} \wedge Q^{(1|2)} = \left[ - \bar{D} \Phi + D \Phi \right] \Lambda V \bar{V} \delta \left( \psi \right) \delta \left( \bar{\psi} \right) \ .
\end{align}
Notice that, in order to respect the parity of the action, $\Lambda$ is an odd superfield. By inserting \eqref{TAWPA} in \eqref{TAWP0}, we obtain
\begin{eqnarray}\label{TAWPC}
\nonumber S = \int_{\mathcal{SM}^{(2|2)}} \left[ \left( \bar{D} \Phi W - \bar{D} \Phi \Lambda + D \Phi \Lambda \right) V \bar{V} \delta \left( \psi \right) \delta \left( \bar{\psi} \right) + f^{(2|2)} (M,Q) \right] \ .
\end{eqnarray}
The term containing the coupling with external fields $M$ is a $(2|2)$-integral form $f^{(2|2)}(M,Q)$; since $Q^{(1|2)}$ is a picture-2 form, we have that $f^{(2|2)}(M,Q)$ is at most linear in $Q^{(1|2)}$, hence it generically reads
\begin{equation}\label{TAWPD}
f^{(2|2)}(M,Q) = R^{(1|0)}(M) \wedge Q^{(1|2)} + f^{(2|2)} (M) = \left[ \left( R_\psi (M) - R_{\bar{\psi}} (M) \right) \Lambda + f(M) \right] V \bar{V} \delta \bar{\delta} \ .
\end{equation}
Then the full action reads
\begin{equation}\label{TAWPE}
S = \int_{z , \bar{z} , \theta , \bar{\theta}} \hspace{-.7cm} \left( \bar{D} \Phi W - \bar{D} \Phi \Lambda + D \Phi \Lambda + \left( R_\psi - R_{\bar{\psi}} \right) \Lambda + f (M) \right) \ .
\end{equation}
The equations of motion are
\begin{eqnarray}
\label{TAWPF} \bar{D} \Phi = 0 \ \ , \ \ - \bar{D} W + \bar{D} \Lambda - D \Lambda = 0 \ \ , \\
\label{TAWPG} \bar{D} \Phi - D \Phi - R_\psi + R_{\bar{\psi}} = 0 \ \ , \\
\label{TAWPH} \frac{\delta S}{\delta M} = 0 \ \ .
\end{eqnarray}
Notice that, as expected, the equations \eqref{TAWPF} imply that only the $\bar{D}$ part of the superfield $W$ couples to the superfield $\Lambda$. We can act on \eqref{TAWPG} with the superderivative $\bar{D}$ and then substitute the expression for $\bar{D} \Phi$ from \eqref{TAWPF} in order to obtain
\begin{eqnarray}
- \bar{D} R_\psi + \bar{D} R_{\bar{\psi}} = 0 \ .
\end{eqnarray}
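The intermediate step is elementary: acting with $\bar{D}$ on \eqref{TAWPG} and using $\bar{D} \Phi = 0$ from \eqref{TAWPF} together with the algebra \eqref{algebra2d},
\begin{eqnarray*}
\bar{D}^2 \Phi - \bar{D} D \Phi - \bar{D} R_\psi + \bar{D} R_{\bar{\psi}} = 0 \ , ~~~~~~ \bar{D}^2 \Phi = \bar{D} \left( \bar{D} \Phi \right) = 0 \ , ~~~~~~ \bar{D} D \Phi = - D \bar{D} \Phi = 0 \ .
\end{eqnarray*}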
This equation, together with \eqref{TAWPH}, provides an independent set of equations for $\Lambda$ and $M$. Given one of their solutions, we can compute $W$ and $\Phi$ from \eqref{TAWPG} and \eqref{TAWPF}. The key point is that different solutions of the previous equations differ from each other by solutions of the free field equations of motion
\begin{eqnarray}\label{TAWPJ}
\bar{D} \Delta \Phi = 0 \ \ , \ \ D \Delta \Phi = 0 \ \ , \ \ \bar{D} \Delta W = 0 \ \ .
\end{eqnarray}
The first two equations imply constant fluctuations $\Delta \Phi$ of $\Phi$, which can be discarded (up to zero-mode counting); the last equation of \eqref{TAWPJ} yields holomorphic fluctuations $\Delta W$ of $W$, whose component content amounts to a chiral boson and a chiral fermion.
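To make the first statement explicit, the two conditions on $\Delta \Phi$ combine with \eqref{algebra2d} into
\begin{eqnarray*}
\partial \Delta \Phi = - D^2 \Delta \Phi = 0 \ , ~~~~~~ \bar{\partial} \Delta \Phi = - \bar{D}^2 \Delta \Phi = 0 \ ,
\end{eqnarray*}
so that, together with $D \Delta \Phi = 0 = \bar{D} \Delta \Phi$, the fluctuation $\Delta \Phi$ is a constant superfield.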
This argument shows that one needs to use the full power of supergeometry (in particular the full complex of differential forms) in order to keep the geometrical point of view on a field theory under control. In this case we have shown that pseudoforms are essential objects to overcome some limitations built into rheonomy.
\subsection{Component PCO}
In this subsection we want to show that we can obtain the same results as in the previous one by considering the ``self-dual'' form $Q^{(1|2)}$ as in \eqref{SDFPS}. The coupling term $d \Phi^{(0|0)} \wedge Q^{(1|2)}$ reads
\begin{equation}\label{TAWPQ}
d \Phi^{(0|0)} \wedge Q^{(1|2)} = \left[ \left( - Q_1 \theta - Q_2 \bar{\theta} \right) \bar{\partial} \Phi + \left( -Q_1 \bar{\theta} + Q_2 \theta \right) \partial \Phi \right] V \bar{V} \delta \bar{\delta} \ \ .
\end{equation}
Analogously, the term containing the other fields $R^{(1|0)} \wedge Q^{(1|2)}$ reads
\begin{equation}\label{TAWPR}
R^{(1|0)} \wedge Q^{(1|2)} = \left[ \left( - Q_1 \theta - Q_2 \bar{\theta} \right) R_{\bar{V}} + \left( - Q_1 \bar{\theta} + Q_2 \theta \right) R_V \right] V \bar{V} \delta \bar{\delta} \ \ .
\end{equation}
The action, already expanded in components, reads
\begin{align}
\nonumber S = \int_{_{z \bar{z}}} &\Big[ \phi_{\bar{\theta}} w_{\theta \bar{\theta}} - \phi_{\theta \bar{\theta}} w_{\bar{\theta}} + \bar{\partial} \phi w_{\theta} + \bar{\partial} \phi_{\theta} w - q_{1} \bar{\partial} \phi_{\bar{\theta}} + \left( q_{2,\theta} - q_{1, \bar{\theta}} \right) \bar{\partial} \phi + q_{2} \bar{\partial} \phi_\theta + q_{1} \partial \phi_\theta + \\
\nonumber &- \left( q_{1,\theta} + q_{2 \bar{\theta}} \right) \partial \phi + q_{2} \partial \phi_{\bar{\theta}} - q_{1} R_{\bar{V},\bar{\theta}} + \left( q_{2,\theta} - q_{1, \bar{\theta}} \right) R_{\bar{V}, 0} + q_{2} R_{\bar{V}, \theta} + q_{1} R_{V,\theta} + \\
\label{TAWPS} &- \left( q_{1,\theta} + q_{2,\bar{\theta}} \right) R_{V,0} + q_{2} R_{V, \bar{\theta}} + f \Big] \ \ .
\end{align}
We can rearrange the six fields emerging from $Q_{1/2}$ into the four independent combinations as follows:
\begin{equation}\label{TAWPT}
-q_{1} = \eta_1 \ , \ q_{2} = \eta_2 \ , \ q_{2,\theta} - q_{1, \bar{\theta}} = p_1 \ , \ -q_{1,\theta} - q_{2, \bar{\theta}} = p_2 \ \ .
\end{equation}
Then the action reads
\begin{align}
\nonumber S = \int_{_{z \bar{z}}} &\Big[ \phi_{\bar{\theta}} w_{\theta \bar{\theta}} - \phi_{\theta \bar{\theta}} w_{\bar{\theta}} + \bar{\partial} \phi w_{\theta} + \bar{\partial} \phi_{\theta} w + \eta_1 \left( \bar{\partial} \phi_{\bar{\theta}} - \partial \phi_\theta + R_{\bar{V}, \bar{\theta}} - R_{V, \theta} \right) + p_1 \left( \bar{\partial} \phi + R_{\bar{V},0} \right) + \\
\label{TAWPU} &+ \eta_2 \left( \bar{\partial} \phi_\theta + \partial \phi_{\bar{\theta}} + R_{\bar{V},\theta} + R_{V, \bar{\theta}} \right) + p_2 \left( \partial \phi + R_{V,0} \right) + f \Big] \ \ .
\end{align}
The equations of motion can be easily obtained:
\begin{eqnarray}
\label{TAWPV} \frac{\delta S}{\delta W} = 0 \ &\implies& \ \bar{\partial} \phi_{\theta} = 0 \ , \ \bar{\partial} \phi = 0 \ , \ \phi_{\theta \bar{\theta}} = 0 \ , \ \phi_{\bar{\theta}} = 0 \ \ , \\
\nonumber \frac{\delta S}{\delta \Phi} = 0 \ &\implies& \ - \bar{\partial} w_\theta - \bar{\partial} p_1 - \partial p_2 = 0 \ , \ - \bar{\partial} w - \partial \eta_1 + \bar{\partial} \eta_2 = 0 \ \ , \\
\label{TAWPW} && w_{\theta \bar{\theta}} + \bar{\partial} \eta_1 + \partial \eta_2 = 0 \ , \ w_{\bar{\theta}} = 0 \ \ , \\
\label{TAWPX} \frac{\delta S}{\delta \eta_{_{1/2}}} = 0 \ &\implies& \ \bar{\partial} \phi_{\bar{\theta}} - \partial \phi_\theta + R_{\bar{V}, \bar{\theta}} - R_{V, \theta} = 0 \ , \ \bar{\partial} \phi_\theta + \partial \phi_{\bar{\theta}} + R_{\bar{V},\theta} + R_{V, \bar{\theta}} = 0 \ \ , \\
\label{TAWPY} \frac{\delta S}{\delta p_{_{1/2}}} = 0 \ &\implies& \ \bar{\partial} \phi + R_{\bar{V},0} = 0 \ , \ \partial \phi + R_{V,0} = 0 \ \ , \\
\label{TAWPZ} \frac{\delta S}{\delta M} = 0 \ \ .
\end{eqnarray}
We can use \eqref{TAWPV} in \eqref{TAWPX} and \eqref{TAWPY} (applying $\bar{\partial}$ where needed) in order to obtain equations for $R$ (namely, for $M$):
\begin{equation}\label{TAWPZA}
\bar{\partial} R_{\bar{V}, \bar{\theta}} - \bar{\partial} R_{V, \theta} = 0 \ , \ R_{\bar{V},\theta} + R_{V, \bar{\theta}} = 0 \ , \ R_{\bar{V},0} = 0 \ , \ \bar{\partial} R_{V,0} = 0 \ \ .
\end{equation}
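As an illustration, the first and last relations are obtained by inserting $\phi_{\bar{\theta}} = 0$, $\bar{\partial} \phi_\theta = 0$ and $\bar{\partial} \phi = 0$ from \eqref{TAWPV} into \eqref{TAWPX} and \eqref{TAWPY} and then applying $\bar{\partial}$:
\begin{eqnarray*}
\bar{\partial} \left( - \partial \phi_\theta + R_{\bar{V}, \bar{\theta}} - R_{V, \theta} \right) = \bar{\partial} R_{\bar{V}, \bar{\theta}} - \bar{\partial} R_{V, \theta} = 0 \ , ~~~~~~ \bar{\partial} \left( \partial \phi + R_{V,0} \right) = \bar{\partial} R_{V,0} = 0 \ .
\end{eqnarray*}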
We then see that in this case $\phi$ and $\phi_\theta$ are the decoupled chiral boson and chiral fermion, respectively, while from \eqref{TAWPW} we easily see that only the chiral parts of $w$ (fermion) and $w_\theta$ (boson) couple to the rest of the fields, in exact analogy with the previous case. This confirms once again that the results obtained with the supersymmetric and the component PCO's are consistent.
\subsection{Non-Factorised Action: an Alternative Approach?}
In the previous calculations there might be a caveat: the rheonomic Lagrangian built in this way is still not closed. We have shown that we get the correct counting of degrees of freedom by using the supersymmetric PCO. This fact suggests that a more general formulation may exist. Once again, SFT suggests a way: is it possible to lift the rheonomic Lagrangian to a non-factorised one? One should then lift the fields in the rheonomic Lagrangian to genuine pseudoforms, so that the picture number need not be saturated via PCO's. This approach has been investigated in \cite{Cremonini:2019aao} in the context of super Chern-Simons theory: instead of using factorised fields $A^{(1|0)} \wedge \mathbb{Y}^{(0|1)}$, the authors used pseudoforms $A^{(1|1)}$. This led to the emergence of non-trivial algebraic structures ($A_\infty$ and $L_\infty$ algebras) which mimic the analogous structures of open Superstring Field Theory (see, e.g., the recent \cite{Erler:2013xta}).
\noindent We see that if we consider the non-factorised action, the SFT-inspired term is already built-in. Indeed, we now show that a simple field redefinition produces the term $d \Phi \wedge Q$.
Let us ``lift'' the fields in \eqref{oneQ} to picture-1 fields, so that the resulting action reads
\begin{eqnarray}
S = \int_{\mathcal{SM}^{(2|2)}} \Big[(\xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V) \wedge (d\Phi^{(0|1)} - \psi W^{(0|1)}) + \xi^{(0|1)} \bar \xi^{(0|1)} V\wedge \bar V + \nonumber \\
+ W^{(0|1)} d W^{(0|1)} \wedge V - d\Phi^{(0|1)} \wedge \psi W^{(0|1)} \Big] \ .
\end{eqnarray}
We wonder if there exists a field redefinition that generates Sen's term. In order to verify this, we can rewrite the action as
\begin{eqnarray}
S = \int_{\mathcal{SM}^{(2|2)}} \Big[ (\xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V + \psi W^{(0|1)}) d\Phi^{(0|1)} - \psi W^{(0|1)} \left( \xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V \right) + \nonumber \\
+ \xi^{(0|1)} \bar \xi^{(0|1)} V\wedge \bar V + W^{(0|1)} d W^{(0|1)} \wedge V \Big] \ .
\end{eqnarray}
Let us consider the first term of the action and denote it as
\begin{equation}
A^{(1|1)} \wedge d\Phi^{(0|1)} \ , \ A^{(1|1)} = (\xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V + \psi W^{(0|1)}) \ .
\end{equation}
We can manipulate this expression as
\begin{equation}
A^{(1|1)} = A^{(1|1)} + \star A^{(1|1)} - \star A^{(1|1)} = Q^{(1|1)} - \star A^{(1|1)} \ ,
\end{equation}
where $Q$ is self-dual. One should pay attention at this point: as we noticed in the previous sections, $\star^2$ produces a minus sign if the form on which it acts is odd. We assume that one correctly groups together the even terms from $\star A$ and the odd terms from $-\star A$ and eventually redefines the remaining fields so as to correct the signs. Here we are skipping these details; we just want to give an intuition of the emergence of the $Q$ term. We have then obtained\vspace{-0.3cm}
\begin{equation}
A^{(1|1)} \wedge d\Phi^{(0|1)} = Q^{(1|1)} \wedge d\Phi^{(0|1)} - \star A^{(1|1)} \wedge d\Phi^{(0|1)} = Q^{(1|1)} \wedge d\Phi^{(0|1)} - A^{(1|1)} \wedge \star d\Phi^{(0|1)} \ ,
\end{equation}
where we have moved the Hodge operator by implicitly assuming the integration over $\mathcal{SM}^{(2|2)}$. We can now re-define $\displaystyle - \star d\Phi^{(0|1)} = \left( d\Phi^{(0|1)} \right)' $ (notice that $d \Phi$ is a $(1|1)$-form) and the term with $Q$ reads
\begin{equation}
Q^{(1|1)} \wedge d\Phi^{(0|1)} = \star Q^{(1|1)} \wedge d\Phi^{(0|1)} = Q^{(1|1)} \wedge \star d\Phi^{(0|1)} = - Q^{(1|1)} \wedge \left( d\Phi^{(0|1)} \right)' = \left( d\Phi^{(0|1)} \right)' \wedge Q^{(1|1)} \ .
\end{equation}
The field re-defined action now reads
\begin{eqnarray}
S = \int_{\mathcal{SM}^{(2|2)}} \Big[ (\xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V + \psi W^{(0|1)}) \left( d\Phi^{(0|1)} \right)' - \psi W^{(0|1)} \left( \xi^{(0|1)} V + \bar\xi^{(0|1)} \bar V \right) + \nonumber \\
+ \xi^{(0|1)} \bar \xi^{(0|1)} V\wedge \bar V + W^{(0|1)} d W^{(0|1)} \wedge V + \left( d\Phi^{(0|1)} \right)' \wedge Q^{(1|1)} \Big] \ ,
\end{eqnarray}
that is, the action with Sen's term inserted. This argument shows (or less presumptuously seems to show) that, when working with pseudoforms, Sen's term is already included. This resembles what we have found in \cite{Cremonini:2019aao,Cremonini:2019xco} for the supersymmetric term of super Chern-Simons theory: when working with the non-factorised Lagrangian, it is not necessary to add other terms in order to obtain a supersymmetric action, since it is supersymmetric by construction.
As we said, this method has not been investigated yet and it may carry other challenges; for example, pseudoforms have an infinite number of components (for super Chern-Simons \cite{Cremonini:2019aao,Cremonini:2019xco} we have shown how to deal with this problem). Are pseudoforms a new way to deal with auxiliary fields?
\section*{Conclusions}\label{sec:conclusions}
\addcontentsline{toc}{section}{\nameref{sec:conclusions}}
\noindent The present work opens new directions in the study of supersymmetric models, supergravity and
string theory, by using the powerful techniques of supergeometry. It gives a path from string field theory to quantum field
theories, justifying the cited results and leading to several generalizations such as non-flat backgrounds and higher dimensional models
(for example, $M5$-branes \cite{Lambert:2019khh,Andriolo:2020ykk}, supergravity couplings, string theory on supermanifolds).
The old problem of self-dual field strengths and the absence of
auxiliary fields might be two faces of the same coin, and we believe that they both can be solved by introducing new non-factorized couplings
of Sen's form. The technique introduced here might have a relevant impact on future analyses. The underlying string field
theory and the power of supergeometric techniques provide a universal constructive method. We plan to develop this program for the $M5$-branes studied from a supergeometrical point of view in a forthcoming publication \cite{prep}.
\section*{Acknowledgements}
\noindent This work has been partially supported by Universit\`a del Piemonte Orientale research funds. We thank L. Castellani, R. Catenacci and S. Noja for many useful discussions. We also thank A. Sen for his comments on the draft.
|
1,116,691,500,409 | arxiv | \section{Introduction}
A remarkable feature of nonlinear non-integrable lattices is the existence of time-periodic spatially localized (typically exponentially) excitations called breather solutions~\cite{ovchinikov1970localized,sievers1988intrinsic,flach1998discrete,flach2008discrete,flach2005qBreathers,flach2006qBreathers}. Although forming a set of zero measure, such solutions are typically linearly stable and they impact the chaotic dynamics of the system, as a generic trajectory may spend long times in their neighborhoods in phase space~\cite{tsironis1996slow,rasmussen2000statistical,eleftheriou2003breathers,eleftheriou2005discrete,gershgorin2005renormalized,matsuyama2015multistage,zhang2016dynamical} -- a phenomenon visualized experimentally in superfluids \cite{Ganshin2009energy}, optical fibers \cite{Solli2007optical} and arrays of waveguides~\cite{Eisenberg1998discrete,Lederer2008discrete}. Starting from exponentially localized solutions, the question naturally emerged whether discrete breathers can have strictly zero tails, turning into strictly \emph{compact breather} solutions. Page found spatially compact breather solutions in the \emph{Fermi-Pasta-Ulam-Tsingou} system in the limit of a non-analytic box interaction potential~\cite{page1990asymptotic}, while Rosenau \emph{et al.} showed the existence of traveling solitary waves with compact support in the modified \emph{Korteweg-de Vries} model~\cite{rosenau1993compactons}. Other successful attempts include compact solutions in one-dimensional lattices in the presence of non-local nonlinear terms~\cite{kevrekidis2002bright}; compact solutions in discrete nonlinear Klein–Gordon models~\cite{Kevrekidis2003discrete}; and compact traveling bullet excitations (as well as super-exponentially localized moving breathers) in nonlinear discrete time quantum walks~\cite{Vakulchyk2018almost}.
One possible means of triggering spatially compact excitations in translationally invariant lattices, which has become vastly popular in recent years, is destructive interference. Indeed, destructive interference in crystalline linear lattices may yield compact localized eigenstates (CLS) which are macroscopically degenerate and form a dispersionless (or flat) band in the Bloch spectrum~\cite{derzhko2015strongly,Leykam2018artificial,leykam2018perspective}. Networks supporting dispersionless bands have been dubbed flatband networks, and the intense activity around them is motivated by the experimental feasibility of CLS in a variety of different platforms, from photonics~\cite{mukherjee2015observation,vicencio2015observation,weimann2016transport}, to microwaves~\cite{bellec2013tight,casteels2016probing}, exciton-polariton~\cite{masumoto2012exciton} and ultra cold atoms~\cite{taie2015coherent}, among others.
Therefore it was no surprise that the introduction of local Kerr nonlinearity in notable flatband networks yielded diverse examples of compact breather solutions~\cite{johansson2015compactification,gligoric2016nonlinear,belicev2017localized,bastian2018controlled,johansson2019nonlinear,stojanovic2020localized}. Their existence has been partly explained by a continuation criterion from linear CLS of flatband networks to compact breathers, introduced in Ref.~\onlinecite{danieli2018compact} for CLS whose nonzero amplitudes are all equal. Meanwhile, compact time-periodic solutions have been found in one-dimensional nonlinear mechanical lattice networks~\cite{perchikov2017flat,perchikov2020stabilty} and in dissipative coherent perfect absorbers~\cite{danieli2020casting}. Furthermore, it has been found that compact discrete breathers induce Fano resonances~\cite{ramachandran2018fano} (similarly to discrete breathers~\cite{flach2003fano,vicencio2007fano}), and their existence is linked to nonlinear caging in linear lattices supporting only flat bands~\cite{gligoric2018nonlinear,diliberto2019nonlinear,danieli2020nonlinear,*danieli2020quantum}.
In this work, we introduce a systematic scheme to generate nonlinear lattices with any number of sites $\nu$ per unit cell supporting compact breather solutions spanning any given number $U$ of unit cells. This scheme is inspired by and based on the recently proposed single particle flatband generators~\cite{maimaiti2017compact,maimaiti2019universal,maimaiti2021flatband} -- schemes following and generalising previously proposed ones~\cite{flach2014detangling,dias2015origami,morales2016simple,rontgen2018compact,toikka2018necessary}. Our generator addresses systems supporting either \emph{accidental compact breathers} -- i.e. compact breather solutions existing at specific values of the nonlinearity strength only -- or \emph{parametric compact breathers} -- i.e. compact breathers existing at any value of the nonlinearity strength. In particular, we are able to broaden the latter class of parametric compact breathers beyond the continuation criteria introduced in Ref.~\onlinecite{danieli2018compact}, which rely on the spatial homogeneity of the compact solutions. Throughout the work, we provide explicit lattice samples with different numbers of bands supporting compact breather solutions for the cases of $U=1$ and $U=2$ lattice unit cells.
\section{Set-up and fundamental concepts}
Let us consider a one-dimensional nonlinear lattice with $\nu$ bands and nearest-neighbor hopping, whose equations of motion read
\begin{equation}
i \dot{\Psi}_n = - H_0\Psi_n - H_1\Psi_{n+1} - H_1^\dagger\Psi_{n-1} + \gamma\mathcal{G}(\Psi_n).
\label{eq:FB_ham_NL1}
\end{equation}
For any $n\in\mathbb{Z}$, each entry of the time-dependent vector $\Psi_n = (\psi_n^1,\dots,\psi_n^\nu)^T\in\mathbb{C}^\nu$ represents one site of the lattice -- hence $\Psi_n$ represents its unit cell. The square matrices $H_0,H_1$ of size $\nu$, with $H_0$ Hermitian, define the lattice profile, while $\mathcal{G}$ is the nonlinear function -- chosen here such that $\mathcal{G}(0) = 0$. We seek lattices Eq.~\eqref{eq:FB_ham_NL1} which possess compact discrete breather solutions (CB), {\it i.e.} time-periodic spatially compact solutions
\begin{align}
C_{n,n_0} (t) &= \left[ \sum_{l=1}^{U} \Phi_l \delta_{n,n_0 + l}\right] e^{-i \Omega t}
\label{eq:NL_FB_states1}
\end{align}
spanning over $U$ unit cells and with frequency $\Omega$. For convenience we refer to such breathers as breathers of size $U$. The vectors $\Phi_l = (\phi_l^1,\dots,\phi_l^\nu)^T$ for $1\leq l\leq U$ define the CB spatial profile.
The ansatz Eq.~\eqref{eq:NL_FB_states1} is a solution of Eq.~\eqref{eq:FB_ham_NL1} if for all $1\leq j\leq U$
\begin{align}
\label{eq:ex_cond_sol}
& \Omega \Phi_j = -H_0\Phi_j - H_1\Phi_{j+1} - H_1^\dagger\Phi_{j-1} + \gamma\mathcal{G}(\Phi_j),\\
& \quad H_1 \Phi_1 =0 \qquad \qquad H_1^\dagger \Phi_U = 0
\label{eq:ex_cond_DI}
\end{align}
The conditions in Eq.~\eqref{eq:ex_cond_DI} ensure \emph{destructive interference} at the boundary of the compact sub-region occupied by the breather -- similarly to the flatband case discussed in Refs.~\onlinecite{maimaiti2017compact,maimaiti2019universal,maimaiti2021flatband} for linear compact localized eigenstates. For nonlinear lattices of Eq.~\eqref{eq:FB_ham_NL1} defined for given $H_0,H_1$, the classification of compact breathers is twofold: We distinguish CB based on the homogeneity of their spatial profiles on one hand, and their dependence on the nonlinearity strength $\gamma$ on the other hand. Specifically, a compact breather in Eq.~\eqref{eq:NL_FB_states1} is:
\begin{itemize}
\item[(i)]
\emph{homogeneous}: if
\begin{equation}
|\phi_l^j|^2 \in \{0,A_l^2\}\ , \quad 1\leq l\leq U\ ,\ \ 1\leq j\leq \nu
\label{eq:CLS_H}
\end{equation}
for real amplitudes $A_l \neq 0$, while a CB is \emph{heterogeneous} otherwise.
\item[(ii)]
\emph{accidental}: if it exists only at specific fine-tuned values of the nonlinearity strength $\gamma_*\neq 0$. Otherwise, a CB is \emph{parametric}: it exists for any value of the nonlinearity strength $\gamma$.
\end{itemize}
To clarify the context of these definitions, it is worth recalling that breathers always come in families, i.e. if present they exist for any strength of nonlinearity~\cite{flach1998discrete,flach2008discrete}; however, they need not always be compact. So an accidental compact breather is part of a family of breathers that turn compact only for specific (isolated) values of the control parameters. Consequently, a parametric compact breather corresponds to a family of breathers that are compact for any parameter value.
Following previous studies, it is known that homogeneous CB are parametric and can be derived as a continuation of linear CLS of a flatband network into the nonlinear regime~\cite{danieli2018compact}. Heterogeneous CB are instead typically accidental~\cite{johansson2015compactification}. We illustrate these distinctions with an example of a $\nu=3$ nonlinear network shown in Fig.~\ref{fig:CB_samples} with $\Psi_n = (a_n,b_n,c_n)$ and matrices
\begin{equation}
H_0 = \begin{pmatrix}
0 & J & 0 \\[0.3em]
J & 0 & J \\[0.3em]
0 & J & 0
\end{pmatrix},
\quad
H_1 = \begin{pmatrix}
0 & J & 0 \\[0.3em]
0 & 1 & 0 \\[0.3em]
0 & J & 0
\end{pmatrix},
\label{eq:diam_matr}
\end{equation}
in Eq.~\eqref{eq:FB_ham_NL1}, with $J>0$ the hopping parameter. In this example, we consider the local cubic nonlinear term $\mathcal{G}(\Psi_n) = (|a_n|^2 a_n, |b_n|^2 b_n, |c_n|^2 c_n)$.
\begin{figure
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Example of a $\nu=3$ network supporting both a
parametric compact breather -- shown in (a) -- and an accidental compact breather -- shown in (b). Both panels: hopping $1$ (solid lines) and $J$ (dashed lines). The black dots indicate the non-zero amplitudes of the breathers.}
\label{fig:CB_samples}
\end{figure}
This lattice remarkably supports both types of breathers -- parametric and accidental. Fig.~\ref{fig:CB_samples}(a) shows a parametric $U=1$ compact breather present for any given $J\neq 0$ and any nonlinearity strength $\gamma\neq 0$, with frequency $\Omega = \gamma A^2$~\cite{danieli2018compact}. At the same time, Fig.~\ref{fig:CB_samples}(b) shows an accidental $U=2$ compact breather that, for a given $J\neq \pm 1, \pm 1/2$~\footnote{For $J=\pm1$, one finds $\gamma = 0$, while $J=\pm1/2$ implies $\gamma\rightarrow\infty$. However, in this latter case $J=\pm1/2$ the CB in Fig.~\ref{fig:CB_samples}(b) becomes homogeneous, but to exist it requires an additional onsite energy $-3/2$ on the central site.}, exists only for the specific nonlinearity strength
\begin{equation}
\gamma_* = 2 \frac{J^2 - 1}{4 J^2 - 1}
\label{eq:cb_cond1_ex}
\end{equation}
with frequency $\Omega_* = 2J^2 + \gamma_* A^2$.
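As a quick numerical sanity check -- a sketch, not part of the derivation above -- one can verify the stationarity of the parametric breather of Fig.~\ref{fig:CB_samples}(a) directly on the equations of motion \eqref{eq:FB_ham_NL1}. The destructive interference conditions \eqref{eq:ex_cond_DI} for the matrices \eqref{eq:diam_matr} enforce the profile $\Phi_1 = (A,0,-A)$, and the residual $\Omega \Phi_n - [-H_0\Phi_n - H_1\Phi_{n+1} - H_1^\dagger\Phi_{n-1} + \gamma\mathcal{G}(\Phi_n)]$ then vanishes to machine precision for arbitrary $J$ and $\gamma$ (a minimal Python sketch):
\begin{verbatim}
import numpy as np

J, gam, A = 0.7, 1.3, 1.0    # arbitrary test values: the breather is parametric
H0 = np.array([[0, J, 0], [J, 0, J], [0, J, 0]])
H1 = np.array([[0, J, 0], [0, 1, 0], [0, J, 0]])

N, n0 = 11, 5                # periodic chain of N unit cells, breather at cell n0
Phi = np.zeros((N, 3))
Phi[n0] = [A, 0.0, -A]       # U = 1 profile enforced by destructive interference
Omega = gam * A**2           # predicted breather frequency

def rhs(Psi):                # right-hand side of the equations of motion,
    up = np.roll(Psi, -1, axis=0)   # local cubic (Kerr) nonlinearity
    dn = np.roll(Psi, 1, axis=0)
    return -Psi @ H0.T - up @ H1.T - dn @ H1 + gam * np.abs(Psi)**2 * Psi

print(np.max(np.abs(Omega * Phi - rhs(Phi))))   # ~ 1e-16
\end{verbatim}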
\section{Generating compact breathers}
We now outline schematically the generator scheme that we use to construct nonlinear lattices supporting compact breathers. This scheme is similar to, and inspired by, the generator scheme introduced in Refs.~\cite{maimaiti2017compact,maimaiti2019universal,maimaiti2021flatband} for linear flatband lattices: it relies on the same idea -- reconstruct, if possible, the nonlinear network from a given compact breather. The main steps for the construction of the network and the accidental compact breathers are
\begin{enumerate}
\item Choose the nonlinear term $\mathcal{G}$ in Eq.~\eqref{eq:FB_ham_NL1}
\item Choose the desired size $U$ of the breather (in lattice unit cells)
\item Choose the nonlinearity strength $\gamma_*\neq 0$ and the breather frequency $\Omega_*$
\item Fix the CB profile - $\{\Phi_l\}_{l=1}^U$ in Eq.~\eqref{eq:NL_FB_states1}
\item Reconstruct the hopping matrices $H_0,H_1$ such that Eqs.~(\ref{eq:ex_cond_sol}-\ref{eq:ex_cond_DI}) support this CB
\end{enumerate}
Parametric compact breathers can be generated from accidental breathers upon further fine-tuning, by placing suitable conditions on the generated hopping matrix $H_1$, as we show later. Let us also remark that for $U\geq 2$, the CB profile $\{\Phi_l\}_{l=1}^U$ is subject to some nonlinear constraints that need to be resolved. This becomes obvious if one considers the last step in the above construction -- the reconstruction of the hopping matrices: assuming the profile $\Phi_l$, the frequency $\Omega$ and the nonlinearity are known, the problem of finding the matrices is a modification of an inverse eigenvalue problem presented in Ref.~\cite{boley1987survey}. It is then obvious that not any set of vectors $\Phi_l$ can be compatible with the desired block tridiagonal (or, more generally, banded) structure of the Hamiltonian matrix of the linear problem.
In this work, we will focus for convenience on the case of local Kerr nonlinearity $\mathcal{G}(\Psi_n) = \mathcal{F}(\Psi_n) \Psi_n $ with
\begin{equation}
\mathcal{F}(\Psi_n) \equiv \sum_{j=1}^\nu |\psi_n^j|^{2\alpha} \ e_j\otimes e_j
\label{eq:FB_ham_NL1_2}
\end{equation}
However, the scheme can be applied to other types of nonlinearities $\mathcal{G}$, including nonlocal ones. We first construct nonlinear networks supporting both accidental and parametric $U=1$ compact breathers. In this case, we also show the existence of, and provide explicit examples of, parametric \emph{heterogeneous compact breathers}. Then, we discuss the case of nonlinear networks supporting $U=2$ compact breathers.
\subsection{Class $U=1$ breathers}
\label{sec:CBgen_U=1}
Let us construct a nonlinear network with $\nu$ sites per unit cell supporting an accidental compact breather, following the steps outlined above. We fix the frequency $\Omega_*$, the nonlinearity strength $\gamma_*$, and parameterise the amplitude vector as $\Phi_1 = \lambda_* \ket{n_1}$ with $\bra{n_1}\ket{n_1} = 1$, where $n_1$ is an arbitrary $\nu$ component vector. Then Eqs.~(\ref{eq:ex_cond_sol}-\ref{eq:ex_cond_DI}) reduce to
\begin{gather}
\label{eq:ex_cond_sol_U1}
H_0 \ket{n_1} = - \Omega_* \ket{n_1} + \gamma_* \mathcal{F}(\lambda_*\ket{n_1}) \ket{n_1}, \\
H_1 \ket{n_1} = H_1^\dagger \ket{n_1} = 0. \notag
\end{gather}
From the r.h.s of Eq.~\eqref{eq:ex_cond_sol_U1} we define the vector
\begin{gather}
\ket{x_*} = -\Omega_* \ket{n_1} + \gamma_*\lambda_*^{2\alpha} \mathcal{F}(n_1 ) \ket{n_1}.
\label{eq:x_U1}
\end{gather}
Assuming $\bra{x_*}\ket{n_1} \neq 0$ (for the case $\bra{x_*}\ket{n_1} = 0$ see Appendix~\ref{app:1}) and introducing the transverse projector $Q_1$ onto the subspace orthogonal to $\ket{n_1}$, i.e. $Q_1\ket{n_1}=0$, the matrices $H_0,H_1$ follow straightforwardly
\begin{gather}
\label{eq:H0H1_U1}
H_0 = \frac{\ket{x_*}\bra{x_*}}{\bra{x_*}\ket{n_1}} + Q_1 K Q_1\ ,\quad H_1 = Q_1 M Q_1
\end{gather}
where $K,M$ are arbitrary $\nu\times\nu$ matrices, with $K$ Hermitian in order to ensure the Hermiticity of $H_0$. The presence of these free parameters is expected, as every inverse eigenvalue problem has multiple solutions.
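The construction \eqref{eq:H0H1_U1} is straightforward to implement numerically. A minimal Python sketch -- assuming $\alpha=1$ in \eqref{eq:FB_ham_NL1_2}, and using the $\nu=3$ input data of the example discussed below -- builds a random admissible pair $(H_0,H_1)$ and verifies Eqs.~\eqref{eq:ex_cond_sol_U1}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n1 = np.array([1.0, 2.0, -1.0]) / np.sqrt(5.0)    # normalized |n_1>
Om, gam, lam = 1.0, 1.0, 1.0                      # Omega_*, gamma_*, lambda_*

x = -Om * n1 + gam * lam**2 * np.abs(n1)**2 * n1  # vector |x_*>, alpha = 1
Q1 = np.eye(3) - np.outer(n1, n1)                 # transverse projector, Q1|n_1> = 0

K = rng.normal(size=(3, 3)); K = (K + K.T) / 2    # arbitrary Hermitian K
M = rng.normal(size=(3, 3))                       # arbitrary M
H0 = np.outer(x, x) / (x @ n1) + Q1 @ K @ Q1      # matrices of the main text
H1 = Q1 @ M @ Q1

print(np.max(np.abs(H0 @ n1 - x)))                # eigenvalue equation: ~ 0
print(np.max(np.abs(H1 @ n1)), np.max(np.abs(H1.T @ n1)))  # H1|n_1> = H1^+|n_1> = 0
\end{verbatim}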
\begin{figure
\centering
\includegraphics[width=0.85\columnwidth]{fig2.pdf}
\caption{Examples of $U=1$ heterogeneous accidental compact breathers on $\nu=3$ network (a) and $\nu=4$ network (b). In both cases, $\lambda_*=1, \gamma_*=1$, and $\Omega_*=1$. The black dots indicate the non-zero amplitudes of the breathers.}
\label{fig:CB_U1_nu3_4}
\end{figure}
Examples of networks supporting $U=1$ accidental compact breathers are schematically shown in Fig.~\ref{fig:CB_U1_nu3_4} for $\nu=3$ [panel (a)] and $\nu=4$ [panel (b)]. The three-component $\nu=3$ network reported in Fig.~\ref{fig:CB_U1_nu3_4}(a) has been constructed with $\ket{n_1} = \frac{1}{\sqrt{5}}(1,2,-1)$, while the four-component $\nu=4$ network shown in Fig.~\ref{fig:CB_U1_nu3_4}(b) has been constructed with $\ket{n_1} = \frac{1}{\sqrt{15}}(1,3,1,2)$. In both cases, we chose $\lambda_*=1, \gamma_*=1$, and $\Omega_*=1$. Their corresponding matrices $H_0,H_1$ have been generated via Eq.~\eqref{eq:H0H1_U1}, and are presented in Appendix~\ref{app:CB_U1}.
Does the nonlinear system Eq.~\eqref{eq:FB_ham_NL1} defined for fixed $H_0,H_1$ in Eq.~\eqref{eq:H0H1_U1} also support \emph{parametric compact breathers} in addition to the accidental one at $\gamma = \gamma_*$? The answer depends on the number of zero modes of $H_1$, {\it i.e.} whether $\ket{n_1}$ is the only zero mode of $H_1$ or not.
If $\ket{n_1}$ is the only zero mode of $H_1$, then parametric compact breathers can exist if and only if the zero mode $\ket{n_1}$ of $H_1$ is homogeneous, {\it i.e.} has all of its components equal either to zero or to some real number $A$ -- as detailed in Appendix~\ref{app:1}. This case satisfies the criterion discussed in Ref.~\cite{danieli2018compact}, and parametric compact breathers exist for any nonlinearity strength $\gamma$ with the profile $\Phi_1 = \lambda \ket{n_1}$ defined for any $\lambda$. The frequency $\Omega$ depends on $\gamma$ as
\begin{equation}
\Omega = \Omega_* + \gamma \frac{ (\lambda A)^{2\alpha}}{R^{2\alpha}} \left[ 1 - \frac{\gamma_*}{\gamma } \frac{\lambda_*^{2\alpha}}{ \lambda^{2\alpha}}\right]
\label{eq:CB_U1_freq}
\end{equation}
Here $A$ is the amplitude in every non-zero site of $\ket{n_1}$, and $R$ is the renormalization coefficient of $\ket{n_1}$ -- see Appendix~\ref{app:U1_sm} for details.
If instead $H_1$ has multiple zero modes $\ket{n_l}$ (which can always be taken orthogonal, $\bra{n_i}\ket{n_j} = \delta_{ij}$), the lattice defined for fixed $H_0,H_1$ in Eq.~\eqref{eq:H0H1_U1} can feature parametric compact breathers whose spatial profile does not require the homogeneity condition Eq.~\eqref{eq:CLS_H}~\cite{danieli2018compact}. We demonstrate this for the simplest case of two zero modes $\ket{n_1}, \ket{n_2}$ and fixed $H_0,H_1$ obtained for given $\ket{n_1}, \gamma_*,\lambda_*, \Omega_*$ from Eq.~\eqref{eq:H0H1_U1}. We then search for parametric CB solutions in the subspace of profiles $\Phi_1$ parameterised by $p, \lambda$:
\begin{equation}
\Phi_1 = \lambda (\ket{n_1} + p\ket{n_2})
\label{eq:CB_par_U1}
\end{equation}
By construction, for $\lambda = \lambda_*$ and $p=0$ in Eq.~\eqref{eq:CB_par_U1}, the network defined by the matrices $H_0,H_1$ in Eq.~\eqref{eq:H0H1_U1} supports an accidental compact breather at strength $\gamma_*$ and frequency $\Omega_*$.
For $\gamma\neq \gamma_*$ and $\lambda \neq \lambda_*$ we search for $p\neq 0$ such that Eq.~\eqref{eq:CB_par_U1} is a CB solution of the network defined by the matrices $H_0,H_1$, Eq.~\eqref{eq:H0H1_U1}. The idea is to search for a solution of Eq.~\eqref{eq:ex_cond_sol_U1} using the ansatz~\eqref{eq:CB_par_U1} in terms of the unknowns $\lambda$ and $p$. This results in a system of algebraic equations for $\lambda, p$. As detailed in Appendix~\ref{app:U1_mm}, we found that the real parameter $p$ can always be continuously expressed as a function of $(\gamma,\lambda)$, with $p(\gamma\to \gamma_*,\lambda \to \lambda_*) \to 0$. The resulting $\Phi_1(\gamma,\lambda, p(\gamma, \lambda))$ given by Eq.~\eqref{eq:CB_par_U1} is a compact breather solution of the lattice defined by the hopping matrices $H_0,H_1$ for any $\gamma,\lambda$. Importantly, there is no reason for the $\Phi_1$ constructed this way to be spatially homogeneous in general, as we illustrate below with an example.
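For completeness, the numerical strategy behind this continuation can be sketched as follows (the explicit algebra being deferred to Appendix~\ref{app:U1_mm}): given the matrices $H_0,H_1$ with the two zero modes built in, one searches for the pair $(p,\Omega)$ minimizing the residual of the stationarity equation on the ansatz \eqref{eq:CB_par_U1}; a genuine parametric CB corresponds to the residual reaching zero. A hypothetical Python usage sketch, assuming $\alpha=1$:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def residual(v, H0, n1, n2, gam, lam):
    # stationarity of the U = 1 ansatz Phi = lam*(n1 + p*n2), alpha = 1
    p, Om = v
    Phi = lam * (n1 + p * n2)
    return -H0 @ Phi - Om * Phi + gam * np.abs(Phi)**2 * Phi

# Hypothetical usage, for H0 built as in the text with zero modes |n_1>, |n_2>:
# sol = least_squares(residual, x0=[0.0, Om_star],
#                     args=(H0, n1, n2, gam_star + delta, 1.0))
# p, Om = sol.x; a genuine parametric CB has sol.cost -> 0.
\end{verbatim}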
\begin{figure
\centering
\includegraphics [width=\columnwidth]{fig3.pdf}
\caption{(a) Example of a $\nu=4$ network which supports a parametric heterogeneous compact breather. (b) Parametrization of the breather spatial profile. (c) Root $p$ versus $\delta$, with $\delta$ the control parameter in $\gamma = \gamma_*+\delta$. (d) $|a_n|^2 = |c_n|^2$ (red) and $|b_n|^2 = |d_n|^2$ (blue) versus $\delta$. Here $\lambda_*=1, \gamma_*=1$, and $\Omega_*=1$. The black dots indicate the non-zero amplitudes of the breathers.}
\label{fig:CB_U1_het}
\end{figure}
We illustrate this construction with an example shown in Fig.~\ref{fig:CB_U1_het}, showing parametric heterogeneous compact breathers obtained via the above construction. We fix $\lambda = \lambda_* = 1$ in Eq.~\eqref{eq:CB_par_U1} and only vary $\gamma = \gamma_* + \delta$, where the parameter $\delta$ controls the deviation away from $\gamma_*$. In Fig.~\ref{fig:CB_U1_het}(a) we show the $\nu=4$ lattice, with black dots indicating the nonzero amplitudes of the breathers. We choose $\Omega_*=1$, $\gamma_*=1$, $\ket{n_1} = \frac{1}{\sqrt{10}} (1,2,1,-2)$, $\ket{n_2} = \frac{1}{\sqrt{10}} (2,-1,2,1)$ and generate the hopping matrices $H_0,H_1$ via Eq.~\eqref{eq:H0H1_U1} supporting $\ket{n_1}$ as an accidental compact breather, while using the freedom in the choice of $H_1$, {\it e.g.} the appropriate choice of the matrix $M$ in Eq.~\eqref{eq:H0H1_U1}, to ensure that $\ket{n_2}$ is a second zero mode of $H_1$ orthogonal to $\ket{n_1}$. Then the vector $\Phi_1$ is parameterised as $\Phi_1=\ket{n_1} + p \ket{n_2} = (a_n, b_n, c_n, d_n)$ -- where $n$ is the unit cell index; see also Eq.~\eqref{eq:CB_par_U1} -- as shown in Fig.~\ref{fig:CB_U1_het}(b). For the parametric compact breathers the parameter $p$ is expressed as a function of $\delta$ such that $p(\delta\to 0) \to 0$, as discussed above. Fig.~\ref{fig:CB_U1_het}(d) shows the values of the components $|a_n|^2 = |c_n|^2$ (red curve) and $|b_n|^2 = |d_n|^2$ (blue curve) as a function of $\delta$: the difference between the values of $a_n$ and $b_n$ confirms that these parametric compact breathers are indeed heterogeneous. The details of this derivation are reported in Appendix~\ref{app:CB_het_fam}.
\subsection{Class $U=2$ breathers}
\label{sec:CBgen_U=2}
The previously discussed construction scheme directly extends to other compact breathers of larger sizes $U\geq 2$ albeit with some additional complications. In this section, we explicitly discuss the case of $U=2$ compact breathers and build explicit lattice network examples, but similar derivations can be performed for other values of $U$.
Let us fix the number of bands $\nu$ and $\Omega_*, \gamma_*$. Then Eqs.~(\ref{eq:ex_cond_sol},\ref{eq:ex_cond_DI}) become for $U=2$
\begin{align}
\label{eq:ex_cond_sol_U2_a}
H_1\Phi_2 & = - (H_0 + \Omega_*) \Phi_1 + \gamma_* \mathcal{F}(\Phi_1) \Phi_1, \\
\label{eq:ex_cond_sol_U2_b}
H_1^\dagger\Phi_1 & = - (H_0 + \Omega_*) \Phi_2 + \gamma_* \mathcal{F}(\Phi_2) \Phi_2, \\
H_1 \Phi_1 & = H_1^\dagger \Phi_2 = 0. \notag
\end{align}
Just like in the flatband case~\cite{maimaiti2017compact,maimaiti2019universal}, this system of equations can be regarded as an inverse eigenvalue problem for $H_1$ given $\Phi_1,\Phi_2$, while $H_0$ can be considered as a free parameter. However, it is also simple to show that not every profile $\{\Phi_1, \Phi_2\}$ produces an $H_1$ -- the parametrization vectors $\{\Phi_1,\Phi_2\}$ are subject to nonlinear constraints that ensure the existence of a solution $H_1$. One way to see this is to notice that $\mel{\Phi_1}{H_1}{\Phi_2}$ can be computed independently from each of the above two equations~(\ref{eq:ex_cond_sol_U2_a}-\ref{eq:ex_cond_sol_U2_b}). The presence of such nonlinear constraints is a generic feature for any $U>1$. In our $U=2$ case we can pick $\Phi_1$ independently (alternatively $\Phi_2$); then the above equations yield polynomial constraints on the second vector $\Phi_2$ (alternatively $\Phi_1$) -- see Appendix~\ref{app:2} for details of the derivation and resolution of the constraints. Assuming that these constraints are resolved and parameterizing the vectors as $\Phi_l = \lambda_{l*} \ket{n_l}$ with $\bra{n_l}\ket{n_l} = 1$ for $l=1,2$, we can use a simple ansatz for the hopping matrix $H_1 = \ket{u_*}\bra{v_*}$. Plugging it into Eqs.~(\ref{eq:ex_cond_sol_U2_a}-\ref{eq:ex_cond_sol_U2_b}) we find (see Appendix~\ref{app:2} for details)
\begin{align}
\label{eq:U2_H1_u}
\ket{u_*} & = - \lambda_{1*} (\Omega_* + H_0) \ket{n_1} + \gamma_* \lambda_{1*}^{2\alpha+1} \mathcal{F}( n_1) \ket{n_1}, \\
\bra{v_*} & = - \lambda_{2*} \bra{n_2} (\Omega_* + H_0) + \gamma_* \lambda_{2*}^{2\alpha+1} \bra{n_2} \mathcal{F}(n_2).
\label{eq:U2_H1_v}
\end{align}
By construction, $H_1$ satisfies Eq.~\eqref{eq:ex_cond_DI} -- namely $H_1 \Phi_1 =0$ and $H_1^\dagger \Phi_2 = 0$.
\begin{figure
\centering
\includegraphics[width=0.55\columnwidth]{fig4.pdf}
\caption{Example of $U=2$ heterogeneous accidental compact breathers on $\nu=3$ network. See the main text for sample amplitudes $A,B,C$. In this case $\lambda_{1*}= \lambda_{2*} =1$, $\Omega_*=1$, and $\gamma_*=1$. The black dots indicate the non-zero amplitudes.}
\label{fig:CB_U2_nu3}
\end{figure}
In Fig.~\ref{fig:CB_U2_nu3} we show samples of accidental $U=2$ compact breathers for a $\nu=3$ network constructed following the above algorithm with $\ket{n_1} = \frac{1}{\sqrt{5}}(1,2,-1)$. We chose $\lambda_{1*} = \lambda_{2*} = \Omega_* = \gamma_* = 1$. By picking an arbitrary Hermitian $H_0$ and resolving the nonlinear constraints, the second parametrizing vector $\Phi_2 = (A,B,C)$ follows. For instance, two different choices of $H_0$ reported in Appendix~\ref{app:U2_ex} give two distinct vectors $\Phi_2$: $(A,B,C) = (-1.464, 0.56026, -1.982)$ and $(A,B,C) = (-2.1328, 2.3334, -1.2823)$. Then we construct $H_1$ via Eqs.~(\ref{eq:U2_H1_u},\ref{eq:U2_H1_v}), defining the nonlinear network -- see Appendix~\ref{app:U2_ex} for details.
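A compact numerical consistency checker -- a sketch under the stated conventions, with $\lambda_{1*}=\lambda_{2*}=1$ absorbed into $\Phi_{1,2}$, real Hermitian $H_0$, and $\alpha=1$ -- builds $H_1=\ket{u_*}\bra{v_*}$ from Eqs.~(\ref{eq:U2_H1_u},\ref{eq:U2_H1_v}) and reports the residuals of Eqs.~(\ref{eq:ex_cond_sol_U2_a}-\ref{eq:ex_cond_sol_U2_b}) and of the destructive interference conditions; all residuals vanish only when $(H_0,\Phi_1,\Phi_2)$ resolve the nonlinear constraints (for a generic $H_0$ they do not):
\begin{verbatim}
import numpy as np

def check_U2(H0, Phi1, Phi2, Om, gam):
    FPhi = lambda Phi: np.abs(Phi)**2 * Phi             # Kerr term, alpha = 1
    u = -(H0 @ Phi1 + Om * Phi1) + gam * FPhi(Phi1)     # |u_*>
    v = -(H0 @ Phi2 + Om * Phi2) + gam * FPhi(Phi2)     # |v_*>  (H0 Hermitian)
    H1 = np.outer(u, v)                                 # H1 = |u_*><v_*|
    res = [H1 @ Phi2 - u,       # first stationarity equation
           H1.T @ Phi1 - v,     # second stationarity equation
           H1 @ Phi1,           # destructive interference
           H1.T @ Phi2]
    return H1, [np.max(np.abs(r)) for r in res]

Phi1 = np.array([1.0, 2.0, -1.0]) / np.sqrt(5.0)
Phi2 = np.array([-1.464, 0.56026, -1.982])    # first sample of the main text
# H1, res = check_U2(H0, Phi1, Phi2, 1.0, 1.0)   # with H0 as in the appendix
\end{verbatim}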
The construction of heterogeneous parametric compact breathers for $H_1$ with multiple orthogonal zero modes follows the blueprint of the $U=1$ case discussed above. Since the procedure is very cumbersome and involved, we omit its presentation in the manuscript.
\section{Discussions and Perspectives}
In this work we have proposed a generator scheme for one-dimensional nonlinear lattices supporting discrete compact breathers. This scheme follows the generator schemes recently proposed in Refs.~\onlinecite{maimaiti2017compact,maimaiti2019universal,maimaiti2021flatband} for single particle flatband networks, and we have explicitly applied our results to the case of local Kerr nonlinearity. In particular, we have outlined explicitly the generator for compact breathers spanning over $U=1$ and $U=2$ unit cells, and we have presented several example nonlinear lattices supporting such solutions. Furthermore, we have shown the existence of, and explicitly constructed, nonlinear lattices supporting parametric heterogeneous compact breathers. These solutions substantially widen the class of parametric compact breathers, as the formerly known families of compact breathers follow as continuations into the nonlinear regime of spatially homogeneous linear CLS of flatband networks, according to the criterion discussed in Ref.~\onlinecite{danieli2018compact}.
The proposed compact breather generator for one-dimensional nonlinear networks, applied here to the case of local Kerr nonlinearity, acts as a blueprint: it can be employed for other types of nonlinear contributions $\mathcal{G}$ in Eq.~\eqref{eq:FB_ham_NL1} -- \textit{e.g.} saturable or nonlocal ones -- as well as in higher-dimensional nonlinear networks, following the flatband generator discussed in Ref.~\onlinecite{maimaiti2021flatband}. This scheme is the first one addressing classical interacting systems, complementing those recently proposed in Refs.~\onlinecite{santos2020methods,danieli2020many} for quantum many-body systems featuring localization properties. Our results allow us to devise nonlinear lattices capable of supporting exact compactly localized excitations based on the principle of destructive interference, extending the successful research area of flatband-induced localization in linear lattices to novel nonlinear structures.
\begin{acknowledgments}
The authors thank Sergej Flach for helpful discussions. This work was supported by the Institute for Basic Science, Korea (IBS-R024-D1).
\end{acknowledgments}
|
1,116,691,500,410 | arxiv | \section{Introduction}
A century after its discovery, the theory of general relativity continues to pass all validity tests.
The latest is the fabulous detection of gravitational waves emitted by a neutron star merger \cite{TheLIGOScientific:2017qsa}, with a first measurement of their propagation speed, which is probably the same as the speed of light in vacuum... as predicted by Einstein. In spite of all these successes, the reasons for believing that general relativity is not the ultimate theory of space-time and that
it will have to be surpassed are numerous, and so interesting that modifying gravity has become a very active field of research per se in theoretical physics and cosmology
these last years (see \cite{Clifton:2011jh,Koyama:2015vza} for example).
Going beyond general relativity necessarily leads one to relax one of the fundamental hypotheses of the Lovelock theorem
that make Einstein's theory unique: invariance under diffeomorphisms, locality, a pure
metric formulation in four space-time dimensions... For instance, massive gravity in four dimensions \cite{deRham:2010kj} renounces the invariance under diffeomorphisms, and scalar-tensor theories rely on the fact
that a scalar degree of freedom comes with the metric to describe the physics of space-time (at least at very large or very short scales)... There exist many modifications
of gravity, and most of them (exactly like massive gravity or scalar-tensor theories) often share the property that one or more additional degree(s) of freedom propagate in the theory. When such theories are designed to account for dark energy, for example, the extra degrees of freedom are responsible for the fifth force which makes the expansion of the universe accelerate. When they are constructed
to cure the well-known (in)famous ultra-violet problems of general relativity, these degrees of freedom play the role of making the theory renormalizable and, eventually, quantizable
\cite{Horava:2009uw,Blas:2010hb}.
Hence, there might be a general belief that modifying gravity cannot be done without introducing new degree(s) of freedom in the scenario, in addition to the usual two
massless spin-2 degrees of freedom of the gravitational field.
In this spirit, scalar-tensor theories are sometimes considered as the ``simplest'' theories of modified gravity because they come with
only one extra degree of freedom. These last years, they have been at the core of a huge activity, and scalar-tensor theories whose
actions involve up to second derivatives of the scalar field have been systematically classified and extensively studied \cite{Horndeski:1974wa,Deffayet:2011gz,Kobayashi:2011nu,Gleyzes:2014dya,Gleyzes:2014qga,Lin:2014jga,Langlois:2015cwa,Langlois:2015skt,Crisostomi:2016tcp,Crisostomi:2016czh,Deffayet:2015qwa,Achour:2016rkg,BenAchour:2016fzp,Langlois:2017mxy}.
Adding higher order derivatives in a Lagrangian is potentially very dangerous because it could lead to not only one, but two scalars propagating in the theory,
one of them being the Ostrogradski ghost. Degeneracy conditions \cite{Langlois:2015cwa}
in a higher order scalar-tensor theory ensure that at most three degrees of freedom propagate, but one has
to study the theory in more detail to see whether these degrees of freedom are safe or not.
Furthermore, it has been realized that degeneracy conditions in the unitary gauge (where the
scalar field is fixed to be a function of time only) are enough to ensure that a unique scalar propagates in addition to the usual two tensor modes \cite{DeFelice:2018mkq}. Also, there exists the possibility that only two tensorial degrees of freedom propagate in a scalar-tensor theory which is, of course, different from gravity (the scalar mode is in fact shadowy in the sense of \cite{DeFelice:2018mkq}).
They form the class of ``cuscuton'' theories \cite{Afshordi:2006ad,Iyonaga:2018vnu}.
These theories are particularly interesting and they can be
considered as minimal modifications of general relativity.
A systematic construction of gravitational theories with only (up to) two degrees of freedom has been initiated in \cite{Lin:2017oow,Aoki:2018brq}.
The idea consists in renouncing the invariance under four dimensional diffeomorphisms but keeping the three dimensional diff-invariance.
This is equivalent to considering scalar-tensor theories in the unitary gauge. As Lorentz-breaking gravity theories generically
have more than two degrees of freedom, one has to find the conditions for
the theory to possess enough constraints to kill the extra degrees of freedom, which would leave us with (at most) two gravitational modes. More precisely, one starts with
the ADM parametrization of the metric
\begin{eqnarray}
\label{ADM}
ds^2 = - N^2 dt^2 + h_{ij} (dx^i + N^i dt)(dx^j + N^j dt) \, ,
\end{eqnarray}
where $N$, $N^i$ and $h_{ij}$ are respectively the lapse function, the shift vector and the induced spatial metric. Then, one considers general
actions of the form
\begin{eqnarray}
\label{general action}
S[N,N^i,h_{ij}] = \int d^3x \, dt \, \sqrt{h} \, {\cal L}(K_{ij},R_{ij},h^{ij},N,\nabla_i) \, ,
\end{eqnarray}
where $K_{ij}$ is the extrinsic curvature, $R_{ij}$ the three-dimensional curvature and $\nabla_i$ the spatial covariant derivative.
And finally, one performs a Hamiltonian analysis to find the necessary conditions for the theory to propagate (at most) two degrees of freedom.
This program was completed in the case where the Lagrangian \eqref{general action} was supposed to be linear in the lapse function \cite{Lin:2017oow}.
In that way, one found a large class of modified theories of gravity with only two degrees of freedom that have been dubbed, for obvious reasons, ``minimally modified gravity''.
In this paper, we construct minimally modified gravity from a Hamiltonian point of view, with the idea that the Hamiltonian framework is better suited for studying and
classifying Lorentz-breaking theories than the Lagrangian framework.
Indeed, we modify the phase space of general relativity (and not directly
the Lagrangian) in such a way that the modified theory remains invariant under spatial diffeomorphisms only and still propagates two tensorial degrees of freedom.
More precisely, we start with a phase space which is parametrized by the usual ten pairs of conjugate variables (the metric variables in the ADM decomposition and their momenta),
and we consider a ``modified" Hamiltonian of the form
\begin{eqnarray}
\label{H intro}
H = \int d^3x \, \sqrt{h} \, \left[ {\cal H}(\pi^{ij},R_{ij},h^{ij},N,\nabla_i) + N^i {\cal V}_i \right] \, ,
\end{eqnarray}
where ${\cal V}_i$ is the usual vectorial constraint, and ${\cal H}$ is a three dimensional diff-invariant function which is a priori different from the usual scalar constraint.
Then, the problem consists in finding the conditions that $\cal H$ must satisfy for the theory to propagate two (or less) degrees of freedom. We address this issue and
find that ${\cal H}$ must be an affine function of the lapse, of the form
\begin{eqnarray}
\label{cond1}
{\cal H}= N \, {\cal H}_0(\pi^{ij},R_{ij},h^{ij},\nabla_i) + {\cal V}(\pi^{ij},R_{ij},h^{ij},\nabla_i) \, ,
\end{eqnarray}
with additional conditions on the functions ${\cal H}_0$ and $\cal V$. A necessary condition is that $\{ {\cal H}_0(x) , {\cal H}_0(y)\}$, viewed as an operator
acting on the space of functions Fun($M$) on the space manifold by integration, has a non-trivial kernel, and a sufficient condition is that
\begin{eqnarray}
\label{cond2}
\{ {\cal H}_0(x) , {\cal H}_0(y)\} \, \approx \, 0 \, ,
\end{eqnarray}
where $\approx$ means weakly vanishing (i.e. it vanishes up to constraints).
In this construction, we recover the well-known class of ``cuscuton'' theories that can be extended to non-local theories. But we also find new classes of theories.
In particular, we exhibit a remarkably simple class of theories which are such that ${\cal H}_0= f({\cal H}_{gr})$ where
$f$ is an arbitrary function and ${\cal H}_{gr}$ is the usual scalar constraint of general relativity.
Such theories are invariant under a four dimensional local symmetry (which contains the 3D diffeomorphims) and possess very interesting properties that we discuss in the paper.
\medskip
The paper is organized as follows. We start, in section \ref{Maxwell}, with the simpler case of a spin-1 field to illustrate our construction.
Hence, we construct modified Maxwell theory in a four dimensional Minkowski space-time, where the dynamical variable is a one-form $A_\mu$.
To mimic the construction of minimally modified theories of gravity, we relax some of the hypotheses which make Maxwell theory unique:
we break the $U(1)$ gauge symmetry and also the global Lorentz invariance, keeping, however, a symmetry under one rotational subgroup $SO(3)$ (the one
that leaves $A_0$ invariant).
Then, we modify the Maxwell Hamiltonian and find the conditions for the new theory to propagate only (up to) two degrees of freedom.
Finally, we give some concrete examples. In section \ref{gravity}, we turn to the more interesting case of minimally modified gravities.
We write conditions that the modified Hamiltonian constraint \eqref{H intro} must satisfy to have (up to) two tensorial degrees of freedom.
These conditions \eqref{cond1} and \eqref{cond2} appear to be very simple in the Hamiltonian framework, and they can be explicitly solved in some cases. As we said previously, we recover the cuscuton theories, and we find an interesting and remarkably simple new class of theories, dubbed $f({\cal H})$ theories,
where the usual Hamiltonian constraint of general relativity ${\cal H}_{gr}$ has been replaced by $f({\cal H}_{gr})$ where $f$ is an arbitrary function.
We quickly study their cosmology
to show interesting differences with general relativity.
We conclude with a brief summary of our results and some perspectives.
\section{Minimally Modified Maxwell Theory}
\label{Maxwell}
Following the ideas that lead to the construction of minimally modified gravity theories,
we build, in this section, a large class of modified Maxwell theories which propagates 2 (vectorial) degrees of freedom in the 4-dimensional Minkowski space-time.
Maxwell theory provides us with a simpler but very interesting context to illustrate the construction of minimally modified gravity theories from a Hamiltonian point of view that
we will present in section \ref{gravity}.
\subsection{Framework: symmetry breaking and degeneracy}
Maxwell theory
is the unique free action for a $U(1)$ connection $A_\mu$ evolving in a Minkowski space-time, which is invariant under the usual $U(1)$ gauge symmetry, also invariant under the global Lorentz symmetry (i.e. the isometry group of the Minkowski metric $SO(1,3)$), and which in addition produces (at most) second order equations of motion. The $U(1)$ invariance implies that the action is a functional of the curvature two-form only
\begin{eqnarray}
\label{curvature}
F_{\mu\nu} \; \equiv \; \partial_\mu A_\nu - \partial_\nu A_\mu \, .
\end{eqnarray}
The global Lorentz symmetry implies that the curvature components must be contracted with the metric $\eta_{\mu\nu}= \text{diag}(-1,1,1,1)$ (and its inverse)
such that the Lagrangian density is a scalar for the Lorentz group. Finally, the freeness of the theory says that
the action is at most quadratic in the connection. Hence the only possible theory is described by the action (in vacuum)
\begin{eqnarray}
S[A_\mu] \; = \; -\frac{1}{4 \mu_0} \int d^4x \, F_{\mu\nu} F^{\mu\nu},
\end{eqnarray}
where $\mu_0$ is the usual permeability, and indices are raised with $\eta^{\mu\nu}$.
A simple analysis shows that this very well-known theory propagates only 2 degrees of freedom, which are the 2 (transverse) photons.
Generalizing the action to any space-time is straightforward.
\medskip
In order to mimic the construction of minimally modified theories of gravity, we relax some of the conditions that make Maxwell theory unique.
In minimally modified gravity theories, one breaks the full space-time diffeomorphism invariance and keeps only the symmetry under three dimensional
diffeomorphisms. In the case of Maxwell theory, there is only the one-dimensional local symmetry group $U(1)$ that we choose to break, and then there
is no remaining local symmetry in the theory. However, to be close to the gravity case, we also decide to break the global Lorentz symmetry keeping
only the invariance under the subgroup of rotations $SO(3)$ that leaves $A_0$ invariant. In that sense, $A_0$ is similar to the lapse function
in the context of Maxwell theory.
As a consequence,
we look for theories whose action is of the form
\begin{eqnarray}
\label{general lagrangian}
S[A_0,A_i] \; = \; \int d^4x \, {\cal L}(A_0,\dot A_0,A_i,\dot A_i,\partial_i) \, ,
\end{eqnarray}
where ${\cal L}$ is the Lagrangian density. In other words, $\cal L$ is constructed from $A_0$, $A_i$, their first time derivatives and their space derivatives
at any order.
As we are going to see in a few lines, this theory propagates generically more than 2 degrees of freedom.
To find the conditions for the theory to propagate only 2 degrees of freedom, we perform a Hamiltonian analysis.
Hence, we start by introducing the phase-space variables
\begin{eqnarray}
\label{canvar}
\{A_\mu(x),P^\nu(y)\} \; = \; \delta_\mu^\nu \, \delta^3(x-y) \, .
\end{eqnarray}
If there are no constraints, the theory propagates 4 degrees of freedom. The presence of a primary constraint is then an obvious necessary condition for the theory to
propagate only 2 (vectorial) degrees of freedom. The theory admits a primary constraint if its action is degenerate, i.e. the 4 dimensional Hessian matrix defined by
\begin{eqnarray}
\label{Hessian}
\mathbb H^{\mu\nu} \equiv \frac{\partial^2 {\cal L}}{\partial \dot A_\mu \partial \dot A_\nu} \, ,
\end{eqnarray}
for $\mu,\nu \in \{0,1,2,3\}$ is not invertible. Furthermore, as we want vector modes to propagate, we add the condition that the submatrix $\mathbb H^{ij}$, for $i,j \in \{1,2,3\}$,
is invertible. If this is the case, we can formally reformulate the Lagrangian density in \eqref{general lagrangian} as a function
\begin{eqnarray}
{\cal L}(A_0,\dot A_0,A_i,\dot A_i,\partial_i) \; = \; {\cal F}(A_0,A_i,\dot A_i - \alpha_i \dot A_0 ,\partial_i) \, ,
\end{eqnarray}
where $\alpha_i$ depends on the connection $A_\mu$ and its spatial derivatives in general. In general (even when there is no coupling to an external current)
time derivatives of $A_0$ cannot be absorbed into a redefinition of $A_i$. But, for simplicity, we assume that $A_0$ is not a dynamical variable as in the original Maxwell theory,
and then it does not appear a priori with time derivatives in the action, which means that $\alpha_i=0$.
In that case, the theory possesses the simple primary constraint\footnote{The generalization to a non-zero $\alpha_i$ is immediate and the primary constraint is replaced by
the combination ${\cal P} \equiv P^0 + \alpha_i P^i \approx 0$.}
\begin{eqnarray}
\label{primary}
{\cal P} \equiv P^0 \approx 0 \, ,
\end{eqnarray}
where we recall that $\approx$ means weakly vanishing.
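As a quick illustration of the degeneracy condition \eqref{Hessian} -- a sketch, not part of the analysis above, with units where $\mu_0=1$ -- one can check symbolically that the standard Maxwell Lagrangian has a Hessian of rank 3, the vanishing row and column associated with $\dot A_0$ giving rise to the primary constraint $P^0 \approx 0$. A minimal Python/sympy sketch, treating velocities and spatial gradients as independent symbols:
\begin{verbatim}
import sympy as sp

Ad = sp.symbols('Ad0:4')                                   # velocities \dot A_mu
dA = sp.Matrix(4, 4, lambda m, n: sp.Symbol(f'dA_{m}{n}')) # gradients d_m A_n
eta = sp.diag(-1, 1, 1, 1)                                 # Minkowski metric

def F(m, n):                                  # F_{mn} = d_m A_n - d_n A_m
    dmAn = Ad[n] if m == 0 else dA[m, n]
    dnAm = Ad[m] if n == 0 else dA[n, m]
    return dmAn - dnAm

L = -sp.Rational(1, 4) * sum(eta[m, m] * eta[n, n] * F(m, n)**2
                             for m in range(4) for n in range(4))

Hess = sp.Matrix(4, 4, lambda m, n: sp.diff(L, Ad[m], Ad[n]))
print(Hess.rank())   # 3: row/column 0 vanish, hence P^0 = dL/d(Ad[0]) = 0
\end{verbatim}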
At this stage, there are no more primary constraints (which is a consequence of the fact that $\mathbb H^{ij}$ is not degenerate), and then one can (in principle) uniquely express (at least locally, on any open set of the phase space) the velocities $\dot A_i$ in terms of the momenta $P^i$. As a consequence, one can construct (formally) the canonical and the total Hamiltonians, respectively given by
\begin{eqnarray}
\label{totalH}
H \; = \; \int d^3x \, {\cal H}(A_\mu,P^i,\partial_i) \, , \qquad
H_{tot} \; = \; H + \int d^3x \, \lambda \, P^0 \, ,
\end{eqnarray}
where $\lambda$ is a Lagrange multiplier which enforces the primary constraint.
As we have already emphasized above, the relation between the Lagrangian and the canonical Hamiltonian is, in general, implicit. It can be made explicit in simple cases only, for free (quadratic) Lagrangians for instance. Furthermore, it will be much more convenient to find the conditions
for the theory to propagate (at most) 2 degrees of freedom in its Hamiltonian formulation than in its Lagrangian formulation. For all these reasons, we
will construct modified Hamiltonian Maxwell theories, and in some cases, we will show how to recover the associated Lagrangian.
\subsection{Killing the extra degrees of freedom}
\label{killing}
From now on, the starting point is the Hamiltonian \eqref{totalH} together with the primary constraint \eqref{primary}.
The stability under time evolution of the primary constraint leads to a secondary constraint
\begin{eqnarray}
\label{calS}
{\cal S} \; \equiv \; \{ {\cal P} \, , H \} \, = \, \frac{\partial \cal H}{\partial A_0} - \partial_i \left(\frac{\partial \cal H}{\partial (\partial_i A_0)}\right)
+ \partial_i \partial_j \left(\frac{\partial \cal H}{\partial (\partial_i \partial_j A_0)}\right) + \cdots \, \approx 0 \; ,
\end{eqnarray}
when $ \cal H$ depends explicitly on $A_0$. In the particular case where $\cal H$ does not depend on $A_0$ (nor on its spatial derivatives),
the Lagrangian itself does not depend on $A_0$ and the theory propagates 3 degrees of freedom. For this reason, we assume from now on that $\cal H$ depends on
$A_0$ (or its spatial derivatives). To be more precise, we exclude the case where $\mathcal{H}$ depends on $A_0$ and its spatial derivatives only through a total spatial derivative.
Even in that case, the theory could propagate
up to 3 degrees of freedom (if there are no more constraints and if the two constraints are second class).
To go further and to find the conditions on the Hamiltonian for the theory to propagate (at most) two degrees of freedom, we compute
the Poisson bracket between the primary and the secondary constraints,
\begin{eqnarray}
\Delta (x,y) \; \equiv \; \{ \, {\cal S}(x) \, , \, {\cal P}(y) \} \, ,
\end{eqnarray}
and one studies whether it (weakly) vanishes or not. Notice that we are using the shortened notations $F(x)=F(A_\mu(x),P^i(x),\partial_i)$ for any
function $F$ in the phase space.
First, we study the case where $\Delta$ is not weakly vanishing.
There are no more constraints in the theory, and the pair (${\cal P}$, ${\cal S}$) forms a set of second class constraints.
Hence, the theory propagates
$[(2\times 4) - 2]/2=3$ degrees of freedom, i.e. one more than Maxwell theory. The extra degree of freedom is the longitudinal mode which comes
in addition to the usual two polarizations of the photon.
Now, we study the more interesting case where $\Delta$ is weakly vanishing.
The number of degrees of freedom depends on whether the bracket $\Omega(x,y) \equiv \{ {\cal S}(x) , { \cal H}(y)\}$ is vanishing or not.
If $\Omega$ is weakly vanishing, the theory has no more constraints, the pair $({\cal P},{\cal S})$ forms a set of first class constraints,
which means that there is a ``hidden'' local symmetry in the theory. Furthermore, the theory propagates $[(2\times 4)-(2\times 2)]/2=2$ degrees of freedom, as in Maxwell theory.
If $\Omega$ is not weakly vanishing, there is a tertiary constraint ${\cal T}$, but this may not be enough to ensure that the theory propagates
only 2 degrees of freedom. If one of the three constraints is first class (which is necessarily the case if all the constraints are local), then the theory admits an extra symmetry and only
2 degrees of freedom. If this is not the case, one needs the presence of an extra quaternary constraint, which would definitively imply that there are strictly fewer than $2$ degrees of freedom.
As a consequence, in any cases, we see that a necessary condition for the theory to propagate 2 or less degrees of freedom is that
\begin{eqnarray}
\label{conditionfor2DOF}
\{ {\cal P}(x) \, , \, \{ {\cal P}(y), H \}\} \; \approx \; 0 \, ,
\end{eqnarray}
i.e. it vanishes up to terms proportional to ${\cal S}$.
Let us make this condition more
explicit, and show that it necessarily implies that ${\cal S} \equiv \{ {\cal P}, H \}$ does not depend on $A_0$. For that, let us assume
the contrary, i.e. that ${\cal S}$ depends at least on $A_0$ or on one of its spatial derivatives.
Hence, the constraint ${\cal S}(A_0,\partial_i A_0, \cdots)\approx 0$ can be viewed as a differential equation
that we can solve for $A_0$ (with appropriate boundary conditions) in terms of the remaining phase space variables, at least formally.
In that case,
the secondary constraint can be (locally) replaced by the equivalent constraint
\begin{eqnarray}
\tilde{\cal S} \; \equiv \; A_0 - {\cal A}_0(A_i,P^i,\partial_i) \, \approx \, 0 \, ,
\end{eqnarray}
where ${\cal A}_0$ is the explicit solution for $A_0$. As a consequence, the new bracket between the constraints
$ \{ \tilde{\cal S}(x) , {\cal P}(y) \} = \delta (x-y)$ is clearly non-vanishing, and then the theory propagates 3 degrees of freedom, which contradicts the initial assumption. As a consequence,
the condition \eqref{conditionfor2DOF} is (locally) equivalent to the condition that $\cal S$ can be written as
\begin{eqnarray}
{\cal S} \; = \; \nu(A_0) \, {\cal H}_0(A_i,P^i,\partial_i) \, ,
\end{eqnarray}
where ${\cal H}_0$
depends neither on $A_0$ nor on its derivatives, and $\nu$ is an arbitrary non-vanishing function of $A_0$, say positive.
Hence, the Hamiltonian density necessarily takes the form (up to a total spatial derivative)
\begin{eqnarray}
{\cal H} = {\cal V} + N(A_0){\cal H}_0\, ,
\end{eqnarray}
where ${\cal H}_0$ and ${\cal V}$ depend on $A_i, P^i$ and their spatial derivatives only. The function $N$ is an integral of $\nu$, and
then it is an increasing function of $A_0$ (as $\nu$ is supposed to be positive).
Furthermore, a simple canonical transformation allows us to fix (locally) $N(A_0)=A_0$ without loss of generality.
\subsection{Complete Hamiltonian description}
To summarize, we found that any Hamiltonian theory which satisfies the necessary condition
\eqref{conditionfor2DOF} is defined (up to a canonical transformation) by a phase space parametrized by the 4
pairs of canonical variables \eqref{canvar} whose dynamics is governed
by a Hamiltonian of the form
\begin{eqnarray}
\label{Ham2dof}
H \; = \; \int d^3x \, \left[ {\cal V}(A_i,P^i,\partial_i) + A_0 \, {\cal H}_0(A_i,P^i,\partial_i) \right] \, ,
\end{eqnarray}
together with the primary constraint ${\cal P} \approx 0$ \eqref{primary}.
Hence, the secondary constraint is now simply given by
\begin{eqnarray}
{\cal S} \; \equiv \; {\cal H}_0(A_i,P^i,\partial_i) \; \approx \; 0 \, .
\end{eqnarray}
The existence of this constraint implies immediately that the constraint ${\cal P} \approx 0$ is in fact first class, and it corresponds to the
(on-shell) invariance of the theory under the arbitrary shift,
\begin{eqnarray}
\label{shiftsymA0}
A_0 \mapsto A_0 + u \, ,
\end{eqnarray}
of the non-dynamical variable $A_0$, by an arbitrary function $u(x)$.
Requiring conservation under time evolution of the secondary constraint leads to the condition
\begin{eqnarray}
\label{condHH}
\int d^3y \, \left( \{ {\cal H}_0(x), {\cal H}_0(y) \} A_0(y) + \{ {\cal H}_0(x), {\cal V}(y)\}\right) \; \approx \; 0 \, ,
\end{eqnarray}
whose resolution depends on the properties of $\Delta(x,y) \equiv \{ {\cal H}_0(x), {\cal H}_0(y) \} $ viewed as an operator acting on the space of functions Fun$(\mathbb R^3)$
by integration. When $\Delta$ is invertible, the condition \eqref{condHH} fixes completely the Lagrange multiplier $A_0$ in terms of the phase space variables.
Furthermore, in that case, $\Delta(x,y)$ is necessarily not scalar\footnote{A two-point distribution $F(x,y)$ is scalar if and only if $F(x,y)=F(x,0) \delta(x-y)$ where $\delta$ is the Dirac distribution.} (it involves derivatives of Dirac distributions) and the constraint on $A_0$ is in fact a partial differential equation
which would need appropriate boundary conditions to be explicitly resolved. There is no quaternary constraint, as the time evolution of ${\cal T} \approx 0$ fixes completely
the Lagrange multiplier. The theory then admits three secondary constraints, with a non-scalar Dirac matrix and, in particular, a non-scalar Poisson bracket between $\cal P$ and $\cal T$. As a consequence, the theory is not well-posed.
The case where $\Delta$ is a non (weakly) vanishing operator with a non-trivial kernel is much more complicated to study. To understand this situation, it is convenient to decompose the space of functions on which $\Delta$ acts as the direct sum Fun$(\mathbb R^3)$=Im($\Delta$) $\oplus$ Ker($\Delta$), where Im($\Delta$) and Ker($\Delta$) are respectively the image and the kernel of $\Delta$. Hence, the condition \eqref{condHH} not only fixes the component of $A_0$ in Im($\Delta$) but can also produce a new (quaternary) constraint, obtained by projecting \eqref{condHH} onto Ker($\Delta$). The new constraint may be non-scalar, the general Dirac analysis appears to be very subtle, and it should be done on a case-by-case basis. For that reason, we will exclusively consider the simpler case where $\Delta$ is weakly
vanishing:
\begin{eqnarray}
\label{comSS}
\{ {\cal H}_0(x), {\cal H}_0(y)\} \; \approx \; 0 \, .
\end{eqnarray}
If this is the case, the conservation of the secondary constraint under time evolution leads either to a tertiary constraint
\begin{eqnarray}
{\cal T}(x) \; \equiv \; \{ {\cal H}_0(x) \, , \, \int d^3y \, {\cal V}(y)\} \, ,
\end{eqnarray}
or to no new constraint if ${\cal T} $ is itself weakly vanishing. In either of these two cases, the theory propagates 2 degrees of freedom or fewer.
\begin{itemize}
\item {Case where ${\cal T} \approx 0$ is automatically satisfied}. The theory admits 2 first class constraints ${\cal P} \approx 0$ and
${\cal H}_0 \approx 0$. The constraint $\cal P$ is associated to the (on-shell) symmetry described above \eqref{shiftsymA0}, and the constraint ${\cal S}$ generates a gauge symmetry acting on the phase space variables $(A_i,P^i)$, exactly as in Maxwell
theory. As a result the theory propagates $[(2\times 4)-(2\times 2)]/2=2$ degrees of freedom.
\item {Case where ${\cal T} \approx 0$ is a new constraint, and $\cal T$ does not commute with ${\cal H}_0$}. The Dirac analysis stops here with
one first class constraint ${\cal P} \approx 0$ and two second class constraints ${\cal H}_0 \approx 0$, $ {\cal T} \approx0$, which lead
to $[(2 \times 4) - (2+1+1)]/2=2$ degrees of freedom.
\item {Case where ${\cal T} \approx 0$ is a new constraint, and $\cal T$ commutes with ${\cal H}_0$}. Either the Dirac analysis continues producing
constraints, or $\cal T$ and ${\cal H}_0$ are first class. In any case, the theory propagates 1 or 0 degree of freedom.
\end{itemize}
As a conclusion, any deformation of Maxwell theory which breaks the $U(1)$ symmetry, which is invariant under the global $SO(3)$ group that leaves
$A_0$ invariant, and which propagates at most 2 degrees of freedom necessarily has a Hamiltonian of the form \eqref{Ham2dof}. Furthermore, the condition \eqref{comSS}
is sufficient to ensure that the theory propagates at most 2 degrees of freedom, but it has not been rigorously proven that it is also necessary, because the theory admits a quaternary
(possibly non-local) constraint when $\Delta$ is non-vanishing with a non-trivial kernel.
\subsection{Example: quadratic theories}
Let us illustrate the previous analysis with a simple example. We consider a Hamiltonian which is, at most, quadratic in the phase space
variables $(A_0,A_i,P^i)$.
\subsubsection{General Hamiltonian analysis}
Furthermore, we assume that the Hamiltonian can be written in terms of the fields and their first order (spatial) derivatives only. In that case ${{\cal H}_0}$ is linear in $(A_i,P^i)$ whereas
${{\cal V}}$ is quadratic in $(A_i,P^i)$. Hence, these two functions can be written as
\begin{eqnarray}
{\cal V} & = & \alpha_1 \, A^2+ \alpha_2 \, P^2+ \alpha_3 \, (AP) + \alpha_4 \, (\partial A)^2 + \alpha_5 \, (\partial P)^2 + \alpha_6 \, (\partial A)(\partial P) \nonumber \\
&-& \alpha_7 \, \partial_j A_i \partial^j A^i - \alpha_8 \, \partial_j P_i \partial^j P^i - \alpha_9 \, \partial_j A_i \partial^j P^i \, ,\\
{\cal H}_0 & = & \beta_1 \, \partial A + \beta_2 \, \partial P\, ,
\label{quadraticH1}
\end{eqnarray}
where $\alpha_I$ and $\beta_I$ are constants, and we use the shorthand notations
\begin{eqnarray}
\partial X \equiv \partial_i X^i \, , \quad
XY \equiv X_i Y^i \, , \quad
X^2\equiv X_i X^i \, ,
\end{eqnarray}
for $X$ being $A$ or $P$. Notice that indices are lowered and raised with the flat metric $\delta_{ij}$ and its inverse $\delta^{ij}$.
As ${\cal H}_0$ trivially satisfies the condition \eqref{comSS}, the theory propagates at most 2 degrees of freedom for any values of
the coefficients $\alpha_I$ and $\beta_I$.
Let us study these theories in detail. First, using canonical transformations, we can simplify the shape of the Hamiltonian.
Indeed, canonical transformations (with no explicit time dependence) which preserve quadratic and first order Hamiltonians
are of the form
\begin{eqnarray}
A \; \longmapsto \; x A + y P \, , \qquad
P \; \longmapsto \; z A + w P \, , \qquad xw-yz=1 \, .
\end{eqnarray}
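For instance (our algebra, under the stated assumption $\beta_2 \neq 0$): writing $\tilde A \equiv xA+yP$ and $\tilde P \equiv zA+wP$, and inverting the unit-determinant map as $A = w \tilde A - y \tilde P$, $P = -z \tilde A + x \tilde P$, the linear part of the Hamiltonian transforms as
\begin{eqnarray}
\beta_1 \, \partial A + \beta_2 \, \partial P \; = \; (\beta_1 w - \beta_2 z) \, \partial \tilde A + (\beta_2 x - \beta_1 y) \, \partial \tilde P \, ,
\end{eqnarray}
and the explicit choice $x=-1/\beta_2$, $y=0$, $z=-\beta_1$, $w=-\beta_2$ (for which $xw-yz=1$) sets the coefficient of $\partial \tilde A$ to zero and that of $\partial \tilde P$ to $-1$.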
Hence, (when $\beta_2 \neq 0$) one can find a canonical transformation such that
\begin{eqnarray}
{\cal V} & = & \alpha_1 \, A^2+ \alpha_2 \, P^2+ \alpha_4 \, (\partial A)^2 + \alpha_5 \, (\partial P)^2 + \alpha_6 \, (\partial A)(\partial P) \nonumber \\
&-& \alpha_7 \, \partial_j A_i \partial^j A^i - \alpha_8 \, \partial_j P_i \partial^j P^i - \alpha_9 \, \partial_j A_i \partial^j P^i \, ,\\
{\cal H}_0 & = & - \partial P\, ,
\label{quadraticH}
\end{eqnarray}
which corresponds to taking $\alpha_3=0$, $\beta_1=0$ and $\beta_2=-1$ in the general expression \eqref{quadraticH1}. As a consequence,
the expression of the constraint ${\cal H}_0 \approx 0$ has exactly the same form as in Maxwell theory, and then, one can fix $\alpha_5=0$
without loss of generality (by a redefinition of the Lagrange multiplier $A_0$). Notice that, even though the constraint ${\cal H}_0 \approx 0$
is the same as in Maxwell theory, it is not necessarily first class. This can be easily seen if one re-expresses the total Hamiltonian
as follows
\begin{eqnarray}
\label{Hamiltoniancurv1}
H & = & \int d^3x \,[- A_0 \partial P + \alpha_2 P^2 - \frac{1}{2} \alpha_4 F_{ij} F^{ij} + \alpha_6 F_{ij} \partial^j P^i
+ (\alpha_9 - \alpha_6) P_i \Delta A^i ] \nonumber \\
&& \qquad + [ \alpha_1 A^2 + (\alpha_7-\alpha_4) A_i \Delta A^i ] \, ,
\end{eqnarray}
where $F_{\mu\nu}$ is the curvature of the connection \eqref{curvature}.
The first line in \eqref{Hamiltoniancurv1} is invariant under the $U(1)$ gauge symmetry $\delta_\varepsilon A_i = \partial_i \varepsilon$.
The second line is clearly not, which makes the constraint second class, and,
from its expression, we see that the conditions for the theory to be $U(1)$ gauge invariant are immediately given by
$\alpha_1=0$ and $\alpha_7=\alpha_4$.
For the moment, let us complete the Hamiltonian analysis. Using the notations of section \ref{killing}, the secondary constraint is
${\cal S} \equiv \partial P$. To compute the remaining constraints, it is convenient to first write the equations of motion:
\begin{eqnarray}
\dot A_i & = & \{ A_i,H_0 \} \; \approx \; \partial_i A_0 + 2 \alpha_2 P_i - \alpha_6 \partial_i (\partial A)
+2\alpha_8 \Delta P_i + \alpha_9 \Delta A_i \, , \label{HamEOMA}\\
\dot P_i & = & \{ P_i,H_0 \} \; \approx \; -2\alpha_1 A_i + 2 \alpha_4 \partial_i (\partial A) -2 \alpha_7 \Delta A_i - \alpha_9 \Delta P_i \, . \label{HamEOMB}
\end{eqnarray}
The tertiary constraint is obtained from the requirement that ${\cal S}\approx 0$ has to be weakly
conserved under time evolution, which means that
\begin{eqnarray}
\dot{\cal S} \approx 0 \; \Longleftrightarrow\; \partial_i \dot P^i \approx 0 \; \Longleftrightarrow\;
{\cal T} \equiv [\alpha_1 + (\alpha_7 - \alpha_4) \Delta ] (\partial A) \approx 0 \; .
\end{eqnarray}
Using suitable boundary conditions, one can replace the constraint $\cal T$ by the condition
\begin{eqnarray}
\partial A \approx 0 \, ,
\end{eqnarray}
except if $\alpha_1 =0$ and $\alpha_4=\alpha_7$, in which case the constraint $\cal S$ is first class, as we have seen previously.
Clearly, the constraints $\cal S$ and $\cal T$ do not commute, and therefore the conservation of $\cal T$ under time evolution does not lead to any new constraint. As a conclusion, the Dirac analysis of the theory closes with one first class constraint ${\cal P} \approx 0$ and the two second class constraints ${\cal S}\approx 0$ and ${\cal T} \approx 0$. This leads to 2 degrees of freedom, as expected.
\subsubsection{Lagrangian}
{Let us focus on the case with $\alpha_1\neq 0$ or $\alpha_4\neq \alpha_7$, in which the theory has one first-class constraint and two second-class constraints.}
The first class constraint allows us to choose a gauge where $A_0=0$. In this gauge, the equations
of motion \eqref{HamEOMA} and \eqref{HamEOMB} simplify into
\begin{eqnarray}
\dot A_i & = & 2 (\alpha_2 + \alpha_8 \Delta) P_i + \alpha_9 \Delta A_i \; , \\
-\dot P_i & = & 2 (\alpha_1 + \alpha_7 \Delta) A_i + \alpha_9 \Delta P_i \, ,
\end{eqnarray}
with the constraints that $\partial P = 0 = \partial A$, which means that both vectors are transverse. From this, we immediately see
that the theory admits only 2 degrees of freedom, which are governed, after decoupling the previous system, by the equation
\begin{eqnarray}
\label{eomforA}
-\ddot A_i + \alpha_9^2 \, \Delta^2 A_i - 4(\alpha_2 + \alpha_8 \Delta)(\alpha_1 + \alpha_7 \Delta) A_i = 0 \, .
\end{eqnarray}
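For transparency, here is a short re-derivation of this equation (our algebra, performed on the constraint surface $\partial A \approx 0 \approx \partial P$): differentiating the first equation of motion and substituting the second gives
\begin{eqnarray}
\ddot A_i \; = \; -4(\alpha_2+\alpha_8\Delta)(\alpha_1+\alpha_7\Delta) A_i - \alpha_9 \Delta \left[ 2(\alpha_2+\alpha_8\Delta) P_i \right] + \alpha_9 \Delta \dot A_i \, ,
\end{eqnarray}
and eliminating the momenta through $2(\alpha_2+\alpha_8\Delta) P_i = \dot A_i - \alpha_9 \Delta A_i$ cancels the $\Delta \dot A_i$ terms and leaves \eqref{eomforA}.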
Notice that this equation is second order in time but higher order (up to fourth order) in space. This ensures that the theory is healthy and does not propagate
Ostrogradski ghosts. {Notice that the presence of higher space derivatives in the equations of motion could mean the existence of generalized instantaneous modes
(or shadowy modes) which would appear in a ``covariantization'' of the theory, similar to what happens in scalar-tensor theories \cite{DeFelice:2018mkq}}.
\medskip
It is also instructive to compute the Lagrangian and study some of its properties. As the Hamiltonian is quadratic, the associated Lagrangian is easily obtained from the Legendre transformation
\begin{eqnarray}
{L}[A_\mu] \; = \; \int d^4x \, \left(P \dot{A} - {\cal V} + A_0 \partial P \right) \, ,
\end{eqnarray}
where the momenta $P^i$ are expressed in terms of the velocities $\dot A_i$ solving the equation of motion \eqref{HamEOMA}.
Formally the momenta variables are given by
\begin{eqnarray}
P_i & = & \frac{1}{2}(\alpha_2 + \alpha_8 \Delta)^{-1} [\dot A_i - \partial_i A_0 + \alpha_6 \partial_i (\partial A) - \alpha_9 \Delta A_i] \nonumber \\
& = & \frac{1}{2}(\alpha_2 + \alpha_8 \Delta)^{-1} \left[F_{0i} + \alpha_6 \partial^j F_{ij} + (\alpha_6 - \alpha_9) \Delta A_i \right] \, ,
\end{eqnarray}
which, to be well-defined, requires suitable spatial boundary conditions.
First, we immediately remark that a non-vanishing $\alpha_8$ coefficient in the Hamiltonian makes the Lagrangian (spatially) non-local. In general, any terms which involve spatial derivatives of the momenta in the Hamiltonian will produce non-local terms in the Lagrangian, even though we started from a local Hamiltonian. For simplicity, we restrict the analysis to local Lagrangians, which, in this case, implies
$\alpha_8=0$. Notice that $P_i$ is not $U(1)$ gauge invariant when $\alpha_6 \neq \alpha_9$.
The calculation of the Lagrangian is now immediate and shows that it contains higher spatial derivatives but not higher time derivatives as expected. This is obviously consistent
with the equation of motion for the vector field $A_i$ \eqref{eomforA}.
\subsubsection{Modified gauge invariant Maxwell theories}
To finish with this example, let us consider quadratic theories which are gauge invariant, i.e.
$\alpha_1=\alpha_7-\alpha_4=0$. {For simplicity, we assume $\alpha_9-\alpha_6=0$ as well as $\alpha_8=0$.} In that case, ${\cal H}_0 \approx 0$ is first class, and the full connection transforms as expected according to $A_\mu \mapsto A_\mu + \partial_\mu \theta$ where $\theta$ is an arbitrary function, under the symmetry. The infinitesimal transformation law of $A_i$ comes from the Poisson action of ${\cal H}_0$,
\begin{eqnarray}
\delta_\varepsilon A_i \; = \; \{A_i , \int d^3x \, \varepsilon(x) {\cal H}_0 \} \, .
\end{eqnarray}
{The transformation law for $A_0$ under gauge transformations can be seen from the
gauge invariance of the full (covariant) Lagrangian}. In that context, the canonical Hamiltonian is simply given by
\begin{eqnarray}
\label{Hamiltoniancurv}
H \; = \; \int d^3x \, \left[- A_0 \partial P + \alpha_2 P^2 - \frac{1}{2} \alpha_4 F_{ij} F^{ij} + \alpha_6 F_{ij} \partial^j P^i \right] \, ,
\end{eqnarray}
and, after some calculations, one finds that the action is given by
\begin{eqnarray}
S[A_0,A_i]\; = \; \frac{1}{2} \int d^4x \; \left[ -\frac{1}{2\alpha_2} F_{0i} F^{0i} + \alpha_4 F_{ij} F^{ij} - \frac{\alpha_6^2}{2\alpha_2}
(\partial_j F^{ij})^2 \right] \, .
\end{eqnarray}
As expected, the action is not Lorentz invariant, it contains higher spatial derivative terms and its equations of motion are given by
\begin{eqnarray}
\partial_i F^{0i}=0 \, , \qquad
\partial_0 F^{0i} + 4 \alpha_2 \alpha_4 \partial_j F^{ij} + \frac{\alpha_6^2}{2} \Delta \partial_j F^{ij}=0 \, ,
\end{eqnarray}
which we can compare to the standard Maxwell equations $\partial_\mu F^{\mu\nu}=0$. Using the usual definitions of the electromagnetic
fields $E^i \equiv F^{0i}$ and $B^i \equiv \varepsilon^{ijk} F_{jk}$, we obtain the following modified Maxwell equations in the vacuum,
\begin{eqnarray}
\text{div} \vec{E} = 0 \, , \quad
(1+ \mu \Delta) \vec{\text{rot}} \vec{B} = \lambda \frac{\partial \vec{E}}{\partial t} \, ,
\end{eqnarray}
where $\mu=\alpha_6^2/(8 \alpha_2 \alpha_4)$ and $\lambda = -1/(4\alpha_2\alpha_4)$. The equations $\text{div} \vec{B}=0$
and $\vec{\text{rot}}\vec{E}+ \partial \vec{B}/\partial t=0$ which are equivalent to the existence of the gauge field $A_\mu$ are
obviously unchanged. Hence, the propagation equations become
\begin{eqnarray}
\Delta \vec{V} - \lambda \frac{\partial^2 \vec{V}}{\partial t^2} + \mu \Delta^2 \vec{V} \; = \; \vec{0} \, ,
\end{eqnarray}
where $\vec{V}$ can be either $\vec{E}$ or $\vec{B}$.
It is obvious that $\lambda$ parametrizes the deviation of the propagation speed from the speed of light and $\mu$ parametrizes the higher-derivative
corrections. As the theory is still linear and $\lambda$ is constant, we can fix it to $\lambda=1$ by a rescaling of the time variable, {provided
that $\lambda$ is neither vanishing nor divergent}.
To close the analysis of this example, let us make a couple of remarks.
First, it is easy to generalize our analysis to cases where the higher derivatives are of order higher than two, by including in $H$ terms with more than 2 spatial derivatives of the fields $A_\mu$. Introducing higher derivatives of the momenta variables would produce non-local actions.
Second, as we briefly discussed below \eqref{Hessian}, one could have started with a dynamical $A_0$ variable in the Hamiltonian framework. For that, one
would have replaced the primary constraint by a more general constraint ${\cal P}(P^0,P^i) \approx 0$
which would mix all the components of the momenta. The analysis would be similar to what we have done. Another way to make $A_0$
dynamical would be to consider ``disformal-like" transformations on the connection which preserve the quadratic form:
\begin{eqnarray}
A_0 \mapsto A_0 + x \partial_i A^i \, , \quad
A_i \mapsto A_i + y \partial_i A_0 \, ,
\end{eqnarray}
where $x$ and $y$ are constant.
\section{Generalization to gravity}
\label{gravity}
In this section, we adapt the previous construction to gravity, and we construct a large class of minimally modified gravity theories from
the Hamiltonian point of view. We first find (sufficient) conditions on the Hamiltonian for the theory to propagate at most two tensorial degrees of freedom.
Then, we illustrate our construction with examples. In particular, we will exhibit a new interesting class of minimally modified gravities, dubbed
$f({\cal H})$ theories.
We start with the ADM parametrization of the metric in terms of the lapse function $N$, the shift vector $N^i$ and the induced spatial
metric $h_{ij}$, as was recalled in the introduction \eqref{ADM}.
\subsection{The modified phase space}
The phase space is parametrized by
the usual ten pairs of canonical variables
\begin{eqnarray}
&&\{h_{ij}(x),\pi^{kl}(y) \} = \delta_{ij}^{kl} \, \delta(x-y) \, , \\
&&\{N^i(x),\pi_j(y)\}=\delta^i_j \, \delta(x-y) \, , \\
&&\{N(x),\pi_N(y)\}=\delta(x-y) \, .
\end{eqnarray}
We want to construct a Hamiltonian in this phase space
which satisfies the properties of minimally modified gravity, i.e.
\begin{itemize}
\item It is invariant under space-like diffeomorphisms;
\item It propagates only 2 tensorial degrees of freedom (or less);
\item The lapse and the shift are non-dynamical.
\end{itemize}
Notice that the last requirement is not necessary, and one can relax the condition that the lapse function is not dynamical at the price of adding a degeneracy condition
as it is done in the context of DHOST theories \cite{Langlois:2015cwa}.
Another way to make the lapse function dynamical would be to perform a disformal transformation on the metric variables.
For simplicity, we will consider only the case where $N$ is not dynamical.
The invariance under space-like diffeomorphisms implies immediately that the canonical Hamiltonian takes the form
\begin{eqnarray}
\label{MMGHamiltonian}
H = \int d^3x \, \sqrt{h} \, \left[ {\cal H}(h_{ij},\pi^{ij},N,\nabla_i) + N^i {\cal V}_i \right] \, ,
\end{eqnarray}
where ${\cal V}_i \equiv -2\nabla^j (\pi_{ij}/\sqrt{h})$ is the usual vectorial constraint of gravity, and $\cal H$ is a priori an arbitrary scalar.
At this stage, with no restriction on the function $\cal H$, it is straightforward to see that the theory generically propagates 3 degrees of freedom.
Following what has been done for Maxwell theory in the previous section, we can immediately show that a necessary condition
(up to a redefinition of the lapse function by a canonical transformation) for the theory to propagate (up to) 2 degrees of freedom is that
\begin{eqnarray}
{\cal H} \; = \; {\cal V} + N \, {\cal H}_0 \, ,
\end{eqnarray}
where ${\cal H}_0$ and ${\cal V}$ are three-dimensional scalars which depend on $h_{ij}$, $\pi^{ij}$ and their covariant spatial derivatives only.
The fact that they are scalars ensures that they commute with the vectorial constraint. Hence, the conservation under time evolution of the
constraints $\pi_N \approx 0$ and $\pi_i \approx 0$ creates respectively the constraints
\begin{eqnarray}
{\cal H}_0 \; \approx \; 0 \, , \qquad {\cal V}_i \; \approx \; 0 \, .
\end{eqnarray}
By construction, the vectorial constraints ${\cal V}_i \approx 0$, together with $\pi_i \approx 0$, are necessarily first class.
Then, requiring that the theory has enough constraints to kill the extra degrees of freedom, as in the vector case, {leads to the condition
that $\{ {\cal H}_0(x) , {\cal H}_0(y) \}$ necessarily has a non-trivial kernel (see \eqref{condHH} and the paragraph below). A sufficient condition is that}
\begin{eqnarray}
\label{H1condition}
\{ {\cal H}_0(x) \, , \, {\cal H}_0(y) \} \; \approx \; 0 \, ,
\end{eqnarray}
{and we restrict our analysis to that case only}, in which the conservation under time evolution of ${\cal H}_0 \approx 0$ implies the condition
\begin{eqnarray}
\{ {\cal H}_0(x) \, , \, {\cal V}(y) \} \; \approx \; 0 \, .
\end{eqnarray}
If this condition is trivially (weakly) satisfied, then there are no tertiary constraints in the theory. The constraints ${\cal H}_0 \approx 0$ and $\pi_N \approx 0$ are also first class,
and the theory propagates $[10 \times 2 -(3\times 2 + 3 \times 2 + 1 \times 2 + 1 \times 2)]/2=2$ degrees of freedom, as in Einstein theory.
Furthermore, in that case, the theory admits an extra symmetry in addition to three-dimensional diffeomorphisms.
If, on the contrary, the condition \eqref{H1condition} is not trivially satisfied, then the theory admits a new constraint which is
\begin{eqnarray}
{\cal T}(x) \; \equiv \; \{ {\cal H}_0(x) \, , H \} \; \approx \; 0 \; .
\end{eqnarray}
The existence of this new constraint is sufficient to conclude that the theory propagates at most 2 degrees of freedom. Indeed, as the constraint
$\pi_N \approx 0$ is necessarily first class (because the theory is invariant by any redefinition of the lapse), the theory admits 7 first class constraints
in addition to the two constraints ${\cal H}_0 \approx 0$ and ${\cal T} \approx 0$, which implies immediately that the theory propagates 2 or less
degrees of freedom. It has exactly 2 degrees of freedom if ${\cal H}_0$ and ${\cal T}$ are second class, and 1 or 0 degrees of freedom if they are first class, depending on whether the Dirac analysis produces further constraints.
\medskip
As a conclusion,
{the following Hamiltonian satisfies the three
conditions recalled at the beginning of this section and thus defines
a class of minimally modified theories of gravity:}
\begin{eqnarray}
&&H = \int d^3x \, \sqrt{h} \, \left[ {\cal V}(h_{ij},\pi^{ij},\nabla_i) + N {\cal H}_0(h_{ij},\pi^{ij},\nabla_i)-2 N^i \nabla^j \left(\frac{\pi_{ij}}{\sqrt{h}}\right)\right] \, , \label{HMMG} \\
&&\text{with} \qquad \{ {\cal H}_0(x) \, , \, {\cal H}_0(y) \} \; \approx \; 0 \, . \label{selfcommutation}
\end{eqnarray}
{In that case, the function ${\cal V}$ is totally free.
Notice that, as $N$ and $N^i$ are not dynamical, the Hamiltonian comes with the primary constraints $\pi_i \approx 0$ and $\pi_N \approx 0$ which are
first class. They are associated to the invariance of the theory under arbitrary redefinitions of the lapse and shift. }
\subsection{Simple examples: ${\cal H}_0$ is the Hamiltonian constraint and ${\cal V}$ is polynomial in $\pi$}
To illustrate the previous general construction, let us consider the simple example defined by
\begin{eqnarray}
\label{Hgr}
{\cal H}_0 \; = \; \frac{1}{{\vert h \vert }} \left( \pi_{ij} \pi^{ij} - \frac{1}{2} \pi^2\right) - R \, , \qquad
{\cal V} \; = \; \lambda \, \pi \, - \mu \, \sqrt{\vert h \vert } \, ,
\end{eqnarray}
where $\lambda$ and $\mu$ are constant, and $R$ is the three-dimensional curvature. We notice that, as ${\cal H}_0$ is the Hamiltonian constraint of gravity, it trivially satisfies the condition
\eqref{H1condition}. In fact, if we fix ${\cal H}_0$ to this expression, one could have chosen any arbitrary function for ${\cal V}$ but for simplicity we make the choice above.
With this example, one can easily compute the explicit action which is given by
\begin{eqnarray}
\label{actionmodel}
S \; = \; \int d^4x \, N \sqrt{h} \, \left[
K_{ij}K^{ij}-K^2+R + \lambda \left( \frac{K}{N} - \frac{3\lambda}{4 N^2} \right) + \frac{\mu}{N}
\right] \, ,
\end{eqnarray}
where $K_{ij}$ is the usual extrinsic curvature
\begin{eqnarray}
K_{ij} = \frac{1}{2N} \left( \dot h_{ij} - \nabla_i N_j - \nabla_j N_i\right) \, ,
\end{eqnarray}
and $K \equiv K_i^i$ is its trace.
Let us remark that the change of variable
\begin{eqnarray}
K_{ij} \; \equiv \overline{K}_{ij} + \frac{\lambda}{2N} h_{ij} \, ,
\end{eqnarray}
allows one to see that the previous action \eqref{actionmodel} takes exactly the same form as the general relativity action
\begin{eqnarray}
S \; = \; \int d^4x \, N \sqrt{h} \left( \overline{K}_{ij} \overline{K}^{ij}-\overline{K}^2+R + \frac{\mu}{N}\right) \,,
\end{eqnarray}
up to the $\mu$-term.
However, as $\overline{K}_{ij}$ cannot be interpreted as the extrinsic curvature of a metric, the theory is not equivalent to general relativity.
To illustrate the difference between the modified theory and general relativity, let us now make the following time-dependent change of variables on the metric components
\begin{eqnarray}
\hat{h}_{ij} \; \equiv \; e^{-\lambda t} h_{ij} \, , \quad
\hat{N}_i \; \equiv \; e^{-\lambda t} N_i \, , \quad
\hat{N} \; \equiv \; e^{-\lambda t/2} N \, .
\end{eqnarray}
Hence, the action takes the form
\begin{eqnarray}
S \; = \; \int d^4x \, \hat{N} \sqrt{\hat{h}}
\left(\hat{K}_{ij}\hat{K}^{ij}-\hat{K}^2+e^{-\lambda t}\hat R + e^{3 \lambda t} \frac{\mu}{\hat{N}}\right) \, , \end{eqnarray}
which makes it obvious that the theory propagates only 2 tensorial degrees of freedom because the modification affects only terms with spatial derivatives in the action.
To finish with this example, let us remark that the action \eqref{actionmodel} can easily be made covariant by introducing, as usual, a scalar field $\phi$ whose gradient is
orthogonal to the space-like hyper-surfaces. Using the results of \cite{Langlois:2017mxy}, one obtains
\begin{eqnarray}
S[g_{\mu\nu},\phi] \; = \; \int d^4x \, \sqrt{\vert g \vert} \left[ {\cal R} - \frac{\lambda}{2} \ln(X^2) \Box \phi + \frac{3\lambda^2}{2} X + 2 \mu \sqrt{-X}\right] \, .
\end{eqnarray}
From this action, it is not at all obvious that only two gravitational degrees of freedom are propagating. But the theory belongs to the class of ``cuscuton"
theories \cite{Afshordi:2006ad,Iyonaga:2018vnu}.
\medskip
A more interesting example would be to assume that ${\cal V}$ is a scalar quadratic in $\pi_{ij}$, in which case, it can be written as
\begin{eqnarray}
{\cal V} \; = \; \frac{1}{{\vert h \vert}}\left( \lambda_1 \pi^{ij}\pi_{ij} -\frac{ \lambda_2}{2} \pi^2 \right) \, ,
\end{eqnarray}
where $\lambda_1$ and $\lambda_2$ are constant.
Using the results of the Hamiltonian analysis of DHOST theories \cite{Langlois:2015skt},
we see that such a Hamiltonian can be obtained from a DHOST theory in the unitary gauge with a k-essence term,
a generalized cubic galileon term and a quadratic DHOST term with
\begin{eqnarray}
\frac{a_1}{N^2}+1 = \frac{1}{N + \lambda_1} \, , \quad
\frac{a_2}{N^2}-1 = \frac{N + \lambda_2}{(N+\lambda_1)(2\lambda_1 - 3 \lambda_2 - N)} \, ,
\end{eqnarray}
in the unitary gauge.
We notice that the theory belongs to (the safe) class I only if $a_1+a_2=0$, which implies that $\lambda_1=\lambda_2$. Otherwise, perturbations about any cosmological background develop gradient instabilities.
Furthermore, all these theories belong by definition to the class of extended cuscuton \cite{Afshordi:2006ad,Iyonaga:2018vnu}.
\subsection{A new class of theories: $f({\cal H})$ theories}
In this section, we introduce a new interesting class of minimally modified theories of gravity. To explain the construction of this class, we first recall that a
Hamiltonian of the form \eqref{MMGHamiltonian} corresponds to a theory with (up to) two tensorial modes only if the ``modified" Hamiltonian constraint ${\cal H}_0$
commutes with itself \eqref{selfcommutation}. The function $\cal V$ is a priori free, but to have a modified theory which is very close to general relativity, we make the choice ${\cal V}=0$.
In order for the theory to propagate gravitational waves, it is necessary that ${\cal H}_0$ contains both $K_{ij}$ terms and three-dimensional curvature terms (like
the Ricci scalar $R$) as in the expression of the Hamiltonian constraint of general relativity \eqref{Hgr}. The presence of such terms makes it difficult to find
an expression of ${\cal H}_0$ which is different from the usual Hamiltonian constraint. However, there is a simple modification that we can consider, namely
\begin{eqnarray}
{\cal H}_0 \; = \; f({\cal H}_{gr}) \qquad \text{with} \quad {\cal H}_{gr} \equiv \frac{ 2\pi_{ij} \pi^{ij} - \pi^2}{2{\vert h \vert }} - R \, ,
\end{eqnarray}
where $f$ is an arbitrary function. As ${\cal H}_{gr}$ is dimensionful, the function $f$ needs at least a mass scale to be defined, which could be the Planck mass or something else,
like the cosmological constant.
In that case, the modified Hamiltonian constraint satisfies the Poisson algebra
\begin{eqnarray}
\{ {\cal H}_0(N_1),{\cal H}_0(N_2)\} \; = \; [f'({\cal H}_{gr})]^2 (N_1 \nabla_i N_2 - N_2 \nabla_i N_1) {\cal V}^i \, ,
\end{eqnarray}
which is, in general, non-linear. Obviously, the Poisson bracket weakly vanishes. Hence, we have found a new class of minimally modified theories of gravity
that we dub $f({\cal H})$ theories with reference to $f(R)$ theories. Contrary to $f(R)$ theories, $f({\cal H})$ theories do not propagate scalar modes, and the main
reason is that the associated equations of motion remain second order.
From a Legendre transformation, one can easily compute the corresponding action. Indeed, the equation of motion for $h_{ij}$ enables us to relate the momenta $\pi_{ij}$
to the extrinsic curvature $K_{ij}$ as follows
\begin{eqnarray}
K_{ij} \; = \; \frac{f'({\cal H}_{gr})}{\sqrt{\vert h \vert}} \left( \pi_{ij} - \frac{1}{2} \pi h_{ij} \right) \, ,
\end{eqnarray}
from which we can implicitly obtain $\pi_{ij}$ in terms of $K_{ij}$ because, in general, this equation is non-linear in $\pi_{ij}$. Nonetheless, one can compute the action
which, after a simple calculation, is given by
\begin{eqnarray}
\label{minimalLag}
S[h_{ij},N,N^i] \; = \; \int d^4x \sqrt{\vert g \vert} \left[ \frac{2}{f'(C)}(K_{ij} K^{ij} - K^2) - f(C) \right] \, ,
\end{eqnarray}
where $C$ is formally obtained by solving the equation
\begin{eqnarray}
\label{constraintC}
C \; = \; \frac{K_{ij} K^{ij} - K^2}{[f'(C)]^2} - R \, .
\end{eqnarray}
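As a quick consistency check (our arithmetic), setting $f(x)=x$, hence $f'=1$, in \eqref{constraintC} gives $C=K_{ij}K^{ij}-K^2-R$, and \eqref{minimalLag} reduces to
\begin{eqnarray}
S \; = \; \int d^4x \sqrt{\vert g \vert} \left[ 2(K_{ij}K^{ij}-K^2) - (K_{ij}K^{ij}-K^2-R) \right] \; = \; \int d^4x \sqrt{\vert g \vert} \left( K_{ij}K^{ij}-K^2+R \right) \, ,
\end{eqnarray}
which is the ADM form of the Einstein-Hilbert action.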
Hence, in the case where $f(x)=x$, one immediately recovers the action of general relativity. However, any other choice for $f$ leads to a different theory which admits
a four dimensional symmetry algebra (the constraints satisfy a deformed diffeomorphisms algebra) and propagates only
two tensorial modes. For instance, the choice $f(x)= x(1 + x/(2\Lambda))$ could be interesting for dark energy because the solutions of the deformed Hamiltonian constraint
contain both a sector with no cosmological constant and a sector with a cosmological constant $\Lambda$. In fact, in any situation where $f(x)=0$ has a non-vanishing solution
$x_0$, there is in the theory an effective cosmological constant given by $\Lambda_{\rm eff}=-x_0/2$. For this reason, this new class of theories is very interesting and certainly deserves a
deeper study.
\subsubsection{Hamilton equations}
We can easily compute the Hamilton equations of motion for any function ${\cal O}(h_{ij},\pi^{ij})$ in the phase space using the definition of the time derivative
\begin{eqnarray}
\dot {\cal O}(x) \; = \; \{ {\cal O}(x) \, , \, H \} \, .
\end{eqnarray}
The explicit form of the time derivative is easily obtained from the Hamilton equations of general relativity due to the fact that
\begin{eqnarray}
\{ {\cal O}(x) \, , \, H \} & = & \int d^3y \, \sqrt{h(y)} \, \left[ f'({\cal H}_{gr}(y)) \,N(y) \{ {\cal O}(x) \, , \, {\cal H}_{gr}(y)\} + N^i(y) \{ {\cal O}(x) \, , \, {\cal V}_i (y)\} \right] \nonumber \\
&+& \int d^3y \, f({\cal H}_{gr}(y)) \, N(y) \{ {\cal O}(x) \, , \, \sqrt{h(y)}\} \, .
\end{eqnarray}
In the vacuum, the second line vanishes due to the constraint, but this is not the case in the presence of matter.
Applying this formula to the spatial metric $h_{ij}$ and its momenta $\pi^{ij}$ and using well-known results of Hamiltonian general relativity (see \cite{Poisson:2009pwt} for instance)
leads immediately to the expressions
\begin{eqnarray}
\dot{h}_{ij} & = & D_i N_j + D_j N_i + \frac{N f'}{\sqrt{h}} \left( 2 \pi_{ij} - \pi h_{ij}\right) \, , \label{Eqhij}\\
\dot{\pi}^{ij} & = & - \sqrt{h} N \left[f' R^{ij} + \frac{1}{2} f h^{ij}\right]+ \sqrt{h} (D^i D^j - h^{ij} D^2) (N f') - D_k \left[ {2 N^{(i} \pi^{j)k} - N^k \pi^{ij}}\right] \nonumber \\
&&-\frac{N f'}{\sqrt{h}} \left[ 2 \pi_k^i \pi^{kj} - \pi \pi^{ij} - \left( \pi_{kl} \pi^{kl} - \frac{1}{2} \pi^2\right) h^{ij}\right] \,,\label{Eqpiij}
\end{eqnarray}
where $f$ and $f'$ are evaluated at ${\cal H}_{gr}$. Combining these two equations would allow us in principle to obtain the modified Einstein equations.
To do so, one has to express $\pi_{ij}$ in terms of the extrinsic curvature $K_{ij}$ using the first equation
\begin{eqnarray}
K_{ij} \; = \; \frac{f'({\cal H}_{gr})}{\sqrt{h}} \left( \pi_{ij} - \frac{1}{2} \pi h_{ij}\right) \, ,
\end{eqnarray}
and then one substitutes the obtained expression in the second equation of motion for $\pi_{ij}$.
When $f(x)=x$, we recover immediately
the Hamilton equations of general relativity using the Hamiltonian constraint ${\cal H}_{gr}=0$.
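Explicitly (our algebra): taking the trace of the relation above gives $K = -\frac{f'}{2\sqrt{h}} \, \pi$, so that
\begin{eqnarray}
\pi_{ij} \; = \; \frac{\sqrt{h}}{f'({\cal H}_{gr})} \left( K_{ij} - K \, h_{ij} \right) \, ,
\end{eqnarray}
where $f'$ is still evaluated at ${\cal H}_{gr}(h_{ij},\pi^{ij})$, so that this relation determines $\pi_{ij}$ only implicitly, as noted previously.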
{In the presence of matter, these equations have to be supplemented with source terms. However, describing explicitly how matter is coupled to the (modified) gravitational field
is subtle and has been analyzed in great detail in \cite{Aoki:2018zcv,Aoki:2018brq}. A ``naive'' minimal coupling\footnote{If the matter is minimally coupled (with no derivative couplings) and is described
by an action $S_M$ associated to an energy-momentum tensor $T^{\mu\nu}$, then the equation for $h_{ij}$ \eqref{Eqhij} is unchanged, the deformed Hamiltonian constraint
becomes
\begin{eqnarray}
f({\cal H}_{gr}) + 16\pi G_N \, N^2 T^{00} \; \approx \; 0 \, ,
\end{eqnarray}
and the equation for the momenta $\pi_{ij}$ contains a source term
\begin{eqnarray}
\dot{\pi}^{ij} \; = \; \dot{\pi}_0^{ij} + \frac{\delta S_M}{\delta h_{ij}} \; = \; \dot{\pi}_0^{ij} + 8\pi G_N \, {N} \sqrt{h} \left( T^{ij} - N^i N^j T^{00} \right) \, ,
\end{eqnarray}
where $ \dot{\pi}_0^{ij}$ is the expression of $\dot{\pi}^{ij}$ in vacuum given by \eqref{Eqpiij}. However, as we said, in general such a coupling leads to new propagating
degrees of freedom in addition to the tensors and the matter.} of the matter fields, for instance, would break the gauge invariance
generated by the first class constraint ${\cal H}_0$ which, as a consequence, would become second class. Therefore, in general, one extra mode (besides those of the matter field) appears in the phase space. A consistent way to introduce the matter field is, before inclusion of the matter fields, to split the first class constraint into a pair of second class constraints by introducing a ``gauge fixing condition''. Since these constraints remain second class after introducing the matter field, the number of gravitational degrees of freedom remains four in the phase space, i.e., two in the real space. This strategy has been successfully applied in \cite{Aoki:2018zcv,Aoki:2018brq} by adding to the Hamiltonian a gauge fixing term ${\cal H}_{\rm{gf}}$ which is, by definition, not commuting with the Hamiltonian constraint. In our case, one needs to introduce a gauge fixing term which does not commute
with ${\cal H}_0$ or equivalently ${\cal H}_{gr}$. Following \cite{Aoki:2018brq}, one could think about adding to the total Hamiltonian a term
which imposes, using a Lagrange multiplier, a new constraint either of the form $ \partial_i {\cal S}\approx 0$ or of the form ${\cal S} \approx 0$ where ${\cal S}$ is a three-dimensional scalar, such that, together with ${\cal H}_0$, they form a pair of second class constraints while the invariance under space-like diffeomorphisms is preserved. The coupling to matter (particularly the choice of ${\cal H}_{\rm{gf}}$) needs to be studied in great detail and goes beyond the scope of the present work. For this reason, we leave this analysis for future investigations. }
\subsubsection{Cosmology}
To illustrate the difference between $f({\cal H})$ theories and general relativity, we consider simple examples. First, let us study the cosmology of these theories in the presence of
a perfect fluid (of density $\rho$ and pressure $p$) which
corresponds to taking a time-dependent lapse function $N(t)$, a vanishing shift vector $N^i=0$, and homogeneous and isotropic spatial metric and momenta as follows
\begin{eqnarray}
h_{ij} \; = \; a^2(t) \delta_{ij} \, , \qquad \pi^{ij} \; = \; b(t) \delta^{ij} \, .
\end{eqnarray}
Here we assume that the spatial slices are flat.
{To make the dynamics in the cosmological sector more interesting, we consider the coupling to matter in the form of a perfect fluid, as we have said previously. In that case, contrary to the generic situation, we do not really need
an explicit form of ${\cal H}_{\rm{gf}}$. Indeed, if the gauge-fixing condition is of the form $\partial_i {\cal S} \approx 0$, then
it is trivially satisfied by FLRW space-time with the (space-independent) time
reparametrization symmetry unbroken (namely, the lapse function is arbitrary).
On the other hand, if the gauge condition is of the form ${\cal S} \approx 0$, it may
imply a specific choice of the lapse function if $\cal S$ involves a fixed function of time, for instance. In this case,
the (space-independent) time reparametrization symmetry is broken.
In either case, the gauge fixing term does not explicitly show up in the equations of motion, and we can consider a minimal coupling to matter (as described in footnote 3)
where the lapse is either free or fixed to a specific value. Hence, the deformed Hamiltonian constraint simplifies drastically and becomes}
\begin{eqnarray}
\label{ModGcosmo}
f({\cal H}_{gr})+ 16\pi G_N \, \rho \; = \; 0 \qquad \text{with} \quad {\cal H}_{gr}=-\frac{3}{2} \left( \frac{b}{a}\right)^2 \, .
\end{eqnarray}
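For completeness, the reduction of ${\cal H}_{gr}$ is immediate (our arithmetic): with $h_{ij}=a^2\delta_{ij}$ and $\pi^{ij}=b\,\delta^{ij}$ one has $\pi_{ij}=a^4 b\,\delta_{ij}$, $\pi=3a^2 b$ and $\vert h \vert = a^6$, so that
\begin{eqnarray}
{\cal H}_{gr} \; = \; \frac{2\pi_{ij}\pi^{ij}-\pi^2}{2\vert h \vert} \; = \; \frac{6 a^4 b^2 - 9 a^4 b^2}{2 a^6} \; = \; -\frac{3}{2}\left(\frac{b}{a}\right)^2 \, .
\end{eqnarray}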
Furthermore, the Hamilton equations of motion reduce to
\begin{eqnarray}
\dot{a} = - \frac{N f'({\cal H}_{gr})}{2} b \, , \qquad \dot{b} = - \frac{N}{2} a \left[ f({\cal H}_{gr}) + f'({\cal H}_{gr}) \left(\frac{b}{a} \right)^2- 16\pi G_N \, p \right]\, .
\end{eqnarray}
{Notice that FLRW cosmology could also be analyzed starting from the Lagrangian \eqref{minimalLag}
where $C$ has been defined by the relation \eqref{constraintC}. The result is, as expected, the same as in the
Hamiltonian formalism.}
In general, the Friedmann equations are strongly modified compared to the classical ones. To write them,
it is useful to introduce
\begin{eqnarray}
F(\rho)=f^{-1}(-16\pi G_N \, \rho) \quad \Longrightarrow \quad f'({\cal H}_{gr}) = - \frac{16\pi G_N}{F'(\rho)} \, ,
\end{eqnarray}
in order to reformulate the previous three equations (with $N=1$) equivalently as follows
\begin{eqnarray}
\left( \frac{b}{a}\right)^2 = -\frac{2}{3} F(\rho) \, , \quad
b = \frac{F'(\rho)}{8 \pi G_N} \dot{a} \, , \quad
\dot{b}= 8\pi G_N {a} \left[ \rho+ p - \frac{2 F(\rho)}{3 F'(\rho)} \right] \, ,
\end{eqnarray}
which lead to the following modified Friedmann equations
\begin{eqnarray}
&& H^2= -\frac{2}{3}(8\pi G_N)^2\frac{F(\rho)}{[F'(\rho)]^2} \, , \\
&& F'(\rho) \frac{\ddot a}{a} - 3H^2 F''(\rho)(\rho+p) = (8\pi G_N)^2 \left[ \rho+p - \frac{2F(\rho)}{3F'(\rho)}\right] \, ,
\end{eqnarray}
whereas the conservation equation for the fluid remains unchanged
\begin{eqnarray}
\dot{\rho} + 3H (\rho+p) \; = \; 0 \, .
\end{eqnarray}
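As a quick cross-check (our arithmetic), the first of these equations follows by eliminating $b$ between the first two relations above:
\begin{eqnarray}
\left(\frac{b}{a}\right)^2 \; = \; \frac{[F'(\rho)]^2}{(8\pi G_N)^2}\left(\frac{\dot a}{a}\right)^2 \; = \; -\frac{2}{3} F(\rho)
\quad \Longrightarrow \quad
H^2 \; = \; -\frac{2}{3}(8\pi G_N)^2 \frac{F(\rho)}{[F'(\rho)]^2} \, .
\end{eqnarray}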
When $f(x)=x$, one immediately recovers the usual Friedmann equations.
Furthermore, in vacuum (when $\rho=0=p$), these equations admit a self-accelerating solution if $F(0) < 0$.
This is for instance the case for
\begin{eqnarray}
\label{examplefx}
f(x) \; = \; x \left(1 + \frac{x}{2\Lambda} \right) \qquad \Longrightarrow \quad
F(\rho) \; = \; {\Lambda} \left[ -1 \pm \sqrt{1- 32 \pi G_N \rho/\Lambda}\right]
\end{eqnarray}
where $\Lambda$ is a non-negative constant. The function $F(\rho)$ has two branches, and the minus branch, which is such that
$F(0)=-2\Lambda <0$, admits a self-accelerating solution in vacuum with cosmological constant $\Lambda$. This result has a simple
interpretation. Indeed, in vacuum, the modified Hamiltonian constraint reduces to $f({\cal H}_{gr})=0$ whose solutions fall into two branches:
${\cal H}_{gr}=0$ which corresponds to general relativity with no cosmological constant and ${\cal H}_{gr}=-2\Lambda$ which corresponds to general
relativity with a cosmological constant $\Lambda$. In general, any deformation of general relativity associated with $f(x)$ admits a self-accelerating solution if $f(x)=0$
admits a negative solution $x_0$.
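The inversion leading to $F(\rho)$ in \eqref{examplefx} is elementary (our algebra): solving $f(x) = -16\pi G_N \, \rho$ for $x$ gives
\begin{eqnarray}
x + \frac{x^2}{2\Lambda} \; = \; -16\pi G_N \, \rho
\quad \Longleftrightarrow \quad
x \; = \; \Lambda \left[ -1 \pm \sqrt{1 - 32 \pi G_N \rho/\Lambda} \right] \, ,
\end{eqnarray}
and the minus branch indeed gives $F(0)=-2\Lambda$, while the plus branch gives $F(0)=0$ and connects to general relativity.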
{Notice that in the absence of matter,
the FLRW background reduces to a de Sitter spacetime and that the
analysis of scalar perturbations about the de Sitter background
(without matter nor gauge fixing term) confirms that no scalar modes
are propagating in these theories.}
\section{Conclusion}
In this paper, we constructed theories of minimally modified gravity (MMG) from a Hamiltonian point of view.
To illustrate the construction, we started in section \ref{Maxwell} with a complete study of minimally modified Maxwell theories which propagate 2 (vectorial) degrees of freedom in the
4-dimensional Minkowski space-time. Maxwell theory provides us with a simpler but very interesting context to present the main ingredients that enter in the construction of minimally modified gravity theories from a Hamiltonian point of view. Then, we considered the most interesting case of gravity.
We started with the phase space of general relativity parametrized with 10 pairs of canonically conjugate variables (the metric components and their momenta) and whose dynamics is governed by the Hamiltonian and vectorial constraints. We modified the theory in such a way that, first, the lapse function and the shift vector remain non-dynamical (i.e. with vanishing conjugate momenta), second, the theory is still invariant under 3D diffeomorphisms, and third, the theory propagates only two tensorial degrees of freedom. We found that these three requirements lead to a Hamiltonian of the form \eqref{HMMG} with the condition \eqref{selfcommutation}.
We showed that these MMG theories encompass the so-called
cuscuton theories (in the unitary gauge) which are (higher derivative) scalar-tensor theories with only two tensorial modes. In these theories, the scalar degree of freedom is in fact
a shadowy mode \cite{DeFelice:2018mkq} and thus does not propagate. Notice that our construction naturally extends the cuscuton models to non-local theories which involve
infinite spatial derivatives.
We also found a particularly interesting and simple novel class of MMG whose Hamiltonian differs from the Hamiltonian of general relativity by the fact that the Hamiltonian constraint ${\cal H}_{gr}$ has been replaced by $f({\cal H}_{gr})$ where $f$ is an arbitrary function. We dubbed them $f({\cal H})$-theories.
The class of $f({\cal H})$-theories opens numerous new windows in cosmology and in astrophysics. We have quickly studied cosmological solutions
for a generic choice of function $f(x)$, but it would be interesting to make a systematic analysis of cosmological perturbations and of
the constraints that observations put on these theories if they account for dark energy.
For that, it is important to first understand in detail how to consistently couple matter in these theories following the analysis of \cite{Aoki:2018zcv,Aoki:2018brq}. This would also allow us to study, for instance, the structure of stars in these theories and to see how Newton's laws are modified in this framework.
From a more formal point of view, we are curious to understand the relations and the differences
with the very well-studied $f(R)$ or $f(R,T)$ theories. We hope to investigate all these questions in the future.
\subsection*{Acknowledgments}
We want to thank David Langlois for very interesting and useful discussions.
The work of S.M. was supported by Japan Society for the Promotion of
Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) No.
17H02890, No. 17H06359, and by World Premier International Research
Center Initiative (WPI), MEXT, Japan. He is grateful to Institut Denis
Poisson for hospitality during his stay.
\bibliographystyle{utphys}
\section{Introduction}
One of the dreams in physics, chemistry and materials science is to be able to monitor the dynamical behavior of matter with atomic spatial and temporal resolution, i.e. $0.1$~nm and $100$~fs, and thus enable the investigation\cite{Sciaini2011a,King2005} of phase transitions, chemical reactions and conformational changes at the most fundamental level.
One of the techniques which has emerged recently and is developing at a rapid pace is ultrafast electron diffraction\cite{Sciaini2011a} (UED). This technique uses femtosecond laser pulses to excite a dynamical process in a crystalline material, which is subsequently probed by a synchronized, ultrashort electron bunch.
Measurements of transverse beam quality have shown that electron pulses as cold as $10$~K can be produced using femtosecond photoionization\cite{Engelen2013,Engelen2014}. It has also been shown that the ultracold electron source is capable of producing high quality diffraction patterns\cite{VanMourik2014a,Speirs2015a}.
The pulse length of the ultracold electron bunches extracted from a laser cooled gas was measured using a RF deflecting cavity\cite{Franssen2017} and resulted in pulse lengths of $\sim 20~$ps for femtosecond laser ionization. Similarly, electron pulse lengths of $130~$ps were measured using a high voltage deflector when using a two-color multi-photon excitation process\cite{PhysRevA.95.053408}.
These measured pulse lengths were limited by the energy spread of the electron beam. The energy spread causes degradation in temporal resolution by the lengthening of the electron bunches. This effect can be cancelled by compressing the electron pulses using established RF techniques\cite{VanOudheusden2010a,Pasmans2013}. The ultimate temporal resolution is governed by the longitudinal emittance which is determined by the uncorrelated pulse length and energy spread. The temporal resolution in UED experiments can be further improved by cancelling the arrival time jitter induced by RF phase fluctuations\cite{Franssen2017a} in bunch compression cavities.
The energy spread of the electron bunches extracted from the ultracold electron source has not yet been measured directly. This article presents the first experiments using a specially designed Wien filter. The device can eventually be used to investigate the effects of space charge forces in picosecond electron bunches. Simultaneous measurement of both the energy spread and pulse length will allow us to determine the longitudinal phase space distribution and thus the ultimately achievable temporal resolution in UED experiments.
\section{The ultra cold electron source}
The ultracold electron bunches are created by near-threshold femtosecond photoionization of laser cooled and trapped rubidium-85 atoms, as is illustrated in \Fref{mot}. First, \Fref{mot}a, rubidium atoms are trapped and cooled in a magneto-optical trap (MOT). Second, \Fref{mot}b, the trapping laser is temporarily switched off for $1~\mu$s, so that the atoms relax back to their ground state. After the atoms have relaxed to the $5S_{\frac{1}{2}}F=3$ state, a $780$~nm excitation laser beam is switched on, creating a small cylinder (radius of $25~\mu$m) of excited atoms in the $5P_{\frac{3}{2}} F=4$ state. Subsequently a small volume of rubidium atoms is ionized by a pulsed $480$~nm femtosecond ionization laser beam, \Fref{mot}d, intersecting the excitation laser beam at right angles, resulting in a cloud of cold electrons and ions. The shape and size of the ionization volume can be controlled by the overlap of the excitation and ionization laser beams\cite{PhysRevLett.95.164801,McCulloch2011}. Finally, \Fref{mot}c, the electrons are extracted by a static electric field\cite{Taban2008}.
\begin{figure}[htb!]
\centering
\includegraphics[width=14cm]{fig1.pdf}
\caption{\label{mot}Schematic representation of the electron bunch production sequence. The six red beams in figure~(a) represent the cooling laser beams and the two blue coils the magnetic field coils in anti-Helmholtz configuration. In figure~(b) the red beam indicates the excitation laser and the blue beam the ionization laser. Figure~(c) illustrates the acceleration process in a static electric field. Figure~(d) illustrates the two-step ionization scheme. The atoms are first excited from the $5S_{1/2}F=3$ state to the $5P_{3/2}F=4$ state, from which they are ionized using a $480~$nm photon.}
\end{figure}
\Fref{beamline} shows a schematic representation of the electron beam line. First the electron beam passes through a magnetic solenoid lens which is used to focus the beam onto the detector. The Wien filter is positioned a distance $d_{wien}=0.68~$m from the source. The electrons are detected by a micro channel plate in combination with a phosphor screen, positioned at a distance $d_{det}=1.9~$m from the source.
\begin{figure*}[htb!]
\centering
\includegraphics[width=14cm]{fig2.pdf}
\caption{\label{beamline}Schematic representation of the beam line, consisting of an electrostatic accelerator, a magnetic solenoid lens, the Wien filter and a MCP with a phosphor screen. The light emanating from the phosphor screen is imaged onto a CCD camera.}
\end{figure*}
The longitudinal energy spread, in the absence of space charge forces, is to a good approximation governed by the width of the ionization laser beam in the direction of the acceleration field\cite{Franssen2017,Reijnders2009}. The relative energy spread is given by
\begin{equation}\frac{\sigma_{U}}{U}=\frac{\sigma_{\rm ion}}{d_{acc}}\label{relenergylasersize},\end{equation}
with $\sigma_{\rm ion}$ the rms size of the ionization laser profile in the acceleration ($\hat{z}$) direction, $\sigma_{U}$ the rms energy spread, $U=eE_{acc}d_{acc}$ the average bunch energy, $e$ the elementary charge, $E_{acc}$ the acceleration field and $d_{acc}=13.5~$mm the effective acceleration length\cite{Taban2008,Reijnders2009}. The initial energy spread due to thermal motion is negligible compared to the energy spread due to the finite ionization laser beam size\cite{Franssen2017} which means that we can neglect the Boersch effect\cite{Boersch1954}.
\section{Wien filter}
A Wien filter is an electromagnetic element that separates charged particles by their velocity\cite{Wienfilterart}. An ideal Wien filter consists of a static and uniform magnetic field, with magnitude $B_{0}$, perpendicular to a static and uniform electric field, with magnitude $E_{0}$. Both fields should be perpendicular to the direction of the electron beam that is passing through. A schematic representation of an ideal Wien filter is shown in \Fref{beamline}.
When an electron is injected into such an ideal field, it will gain a transverse momentum with a magnitude that depends on the particle velocity. The particles will exit the Wien filter with a transverse momentum distribution that is correlated to the initial longitudinal momentum distribution. This transverse momentum distribution will result in a spread on a detector screen, which therefore maps the longitudinal momentum distribution onto the screen.
For an electron beam moving in the $\hat{z}$ direction the longitudinal velocity $v_{z}$ will be much greater than the transverse velocities: $v_{z}\gg v_{x},v_{y}$. In a Wien filter with a uniform electric field $\vec{E}=E_{0}~\hat{y}$ and a uniform magnetic field $\vec{B}= B_{0}~\hat{x}$, that both extend over a length $L_{w}$ in the $\hat{z}$ direction, a particle with velocity $\vec{v}=v_{z}~\hat{z}$ is deflected by an angle
\begin{equation}\theta_{y}=\theta_{c} \left(1-\frac{E_{0}}{v_{z} B_{0}}\right).\label{defllor}\end{equation}
with $\theta_{c}\equiv \omega_{c}t_{w}$ the cyclotron angle, $\omega_{c}= \frac{eB_{0}}{m}$ the cyclotron frequency and $t_{w}=\frac{L_{w}}{v_{z}}$ the time spent inside the Wien filter. \Eref{defllor} shows us that a particle with a velocity
\begin{equation}\vec{v}_{c}=\frac{E_{0}}{B_{0}}\hat{z}\label{Wiencriterium}\end{equation}
is not deflected.
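Equation \eqref{defllor} can also be understood in the impulse approximation (a sketch, valid for small deflections, with the overall sign fixed by the orientations of the fields): the net transverse Lorentz acceleration is proportional to $v_{z}B_{0}-E_{0}$ and acts during the transit time $t_{w}$, so that
\begin{equation}
\theta_{y} \approx \frac{\Delta v_{y}}{v_{z}} = \frac{e\left(v_{z}B_{0}-E_{0}\right)}{m v_{z}}\, t_{w} = \frac{e B_{0} L_{w}}{m v_{z}}\left(1-\frac{E_{0}}{v_{z}B_{0}}\right) = \theta_{c}\left(1-\frac{E_{0}}{v_{z}B_{0}}\right).
\end{equation}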
A particle with a velocity $\vec{v}(t)=\vec{v}_{c}+\delta\vec{v}(t)$ entering the Wien filter at time $t=0$ will be deflected by gaining additional longitudinal and transverse velocities described by
\numparts
\begin{eqnarray}
\label{wienexchange1}
\delta v_{y}(t)=\delta v_{z}(0) \sin(\omega_{c}t),\\
\label{wienexchange2}
\delta v_{z}(t)=\delta v_{z}(0) \cos(\omega_{c}t),
\end{eqnarray}
\endnumparts
with $t$ the time spent inside the Wien filter. At one quarter of a cyclotron orbit, $\theta_{c}=\frac{\pi}{2}$, the longitudinal velocity $\delta v_{z}(0)$ is fully converted into transverse velocity; this is illustrated in \Fref{Wiencyclmotion}.
\begin{figure*}[htb!]
\centering
\includegraphics[width=12cm]{fig3.pdf}
\caption{\label{Wiencyclmotion}Illustration of the longitudinal and transverse momentum exchange, described by \Eref{wienexchange1} and \Eref{wienexchange2}, as a function of cyclotron angle $\theta_{c}$ for different particles.}\end{figure*}
The cyclotron angle $\theta_{c}$ can be calibrated by measuring the change in average deflection angle $\Delta \theta_{y}$ of an electron pulse when varying the magnetic field by an amount $\Delta B$. These two quantities are related by
\begin{equation}
\Delta \theta_{y} = \theta_{c}~\frac{\Delta B}{B_{0}},\label{caldeflec}
\end{equation}
which is easily verified by expanding \Eref{defllor} for small $\frac{\Delta B}{B_{0}}$ around $\theta_{y}=0$. Knowing the cyclotron angle $\theta_{c}$, we can calculate the relative energy spread by measuring the rms streak length $\sigma_{y}$ on a screen. The rms streak length is given by
\begin{equation}
\sigma_{y}^{2}=\sigma_{\rm off}^{2}+ \left(\frac{d_{det}-d_{wien}}{2} \frac{\sigma_{U}}{U}\sin(\theta_{c})\right)^{2}\label{energyfit},
\end{equation}
with $\sigma_{\rm off}$ the rms transverse beam size when the Wien filter is turned off.
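A sketch of where \Eref{energyfit} comes from (our derivation, treating the streaking as a kick at the position of the Wien filter and using $U=\frac{1}{2}m v_{z}^{2}$, so that $\frac{\delta v_{z}}{v_{z}} = \frac{\delta U}{2U}$): a particle with energy offset $\delta U$ leaves the filter with $\delta v_{y} = \delta v_{z}\sin(\theta_{c})$ and then drifts over the distance $d_{det}-d_{wien}$, giving a displacement
\begin{equation}
y = \left(d_{det}-d_{wien}\right)\frac{\delta v_{y}}{v_{z}} = \frac{d_{det}-d_{wien}}{2}\,\frac{\delta U}{U}\,\sin(\theta_{c}),
\end{equation}
which, added in quadrature to the unstreaked spot size $\sigma_{\rm off}$, reproduces \Eref{energyfit}.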
\subsection{Wien filter design}
The Wien filter has been designed and built such that it can be inserted into a CF63 vacuum cube. \Fref{Wiendesign} shows a schematic representation of the design.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{fig4.pdf}
\caption{\label{Wiendesign}Schematic representation of the Wien filter. The blue line indicates the electron beam. An aluminum cylinder (light grey) is mounted onto the vacuum flange. The green parts represent PEEK insulators. The coils generating the magnetic field $B_{0}~\hat{x}$ and the high-voltage electrodes generating the electric field $E_{0}~\hat{y}$ are indicated in orange.}
\end{figure}
The Wien filter electrostatic field is generated by a pair of capacitor plates. The electrodes are separated by a distance of $7$~mm. Electric field strengths up to $300$~kV/m can be generated. The high-voltage electrodes are suspended by two PEEK insulator wedges. The static magnetic field is produced by a pair of coils in the Helmholtz configuration: the $20$~mm radius of the coil is equal to the distance between the centers. The two coils are wound from $0.6$~mm insulated copper wire with $54$ turns on each coil. The Wien filter is able to produce on-axis magnetic fields up to $5$~mT, which is limited by coil heating.
A $54$~mm diameter, $0.2$~mm thick $\mu$-metal tube provides magnetic shielding. The $\mu$-metal shielding suppresses the magnetic field outside the Wien filter. Without this shielding it is practically impossible to direct the electron beam through the Wien filter. The on-axis magnetic field profiles, both simulated and measured, with and without the $\mu$-metal shielding are depicted in \Fref{WienfilterBfield}. The fields were simulated using the CST\cite{CST} software package and measured using a Hall probe.
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{fig5.pdf}
\caption{\label{WienfilterBfield}The on-axis magnetic field produced by the Wien filter with and without $\mu$-metal shielding. All fields have been normalized with respect to the maximum field without $\mu$-metal. The solid grey line represents the analytically calculated magnetic field without shielding\cite{simpson1829} and the solid black line the simulated magnetic field with $\mu$-metal shielding. The data points represent the measured field profile, both with and without $\mu$-metal.}
\end{figure}
\section{Results and discussion}
First we have used the Wien filter to measure the average bunch energy of the electron pulses, which agrees with time-of-flight measurements (\Sref{avgbunche}). Thereafter we calibrated the Wien filter (\Sref{seccalib}) and determined the relative energy spread of the electron bunches (\Sref{secEspread}).
All experiments have been performed with $\sim20$ ps electron bunches containing $\sim10^{3}$ electrons produced by a $100~$fs ionization laser pulse\cite{Franssen2017}. The rms spot size of the ionization laser at the position of the MOT was measured to be $\sigma_{\rm ion} = 90\pm10~\mu$m, resulting in an expected rms relative energy spread $\frac{\sigma_{U}}{U}= 0.67\pm0.07\%$ according to \Eref{relenergylasersize}.
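For reference, the expected value follows directly from \Eref{relenergylasersize}:
\begin{equation}
\frac{\sigma_{U}}{U} = \frac{\sigma_{\rm ion}}{d_{acc}} = \frac{90~\mu\mathrm{m}}{13.5~\mathrm{mm}} \approx 6.7\times10^{-3}.
\end{equation}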
\subsection{Average bunch energy}\label{avgbunche}
The electric field in the Wien filter has been determined by measuring the voltage across the Wien filter electrodes and the magnetic field by measuring the current running through the coils. The average bunch energy $U$ has been determined using the electric and magnetic field strengths for which no average deflection inside the Wien filter occurred, \Eref{Wiencriterium}.
\Fref{WienEB} shows the electric and magnetic field strengths for which no average deflection occurs together with a linear fit. The electric field is proportional to the voltage on the electrodes and the magnetic field is proportional to the current running through the coils. From the slope of the fit shown in \Fref{WienEB} the average electron beam energy $U$ was determined to be $U=8.8\pm0.2~$keV.
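In the non-relativistic approximation (a reasonable one here, since $v_{c}/c \approx 0.19$ at these energies), the fitted slope is related to the bunch energy through
\begin{equation}
\frac{E_{0}}{B_{0}} = v_{c} = \sqrt{\frac{2U}{m}} \quad \Longrightarrow \quad U = \frac{m}{2}\left(\frac{E_{0}}{B_{0}}\right)^{2}.
\end{equation}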
\begin{figure*}[htb!]
\centering
\includegraphics[width=10cm]{fig6.pdf}
\caption{\label{WienEB}The electric field as a function of the magnetic field, such that the center of the electron bunch is not deflected by the Wien filter. The solid line represents a fit with \Eref{Wiencriterium}.}
\end{figure*}
The average electron energy was also determined by an ion time-of-flight scan which was done by changing the polarity of the accelerating field. This measurement allows us to determine the position of the ionization volume inside the accelerating structure\cite{Engelen2014,Reijnders2009}. \Fref{TOFscan} shows the ion time-of-flight for various accelerator voltages $V_{acc}$ together with a fit to our model\cite{Engelen2014}.
\begin{figure*}[htb!]
\centering
\includegraphics[width=10cm]{fig7.pdf}
\caption{\label{TOFscan}The measured ion time-of-flight as a function of the potential $V_{acc}$ across the accelerating structure together with a fit to our model (solid line). The uncertainty in the data is smaller than the dot size.}
\end{figure*}
From the fit we find that the average bunch energy $U=(0.50\pm 0.01)\cdot e V_{acc}$. In all the experiments we have used an accelerator potential $V_{acc}=17.5~$kV which results in an average bunch energy of $U=8.7\pm0.2~$keV, which agrees well with the bunch energy measured using the Wien filter.
\subsection{Calibration}\label{seccalib}
We have calibrated the cyclotron angle $\theta_{c}$ by measuring the average deflection $\Delta y = \frac{\Delta v_{y}}{v_{z}} (d_{det}-d_{wien})$ as a function of magnetic field amplitude at a constant electron energy. \Fref{calibrationthetac} shows the deflection for various relative magnetic field strengths, which scales $\propto \frac{\Delta I}{I}$ with $I$ the current running through the coils. The linear fit through the data in \Fref{calibrationthetac} was used together with \Eref{caldeflec} to determine the cyclotron angle $\theta_{c}$. The magnetic field in \Fref{calibrationthetac} ranges from $B_{0}\approx 1~$mT (light grey), $B_{0}\approx 1.8~$mT, $B_{0}\approx 2.6~$mT, to $B_{0}\approx 3.3~$mT (black).
\begin{figure*}[htb!]
\centering
\includegraphics[width=10cm]{fig8.pdf}
\caption{The average bunch deflection $\Delta y$ for various relative magnetic field strengths $\frac{\Delta B}{B_{0}} \propto \frac{\Delta I}{I}$ together with linear fits to determine the cyclotron angle $\theta_{c}$. The magnetic field ranges from $B_{0}\approx 1~$mT (light grey), $B_{0}\approx 1.8~$mT, $B_{0}\approx 2.6~$mT, to $B_{0}\approx 3.3~$mT (black).\label{calibrationthetac}}
\end{figure*}
\Fref{cyclotroncal} shows the calibrated cyclotron angle as a function of magnetic field strength $B_{0}$ together with the theoretical curve which was calculated using $\theta_{c}=\frac{eB_{0}L_{w}}{mv_{z}}$. The figure shows that the Wien filter is working as expected.
\begin{figure*}[htb!]
\centering
\includegraphics[width=10cm]{fig9.pdf}
\caption{The calibrated cyclotron angle (circles) and the theoretical curve (solid) as function of magnetic field strength $B_{0}$ for a fixed electron energy $U=8.74~$keV. The uncertainty in the data is smaller than the dot size.\label{cyclotroncal}}
\end{figure*}
\subsection{Energy spread}\label{secEspread}
\Fref{Wienstreak} represents a false color plot of the measured spatial electron distribution as a function of cyclotron angle, $\theta_{c}$, from $\theta_{c}=0$ (left) to $\theta_{c}=0.55$ (right). The figure clearly shows that the electron bunch is streaked in the vertical direction. The rms spot size in the $\hat{y}$-direction together with the cyclotron angle $\theta_{c}$ allows us to determine the relative energy spread of the electron bunch.
\begin{figure}[htb!]
\centering
\includegraphics[width=14cm]{fig10.pdf}
\caption{\label{Wienstreak}A false color plot of the measured electron distribution as a function of Wien filter cyclotron angle $\theta_{c}$, from $\theta_{c}=0$ (left) to $\theta_{c}=0.55$ (right). The electron pulse is streaked in the $\hat{y}$ direction.}
\end{figure}
\Fref{rmsstreak} shows the rms size of the electron bunch as a function of the cyclotron angle. The circles indicate the rms size parallel to the streak axis ($\hat{y}$-direction) and the squares the rms size perpendicular to the streak axis ($\hat{x}$-direction). The Wien filter should exert no forces along the latter direction, which is mostly confirmed by the data (squares) presented in \Fref{rmsstreak}.
The rms relative energy spread was determined by fitting the streak data with \Eref{energyfit}, with $\frac{\sigma_{U}}{U}$ as the only fitting parameter. This resulted in an rms relative energy spread $\frac{\sigma_{U}}{U}=0.64\pm0.09\%$. This agrees well with the expected value $\frac{\sigma_{U}}{U}=0.67\pm0.07\%$, which is based on the rms size of the ionization laser beam. This is not surprising since the model (\Eref{relenergylasersize}) was already indirectly verified by the energy spread dominated pulse length measurements\cite{Franssen2017} and ion time-of-flight energy spread measurements\cite{Reijnders2009}.
\begin{figure*}[htb!]
\centering
\includegraphics[width=14cm]{fig11.pdf}
\caption{\label{rmsstreak} The rms electron spot sizes as a function of cyclotron angle $\theta_{c}$. The circles represent the data parallel to the streak axis and the squares the data perpendicular to the streak axis. The solid line is a fit with \Eref{energyfit}. The uncertainty in the data is smaller than the dot size.}
\end{figure*}
\section{Conclusions and outlook}
The Wien filter has been used to measure the relative energy spread of electron bunches produced by near-threshold femtosecond photoionization of a laser-cooled and trapped ultracold atomic gas. These are the first measurements of the energy spread of the ultracold electron source.
The Wien filter has been used to determine the average bunch energy $U=8.8\pm0.2~$keV, which agrees with time-of-flight measurements that yielded $U=8.7\pm0.2~$keV.
The energy spread measurement resulted in $\frac{\sigma_{U}}{U}=0.64 \pm 0.09~\%$. This relative energy spread agrees well with the expected value $\frac{\sigma_{U}}{U}= 0.67\pm0.07\%$, which is based on the rms size of the ionization laser beam.
Furthermore, the energy spread measurements in combination with pulse length measurements\cite{Franssen2017} will allow us to investigate the full longitudinal phase space distribution of the electron bunches extracted from the ultracold source. This would make it possible to investigate the longitudinal beam emittance and thus the ultimately achievable temporal resolution in UED experiments for a given energy spread.
\ack
This research is supported by the Institute of Complex Molecular Systems (ICMS) at Eindhoven University of Technology. Furthermore, we thank Eddy Rietman and Harry van Doorn for expert technical assistance.
\section*{References}
\bibliographystyle{iopart-num}
\providecommand{\newblock}{}
\section{Introduction}
The modern theory of large-scale structure of the Universe is based on the
assumption
that this structure is formed as the result of development of gravitational
instability
from small initial perturbations of density or gravitational potential. As a
rule, these
perturbations are gaussian, but some versions of nongaussian perturbations have
also
been considered.
In this paper we analyze the problem, inherent to practically all the
cosmological cold
dark matter models of invisible axion, that concerns primordial inhomogeneity in
the
distribution of the energy of coherent oscillations of the axion field. This
problem,
referred to as the problem of {\it archioles}, invokes a nongaussian component in
the
initial perturbations for axionic cold dark matter. Archioles are a formation
that
represents a replica of the percolation Brownian vacuum structure of axionic
walls
bounded by strings, which is fixed in the strongly inhomogeneous primeval
distribution of cold dark matter. They can give rise to interesting alternative
scenarios
of structure formation that relate the mechanism responsible for structure
formation to
inhomogeneities of the type of topological defects.
The analysis of observable effects associated with {\it archioles} leads to a
new
model-independent constraint on the mass of invisible axion. Such analysis is
also
very useful for further development of full cosmological theories, based on the
model
of horizontal unification (MHU), which has been proposed in \Ref{13} as the
minimal phenomenology of everything, including the physics of inflation,
baryosynthesis, and dark matter. In particular, the combination of the archioles effect with
the consideration of nonthermal symmetry restoration in the horizontal phase
transitions at the inflation stage puts an upper limit on the scale of family
symmetry
breaking (the main parameter of MHU) and consequently severely reduces the set
of
possible realizations of dark matter scenarios in this model.
\section{Formation of the archiole structure in the early
\mbox{Universe}}
In the standard invisible axion scenario \Ref{1} the breaking of the Peccei-Quinn
symmetry is induced by the complex $SU(3)\bigotimes SU(2)\bigotimes U(1)$ --
singlet Higgs field $\phi$ with a "Mexican hat" potential
\begin{equation}
\label{Vmex}
V(\phi )=\frac{\lambda}{2}\left(\phi^+\phi -F_a^2\right)^2
\end{equation}
Such a field can be represented as $\phi =F_a\exp (i\theta )$, where $\theta =a/F_a$ and
$a$
is the angular Goldstone mode-axion. QCD instanton effects remove the vacuum
degeneracy and induce an effective potential for $\theta$
\begin{equation}
\label{T}
V(\theta )=\Lambda^4_1(1-\cos(\theta N))
\end{equation}
Below we will simply assume for the standard axion that $N=1$. In the context of the standard big bang scenario it is usually assumed that the phase transition with
$U(1)$ --
symmetry breaking occurs when the Universe cools below the temperature
$T\cong F_a$. Thus, in the standard case the crucial assumption is that from the
moment of the PQ phase transition and all the way down to the temperatures
$T\cong\Lambda_{QCD}$, the bottom of the potential \Eq{Vmex} is exactly flat and
there is no preferred value of $a$ during this
period (the term given by \Eq{T} vanishes). Consequently,
at the moment of the QCD phase transition, when the instanton effects remove
vacuum degeneracy, $a$ rolls to the minimum and starts coherent oscillations
(CO)
about it with energy density \Ref{1}
\begin{equation}
\label{dens}
\rho_a(T,\theta)=19.57\left(\frac{T^2_1 m_a}{M_P}\right)\left(\frac{T}{T_1}\right)^3\theta^2 F_a^2
\end{equation}
The coherent axion field oscillations turn on at the moment $\tilde t\approx
8.8\cdot
10^{-7}$s.
It is generally assumed that the PQ transition takes place after inflation and the axion field starts oscillations with a different phase in each region causally connected at $T\cong F_a$, so one has to average over all the values to obtain the modern
axion density. Thus in the standard cosmology of invisible axion, it is usually
assumed that the energy density of coherent oscillations is distributed
uniformly and
that it corresponds to the averaged phase value of $\bar{\theta} =1$
($\bar{\rho_a}=\rho
(\bar{\theta})$). However, the local value of the energy density of coherent
oscillations
depends on the local phase $\theta$ that determines the local amplitude of these
coherent
oscillations. It was first found in \Ref{2} that the initial large-scale (LS)
inhomogeneity of the distribution of $\theta$ must be reflected in the distribution
of the
energy density of coherent oscillations of the axion field. Such LS modulation
of the
distribution of the phase $\theta$ and consequently of the energy density of CO
appears
when we take into account the vacuum structures leading to the system of axion
topological defects.
As soon as the temperature of the Universe becomes less than $F_a$, the field $\phi$
acquires the vacuum expectation value (VEV) $\langle\phi\rangle =F_a\exp {(i\theta
)}$,
where $\theta$ varies smoothly at the scale $F_a^{-1}$. The existence of
noncontractible
closed loops that change the phase by $2\pi n$ leads to the emergence of axion
strings.
These strings can be infinite or closed. The numerical simulation of global
string
formation \Ref{3} revealed that about 80\% of the length of strings corresponds
to
infinite Brownian lines. The remaining 20\% of this length is contributed by
closed
loops. Infinite strings form a random Brownian network with the step
$L(t)\approx t$.
After string formation when the temperature becomes as low as
$T\approx\Lambda_{QCD}$, the term \Eq{T} makes a significant contribution to the
total
potential so that the minimum of energy corresponds to a vacuum with $\theta =2\pi
k$,
where $k$ is an integer -- for example, $k=0$. However, the vacuum value of the
$\theta$
cannot be zero everywhere, since the phase must change by $\Delta\theta =2\pi$ upon
a
loop around a string. Hence, we come from the vacuum with $\theta =0$ to the vacuum
with $\theta =2\pi$ as the result of such circumvention. The vacuum value of $\theta$
is
fixed at all points with the exception of the point $\theta =\pi$. At this point, a
transition
from one vacuum to another occurs, and the vacuum axion wall is formed
simultaneously with CO turning on. The width of such a wall bounded by a string is
$\delta\cong m_a^{-1}$. Thus, the initial value of $\theta$ must be close to $\pi$
near
the wall, and the amplitude of CO in \Eq{dens} is determined by the difference
of the
initial local phase $\theta (x)$ and the vacuum value, which is different from the
one
of the true vacuum only in
a narrow region within the wall of thickness $\delta\cong m_a^{-1}$. Therefore
in
this region we can write $\theta (x)=\pi -\varepsilon (x)$, where \Ref{2}
$\varepsilon
(x)=2\tan^{-1}(\exp (m_ax))$ and $x\cong m_a^{-1}$. Thereby the energy density
of
CO in such regions is given by
\begin{equation}
\label{adens}
\rho^A\approx\pi^2\bar{\rho_a}
\end{equation}
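This estimate follows directly from the quadratic dependence of the energy density \Eq{dens} on the local phase: with $\bar{\rho_a}=\rho_a(T,\bar{\theta})$ at $\bar{\theta}=1$ and $\theta\approx\pi$ near the wall,
\[
\rho^A=\rho_a(T,\pi)=\pi^{2}\rho_a(T,1)=\pi^{2}\bar{\rho_a}.
\]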
And so we obtain that the distribution of CO of the axion field is modulated by
nonlinear
inhomogeneities in which relative density contrasts are $\delta\rho /\rho >1$.
Such
inhomogeneities were called {\it archioles}. In other words, {\it archioles} are a formation that represents a replica of the percolational Brownian vacuum structure of axionic walls bounded by strings, which is fixed in the strongly inhomogeneous
initial distribution of axionic CDM. The scale of this modulation of density
distribution exceeds the cosmological horizon because of the presence of the 80\% infinite component in the structure of the system of axionic walls bounded by strings. The
superweakness of the axion field self-interaction results in the separation of archioles and of the vacuum structure of axionic walls bounded by strings. So
these two
structures evolve independently. The structure of walls bounded by strings
disappears
rapidly due to disintegration into separate fragments and further axion
emission. The
structure of archioles remains frozen at the RD stage. On the large scales, the
structure
of archioles is an initially nonlinear formation, a Brownian network of quasi-one-dimensional filaments of dustlike matter with the step
\begin{equation}
\label{step}
L^A(t)=\lambda\tilde t
\end{equation}
(where $\lambda \cong 1$). At the moment of creation $\tilde t$, the linear
density of these quasilinear filamentary formations is given by
\begin{equation}
\label{6}
\mu_A=\pi^2\bar {\rho_a}\tilde t\delta
\end{equation}
In accordance with this, the cosmological evolution of archioles in the
expanding
Universe is reduced to the extension of lines along only one direction.
We have studied in \Ref{4} the spectrum of inhomogeneities that the density
develops in response to the large-scale Brownian modulation of the distribution
of CO
of axion field. Density perturbations, associated with Brownian network of
archioles,
may be described in terms of a two-point autocorrelation function \Ref{4}. To obtain such an autocorrelation function, it is necessary to perform averaging of
energy
density of infinite Brownian lines over all lines and over the Wiener measure, which corresponds to the position along the Brownian line (see \Ref{4}).
The two-point autocorrelation function in the Fourier representation has the
form
\begin{equation}
\label{7}
\langle\frac{\delta\rho}{\rho_0}(\vec k) \frac{\delta\rho}{\rho_0}(\vec
k')\rangle
=12\rho_A\mu_Ak^{-2}\delta (\vec k+\vec k')\tilde t^{-1}f^{-2}t^4G^2
\end{equation}
where $\rho_0$ is the background density, $f_{MD}=3/(32\pi )$ for the dustlike stage,
$f_{RD}=(6\pi)^{-1}$ for RD stage, $G$ is the gravitational constant, $\rho_A$
is
the total energy density of the Brownian lines. The mean-square fluctuation of
the
mass is given by
\begin{equation}
\label{8}
\left(\frac{\delta M}{M}\right)^2(k,t)=12\rho_A\mu_A\tilde t^{-1}f^{-2}G^2kt^4
\end{equation}
\section{Cosmological impact of archioles}
Let us consider a region characterized at instant $t$ by a size $l$ and a
density
fluctuation $\Delta$. For the anisotropy of relic radiation we then obtain
\begin{equation}
\label{9}
\frac{\delta T}{T}\cong -\Delta\left(\frac{l}{t}\right)^2
\end{equation}
If $l=t$, we have $|\delta T/T|\cong|\Delta |$; that is, the anisotropy of relic
radiation
is equal to the density contrast calculated at the instant when the size of the
region is
equal to the size of the horizon (the Sachs--Wolfe effect). To estimate the
quadrupole
anisotropy that is induced in relic radiation by the structure of archioles, we
must find
the amplitude of perturbations on the scale of the modern horizon
\begin{equation}
\label{10}
\left(\frac{\delta M}{M}\right)^2=2.1\cdot 10^{-25}\left(\frac{F_a}{10^{10}GeV}\right)^4\left(\frac{t_{RD}}{1s}\right)^{2/3}\left(\frac{t_{pres}}{1s}\right)^{1/3}(k_{hor}t_{pres})
\end{equation}
Thus the Sachs--Wolfe quadrupole anisotropy of relic radiation induced by archioles
will
be
\begin{equation}
\label{11}
\frac{\delta T}{T}\cong 2.3\cdot 10^{-6}\left(\frac{F_a}{10^{10}GeV}\right)^2
\end{equation}
According to COBE data (see, for example, \Ref{cobe}),
the measured quadrupole anisotropy of relic radiation is at
the level of
\begin{equation}
\label{12}
\frac{\delta T}{T}\approx 5\cdot 10^{-6}
\end{equation}
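The central value of the resulting bound follows from simple arithmetic: requiring that \Eq{11} does not exceed \Eq{12} gives
\[
2.3\cdot 10^{-6}\left(\frac{F_a}{10^{10}GeV}\right)^{2}\leq 5\cdot 10^{-6}
\quad\Rightarrow\quad
F_a\leq\sqrt{\frac{5}{2.3}}\cdot 10^{10}GeV\approx 1.5\cdot 10^{10}GeV,
\]
i.e., the upper edge of the range quoted below; the uncertainties discussed next move the bound within that range.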
If we take into account the uncertainties of our consideration such as the
uncertainties
in the correlation length scale of the Brownian network ($\lambda \approx 1\div 13$) and in the temperature dependence of the axion mass, we can obtain a constraint on the scale of
symmetry breaking in the model of an invisible axion
\begin{eqnarray}
\label{13}
F_a\le 1.5\cdot 10^{10}GeV\div 4\cdot 10^{9}GeV;\qquad
m_a\ge 410\mu eV\div 1500\mu eV
\end{eqnarray}
This upper limit for $F_a$ is close to the strongest upper limits in
\Refs{5,6,7},
obtained by comparing the density of axions from decays of axionic strings with
the
critical density, but has an essentially different character.
The point is that the density of axions formed in decays of axionic strings
depends
critically on the assumption about the spectrum of such axions (see \Refs{5,6})
and
on the model of axion radiation from the strings (see \Ref{7}). For example,
Davis
\Ref{5} assumed that radiated axions have the maximum allowed wavelength, corresponding to a frequency $\omega (t)\cong t^{-1}$, while Harari and Sikivie \Ref{6} have argued that the motion of
global strings was overdamped, leading to an axion spectrum emitted from
infinite
strings or loops with a flat frequency spectrum $\propto k^{-1}$. This leads to
an
uncertainty factor of $\simeq 100$ in the estimate of the density of axions from
strings and to
the corresponding uncertainty in the estimated upper limit on $F_a$
\begin{eqnarray}
\label{14}
F_a\le 2\cdot 10^{10}\varsigma GeV;\qquad
m_a\ge 300/\varsigma\mu eV
\end{eqnarray}
Here, $\varsigma =1$ for the spectrum from Davis \Ref{5}, and $\varsigma\approx
70$ for
the spectrum from Harari and Sikivie \Ref{6}.
In their treatment of axion radiation from global strings, Battye and Shellard
\Ref{7} found that the dominant source of axion radiation is string loops
rather than
long strings, contrary to what was assumed by Davis \Ref{5}. This leads to the
estimates
\begin{eqnarray}
\label{15}
F_a\le 6\cdot 10^{10}GeV\div 1.9\cdot 10^{11}GeV; \qquad
m_a\ge 31\mu eV\div 100\mu eV
\end{eqnarray}
Arguments that lead to the constraint \Eq{13} are free from these uncertainties,
since
they are independent of the model of global string decay.
At the smallest scales, corresponding to the horizon in the period $\tilde t$,
evolution of archioles just at the beginning of axionic CDM dominance in the Universe (at redshifts $z_{MD}\cong 4\cdot 10^4$) should lead to the formation of
the
smallest gravitationally bound axionic objects with the minimal mass
$M\simeq \rho_a \tilde t^3\simeq 10^{-6}M_{\odot}$ and of typical minimal size
$\tilde
t(1+z_A)/(1+z_{MD})\cong 10^{13}cm$. One can expect the mass distribution of
axionic objects at small scale to peak around the minimal mass, so that the
existence
of halo objects with the mass ($10^{-6} M_{\odot}\div 10^{-1} M_{\odot}$) and
size
$10^{13}\div 10^{15}cm$ is rather probable, which may have an interesting application to the theoretical interpretation of MACHO microlensing events.
\section{Physical impact of archioles}
The inclusion of the obtained restriction into the full cosmoparticle analysis provides a detailed quantitative definition of the cosmological scenario, based on the
respective
particle physics model. Consider, for example, a simple variant of gauge theory
of
broken family symmetry (TBFS) \Ref{8}, which is based on the standard model of
electroweak interactions and QCD, supplemented by spontaneously broken local
$SU(3)_H$ symmetry for quark--lepton families. This theory provides natural
inclusion of Peccei--Quinn symmetry $U(1)_H\equiv U(1)_{PQ}$, being associated
with heavy "horizontal" Higgs fields and it gives natural solution for QCD CP --
violation problem. The global $U(1)_H$ symmetry breaking results in the
existence
of an axion--like Goldstone boson $a$. TBFS turns out to be the simplest version of the
unified theoretical physical quantitative description of all main types of dark
matter
(HDM--massive neutrinos, axionic CDM and UDM in the form of unstable neutrinos
\Refs{8,9}) and the dominant form of the dark matter is basically determined by
the
scale of the "horizontal" symmetry breaking $V_H$, being the new fundamental
energy scale of the particle theory. For a given value of $V_H$ the model defines
the
relative contribution of hot, cold and unstable dark matter into the total
density.
Since in the TBFS the scale of horizontal symmetry breaking $V_H$ is associated
with $F_a$, we have, from \Eq{13}, the same upper limit on $V_H$. However, this
limit assumes that the considered inflationary model permits topological
defects and
hence archioles formation due to the sufficiently high reheating temperature
$T_{RH}\ge V_H$. In the inflationary model, which occurs in TBFS, we can achieve
$T_{RH}\sim 10^{10}GeV$.
The "horizontal" phase transitions on inflationary stage lead to the appearance
of a
characteristic spikes in the spectrum of initial density perturbations. These
spike--like
perturbations, on scales that cross the horizon (60--1) $e$-folds before the
end of
inflation, reenter the horizon during the radiation or dustlike era and could in
principle
collapse to form primordial black holes. The minimal interaction of "horizontal"
scalars of TBFS $\xi^{(0)}$, $\xi^{(1)}$, $\xi^{(2)}$ with inflaton allows us to
include them in the effective inflationary potential \Ref{10}:
\begin{eqnarray}
\label{16}
V(\phi , \xi^{(0)},\xi^{(1)},\xi^{(2)})=-\frac{m^2_{\phi}}{2}\phi^2+\frac{\lambda_{\phi}}{4}\phi^4
-\sum_{i=0}^2\frac{m^2_i}{2}\left(\xi^{(i)}\right)^2
+\sum_{i=0}^2\frac{\lambda^{(i)}_{\xi}}{4}\left(\xi^{(i)}\right)^4
+\sum_{i=0}^2\frac{\nu^2_{\xi}}{2}\phi^2\left(\xi^{(i)}\right)^2
\end{eqnarray}
The analysis of processes of primordial black holes formation from density
fluctuations, which can be generated by "horizontal" phase transitions at the
inflationary stage gives rise to an upper limit on the scale of horizontal
symmetry
breaking \Ref{10}.
\begin{equation}
\label{17}
V_H\le 1.4\cdot10^{13}GeV
\end{equation}
Therefore the range between the two upper limits \Eq{13} and \Eq{17} turns out not to be closed, and the following values seem to be possible
\begin{equation}
\label{18}
10^{11}GeV\le V_H\le 10^{13}GeV
\end{equation}
The indicated range corresponds to the case when all the horizontal phase
transitions
take place at the dustlike stage and $\phi_{c_2}\ll m_{Pl}$. In this case the
inflationary field $\phi$ oscillates with initial amplitude $\sim m_{Pl}$.
According to
\Refs{11,10} it means that any time the amplitude of the field becomes smaller
than
$\phi_{c_2}\ll m_{Pl}$, the last (axion $\xi^{(2)}$) phase transition with
symmetry
breaking occurs, and topological defects are produced. Then the amplitude of the
oscillating field $\phi$ becomes greater than $\phi_{c_2}$, and the
symmetry is restored again. However, this regime does not continue too long.
Within
a few oscillations, quantum fluctuations of the field $\xi^{(2)}$ will be
generated
with the dispersion $\langle\left(\xi^{(2)}\right)^2\rangle\simeq\nu^{-1}_{\xi}\lambda^{1/2}_{\phi}\ln^{-2}(1/\nu^2_{\xi})$ \Ref{11}. For
\begin{equation}
\label{19}
m^2_2\le\nu^{-1}_{\xi}\lambda^{1/2}_{\phi}\lambda_{\xi}m^2_{Pl}\ln^{-2}(1/\nu^2_{\xi})
\end{equation}
these fluctuations will keep the symmetry restored. The symmetry breaking will
be
finally completed when $\langle\left(\xi^{(2)}\right)^2\rangle$ becomes small enough. Thus such a phase transition leads to the formation of topological defects and
archioles without any need for high-temperature effects. Substituting the
typical
values for potential \Eq{16} such as $m^2_2\approx 10^{-3}V^2_H$,
$\lambda_{\xi}\simeq 10^{-3}$, $\nu_{\xi}\simeq 10^{-10}$ (see \Ref{10}), we obtain that the condition \Eq{19} means that for the scales
\begin{equation}
\label{22}
V_H\le 10^{-3}m_{Pl}
\end{equation}
the phenomenon of non-thermal symmetry restoration takes place in the simplest inflationary scenario based on TBFS. Owing to this phenomenon, oscillations of
the
field $\xi^{(2)}$ do not suppress the topological defects and archioles
production for
the range \Eq{18}. So the range \Eq{18} turns out to be closed by comparison of the quadrupole anisotropy of relic radiation, induced by archioles, with the COBE data. As a result,
the
upper limit on the scale of horizontal symmetry breaking will be given by
\Eq{13}.
Note, in conclusion, that the axion emission can influence the time scale and
energetics of the neutrino flux from collapsing stars. Analysis of this effect for
SN1987A
excludes the interval $3\cdot 10^6GeV\le V_H\le 3\cdot 10^9GeV$ (see \Ref{12})
and
establishes a lower limit on the high-energy branch of TBFS. Thus, putting together all these limits, we can extract a narrow window in the high-energy branch of the so-called model of horizontal unification (MHU) \Ref{13}:
\begin{equation}
\label{21}
3\cdot 10^9GeV\le V_H\le 1.5\cdot 10^{10}GeV
\end{equation}
On the basis of this choice for the main parameter of MHU we can build a
quantitatively definite dark matter scenario, which associates the formation of
the
large--scale structure in the Universe with a mixture of axions and massive
neutrinos,
since in this interval a total density equal to the critical one makes the contribution of massive neutrinos necessary in most cases.
\small
\section{Introduction}
\label{intro}
Pathology departments across most countries in the world are experiencing severe under-staffing issues \cite{metter_trends_2019, bainbridge_s_testing_2016}. With digital slide scanners becoming increasingly ubiquitous in pathology labs, it is expected that machine learning based decision support systems would substantially reduce the workload for pathology labs \cite{colling_artificial_2019}. The slide scanners are capable of producing high-resolution multi-gigapixel whole-slide images (generally of the order of $150K{\times}100K$ pixels), resulting in a wealth of pixel data that could be harnessed for diagnostic and prognostic purposes. The rapidly growing research community in the area of computational pathology has developed machine learning algorithms for narrow applications such as mitotic counting, cancer grading, and cancer detection \cite{balkenhol_deep_2019, liu_detecting_2017, nagpal_development_2019}. These advances are important, but implementing specific but separate approaches for numerous tasks in pathology might be impractical, and in fact may be impossible due to the distribution of target categories in the population (see Figure 1). Furthermore, algorithms are frequently built on datasets that may not be representative of the population, and therefore under-perform in practice \cite{gamper_pannuke:_2019}. As such, building general-purpose algorithms pre-trained on multiple datasets could substantially speed up the application of machine learning tools in practice \cite{hegde_similar_2019}. Furthermore, creating general algorithms that learn from few examples would make it possible to easily solve very specific and narrow tasks that contain only a few learning examples.
In this paper, we propose a general one-class classification model for histology that is meta-trained on multiple datasets and can be applied to new tasks without re-training.
We name this approach Meta-SVDD.
Past approaches for deep anomaly detection include deterministic or variational auto-encoding and adversarial deep generative methods \cite{schlegl_unsupervised_2017, schlegl2019f, an2015variational}. First, VAE- or GAN-based methods are computationally intensive, either requiring back-propagation at test time or re-training for new tasks. Second, these generative models are built on the assumption that they estimate the underlying data density well; however, this assumption has been called into question \cite{nalisnick_deep_2018}. Third, both autoencoding and adversarial methods solve optimisation tasks during training that differ from the downstream objectives. To address these vulnerabilities, we propose a one-class classification model that is meta-learned with an explicit loss function for one-class classification. Our novel method adapts to new tasks by observing only a few examples, without the need for re-training, as the task-specific parameter inference is amortized using a neural network. The proposed method also uses out-of-distribution as well as in-distribution examples during meta-learning optimisation.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{uhcw_dist}
\caption{A breakdown by diagnosis of a sample of 23,222 patient cases from a hospital in the UK. Notice the clearly asymmetric distribution of categories, where, for example, the abnormal category is further divided into various sub-abnormalities. \textit{Other} contains diagnoses of bacterial infection (Spirochete or Tuberculosis) where only a small visual field in the whole-slide image might be useful for learning.}
\label{treemap}
\end{figure}
\section{Methods \& Results}
\label{gen_inst}
Consider a sample dataset $D^{(t)}$ for a given task $t$, which could correspond to a set of patches from histology WSIs, where $t$ corresponds to a specific tissue type (colon, lung, breast, etc.). In an ideal supervised learning task $t$ with labeled training data $D^{(t)} = \{ (\mathbf{x}_{n}^{(t)}, \mathbf{y}_{n}^{(t)})\}^{n_t}_{n=1}$ and test data pairs $\{ (\tilde{\mathbf{x}}_{m}^{(t)}, \tilde{\mathbf{y}}_{m}^{(t)})\}^{M_t}_{m=1}$, the empirical marginal distribution of class labels $p(y)$ is more or less uniform \cite{buda_systematic_2018}. This allows one to estimate an optimal set of parameters $\mathbf{w}$ for a function $\mathbf{f}_{\mathbf{w}}$, i.e., a probability vector output from a convolutional neural network, via empirical risk minimization: $\argmin_{\mathbf{w}} \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}( \mathbf{f}_{\mathbf{w}}(\mathbf{x}_n), \mathbf{y}_n)$, where $\mathcal{L}$ is a loss function.
\subsection{One Class Deep Support Vector Data Description (OC-SVDD)}
\label{dsvdd}
In the case when the empirical marginal distribution of categories is not necessarily uniform, as mentioned in Section \ref{intro}, we can formulate it as a one-class classification problem: (1) reducing the complexity of the task by having only the positive, in-distribution samples, $y_i = 1$ for all $(\mathbf{x}_i, y_i) \in D^{(t)}$; (2) adapting the function $\mathbf{f}_{\mathbf{w}}(\mathbf{x}_n)$ to map a given sample to a latent encoding; and (3) minimizing the empirical one-class Deep SVDD (OC-SVDD) loss \cite{ruff_deep_2018}, namely
\begin{equation}
\label{equation_one}
\mathcal{L}(\mathbf{w}) = \frac{1}{N} \sum_{i=1, \forall y_i = 1} \left\|\mathbf{f}_{\mathbf{w}}(x_i) - \mathbf{c} \right\|^{2},
\end{equation}
where one learns a hyper-sphere by minimising the mean distance of all data representations to the center $\mathbf{c}$ for all positive samples \cite{ruff_deep_2018}.
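For concreteness, a minimal PyTorch sketch of this objective is given below; the encoder output, batch size, and variable names are our own illustration rather than the exact implementation used in the experiments.
\begin{verbatim}
import torch

def oc_svdd_loss(z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # z: batch of latent encodings f_w(x_i), shape (N, 128)
    # c: hyper-sphere center, shape (128,)
    # OC-SVDD loss: mean squared distance of in-distribution
    # encodings to the center c
    return ((z - c) ** 2).sum(dim=1).mean()

# c is initialised from a first pass through the network and then
# kept fixed, as described in the experiments below.
with torch.no_grad():
    z0 = torch.randn(64, 128)   # placeholder for encoder outputs
    c = z0.mean(dim=0)
loss = oc_svdd_loss(torch.randn(64, 128), c)
\end{verbatim}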
\subsubsection{OC-SVDD Experiments}
\label{oc_svdd_exp}
In Table A\ref{svdd-results}, we demonstrate the results of applying OC-SVDD loss to histology data. The datasets were obtained from the following sources: Colon from \cite{kather_deep_2019}, Lung \cite{alsubaie_bottom-up_2018}, Ovary \cite{kobel_diagnosis_2010}, Lymphoma \cite{janowczyk_deep_2016}, Oral \cite{shaban_prognostic_2018}, Breast from the Camelyon16 Challenge\footnote{https://camelyon16.grand-challenge.org/}, and Meningioma \cite{qureshi_adaptive_2008}.
For these experiments we took an Imagenet pre-trained ResNet18 \cite{he_deep_2015}, where we replaced the final linear layer to produce an output with the dimensionality of the hyper-sphere, 128, for every task. We set the batch size to 64 and the learning rate to $1\mathrm{e}{-4}$, and optimized over 100 epochs using ADAM \cite{kingma_adam:_2014}. The hyper-sphere center $\mathbf{c}$ is initialised using the first pass through the network. For the preliminary results presented in this paper, we did not optimise any of the hyper-parameters; these were taken from \cite{ruff_deep_2018}.
For every tissue type, we picked one of the classes and treated it as in-distribution data, i.e., one task, and optimised the loss function in Equation \ref{equation_one}. For most tasks, the in-distribution data is uni-modal and only consists of that particular category. However, in the case of Breast tissue, the category \textit{Other} contains healthy tissue, lymphocytes and other tissue phenotypes, which demonstrates the potential of OC-SVDD for tumor screening purposes.
\subsection{Probabilistic meta-learning for SVDD (Meta-SVDD)}
\label{meta}
While the OC-SVDD method offers an explicit loss function for deep learning in one-class classification and could be easily applied at test time, it still requires expensive training for any given task. One has to train 32 networks to produce the results of OC-SVDD in Table A\ref{svdd-results}. We propose to address this issue using meta-learning. We induce a distribution over the function $\mathbf{f}$ by introducing an amortized distribution $q_{\phi}(\mathbf{f} | D^{(t)}, \theta)$ \cite{garnelo_neural_nodate}. We adopt the network architecture and inference method for the parameters $\phi$ according to \cite{gordon_meta-learning_2018}. Namely, during optimisation we: (i) select a task $t$ at random, (ii) sample some training data $D^{(t)}$, (iii) form the posterior predictive $q_{\phi}(\mathbf{f} | D^{(t)}, \theta)$ given in-distribution data $D^{(t)} = \{ (\mathbf{x}_{n}^{(t)}, \mathbf{y}_{n}^{(t)})\}^{n_t}_{n=1}$, where $y_n = 1$, and (iv) evaluate the posterior predictive on meta-test data using the semi-supervised SVDD loss:
\begin{equation}
\label{equation_two}
\frac{1}{N+M+L} \left[ \sum_{i=1, \forall y_i = 1} \left\|\mathbf{f}_{l}(x_i) - \mathbf{c} \right\|^{2} + \eta \sum_{j=1, \forall y_j = -1} (\left\|\mathbf{f}_{l}(x_j) - \mathbf{c} \right\|^{2})^{y_j}\right]
\end{equation}
where $\mathbf{f}_{l} \sim q_{\phi}(\mathbf{f} | D^{(t)}, \theta)$, and $L$ is the number of samples from the predictive posterior. We assume $q_{\phi}$ to be Gaussian and use the reparameterisation trick during optimisation \cite{kingma_auto-encoding_2013}. Compared to Equation \ref{equation_one}, the second term learns the inverse of the first, pushing out-of-distribution samples further from the center \cite{ruff_deep_2019}, and $\eta > 0$ in Equation \ref{equation_two} is a hyper-parameter. This approach allows us to amortize the parameter learning for new tasks directly into the inference network, which predicts the parameterization of the distribution over $\mathbf{f}$ that maps test data to the latent space. We name this approach Meta-SVDD.
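A minimal sketch of this loss for a single posterior sample $\mathbf{f}_l$ is shown below; the variable names and the small stabilising constant are our own additions and not part of Equation \ref{equation_two}.
\begin{verbatim}
import torch

def semi_supervised_svdd_loss(z, y, c, eta):
    # z: latent encodings f_l(x), shape (B, d)
    # y: labels in {+1, -1}; c: hyper-sphere center; eta > 0
    d2 = ((z - c) ** 2).sum(dim=1)   # ||f_l(x) - c||^2
    pos = d2[y == 1].sum()           # in-distribution term
    # (||f_l(x_j) - c||^2)^{y_j} with y_j = -1 inverts the distance,
    # pushing out-of-distribution encodings away from the center;
    # the 1e-6 guards against division by zero (our addition).
    neg = (1.0 / (d2[y == -1] + 1e-6)).sum()
    return (pos + eta * neg) / z.shape[0]
\end{verbatim}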
\subsubsection{Meta-SVDD Experiments}
\label{meta_svdd_exp}
Following \citet{gordon_meta-learning_2018}, we use the same encoder (ResNet18), represented by $\theta$ in the predictive posterior, for all tasks. We set $L$ to 10 and the meta batch size to 5, and optimise using gradient accumulation due to constrained computational resources. The inference network $q_\phi$ consists of three fully connected layers with ReLU activation functions; it takes the mean of the in-distribution features and produces the parameterisation of the posterior from which the parameters of the function $\mathbf{f}$ are sampled. The remaining hyper-parameters are the same as in Section \ref{oc_svdd_exp}.
We adopt a leave-one-out cross-validation setup where we pretrain on 32 tasks and test on the remaining tasks. The results are presented in Table \ref{svdd-results}. Note that for inferring the posterior we use only 10 in-distribution samples; the results of the proposed method are therefore promising, although at the current stage of work we have faced computational limitations during meta-training \cite{nichol_first-order_2018}.
\section{Discussion \& Future Directions}
\label{results_discussion}
We present preliminary results for a meta-learned one-class classification model for histology tasks; such a model does not require expensive training, and parameter inference is done at test time. We demonstrated its potential for a screening task in the case of Breast tissue, and its flexibility in learning uni-modal tasks in other tissues. Future work would include hyper-parameter optimisation for the neural network architecture and for meta-learning. For example, the OC-SVDD loss resembles self-supervised learning tasks and, as has been demonstrated, benefits significantly from larger networks \cite{kolesnikov_revisiting_nodate}. However, that would require a careful treatment of the sensitive meta-learning optimisation process \cite{antoniou_how_2018}. Once a stable set of architecture and optimisation hyper-parameters is established, we plan to thoroughly test the proposed meta-learning scheme for one-class classification on whole slide images for screening and for speeding up annotation. Additionally, we are planning to expand the training tasks using existing datasets for meta-learning \cite{triantafillou_meta-dataset:_2019}, which we hope will also increase the performance on fine-grained tasks such as Lung adenocarcinoma subtypes. By increasing the size of the network, stabilising the optimisation process, and increasing the number of datasets, we aim to significantly improve the performance of Meta-SVDD.
\subsubsection*{Acknowledgments}
This research was partially supported by Philips Pathology.
\small
\bibliographystyle{plainnat}
\section{Introduction}
Constraint solvers and optimizers have been used heavily in the design, synthesis, and verification of embedded and cyber-physical systems. Examples include multiprocessor system-on-chip verification \cite{MPSoC_verif}, quantum circuits temporal planning \cite{quantum_circuit}, synthesis for reactive embedded systems \cite{reac_embed_syst}, control-command software verification~\cite{verifsof}, Galois field arithmetic hardware circuits~\cite{verifhard}, air traffic management of unmanned aircraft systems~\cite{airtraffic}, and software verification for the next generation space-shuttle \cite{spaceshuttle}, and conflict detection for aircraft~\cite{verifaircraft}.
In this paper, we will focus on the class of general multivariate polynomial constraints (also known as nonlinear real arithmetic theory). Multivariate polynomial constraints
appear naturally in the design, synthesis, and verification of these systems. It is not surprising, then, that this problem has received considerable attention in the last decade, as evidenced by the number of off-the-shelf solvers and optimizers that support solving feasibility and optimization problems over general multivariate polynomial constraints, including Z3~\cite{Z3}, Coq~\cite{coq}, Yices~\cite{yices}, NasaLib~\cite{pvsnasa}, Cplex~\cite{cplex}, Cvxopt \cite{cvxopt}, and Quadprog \cite{quadprog}. Despite their prevalence in several synthesis and verification problems, well-known algorithms that are capable of solving a set of polynomial constraints are shown to be doubly exponential~\cite{complexityproblem1}, which places a significant challenge on designing efficient solvers for such problems.
Recently, neural networks (NNs) have shown impressive empirical results in approximating unknown functions. These observations motivated several researchers to ask how to use NNs to tame the complexity of NP-hard problems. Examples are the use of NNs to design scalable solvers for program synthesis~\cite{progsyn}, the traveling salesman problem~\cite{travelsales}, first-order theorem proving~\cite{1thepro}, higher-order theorem proving~\cite{highthepro}, and Boolean satisfiability (SAT) problems~\cite{satsolve}. While several of these solvers sacrifice either soundness or correctness guarantees, we are interested in this paper in using such empirically powerful NNs to design a sound and complete solver for nonlinear real arithmetic theory.
In addition to NNs, polynomials constitute a rich class of functions for which several approximators have been studied. Two of the most famous approximators for polynomials are Taylor approximation and Bernstein polynomial basis. These two approximators have been successfully used in solvers like Coq and NASALib. This opens the question of how to combine all those approximation techniques, i.e., NNs, Taylor and Bernstein approximations, to come up with a scalable solver that can reason about general multivariate polynomial constraints.
We introduce PolyARBerNN, a novel sound and complete solver for polynomial constraints that combines these three function approximators (NNs, Taylor, and Bernstein) to prune the search space and produce instances small enough that existing sound and complete solvers (based on the well-known Cylindrical Algebraic Decomposition algorithm) can easily reason about them. In general, this paper provides the following contributions:
\begin{itemize}
\item We introduce a novel NN-guided abstraction refinement process. Such NN guides the use of Taylor approximations to find a solution or prune the search space. We evaluated the generalizability of the trained NN and showed its ability to guide the abstraction refinement process for unseen polynomials with a different number of variables and orders.
\item We complement the NN-guided abstraction refinement with a state-space pruning phase using Bernstein approximations that accelerates the process of removing portions of the state space in which the sign of the polynomial does not change.
\item We validate our approach by first comparing the scalability of the proposed PolyARBerNN solver with respect to NASALib, a library that uses Bernstein expansion to solve polynomial constraints. Second, we compare the execution times of the proposed tool with the latest versions of state-of-the-art nonlinear arithmetic solvers, such as Z3 8.9 and Yices 2.9, by varying the order, the number of variables, and the number of polynomial constraints, for instances when a solution exists and when it does not. We also compare the scalability of the solver against Z3 8.9 and Yices 2.9 on the problem of synthesizing a controller for a cyber-physical system.
\item We propose PolyAROpt, an optimizer that uses PolyARBerNN to solve constrained multivariate polynomial optimization problems. Our theoretical analysis shows that PolyAROpt is capable of providing solutions that are $\epsilon$ close to the global optima (for any $\epsilon > 0$ chosen by the user).
Numerical results show that PolyAROpt solves high-dimensional and high-order optimization problems with high speed compared to the built-in optimizer in Z3 8.9 solver.
\end{itemize}
\textbf{Related work:}
Cylindrical algebraic decomposition (CAD) was introduced by Collins \cite{collins} in 1975 and is considered to be the first algorithm
that solves general polynomial inequality constraints. Several improvements were introduced across the years to reduce the high time complexity of the CAD algorithm~\cite{Hong, McCalum}. Although the CAD algorithm is sound and complete, it scales poorly with the number of polynomial constraints and their order. Other techniques to solve general polynomial inequality constraints include the use of transformations and approximations to scale the computations. For instance, the authors in \cite{pvsnasa} incorporated Bernstein polynomials in the NASA Prototype Verification System (PVS), in a library called NASALib. The library uses the range enclosure property of Bernstein polynomials to solve quantified polynomial inequalities. However, the library is not complete for non-strict inequalities \cite{pvsnasa} and is not practical for higher dimensional polynomials.
On the other hand, recent years have witnessed multiple successes in using machine learning to solve combinatorial problems \cite{gcnn,satsolve}. In particular, the authors in~\cite{gcnn} proposed a graph convolutional neural network (GCNN) to learn heuristics that can accelerate mixed-integer linear programming (MILP) solvers.
Similarly, NeuroSAT solver~\cite{satsolve} uses a message-passing neural network (MPNN) to solve Boolean SAT problems.
The authors of~\cite{satsolve} showed that NeuroSAT generalizes to novel distributions after training only on random SAT problems. Nevertheless, NeuroSAT is not competitive with state-of-art SAT solvers and it does not have a correctness guarantee.
\section{Problem Formulation}
\textbf{Notation:}
We use the symbols $\mathbb{N}$ and $\mathbb{R}$ to denote the set of natural and real numbers, respectively.
We denote by $x=\big(x_1,x_2,\cdots,x_n\big) \in \mathbb{R}^n$ the vector of real-valued variables, where $x_i \in \mathbb{R}$. We denote by $I_n (\underline{d}, \overline{d}) =\big[\underline{d}_1,\overline{d}_1\big] \times \cdots \times \big[\underline{d}_n,\overline{d}_n\big] \subset \mathbb{R}^{n}$ the $n$-dimensional hyperrectangle where $\underline{d} = \left(\underline{d}_1, \cdots, \underline{d}_n\right)$ and $\overline{d} = \left(\overline{d}_1, \cdots, \overline{d}_n\right)$ are the lower and upper bounds of the hyperrectangle, respectively.
For a real-valued vector $x =\big(x_1,x_2,\cdots,x_n\big)\in \mathbb{R}^n$ and an index-vector $K = \left(k_1, \cdots, k_n\right) \in \mathbb{N}^n$, we denote by $x^K \in \mathbb{R}$ the scalar $x^K = x_1^{k_1}\cdots x_n^{k_n}$.
Given two multi-indices $K = \left(k_1, \cdots, k_n\right) \in \mathbb{N}^n$ and $L = \left(l_1, \cdots, l_n\right) \in \mathbb{N}^n$, we use the following notation throughout this paper: $K + L = \left(k_1+l_1, \cdots, k_n + l_n\right)$, ${L \choose K}={l_1 \choose k_1} \cdots {l_n \choose k_n}$, and $\sum\limits_{K \leq L}=\sum\limits_{k_1 \leq l_1}^{}\cdots \sum\limits_{k_n \leq l_n}$.
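For instance, for $n=2$, $K=(2,1)$, and $L=(3,2)$, we have $x^K = x_1^{2}x_2$, ${L \choose K}={3 \choose 2}{2 \choose 1}=6$, and $\sum_{K \leq L}$ runs over all $k_1 \leq 3$ and $k_2 \leq 2$.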
A real-valued multivariate polynomial $p:\mathbb{R}^n \rightarrow \mathbb{R}$ is defined as:
\begin{align*}
p(x_1, \ldots, x_n) &= \sum_{k_1 = 0}^{l_1} \sum_{k_2 = 0}^{l_2} \ldots \sum_{k_n = 0}^{l_n} a_{(k_1,\ldots,k_n)} x_1^{k_1} x_2^{k_2} \ldots x_n^{k_n} \\ &= \sum\limits_{K \leq L} a_K x^K,
\end{align*}
where $L = (l_1, l_2, \ldots, l_n)$ is the maximum degree of $x_i$ for all $i = 1, \ldots, n$.
We denote the space of multivariate polynomials with coefficients in $\mathbb{R}$ by $\mathbb{R}[\left(x_1,x_2,\cdots,x_n\right)]$.
Given a real-valued function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, we denote by $L^{-}_{0}\left(f\right)$ and $L^{+}_{0}\left(f\right)$ the zero sublevel and zero superlevel sets:
$L^{-}_{0}\left(f\right)=\{x \in \mathbb{R}^n\big|f\big(x\big)\leq 0\}$ and
$L^{+}_{0}\left(f\right)=\{x \in \mathbb{R}^n \big|f\big(x\big)\geq 0\}$
.~\\
\textbf{Main Problem:}
In this paper, we focus on two problems namely (Problem 1) the feasibility problems that involve multiple \textit{polynomial inequality constraints} with input ranges confined within closed hyperrectangles and (Problem 2) the constrained optimization problem which aims to maximize (or minimize) a polynomial objective function subject to other polynomial inequality constraints and input range constraints. More precisely, the two problems of interest can be defined as follows.
\begin{problem}\label{prob1}
\begin{align*}
\qquad \qquad
\exists \left(x_1,\cdots,x_n\right) &\in [\underline{d}_1,\overline{d}_1] \times \ldots \times [\underline{d}_n,\overline{d}_n] \qquad\qquad\qquad\\
\text{subject to:} \qquad& p_1\left(x_1,\cdots,x_n\right)~\leq~0, \\
&\qquad \qquad \vdots \\
& p_m\left(x_1,\cdots,x_n\right)~\leq~0,
\end{align*}
where $p_i\left(x\right)=p_i\left(x_1,\cdots,x_n\right) \in \mathbb{R}[\left(x_1,x_2,\cdots,x_n\right)]$ is a polynomial over variables $x_1,\cdots,x_n$. Without loss of generality, $p_i\big(x\big)$ $\geq~0$ and $p_i\left(x\right)~=~0$ can be encoded using the constraints above.
\end{problem}
Similarly, given a polynomial objective function $p(x)$, we define the optimization problem as:
\begin{problem}
\begin{align*}
\min\limits_{x \in I_n (\underline{d}, \overline{d})} ~ & p\left(x\right) ~~[\textbf{or} ~ \max\limits_{x \in I_n (\underline{d}, \overline{d})} ~ p\left(x\right)] \qquad\qquad\qquad\\
\text{subject to:} \qquad&
p_1\left(x_1,\cdots,x_n\right)~\leq~0, \\
&\qquad \qquad \vdots \\
& p_m\left(x_1,\cdots,x_n\right)~\leq~0
\end{align*}
\end{problem}
\section{Convex Abstraction Refinement: Benefits and Drawbacks}
In this section, we give an overview of the previously reported convex abstraction refinement framework introduced in~\cite{polyar}, along with some drawbacks that motivate the need for the proposed framework.
\subsection{Overview of Convex Abstraction Refinement}
Sound and complete algorithms that solve Problem 1 are known to be doubly exponential in $n$ with a total running time that is bounded by $\left(md\right)^{2^n}$ \cite{complexityproblem1}, where $d$ is the maximum degree among the polynomials $p_1, \ldots, p_m$. Since the complexity of the problem grows exponentially, it is useful to remove (or prune) subsets of the search space in which the solution is guaranteed not to exist. Since Problem 1 asks for an $x$ in $\mathbb{R}^n$ for which all the polynomials are negative, a solution does not exist in subsets of $\mathbb{R}^n$ at which one of the polynomials $p_i$ is always positive (i.e., $L^{+}_{0}\left(p_i\right)$). In the same way, finding regions of the input space for which some of the polynomials are negative $L^{-}_{0}\left(p_i\right)$ helps with finding the solution faster.
To find subsets of $L^{+}_{0}\left(p_i\right)$ and $L^{-}_{0}\left(p_i\right)$ efficiently, the use of ``convex abstractions'' of the polynomials was previously proposed in~\cite{polyar}. Starting from a polynomial $p_i(x) \in \mathbb{R}[x]$ and a hyperrectangle $I_n \subset \mathbb{R}^n$, the framework in~\cite{polyar} computes two quadratic polynomials $O^{p_i}_j$ and $U^{p_i}_j$ such that:
\begin{align*}
U^{p_i}_j(x) \le p(x) \le O^{p_i}_j(x) \qquad \forall x \in I_n,
\end{align*}
where $O$ and $U$ stand for Over-approximate and Under-approximate quadratic polynomials, respectively, and the subscript $j$ in $O^{p_i}_j(x)$ and $U^{p_i}_j(x)$ encodes the iteration index of the abstraction refinement process. It is easy to notice that the zero superlevel set of $U^{p_i}_j(x)$ is a subset of $L^{+}_{0}\left(p_i\right)$, i.e., $L^{+}_{0}\left(U^{p_i}_j\right) \subseteq L^{+}_{0}\left(p_i\right)$. Similarly, the zero sublevel set of $O^{p_i}_j(x)$ is a subset of $L^{-}_{0}\left(p_i\right)$, i.e., $L^{-}_{0}\left(O^{p_i}_j\right) \subseteq L^{-}_{0}\left(p_i\right)$. Moreover, since these are convex polynomials, their zero superlevel and sublevel sets can be identified efficiently using convex programming tools. By iteratively refining these upper and lower convex approximations, the framework in~\cite{polyar} was able to rapidly prune the search space until regions with relatively small volume are identified, at which point sound and complete tools such as Z3 8.9 and Yices 2.6 (which are based on the Cylindrical Algebraic Decomposition algorithm) are used to search these small regions efficiently for a solution. It is important to notice that these solvers (especially Yices) are optimized towards the cases when the search space is a bounded hyperrectangle.
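As an illustration of this step, the sketch below uses CVXPY to minimize a convex quadratic over-approximation $O(x)=x^{\top}Qx+b^{\top}x+c$ over the box; the numerical values of $Q$, $b$, $c$, and the box bounds are hypothetical.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Hypothetical convex quadratic over-approximation
# O(x) = x^T Q x + b^T x + c, with Q positive semidefinite.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([-1.0, 0.5])
c = -0.3
lb = np.array([-1.0, -1.0])   # lower bounds of the box I_n
ub = np.array([1.0, 1.0])     # upper bounds of the box I_n

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.quad_form(x, Q) + b @ x + c),
                  [x >= lb, x <= ub])
prob.solve()

# If min O <= 0 over the box, the minimizer lies in the zero
# sublevel set of O, and hence of p, since p(x) <= O(x) there.
if prob.value <= 0:
    print("witness for p(x) <= 0:", x.value)
\end{verbatim}
Symmetrically, minimizing the convex under-approximation $U^{p_i}_j$ over the box certifies, whenever its minimum is nonnegative, that the whole box lies in $L^{+}_{0}\left(p_i\right)$ and can be pruned.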
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.35\textwidth]{AbstRefi1.png}
\includegraphics[width=0.35\textwidth]{AbstRefi2.png} \\
\includegraphics[width=0.35\textwidth]{AbstRefi4.png}
\includegraphics[width=0.35\textwidth]{AbstRefi5.png}
\caption{Exemplary cases where abstracting higher order polynomial (black curves) using convex approximations fails to provide helpful information: \textbf{Top-Left:} under-approximation (green curve) is entirely negative and hence fails to identify any subsets of $L^{+}_{0}\left(p\right)$. \textbf{Top-Right:} over-approximation (red curve) is entirely positive and hence fails to identify subsets of $L^{-}_{0}\left(p\right)$. \textbf{Bottom:} under and over approximations failed to identify polynomials which are consistently positive (left) or negative (right).
}
\label{F1}
\end{figure*}
\subsection{Drawbacks of Convex Abstraction Refinement}
Although the prescribed convex abstraction refinement process was shown to provide several orders of magnitude speedup~\cite{polyar}, it adds unnecessary overhead in certain situations. In particular, and as shown in Figure~\ref{F1}, the quadratic abstractions $O^{p_i}_j(x)$ and $U^{p_i}_j(x)$ may fail to identify meaningful subsets of $L^{-}_{0}\left(p_i\right)$ and $L^{+}_{0}\left(p_i\right)$. One needs to split the input region to tighten the over-/under-approximation in such cases. Indeed, applying the convex abstraction refinement process above may lead to several unnecessary over-approximations or under-approximations until a tight one that prunes the search space is found. These drawbacks call for a methodology that is capable of:
\begin{enumerate}
\item Guiding the abstraction refinement process: To reduce the number of unnecessary computations of over/under approximations, one needs a heuristic that guides the convex abstraction refinement process. In particular, such a heuristic needs to consider the properties of the polynomials and the input region to estimate the volume of the sets that the convex under/over-approximation will identify.
\item Alternative Abstraction: As shown in Figure~\ref{F1} (bottom), abstracting high-order polynomials using convex ones may fail to identify easy cases when the polynomial is strictly positive or negative. Therefore, it is beneficial to use alternative ways to abstract high-order polynomials that can augment the convex abstractions.
\end{enumerate}
Designing a strategy that addresses the two requirements above is the main topic for the following two sections.
\section{Neural Network Guided Convex Abstraction Refinement}
In this section, we are interested in designing and training a Neural Network (NN) that can be used to guide the abstraction refinement process. Such NN can be used as an oracle by the solver to estimate the volume of the zero super/sub-level sets (for each polynomial) within a given region $I_n(\underline{d}, \overline{d})$ and select the best approximation strategy out of three possibilities namely: (i) apply convex under-approximation, (ii) apply convex over-approximation, and (iii) split the region to allow for finer approximations in the subsequent iterations of the solver. In this section, we aim to develop a scientific methodology that can guide the design of such NN.
\subsection{On the relation between the NN architecture and the characteristics of the polynomials:}
In this subsection, we aim to understand how the properties of the polynomials affect the design of the NN. We start by reviewing the following result from the machine learning literature:
\begin{theorem}[Theorem 1.1~\cite{NNcomplexity1}] \label{thm:errorBound}
There exists a Rectifier Linear Unit (ReLU)-based neural network $\phi$ that can estimate a continuous function $f$ such that the estimation error is bounded by:
$$ || \phi - f || \le \omega_f \; \sqrt{d} \; O(N^{-2/d} L^{-2/d})$$
where $N, L, d$ are the neural network depth, the neural network width, and the number of neural network inputs, respectively, and $\omega_f$ is the Lipschitz constant of the function $f$. Moreover, this bound is nearly tight.
\end{theorem}
The above result can be interpreted as follows. The depth $N$ and width $L$ of a neural network depend on the rate of change of the underlying function (captured by its Lipschitz constant $\omega_f$). That is, if we use an NN to estimate a function with a high $\omega_f$, then one needs to increase the depth $N$ and width $L$ of the NN to achieve an acceptable estimation error.
Now we aim to connect the result above with the characteristics of the polynomials. To that end, we recall the definition of ``condition numbers'' of a polynomial~\cite{faroukinumstab}:
\begin{definition}
Given a polynomial $p\left(x\right) = \sum\limits_{K}a_Kx^K$ and a root $x_0$ of $p$, the quantity $C_{a_K}\left(x_0\right)$ is called the condition number for the root $x_0$. The condition number characterizes the sensitivity of the root $x_0$ to a perturbation of the coefficients $a_K$. That is, if we allow random perturbations of a fixed relative magnitude $\epsilon = \abs{\frac{\delta a_K}{a_K}}$ in each coefficient $a_K$, then the magnitude of the maximum displacement $\delta x$ of a root $x_0$ is bounded as: $\abs{\delta x} \leq C_{a_K}\left(x_0\right) \epsilon$. For a polynomial with multiple roots, we define the condition number of the polynomial $\overline{C}_{a_K}$ as the largest $C_{a_K}\left(x_0\right)$ among all roots, i.e., $\overline{C}_{a_K} = sup_{x_0 \in \{ x | p(x) = 0\}}C_{a_K}\left(x_0\right)$
\end{definition}
We are now ready to present our result that connects a polynomial's condition number to the NN architecture. As stated before, we are interested in designing an NN that can estimate the zero sub/super level volume set within a given region. We show that the larger the condition number, the larger the neural network depth and width, as captured by the following result.
\begin{theorem}
~\label{prop:lipNN}
Given a polynomial $p$ with coefficients $a_K$ and a region $I_n(\underline{d}, \overline{d})$, there exists a neural network $NN$ that estimates the volume of the zero sub/super level sets from the polynomial coefficients $a_K$. The Lipschitz constant of this NN is on the order of the condition number $\mathcal{O}(\overline{C}_{a_K})$ of the polynomial $p$.
\end{theorem}
\begin{proof}
To prove the result, we construct one NN that matches the properties above. We first consider a sub neural network $NN_{I_n \rightarrow I^{i}_N}$ that splits the region $I_n$ into sub-regions $I^{1}_n, \ldots, I^{l}_n$ and returns the $i$th sub-region. Such partitioning can be carried out using ReLU activation units due to the ability of ReLU activation units to implement arbitrary hyperplanes. Now consider the following NN that consists of several sub-neural networks defined as follows:
\begin{align}
I_n^i(I_n, i) & = NN_{I_n \rightarrow I^{i}_n} (I_n) \\
ZC_i(a_K, I_n) &= NN_{x_0 \rightarrow ZC} \big( NN_{a_K \rightarrow x_0} (a_K, I_n^i), I_n^i\big) \\
NN(a_K, I_n) &= NN_{ZC \rightarrow L^{+}/L^{-}} \big(ZC_1(a_K, I_n), \nonumber \\
& \qquad \ldots, ZC_l(a_K, I_n) \big)
\end{align}
where the sub-neural network $NN_{a_K \rightarrow x_0}$ estimates the roots $x_0$ of the polynomial from the coefficients $a_K$ and the sub-region $I_n^{i}$; the sub-neural network $NN_{x_0 \rightarrow ZC}$ maps the locations of the roots $x_0$ into a binary indicator variable $ZC_i$ that indicates whether a zero crossing takes place within the sub-region $I_n^i$ or whether the polynomial is always positive/negative within $I_n^{i}$; and finally the sub-neural network $NN_{ZC \rightarrow L^{+}/L^{-}}$ maps all the zero-crossing indicators $ZC_1, \ldots, ZC_l$ into an estimate of the volume of the zero sub/super-level sets.
Now we compute the Lipschitz constant of each sub-neural network. We start with the sub-neural network $NN_{a_K \rightarrow x_0}$ as follows:
\begin{align*}
&||NN_{a_K \rightarrow x_0} (a_K) - NN_{a_K \rightarrow x_0} (a_K + \epsilon) || \nonumber\\ &= || x_0(a_K) - x_0(a_K + \epsilon)|| \leq C_{a_K}(x_0) \epsilon
\end{align*}
where $x_0(a_K)$ and $x_0(a_K + \epsilon)$ are the locations of the roots of the polynomials with coefficients $a_K$ and $a_K + \epsilon$, respectively. The last inequality follows from the definition of the condition number (Definition 1 above). From the inequality above, we conclude that the Lipschitz constant of $NN_{a_K \rightarrow x_0}$ is bounded by the condition number of the polynomial $\overline{C}_{a_K}$.
The sub-neural network $NN_{x_0 \rightarrow ZC}$ simply checks whether the root estimated by $NN_{a_K \rightarrow x_0}$ lies within the sub-region $I_n^i$. This check amounts to a set of comparisons between the coordinates of the root and the boundaries of the sub-region, which can be carried out using a ReLU NN~\cite{arora2016understanding} whose Lipschitz constant is 1.
If the root does not lie inside the sub-region $I_n^i$, then the polynomial does not change its sign (i.e., the polynomial is always positive or always negative within $I_n^i$). In such a case, this NN sets the indicator variable accordingly. This additional logic can be carried out by evaluating the polynomial at an arbitrary point inside $I_n^i$ and comparing the result to zero. As with the comparisons above, this can be carried out using a ReLU NN~\cite{arora2016understanding} whose Lipschitz constant is 1.
The zero-crossing indicator variables are then processed by the sub-neural network $NN_{ZC \rightarrow L^{+}/L^{-}}$, which counts the number of sub-regions whose zero-crossing indicator points to positive or negative sub-regions and multiplies this count by the volume of each sub-region (which is equal to the volume of the overall region divided by $l$). A similar argument applies to $NN_{I_n \rightarrow I^{i}_n}$.
Multiplying the Lipschitz constants of all the sub-neural networks, we conclude that the Lipschitz constant of the constructed $NN(a_K, I_n)$ is on the order of $\mathcal{O}(\overline{C}_{a_K})$, which concludes our proof.
\end{proof}
It follows from Theorem~\ref{thm:errorBound} and Theorem~\ref{prop:lipNN} that the higher the condition number $\overline{C}_{a_K}$ of a polynomial, the larger the network width and depth needed to estimate the volume of the zero sub/super-level sets with high accuracy. Unfortunately, the power-basis representation of polynomials (i.e., representing the polynomial as the summation $\sum_{K \le L}a_K x^K$) is known to be an unstable representation with extremely large condition numbers~\cite{faroukinumstab}, which may necessitate neural networks with substantially larger architectures.
\subsection{Bernstein Polynomials: A Robust Representation of Polynomials}
Motivated by the challenges above, we seek a representation of polynomials that is more robust to changes in coefficients, i.e., we seek a representation in which the roots of the polynomial change slowly with changes in the coefficients (and hence smaller condition numbers). We start with the following definition.
\begin{definition}
Let $p\left(x\right) = \sum\limits_{K \leq L}^{} a_K x^K \in \mathbb{R}[x_1,\ldots,x_n]$ be a multivariate polynomial over a hyperrectangle $I_n (\underline{d}, \overline{d})$
and of a maximal degree $L = \left(l_1, \cdots, l_n\right) \in \mathbb{N}^n$.
The polynomial:
\begin{align}\label{bernpol}
B_{p, L}\left(x\right) &= \sum\limits_{K \leq L}^{} b_{K,L} Ber_{K, L}\left(x\right),
\end{align}
is called the Bernstein polynomial of $p$, where $Ber_{K, L}\left(x\right)$ and $b_{K, L}$ are called the Bernstein basis and Bernstein coefficients of $p$, respectively, and are defined as follows:
\begin{align}\label{bernpolcoeff}
Ber_{K, L}\left(x\right) &= {L \choose K} x^{K}\left(1-x\right)^{L-K},\\
b_{K, L} &=\sum\limits_{J = (0,\ldots, 0)}^{K} \frac{{K \choose J}}{{L\choose J}}\left(\overline{d} - \underline{d}\right)^J \sum\limits_{I = J}^{L}{I \choose J} \underline{d}^{I - J} a_I.
\end{align}
\end{definition}
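To make the conversion above concrete, the following Python sketch implements the univariate ($n = 1$) case of~\eqref{bernpolcoeff}; the function name and the plain floating-point arithmetic are our own illustrative choices and are not part of the tool.
\begin{verbatim}
from math import comb

def bernstein_coeffs(a, lo, hi):
    """Map power-basis coefficients a[0..L] of p(x) = sum_I a[I] * x**I
    to the Bernstein coefficients b[0..L] of p over [lo, hi]."""
    L = len(a) - 1
    w = hi - lo
    # Inner sum: coefficients c[J] of the shifted polynomial
    # p(lo + w*t) for t in [0, 1].
    c = [w ** J * sum(comb(I, J) * lo ** (I - J) * a[I]
                      for I in range(J, L + 1))
         for J in range(L + 1)]
    # Outer sum: Bernstein coefficients from the shifted coefficients.
    return [sum(comb(K, J) / comb(L, J) * c[J] for J in range(K + 1))
            for K in range(L + 1)]
\end{verbatim}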
The Bernstein representation is known to be the most robust representation of polynomials, as captured by the next result~\cite{faroukinumstab}.
\begin{theorem}[Theorem~\cite{faroukinumstab}] \label{thm:bernStable}
The Bernstein basis is optimally stable, i.e., there exists no other basis whose condition number is smaller than the condition number of the Bernstein coefficients $\overline{C}_{b_{K, L}}$.
\end{theorem}
Theorems~\ref{thm:errorBound}-\ref{thm:bernStable} point to the optimal way of designing the targeted neural network: such a neural network needs to take as input the Bernstein coefficients $b_{K, L}$ instead of the power-basis coefficients $a_K$. To validate this conclusion, we report empirical evidence in Table~\ref{TabComBases}. In this numerical experiment, we trained two neural networks with exactly the same architecture, using exactly the same number of data points; both networks also have the same number of inputs. Both neural networks are trained to estimate whether a zero crossing occurs in a region (recall from our analysis in Theorem~\ref{prop:lipNN} that the Lipschitz constant of this NN is on the order of the condition number of the polynomial). The only difference is that one neural network is trained using power-basis coefficients $a_K$ (columns 3-4 of Table~\ref{TabComBases}) while the second is trained using Bernstein coefficients $b_{K,L}$ (columns 5-6 of Table~\ref{TabComBases}). The coefficients are randomly generated via a uniform distribution between $-0.1$ and $0.1$, i.e., $\mathcal{U}\left(-0.1, 0.1\right)$. We generated 40000 training samples and 10000 validation samples for both bases. We evaluate the trained NNs on three different benchmarks for the two bases; each evaluation benchmark has 10000 samples. The results are summarized in Table~\ref{TabComBases}. As can be seen from the table, the NN trained with Bernstein coefficients generalizes better than the NN trained with power-basis coefficients. This empirical evidence matches our analysis in Theorem~\ref{prop:lipNN} along with the insights of Theorems~\ref{thm:errorBound} and~\ref{thm:bernStable}.
\begin{table}[t!]
\caption{Evaluation of three trained neural networks on three different benchmarks for the different polynomial bases. Each benchmark has 10000 samples. The coefficients of the polynomial within each basis are generated following the uniform distribution given in the table.}
\begin{adjustbox}{width=0.99\columnwidth,center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{2}{c|}{Power} & \multicolumn{2}{c|}{Bernstein} & \multicolumn{2}{c|}{Reduced Bernstein} \\
Benchmark & Coefficients & \multicolumn{2}{c|}{Basis} & \multicolumn{2}{c|}{Basis} & \multicolumn{2}{c|}{Basis} \\
\cline{3-8}
& & Accuracy & Overhead & Accuracy & Overhead & Accuracy & Overhead\\
\hline
1 & $\mathcal{U}\left(-0.1, 0.1\right)$ & $46\%$ & 0 [s] & $91\%$ & 0.01 [s] & $82\%$ & 0.002 [s]\\
\hline
2 & $\mathcal{U}\left(-0.5, 0.5\right)$ & $32\%$ & 0 [s] & $87\%$& 0.03 [s] & $79\%$ & 0.005 [s]\\
\hline
3 & $\mathcal{U}\left(-1, 1\right)$ & $30\%$ & 0 [s] & $88\%$ & 0.04 [s] & $80\%$ & 0.007 [s]\\
\hline
\end{tabular}
\end{adjustbox}
\label{TabComBases}
\end{table}
\section{Taming the Complexity of Computing Bernstein Coefficients}
In Section 4, we concluded that the Bernstein representation has a smaller condition number compared to other representations, which helps build a more efficient NN. Nevertheless, computing this representation adds a significant overhead, even when using the most efficient algorithms to calculate these coefficients~\cite{range4,berncomplex}.
For example, computing all the Bernstein coefficients of a polynomial with $6$ variables and order $7$ using the matrix method and Garloff's method~\cite{range4,berncomplex} requires $1.1\times 10^{7}$ and $7.1\times 10^{6}$ summation and multiplication operations, respectively~\cite{berncomplex}.
To exacerbate the problem, the Bernstein coefficients depend on the region $I_n$ and need to be recomputed at every iteration of the abstraction refinement process. Reducing this overhead is the main focus of this section.
\subsection{Range Enclosure Property of Bernstein polynomials}
Given a multivariate polynomial $p\left(x\right)$ that is defined over the $n$-dimensional box $I_n(\underline{d}, \overline{d})$, we can bound the range of $p\left(x\right)$ over $I_n(\underline{d}, \overline{d})$ using the range enclosure property of Bernstein polynomials as follows:
\begin{theorem}[Theorem 2 \cite{garloff}]\label{th1}
Let $p$ be a multivariate polynomial of degree $L$ over the $n$-dimensional box $I_n(\underline{d}, \overline{d})$ with Bernstein coefficients $b_{K, L}$, $0 \leq K \leq L$. Then, for all $x \in I_n$, the following inequality holds:
\begin{align}\label{bernbound}
\min_{K \leq L}~b_{K, L} \leq p\left(x\right) \leq \max_{K \leq L}~b_{K, L}.
\end{align}
\end{theorem}
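As a quick numerical check of Theorem~\ref{th1}, using the univariate sketch from the previous section (the example polynomial is ours and purely illustrative):
\begin{verbatim}
# p(x) = x**3 - 2*x + 1 over [0, 2]; power-basis coefficients [1, -2, 0, 1].
b = bernstein_coeffs([1.0, -2.0, 0.0, 1.0], 0.0, 2.0)
print(min(b), max(b))   # -1.666..., 5.0
\end{verbatim}
The true range of $p$ over $[0, 2]$ is approximately $[-0.089, 5]$, which is indeed enclosed by the computed bounds $[-5/3, 5]$; note also that the first and last Bernstein coefficients match the endpoint values $p(0) = 1$ and $p(2) = 5$.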
The traditional approach to compute the range enclosure of $p$ is to compute all the Bernstein coefficients of $p$ and then determine their minimum and maximum~\cite{range1,range2,range3}. However, computing all the coefficients has a complexity of $\mathcal{O}\left(\left(l_{max} + 1\right)^n\right)$, where $l_{max} = \max\limits_{1 \leq i \leq n}l_i$, which increases exponentially with the dimension $n$.
Fortunately, the Bernstein coefficients enjoy monotonicity properties whenever the region $I_n(\underline{d}, \overline{d})$ is restricted to an orthant (i.e., the sign of $x_i$ does not change within $I_n(\underline{d}, \overline{d})$, for each $i \in \{1,\ldots, n\}$)~\cite{range4}. Using these monotonicity properties, one can compute the minimum and maximum Bernstein coefficients (denoted by $\underline{B}_{p, L}$ and $\overline{B}_{p, L}$) with a time complexity of $\mathcal{O}\left(2\left(l_{max} + 1\right)^2\right)$, which does not depend on the dimension $n$.
\subsection{Zero Crossing Estimation using only few Bernstein Coefficients}
Now we discuss how to use the range enclosure property above to reduce the number of computed Bernstein coefficients. First, we note that the zero-crossing behavior of a polynomial $p$ in a given input region $I_n$ depends on its estimated range, given by $\underline{B}_{p, L}$ and $\overline{B}_{p, L}$. More specifically, if $\underline{B}_{p, L} > 0$ ($\overline{B}_{p, L} < 0$), then the polynomial is positive (negative) over the entire region, which means that there is no zero crossing. If $\underline{B}_{p, L}$ and $\overline{B}_{p, L}$ have different signs, then, because of the estimation error of these bounds, the polynomial $p$ may still be positive, negative, or have a zero crossing in the region. In this case, we need additional information, such as the bounds of the gradient of the polynomial $p$ within the input region, given by $\underline{B}_{\nabla p, L}$ and $\overline{B}_{\nabla p, L}$ (which can be computed efficiently thanks to the fact that gradients of polynomials are polynomials themselves). Such additional information about the worst-case gradient of the polynomial leads to a natural estimate of whether a zero crossing occurs in a region.
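The decision logic above can be summarized by the following Python sketch (the function and the returned labels are our own naming, not the tool's API):
\begin{verbatim}
def zero_crossing_status(b_min, b_max):
    """Classify a region from the min/max Bernstein coefficients of p."""
    if b_min > 0:
        return "always positive"   # no zero crossing; region can be pruned
    if b_max < 0:
        return "always negative"   # no zero crossing; p < 0 on the region
    # The bounds alone are inconclusive; the gradient bounds are then
    # passed, together with b_min and b_max, as features to the NN.
    return "ambiguous"
\end{verbatim}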
Due to space constraints, we omit the analysis of the estimation error introduced by relying only on the maximum and minimum of the polynomial, $\underline{B}_{p, L}$ and $\overline{B}_{p, L}$, along with the maximum and minimum of the gradient, $\underline{B}_{\nabla p, L}$ and $\overline{B}_{\nabla p, L}$. Instead, we support our claim using the empirical evidence shown in Table~\ref{TabComBases}. Using the same benchmarks as in Section 4.2, we train a third neural network that takes as input only the four quantities $\underline{B}_{p, L}, \overline{B}_{p, L}, \underline{B}_{\nabla p, L}, \overline{B}_{\nabla p, L}$ and compare its generalization performance (columns 7-8 of Table~\ref{TabComBases}). As shown in the table, the third neural network sacrifices some accuracy compared to the ones that use all Bernstein coefficients. On the other hand, it reduces the overhead of computing the Bernstein coefficients by an order of magnitude.
\subsection{Search Space pruning using Bernstein Coefficients}
The range enclosure property and the discussion above open the door to a natural solution of the ``alternative abstraction'' problem mentioned in Section 3.2. The maximum and minimum Bernstein coefficients can be used as an abstraction (in addition to convex upper and lower bounds) of high-order polynomials. Such abstractions can be refined in every iteration of the solver, and they can be used to identify portions of the search space in which one of the polynomials is guaranteed to be positive (and hence a solution does not exist). More details about integrating this abstraction with the convex abstraction are given in the implementation section below.
\section{Algorithm Architecture and Implementation Details}
\begin{algorithm}[t!]
\caption{PolyARBerNN} \label{alg:PolyARBerNN}
\begin{flushleft}
\textbf{Input:} $I_n(\underline{d}, \overline{d}), p_1, p_2, \ldots, p_m, \epsilon$\\
\textbf{Output}: $x_{\text{Sol}}$
\end{flushleft}
\begin{algorithmic}[1]
\STATE $orthants := \texttt{Partition\_Region}(I_n)$
\STATE $Neg := \{\}$
\STATE $Ambig := \{orthants\}$
\STATE $\text{List\_pols} := \{p_1,\ldots,p_m\}$
\WHILE{$\texttt{Compute\_Maximum\_Volume}(Ambig) \ge \epsilon$}
\STATE $p :=\texttt{Select\_Poly}\left(\text{List\_pols}, Neg\right)$
\STATE $region :=$\\ $\texttt{Remove\_Ambiguous\_Region\_From\_List}\left(Ambig\right)$
\STATE $\big(\underline{B}_{p, L} ,\overline{B}_{p, L}, \underline{B}_{\nabla p, L}, \overline{B}_{\nabla p, L}\big)$ := \\
\qquad \qquad \qquad\qquad $\texttt{Compute\_Bern\_Coeff}(p, region)$\\
\IF{$\underline{B}_{p, L} > 0$}
\STATE \textbf{break}
\ELSIF{$\overline{B}_{p, L} < 0$}
\STATE $Neg := Neg \cup (p, L^{-}_{0}\left(p\right))$
\STATE \textbf{break}
\ENDIF
\STATE $\text{(under\_approx, over\_approx, split)} :=$ \\ \qquad \qquad \qquad$NN(\underline{B}_{p, L} ,\overline{B}_{p, L}, \underline{B}_{\nabla p, L}, \overline{B}_{\nabla p, L}, region\big)$
\STATE $action := \texttt{Select\_best\_action}()$
\STATE $L^{-}_{0}\left(p\right),L^{+}_{0}\left(p\right),L^{+/-}_{0}\left(p\right):=$ \\ \; $\texttt{Convex\_Abst\_Refin\_PolyAR}\left(p, action, region\right)$
\STATE $Ambig := Ambig \cup L^{+/-}_{0}\left(p\right)$
\STATE $Neg := Neg \cup (p, L^{-}_{0}\left(p\right))$
\ENDWHILE
\IF{$\texttt{is\_List\_Empty}(Ambig)$}
\IF{A negative region in $Neg$ has all the polynomials}
\STATE $x_{\text{Sol}}:=$ any point in the negative region
\ELSE
\RETURN $\text{the problem is UNSAT}$
\ENDIF
\ELSE
\STATE $x_{\text{Sol}}:=\texttt{CAD\_Solver\_Parallel}\left(Ambig,p_1, \ldots, p_m\right)$
\ENDIF
\end{algorithmic}
\end{algorithm}
In this section, we describe the implementation details of our solver PolyARBerNN. As a pre-processing step, the tool divides the input region $I_n$ into several regions such that each one is an orthant. This allows the tool to process each orthant in parallel or sequentially. The tool keeps track of all regions in which the sign of a polynomial is not fixed. These regions are called ambiguous regions, and they are stored in a list called $Ambig$. As long as the volume of the regions in this list is larger than a user-defined threshold, the tool continuously uses abstractions to identify portions in which one of the polynomials is always positive (and hence removed from the search space) or always negative (and hence the tool gives higher priority to this region). The abstraction refinement is iteratively applied in Lines 3-15 of Algorithm 1. In each abstraction refinement step, the tool picks a polynomial $p$ and a region $region$ based on several heuristics (Lines 4-5). In Lines 6-12, we compute the maximum/minimum Bernstein coefficients and then check the sign of the polynomial within this region. If the Bernstein coefficients indicate that the polynomial is always positive in this region, then this provides a guarantee that a solution does not exist in this region (recall that Problem 1 searches for a point where \emph{all} polynomials are negative). Similarly, if the polynomial is always negative, then the region is added to the list of negative regions. For those polynomials for which the Bernstein abstraction fails to identify their signs, we query the trained neural network to estimate the best possible convex abstraction (Lines 13-14). Based on the neural network's suggestion, we use the PolyAR tool~\cite{polyar} to compute the convex abstraction (Line 15), which returns the portions of this region that are guaranteed to belong to the zero sub-level set $L^-_0(p)$, those that belong to the zero super-level set $L^+_0(p)$, and those that remain ambiguous $L^{+/-}_0(p)$. The process of using the Bernstein abstraction and the convex abstraction (which is guided by the trained neural network) continues until all remaining ambiguous regions are smaller than a user-defined threshold $\epsilon$, in which case they are processed in parallel using a sound and complete tool based on the Cylindrical Algebraic Decomposition (CAD) (Line 26 in Algorithm 1).
The neural network itself is trained using randomly generated, quadratic, two-dimensional polynomials whose coefficients follow a uniform distribution between $-1$ and $1$. For each randomly generated polynomial, we used PolyAR to compute the volumes of the $L^+_0(p), L^-_0(p), L^{+/-}_0(p)$ regions. We use a fully connected NN that contains an input layer, three hidden layers, and one output layer. The input layer has four neurons, the hidden layers have 40 neurons each, and the output layer has three neurons. We use a dropout of probability $0.5$ in the first and second hidden layers to avoid overfitting. We use the ReLU activation function for all the hidden layers and the Softmax activation function for the output layer. We use Adam as an optimizer and cross-entropy as a loss function. Although the neural network is trained on simple quadratic two-dimensional polynomials, we observed that it generalizes well to higher-order polynomials with several variables. This will become apparent during the numerical evaluation, in which polynomials of different orders and with several variables are used to evaluate the tool.
~\\
\noindent\textbf{Correctness Guarantees:}
We conclude our discussion with the following result which captures the correctness guarantees of the proposed tool:
\begin{theorem}
The PolyARBerNN solver is sound and complete.
\end{theorem}
\begin{proof}
This result follows from the fact that the search space is pruned using only sound abstractions (convex upper bounds or Bernstein-based bounds). The neural network and the convex lower-bound polynomials are used merely as heuristics to guide the refinement process. Finally, CAD-based algorithms (which are sound and complete) are used to process the portions of the search space that are not pruned by the abstraction refinement.
\end{proof}
\section{Generalization to Polynomial Optimization Problems}
In this section, we focus on providing a solution to Problem 2. Our approach is to turn the optimization problem (Problem 2) into a feasibility problem (Problem 1). First, we recall that the gradient of $p$, $\nabla p = [\frac{\partial p}{\partial x_1}, \cdots{}, \frac{\partial p}{\partial x_n}]$, where $\frac{\partial p}{\partial x_i}$ is the partial derivative of $p$ with respect to $x_i$, is a vector of $n$ polynomials. The optimal value of $p$ occurs either (i) at a point where all partial derivatives are equal to zero or (ii) at the boundaries of the input region.
To find the critical points $x^{*}$ of $p$, where $\nabla p \left(x^{*}\right) = 0$, we add the $2n$ polynomial constraints $\frac{\partial p}{\partial x_i} \le 0$ and $- \frac{\partial p}{\partial x_i} \le 0$, $1\leq i \leq n$, to the constraints of the optimization problem. Now, we modify the PolyARBerNN solver to output \emph{all} possible regions in which all the constraints are satisfied. This can be easily computed by taking the intersections of the regions stored in the data structure $Neg$ in Algorithm 1. These regions enjoy the property that \emph{all} points in them are critical points of $p$. In addition, we modify PolyARBerNN to output \emph{all} the remaining ambiguous regions whose volumes are smaller than the user-specified threshold $\epsilon$ and for which the CAD-based solvers returned a solution. These regions enjoy the property that \emph{there exists} a point inside them which is a critical point. These modifications are captured in Line 2 of Algorithm 2.
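As a simple illustration of the added gradient constraints, for $p(x_1, x_2) = x_1^2 + x_2^2$ the added constraints are $2x_1 \le 0$, $-2x_1 \le 0$, $2x_2 \le 0$, and $-2x_2 \le 0$, which together are satisfied only at the unique critical point $x_1 = x_2 = 0$.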
Since the minimum/maximum of $p$ may occur at the boundaries of the region $I_n\left(\underline{d}, \overline{d}\right)$, our solver also samples the boundaries of $I_n\left(\underline{d}, \overline{d}\right)$ (Line 4 in Algorithm 2). The solver uses $2 \sqrt{n} (\epsilon)^{1/n}$ as the sampling distance between two successive boundary samples (recall that $\epsilon$ is a user-defined parameter, used in Algorithm 1 as a threshold on the refinement process). Finally, we evaluate the polynomial $p$ at the obtained samples and take the minimum and maximum over the obtained values (Line 6 in Algorithm 2). All the details can be found in Algorithm 2.
\begin{algorithm}[t!]
\caption{PolyAROpt} \label{alg:PolyAROpt}
\begin{flushleft}
\textbf{Input:} $I_n(\underline{d}, \overline{d}), p, p_1, p_2, \ldots, p_m, \epsilon$\\
\textbf{Output}: $p^{app}_{min}, p^{app}_{max}$
\end{flushleft}
\begin{algorithmic}[1]
\STATE $\nabla p = \texttt{Grad\_Poly}(p)$
\STATE $\hat{reg}_{\text{list}} = \texttt{PolyARBerNN}(I_n(\underline{d}, \overline{d}), \nabla p, p_1, \ldots, p_m, \epsilon)$
\STATE $\hat{x}_{\text{list}} = \texttt{center}(\hat{reg}_{\text{list}})$
\STATE $x^{end}_{list} = \texttt{Sample\_boundaries}(I_n(\underline{d}, \overline{d}), \epsilon)$
\STATE $\hat{p}_{list} = p(\hat{x}_{\text{list}})$; $p^{end}_{list} = p(x^{end}_{\text{list}})$
\STATE $p_{list} = \hat{p}_{list} \cup p^{end}_{list}$
\STATE $\hat{x}_{min} = \arg \min (p_{list}$), $\hat{x}_{max} = \arg \max (p_{list})$
\STATE $\hat{p}_{min} = p(\hat{x}_{min})$; $\hat{p}_{max} = p(\hat{x}_{max})$
\end{algorithmic}
\end{algorithm}
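A minimal Python sketch of the boundary-sampling step (Line 4 of Algorithm 2) is shown below; the grid construction is one possible reading of the sampling-distance rule, and the function name is ours.
\begin{verbatim}
import itertools
import numpy as np

def sample_boundaries(lo, hi, eps):
    """Grid-sample the boundary faces of the box [lo, hi] in R^n
    with spacing at most h = 2*sqrt(n)*eps**(1/n)."""
    n = len(lo)
    h = 2.0 * np.sqrt(n) * eps ** (1.0 / n)
    axes = [np.linspace(lo[i], hi[i],
                        max(2, int(np.ceil((hi[i] - lo[i]) / h)) + 1))
            for i in range(n)]
    samples = []
    for i in range(n):                  # fix coordinate i in turn
        for face in (lo[i], hi[i]):     # at its lower and upper face
            grid = axes[:i] + [np.array([face])] + axes[i + 1:]
            samples.extend(itertools.product(*[g.tolist() for g in grid]))
    return np.unique(np.array(samples), axis=0)

# For n = 2 and eps = 0.01, the spacing is h = 2*sqrt(2)*0.1 ~ 0.28.
pts = sample_boundaries([0.0, 0.0], [1.0, 1.0], 0.01)
\end{verbatim}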
We conclude our discussion with the following result which captures the error between the solutions provided by PolyAROpt and the global optima.
\begin{theorem}
Let $p^{*}_{min}$ and $p^{*}_{max}$ be the global optimal values for Problem 2. The solutions obtained by Algorithm 2, denoted by $\hat{p}_{min}$ and $\hat{p}_{max}$, satisfy the following:
\begin{align}
&\norm{\hat{p}_{min} - p^{*}_{min}}\leq 2 L\sqrt{n}(\epsilon)^{1/n}, \label{eq:min_bound} \\
& \norm{\hat{p}_{max} - p^{*}_{max}}\leq 2 L\sqrt{n}(\epsilon)^{1/n}, \label{eq:max_bound}
\end{align}
where $L$ is the Lipschitz constant of the polynomial $p$ and $\epsilon > 0$ is a user-defined error.
\end{theorem}
\begin{proof}
We note that there are three cases that Algorithm 2 uses to compute the candidate points $\hat{x}_{min}$ and $\hat{x}_{max}$, which correspond to the approximations $\hat{p}_{min}$ and $\hat{p}_{max}$ of the minimum and maximum of the polynomial:
\begin{enumerate}
\item Using the center of the regions in the $Neg$ list
\item Using the center of the regions in the $Ambig$ list
\item Using samples from the boundaries
\end{enumerate}
We proceed by case analysis. \textbf{Case 1:} First, we note that \emph{all} the points within the $Neg$ regions satisfy $\nabla p = 0$, and hence the polynomial takes the same value over the entire region; therefore, the value of $p$ at the center $\hat{x}$ of the region equals its value at a global optimum $x^*$ lying in that region. \textbf{Cases 2 and 3:}
If we can show that $\hat{x}_{min}$ and $\hat{x}_{max}$ are within a bounded distance of the actual optimal points $x^*_{min}$ and $x^*_{max}$, namely:
\begin{align}
&\norm{\hat{x}_{min} - x^{*}_{min}}\leq 2 \sqrt{n}(\epsilon)^{1/n} \label{eq:x_min_bound} \\
& \norm{\hat{x}_{max} - x^{*}_{max}}\leq 2 \sqrt{n}(\epsilon)^{1/n} \label{eq:x_max_bound}
\end{align}
then~\eqref{eq:min_bound} and~\eqref{eq:max_bound} will follow directly from the Lipschitz continuity of polynomials. In Case 2, inequalities~\eqref{eq:x_min_bound} and~\eqref{eq:x_max_bound} follow directly from the fact that the $Ambig$ regions have a volume smaller than $\epsilon$ (Line 5 in Algorithm 1), and hence the distance between any two points within such a region is bounded by $2 \sqrt{n}(\epsilon)^{1/n}$. Similarly, Case 3 follows from the fact that Algorithm 2 samples the boundaries with a maximum distance between successive samples equal to $2 \sqrt{n}(\epsilon)^{1/n}$.
\end{proof}
\section{Numerical Results - NN Training}
In this section, we show the details of training and evaluating the NN used to help PolyARBerNN select the best convex abstraction. We evaluate the trained NN on six different benchmarks. The benchmarks differ from the training data with respect to the input region, the degree of the polynomials, and the number of variables.
All the experiments were executed on an Intel Core i7 2.6-GHz processor with 16 GB of memory.
\begin{figure}[!t]
\includegraphics[width=1.0\columnwidth]{Abst_Refin_NN.png}
\caption{The architecture of the trained NN that is used to guide the abstraction refinement process within PolyARBerNN.}
\label{fig:AbstrefinNN}
\vspace{-5mm}
\end{figure}
\subsection{Training data collection and pre-processing}
\subsubsection{Data collection}
To collect the data, we generated random quadratic two-dimensional polynomials $q\left(x_1,~x_2\right)~=~c_1x_1^2 + c_2x_2^2 + c_3x_1x_2 + c_4x_1 + c_5x_2 + c_6$, where the coefficients follow a uniform distribution between $-1$ and $1$, i.e., $c_i~\sim~\text{uniform}\left(-1, 1\right),~1\leq i \leq 6$. The randomly generated polynomials are defined over the domain $I_2 = \big[-2, 2\big]^2$. For each randomly generated polynomial, we performed the abstraction refinement on the domain $I_2$ iteratively. In every iteration, we performed an under-approximation of the original polynomial over a selected ambiguous region, an over-approximation, and a split of the ambiguous region. Next, we computed the volume of the remaining ambiguous region after each action was applied.
The labels are one-hot vectors of dimension three, where each component represents the action that leads to the maximum reduction in the volume of the ambiguous region: under-approximation, over-approximation, or dividing the region into two regions.
We ran the abstraction refinement process on all the generated polynomials to collect the data $\big(\underline{B^i}_{p_j, L} ,\overline{B^i}_{p_j, L}, \underline{B^i}_{\nabla p_j, L}, \overline{B^i}_{\nabla p_j, L}\big)$, where $i$ denotes the index of the sample. We generated $50000$ samples for training, $10000$ samples for validation, and $10000$ for testing.
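In code, the labeling step amounts to the following, where \texttt{vol\_under}, \texttt{vol\_over}, and \texttt{vol\_split} are hypothetical placeholders for the measured remaining ambiguous volumes:
\begin{verbatim}
import numpy as np

vol_under, vol_over, vol_split = 0.8, 0.5, 0.6   # placeholder volumes
vols = np.array([vol_under, vol_over, vol_split])
label = np.eye(3)[np.argmin(vols)]   # one-hot label for the action that
                                     # leaves the least ambiguous volume
\end{verbatim}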
\subsubsection{Data Normalization}
In the NN literature \cite{lecun}, it is important to normalize the data when the inputs vary across a wide range of values. This normalization leads to faster training and improves the generalization performance of the NN \cite{lecun}. Therefore, we normalize all the input data to zero mean and unit variance by adopting the simple affine transformation $\text{data\_sample} \leftarrow \frac{\text{data\_sample} - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation of the data. The parameters $\mu$ and $\sigma$ are set to the empirical mean and standard deviation of the dataset and are computed offline before the training.
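In code, the normalization amounts to the following, with \texttt{X\_train} standing in for the $50000 \times 4$ matrix of training inputs:
\begin{verbatim}
import numpy as np

X_train = np.random.uniform(-1, 1, size=(50000, 4))  # stand-in features
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
# mu and sigma are computed offline on the training set and reused
# verbatim to normalize the validation and test inputs.
\end{verbatim}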
\subsection{NN's architecture}
We use a fully connected NN (shown in Figure~\ref{fig:AbstrefinNN}) that contains an input layer, three hidden layers, and one output layer. The input layer has four neurons, the hidden layers have 40 neurons each, and the output layer has three neurons. We use a dropout of probability $0.5$ in the first and second hidden layers to avoid overfitting. We use the ReLU activation function for all the hidden layers and the Softmax activation function for the output layer. We use Adam as an optimizer and cross-entropy as a loss function.
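A minimal PyTorch sketch of this architecture follows; the choice of PyTorch is an assumption on our part (the text does not name a framework), and note that \texttt{CrossEntropyLoss} applies the softmax internally, so the model below emits raw logits.
\begin{verbatim}
import torch
import torch.nn as nn

# 4 inputs: min/max Bernstein coefficients of p and of its gradient.
# 3 outputs: under-approximate / over-approximate / split.
model = nn.Sequential(
    nn.Linear(4, 40), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(40, 40), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(40, 40), nn.ReLU(),
    nn.Linear(40, 3),   # logits; softmax is folded into the loss
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
\end{verbatim}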
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.32\textwidth]{Figure_1.png}
\includegraphics[width=0.32\textwidth]{Figure_2.png}
\includegraphics[width=0.32\textwidth]{Figure_3.png}
\includegraphics[width=0.32\textwidth]{Figure_4.png}
\includegraphics[width=0.32\textwidth]{Figure_5.png}
\includegraphics[width=0.32\textwidth]{Figure_6.png}
\vspace{-3mm}
\caption{Percentage reduction in the volume of the ambiguous regions, along with the NN output (the number at the top of each histogram), for 20 samples from each of the 6 evaluation benchmarks described in Table~\ref{TabBenchmarks}.}
\label{fig:nn_performance}
\vspace{-5mm}
\end{figure*}
\subsection{NN's evaluation}
We evaluated the NN on 6 different benchmarks as follows:
\begin{itemize}
\item In the first and second benchmarks, we generate the same random quadratic polynomials but over different domains, $\big[-4, 4\big]^2$ and $\big[-10, 10\big]^2$. This choice is made to test the generalization of the NN outside the domain used in its training. This is important since the Bernstein coefficients of a polynomial (the input to the NN) depend on the input region $I_n$.
\item In the third and fourth benchmarks, we generated random polynomials with degrees $4$ and $10$ over the domain $\big[-2, 2\big]^2$. These benchmarks are used to validate the generalization of the NN to polynomials of orders higher than the ones used in its training.
\item Finally, in the fifth and sixth benchmarks, we generated random polynomials with higher dimensions, i.e., with dimension $n = 4$ and $n = 7$.
\end{itemize}
In summary, these benchmarks help us answer the following question: is the trained NN able to generalize to new data with different domains (benchmarks 1 and 2), higher orders (benchmarks 3 and 4), and higher dimensions (benchmarks 5 and 6)? More details about the different benchmarks are shown in Table \ref{TabBenchmarks}.
Figure~\ref{fig:nn_performance} shows the performance of the trained NN over 20 random samples from each of the six benchmarks. For each sample, we used the framework in~\cite{polyar} to compute the ground-truth percentage reduction in the volume of the ambiguous regions after applying each action (under-approximation, over-approximation, or split). We then evaluated the NN on each sample and report in Figure~\ref{fig:nn_performance} the ground-truth reduction of the ambiguous regions (as bars) together with the index of the action suggested by the NN (as the text above the bars).
As can be seen from Figure \ref{fig:nn_performance}, with the exception of the second sample of the first benchmark, the NN outputs correspond to the actions that lead to the maximum reduction of the ambiguous region's volume.
\begin{table}[t!]
\caption{Evaluation of the trained NN on the six different benchmarks.
}
\begin{adjustbox}{width=1.0\columnwidth,center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Benchmark & $p\left(x\right)$ & $n$ & order & region & Accuracy \\
\hline
\hline
1 & $c_1x_1^2 + c_2x_2^2 + c_3x_1x_2 + c_4x_1 + c_5x_2 + c_6$ & $2$ & $2$ & $[-4, 4]^2$ & $95\%$\\
\hline
2 & $c_1x_1^2 + c_2x_2^2 + c_3x_1x_2 + c_4x_1 + c_5x_2 + c_6$ & $2$ & $2$ & $[-10, 10]^2$ & $93\%$ \\
\hline
3 & $c_1x_1^4 + c_2x_2^3 + c_3x_1^4x_2^3 + c_4x_1^3 + c_5 $ & $2$ & $4$ & $[-2, 2]^2$ & $88\%$ \\
\hline
4 & $c_1x_1^{10} + c_2x_2^5 + c_3x_1^5x_2^3 + c_4x_1^5 + c_5 $ & $2$ & $10$ & $[-2, 2]^2$ & $87\%$ \\
\hline
5 & $c_1x_1^3 + c_2x_2^3 + c_3x_3^3 + c_4x_4^3 $ & $4$ & $3$ & $[-2, 2]^4$ & $81\%$\\
\hline
6 & $c_1x_1^3 + c_2x_2^3 + c_3x_3^3 + c_4x_4^3 + c_5x_5^3 + c_6x_6^3 + c_7x_7^3 $ & $7$ & $3$ & $[-2, 2]^7$ & $80\%$\\
\hline
\end{tabular}
\end{adjustbox}
\label{TabBenchmarks}
\end{table}
Finally, we ran the same experiment for 1000 samples and report the percentage of samples for which the NN was able to predict the action that leads to the maximum reduction in the ambiguous region's volume. As can be seen from Table \ref{TabBenchmarks}, the trained NN is able to generalize across the different benchmarks. For instance, evaluating the NN on different domains results in an accuracy of at least $93\%$. Furthermore, evaluating the NN on polynomials of higher order results in an accuracy of $87\%$. Finally, the NN achieves $80\%$ on the higher-dimension benchmarks.
\begin{figure*}[!t]
\centering
\resizebox{.8\textwidth}{!}
\centering
\begin{tabular}{ c | c | c | c |}
$m$ & SAT/UNSAT & \textbf{Execution times vs Polynomial Order} & \textbf{Execution times vs Number of Variables}\\\hline
%
\multirow{2}{*}{1}
&
\raisebox{5.0\totalheight}{UNSAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_1_a_UNSAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_1_b_UNSAT.png}
\\ \cline{2-4}
%
&
\raisebox{5.0\totalheight}{SAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_1_a_SAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_1_b_SAT.png}
\\ \hline
%
\multirow{2}{*}{5}
&
\raisebox{5.0\totalheight}{UNSAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_3_a_UNSAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_3_b_UNSAT.png}
\\ \cline{2-4}
%
&
\raisebox{5.0\totalheight}{SAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_3_a_SAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_3_b_SAT.png}
\\ \hline
\multirow{2}{*}{10}
&
\raisebox{5.0\totalheight}{UNSAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_2_a_UNSAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_2_b_UNSAT.png}
\\ \cline{2-4}
%
&
\raisebox{5.0\totalheight}{SAT}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_2_a_SAT.png}
&
\includegraphics[width=0.65\columnwidth,trim=0mm 0 8mm 2mm, clip]{Figure_2_b_SAT.png}
\\ \hline
\end{tabular}
}
\caption{\small{Scalability results of PolyARBerNN for $1$, $5$, and $10$ constraints in both the UNSAT and SAT cases. (left) evolution of the execution time in seconds as a function of the order of the polynomials, (right) evolution of the execution time in seconds as a function of the number of variables. The timeout is equal to 1 hour.}}
\label{figUNSAT}
\vspace{-5mm}
\end{figure*}
\section{Numerical Results - Scalability Results}
In this section, we study the scalability of PolyARBerNN in terms of execution time by varying the order, the number of variables, and the number of polynomial constraints, for instances where a solution exists (the problem is satisfiable, or SAT for short) and where a solution does not exist (UNSAT for short). Moreover, we compare the performance of PolyARBerNN against a theorem prover named NASALib, which implements a Bernstein library to solve multivariate polynomial constraints \cite{pvsnasa}. In addition, we compare the scalability of PolyARBerNN against state-of-the-art solvers such as Z3 8.9 and Yices 2.6 when synthesizing a controller for a nonlinear dynamical system~\cite{MPCdesign}. Finally, we compare the scalability of the PolyAROpt optimizer against the built-in optimization library in Z3 8.9 when solving an unconstrained multivariate polynomial optimization problem with varying order and number of variables.
\subsection{Scalability of PolyARBerNN against other solvers}
In this experiment, we compare the execution times of PolyARBerNN against the PolyAR tool~\cite{polyar}, Z3 8.9, and Yices 2.6. We consider two instances of Problem 1: an UNSAT instance and a SAT instance. For each instance, we consider three scenarios, $m=1$, $m=5$, and $m=10$, where $m$ is the number of polynomial constraints. First, we vary the order of the polynomials from 0 to 1000 while fixing the number of variables (and hence the dimension of the search space) to two. Alternatively, we fix the order of the polynomials to $30$ while varying the number of variables from 1 to 200.
We set the timeout of the simulations to be $1$ hour. Figure~\ref{figUNSAT} reports the execution times for all the experiments whenever the problem is UNSAT and SAT.
As evidenced by the figures, PolyARBerNN succeeded in solving the instances of Problem 1 for all orders and numbers of variables within a few seconds. For instance, solving $10$ polynomial constraints with $200$ variables and a maximum order of $30$ took around $20~s$, a speed-up of $200\times$ compared to Z3 and Yices. The other solvers, in contrast, are incapable of solving the polynomial constraints for all orders or numbers of variables and time out after one hour.
These results demonstrate the scalability of the proposed approach, which uses Bernstein coefficients to prune the search space and a NN to guide the abstraction refinement.
\subsection{Scalability of PolyARBerNN versus NASALib}
In this experiment, we want to investigate the effect of the NN on the overall performance of the system.
We compare PolyARBerNN against the NASALib theorem prover, which also uses Bernstein coefficients to reason about polynomial constraints. The polynomials and the domains of the associated variables are given in~\cite{pvsnasa}; they were originally used to assess the scalability of the NASALib theorem prover. As evidenced by the results in Table~\ref{tab:nasalib}, combining the Bernstein abstraction with the NN-guided convex abstractions (in PolyARBerNN) leads to significant savings in execution time, with a speedup of $483\times$ in the Heart dipole example.
\begin{table}[t!]
\caption{Scalability results for PolyARBerNN against NASALib.}
\label{tab:nasalib}
\begin{adjustbox}{width=0.99\columnwidth,center}
\begin{tabular}{|c|c|c|c|}
\hline
Problem & NASALib (s) & \multicolumn{2}{c|}{PolyARBerNN}\\
\cline{3-4}
& & Time (s) & Speedup\\
\hline
\hline
Schwefel & $1.23 $ & $\mathbf{0.022}$ & $54.9 \times$ \\
\hline
Reaction diffusion & $0.25$ & $\mathbf{0.018}$ & $12.88 \times$ \\
\hline
Caprasse & $0.03$ & $\mathbf{0.025}$ & $0.19 \times$ \\
\hline
Lotka-Volterra & $0.22$ & $\mathbf{0.026}$ & $7.46 \times$ \\
\hline
Butcher & $0.46$ & $\mathbf{0.029}$ & $14.86 \times$\\
\hline
Magnetism & $1.82$ & $\mathbf{0.028}$ & $64.0 \times$ \\
\hline
Heart dipole & $15.01$ & $\mathbf{0.031}$ & $483.19 \times$ \\
\hline
\end{tabular}
\end{adjustbox}
\vspace{-2mm}
\end{table}
\subsection{Non-Linear Controller Design for a Duffing Oscillator}
In this subsection, we assess the scalability of the PolyARBerNN solver against state-of-the-art solvers on synthesizing a non-parametric controller for the Duffing oscillator reported in~\cite{MPCdesign}. The details of the oscillator dynamics and of how we generate the polynomial constraints can be found in \cite{polyar}. We denote by $n$ the dimension of the Duffing oscillator.
We consider two instances of the controller synthesis problem for the Duffing oscillator with the following parameters (a sketch of the per-step constraint encoding is given after the list):
\begin{itemize}
\item $n=3$, $\zeta=1.0$, $x\left(0\right)=[0.15,0.15,0.15]^T$, $L_1\left(x\left(k\right),u\left(k\right)\right)=\left(-x_1^{3}\left(k\right)+x_3^{3}\left(k\right)+ u\left(k\right) - 2 \right)^{51}$, $L_2\left(x\left(k\right),u\left(k\right)\right)=x_1^{51}\left(k\right)x_3^7\left(k\right) + x_1^9\left(k\right)x_3^5\left(k\right) - 5 x_2^4\left(k\right) - x_2^2\left(k\right)u^2\left(k\right)$, which results in $9$ polynomial constraints with $4$ variables and a maximum polynomial order of $153$.
\item $n=4$, $\zeta=1.75$, $x\left(0\right)=[0.1,0.1,0.01, 0.1]^T$, $L_1\left(x\left(k\right),u\left(k\right)\right)=x_1^{4}\left(k\right)+x_2^{4}\left(k\right)+x_3^{4}\left(k\right)+x_4^{4}\left(k\right)-u^{4}\left(k\right)$, $L_2\left(x\left(k\right),u\left(k\right)\right)=-x_1^{51}\left(k\right)x_3^{20}\left(k\right) - 5x_2^4\left(k\right) - x_2^2\left(k\right)u^2\left(k\right)$, $L_3\left(x\left(k\right),u\left(k\right)\right)=\left(x_1\left(k\right)x_2^2\left(k\right) - u\left(k\right) - 100\right)^{41}$, which results in $12$ polynomial constraints with $5$ variables and a maximum polynomial order of $82$.
\end{itemize}
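For illustration, the following sketch builds the per-step constraint set fed to the solvers: the dynamics equality is relaxed into two $\epsilon$-inequalities, a quadratic Lyapunov function is required to decrease, and the state is confined to $\|x(k)\|_\infty \le 100$. The matrices $A$, $B$, and $P$ below are placeholders chosen only to make the sketch runnable; the actual ones come from the forward-difference discretization and the discrete-time Lyapunov equation, as detailed in \cite{polyar}.
\begin{verbatim}
import sympy as sp

n, h, eps = 3, 0.05, 1e-3               # dimension, sampling period, slack
xk  = sp.Matrix(sp.symbols('x1:4'))     # x(k)
xk1 = sp.Matrix(sp.symbols('y1:4'))     # x(k+1)
u   = sp.Symbol('u')

# placeholder matrices, for illustration only
A = sp.eye(n) + h * sp.Matrix([[0, 1, 0],
                               [0, 0, 1],
                               [-1, -2, -1]])
B = sp.Matrix([0, 0, h])
E = sp.Matrix([0, 0, -h * xk[0]**3])    # cubic Duffing term
P = sp.eye(n)                           # placeholder Lyapunov matrix

V   = lambda z: (z.T * P * z)[0, 0]
res = xk1 - (A * xk + B * u + E)        # dynamics residual

# every entry c below encodes the polynomial inequality c <= 0
cons  = [r - eps for r in res] + [-r - eps for r in res]  # |res| <= eps
cons += [V(xk1) - V(xk) + eps]          # V(x(k+1)) - V(x(k)) <= -eps
cons += [xi - 100 for xi in xk] + [-xi - 100 for xi in xk]
\end{verbatim}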
\begin{figure}
\centering
\resizebox{.5\textwidth}{!}{%
\begin{tabular}{ c | c | c |}
$n$ & State Space & Execution Time Evolution over time\\\hline
%
3 &
\raisebox{-0.5\totalheight}{\includegraphics[width=0.6\columnwidth,trim=8mm 0 15mm 0, clip]{Figure_1n3.png}} &
\raisebox{-0.5\totalheight}{\includegraphics[width=0.6\columnwidth,trim=8mm 0 15mm 0, clip]{Figure_2n3.png}} \\ \hline
%
4 &
\raisebox{-0.5\totalheight}{\includegraphics[width=0.6\columnwidth,trim=8mm 0 15mm 0, clip]{Figure_1n4.png}} &
\raisebox{-0.5\totalheight}{\includegraphics[width=0.6\columnwidth,trim=8mm 0 15mm 0, clip]{Figure_2n4.png}} \\ \hline
\end{tabular}
}
\caption{Results of controlling the Duffing oscillator for different $n$: (left) evolution of the states $x_1(k)$ and $x_2(k)$ in the state space for each solver; (right) evolution of the solvers' execution times over the $12$~s simulation horizon. The per-call timeout is $60$~s. Trajectories are truncated once a solver exceeds the timeout.}
\label{fig:duff}
\vspace{-3mm}
\end{figure}
We feed the resulting polynomial inequality constraints to PolyARBerNN, PolyARBer, Yices, and Z3, and solve the feasibility problem for $n=3$ and $n=4$ with the timeout of each solver call set to $60$~s. Figure~\ref{fig:duff} (left) shows the state-space evolution of the controlled Duffing oscillator for each solver with $n=3$ and $n=4$. Figure~\ref{fig:duff} (right) shows the evolution of the solvers' execution times over the $12$~s simulation horizon. As can be seen from Figure~\ref{fig:duff}, our solvers PolyARBerNN and PolyARBer succeed in finding a control input $u$ that regulates the state to the origin for both values of $n$. The off-the-shelf solvers, however, fail on both instances: they hit the $60$~s timeout early in the $12$~s simulation. Furthermore, PolyARBerNN finds the solution faster than PolyARBer thanks to the NN-guided abstraction refinement, which avoids unnecessary approximations instead of applying all of them at every iteration. This again demonstrates the scalability of the proposed approach.
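Concretely, the experiment amounts to a receding-horizon loop: at every sampling instant a feasibility query is posed to the solver and the returned input is applied to the plant. The sketch below shows this loop; \texttt{solve\_feasibility} and \texttt{duffing\_step} are hypothetical stand-ins for the solver interface and the discretized dynamics.
\begin{verbatim}
import time

def run_closed_loop(x0, solve_feasibility, duffing_step,
                    horizon=12.0, h=0.05, timeout=60.0):
    # solve_feasibility(x, timeout) -> feasible input u or None;
    # duffing_step(x, u, h) -> next state. Both are hypothetical
    # stand-ins for the solver call and the discretized plant.
    x, times = x0, []
    for _ in range(int(horizon / h)):
        t0 = time.time()
        u = solve_feasibility(x, timeout=timeout)
        times.append(time.time() - t0)
        if u is None:          # solver failed or timed out
            break
        x = duffing_step(x, u, h)
    return x, times
\end{verbatim}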
\begin{figure}[!t]
\centering
\includegraphics[width=0.24\textwidth]{Figure_opt_1.png}
\includegraphics[width=0.24\textwidth]{Figure_opt_2.png}
\caption{Scalability results of PolyAROpt for unconstrained optimization: (left) execution time in seconds as a function of the polynomial order; (right) execution time in seconds as a function of the number of variables. The timeout is 1 hour.
}
\label{optimiz}
\end{figure}
\subsection{Scalability of PolyAROpt against other solvers}
We compare the scalability of PolyAROpt with the Z3 solver, since Z3 provides a built-in optimization library; Yices offers no such optimizer. We set the timeout of the simulation to 1 hour. Figure \ref{optimiz} reports the execution times of two experiments that compute the minimum and the maximum of an unconstrained polynomial objective. As evidenced by the two figures, PolyAROpt solved the unconstrained optimization problem for all orders and numbers of variables. For instance, solving the problem with $70$ variables and a maximum order of $3$ took around $50$~s. The Z3 solver, in contrast, fails to solve the problem across the range of orders and numbers of variables and times out after one hour.
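The same enclosure property that drives the feasibility checks extends naturally to optimization: the minimum Bernstein coefficient of a polynomial over a box is a lower bound on the polynomial there, while its values at the box endpoints are attained and hence upper bounds. The sketch below is our illustration of this classic branch-and-bound scheme (not PolyAROpt's code); it minimizes a univariate polynomial given its Bernstein coefficients on $[0,1]$, using de Casteljau subdivision to branch.
\begin{verbatim}
def subdivide(b):
    # de Casteljau at t = 1/2: Bernstein coefficients of both halves
    left, right, cur = [b[0]], [b[-1]], list(b)
    while len(cur) > 1:
        cur = [(cur[i] + cur[i + 1]) / 2 for i in range(len(cur) - 1)]
        left.append(cur[0])
        right.append(cur[-1])
    return left, right[::-1]

def minimize(b, tol=1e-6):
    # branch and bound: min(c) is a lower bound on p over the box,
    # the endpoint coefficients c[0], c[-1] are attained values of p
    best, boxes = min(b[0], b[-1]), [b]
    while boxes:
        c = boxes.pop()
        if min(c) >= best - tol:     # this box cannot improve: prune
            continue
        l, r = subdivide(c)
        best = min(best, l[0], l[-1], r[0], r[-1])
        boxes += [l, r]
    return best

# p(x) = (x - 0.3)^2 on [0,1]: Bernstein coefficients [0.09, -0.21, 0.49]
print(minimize([0.09, -0.21, 0.49]))   # -> approximately 0.0
\end{verbatim}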
~\\
\noindent \textbf{Conclusions.} In this paper, we proposed PolyARBerNN, a solver for polynomial inequality constraints. We proposed a systematic methodology to design neural networks that can be used to guide the abstraction refinement process by bridging the gap between neural network properties and the properties of polynomial representations. We showed that the use of Bernstein coefficients leads the way to designing better neural network guides and provides an additional abstraction that can be used to accelerate the solver. We generalized the solver to reason about optimization problems. We demonstrated that the proposed solver outperforms state-of-the-art tools by several orders of magnitude.
\bibliographystyle{plain}